
Without These Ethics, Artificial Intelligence Will Become Our Darkest Nightmare

Co-authored by Aadhar Sharma and Raamesh Gowri Raghavan.

Artificial intelligence is a great help to humanity. Unfortunately, the scope for its misuse is also huge. Given the tendency of corporate and state entities to seek dominance, this article analyses the ethical concerns around AI and advocates its democratisation.

“It is not enough to be electors only. It is necessary to be law-makers; otherwise those who can be law-makers will be the masters of those who can only be electors.” – BR Ambedkar

Consider the following scenario: you are late for your dream job interview. You grab your messenger bag and dash out of your apartment. Even though you don’t give the lights or door locks a thought, they sense your absence and act accordingly. While you’re anxiously fidgeting in the elevator, your smartphone – aware of the appointment – calls a driverless taxi. Leaping into the vehicle, you open a music-streaming service, which tunes you into a Brandenburg Concerto after deciding you need calming down – the sweat on your palms tells the device a lot.

On arriving, you dash straight into the building because your digital wallet automatically pays for the trip. After the interview you feel confident about your chances, but you later find out you’ve been rejected because of your closeted sexual orientation – detected by that strange-looking camera on the panel’s desk that kept staring you down.

How would you feel?

This isn’t even futuristic. Artificial Intelligence (AI) already permeates our lives. From finding Tinder dates on the weekend to discovering new planets, almost every field employs it. AI is a great leap in technology, but it is also continually shrouded in controversy. On one hand, it helps geologists predict earthquakes; on the other, it is still naive enough for people to train it to practise racial discrimination.

While any openness or transparency in the technology is commendable, the secrecy behind most products raises concerns about potential exploitation. In its current state, AI appears to be custom-made for the tech titans (and perhaps convenient for authoritarians too). And while organisations defend their investments, we humble consumers are obliged to discern their true intentions and fight for our rights.

Data Is Not Just Data

Pedro Domingos, a professor of machine learning at the University of Washington and the author of “The Master Algorithm”, says: “People worry that computers will get too smart and take over the world, but the real problem is that they’re too stupid and they’ve already taken over the world.”

Social media, search engines, the Internet of Things and similar platforms produce enormous volumes of data today, which may provide invaluable insights into our world (for instance, they sure know when we need to re-order toilet paper). According to some estimates, there will be more than 3 billion active smartphones by 2020 and more than 150 billion networked sensors (18 times the estimated human population) by the next decade. In other words, the volume of data produced will dwarf what we generate today. This appeal of big data is luring companies and governments to invest and capitalise.

So, how might data change the dynamics of society? A data-controlled society would be one where data holds ascendancy in determining the organisation of defence, the economy and all other governmental policy. Whether such a society will ever exist is debatable. Yet one must acknowledge that we already live in a world where data influences the majority of decisions.

AI complements the human art of decision-making by studying data (data analysis), discovering interesting patterns and suggesting candidate solutions to the problem at hand. A tool that not only aids decision-making while economising on resources (time, workforce, capital, etc.) but also complies with explicit, customisable criteria (accuracy, speed, generality, etc.) undoubtedly has great potential in any industry.

For example, when you report an e-mail as spam, the AI within Gmail learns from it and updates the ‘spam filter’. Whenever a new spam-like e-mail arrives, the AI throws it directly into the spam bin. You therefore see similar e-mails not in the inbox but in the spam list. And if you believe it has made a wrong decision (a mis-classification: a false positive), you can always un-spam the message and make the AI learn from that too.
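To make this concrete, here is a minimal sketch of such a learn-from-feedback filter: a Naive Bayes classifier that updates its word counts each time a user marks or unmarks a mail as spam. This is a toy illustration with invented data, not Gmail’s actual system.

```python
# A toy spam filter that learns from user feedback (hypothetical data,
# not Gmail's real algorithm).
from collections import Counter
import math

class NaiveBayesSpamFilter:
    def __init__(self):
        self.word_counts = {"spam": Counter(), "ham": Counter()}
        self.label_counts = {"spam": 0, "ham": 0}

    def learn(self, text, label):
        """Update counts when the user reports spam ('spam') or un-spams ('ham')."""
        self.label_counts[label] += 1
        self.word_counts[label].update(text.lower().split())

    def is_spam(self, text):
        """Classify with Laplace-smoothed Naive Bayes."""
        vocab = len(set(self.word_counts["spam"]) | set(self.word_counts["ham"]))
        scores = {}
        for label in ("spam", "ham"):
            total = sum(self.word_counts[label].values())
            # log prior + sum of per-word log likelihoods
            score = math.log((self.label_counts[label] + 1) /
                             (sum(self.label_counts.values()) + 2))
            for word in text.lower().split():
                score += math.log((self.word_counts[label][word] + 1) /
                                  (total + vocab + 1))
            scores[label] = score
        return scores["spam"] > scores["ham"]

filter_ = NaiveBayesSpamFilter()
filter_.learn("win a free lottery prize now", "spam")  # user reports spam
filter_.learn("meeting agenda for monday", "ham")      # user un-spams
print(filter_.is_spam("free prize inside"))            # True: routed to the spam bin
```

Each user action simply shifts the word statistics, so the next similar message lands on the other side of the decision boundary.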

It is applications like these that earn AI an enormous amount of attention and interest among industries and governments alike. An AI that proposes solutions for human scrutiny is limited in its powers. However, the same can’t be said for an autonomous decision-making AI, which has the liberty to act without human approval. Autonomy doesn’t by itself make an AI malevolent or benevolent – it’s the application and the circumstances that truly govern its behaviour.

A case in point: at present, oncologists take more than four hours to identify and classify cancerous tissue. To target radiotherapy effectively, it is essential to analyse the scans swiftly and accurately. AI techniques such as image segmentation (is there a green blob?) and classification (is this green blob cancerous?) help detect tumours with astonishing speed and accuracy. Researchers at DeepMind Health have collaborated with UCL Hospitals to train AI models that perform the segmentation rapidly and elevate the quality of treatment offered to patients.
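As a simplified picture of what the segmentation step produces, consider the toy sketch below: it thresholds a synthetic 2-D “scan” into a binary mask and groups the bright pixels into connected blobs. Clinical systems such as DeepMind’s rely on deep neural networks; the array sizes, intensities and threshold here are invented for illustration (and scikit-style dependencies, NumPy and SciPy, are assumed).

```python
# Toy segmentation: threshold a synthetic scan, then label connected blobs.
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(0)
scan = rng.normal(0.2, 0.05, size=(64, 64))  # background tissue intensity
scan[20:30, 35:45] += 0.6                    # a bright region standing in for a tumour

mask = scan > 0.5                            # step 1: binary mask of bright pixels
labels, n_regions = ndimage.label(mask)      # step 2: group pixels into connected blobs

print(f"regions found: {n_regions}")
for i in range(1, n_regions + 1):
    ys, xs = np.where(labels == i)
    print(f"region {i}: {len(ys)} pixels, centred near ({ys.mean():.0f}, {xs.mean():.0f})")
```

A real pipeline replaces the hand-picked threshold with a learned model, but the output is the same kind of object: a labelled region a clinician (or a classifier) can then judge.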

Ethical Dilemmas

Evidently, AIs are immensely beneficial to humanity. Yet the autonomy of AIs also raises hard ethical questions. The medical industry faces a shortage of transplant-ready organs, and doctors must quickly decide which candidate receives one. The process involves ethical dilemmas hard enough for doctors; when AIs start taking those decisions, the boundaries become even fuzzier.

For instance, researchers at Duke University have developed a decision-making system that picks the recipient of a kidney transplant. Asked to choose between a young alcoholic and an elderly cancer survivor, it picked the young fellow. But the ethical dilemma is this: if the youngster returns to his binge-drinking lifestyle, he is likely to destroy the new kidney too. Perhaps the elderly person, with a new lease of life, might have done something more positive. This is a decision the most seasoned of doctors say they are unable to take, for it rests on scenarios that are nearly impossible to predict – what if the young man didn’t drink? What if the elderly person’s cancer came back? Here, the algorithm can be questioned for making a decision it may not even have understood. In this context, how ethical is it for such a system to make these decisions?
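To see how such a system can reach a verdict “it may not even have understood”, consider a purely hypothetical point-based score. The weights and inputs below are invented; the Duke system’s actual method is not public. The point is that the contested “what ifs” enter only as unverifiable numeric guesses.

```python
# A purely hypothetical allocation score, NOT the Duke system's actual method.
# It shows how reducing candidates to one number hides the uncertainties above.
def allocation_score(age, expected_graft_survival_years, relapse_risk):
    """Higher score = higher priority. Weights are invented for illustration."""
    return 0.5 * expected_graft_survival_years - 0.2 * age - 10.0 * relapse_risk

young_alcoholic  = allocation_score(age=25, expected_graft_survival_years=30, relapse_risk=0.4)
elderly_survivor = allocation_score(age=68, expected_graft_survival_years=12, relapse_risk=0.1)

print(young_alcoholic, elderly_survivor)  # 6.0 vs -8.6: the youngster "wins"
# relapse_risk and survival are themselves predictions the system cannot verify;
# the ethical "what ifs" never enter the score at all.
```

Whatever weights one picks, the moral judgement is buried in them, which is exactly why the decision is so hard to defend.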

Another way AI aids decision-making is by countering cognitive biases. However, autonomy may also make systems vulnerable to security threats and unexpected incidents. Here’s an example: on May 6, 2010, the US stock market collapsed (the infamous ‘Flash Crash’), sweeping away billions of dollars from financial juggernauts. Why? It has been alleged that a high-frequency anomaly detection system (an autonomous decision-making AI) failed to detect nearly 19,000 fake sell orders. Within minutes, prices fell, wiping out nearly a trillion dollars from the market.

Later on, a criminal investigation concluded that a trader had exploited previously unknown vulnerabilities of the anomaly detection system and of the market structure, resulting in the debacle. Testing simple software is hard enough – such high-throughput AI is practically impossible to test exhaustively or monitor in real time. If the system violates a boundary condition in an unprecedented way, the whole machinery can succumb to chaos.
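For a sense of what a (vastly simplified) anomaly detector looks like, the sketch below flags any burst of sell orders that deviates sharply from the trailing average. Real market-surveillance systems are far more sophisticated; the window, threshold and volumes here are invented. Note that the fixed threshold is itself a boundary condition an adversary can probe and stay beneath.

```python
# Toy anomaly detector: flag order volumes far above the trailing average.
import statistics

def detect_anomalies(order_volumes, window=20, threshold=4.0):
    """Yield (index, volume) where volume exceeds `threshold` std devs of the trailing window."""
    for i in range(window, len(order_volumes)):
        recent = order_volumes[i - window:i]
        mean, stdev = statistics.mean(recent), statistics.stdev(recent)
        if stdev > 0 and (order_volumes[i] - mean) / stdev > threshold:
            yield i, order_volumes[i]

volumes = [100 + (i % 7) for i in range(60)]  # ordinary, slightly noisy order flow
volumes[45] = 19_000                          # a spoof-like burst of sell orders

for index, volume in detect_anomalies(volumes):
    print(f"suspicious volume {volume} at tick {index}")
```

A spoofer who learns the window and threshold can shape orders to slide just under them, which is the “previously unknown vulnerability” problem in miniature.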

Transparency in AI is another issue. The most advanced algorithms are essentially black boxes – no one knows entirely how they technically make their decisions. Moreover, the fundamental designs of proprietary algorithms are often kept secret by companies. One programme can predict a person’s sexual orientation just by looking at facial images. Other facial-recognition programmes claim to predict people’s emotional states, political alignment and IQ. Some public studies create ethical controversies; the number of unethically-deployed algorithms in industry or government may never be truly known.

“I think political systems will use [AI] to terrorize people.” – Geoffrey Hinton

No matter how much we may trust them, these algorithms can inherit a bias if we train them on data that is already biased. For instance, if one trains an AI model on a dataset of criminal records that is biased against African-American people, the algorithm will start out with this inbuilt bias. Several states in the US issue a ‘risk assessment score’ to defendants, which predicts the likelihood of them committing a future crime and assists the court in determining the punishment.

However, it is now believed that this can inject racial bias into the courtroom. There have been multiple instances where a dark-skinned person, prosecuted for the first time, received a significantly higher risk score than a notorious fair-skinned criminal. The company that developed the system refuses to reveal the calculations it uses to assign the score.
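A toy experiment makes the mechanism plain: train a standard classifier on synthetic records that are deliberately skewed against one group, and the model reproduces the skew even for otherwise identical individuals. The data below is fabricated for illustration (scikit-learn is assumed to be available) and has nothing to do with any real risk-scoring product.

```python
# Demonstration: a model trained on biased records reproduces the bias.
import random
from sklearn.linear_model import LogisticRegression

random.seed(0)
X, y = [], []
for _ in range(2000):
    group = random.randint(0, 1)   # 0 = group A, 1 = group B
    priors = random.randint(0, 5)  # number of prior offences
    # Biased labelling by construction: identical behaviour,
    # but group B is recorded as "re-offended" more often.
    p = 0.2 + 0.1 * priors + (0.25 if group else 0.0)
    X.append([group, priors])
    y.append(1 if random.random() < min(p, 0.95) else 0)

model = LogisticRegression().fit(X, y)

# Two defendants identical in every respect except group membership:
print(model.predict_proba([[0, 1]])[0][1])  # risk score, group A
print(model.predict_proba([[1, 1]])[0][1])  # risk score, group B -- noticeably higher
```

Nothing in the algorithm is “racist”; it faithfully learned the skew baked into its training records, which is precisely the danger when the records themselves reflect biased policing.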

A lack of democratic control further exacerbates the problem of transparency. It compels one to ponder: does data accidentally manifest bias in AIs, or do companies and governments covertly tune AIs to establish dominance or to discriminate on subjective criteria?

The Way Ahead

As we depend more and more on technologies such as search engines and social media, the potential for exploitation (by deliberately inducing bias in the data) grows alongside. For instance, control over voter data may tempt governments to fiddle with elections to ensure they keep winning. Companies too can profit by tweaking algorithms to promote a sister business over a competitor’s. Governments or organisations that possess such powers may establish dominance over all others – and this is a threat to social cohesion.

If the data set is clean but a system still performs badly (rather than being manipulated externally), then something is wrong with the technology itself: no algorithm is perfect, and bugs are common in code. In such cases, researchers must analyse the underlying algorithms to cure them. However, most of these systems are strongly protected by intellectual property rights.

Companies often oversell their products and publicise only vague details that seldom divulge meaningful information about the machinery. Keeping details (such as study methods and algorithmic calculations) confidential is understandable in a competitive industry, but a substantial lack of public information (transparency) against the backdrop of controversial episodes naturally makes us suspicious.

In conclusion, a new form of dictatorship may emerge from a technocracy of social control built on artificially intelligent machines. To inhibit the emergence of top-down global control, we must democratise AI, which requires transparency and the protection of constitutional and civil rights. It also seems the right time to ask for additional rights that defend not only identity and agency but also social dynamics and diversity.

The path to a better future with AI will have many highs and lows. At times we may find ourselves in unanticipated debacles, but it’s in such situations that our social cohesion and governmental policies will help us prevail.

A version of this post was first published here.


Aadhar Sharma is a researcher working with Dr Sukant Khurana’s group, focussing on the ethics of artificial intelligence.

Raamesh Gowri Raghavan collaborates with Dr Sukant Khurana on various projects, ranging from popular writing on AI to the influence of technology on art and mental health awareness.

Mr Raamesh Gowri Raghavan is an award-winning poet, a well-known advertising professional, a historian, and a researcher exploring the interface of science and art. He is also championing a massive anti-depression and suicide-prevention effort with Dr Khurana and Farooq Ali Khan.

You can learn more about Raamesh here and here.

Dr Sukant Khurana runs an academic research lab and several tech companies. He is also a known artist, author, and speaker. You can learn more about Sukant at www.brainnart.com or www.dataisnotjustdata.com.

If you wish to work on biomedical research, neuroscience, sustainable development, artificial intelligence or data science projects for public good, you can contact him at skgroup.iiserk@gmail.com or by reaching out to him on LinkedIn.

