AI, Future and Public Policy Challenges

By: Dr Sangeeta Goel

Artificial intelligence (AI) is here to stay, and not just to stay but to rule in the times to come. It is going to be a major game changer and a huge disruptor in every arena of human life: health care, politics, education, defence, and the legal and executive systems, among others. It has all the potential to lead us to a Brave New World, with an Orwellian Big Brother not just watching us all the time but also controlling our lives, overpowering our senses and unnerving our sensibilities. A scary thought indeed! And yet its growing appeal and utility are irrefutable. No wonder worldwide business spending on AI was predicted, by the technology research firm IDC (International Data Corporation) in August 2020, to be around $50 billion in 2020, with a forecast of $110 billion annually by 2024. However, as per the latest IDC reports, the world market is expected to break the $500 billion mark in 2023 itself.i As per a McKinsey report, AI is set to add $13 trillion to the global economy by 2030, which is about 16% of the total global share. So, exciting AI-driven times ahead, indeed!

Undoubtedly, AI offers to make human life easier and happier in many ways. It can make repetitive and routine tasks easier and cheaper, do things faster than humans, and remain relatively objective, with the potential to bring in total transparency. It promises to be more democratic and inclusive too. In the realm of public policy, it can identify patterns and make reliable inferences; it can forecast demand, provide evidence and analyse outcomes. ‘AI can deliver on the promise of the government of the future that is more responsive and leaves no one behind.’ii It is also likely to be free from human errors, for example in weather forecasting, patient diagnosis, organising logistics and disbursing loans. It can do risky and humanly impossible things, for example robots working in nuclear radiation and other hazardous zones. During Covid-19, in many parts of the world, AI-driven robots were seen doing jobs such as sanitising hospitals and delivering food and medicines.iii AI has the potential to revolutionise the medical sciences and solve some of the most complex problems facing modern biology. It could diagnose skin cancer like a dermatologist would, pick out a stroke on a CT scan like a radiologist, and even detect potential cancers on a colonoscopy like a gastroenterologist.iv The rapid development of two highly effective Covid-19 vaccines was made possible through AI technology and innovative collaboration among researchers around the world.v

Excitingly yet a little eerily, AI promises to gain an edge over its human creators in many ways. It doesn’t get tired or wear out easily. And “the combination of these two (qualities) will allow human intents and business process to scale 10x, 100x, and beyond that in the coming years.”vi The preceding capabilities of AI are only illustrative, not exhaustive. In a nutshell, it has infinite potential to overcome numerous human limitations.

However, AI not only has great potential to transform the way we live; it has greater potential for peril.vii At the end of the day, AI may only be as good as the data it is fed. It is therefore likely to embed and replicate our prejudices and biases, while putting a stamp of scientific credibility on them and pre-empting any scope for review or appeal. Amazon’s AI-based algorithm for same-day delivery, which was inadvertently biased against black neighbourhoods, is just one example, though Amazon’s explanation was that these markets were not profitable.viii In the medical arena, huge errors can happen if the algorithm is not trained on an adequately representative, diverse dataset. World Economic Forum research showing that AI is often biased listed five types of major human biases that could be transferred to algorithms: (i) implicit bias, (ii) sampling bias, (iii) temporal bias, (iv) over-fitting to training data, and (v) edge cases and outliers.ix
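The mechanics of this transfer of bias can be shown with a toy sketch. The records and the delivery scenario below are invented for illustration (they are not Amazon's actual data or method); the point is only that a model trained on historically skewed data reproduces the skew while appearing objective:

```python
# Sampling bias in miniature: a model trained on historically
# skewed records simply re-encodes the historical skew.
from collections import Counter

# Hypothetical records: (neighbourhood, got_same_day_delivery).
# Neighbourhood B was under-served in the past for reasons
# unrelated to the merit of any individual customer.
history = [("A", True)] * 90 + [("A", False)] * 10 \
        + [("B", True)] * 5 + [("B", False)] * 45

def train_majority_model(records):
    """Learn, per neighbourhood, the majority historical outcome."""
    votes = {}
    for group, label in records:
        votes.setdefault(group, Counter())[label] += 1
    return {group: counts.most_common(1)[0][0]
            for group, counts in votes.items()}

model = train_majority_model(history)

# The "objective" model now denies every customer in B,
# regardless of individual circumstances.
print(model)  # {'A': True, 'B': False}
```

Nothing in the code mentions race or intent; the discrimination arrives entirely through the training data, which is exactly why a representative dataset and a right of review matter.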

There could be multiple other ethical challenges too.

It may cause job losses on a massive scale, creating huge unemployment, especially in sectors that rely on repetitive and iterative skills: car, truck and train drivers, bankers, radiologists, pathologists, accountants and stock traders, to name a few. An Oxford study predicts that more than 47% of American jobs will be under threat from automation by the mid-2030s. The World Economic Forum expects AI automation to replace more than 75 million jobs by 2022. A McKinsey report estimates that AI-based robots could replace 30% of the current global workforce. AI expert and venture capitalist Kai-Fu Lee believes 40% of the world’s jobs will be replaced by AI-based bots in the next 10–15 years.x Futuristic yet daunting figures!

Also, AI may not have the sensitivity and compassion to deal with special situations. It may encroach on people’s privacy. It may not think out of the box and thus may fail, or let us down, in exceptional situations. It may be relatively expensive, and affordability may become a serious concern. It may create a huge divide between AI haves and AI have-nots, not only within a community or a country but also between countries. President Putin of Russia has said, “Whoever wins the race in AI will probably become the ruler of the world.”

At its worst, it may overpower the human psyche, making people addicted, overdependent and lazy. People may find it convenient to use and may use it indiscreetly, without realising what all goes into it. Unscrupulous elements may use it for the wrong purposes. Possibly, it may develop to the extent of improvising itself with each iteration and go past its intended reach and outcomes. Big companies like Facebook and Google already have so much data that they know more about people than people know about themselves. Our digital doppelgangers may become commonplace and could be misused. Yet it may not be feasible to fix responsibility in cases of misuse, or to ascertain ownership in cases of creative use, such as writing a piece of music or making a digital painting.

In a nutshell, the concern that AI may form a “mind of its own” and may not value human life is not unfounded.

Again, this list of potential threats is only indicative of what could go wrong with AI.

Major ethical issues

All-pervasive major cyber-attacks and data hacks may become easy, if not commonplace. The safety and security of countries may be totally compromised. The recent all-out cyber war waged by the hacker group Anonymous on Russia in the wake of its attack on Ukraine may send a chill down the spine of any statesman or army chief. Cyber terrorism may become a real thing to deal with. Deep fakes and Generative Adversarial Networks (GANs), with their deadly potential to manipulate media, may make spreading propaganda and false information easier and easier. Deep fakes are created through AI, yet they don’t require considerable skill; just about anyone could create a deep fake to promote their chosen agenda. The use of automated weapons systems with little or no human judgement could violate all the rules of the game and wreak havoc during wars like the one between Russia and Ukraine.

How real is it? The experts’ take

In a nutshell, AI may be a wonderful tool in human hands if put to judicious and constructive use. However, it has the potential to turn into a diabolic, self-perpetuating and self-serving technology. So when Elon Musk says “AI is far more dangerous than nukes,” it needs to be taken seriously. Stephen Hawking had similar views when he told an audience in Portugal that AI’s impact could be cataclysmic unless its rapid development is strictly and ethically controlled. “Unless we learn how to prepare for, and avoid, the potential risks,” he warned, “AI could be the worst event in the history of our civilization.” Research fellow Stuart Armstrong of the Future of Life Institute terms AI an “extinction risk” were it to go rogue. Even nuclear war carries relatively less risk, because it would “kill only a relatively small proportion of the planet.” “If AI went bad, and 95 percent of humans were killed,” he said, “then the remaining five percent would be extinguished soon after. So despite its uncertainty, it has certain features of very bad risks.”xi Recent stories such as the Cambridge Analytica data scandal, Google’s eerily accurate voice assistant and Amazon’s Rekognition technology have demonstrated to the public the ability of AI to erode democracy, trust and civil liberties.xii

In the times to come, AI will cause the biggest change in, and pose the biggest challenge to, the way human life is organised. If we refuse to believe this, then we are either too ignorant or too complacent. Before civilisation rides the massive and all-pervasive wave of AI, policymakers and lawmakers need to harness its potential and guide its way towards a socially and ethically responsible world order. That can happen only if the right kind of oversight and regulation is in place to ensure that AI applications are built and implemented around a culture of ethics, transparency, explainability and trustworthiness. The population that is going to be impacted by decisions taken or facilitated by AI must be given the first right to meaningful information in understandable language. For example, while rejecting bail or making a medical prognosis, both parties, the one taking the support of AI and the one about whom the decision is being made, should have a legal right to know not only the inferences made about them by AI systems but also what kind of data and logic was used to make those inferences. In short, end-users (including non-technical ones) deserve to understand the underlying decision-making processes of the systems they are expected to employ, especially in high-stakes situations.xiii

The government needs to develop ethical codes and standards for the use and development of AI. It needs to focus on adequate regulation of AI-driven devices such as cars and weapons. A well-laid-out strategy, and its enforcement, towards evolving inclusive AI that rules out any scope for discrimination and harm to certain sections of society is badly needed. NITI Aayog has taken a step by coming up with a document which “is intended to serve as an ‘essential pre-read’ in building a truly transformative approach in pursuit of #AIforAll.” Certain existing laws like the Copyright Act 1957, the Patents Act 1970 and the Information Technology Act 2000 do touch upon a few aspects such as data ownership and application patenting, but these are very far from addressing the gravity and enormity of the issues involved. Therefore, there is an immediate need to come up with an all-encompassing policy and regulatory framework aimed at building an enabling and flourishing ecosystem towards cultivating and harnessing a responsive and responsible AI. And the time starts now.













