US AI Policies – Can India Benefit?

While Prime Minister Narendra Modi may have waxed eloquent over India’s desire to become a global AI powerhouse during a recent chat with Bill Gates, the fact remains that after the upcoming elections, New Delhi has its work cut out when it comes to formulating regulations around artificial intelligence and its use. 

Whether in jest or in the utmost seriousness, Modi told Gates that in India, kids first say “Aayi” (mother) and immediately follow it up with “AI” (artificial intelligence). However, earlier in March, his government announced a regulation that requires tech companies to seek prior approvals before the public launch of AI-led tools that are under trial. 

The advisory, issued by the junior minister in the Ministry of Electronics and IT, isn’t binding but does signal the direction of India’s regulatory concerns. What prompted this reaction seems to be the upcoming elections to choose 543 lawmakers for Parliament, as well as a mess-up by Google’s Gemini, where the GenAI chatbot spewed inappropriate content about the Prime Minister himself. 

India needs a well-rounded AI policy

Given this challenge, it would be incumbent upon the government that takes over in June (Modi is seeking an unprecedented third five-year term as Prime Minister) to frame clearer and more precise rules around AI and its uses. Towards this end, the White House’s recent policy roadmap for federal agencies to use AI safely and responsibly could serve as a benchmark. 

Of course, there are those who believe that India’s so-called “crackdown” early in March was much stiffer than what the US has come out with, and more in line with what China and the European Union have laid out as governance guard rails for AI-led innovation. The latest policy, unveiled by VP Kamala Harris, follows President Biden’s executive order last year.

How much can the recent US policy framework help?

Per that order, the US administration listed actions to address risks from AI usage, steps to expand transparency towards advancing responsible AI innovation, and plans to expand the AI workforce. Per Harris, the first step is to protect rights and safety, whereby government agencies using AI tools are now required to verify that these do not endanger the rights and safety of the American people. 

The rules require that by December 1 this year, federal agencies must deploy concrete safeguards when using AI in ways that affect Americans’ rights or safety. These include actions to reliably assess, test and monitor AI’s impacts on the public, mitigate the risks of algorithmic discrimination, and provide the public with transparency into how the government uses AI.

As an example, Harris said, “if the Veterans Administration wants to use AI in VA hospitals to help doctors diagnose patients, they would first have to demonstrate that AI does not produce racially biased diagnoses.” The idea is to bolster public transparency and accountability in AI usage. 

Transparency and accountability are the keywords

Henceforth, US government agencies will be required to publish each year a list of their AI systems, along with an assessment of the risks those systems may pose and how those risks are being managed. The policy also requires federal agencies to designate Chief AI Officers and establish governance boards to coordinate the use of AI across government. 

“This is to make sure that AI is used responsibly, understanding that we must have senior leaders across our government who are specifically tasked with overseeing AI adoption and use,” Harris said, while making a clarion call to other countries to follow suit and put the interests of the public before those of the government when it comes to AI use. 

The federal announcement comes at a time when AI use in business and government is growing at a frenetic pace. Zscaler’s 2024 AI Security Report notes a nearly 600% spike in enterprise-level AI/ML transactions between April 2023 (521 million a month) and January 2024 (3.1 billion a month). 

Amidst this spike, researchers have warned that the growing use of GenAI tools also introduces major cybersecurity risks across three specific areas: data leakage, data quality concerns, and above all, data privacy. 

India’s growing focus on AI-led solutions for a large section of its population will require forward-looking policies that ensure transparency and accountability without throttling the technology’s advancement. 
