
Govt mulls setting up Artificial Intelligence Safety Institute

The institute could help set standards, frameworks and guidelines for AI development without acting as a regulatory body or stifling innovation.

India should set up an artificial intelligence safety institute (AISI) that could help set standards, frameworks and guidelines for AI development without acting as a regulatory body or stifling innovation, top government officials told stakeholders in a consultation meeting on October 7, at least seven people aware of the matter told HT on the condition of anonymity.

The consultation, helmed by the ministry of electronics and information technology (MeitY) additional secretary Abhishek Singh, who oversees India AI, was a preliminary meeting that sought inputs on how the Indian AISI should be structured, what its mandate should be, and how it could work with other AISIs across the world.

The UK became the first country in the world to announce an AISI during the AI Safety Summit held in Bletchley Park in November 2023 with an initial investment of £100 million (~ ₹1,100 crore). It was closely followed by the US, which established it as a part of its National Institute of Standards and Technology (NIST). Japan launched its AISI in February 2024. While the UK’s AISI is housed within the government and has an enforcement element to it, the US’ AISI is primarily a standard-setting body.

In May 2024, the European Union and ten nations, including the US, the UK, Australia, Canada, France, Germany, Italy, Japan, South Korea, and Singapore, signed the Seoul Declaration at the Seoul Summit which, amongst other things, aimed to create or expand AISIs, research programmes and other relevant institutions to promote cooperation on safety research and share best practices.

Ahead of the October 7 meeting, MeitY shared two categories of questions with stakeholders that included companies such as Meta, Google, Microsoft, IBM and OpenAI; industry bodies such as Nasscom, Broadband India Forum, BSA-The Software Alliance; multiple IITs; consulting firms such as The Quantum Hub and Dialogue; and civil society organisations like Digital Empowerment Foundation and Access Now.

The first category of questions was about the AISI's focus: its core objectives, the organisational structure best suited for its mission and scalability, how it could develop indigenous AI safety tools "that are contextualized to India's unique challenges", and who the AISI's strategic partners should be.

The second focused on how the AISI could develop "strong partnerships and gain stakeholder support": what strategies could engage key stakeholders in supporting AI safety, how the AISI could establish and maintain effective national and international partnerships, and what role it should play in global AI safety discussions and standards.

HT's conversations with participants suggested consensus on three points: first, the government is serious about setting up an AISI; second, the AISI is not about regulation but about identifying harms and risks and setting standards, work that could eventually inform future regulations; and third, interoperable systems are required so that silos do not develop.

The AISI could also evolve a risk assessment toolkit, or a voluntary compliance toolkit, that the industry could use. The government does not intend to set the AISI up as a regulatory body, and thus none of its frameworks are intended to be binding, but this could change depending on the inputs the government receives from stakeholders. Research supported by the AISI could be used to inform any eventual AI policy that the Indian government drafts.

An expert aware of the government's thinking around the AISI explained to HT that three questions need to undergird the discussion around an Indian AISI: first, why does India need an AISI? Second, how should it be set up? And third, how does India want to look at AI safety, that is, should the Indian AISI assess which use cases and applications might be high risk or low risk, should it focus on innovation, or should it pursue some combination of the two?

This person explained that the primary purpose of AISIs set up elsewhere in the world is to understand and mitigate risks associated with AI development and deployment, and to then share the parameters for quantification of risk with each other.
