India has dropped plans to require social media intermediaries and internet platforms to obtain approval before launching artificial intelligence (AI) models, in a significant revision to its recent advisory that had mandated government permission to deploy models deemed “under-tested” or “unreliable.”
The Centre had already clarified that the requirement to seek permission before launching new AI models would not apply to startups.
The Ministry of Electronics and Information Technology, in a fresh advisory issued on Friday, said, "Unreliable AI foundational models, LLMs, generative AI, software or algorithms or any such models should be made available to Indian users only after appropriately labelling the possible inherent fallibility or unreliability of the output generated."
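Purely as an illustration of what such labelling could look like in practice (the advisory does not prescribe an implementation, and the notice text and function below are hypothetical), a platform might prepend a fallibility notice to generated output before it reaches users:

```python
# Minimal illustrative sketch (hypothetical, not prescribed by the advisory):
# prepend an unreliability notice to AI-generated text before display.

UNRELIABILITY_NOTICE = (
    "Note: This response was generated by an AI model and may be "
    "inaccurate or unreliable."
)

def label_model_output(raw_output: str) -> str:
    """Attach the unreliability notice to a piece of generated text."""
    return f"{UNRELIABILITY_NOTICE}\n\n{raw_output}"

if __name__ == "__main__":
    # Example: labelling a single generated response before showing it to a user.
    print(label_model_output("The capital of India is New Delhi."))
```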
The advisory emphasised that these platforms should not permit users to host, display, upload, modify, publish, transmit, store, update or share any content that is unlawful under Indian law.
They have also been asked to ensure that their AI algorithms do not permit bias or discrimination, or threaten the integrity of the electoral process.
The new advisory also focuses on identifying deepfakes and misinformation, directing social media platforms to label such content or embed it with unique identifiers in a manner that makes it possible to identify the computer resource of the intermediary.
Further, if a user makes any changes to the content, the metadata should be configured so that the user or computer resource that made the change can be identified.
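One way an intermediary might meet this traceability requirement, sketched here only as an assumption since the advisory specifies no mechanism (all identifiers and function names below are hypothetical), is to attach a unique identifier and origin metadata to each piece of synthetic content and log any later modification against it:

```python
# Illustrative sketch (hypothetical, not mandated by the advisory): embed a
# unique identifier and provenance metadata with a piece of content, and
# record which user or computer resource modified it later.

import hashlib
import json
import time
import uuid

def create_content_record(content: bytes, intermediary_id: str) -> dict:
    """Create origin metadata with a unique identifier for new content."""
    return {
        "content_id": str(uuid.uuid4()),                 # unique identifier for this item
        "sha256": hashlib.sha256(content).hexdigest(),   # fingerprint of the original content
        "intermediary_id": intermediary_id,              # computer resource of the intermediary
        "created_at": time.time(),
        "modifications": [],                             # audit trail of later changes
    }

def record_modification(record: dict, new_content: bytes, user_id: str) -> dict:
    """Log the user or computer resource that changed the content, and when."""
    record["modifications"].append({
        "sha256": hashlib.sha256(new_content).hexdigest(),
        "user_id": user_id,
        "modified_at": time.time(),
    })
    return record

if __name__ == "__main__":
    record = create_content_record(b"synthetic image bytes", "intermediary-example")
    record = record_modification(record, b"edited image bytes", "user-1234")
    print(json.dumps(record, indent=2))
```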
The digital platforms have been asked to comply with the new AI guidelines with immediate effect.