Gujarat High Court Issues Notice On PIL Flagging Alarming Rise Of Deepfakes And AI Generated Fake Content

The Gujarat High Court issued notice on a public interest litigation raising concerns about the increasing circulation of deepfakes and artificially generated digital content across online platforms. The petition highlighted the rapid growth of manipulated videos and images created using artificial intelligence and the potential dangers associated with their widespread dissemination on social media platforms, messaging applications, and other digital networks. A Division Bench of the Court took cognizance of the issues raised in the petition and sought responses from the Central Government, the State Government, and the Director General of Police of Gujarat regarding the concerns presented in the plea.

The petition pointed out that the circulation of fake and AI-generated visual content has increased significantly in recent times. According to the petition, this content often takes several forms that raise serious concerns from legal, social, and technological perspectives. These include deepfake impersonations of individuals, artificially generated material that depicts obscene or vulgar imagery, and manipulated digital content that mocks or demeans people. The petition stated that such content can spread rapidly on digital platforms and may cause serious reputational harm, privacy violations, and social disruption.

The plea further highlighted that although India has an existing legal framework that governs digital communication and regulates the activities of online intermediaries, practical difficulties remain in addressing the spread of deepfake content. The Information Technology Act contains provisions intended to regulate unlawful online activities and to hold individuals accountable for certain forms of digital misconduct. However, the petition asserted that enforcement challenges arise when authorities attempt to respond to artificially generated content that spreads quickly and reaches large audiences within a short time.

According to the petitioner, one of the major challenges relates to the lack of real-time support mechanisms for law enforcement agencies when they encounter deepfake content online. The petition emphasized that cybercrime units and police authorities may require immediate technical assistance and operational support when dealing with cases involving manipulated videos or images generated through artificial intelligence. When such content spreads rapidly across multiple platforms, delays in identifying and responding to it may allow the misinformation to circulate widely before corrective steps can be taken.

Another issue highlighted in the petition concerns coordination between the various agencies responsible for regulating digital communication and enforcing cyber laws. The petitioner stated that effective regulation of deepfake content requires cooperation between multiple institutions, including government agencies, state police authorities, and digital platform operators. Systems such as government blocking mechanisms and content removal portals are designed to deal with unlawful online material. However, the petition argued that the coordination between these systems and law enforcement agencies requires improvement in order to ensure prompt responses to incidents involving manipulated digital media.

The petition also addressed concerns regarding the potential misuse of deepfake technology to impersonate constitutional authorities or public officials. It stated that manipulated digital content involving individuals in positions of authority may create risks for institutional credibility and public order. According to the plea, when such content appears online, authorities should have clear procedures to prioritize investigation and removal of the material. The absence of structured escalation protocols may delay the response of authorities and allow such content to continue circulating across digital platforms.

While acknowledging that current laws criminalize several forms of online misconduct, the petition stated that the speed and scale of dissemination made possible by artificial intelligence technologies present new challenges for regulators and investigators. The ability of modern artificial intelligence systems to generate highly realistic images, videos, and audio has increased the potential for producing convincing fake media. As a result, the enforcement of existing laws becomes more complex when authorities confront technologically sophisticated forms of digital manipulation.

The petition also referred to the legal obligations imposed on online intermediaries under the Information Technology Act. Section 79 of the Act provides certain protections to intermediaries that host third-party content, provided they comply with due diligence requirements and operate as neutral platforms. These obligations are further elaborated through the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021. The petitioner noted that these rules establish responsibilities for intermediaries to remove unlawful content and cooperate with government authorities when required.

In addition, the plea referred to amendments introduced to the Information Technology Rules in 2025 which specifically address issues relating to synthetically generated information and deepfakes. According to the petition, these amendments impose additional obligations on intermediaries dealing with such content. The rules include provisions requiring platforms to label artificially generated content and to comply with traceability standards intended to help authorities identify the origin of certain digital communications. The amendments also introduce stricter timelines for compliance with government takedown orders issued for the removal of harmful or unlawful material.

The petition explained that under the amended rules, intermediaries may also be required to report specified offences to the appropriate authorities when such offences come to their notice. These measures are intended to strengthen the regulatory framework governing digital platforms and to improve the ability of authorities to monitor and address harmful online content. However, the petitioner maintained that despite the presence of these regulatory mechanisms, practical gaps continue to exist in their implementation.

According to the petition, these gaps in implementation may prevent authorities from responding quickly and effectively to incidents involving deepfake or artificially generated digital content. The petitioner argued that additional procedural frameworks may be required to ensure that the existing rules function effectively in real-world situations where manipulated media spreads rapidly through online networks. The plea emphasized the importance of creating clear operational procedures for authorities dealing with such incidents.

The petition also invoked the powers available under Section 69 of the Information Technology Act. This provision authorizes the government to issue directions for the interception, monitoring, or decryption of information transmitted through computer resources under certain circumstances. The petitioner stated that these powers could be utilized in addressing the spread of deepfake content if accompanied by clear procedural rules governing their application. According to the plea, properly structured rules under this provision could provide authorities with the tools necessary to monitor and intercept harmful digital communications created through artificial intelligence technologies.

The petitioner therefore requested the formulation of specific procedural rules and institutional mechanisms to address the issue of deepfake and AI-generated misinformation. The plea suggested that a more organized monitoring and coordination framework could enable authorities to detect manipulated content more quickly and respond more effectively when such material begins to circulate online. It also emphasized the importance of establishing clear lines of cooperation between different agencies and digital platforms when dealing with such incidents.

After considering the issues raised in the public interest litigation, the Gujarat High Court issued notice to the relevant authorities and sought their responses. The Court directed the Central Government, the State Government, and the Director General of Police to respond to the concerns raised in the petition. By issuing notice, the Court indicated that it would examine the issues relating to the spread of deepfake and artificially generated digital content after receiving the responses of the authorities concerned.

The matter has been listed for further hearing, during which the Court will consider the replies submitted by the government authorities and evaluate the legal and regulatory questions raised in the petition. The proceedings will address concerns regarding the growing presence of deepfake and AI-generated content on digital platforms and the adequacy of the existing legal framework in dealing with these technological developments.
