Taking on deepfakes requires collaboration among all stakeholders

Advances in Artificial Intelligence (AI) and Machine Learning (ML) technology have allowed bad actors to produce hyper-realistic digital falsifications. These near-real images and audio clips can be used to damage reputations, exploit people, sabotage elections, spread large-scale misinformation, fabricate evidence and, in general, undermine trust.

Deepfakes are created when AI and ML are used to edit and manipulate digital media (video, audio and still images). They blur the lines between fiction and reality.

Sajai Singh and Mary Julie John, Partners and Attorneys, JSA Advocates and Solicitors, in conversation with IANS, help us acquire a better understanding of the matter:

IANS: How is the advancement of AI impacting the creation and detection of deepfake videos, and what steps are being taken to address the potential misuse of this technology?

SINGH: The current concern is the forthcoming 2024 general elections and how deepfakes will be strategically used in political messaging.

Currently, India does not have specific laws or regulations that ban or regulate the use of AI, ML or for that matter, deepfake technology.

If, however, an offence is committed, there are several laws under which recourse may be found. For example, violating an individual’s privacy by maliciously manipulating his/her images is covered under the Information Technology Act, 2000. Social media intermediaries may be tasked with the removal of content that impersonates another person. Then there may be protection available under the Indian Penal Code. If there is a copyright violation in the images and videos, then the Copyright Act, 1957, may be referred to.

So, we have come a long way from using editing tools, like Photoshop, to create, improve or enhance a digital image. This is the negative side of AI and ML technologies, which have huge benefits in education, film production, criminal forensics and artistic expression.

The other concern is that the use of AI and ML technology to create deepfakes is no longer the elite domain of skilled software engineers. The technology today is easy to use, cheaper, faster and within the reach of semi-skilled and unskilled individuals.

There is already a gender bias, with deepfakes disproportionately targeting women, and a growing concern that deepfakes may be used to perpetrate technology-facilitated online gendered violence.

In India, we have experienced the use of deepfakes since 2020.

Public discussion on this topic started with the videos of Bharatiya Janata Party (BJP) leader Manoj Tiwari supposedly making allegations against Arvind Kejriwal, before the Delhi elections. Then came the doctored video of Madhya Pradesh Congress chief Kamal Nath on the State government’s Laadli Behna Scheme.

Most recent was the video featuring the likeness of actor Rashmika Mandanna entering a lift in a bodysuit (probably based on an original video of British Indian influencer Zara Patel).

The issue is becoming challenging for actors across the globe with the rise of ‘digital replicas.’ This form of deepfake may deprive artists of their livelihood if the replica is created without consent and compensation.

Until the government introduces a draft AI regulation, one will need to wait and watch what view the government takes on this disturbing trend.

Of course, it would be advisable for the government not to rush into a comprehensive law that fast-moving technology may soon render outdated.

IANS: What ethical and legal challenges does the proliferation of AI-driven deepfake technology pose, and how are regulatory bodies and tech industries collaborating to mitigate these challenges?

SINGH: For a regulator on the issue, one would need to look at the Ministry of Electronics and Information Technology (MeitY). MeitY is working on either new regulations or amendments to existing laws to deal with deepfakes.

From the available information, the law on the subject would be based on detecting deepfakes, preventing them, creating a grievance and reporting mechanism, and raising awareness of the issue.

Hopefully, this four-pronged approach will provide comfort for digital nagriks in online spaces.

Awareness on how to detect deepfakes should also be on the agenda of the government as it drafts the new legislation.

Advocacy on this point is essential so that people can judge what to believe of what they see.

IANS: Is India’s current legal framework equipped to effectively govern the use of AI technologies across diverse sectors?

JOHN: In 2018 and 2021, the government rolled out sets of guidelines for the use of AI, titled the ‘National Strategy for AI’ and the ‘Responsible AI Guidelines’, respectively; but these guidelines are not mandatory and, therefore, not enforceable as law.

While the EU has recently come up with a dedicated AI law to “ensure the safety, legality, trustworthiness, and respect for fundamental rights within AI systems”, there is nothing quite so comparable in India.

However, these protections may be harmoniously construed from a combined reading of current data protection laws and other sector-specific laws, and a standard of care or caution can be drawn out.

IANS: How robust are India’s data protection laws in managing the ethical and privacy concerns posed by AI advancements?

JOHN: In India, the recently enacted Digital Personal Data Protection Act, 2023 (DPDP Act) is full of promise, and we look forward to a better and more effective data protection regulatory framework through further implementation of the DPDP Act and the roll-out of subordinate legislation.

Currently, as the relevant provisions of the Information Technology Act, 2000, and Information Technology (Reasonable Security Practices and Procedures and Sensitive Personal Data or Information) Rules, 2011, have not been repealed, they continue as law.

We also await the much-publicised ‘Digital India Act’, which will seek to regulate AI among other emerging technologies. Even though the current data protection framework does not specifically address AI, these laws can be interpreted to cover data privacy concerns arising from AI-based technologies.

In a nutshell, tackling the issue of deepfakes requires collaboration among all the relevant stakeholders. It is not just that laws need to be created or amended; technology players and civil society also need to play their part in the initiative.

“Foreign governments (like the US and EU) may be tapped on how they are currently dealing with the issue. Cross border collaboration could be key.

Finally, a deeper understanding of how generative AI technology works is essential for the common man. Law enforcement should be skilled in investigating and apprehending bad actors,” Singh said.
