In an era of rapidly advancing technology, artificial intelligence (AI) has transformed the way we create, share, and consume content. AI promises remarkable benefits, but it can also be exploited. One recent case that exposed such misuse is Rashmika Mandanna's viral elevator video, which raises pressing questions about privacy, consent, and the legal implications of deepfake technology.
The Incident
Recently, a video featuring popular actor Rashmika Mandanna entering an elevator went viral on social media, capturing the imagination of millions. However, this seemingly candid moment turned out to be anything but real. Fact-checking journalist Abhishek Kumar's revelation on Sunday sent shockwaves through the digital realm when he exposed the video as a deepfake, a fabricated piece of content created using advanced AI technology.
The video, initially shared by British Indian content creator Zara Patel on Instagram on October 9, has since garnered over 18 million views on Kumar's thread on X (formerly Twitter). In the thread, he presented the deepfake video alongside the original, raising an issue with far-reaching implications.
What is Deepfake Technology?
Deepfake technology is a form of artificial intelligence used to create highly realistic but fabricated audio and video. Typically, it alters how a person looks or sounds in a recording. Using deep learning algorithms, it manipulates or synthesizes multimedia content so convincingly that it becomes difficult to tell what is genuine and what has been altered. In this case, Rashmika Mandanna's face was superimposed on another person's body, producing a believable but entirely fabricated clip.
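For readers curious about the mechanics, the sketch below illustrates the shared-encoder, per-identity-decoder autoencoder idea that underlies classic face-swap deepfakes. It is a simplified illustration written in Python with PyTorch, not the tool used in this incident; the layer sizes, class names, and the placeholder video frame are assumptions made purely for demonstration, and real systems add face detection, alignment, blending, and far larger models.

```python
# Minimal sketch of the shared-encoder / dual-decoder autoencoder idea behind
# classic face-swap deepfakes (illustrative only; real tools are far more complex).
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Compresses a 64x64 face image into a compact latent representation."""
    def __init__(self, latent_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1),   # 64x64 -> 32x32
            nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1),  # 32x32 -> 16x16
            nn.ReLU(),
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, latent_dim),
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    """Reconstructs a face image from the shared latent representation."""
    def __init__(self, latent_dim=128):
        super().__init__()
        self.fc = nn.Linear(latent_dim, 64 * 16 * 16)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1),  # 16x16 -> 32x32
            nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1),   # 32x32 -> 64x64
            nn.Sigmoid(),
        )

    def forward(self, z):
        h = self.fc(z).view(-1, 64, 16, 16)
        return self.net(h)

# One shared encoder learns pose and expression; one decoder per identity learns appearance.
encoder = Encoder()
decoder_a = Decoder()  # would be trained on faces of person A
decoder_b = Decoder()  # would be trained on faces of person B

# The swap: encode a frame of person A, then decode it with person B's decoder,
# producing person B's face with person A's pose and expression.
frame_of_a = torch.rand(1, 3, 64, 64)  # placeholder for a real video frame
with torch.no_grad():
    swapped = decoder_b(encoder(frame_of_a))
print(swapped.shape)  # torch.Size([1, 3, 64, 64])
```

The key design choice is that a single encoder learns identity-independent features such as pose and expression, while each decoder learns to render one specific face; swapping decoders at inference time is what transplants one person's likeness onto another person's footage.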
The Legal Implications
This incident raises important legal questions about the malicious misuse of AI technology. Deepfakes blur the line between fact and fiction, and the resulting confusion can lead to defamation, harassment, and invasion of privacy, all of which carry serious legal consequences. The case underscores the need for a legal and regulatory framework to address deepfakes in the country, with an emphasis on preserving personality rights and curbing the misuse of AI tools to portray public figures in fictional scenarios. It has also brought to the forefront the ethical and legal challenges posed by AI-generated content and memes, and the boundary between creative expression and misrepresentation.
This emerging scenario may lead to the development of specific laws and regulations governing AI-generated content and memes, with potential consequences for online speech and creative expression. It also raises questions about freedom of expression, particularly in the context of memes, and about the legal status of AI-generated content as compared with human-created content.
Such misuse can lead to defamation, harassment, and invasion of privacy, all of which carry severe legal consequences:
Preservation of Individual Rights: Deepfake technology poses a significant threat to the privacy and individual rights of public figures. As seen in the case of Rashmika Mandanna, it can be used to create convincing fake videos that can harm a person's reputation and give rise to legal action.
Ethical Concerns: Deepfakes raise ethical concerns regarding the manipulation of content. The ability to impersonate someone convincingly can be exploited for various purposes, from spreading misinformation to committing identity theft.
Impact on Public Discourse: Deepfakes can undermine the credibility of digital content and news sources. When the authenticity of visual and auditory information is in question, it becomes increasingly challenging for the public to discern what is real and what is not, thereby eroding trust in digital media.
National Security: Deepfake technology can be weaponized to create deceptive content that poses a threat to national security. It can be used to manipulate public sentiment, create forged videos of politicians or leaders, and potentially incite chaos or conflicts.
Misuse in Cyberbullying and Harassment: The ease of creating deepfake content opens the door to cyberbullying and harassment on an unprecedented scale. Perpetrators can use this technology to create explicit, humiliating, or false content aimed at defaming or harassing individuals.
Here is how Union Minister Rajeev Chandrasekhar responded to the deepfake:
He stated that the Narendra Modi-led government is dedicated to safeguarding the safety and trust of all digital citizens using the internet. The IT Rules, which were introduced in April 2023, establish a legal requirement for platforms to prevent the posting of any false information by users.
Platforms are also obliged to remove reported misinformation within 36 hours, whether reported by a user or the government.
Failure to comply with these rules results in the application of rule 7, which allows aggrieved individuals to take platforms to court under the provisions of the Indian Penal Code (IPC).
Conclusion
The Rashmika Mandanna elevator video case serves as a stark reminder of how AI technology, particularly deepfakes, can be misused. As the legal framework adapts to these challenges, it is important to protect individuals, their reputations, and their dignity. The incident has prompted a legal awakening and reaffirmed the need for a concerted effort to ensure that AI technologies are used responsibly and ethically in our increasingly digital world.