In 2017, an anonymous Reddit user used artificial intelligence (AI) to superimpose celebrities' faces onto performers in pornographic films, and the discussion around "deepfakes" began. Deepfakes are technologically manipulated media created using a sophisticated form of AI known as deep learning. Using this technology, it is now possible to make people appear to say or do things they never said or did; deepfakes have been described as "fake news on steroids". Even though deepfakes are not easy to create and demand technical expertise and high-end resources, apps such as 'Zao' and 'Doublicat' have brought this technology to the masses.
The frightening consequences of deepfakes are (i) 'reality apathy': when people are put in a constant state of suspicion, they lose trust in everything they see; and (ii) plausible deniability: deepfakes give public figures the opportunity to dismiss even true events as fake if they prove embarrassing. Deepfakes have been used as a weapon against women to create revenge porn, which can leave victims utterly traumatized. They can also be employed in elaborate schemes to cheat or commit fraud.

Deepfakes are a looming threat to democracy because of their potential for misuse in politics. Imagine a well-timed deepfake released the night before an election, dragging a promising candidate's reputation through the mud. Any attempt by the authorities to establish the truth would be too little, too late. Global pandemics are hard to handle as it is, and waves of misinformation have only made them worse. In India, after the Tablighi Jamaat incident, fake news, doctored videos and conspiracy theories circulated on social media, causing communal tension in the country. Disinformation has also resulted in heavily armed white vigilantes standing guard against made-up 'Antifa invasions' across many parts of the US. In such highly strung situations, a single deepfake could wreak significant havoc.
Deepfakes have been muddying the legal waters since their inception, and their impact is felt in the areas of intellectual property, torts, criminal law, privacy and data protection, national security and more.
Deepfakes have garnered a lot of attention in the United States and become a significant cause for concern. The year 2019 saw the introduction of a dozen new bills, including the Deepfake Report Act, the DEEPFAKES Accountability Act and the first federal legislation on deepfakes, enacted as part of the National Defense Authorization Act (NDAA). However, these measures have limitations: the NDAA treats deepfakes only as a threat to national security and to US elections, and fails to provide legal remedies for violations of individual rights. The DEEPFAKES Accountability Act, for its part, fails to draw a clear distinction between deepfakes created with malicious intent and those created for entertainment and satire. States such as Virginia, Texas and California have also passed strict laws, but these are bound by jurisdictional limitations.
Apart from this scattered legislation in the US, there is no existing legal framework in other parts of the world to govern deepfakes. In the age of social media, misinformation spreads faster than a pandemic, and it is the responsibility of the State to ensure there are adequate measures to combat it. Subject-specific legislation governing deepfakes is now the need of the hour, and Delhi BJP President Manoj Tiwari has given a reason to expedite the process: his party circulated a deepfaked campaign video of him during the Delhi elections. Although the intention behind that video may have been benign, such incidents could, if the technology remains unregulated, lead to its improper use, skew an election, or instantly sour healthy competition among candidates.
In India, there is no specific legislation to combat deepfakes, and the existing laws fall short of addressing the issue as a whole. The Information Technology Act, 2000 penalizes the publication of sexually explicit material in electronic form; however, it is ill-equipped to tackle forms of deepfakes other than sexually explicit material. For example, if a deepfake of a company's CEO damages the company's reputation and stock price, the law offers no remedy. If the Personal Data Protection Bill, 2019 is passed, personal data could be processed only for a lawful purpose, and deepfake revenge porn could be taken down under the individual's 'right to be forgotten'. However, the right to be forgotten and its territorial applicability are widely debated topics, and this requires considerable attention while regulating deepfakes. The Bill also does not protect the data of deceased persons. This could become a loophole vulnerable to exploitation, especially in the case of deceased religious and political leaders, as deepfakes of their speeches may be used to manipulate their followers. Indian legislation also fails to treat deepfakes as a threat to national security.
Ironically, part of the solution lies in AI itself: researchers across the globe are working on deepfake-detection technologies. One example is 'Reality Defender', a plug-in for web browsers that detects deepfakes. Various platforms, including Facebook and Twitter, have adopted policies to prohibit deepfakes or label them as altered, with the 2020 US elections just around the corner. However, technology alone cannot eradicate the negative effects of deepfakes; it needs to be coupled with sturdy legal action.

Legal remedies can only help individuals after the impact of a deepfake has been felt; therefore, legislators also need to consider criminalizing the very act of creating and distributing deepfakes with malicious intent. They will further have to ensure that 'malicious intent' and related terms are defined to suit the context of deepfakes without infringing freedom of expression. It is equally important that these laws do not step on the toes of genuine creators who use the technology in good faith, for deepfakes hold vast potential for good as well. For instance, the company CereProc is using the technology to create digital voices for people who lose theirs to disease. Regulation should therefore support and encourage such positive uses.

In conclusion, governments across the world need to take a two-fold approach to the dangers posed by deepfakes. India needs to begin by spreading awareness about the technology to blunt its impact, and then enact laws that effectively provide reparations for victims of deepfakes and equip the country to deal with the tide of disinformation they are capable of unleashing. As David Doermann remarked, "A lie can go halfway around the world before the truth can get its shoes on".
Sindhu A is a final-year law student.