In our previous blog, The Dangers of Deepfakes and How to Spot It, we discussed how deepfake technology works, its threats, and its potential consequences. We also provided some basic guidelines on identifying deepfakes and avoiding falling victim to such scams. Now, we focus on the regulations addressing this issue and the strategies individuals and organizations can use to protect themselves.
The evolving challenges posed by deepfakes require constant vigilance. As technology advances, so does the sophistication of those who misuse it for harmful purposes. For this reason, it is crucial to understand the preventive measures that users can take to protect themselves and the mitigation strategies that organizations can implement.
This blog will cover these topics and provide an overview of how different countries approach deepfake regulation.
Understanding Deepfake Regulations
Deepfake technology presents significant risks to society. It is often used to spread false information, manipulate public opinion, or damage reputations. Combating deepfakes is not a one-time task.
It requires continuous and collaborative efforts from individuals and organizations to mitigate the damage that this technology can cause.
To effectively detect and address deepfakes, many countries are developing legal frameworks aimed at curbing their negative impact.
Below are examples of how some countries are approaching deepfake regulation.
EU
The European Union (EU) is strengthening its regulations on artificial intelligence in response to growing concerns about deepfake technology. Key regulations aimed at balancing innovation with protection against disinformation, privacy violations, and potential harm include:
- AI Act: Seeks to ensure that artificial intelligence systems used in the EU are safe and transparent and that they respect fundamental rights. This act classifies AI systems into different categories based on their risk level and imposes corresponding requirements.
- Digital Services Act (DSA): In force since November 2022, this act requires providers that host user-generated content to be transparent about their moderation rules, including how they monitor and remove illegal content.
- Privacy Regulations: The General Data Protection Regulation (GDPR) impacts how personal data is used in the creation of deepfakes, with a focus on individuals’ consent and rights.
Canada
To prevent the creation and distribution of deepfakes, the Canadian government is raising public awareness and investing in research and development of detection technologies. Additionally, the Criminal Code of Canada (CCC) contains several provisions that can be applied in cases of unauthorized deepfake use, including laws on forgery, fraud, defamation, identity theft, criminal harassment, and threats.
China
China was the first country to establish rules governing deepfakes, with its Provisions on the Administration of Deep Synthesis of Internet-based Information Services, effective January 10, 2023. The Chinese government is also drafting a comprehensive law to regulate deepfake development and implementation, with a particular focus on security, privacy, and ethical standards.
Deepfake Threat Mitigation Strategy for Organizations
As deepfake technology continues to develop and become more sophisticated, organizations must implement effective risk mitigation strategies. Combating deepfake attacks requires deploying systems capable of detecting AI-generated content, and these systems must be continually updated with evolving algorithms to recognize the patterns unique to deepfakes.
Some approaches to reducing the risk of deepfake attacks for organizations include:
- Adapting and improving detection technologies, such as watermarking audio files and using photoplethysmography (PPG) to check for the subtle blood-flow signals present in genuine facial video (a simplified watermarking sketch follows this list)
- Continuously monitoring and verifying the voice and video signals used for biometric authentication
- Implementing multi-factor authentication, combining biometric authentication with passwords or PINs, to reduce the risk of compromise
- Raising user awareness about deepfake attacks and their associated risks
- Analyzing behavior in videos, such as whether lip movements are synchronized with the audio
- Establishing a rapid response plan if deepfake content is discovered, to protect the organization’s reputation and security
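To make the watermarking idea above more concrete, below is a minimal, illustrative sketch in Python (using only NumPy) of how a spread-spectrum audio watermark could be embedded and later checked by correlation. The function names, the key handling, and the thresholds are assumptions chosen for readability, not a production scheme; real watermarking systems are far more robust against compression, resampling, and deliberate removal.

```python
import numpy as np

def embed_watermark(audio: np.ndarray, key: int, strength: float = 0.005) -> np.ndarray:
    """Add a key-derived pseudo-random (spread-spectrum) pattern to the audio samples."""
    rng = np.random.default_rng(key)
    pattern = rng.choice([-1.0, 1.0], size=audio.shape)
    return audio + strength * pattern

def has_watermark(audio: np.ndarray, key: int, threshold: float = 0.002) -> bool:
    """Correlate the signal with the key's pattern; a high score suggests the mark is present."""
    rng = np.random.default_rng(key)
    pattern = rng.choice([-1.0, 1.0], size=audio.shape)
    score = float(np.mean(audio * pattern))
    return score > threshold

# Usage: mark an original recording, then verify a received copy.
rng = np.random.default_rng(0)
original = 0.1 * rng.standard_normal(48_000)   # one second of placeholder audio at 48 kHz
marked = embed_watermark(original, key=42)

print(has_watermark(marked, key=42))    # True  -> provenance check passes
print(has_watermark(original, key=42))  # False -> unmarked (or stripped) audio
```

The point of the sketch is the workflow rather than the algorithm: media is marked at creation time, and downstream systems can cheaply verify provenance before trusting audio used for authentication or official communication.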
Preventive Approaches for Users
Preventing threats posed by deepfakes requires a combination of technical knowledge, education, and strategic measures.
Some prevention strategies for end users include:
- Exercising caution when sharing information online, as it can be misused
- Ensuring that information comes from a trusted source before believing or sharing audio, video, or images
- Being wary of content shared by non-reputable sources
- Enabling two-factor authentication as an additional layer of security (see the sketch after this list for how time-based codes are generated)
- Regularly updating software and operating systems
- Using strong, unique passwords for all accounts to reduce the risk of unauthorized access
- Verifying the identity of anyone contacting you via digital media, especially if personal information is requested
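As an aside on the two-factor authentication point above, the sketch below shows, under simplifying assumptions, how a standard time-based one-time password (TOTP, RFC 6238) is derived from a shared secret using only the Python standard library. The example secret is a placeholder, not a real credential, and in practice users and services would rely on an established authenticator app or library rather than hand-rolled code.

```python
import base64, hashlib, hmac, struct, time

def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    """Compute the current time-based one-time password (RFC 6238, SHA-1)."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // interval          # 30-second time step
    msg = struct.pack(">Q", counter)
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                      # dynamic truncation
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)

# The server and the authenticator app share this secret once, at enrollment.
shared_secret = "JBSWY3DPEHPK3PXP"   # placeholder base32 secret, not a real credential
print("Current code:", totp(shared_secret))
```

Because the code changes every 30 seconds and is derived from a secret that never leaves the user's device, a stolen password alone is not enough to impersonate the account holder.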
Conclusion
Deepfake technology is advancing rapidly, and while it offers advantages, it also poses significant threats to individuals, organizations, and society as a whole. Therefore, it is crucial to implement comprehensive strategies to mitigate these risks.
The European Union and countries such as Canada and China have already begun developing regulations to combat the deepfake problem. The European Union is working on comprehensive regulations that would require platforms to flag manipulated content, while Canada is considering laws to reduce the abuse of deepfakes. China has adopted strict measures against spreading false information, including deepfake content, through regulations that impose restrictions on the use of advanced technologies in media creation.
Education and awareness play key roles in empowering users to recognize and critically evaluate manipulated media, reducing their vulnerability to deepfake threats. Ultimately, a comprehensive, multi-layered approach is necessary to effectively address the challenges posed by deepfake technology and ensure users' safety in the digital world.
References:
- [1] European Parliament. (2021). Artificial intelligence act. https://www.europarl.europa.eu/RegData/etudes/BRIE/2021/698792/EPRS_BRI(2021)698792_EN.pdf
- [2] Criminal Code, RSC 1985, c. C-46. https://laws-lois.justice.gc.ca/eng/acts/c-46/
- [3] Zhang, L. (2023). China: Provisions on Deep Synthesis Technology Enter into Effect. Law Library of Congress. https://www.loc.gov/item/global-legal-monitor/2023-04-25/china-provisions-on-deep-synthesis-technology-enter-into-effect/