The Cyberspace Administration of China, the country's internet watchdog, is rolling out new regulations, effective January 10, to restrict the use of deep synthesis technology and curb disinformation.
Deep Synthesis
Deep synthesis is defined as the use of technologies, including deep learning and augmented reality, to generate text, images, audio, video and virtual scenes.
One of the most notorious applications of the technology is deepfakes, in which synthetic media is used to swap one person's face or voice for another's. Deepfakes are getting harder to detect as the technology advances. They have been used to generate celebrity porn videos, produce fake news and commit financial fraud, among other abuses.
What is a deepfake?
Deepfakes are synthetic images, audio or video created with machine-learning algorithms to spread misinformation by replacing a real person's appearance, voice, or both with a convincing artificial likeness. The technology can create people who do not exist, and it can show real people saying or doing things they never said or did.
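Classic face-swap deepfakes are built on a shared encoder paired with one decoder per identity: encode a face of person A, then decode it with person B's decoder. The toy sketch below, which uses random, untrained weights and hypothetical image sizes, only illustrates that data flow; it is nowhere near a working deepfake system.

```python
# Toy sketch of the shared-encoder / two-decoder architecture behind
# classic face-swap deepfakes. Weights are random and untrained: this
# illustrates the data flow only, not an actual face swap.
import numpy as np

rng = np.random.default_rng(0)
IMG_DIM, LATENT_DIM = 64 * 64, 128  # hypothetical sizes

# One shared encoder learns a common "face" representation...
W_enc = rng.standard_normal((LATENT_DIM, IMG_DIM)) * 0.01
# ...while each identity gets its own decoder.
W_dec_a = rng.standard_normal((IMG_DIM, LATENT_DIM)) * 0.01
W_dec_b = rng.standard_normal((IMG_DIM, LATENT_DIM)) * 0.01

def encode(face):
    # Map a flattened face image into the shared latent space.
    return np.tanh(W_enc @ face)

def decode(latent, W_dec):
    # Reconstruct an image from a latent vector with one identity's decoder.
    return W_dec @ latent

face_a = rng.standard_normal(IMG_DIM)  # a flattened face image of person A

# The "swap": encode A's face, but decode with B's decoder, so the output
# would keep A's pose and expression rendered with B's identity.
swapped = decode(encode(face_a), W_dec_b)
print(swapped.shape)  # (4096,)
```

In a real system the encoder and both decoders are deep convolutional networks trained on many images of each person, which is why the results can be so convincing.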
The term deepfake originated in 2017, when an anonymous Reddit user going by the handle "Deepfakes" used Google's open-source deep-learning software to create and post pornographic videos, doctoring them with a face-swapping technique that replaced performers' faces with those of celebrities.
Deepfake technology is now being used for nefarious purposes like scams and hoaxes, celebrity pornography, election manipulation, social engineering, automated disinformation attacks, identity theft and financial fraud, cybersecurity company Norton said in a blog post.
Deepfake technology has been used to impersonate notable personalities like former U.S. Presidents Barack Obama and Donald Trump, India’s Prime Minister Narendra Modi, Facebook chief Mark Zuckerberg and Hollywood celebrity Tom Cruise, among others.
China’s new policy to curb deepfakes
Under China's new rules, companies and platforms using the technology must obtain an individual's consent before editing their voice or image.
The policy requires deep synthesis service providers and users to ensure that any content doctored with the technology is explicitly labelled and can be traced back to its source.
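In practice, labelling and traceability of this kind could mean attaching a disclosure label and a provenance record to each piece of synthetic media. The sketch below is illustrative only; the field names and scheme are hypothetical, not taken from the regulation.

```python
# Illustrative sketch of labelling synthetic media for traceability:
# a visible disclosure label plus a provenance record containing a
# content hash and a provider ID. The record format is hypothetical.
import hashlib
import json

def label_synthetic_media(content: bytes, provider_id: str) -> dict:
    """Return a provenance record that ties a disclosure label and a
    provider identity to the exact bytes of a piece of media."""
    return {
        "label": "AI-generated content",  # explicit disclosure label
        "provider": provider_id,          # who synthesized the content
        # SHA-256 digest binds the record to this exact file.
        "sha256": hashlib.sha256(content).hexdigest(),
    }

record = label_synthetic_media(b"<fake video bytes>", "example-provider-001")
print(json.dumps(record, indent=2))
```

Because the hash is computed over the media bytes, any later tampering with the file would no longer match the record, which is what makes the content traceable to its source.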
The regulation also requires anyone using the technology to edit a person's image or voice to notify that person and obtain their consent.