The Criminal Code Amendment (Deepfake Sexual Material) Bill 2024 is an important bill. It will create new criminal offences banning the sharing of non-consensual deepfake pornography, acting on digitally created and altered sexually explicit material that is a damaging form of abuse against women and girls and that we know is inflicting deep harm on victims. Sexually explicit deepfakes created and shared without consent are being used to target, degrade and dehumanise women. In many cases, they are also perpetuating harmful gender stereotypes and contributing to gender-based violence. When these deepfakes are shared with others or posted online without the consent of the person depicted, that is a serious breach of a person's privacy, and it has long-lasting harmful impacts on victims who never consented to having their image shared.

Our government is acutely aware of the growing problem of deepfake technology, and we are responding to it through this bill, particularly where the technology is being used to create false and damaging depictions of real people engaging in sexually explicit acts. These reforms will make clear that those who share sexually explicit material without consent, including material generated using technology such as AI, will be subject to serious criminal penalties, as they should be. The new offences will carry a maximum penalty of six years' imprisonment for transmitting sexual material without consent and seven years' imprisonment for aggravated offences, including where the person also created the material. The offences will apply only to sexual material depicting adults; child abuse material, including AI-generated material, will continue to be dealt with under dedicated separate offences in the Criminal Code.

These reforms are a high priority for our government, and they follow the public commitment we made on 1 May this year to introduce a suite of measures to tackle online harms, particularly those targeting women and girls. I know this is an issue that matters to my community. Like so many people around Australia, they are concerned by the trend of harmful, misogynistic material being shared online and by the devastating consequences that has for too many women and girls in my community and right around the country. So this bill is one of a number of measures our government is taking, through the parliament and other mechanisms, to tackle misogyny online and to make online spaces safer for women and girls.

These changes ensure that the offences apply both to real material, such as unaltered images and recordings, and to fake or doctored material that has been created or altered using technology such as deepfakes, bringing those two categories together. As I said, the bill builds on other action being taken by our government, including increased funding for the eSafety Commissioner, bringing forward the review of the Online Safety Act a year ahead of schedule, our commitment to address harmful practices such as doxxing, and the work we are doing to overhaul the Privacy Act to give all Australians, particularly women experiencing domestic and family violence, greater control over their own personal information.

Here in Australia and across the world, we have seen rapid growth in deepfakes, and in sexually explicit deepfakes, in recent years. Some estimates suggest that 96 per cent of deepfakes online are pornographic in nature. That is an absolutely shocking statistic. The Palgrave Handbook of Gendered Violence and Technology tells us that the first documented instance of amateur deepfakes appeared in 2017, when a Reddit user shared several images of celebrities' faces manipulated by AI onto the bodies of pornography actors. Only two years later, an app using AI technology was released that allowed almost anyone to create convincing deepfakes of any woman, followed in early 2020 by an AI bot on the Telegram app making use of that technology. Within a matter of months, a review found that almost 105,000 manufactured images of different women had been created and shared using this technology.

In the four years or so since then, the technology has gone even further. As law enforcement here in Australia have reflected, over a six-month period in the last year or so their experts went from being able to distinguish real material from manufactured material to being unable to tell the difference. We have a genuine problem here. We have people, particularly women and girls, being harmed by deeply offensive material being created and shared online without their consent.

We do know artificial intelligence is rapidly developing, and I don't want to suggest that is all a bad thing; there are positive contributions artificial intelligence can make. But in the context of this bill there are certainly some very harmful uses, and it is particularly concerning if advances in AI technology deepen existing misogyny, if those technologies are used to further embed the inequalities and dangers that women already face in our society. Again, that is why laws such as these are so important.

As I said before, this type of attack can happen to anyone; it isn't just celebrities. We have seen celebrities like Taylor Swift subjected to sexually explicit deepfake material but, while those have been the high-profile victims, many women here in Australia and around the world have had the same thing happen to them. Our government wants to ensure the people who are creating and sharing these kinds of deepfakes are held accountable and punished for their actions. There is existing work in this space as well. The eSafety Commissioner has been working to fill the gap on image-based abuse and, with several test cases now under their belt, reports a 90 per cent success rate on take-downs. We need, of course, a multisystem approach here: penalties; regulation that applies to service providers, technology platforms and generative AI tool companies; and the education and social-norm work addressing the creation of this material in the first place.

This bill goes to the legislative side, strengthening the legislative framework to deal with this issue. It also builds on work being done in some of our states, where there are specific offences involving image-based abuse. Victoria and South Australia have introduced summary offences involving the non-consensual sharing of intimate images in response to the issue of deepfake pornography. In my home state of Victoria, the Justice Legislation Amendment (Sexual Offences and Other Matters) Act 2022 made a range of changes, including updating Victoria's definition of consent, criminalising stealthing and introducing new offences involving intimate images that can be used to prosecute users of deepfake pornography.

It is a difficult area of law, but that is certainly no excuse for not acting. Those difficulties have been taken into account as our government has moved forward with this bill and these changes. Deepfake creators may be liable for defamation under Australian law, but defamation recourse was designed mainly for the written or spoken word. This bill aims to address the gap in our existing laws and to meet technology where it is. It will be important in addressing an issue that, if allowed to continue in the manner it has, will keep doing too much harm to too many women across our country.

This bill was developed in consultation with the Australian Federal Police, the Commonwealth Director of Public Prosecutions and other Commonwealth agencies. I want to highlight that it responds to calls from victim-survivors, and I thank all of those victim-survivors who have worked with us on this. It responds to calls from community groups and from other members of the public who've contacted me and, I know, other members in this place, asking us to do more to address the non-consensual distribution of sexually explicit deepfake material. We are clear that, without action, this situation will only get worse. As the Australian Federal Police commissioner said earlier this year at the National Press Club: 'It is likely that we have a tsunami of AI generated abuse material coming at us. We've got to get ahead of that tsunami.' This action will be an important part of getting that work done and getting appropriate laws in place.

The laws are supported by experts in the space—including Dr Asher Flynn, who has publicly written about the laws in Lawyers Weekly. She’s the chief investigator and deputy lead at the Australian Research Council Centre of Excellence and an associate professor of criminology at Monash University. She says the laws:

… will address a major legal gap in Australia, in which the creation of sexualised deepfakes was not illegal anywhere in Australia, except in the state of Victoria. The new laws will see Australia once again leading the way in legal responses to online sexual harm.

The laws may also go some way towards curbing the accessibility of sexualised deepfake technologies.

It is good to have that endorsement from a leader in the field.

It's also important to reflect on that final point, about curbing the accessibility of sexualised deepfake technologies, and on challenging the societal norms that let people think it is okay to create these types of images. It is not okay. Creating this type of material should never be seen as a normal part of interacting in the world, of interacting with women or of dating women. There is a legal response we are putting in place, but I very much hope that alongside it there is a norm-setting response, in which, at both a government level and a community level, we say very clearly that this type of behaviour has no place in Australian society and that we are firmly against it.

The bill amends the Criminal Code to modernise and strengthen the offences for the non-consensual sharing of simulated and real sexual material online. The Criminal Code has criminalised the non-consensual sharing of private sexual material since 2018, but the existing offences did not clearly apply to material that has been created or altered using technology. Those offences will be repealed by the amendments, and the new offences will apply to the non-consensual sharing of both real material, such as unaltered images and recordings, and fake or doctored material that has been created or altered using technology such as deepfakes. The offences do not cover private communications between consenting adults or interfere with private sexual relationships involving adults. It is a complex area, but it is one that demands action, through this legislation and through the other work our government is doing to keep Australians safe from technology-facilitated abuse. This bill bans the sharing of non-consensual deepfake pornography, taking action on digitally created and altered sexually explicit material that we know is a damaging form of abuse against women and girls: material that is too often used to degrade, dehumanise and target women and that, as I spoke about earlier, perpetuates harmful gender stereotypes and helps drive gender-based violence.

This is an important bill. It is one of a number of measures our government is taking to make online spaces safer, particularly for women and girls. It will hold perpetrators to account for the harm caused by the non-consensual sharing of deepfakes, and it ensures Australia's criminal offences keep pace with new technology. I commend the bill to the House.
