Ethical Hacking News
A recent case involving Breeze Liu highlights the inadequacies of tech giants' content moderation policies and underscores the need for more effective measures against digital abuse on the web. Despite progress through policy changes in the US Congress, technology companies must continue working toward a safer online environment.
The online world has become a breeding ground for various forms of digital abuse, and it is imperative that technology companies implement effective measures to tackle these issues. A recent case involving Breeze Liu, a prominent advocate for victims of intimate image abuse, has shed light on the difficulties faced by individuals in their quest for justice.
In April 2020, Liu received a phone call from a college classmate informing her that an explicit video of her was available on PornHub under the title "Korean teen." This discovery set off a chain of events that took a severe toll on Liu's mental well-being. The video had been filmed without her consent when she was just 17 years old and uploaded without her knowledge.
Over time, the video spread across various platforms, where it was downloaded and reshared. Intimate deepfake videos featuring Liu were then created and disseminated online, causing her immense emotional distress. "I honestly had to struggle with suicidal thoughts, and I almost killed myself," she recalled.
Breeze Liu's ordeal did not end there. When attempting to scrub the non-consensual images and videos from the web, she encountered significant resistance from Microsoft, one of the internet's largest gatekeepers. Despite repeated pleas, the tech giant failed to remove about 150 explicit images of Liu stored on its Azure cloud services.
Liu had previously started her own company, Alecto AI, with the aim of developing an AI tool that could detect and remove unwanted content from the web. Investor support, however, has been scarce; some potential investors reportedly laughed at the idea of building a business around this cause.
Undeterred, Liu turned her focus toward advocating for policy changes in the US Congress. A proposal she had long championed, which would require websites to remove nonconsensual explicit images within 48 hours, nearly reached President Joe Biden's desk before being shelved. Real progress came last week, when the bill passed the Senate.
In a recent interview with WIRED, Liu emphasized that her experience highlights the inadequacies of tech giants' content moderation policies. "It’s almost impossible for ordinary people to navigate the complex system and do damage control," she said. Her case has also shed light on the added difficulty victims face when their age in the imagery is disputed or hard to discern.
Liu's story serves as a stark reminder of the difficulties that individuals face when dealing with digital abuse. The process of scrubbing non-consensual images from the web can be daunting, and it is essential that technology companies implement more effective measures to tackle these issues.
Microsoft has since acknowledged Liu's concerns and taken steps to improve its reporting processes and relationships with victim aid groups. However, it remains unclear whether this change will be sufficient to address the systemic issues at play.
The lack of consistency in policies and processes among tech companies contributes to delays in securing takedowns, according to Emma Pickering, the head of technology-facilitated abuse at Refuge. "They all just respond however they choose to—and the response usually is incredibly poor," she said.
Google introduced new policies in July 2024 aimed at accelerating removals, but gaps remain. The case of Breeze Liu serves as a powerful reminder that effective content moderation is not only crucial for protecting victims but also necessary for maintaining the integrity of the online world.
In conclusion, Breeze Liu's experience highlights the complexities and challenges involved in tackling digital abuse on the web. While progress has been made, more work is needed to ensure that technology companies implement effective measures to protect their users. The future of online content moderation will likely involve a collaborative effort between tech giants, policymakers, and advocates like Breeze Liu who are fighting for change.
Related Information:
https://www.wired.com/story/deepfake-survivor-breeze-liu-microsoft/
Published: Thu Feb 20 05:03:39 2025 by llama3.2 3B Q4_K_M