News

Published on November 1st, 2019

Danielle Citron on the Danger of Deepfakes and Revenge Porn


Photo: Chip Somodevilla/Getty Images

When Danielle Citron began studying online harassment more than a decade ago, her argument that online mobs and coordinated harassment constituted civil-rights violations didn't make sense to many of her colleagues. Now, the legal scholar works with some of the most important tech companies and lawmakers on finding ways to minimize harassment, which has grown from posting standard lies and insults to revenge porn and "deep fakes." Earlier this year, Citron received a MacArthur "genius" grant for her work in the field. She spoke with Intelligencer about what's changed over the last few years and what hasn't.

When did you start studying online harassment and online interactions?
I began writing about privacy generally in 2006 and then focused on online harassment in 2007, when we first saw (or at least first heard reports of) attacks on women from all different walks of life online. There were female law students being attacked on AutoAdmit, a message board that was really popular at the time for information on college and grad schools. And there was Kathy Sierra, a software developer who was writing about how to write code that makes you happy. Nothing deeply controversial.

And she was targeted on her blog and then on a few other blogs with sexually threatening, sexually demeaning lies and privacy invasions. It basically chased her from participating in a big tech conference and giving a talk to hiding out in her house and really stopping blogging, which was a big part of the public outreach for the work she was doing. She wrote a lot of books on programming, she was a writer, she was a tech developer, and she basically retreated for about a year. And there were female law students from every law school who were targeted on AutoAdmit.

By other law students?

It was clear that it was definitely some law students, because they were saying, "I saw X-name person at the gym. She was wearing this," accompanied by a photo. Or, "This is where she lives. This is where she worked last summer." And then what would follow were rape fantasies, very specific and particular. Home addresses and defamation. Accusing women of sleeping with their deans to get into law school, of having herpes. Like, "Don't sleep with that bitch. She's sexually diseased. She has AIDS."

So at the time, I'm writing about privacy. And all of this is under the veil of anonymity. Kathy is targeted by people who are writing under pseudonyms that don't give you any sense of who they are. Whether it's Hitler, Jesus, you know what I mean. XOXO. Names that aren't traceable in any public way to people's real identities.

Right. It was all like WeedLord420.

Yeah, there you go. But I was thinking of the real names that people used, like Cheese Eating Monkey. I'm serious. This is in the lawsuits: two of the female law students, represented by pro bono counsel, sued 39 posters. Because basically what it had become was a cybermob bent on ensuring that these two female law students couldn't get jobs after law school.

People would put up posts saying, "Everyone contact the hiring folks at the top 50 law firms and tell them why they shouldn't hire this bitch. She's a liar who scammed her way into law school, and isn't trustworthy, and has sexually transmitted diseases," et cetera. And people called law firms and sent emails. It was economic and personal reputational sabotage. What happened is, there was this "top-14 hottest girls in law school" list generated on AutoAdmit. Women at all these law schools were being named as the hottest women in law school. And then, of course, people picked up those names, followed through, and created a cybermob attack on all these women.

That's what grabbed me. I write about privacy, but I also understood that all this made the case for cyber civil rights, for understanding this as a civil-rights problem. The whole idea was to disenfranchise vulnerable people from making a living: to take advantage of all the ways online tools are so important to our lives, and to make sure that the people these mobs resented could not take advantage of those opportunities.

So, I made the case pretty early on that this was a civil-rights problem, and I got really pooh-poohed for the idea: "You're making a mountain out of a molehill. You're exaggerating. It's just boys being boys. Relax." Even dear colleagues at the time were saying I wanted to break the internet, that it was free speech, and that I had to back off.

They were even holding up free speech back then?

Oh yeah, at least among academics, my wonderful friends who pushed back. I always appreciate pushback, but the significant argument was (a) it's not a civil-rights problem, and (b) it's the free speech of the harassers and thou shalt not regulate it. Even though I said, "Listen, I don't want to go outside First Amendment doctrine. These are credible threats. This is knowing defamation." Like accusing someone of being an actual prostitute, said in a context where it's clear they're making a false assertion. It's not an opinion like "She's a slut."

They were saying these women were available for sex, and this is where they lived, and they were interested in rape fantasies. Those are provable facts. You can prove them true or false. Even then I said, "Look, it's a true threat. It's defamation. It's privacy invasion in the sense of someone's nude photos posted without consent. And I don't want to tread on what's actually protected free speech. But if it happens offline and we regulate it, then if it happens online, we should regulate it." And the answer I got was, "The internet is different. It's special, and thou shalt not touch it."

What has changed in the last decade or so?

Isn't it obvious that when you say you're going to chop someone's head off and shove it up their cunt (excuse me, but this is literally the stuff I was writing about at the time) that that's a threat? The suggestion then was that that's not a true threat. I think now you'd probably say to me, "Of course that's a true threat." In a context in which you have no idea who it is, it seems really frightening. There's nothing to suggest it's a joke.

Yeah. I mean, even if it's not a true threat, if I were a message-board moderator, I would not let it stand.

We've come a long way from that being seen as kind of crazy and overreacting. What has been so gratifying in the past 12 years is not only convincing companies to take this seriously and working with the safety folks at Facebook and Twitter and Microsoft, but also working really closely with law enforcers and lawmakers. I worked for two years with then-[California] AG Kamala Harris on what she called cyberexploitation. After my book came out, she read it and had me in to talk to her executive team. And then I worked with them for two years before she became a senator. Ever since then, I've worked with her office and many other folks on the Hill on a wide range of issues involving harassment, cyberstalking, and threats.

In working with government officials, do you still hear a lot from free-speech advocates?

Oh, definitely. In my work with lawmakers, we are very careful about First Amendment challenges. We frame our drafting suggestions with First Amendment concerns, values, and challenges in mind. But nonetheless, as careful as we can be in our advice, certain advocacy groups like the ACLU just say, "If you want to criminalize speech, you can't. End of story." Which isn't actually true. There are some 21 crimes that we think of as crimes involving words.

But of course, we have special ways to think about criminalization, in terms of making sure the law is very specific and not vague. And you've got to think really hard about First Amendment doctrine. But it's not off the table by any means. Still, that's the role they play, and it's an important role. Right?

Right. There's got to be a check on it.

Of course. I appreciate civil libertarians, you know what I mean? I work really closely with groups like EFF [the Electronic Frontier Foundation]. It's not that I love those folks; it's that I appreciate the work they do. But it has to be tempered, and it is, by discussion.

The internet has become a lot more centralized over the last decade or so. And Facebook and Twitter and whatnot will never stop getting criticized for not doing enough about harassment, it seems. So I'm wondering how you think about megaplatforms in regard to this topic, platforms that seem almost too big to effectively regulate.

Okay. I think, gosh, the idea that they're too big to regulate strikes me as just a false idea. I don't think you're suggesting, Brian, that that's true. But ...

No, but I'm saying they operate at a scale at which they're like, "Something will always slip through the cracks."

Oh, I see what you're saying. Right. "At our scale, you can't always be happy with our responses." I've worked really closely with the safety people [at these companies]. It's my sense that at that scale, you're never going to have perfect compliance with your own rules and practices. You can always do better. My sense of the people I work with (Nathaniel Gleicher, Monika Bickert, Antigone Davis, folks at Facebook who have very important jobs and are in charge of policy, though they're not the ultimate deciders, so to speak) is that they want to do better. They always want to do better. My sense is always that we can do better, so let's talk about how. It's gratifying, certainly. The reason I don't get paid for my work is so that I can always criticize. I'm unpaid, which allows me, as an academic, to speak my mind.

I think you're right that the objection of "We're never going to get it right, so don't complain" is out there. But I'm not persuaded by it. The folks who are not in the C-suite but are making policy on the ground at these companies, I think they know they're fallible and can always do better. This is why I bother doing it. If I felt I wasn't useful, I would just swing from the outside. But because I always feel like there's real listening and effort at improving at every stage, I think it's worth helping.

Facebook, for example, is trying to create a single rule set for the entire Earth. Do you ever run into people saying, "Oh, we can't implement that rule, because while it makes sense in the U.S., it might not make sense somewhere else"?

And I criticize them for that approach in my scholarship. I don't think there's a one-size-fits-all rule for hate speech in the U.S. What we mean by hate speech and why we care about hate speech, I think, should differ depending on the context. A society that's resilient and strong, and also one that has a First Amendment, is different from a society that's really fragile. Whether it's Rwanda or Myanmar, the challenges on the ground are really different.

A one-size-fits-all rule is cheaper, because you don't have to have piecemeal policy and piecemeal execution of policy, but it is really wrongheaded, because societies and cultures are different. I also think a huge mistake is just rolling out your product in a country where you don't know the problems on the ground.

That's what gave us Myanmar and the Rohingya. You just roll out your product and you don't consider the possibility that there's a really vulnerable minority that is going to be subject to ethnic cleansing. There's burgeoning hate speech leading to violence that you don't even know about, because you don't have content moderators who speak the language. That's just wholly irresponsible, manifestly so.

What's true in the U.S. is different from Canada, is different from Mexico, is different from ... right? Given how important these platforms are, how pervasive they are, and how integral they are in everyday life (meaning all aspects of our lives, not just spouting off, not just the public square, but everything we do), I think it's irresponsible to have a one-size-fits-all approach.

Do you think Facebook will ever get to that epiphany, or are they already there?

I guess it depends on the topic. There are some things where a one-size-fits-all approach, I think, actually totally works. Just take nonconsensual pornography. They have a no-nudity policy anyway. So my deep worry is about people's nude photos being used without their consent and shared on the platform. They've been really aggressive and wonderful about proactively dealing with some of these problems. That is a one-size-fits-all approach, tied to child pornography and exploitation. And that sort of makes sense, because the approach is largely similar across boundaries: in Wales and the U.K., in India. We say, "Okay, you don't want your nude photo posted without consent." This luckily falls within their anti-pornography rules.

But having a hate-speech policy, that's different. In 2016, the four biggest platforms signed a memorandum of understanding with the European Commission to ban, and take down within 24 hours, hate speech, understood as speech that incites against or demeans protected groups. So that's pretty broad: demeaning speech. I actually want them to be much more transparent about what they're actually doing. On hate speech, we need far more transparency for me to even have an educated guess. That's not one of the areas I consult with them on.

Last year, Facebook announced a tool where users could submit their nude photos to Facebook so it could index them and fingerprint them, which from an outside perspective seems insane to me. But I was wondering what your take on it was.

Let me explain why I don't think that's insane. I'm on Facebook's Nonconsensual Intimate Imagery Task Force. It's a group of folks advising them, from advocacy groups including the National Network to End Domestic Violence, CCRI [the Cyber Civil Rights Initiative], and others. The reason it's not crazy is that what we hear from victims of nonconsensual pornography is, so often, that people threaten them with the posting. They haven't yet done it, but the person will say, "I'm going to post this on Facebook unless you go out on a date with me."

It's a great relief to victims that they can provide an image that Facebook then hashes, which helps prevent it from being reposted on the platform. [Hashing is a computer function that turns a file into a unique, algorithmically generated string of letters and numbers. A file with a hash identical to a submitted photo's could not be uploaded to Facebook.] Now, at the time this was announced, there were really smart computer-security folks who worried that this could lead to theft or leaks, which would undermine the project. But from my understanding, they've been working really hard on those security issues to make the process work. Once you've hashed an image, you can't reengineer it back from the hash. So the database of hashes isn't what's vulnerable; it's the initial upload to Facebook.
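The hash-matching idea described in the bracketed note can be sketched in a few lines. This is a simplified illustration, not Facebook's actual pipeline; the function names and in-memory block list are hypothetical, and a plain SHA-256 digest is used only to show the one-way property.

```python
import hashlib

def fingerprint(image_bytes: bytes) -> str:
    """Turn a file into a fixed-length, algorithmically generated string.

    SHA-256 is one-way: the photo cannot be reconstructed from the digest,
    which is why a platform can keep the hash without keeping the image.
    """
    return hashlib.sha256(image_bytes).hexdigest()

# Hypothetical block list, seeded with the hash of a submitted photo.
blocked_hashes = {fingerprint(b"victim-submitted-photo")}

def allow_upload(image_bytes: bytes) -> bool:
    """Reject any upload whose hash matches a previously reported image."""
    return fingerprint(image_bytes) not in blocked_hashes

print(allow_upload(b"victim-submitted-photo"))  # False: identical file is blocked
print(allow_upload(b"unrelated-photo"))         # True: no hash match
```

Note that an exact cryptographic hash like this stops matching if the image changes by even one byte, so real systems reportedly rely on perceptual fingerprints (in the style of PhotoDNA) that tolerate resizing and recompression.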

The reason we helped them with it is that victims were actually coming to us, asking advocates for help. It didn't come from Facebook, like, "This is this great idea we have." It really was advocacy groups saying to Facebook, "This is what victims are saying." It's not total relief from your worries, but at least it takes one thing off your plate as a victim.

Do you know if the tool is getting a lot of use?

It's getting use. Do I know the actual numbers right now? I don't. I did at some point know that people were using it. It really has been an effective policy, and people were taking advantage of it.

Good. Yeah, I just remember seeing that initial announcement, and from a privacy standpoint, given Facebook's reputation, it seemed dicey. I was like, "Hmmmm." But it's good to know it's actually working as intended.

Right, and privacy folks like myself were part of the conversation. I've written about that market response as, on balance, a pro-social thing. I wrote a piece called "Sexual Privacy" in the Yale Law Journal that just came out, and it's the start of my next book, about how market responses have promise. Not just law, but market responses. That project was one I pointed to as a helpful response.

Have you heard from other platforms about similar systems or information sharing?

That's an interesting point, because these major platforms do it with the hashes of extremist and terrorist material. Not just the four dominant ones; many more have subscribed to a database of hashes of known terrorist images that they proactively filter or remove. And I have not heard of a similar project with regard to NCP [nonconsensual pornography]. That doesn't mean it's not happening; I just don't know about it. It would be important, I think.

Have you been following what's been going on with Representative Katie Hill?





I think what is so distressing about it is that, of course, it lines up with everything I've been studying: the targeting of a woman, shaming her for her sex and sexual relationship, and having it cost her probably more than it would cost people in more privileged spots. We have a president who has been accused of rape and who is above the law. And a lot of folks who have been accused of far worse, of nonconsensual touching and assault, just proceed as if, "Okay, I can handle and withstand these kinds of allegations," and continue on as lieutenant governor, continue on as senator, as representative.

What I think is so depressing is not only that it's an ex-husband who wants to really destroy the life and career of his ex-wife, and that it's worked. He's also shamed and embarrassed and hurt, really harmed reputationally in all sorts of ways, that young staffer, the young campaign worker he was also having a relationship with. So, the ones who lose are these victims of nonconsensual pornography. It's true they [the news sites that posted the photos] blacked out the nipples, but it's an egregious violation of sexual privacy no matter what. You're exposing someone's body, showing them in their most intimate moments, in a way to which they did not consent.

Do you think it's illegal for those news outlets to have posted those photos?

There are laws on the books in California and D.C. In California, you need to show an intent to cause emotional distress. You have to make out a case, beyond a reasonable doubt, that the husband intended to cause emotional distress.

All of these statutes have exemptions for matters of public interest. And so, as to the husband: is he doing it as a matter of public interest? Absolutely not. As to the journalists, there are, of course, arguments that it's a matter of public interest, that what makes it newsworthy is that here we have someone running for Congress in a relationship with someone who has less power, someone working on her campaign.

But the truth of the matter is there are texts that speak to their relationship, and it was seen as totally consensual. "Do we need to publish the photos?" is a question that Mary Anne Franks and I raised in one of our law-review articles on the topic ["Criminalizing Revenge Porn"]. For it to qualify as a matter of public interest, we're talking about the photos, not just the conversation about the relationship. There is at least an argument that the photos aren't newsworthy. Seeing the photos isn't newsworthy.

But again, do I think they're going to bring criminal charges? I doubt it, as to the publishers. I think there's an argument on both sides as to the matter of public interest. But they shouldn't get a free pass. The truth of the matter is that this seems like it was all an effort to make this female Democrat resign, and it worked. She resigned.

Are you worried about seeing it happen more?

Yes, I am. Imagine a deep-fake sex video; it's very hard to distinguish between real and fake. Let's say the technology advances a lot in three months and is again used against a woman running for office, and she drops out even though it's not real. I worry about it in all sorts of ways. It's part of the broader civil-rights story I've been telling for a long time.

You also study deep fakes. I'm curious what your perspective on that is, because I feel like there's a lot of worry about deep fakes as a tool to start World War III with fake news. But that doesn't seem like the most effective use of them, if I'm understanding it correctly.

About a year and a half ago, Bobby Chesney and I wrote a series of articles on deep fakes, in the California Law Review and Foreign Affairs, and I testified before the Senate Intelligence Committee this summer. So we've done public writing, and I've testified about the national-security and privacy implications of deep fakes. Your question is, where should we be most concerned about the rise of deep fakes? As a practical matter, it ties right into my broader research: 98 percent of the deep fakes appearing online are deep-fake sex videos, and 99 percent of deep-fake sex videos involve women, usually female celebrities. Their faces are taken and inserted into porn, basically making you a sexual object in ways you didn't choose. There's nothing wrong with pornography as long as you chose it yourself.

We are seeing it used not just against female celebrities but against average people, subjecting them to humiliation, reputational harm, and emotional harm. A fake sex video of the famous journalist Rana Ayyub, supposedly engaged in a sex act, was basically on half the phones in India because it was spread through WhatsApp groups. She was then beset by a cybermob and basically couldn't leave her home for weeks and weeks. She stopped writing. People threatened to rape and kill her. And it's not just someone like Rana Ayyub, who is provocative in the sense that she wrote about government abuses, human-rights abuses, especially of Muslim minorities in India. It's also the everyday person appearing in a deep-fake sex video, and the harm and the impact are broad. It can be hard to keep a job or get a job if it comes up in a search of your name. It's terrifying, because it takes your sexual identity and exposes it in ways you didn't choose.

But I have to say, Bobby Chesney and I are pretty damn worried about the 2020 election and a well-timed deep fake that could shift and tip an election. What I've seen, and you've seen it too, is politicians basically turning around and saying what's real is fake. We call that the liar's dividend. Like when President Trump said of the Access Hollywood tape, "That was fake." Oh, come on. The idea of the liar's dividend, of having deep fakes work for liars and for criminals, worries me too. It's more evidence for folks who want to convince us that we live in a post-fact environment. It's actually the destruction of democracy.

I think a lot of the 2020 fear around deep fakes is that someone will create one and then it'll just go viral on its own, and everyone will be convinced by the video. I think the real issue is what you talked about: some elite opinion-maker endorses it, and that gives it more mileage than it might've otherwise gotten.

What's interesting is that you don't even really need a deep fake for that to happen. You know, like the Nancy Pelosi video [which slowed down footage and audio of her to make it appear as if she were slurring her words]. And really, who's the bug? It's us. It's human beings. We are so easily hacked it's scary. We think, it's a video. We have this visceral reaction. And if it confirms our beliefs, we spread it on. If it's salacious, forget about it. We really help it go viral. And so even a really poor production like the slowed-down Nancy Pelosi video got a lot of mileage. And if you time it well enough, and someone prominent endorses it, as you were saying, then it has great potential to shift how people behave. You can turn an election, especially if it's timed right. It's all in the timing, I think, for these deep fakes.

An October Surprise deep fake.

Right. It's not that it's my greatest concern, but I worry because we don't have a test case. And it doesn't have to be an election; it could be the night before an initial public offering, to change the direction of the market or to change what a CEO does on a really consequential decision. The stakes are significant societally as well as, of course, as we've seen, for individuals.

Is there anything that the average person can do to figure out what's a deep fake, or is it just sort of hopeless?

I don't think it's hopeless. A journalist with time and energy can debunk a deep fake, not based on the technology but by asking questions. "Well, where were you?" And are there facts that suggest the person wasn't remotely near where ... you know what I'm saying?

For individuals, I think we do need some real literacy and education. Before you spread information online, you should say to yourself, "This is pretty provocative. Should I pass this on? This might not be true." We can talk to ourselves, because we're the problem. We're the ones making these things go viral. I hope we human beings think before we act, before we start spreading things. Platforms amplify; they supercharge our worst instincts. But we don't have to let them. Nothing is inevitable in this space. Individuals are responsible for what goes on behind their computers.

Are there generational differences in internet usage? I at least want to believe that older people are more gullible than I am, but maybe that's not actually true. I'm wondering if you've seen any appreciable differences there.

I don't know if you've read Yochai Benkler's new book [Network Propaganda], about basically the spread of misinformation and disinformation through influence campaigns and propaganda, but where bad information thrives is actually on Fox News. I mean, I'm not talking about deep-fake sex videos. I'm talking about obvious falsehoods throughout the world.

It's more often people in the older-than-55 set. There have been studies about how an older, more right-wing group of folks is spreading falsehoods at a much higher rate than they're spreading accurate stories. And it's human beings, not bots, doing it. Yochai's new book talks about how that disinformation is spread most often in right-wing echo chambers. And it really starts with Fox. It's not coming from Facebook and these clickbait sites. It's coming from Fox News, and then most often spread on social-media platforms by people who are, as you said, not your generation, Brian. There is an age differential. You're not wrong about that.

Do you think there's any way of breaking that at all?

I have a new piece in the Michigan Law Review about cybermobs, disinformation, and death videos. Law, and social norms, and education, and companies: they're all part of the same conversation. None can do it all. But the fact of the matter is that people over 55 [are] spreading disinformation. We need to educate them, and it's not like we have schools for it.

I worry that the people most in need of educating either are resistant or aren't interested, and there are no natural hubs for them, because it's younger people who are taught digital literacy in school and then at university. I have two college students, so I know they're pretty skeptical readers.

Yeah. If I were an older person, I would not want to be told to go back to school, either.

I do a bunch of work with the ADL [Anti-Defamation League], and they are constantly discussing literacy that's not just for kids but also for people inside companies. There's a way to get education into the mix without it being so obvious. And companies and platforms are our educators as well, or they could be if they cared. Convincing them to care is tough. Why? Because their business model is clicks. The fact that we do not have strong privacy rules, that we don't have strong limits on behavioral advertising or any disincentives in this space, is a huge part of the problem. I always think that in all these conversations about disinformation, we have to have a conversation about privacy regulation. If we don't get at the business model, we'll never end these problems.

I went to a Facebook event at least a year ago, I forget exactly when, where they were talking about what they're doing to fight fake news. One of the things was a pop-up in the News Feed about becoming more media literate. I asked them, "How many people are actually clicking that?"

Sure. What'd they say?

They said, "We don't have the numbers, but if you follow up, we'll get back to you."

Oh, please. They so have them.

I'm guessing it's in the single or low double digits. At some point, the idea of giving people the resources and expecting them to use them on their own seems a little naïve to me.

I'm not suggesting that as a cure-all, but it has to at least be part of the discussion. Their financial incentive is to keep us clicking and sharing. And if the salacious and the outrageous do that, and there are no barriers in the law, then they're going to keep doing it. It's just rational for them to escalate, from their own financial perspective and from their shareholders' perspective. If we had privacy rules that restricted what they could collect and how they could use, share, and sell it, that would change the landscape of incentives. My view of all these issues at the content layer is informed by my view of the Wild West of data collection, use, and sharing practices, and the conspicuous lack of regulation.

Once you get platforms to not focus on maximizing engagement, that just sort of changes the calculus of how these things work overall.

Right, yeah. Totally.

How do you feel about Section 230 [the law that shields platforms from liability for the behavior of their users]?

I'm a longtime student, and critic, of 230, but with caution. Ben Wittes and I have proposed keeping the immunity but conditioning it on reasonable content-moderation practices. Right now, there's a provision of Section 230 that covers under-filtering, or failing to remove. It's really a free pass, even though the statute was designed to encourage what it called good Samaritans cleaning up the internet. But the statute itself doesn't condition the under-filtering immunity on being a good Samaritan. I think we should go back to the original purpose and reintroduce that into the wording and say, "Sure, you get the immunity so long as, writ large, you engage in reasonable content-moderation practices." There are now something like four sites whose business model is basically deep-fake sex videos: people upload deep-fake sex videos, and the sites make the advertising money. Should they enjoy immunity from liability? Absolutely not.

Think about 8chan, which had a strong sense that there were people on that site, totally unmoderated, who were inciting violence against vulnerable groups. Very specifically, calling on other people to kill Hispanics and Jews and whomever, and leading to violence. The idea that you can say, "I don't do anything. Whatever happens is too bad, so sad," is absurd.

My last question for you is, you recently won a MacArthur "genius" grant. What are you going to do with it?

I'm going to write my next book. The timing could not have been better. I was in the midst of thinking, I want to write a book about sexual privacy, and my plan for the summer and for next year was to write in earnest. I really just needed the time to do it. I started a new job at Boston University Law School this semester, and I've been teaching a lot. I give talks every week, so I'm on the road or I'm teaching, and I haven't really had much time for writing. So what's great is that the grant frees me up to write next year. I can take some time off from teaching, still retain my job, but buy myself out, so to speak. So I'm so excited. I'm so honored I don't even know what to say, really. This sort of thing happens and you're like, "Are you kidding? What?" It's amazing.

Yeah, it's cool. I would love to win one of those.

Yes. I feel like if I can, anyone can. It's really cool.

This interview has been edited and condensed for clarity.




