The rapid development and adoption of technology around the world have resulted in an awkward situation that many tech companies are hesitant to face: The same social media sites that 13-year-olds use to network with friends are also being used by terrorist organizations to fuel their growth and expand their reach around the globe.
On a recent episode of All Things Legal, Craig Ashton and Tim Hodson sat down to discuss a lawsuit filed against the largest Internet companies on the planet: Google, Facebook, and Twitter. In his lawsuit, a California father alleges that the companies aided the efforts of ISIS and contributed to the death of his daughter in the November 2015 terrorist attacks in Paris.
The heart of the debate is this: Should corporate liability laws be changed to hold social media companies responsible for the words and actions of their users?
We’ll let Craig kick the discussion off: “We’re talking about a lawsuit brought by Nohemi Gonzalez’s father against YouTube, Facebook, and Google, for essentially… and the quote is: ‘They provided material support to ISIS without which the explosive growth of ISIS over the last few years would not have been possible. The material support has been instrumental to the rise of ISIS and its ability to carry out numerous terrorist attacks, including the November 13, 2015 attacks in Paris, where more than 125 people were killed.’”
Social media companies are currently shielded from liability for content posted by their users.
Craig: “So, the Communications Decency Act is the shield that these companies use in most cases, which basically says, for instance on Yelp, if somebody defames you, Yelp is not responsible for that. So someone can write a review of a restaurant saying, ‘There was a cockroach in my soup, the waiter spit in my face,’ and if that’s untrue and ultimately causes [the restaurant] money damages, you don’t have a cause of action against Yelp, just against the individual who did it.
“The problem is that individuals that defame people on Yelp usually don’t have money, so you can take them to trial, which will cost you hours and hours of time—if you get a lawyer, lots and lots of money—and at the end of the day if you get a judgment, they’ll just file for bankruptcy. So, it’s not a satisfactory way to approach it, in our opinion, because Yelp is making lots and lots of money through advertising, and they’re basically saying, ‘Look, we don’t have any responsibility for what’s being said on here. Unless you can prove that you were defamed, we’re keeping it on there.’ So that’s the same argument that Twitter, Facebook, and Google are using.
“What happened in this particular lawsuit is that a father of an American killed in the Paris attacks sued Twitter, Facebook, and Google. And the Communications Decency Act of 1996 basically protects these groups (and Yelp is also one of them). It says, look, they’re just a bulletin board. So they’re not responsible for content. And that is their defense in this particular case: ‘Hey, if ISIS puts up a video on YouTube to recruit people, we’re not responsible for that content, we’re just giving them a forum.’”
Tim: “Well, they take it down once they’re notified.”
Existing liability laws don’t reflect the effect that technology has had on networking and media.
Craig: “Here’s the thing… the New York Times doesn’t allow [ISIS] to put a front-page editorial about how great ISIS is. It would just never happen.
“You would never see, in the history of our country, back when there was [no] Internet, the New York Times on a daily basis allowing ISIS or Osama bin Laden or any of these groups—Al Qaeda—to have a front-page ad or a front-page story that they’re writing about how great they are, [with the Times] giving them a forum. You’re not going to see that in the Washington Post. You’re not going to see that in the New York Times. It’s not going to happen.
“The thing that really irritates me about this is, our country is super innovative. Our liberal democracy, and the laws that we have, have allowed people to create things. We’ve created the Internet! And we’ve created all this technology that these people who are anathema to the way we look at the world, that behead you because you’re homosexual, will stone you to death if you’re an adulterer… these gangsters are using Facebook and Twitter and YouTube to recruit people with our technology, which they in a million years couldn’t invent because they are not innovative in the least.”
Is it time to make social media companies responsible for the actions of their users?
Craig: “So what I’m saying is that we need to look at this and perhaps create some responsibility for Google and Facebook and YouTube. Because what they’re saying is, ‘Unless you bring it to our attention, we don’t have any responsibility to police what’s going on. If it goes up there, we’re immune from liability.’
“But if you make them responsible, guess what, you’re going to get the metaphorical air bag, or the better A-pillar, because the only reason car manufacturers put all this great safety equipment in is mostly [that] there’s financial responsibility. So we really need to look at that, because [of] the fact that our archenemies that want to kill us, and don’t believe that our beautiful system is worth protecting, are benefiting from our beautiful system and the innovation that is allowed in such a free society.”
Tim: “How do you implement it? I agree with you 100%, all the way. The question I always have with that is, how do you police it? How do you implement it? Especially with these companies… they’re hemorrhaging money. [According to CNN, Twitter has lost $2 billion since 2011.] If you have to police it, then you have to hire an entire cybersecurity staff that’s policing your content, your billions of megabytes of content, 24 hours a day. It’s almost impossible.”
Craig responded that if these companies invested a percentage of their financial resources in policing their content, they would be more than capable of turning the situation around, and that perhaps an act of Congress could provide the necessary legal impetus: “Congress would have to pass a law to say, ‘You’re responsible for any terrorist content; if that is not taken down within 24 hours, there is a massive fine.’ That would be a way to approach that. And [companies might] say, ‘You’re affecting the First Amendment.’ [But the response by the courts could be], ‘Well, those are fighting words.’ And the First Amendment doesn’t extend to that. Those are inciting words.”
Tim continued to walk a careful line, balancing agreement with Craig’s opinions against the technical challenge of enacting the suggested legislation, while expressing admiration for the California father who filed the suit: “I do give the father credit. From what I can tell from the article, he even says, this is not about money. He does not ask for damages. He’s bringing the lawsuit, which they probably all know is going to get dismissed on its face. He’s bringing it to light to start this conversation. So I do give him credit for that. It’s not a money grab.
“I do think it’s a little bit of a stretch for him to say that Google shares profits with ISIS. I think that’s a little bit far of a jump. His argument is definitely being heard, and it is warranted. I agree with everything, it’s just policing that content, it’s… that technology is expanding so fast, even if you get something that [fixes the issue], they can find ways to get around it.”
One of the key challenges for Congress is to create legislation regarding technology its members scarcely understand.
Craig: “I think one of the problems is, I think it was Senator Williams from Alaska [it was Senator Ted Stevens of Alaska], I may be getting it wrong, he thought the Internet was made out of tubes. You know, like vacuum-pressurized tubes where you drop some information in and it pops out somewhere else. And most of the people that are running the Senate… they’re all 60 and 70 and 80 years old. They don’t know what the Internet [does], how it works. They’re not super familiar with it, because they’ve got staff members!
“The point is that we need people that understand how the Internet works, because in 1996, when the Communications Decency Act [was passed], that was a long time ago. That was light-years ago from what we’ve got now.”
Tim: “So much has changed in 20 years. Especially that technology. In two years it’ll be different.”
Twitter has shown a willingness to combat the use of its platform by terrorists and their sympathizers, despite its First Amendment concerns.
Twitter in particular is a company that has struggled to find a balance between its passion for the First Amendment and the necessity of limiting hate speech. A few months ago, an in-depth BuzzFeed article traced the company’s reluctance to address the counterculture of harassment and hatred that the platform has fostered due to the anonymity it affords. (The article about Twitter’s struggle with abuse is worth reading, though be warned: a great deal of adult language is quoted.)
The company has taken steps to inhibit ISIS’s use of the platform. In February, Twitter reported that it had suspended 125,000 accounts used to promote terrorism. Last July, FBI Director James Comey (who is currently making headlines for his role in the presidential election) praised Twitter for being “very cooperative.” Comey noted that the company had increased its cooperation after ISIS, annoyed with the company’s closure of several ISIS-related accounts, posted a warning—directed at Twitter cofounder Jack Dorsey—that stated, “your virtual war on the Internet will cause a real war on you.” Comey added, “They saw some of the darkness I see. They don’t want people using Twitter to engage in criminal activity, especially terrorism.”
The question is whether the voluntary actions of Twitter, Google, Facebook, and other companies will be enough, or whether Congress will need to intervene and push the issue.