
Tech’s Legal Shield Appears Likely to Survive as Congress Focuses on Details

WASHINGTON – Former President Donald J. Trump repeatedly called for the repeal of the law that protects technology companies from legal responsibility for what people post. As a candidate, President Biden said the law should be “repealed.”

But lawmakers looking to weaken the law have begun to coalesce around a different approach: they are increasingly focused on stripping protections from certain types of content rather than repealing the law outright or making sweeping changes.

That still leaves them with a question with potentially far-reaching consequences: What, exactly, should lawmakers cut?

A bill introduced last month would strip the protections from content that platforms are paid to distribute, such as ads. Another proposal, expected to be reintroduced from the last session of Congress, would allow people to sue when a platform amplifies content linked to terrorism. And another, which is likely to return as well, would take away the immunity only if a platform failed to obey a court order to remove content.

Even these more modest proposals for the legal shield, Section 230 of the Communications Decency Act, could ripple across the internet. The changes could give companies like Facebook and YouTube an incentive to take down certain types of content while leaving up others. Critics of the ideas also warn of unintended consequences, citing a 2018 law that stripped immunity from platforms that knowingly facilitated sex trafficking and, they say, made some sex work more dangerous.

“I think what we’re trying to say is, ‘How can you draw some exemptions from 230 that are so narrow that you don’t infringe on free speech?’” said Senator Mark Warner of Virginia, who introduced a bill narrowing the law with a fellow Democrat, Senator Mazie K. Hirono of Hawaii.

The call for change gained momentum after the January 6 attack on the Capitol, carried out in part by people associated with QAnon and other conspiracy theories that thrive on social media. Critics say the shield has allowed tech giants to ignore criminal activity, hate speech and extremist content posted on their services.

The law protects websites from many lawsuits over content posted by their users or over how websites moderate that content. Passed in 1996, it allowed large online services to grow without taking on new legal liability each time they added another of their billions of users.

Big tech companies have said they are open to helping shape the legislative changes they see as increasingly likely. Facebook and Google, the owner of YouTube, have signaled that they are ready to work with lawmakers to change the law, and some smaller companies recently formed a lobbying group to shape any changes.

Some small steps – like requiring content to be removed after a court order – might earn the support of tech companies. But others, like stripping immunity from all ads, probably would not.

Many lawmakers say that making targeted changes would allow them to tackle the most damaging instances of disinformation or hate speech online without upending the entire internet economy, burdening small websites, or violating free speech rights.

“There is no single piece of legislation that deals with everything,” said Representative Anna G. Eshoo, a California Democrat who has proposed carving certain content out of the law. “When someone says repeal Section 230, the first thing it tells me is that they don’t really understand it.”

But many other questions remain unresolved. Lawmakers must decide how close they want to get to the platforms’ core business models, rather than simply encouraging better moderation. One way to cut to the heart of the matter would be to limit the shield when a post is amplified by proprietary algorithms that rank, sort, and recommend content to users, as Ms. Eshoo’s bill would do in some cases. Or, as Mr. Warner’s bill does, lawmakers could simply say that Section 230 should not apply to advertisements at all.

And they must grapple with whether the changes should apply only to the largest platforms, such as Facebook and YouTube, or take effect across the internet. Smaller businesses have argued that they should be exempt from many of the changes.

“I think we want to be as humble as possible,” said Hany Farid, a professor at the University of California at Berkeley who studies misinformation. “Give it a year or two, see how it goes, and make adjustments.”

Lawmakers’ focus on targeted changes to the law has precedent. In 2018, Congress passed a law removing Section 230 protections from platforms that knowingly facilitated sex trafficking.

But Mr. Trump focused on repealing the law outright. In his final weeks in the White House, he urged Republicans in Congress to end the protections as part of an unrelated defense funding bill. His supporters and allies may not be satisfied with the targeted changes proposed by Democrats, who now control both the Senate and the House.

The White House did not immediately comment on the matter on Monday. But a December piece co-written by Bruce Reed, Mr. Biden’s deputy chief of staff, said that “platforms should be held accountable for any content that generates revenue.” It also said that while carving out certain types of content was a start, lawmakers would do well to grant platforms full liability protection only on the condition that they properly moderate content.

Proponents of Section 230 say that even small changes could hurt vulnerable people. They point to the 2018 anti-trafficking law, which sex workers say made it harder to screen potential clients online after some of the services they relied on shut down for fear of new legal liability. Instead, sex workers say, they must now risk meeting clients in person without first using the internet to gauge their intentions.

Senator Ron Wyden, the Oregon Democrat who co-wrote Section 230 while serving in the House of Representatives, said measures aimed at disinformation on the right could be turned against other political groups in the future.

“Remember, after September 11th, there were all these knee-jerk reactions to those terrible tragedies,” he said. “I think it would be a huge mistake to use the sickening, disgusting attacks on the Capitol as a vehicle for suppressing free speech.”

Industry officials say some changes to the law could be extremely difficult to put into practice.

“I appreciate that some policymakers are trying to be more specific about what they dislike online,” said Kate Tummarello, executive director of Engine, an advocacy group for small businesses. “But there is no universe in which platforms, especially small platforms, automatically know when and where illegal speech is happening on their site.”

The issue could take center stage when the chief executives of Google, Facebook and Twitter testify later this month before the House Energy and Commerce Committee, which is examining the future of the law.

“I think it’s going to be a big issue,” said Representative Cathy McMorris Rodgers of Washington, the committee’s top Republican. “Section 230 is really what’s driving it.”
