Mark Zuckerberg's Facebook keeps trying to prove that the social media giant can change with the times, even as a new scandal hits it every week. After saying time and again that it will fight fake news, the company now insists it has a solution.

Facebook said Wednesday it is rolling out a wide range of updates aimed at combating the spread of false and harmful information on the social media site — stepping up the company's fight against misinformation and hate speech as it faces growing outside pressure.

The updates will limit the visibility of links that are significantly more prominent on Facebook than across the web as a whole — a pattern suggesting they may be clickbait or misleading. The company is also expanding its fact-checking program with outside expert sources to vet videos and other material posted on Facebook.

Facebook groups — the online communities that many point to as lightning rods for the spread of fake information — will also be more closely monitored. If they are found to be spreading misinformation, their visibility in users’ news feeds will be limited.

Lawmakers and human rights groups have been critical of the company for the spread of extremism and misinformation on its flagship site and on Instagram.

During a hearing Tuesday on the spread of white nationalism, members of Congress questioned a company representative about how Facebook prevents violent material from being uploaded and shared on the site.

In a separate Senate subcommittee hearing Wednesday, the company was asked about allegations that social media companies are biased against conservatives.

The dual hearings illustrate the tricky line that Facebook, and other social media sites such as Twitter and YouTube, are walking as they work to weed out problematic and harmful materials while also avoiding what could be construed as censorship.

CEO Mark Zuckerberg’s latest vision for Facebook, with an emphasis on private, encrypted messaging, is sure to pose a challenge for the company when it comes to removing problematic material.

Guy Rosen, Facebook's vice president of integrity, acknowledged the challenge in a meeting with reporters at Facebook's Menlo Park, California, headquarters Wednesday. He said striking a balance between protecting people's privacy and public safety is "something societies have been grappling with for centuries."

Rosen said the company is focused on making sure it does the best job possible “as Facebook evolves toward private communications.” But he offered no specifics.

“This is something we are going to be working on, working with experts outside the company,” he said, adding that the aim is “to make sure we make really informed decisions as we go into this process.”


Facebook already has teams in place to monitor the site for material that breaks the company’s policies against information that is overtly sexual, incites violence or is hate speech.

Karen Courington, who works on product-support operations at Facebook, said half of the 30,000 workers in the company’s “safety and security” teams are focused on content review. She said those content moderators are a mix of Facebook employees and contractors, but she declined to give a percentage breakdown.

Facebook has received criticism for the environment the content reviewers work in. They are exposed to posts, photos and videos that represent the worst of humanity and have to decide what to take down and what to leave up in minutes, if not seconds.

Courington said these workers receive 80 hours of training before they start their jobs and “additional support,” including psychological resources. She also said they are paid above the “industry standard” for these jobs but did not give numbers.

It's also not clear whether the workers have options to move into other jobs if the content-review work proves psychologically difficult or damaging.


Democrats Delve Into AI Face Recognition

Congress is starting to show interest in prying open the “black box” of tech companies’ artificial intelligence with oversight that parallels how the federal government checks under car hoods and audits banks.

One proposal introduced Wednesday and co-sponsored by a Democratic presidential candidate, Sen. Cory Booker, would require big companies to test the “algorithmic accountability” of their high-risk AI systems, such as technology that detects faces or makes important decisions based on your most sensitive personal data.

"Computers are increasingly involved in so many of the key decisions Americans make with respect to their daily lives — whether somebody can buy a home, get a job or even go to jail," Sen. Ron Wyden said in an interview. The Oregon Democrat is co-sponsoring the bill.

“When the companies really go into this, they’re going to be looking for bias in their systems,” Wyden said. “I think they’re going to be finding a lot.”

The Democrats' proposal is the first of its kind and may face an uphill battle in the Republican-led Senate. But it reflects growing, bipartisan scrutiny of the largely unregulated data economy — everything from social media feeds and online data brokerages to financial algorithms and self-driving software — that is increasingly shaping daily life. A bipartisan Senate bill introduced last month would require companies to notify people before using facial recognition software on them, while also requiring third-party testing to check for bias problems.


Academic studies and real-life examples have unearthed facial recognition systems that misidentify darker-skinned women, computerized lending tools that charge higher interest rates to Latino and black borrowers, and job recruitment tools that favor men in industries where they already dominate.

“There’s this myth that algorithms are these neutral, objective things,” said Aaron Rieke, managing director at advocacy group Upturn. “Machine learning picks up patterns in society — who does what, who buys what, or who has what job. Those are patterns shaped by issues we’ve been struggling with for decades.”

President Donald Trump’s administration is also taking notice and has made the development of “safe and trustworthy” algorithms a major objective of the White House’s new AI initiative. But it would do so mostly by strengthening an existing industry-driven process of creating technological standards.

“There’s a need for greater transparency and data comparability,” and for detecting and reducing bias in these systems, said Commerce Undersecretary Walter Copan, who directs the National Institute of Standards and Technology. “Consumers are essentially flying blind.”

Dozens of facial recognition developers, including brand-name companies like Microsoft, last year submitted their proprietary algorithms to Copan’s agency so that they could be evaluated and compared against each other. The results showed significant gains in accuracy over previous years.

But Wyden said the voluntary standards are not enough.

“Self-regulation clearly has failed here,” he said.

In a bolder move from the Trump administration, the federal Department of Housing and Urban Development has charged Facebook with allowing landlords and real estate brokers to systematically exclude groups such as non-Christians, immigrants and minorities from seeing ads for houses and apartments.

Booker, in a statement about his bill, said that while HUD’s Facebook action is an important step, it’s necessary to dig deeper to address the “pernicious ways” discrimination operates on tech platforms, sometimes unintentionally.

Booker said biased algorithms are causing the same kind of discriminatory real estate practices that sought to steer his New Jersey parents and other black couples away from certain U.S. neighborhoods in the late 1960s. This time, he said, it’s harder to detect and fight.

The bill he and Wyden have introduced would enable the Federal Trade Commission to set and enforce new rules for companies to check for accuracy, bias and potential privacy or security concerns in their automated systems, and correct them if problems are found. It exempts smaller companies that make less than $50 million a year, unless they are data brokers with information on at least 1 million consumers.
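The bill's coverage rule boils down to a simple threshold test. As an illustrative sketch only — the function name and parameters are hypothetical, not taken from the bill's text — it works like this:

```python
def is_covered(annual_revenue: int, is_data_broker: bool,
               consumers_on_file: int) -> bool:
    """Hypothetical sketch of the coverage test as reported:
    companies earning less than $50 million a year are exempt,
    unless they are data brokers holding information on at
    least 1 million consumers."""
    if annual_revenue >= 50_000_000:
        return True
    return is_data_broker and consumers_on_file >= 1_000_000

# A $10 million data broker with 2 million consumer records
# would still fall under the bill's requirements.
print(is_covered(10_000_000, True, 2_000_000))  # True
```

In other words, the revenue exemption is not absolute: a small company can still be covered if its business is brokering data at scale.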

New York Democratic Rep. Yvette Clarke, who is introducing a companion bill in the House, said the goal is to fix problems, not just to assess them. She said it makes sense to give the FTC authority to regularly monitor how these systems are performing because it “has the finger on the pulse of what’s happening to consumers.”
