The Federal Trade Commission is going to the heart of the matter, ordering Twitter, Facebook, Amazon, TikTok’s parent company and five other social media companies to provide detailed information on how they collect and use consumers’ personal data and how their practices affect children and teens. Regulators know this data is what the social media giants’ business plans revolve around.
FTC Goes To Biz Model Heart
The FTC’s action, announced Monday, goes straight to the heart of the tech industry’s lucrative business model: harvesting data from platform users and making it available to online advertisers so they can pinpoint specific consumers to target.
The agency plans to use the information, due in 45 days, for a comprehensive study.
Regulators and lawmakers are increasingly weaving their concerns over data power and privacy into their investigations of Big Tech companies’ market dominance.
When the FTC and 48 states and districts filed landmark antitrust lawsuits against Facebook last week, accusing it of abusing its market power to crush smaller competitors, they also alleged that the company’s conduct has harmed consumers’ data privacy.
Facebook, the largest social network, gets the bulk of its revenue — which reached $70.7 billion last year — from online ads.
With its new request, the FTC wants to know how social media and video streaming services collect, use and track consumers’ personal and demographic information; how they decide which ads and other content are shown to consumers; whether they apply algorithms or data analytics to personal information; how they measure and promote user engagement; and how their practices affect children and teens.
“Never before has there been an industry capable of surveilling and monetizing so much of our personal lives,” three of the five FTC commissioners said in a statement. They said the planned study “will lift the hood on the social media and video streaming firms to carefully study their engines.”
Twitter said in a statement, “We’re working, as we always do, to ensure the FTC has the information it needs to understand how Twitter operates its services.”
Support has grown in Congress for a national privacy law that could sharply rein in the ability of the biggest tech companies to collect and make money from users’ personal data. Legislation could gain steam in the new Congress next year with support from the Biden administration.
The FTC fined Facebook $5 billion last year for alleged privacy violations and instituted new oversight and restrictions on its business. The fine was the largest the agency had ever levied on a tech company, although it had no visible impact on Facebook’s business.
Also last year, YouTube was fined $170 million — $136 million by the FTC and $34 million by New York state — to settle allegations that it collected children’s personal data without their parents’ consent.
Facebook Allows Political Ads Cashing In On Georgia Election
Facebook said Tuesday it will temporarily pause its ban on political advertising in the U.S. to allow ads for the Georgia runoff elections for the state’s two Senate seats.
The broader political ad ban for the rest of the country still stands.
The social media giant banned new election and political ads six weeks ago, after the polls closed on Election Day. It was an extension of an earlier restriction on new political ads in the week leading up to Nov. 3.
Facebook said in a blog post it will reject political ads not specifically targeted to Georgia.
Early in-person voting began Monday in the Georgia runoff. The two races, in which Democrats the Rev. Raphael Warnock and Jon Ossoff are trying to oust Republican Sens. Kelly Loeffler and David Perdue, respectively, will decide which party controls the U.S. Senate.
Britain, EU Ramp Up Heavy Tech Fines
Big tech companies face hefty fines in the European Union and Britain if they treat rivals unfairly or fail to protect users on their platforms, under proposed regulations unveiled Tuesday by officials in Brussels and London.
The EU outlined the long-awaited, sweeping overhaul of its digital rulebook while the British government released its own plans to step up policing of harmful material online, signaling the next phase of technology regulation in Europe.
Both sets of proposals include specific measures aimed at the biggest tech companies. The EU wants to set new rules for “digital gatekeepers” to prevent them from acting unfairly. It aims to prevent bad behavior rather than just punish past actions, as it has largely done so far.
Big tech companies won’t be allowed, for example, to stop users from uninstalling preinstalled software or apps, nor will they be able to use data from business users to compete against them.
The rules, known as the Digital Markets Act, allow for fines of up to 10% of annual global revenue and, controversially, set out criteria for defining a gatekeeper: a company that, over the past three years, has had annual European turnover of at least 6.5 billion euros ($8 billion) or a market value of 65 billion euros, and that has at least 45 million monthly users or 10,000 yearly business users.
Another part of the EU plan, the Digital Services Act, updates the bloc’s 20-year-old rules on e-commerce by making platforms take more responsibility for their goods and services. That will involve identifying sellers so that rogue traders can be tracked down, being more transparent with users about how algorithms make recommendations, and swiftly taking down illegal content such as hate speech, though in a bid to balance free speech requirements, users will be given the chance to complain. Violations risk fines of up to 6% of annual turnover.
The proposals aim to “make sure that we, as users, as customers, businesses, have access to a wide choice of safe products and services online, just as well as we do in the physical world,” and that European businesses “can freely and fairly compete online,” Margrethe Vestager, the EU’s executive vice president overseeing digital affairs, told a news conference in Brussels.
In Britain, social media and other internet companies similarly face big fines if they don’t remove and limit the spread of harmful material such as child sexual abuse or terrorist content and protect users on their platforms.
Under legislative proposals that the U.K. government plans to launch next year, tech companies that let people post their own material or talk to others online could be fined up to 18 million pounds ($24 million) or 10% of their annual global revenue, whichever is higher, for not complying with the rules.
The proposals, contained in the U.K. government’s Online Safety Bill, will have extra provisions for the biggest social media companies with “high-risk features,” expected to include Facebook, TikTok, Instagram and Twitter.
These companies will face special requirements to assess whether there’s a “reasonably foreseeable risk” that content or activity that they host will cause “significant physical or psychological harm to adults,” such as false information about coronavirus vaccines. They’ll have to clarify what is allowed and how they will handle it.
All companies will have to take extra measures to protect children using their platforms. The new regulations will apply to any company whose online services are accessible in the U.K., and those that don’t comply could be blocked.
The U.K. government is also reserving the right to impose criminal sanctions on senior executives, with powers it could bring into force through additional legislation if companies don’t take the new rules seriously – for example by not responding swiftly to information requests from regulators.
The final version of the EU rules will depend on negotiations with the EU Parliament and the bloc’s 27 member states while the U.K. proposals will be debated in the British Parliament.
Meanwhile, the Irish Data Privacy Commission fined Twitter 450,000 euros for a security breach. The company triggered the investigation when it reported the breach, which affected users of its Android app, in January 2019.
But Twitter didn’t report the breach quickly enough, because of “an unanticipated consequence of staffing between Christmas Day 2018 and New Year’s Day,” the company said.
“We take responsibility for this mistake and remain fully committed to protecting the privacy and data of our customers,” Twitter said.
It’s the first punishment for a big U.S. tech company since the EU’s strict privacy rules, known as the General Data Protection Regulation, took effect in 2018.
Under GDPR, a single regulator takes the lead role in cross-border data privacy cases as part of a “one-stop shop” system. But the system has come under question, with Ireland’s watchdog facing criticism for taking too long to decide on cases. The Twitter decision was also delayed after regulators in other EU member states objected to Ireland’s draft penalty.