As reported by the Associated Press,
U.S. regulators want a federal judge to break up Google to prevent the company from continuing to squash competition through its dominant search engine after a court found it had maintained an abusive monopoly over the past decade.
The proposed breakup, floated in a 23-page document filed late Wednesday by the U.S. Department of Justice, calls for sweeping punishments that would include a sale of Google’s industry-leading Chrome web browser and restrictions to prevent Android from favoring its own search engine.
A sale of Chrome “will permanently stop Google’s control of this critical search access point and allow rival search engines the ability to access the browser that for many users is a gateway to the internet,” Justice Department lawyers argued in their filing.
Although regulators stopped short of demanding Google sell Android too, they asserted the judge should make it clear the company could still be required to divest its smartphone operating system if its oversight committee continues to see evidence of misconduct.
The broad scope of the recommended penalties underscores how severely regulators operating under President Joe Biden’s administration believe Google should be punished following an August ruling by U.S. District Judge Amit Mehta that branded the company as a monopolist.
The Justice Department decision-makers who will inherit the case after President-elect Donald Trump takes office next year might not be as strident. The Washington, D.C., court hearings on Google’s punishment are scheduled to begin in April, and Mehta is aiming to issue his final decision before Labor Day.
If Mehta embraces the government’s recommendations, Google would be forced to sell its 16-year-old Chrome browser within six months of the final ruling. But the company certainly would appeal any punishment, potentially prolonging a legal tussle that has dragged on for more than four years.
Besides seeking a Chrome spinoff and a corralling of the Android software, the Justice Department wants the judge to ban Google from forging multibillion-dollar deals to lock in its dominant search engine as the default option on Apple’s iPhone and other devices. It would also ban Google from favoring its own services, such as YouTube or its recently launched artificial intelligence platform, Gemini.
Regulators also want Google to license the search index data it collects from people’s queries to its rivals, giving them a better chance at competing with the tech giant. On the commercial side of its search engine, Google would be required to provide more transparency into how it sets the prices that advertisers pay to be listed near the top of some targeted search results.
Kent Walker, Google’s chief legal officer, lashed out at the Justice Department for pursuing “a radical interventionist agenda that would harm Americans and America’s global technology leadership.” In a blog post, Walker warned the “overly broad proposal” would threaten personal privacy while undermining Google’s early leadership in artificial intelligence, “perhaps the most important innovation of our time.”
Wary of Google’s increasing use of artificial intelligence in its search results, regulators also advised Mehta to ensure websites will be able to shield their content from Google’s AI training techniques.
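Websites can already exercise some of this control through the robots.txt crawler-exclusion protocol: Google documents a standalone “Google-Extended” token that publishers can disallow to keep their pages out of Gemini model training while remaining crawlable for Search. A minimal sketch of what such an opt-out looks like (the Justice Department’s filing does not prescribe a specific mechanism):

    # robots.txt at the site root
    # Block Google's AI-training crawler token...
    User-agent: Google-Extended
    Disallow: /

    # ...while still allowing normal Search crawling
    User-agent: Googlebot
    Allow: /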
The measures, if they are ordered, threaten to upend a business expected to generate more than $300 billion in revenue this year.
As reported by the Associated Press,
A social media ban for children under 16 passed the Australian Parliament on Friday in a world-first law.
The law will make platforms including TikTok, Facebook, Snapchat, Reddit, X and Instagram liable for fines of up to 50 million Australian dollars ($33 million) for systemic failures to prevent children younger than 16 from holding accounts.
The Senate passed the bill on Thursday by 34 votes to 19. The House of Representatives on Wednesday overwhelmingly approved the legislation by 102 votes to 13.
The House on Friday endorsed opposition amendments made in the Senate, making the bill law.
Prime Minister Anthony Albanese said the law supported parents concerned by online harms to their children.
“Platforms now have a social responsibility to ensure the safety of our kids is a priority for them,” Albanese told reporters.
The platforms have one year to work out how they could implement the ban before penalties are enforced.
Meta Platforms, which owns Facebook and Instagram, said the legislation had been “rushed.”
Digital Industry Group Inc., an advocate for the platforms in Australia, said questions remain about the law’s impact on children, its technical foundations and scope.
“The social media ban legislation has been released and passed within a week and, as a result, no one can confidently explain how it will work in practice – the community and platforms are in the dark about what exactly is required of them,” DIGI managing director Sunita Bose said.
The amendments passed on Friday bolster privacy protections. Platforms would not be allowed to compel users to provide government-issued identity documents including passports or driver’s licenses, nor could they demand digital identification through a government system.
As reported by Ars Technica,
Makers of smart devices that fail to disclose how long they will support their products with software updates may be violating the Magnuson-Moss Warranty Act, the Federal Trade Commission (FTC) warned this week.
The FTC released its statement after examining 184 smart products across 64 product categories, including soundbars, video doorbells, breast pumps, smartphones, home appliances, and garage door opener controllers. The majority of the devices researched, 163 to be precise, “did not disclose the connected device support duration or end date” on their product webpage, per the FTC’s report [PDF]. By contrast, only 11.4 percent of the devices examined (21 of the 184) shared a software support duration or end date on their product page.
In addition to manufacturers often neglecting to commit to software support for a specified amount of time, it seems that even when they do share this information, it is often hard to find.
For example, the FTC reported that some manufacturers made software support dates available, but not on the related product’s webpage. Instead, the information was sometimes buried in spec sheets, support or FAQ pages, or footnotes.
The FTC report added:
… some used ambiguous language that only imply the level of support provided, including phrases like, “lifetime technical support,” “as long as your device is fully operational,” and “continuous software updates,” for example. Notably, staff also had difficulty finding on the product webpages the device’s release date …
At times, the FTC found glaring inconsistencies. For example, one device’s product page said that the device featured “lifetime” support, “but the search result pointing to the manufacturer’s support page indicated that, while other updates may still be active, the security updates for the device had stopped in 2021,” per the FTC.
Those relying on Google’s AI Overviews may also be misled. In one case, AI Overviews pointed to a smart gadget getting “software support and updates for 3–6 months.” But through the link that AI Overviews provided, the FTC found that the three to six months figure that Google scraped actually referred to the device’s battery life. The next day, AI Overviews said that it couldn’t determine the duration of software support or updates for the gadget, the FTC noted.
In its report, the FTC encouraged law enforcement and policymakers to investigate whether vendors properly disclose software support commitments. The agency warned that failing to inform shoppers how long products with warranties will be supported may violate the Magnuson-Moss Warranty Act:
This law requires that written warranties on consumer products costing more than $15 be made available to prospective buyers prior to sale and that the warranties disclose a number of things, including, “a clear description and identification of products, or parts, or characteristics, or components or properties covered by and where necessary for clarification, excluded from the warranty.”
The FTC also noted that vendors could be in violation of the FTC Act if omissions or misrepresentations around software support are likely to mislead shoppers.
The FTC’s research follows a September letter to the agency from 17 groups, including iFixit, Public Interest Research Group, Consumer Reports, and the Electronic Frontier Foundation, imploring the FTC to provide “clear guidance” on “making functions of a device reliant on embedded software that ties the device back to a manufacturer’s servers,” also known as software tethering.
As reported by Ars Technica,
OpenAI keeps deleting data that could allegedly prove the AI company violated copyright laws by training ChatGPT on authors’ works. The deletions appear to be largely unintentional, but the sloppy practice is seemingly dragging out early court battles that could determine whether AI training is fair use.
Most recently, The New York Times accused OpenAI of unintentionally erasing programs and search results that the newspaper believed could be used as evidence of copyright abuse.
The NYT apparently spent more than 150 hours extracting training data, while following a model inspection protocol that OpenAI set up precisely to avoid conducting potentially damning searches of its own database. This process began in October, but by mid-November, the NYT discovered that some of the data gathered had been erased due to what OpenAI called a “glitch.”
Looking to update the court about potential delays in discovery, the NYT asked OpenAI to collaborate on a joint filing admitting the deletion occurred. But OpenAI declined, instead filing a separate response calling the newspaper’s accusation that evidence was deleted “exaggerated” and blaming the NYT for the technical problem that triggered the data deleting.
OpenAI denied deleting “any evidence,” instead admitting only that file-system information was “inadvertently removed” after the NYT requested a change that resulted in “self-inflicted wounds.” According to OpenAI, the tech problem emerged because NYT was hoping to speed up its searches and requested a change to the model inspection set-up that OpenAI warned “would yield no speed improvements and might even hinder performance.”
The AI company accused the NYT of negligence during discovery, “repeatedly running flawed code” while conducting searches of URLs and phrases from various newspaper articles and failing to back up its data. Allegedly, the change the NYT requested “resulted in removing the folder structure and some file names on one hard drive,” which “was supposed to be used as a temporary cache for storing OpenAI data, but evidently was also used by Plaintiffs to save some of their search results (apparently without any backups).”
Once OpenAI figured out what happened, the data was restored, the company said. But the NYT alleged that the only data OpenAI could recover did “not include the original folder structure and original file names” and therefore “is unreliable and cannot be used to determine where the News Plaintiffs’ copied articles were used to build Defendants’ models.”
In response, OpenAI suggested that the NYT could simply take a few days and re-run the searches, insisting, “contrary to Plaintiffs’ insinuations, there is no reason to think that the contents of any files were lost.” But the NYT does not seem happy about having to retread any part of model inspection, continually frustrated by OpenAI’s expectation that plaintiffs must come up with search terms when OpenAI understands its models best.
OpenAI claimed that it has consulted on search terms and been “forced to pour enormous resources” into supporting the NYT’s model inspection efforts while continuing to avoid saying how much it’s costing. Previously, the NYT accused OpenAI of seeking to profit off these searches, attempting to charge retail prices instead of being transparent about actual costs.
Now, OpenAI appears to be more willing to conduct searches on behalf of NYT that it previously sought to avoid. In its filing, OpenAI asked the court to order news plaintiffs to “collaborate with OpenAI to develop a plan for reasonable, targeted searches to be executed either by Plaintiffs or OpenAI.”
How that might proceed will be discussed at a hearing on December 3. OpenAI said it would work to prevent similar technical problems in the future and was “committed to resolving these issues efficiently and equitably.”