Security Flaws In Carmaker's Web Portal Let a Hacker Remotely Unlock Cars
Three years ago security researcher Eaton Zveare discovered a vulnerability in Jacuzzi's SmartTub interface allowing access to the personal data of every hot tub owner.
Now Zveare says flaws in an unnamed carmaker's dealership portal "exposed the private information and vehicle data of its customers," reports TechCrunch, "and could have allowed hackers to remotely break into any of its customers' vehicles."
Zveare, who works as a security researcher at software delivery company Harness, told TechCrunch the flaw he discovered allowed the creation of a ["national"] admin account that granted "unfettered access" to the unnamed carmaker's centralized web portal. With this access, a malicious hacker could have viewed the personal and financial data of the carmaker's customers, tracked vehicles, and enrolled customers in features that allow owners — or the hackers — to control some of their cars' functions from anywhere.
Zveare said he doesn't plan on naming the vendor, but said it was a widely known automaker with several popular sub-brands.
In an interview with TechCrunch ahead of his talk at the Def Con security conference in Las Vegas on Sunday, Zveare said the bugs put a spotlight on the security of these dealership systems, which grant their employees and associates broad access to customer and vehicle information... The flaws were problematic because the buggy code loaded in the user's browser when opening the portal's login page, allowing the user — in this case, Zveare — to modify the code to bypass the login security checks. Zveare told TechCrunch that the carmaker found no evidence of past exploitation, suggesting he was the first to find it and report it to the carmaker.
When logged in, the account granted access to more than 1,000 of the carmaker's dealers across the United States, he told TechCrunch... With access to the portal, Zveare said it was also possible to pair any vehicle with a mobile account, which allows customers to remotely control some of their cars' functions from an app, such as unlocking their cars... "The takeaway is that only two simple API vulnerabilities blasted the doors open, and it's always related to authentication," said Zveare. "If you're going to get those wrong, then everything just falls down."
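The flaw class Zveare describes, in which security checks run in code the browser controls rather than on the server, can be sketched in a few lines. This is a hypothetical illustration with invented names, not the carmaker's actual code:

```python
# BROKEN: the access decision depends on a value the client supplies.
# An attacker can edit the page's JavaScript (or the request itself)
# and simply claim to be an admin.
def client_side_login(user, is_admin_flag_from_browser):
    return is_admin_flag_from_browser

# CORRECT: the server decides from its own records; nothing the
# browser sends can change the outcome.
VALID_ADMINS = {"alice"}

def server_side_login(user):
    return user in VALID_ADMINS

# A tampering attacker defeats the client-side check...
assert client_side_login("mallory", True) is True
# ...but not the server-side one.
assert server_side_login("mallory") is False
```

The general rule this sketch illustrates: any check that ships to the browser is advisory at best, and authentication and authorization must be enforced server-side on every API call.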
Zveare told TechCrunch the portals even included "telematics systems that allowed the real-time location tracking of rental or courtesy cars..."

Zveare said the bugs took about a week to fix in February 2025, soon after his disclosure to the carmaker.
Thanks to long-time Slashdot reader schwit1 for sharing the article.
Read more of this story at Slashdot.
In Barcelona, Certain Buses Run On Biomethane Produced From Human Waste
From the French newspaper Le Monde:
Odorless, quiet, sustainable. On the last day of July, passengers boarded Barcelona's V3 bus line with no idea where its fuel came from. Written in large letters on the bus facade, just below its name "Nimbus," a sign clearly stated: "This bus runs on biomethane produced from eco-factory sludge." Still, the explanation was likely too vague for most to grasp its full meaning. The moist matter from wastewater treated at the Baix Llobregat treatment plant was used to produce the biomethane. In other words: the human waste of more than 1.5 million residents of the Catalan city.
Read more of this story at Slashdot.
Former Intel Engineer Sentenced for Stealing Trade Secrets for Microsoft
After leaving a nearly 10-year position as a product marketing engineer at Intel, Varun Gupta was charged with possessing trade secrets. He was facing a maximum sentence of 10 years in prison, a $250,000 fine and three years of supervised release, according to Oregon's U.S. Attorney's Office.
Portland's KGW reports:
While still employed at Intel, Varun Gupta downloaded about 4,000 files, including trade secrets and proprietary materials, from his work computer to personal portable hard drives, according to the U.S. Attorney's Office for the District of Oregon. While working for Microsoft between February and July 2020, Gupta accessed and used that information during ongoing negotiations with Intel over chip purchases, according to a sentencing memo. The trade secrets included a PowerPoint presentation that referenced Intel's pricing strategy with another major customer, the sentencing memo says.
Intel raised concerns in 2020, and Microsoft and Intel launched a joint investigation, the sentencing memo says. Intel filed a civil lawsuit in February 2021 that resulted in Gupta being ordered to pay $40,000.
Tom's Hardware summarizes the trial:
Oregon Live reports that the prosecutor, Assistant U.S. Attorney William Narus, sought an eight-month prison term for Gupta, citing his purposeful and repeated use of the cache of secret documents.
For the defense, attorney David Angeli described Gupta's actions as a "serious error in judgment." Mitigating circumstances, such as Gupta's permanent loss of high-level employment opportunities in the industry, and that he had already paid $40,000 to settle a civil suit brought by Intel, were highlighted.
U.S. District Judge Amy Baggio concluded the court hearing by delivering a balance between the above adversarial positions. Baggio decided that Gupta should face a two-year probationary sentence [and pay a $34,472 fine — before heading back to France]... The ex-tech exec and his family have started afresh in La Belle France, with eyes on a completely new career in the wine industry. According to the report, Gupta is now studying for a qualification in vineyard management, while aiming to work as a technical director in the business.
Read more of this story at Slashdot.
Phishing Training Is Pretty Pointless, Researchers Find
"Phishing training for employees as currently practiced is essentially useless," writes SC World, citing the presentation of two researchers at the Black Hat security conference:
In a scientific study involving thousands of test subjects, eight months and four different kinds of phishing training, the average improvement in the rate at which subjects fell for phishing scams was a whopping 1.7%. "Is all of this focus on training worth the outcome?" asked researcher Ariana Mirian, a senior security researcher at Censys and recently a Ph.D. student at U.C. San Diego, where the study was conducted. "Training barely works..."
[Research partner Christian Dameff, co-director of the U.C. San Diego Center for Healthcare Cybersecurity] and Mirian wanted scientifically rigorous, real-world results. (You can read their academic paper here.) They enrolled more than 19,000 employees of the UCSD Health system and randomly split them into five groups, each member of which would see something different when they failed a phishing test randomly sent once a month to their workplace email accounts... Over the eight months of testing, however, there was little difference in improvement among the four groups that received different kinds of training. Those groups did improve a bit over the control group's performance — by the aforementioned 1.7%...
[A]bout 30% of users clicked on a link promising information about a change in the organization's vacation policy. Almost as many fell for one about a change in workplace dress code... Another lesson was that given enough time, almost everyone falls for a phishing email. Over the eight months of the experiment, just over 50% failed at least once.
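The "just over 50% failed at least once" figure is roughly what compounding predicts. As a back-of-the-envelope check (our assumption, not the paper's model), suppose each of the eight monthly tests has an independent per-month failure probability p; then the chance of failing at least once is 1 - (1 - p)**8:

```python
# Cumulative probability of at least one phishing failure over
# several independent monthly tests.
def prob_at_least_one_failure(p_monthly, rounds=8):
    return 1 - (1 - p_monthly) ** rounds

# Even a modest ~8.3% monthly click rate compounds to about 50%
# over eight months:
print(f"{prob_at_least_one_failure(0.083):.1%}")
```

This is why "given enough time, almost everyone falls for a phishing email": even small per-email click rates accumulate quickly across repeated exposures.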
Thanks to Slashdot reader spatwei for sharing the article.
Read more of this story at Slashdot.
America's Labor Unions are Backing State Regulations for AI Use in Workplaces
"As employers and tech companies rush to deploy AI software into workplaces to improve efficiency, labor unions are stepping up work with state lawmakers across the nation to place guardrails on its use..." reports the Washington Post.
"Union leaders say they must intervene to protect workers from the potential for AI to cause massive job displacement or infringe on employment rights."
In Massachusetts, the Teamsters labor union is backing a proposed state law that would require autonomous vehicles to have a human safety operator who can intervene during the ride, effectively forbidding truly driverless rides. Oregon lawmakers recently passed a bill supported by the Oregon Nurses Association that prohibits AI from using the title "nurse" or any associated abbreviations. The American Federation of Labor and Congress of Industrial Organizations, a federation of 63 national and international labor unions, launched a national task force last month to work with state lawmakers on more laws that regulate automation and AI affecting workers... The AFL-CIO task force plans to help unions take on problematic use of AI in collective bargaining and contracts and in coming months to develop a slate of model legislation available to state leaders, modeled on recently passed and newly proposed legislation in places including California and Massachusetts.
The president of the California Federation of Labor Unions also supports a proposed state law "that would prevent employers from primarily relying on AI software to automate decisions like terminations or disciplinary actions," according to the article. "Instead, humans would have to review decisions. The law would also prohibit use of tools that predict workers' behaviors, emotional states and personality."
Read more of this story at Slashdot.
