Feed aggregator

'We're Not Learning Anything': Stanford GSB Students Sound The Alarm Over Academics

Slashdot - Fri, 2025-07-25 17:21
Stanford Graduate School of Business students have publicly criticized their academic experience, telling Poets&Quants that outdated course content and disengaged faculty leave them unprepared for post-MBA careers. The complaints target one of the world's most selective business programs, which admitted just 6.8% of applicants last fall. Students described required courses that "feel like they were designed in the 2010s" despite operating in an AI age. They cited a distribution requirement offering only 15 electives, some of which overlap while foundational business strategy is omitted. A lottery system means students paying $250,000 in tuition cannot guarantee enrollment in desired classes. Stanford's winter student survey showed satisfaction with class engagement dropped to 2.9 on a five-point scale, the lowest level in two to three years. Students contrasted Stanford's "Room Temp" system, in which professors pre-select five to seven students for questioning, with Harvard Business School's "cold calling" method, which requires all students to prepare for potential questioning.

Read more of this story at Slashdot.

Categories: Computer, News

'Call of Duty' Maker Goes To War With 'Parasitic' Cheat Developers in LA Federal Court

Slashdot - Fri, 2025-07-25 16:40
A federal court has denied requests by Ryan Rothholz to dismiss or transfer an Activision lawsuit targeting his alleged Call of Duty cheating software operation. Rothholz, who operated under the online handle "Lerggy," submitted motions in June and earlier this month seeking to dismiss the case or move it to the Southern District of New York, but both were rejected due to filing errors. The May lawsuit alleges Rothholz created "Lergware" hacking software that enabled players to cheat by kicking opponents offline, then rebranded to develop "GameHook" after receiving a cease and desist letter in June 2023. Court filings say he sold a "master key" for $350 that facilitated cheating across multiple games. The hacks "are parasitic in nature," the complaint said, alleging violations of the game's terms of service, copyright law and the Computer Fraud and Abuse Act.

Read more of this story at Slashdot.

Categories: Computer, News

Indian Studio Uses AI To Change 12-Year-Old Film's Ending Without Director's Consent in Apparent First

Slashdot - Fri, 2025-07-25 16:00
Indian studio Eros International plans to re-release the 2013 Bollywood romantic drama "Raanjhanaa" on August 1 with an AI-generated alternate ending that transforms the film's tragic conclusion into a happier one. The original Hindi film, which starred Dhanush and Sonam Kapoor and became a commercial hit, ended with the protagonist's death. The AI-altered Tamil version titled "Ambikapathy" will allow the character to survive. Director Aanand L. Rai condemned the decision as "a deeply troubling precedent" made without his knowledge or consent. Eros CEO Pradeep Dwivedi defended the move as legally permitted under Indian copyright law, which grants producers full authorship rights over films. The controversy represents what appears to be the first instance of AI being used to fundamentally alter a completed film's narrative without director involvement.

Read more of this story at Slashdot.

Categories: Computer, News

College Grads Are Pursuing a New Career Path: Training AI Models

Slashdot - Fri, 2025-07-25 15:00
College graduates across specialized fields are pursuing a new career path training AI models, with companies paying between $30 and $160 per hour for their expertise. Handshake, a university career networking platform, recruited more than 1,000 AI trainers in six months through its newly created Handshake AI division for what it describes as the top five AI laboratories. The trend stems from federal funding cuts straining academic research and a stalled entry-level job market, making AI training an attractive alternative for recent graduates with specialized knowledge in fields including music, finance, law, education, statistics, virology, and quantum mechanics.

Read more of this story at Slashdot.

Categories: Computer, News

American Airlines Chief Blasts Delta's AI Pricing Plans as 'Inappropriate'

Slashdot - Fri, 2025-07-25 14:00
American Airlines Chief Executive Robert Isom criticized the use of AI in setting air fares during an earnings call, calling the practice "inappropriate" and a "bait and switch" move that could trick travelers. Isom's comments target Delta Air Lines, which is testing AI to help set pricing on about 3% of its network today with plans to expand to 20% by year-end. Delta maintains it is not using the technology to target customers with individualized offers based on personal information, stating all customers see identical fares across retail channels. US Senators Ruben Gallego, Richard Blumenthal, and Mark Warner have questioned Delta's AI pricing plans, citing data privacy concerns and potential fare increases. Southwest Airlines CEO Bob Jordan said his carrier also has no plans to use AI in revenue management or pricing decisions.

Read more of this story at Slashdot.

Categories: Computer, News

Mercedes-Benz Is Already Testing Solid-State Batteries In EVs With Over 600 Miles Range

Slashdot - Fri, 2025-07-25 12:00
An anonymous reader quotes a report from Electrek: The "holy grail" of electric vehicle battery tech may be here sooner than you'd think. Mercedes-Benz is testing EVs with solid-state batteries on the road, promising to deliver over 600 miles of range. Earlier this year, Mercedes marked a massive milestone, putting "the first car powered by a lithium-metal solid-state battery on the road" for testing. Mercedes has been testing prototypes in the UK since February. The company used a modified EQS prototype, equipped with the new batteries and other parts. The battery pack was developed by Mercedes-Benz and its Formula 1 supplier unit, Mercedes AMG High-Performance Powertrains (HPP). Mercedes is teaming up with US-based Factorial Energy to bring the new battery tech to market. In September, Factorial and Mercedes revealed the all-solid-state Solstice battery. The new batteries, promising a 25% range improvement, will power the German automaker's next-generation electric vehicles. According to Markus Schafer, the automaker's head of development, the first Mercedes EVs powered by solid-state batteries could be here by 2030. During an event in Copenhagen, Schafer told German auto news outlet Automobilwoche, "We expect to bring the technology into series production before the end of the decade." In addition to providing a longer driving range, Mercedes believes the new batteries can significantly reduce costs. Schafer said current batteries won't suffice, adding, "At the core, a new chemistry is needed." Mercedes and Factorial are using a sulfide-based solid electrolyte, said to be safer and more efficient.

Read more of this story at Slashdot.

Categories: Computer, News

Largest-Ever Supernova Catalog Provides Further Evidence Dark Energy Is Weakening

Slashdot - Fri, 2025-07-25 09:00
Scientists using the largest-ever catalog of Type Ia supernovas -- cosmic explosions from white dwarf "vampire stars" -- have uncovered further evidence that dark energy may not be constant. While the findings are still preliminary, they suggest the mysterious force driving the universe's expansion could be weakening, which "would have ramifications for our understanding of how the cosmos will end," reports Space.com. From the report: By comparing Type Ia supernovas at different distances and seeing how their light has been redshifted by the expansion of the universe, the rate of expansion of the universe (the Hubble constant) can be obtained. Then, that can be used to understand the impact of dark energy on the cosmos at different times. Fittingly, it was the study of 50 Type Ia supernovas that first tipped astronomers off to the existence of dark energy back in 1998. Since then, astronomers have observed a further 2,000 Type Ia supernovas with different telescopes. This new project corrects differences between those observations caused by different astronomical instruments, such as how the filters of telescopes drift over time, to curate the largest standardized Type Ia supernova dataset ever. It's named Union3. Union3 contains 2,087 supernovas from 24 different datasets spanning 7 billion years of cosmic time. It builds upon the 557 supernovas catalogued in an original dataset called Union2. Analysis of Union3 does indeed seem to corroborate the results of DESI -- that dark energy is weakening over time -- but the results aren't yet conclusive. What is impressive about Union3, however, is that it presents two separate routes of investigation that both point toward non-constant dark energy.
"I don't think anyone is jumping up and down getting overly excited yet, but that's because we scientists are suppressing any premature elation since we know that this could go away once we get even better data," Saul Perlmutter, study team member and a researcher at Berkeley Lab, said in a statement. "On the other hand, people are certainly sitting up in their chairs now that two separate techniques are showing moderate disagreement with the simple Lambda CDM model." And when it comes to dark energy in general, Perlmutter says the scientific community will pay attention. After all, he shared the 2011 Nobel Prize in Physics for discovering this strange force. "It's exciting that we're finally starting to reach levels of precision where things become interesting and you can begin to differentiate between the different theories of dark energy," Perlmutter said.
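The standard-candle logic the report describes can be sketched in a few lines. This is a toy illustration, not the Union3 pipeline: the -19.3 peak absolute magnitude and the sample supernova's numbers are assumed illustrative values, and the low-redshift approximation ignores the cosmological modeling where dark energy actually enters.

```python
C_KM_S = 299_792.458   # speed of light, km/s
M_ABS = -19.3          # assumed peak absolute magnitude of a Type Ia supernova

def luminosity_distance_mpc(apparent_mag: float) -> float:
    # Distance modulus relation: m - M = 5*log10(d_pc) - 5
    d_parsec = 10 ** ((apparent_mag - M_ABS + 5) / 5)
    return d_parsec / 1e6   # parsecs -> megaparsecs

def hubble_constant(apparent_mag: float, redshift: float) -> float:
    # At low redshift, recession velocity v ~ c*z, and Hubble's law gives v = H0 * d
    return C_KM_S * redshift / luminosity_distance_mpc(apparent_mag)

# Hypothetical supernova: apparent magnitude 17.35 at redshift 0.05
H0 = hubble_constant(17.35, 0.05)   # ~70 km/s per Mpc
```

Real analyses fit many such supernovas at once and let the distance-redshift relation vary with the dark energy model, which is how Union3 can distinguish a constant from a weakening dark energy.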

Read more of this story at Slashdot.

Categories: Computer, News

Error'd: It's Getting Hot in Here

The Daily WTF - Fri, 2025-07-25 08:30

Or cold. It's getting hot and cold. But on average... no. It's absolutely unbelievable.

"There's been a physics breakthrough!" Mate exclaimed. "Looking at meteoblue, I should probably reconsider that hike on Monday." Yes, you should blow it off, but you won't need to.


An anonymous fryfan frets "The yellow arches app (at least in the UK) is a buggy mess, and I'm amazed it works at all when it does. Whilst I've heard of null, it would appear that they have another version of null, called ullnullf! Comments sent to their technical team over the years, including those with good reproduceable bugs, tend to go unanswered, unfortunately."


Llarry A. whipped out his wallet but was baffled: "I tried to pay in cash, but I wasn't sure how much."


"Github goes gonzo!" groused Gwenn Le Bihan. "Seems like Github's LLM model broke containment and error'd all over the website layout, crawling out of its grouped button." Gross.


Peter G. gripes "The text in the image really says it all." He just needs to rate his experience above 7 in order to enable the submit button.


Categories: Computer

Two Major AI Coding Tools Wiped Out User Data After Making Cascading Mistakes

Slashdot - Fri, 2025-07-25 05:30
An anonymous reader quotes a report from Ars Technica: Two recent incidents involving AI coding assistants put a spotlight on risks in the emerging field of "vibe coding" -- using natural language to generate and execute code through AI models without paying close attention to how the code works under the hood. In one case, Google's Gemini CLI destroyed user files while attempting to reorganize them. In another, Replit's AI coding service deleted a production database despite explicit instructions not to modify code. The Gemini CLI incident unfolded when a product manager experimenting with Google's command-line tool watched the AI model execute file operations that destroyed data while attempting to reorganize folders. The destruction occurred through a series of move commands targeting a directory that never existed. "I have failed you completely and catastrophically," Gemini CLI output stated. "My review of the commands confirms my gross incompetence." The core issue appears to be what researchers call "confabulation" or "hallucination" -- when AI models generate plausible-sounding but false information. In these cases, both models confabulated successful operations and built subsequent actions on those false premises. However, the two incidents manifested this problem in distinctly different ways. [...] The user in the Gemini CLI incident, who goes by "anuraag" online and identified themselves as a product manager experimenting with vibe coding, asked Gemini to perform what seemed like a simple task: rename a folder and reorganize some files. Instead, the AI model incorrectly interpreted the structure of the file system and proceeded to execute commands based on that flawed analysis. [...] When you move a file to a non-existent directory in Windows, it renames the file to the destination name instead of moving it. Each subsequent move command executed by the AI model overwrote the previous file, ultimately destroying the data. [...] 
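The move-to-nonexistent-directory failure mode is easy to reproduce. Below is a minimal Python sketch, not the actual commands Gemini CLI ran: `shutil.move` to a path that is not an existing directory treats the path as the new file name, mirroring the Windows `move` behavior the report describes, so each later move silently clobbers the previous one.

```python
import os
import shutil
import tempfile

# Set up a scratch directory with two files to "reorganize".
work = tempfile.mkdtemp()
for name, text in [("a.txt", "first"), ("b.txt", "second")]:
    with open(os.path.join(work, name), "w") as f:
        f.write(text)

dest = os.path.join(work, "missing_dir")  # this directory was never created

# Because dest is not an existing directory, each move renames the file
# to the path "missing_dir" instead of placing the file inside it.
shutil.move(os.path.join(work, "a.txt"), dest)  # creates a *file* named missing_dir
shutil.move(os.path.join(work, "b.txt"), dest)  # overwrites it; "first" is destroyed

with open(dest) as f:
    survivor = f.read()   # only the last file's contents remain
```

A single `os.makedirs(dest)` before the moves would have made both files land inside the directory, which is why a misread of the file-system structure was enough to cascade into data loss.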
The Gemini CLI failure happened just days after a similar incident with Replit, an AI coding service that allows users to create software using natural language prompts. According to The Register, SaaStr founder Jason Lemkin reported that Replit's AI model deleted his production database despite explicit instructions not to change any code without permission. Lemkin had spent several days building a prototype with Replit, accumulating over $600 in charges beyond his monthly subscription. "I spent the other [day] deep in vibe coding on Replit for the first time -- and I built a prototype in just a few hours that was pretty, pretty cool," Lemkin wrote in a July 12 blog post. But unlike the Gemini incident where the AI model confabulated phantom directories, Replit's failures took a different form. According to Lemkin, the AI began fabricating data to hide its errors. His initial enthusiasm deteriorated when Replit generated incorrect outputs and produced fake data and false test results instead of proper error messages. "It kept covering up bugs and issues by creating fake data, fake reports, and worse of all, lying about our unit test," Lemkin wrote. In a video posted to LinkedIn, Lemkin detailed how Replit created a database filled with 4,000 fictional people. The AI model also repeatedly violated explicit safety instructions. Lemkin had implemented a "code and action freeze" to prevent changes to production systems, but the AI model ignored these directives. The situation escalated when the Replit AI model deleted his database containing 1,206 executive records and data on nearly 1,200 companies. When prompted to rate the severity of its actions on a 100-point scale, Replit's output read: "Severity: 95/100. This is an extreme violation of trust and professional standards." 
When questioned about its actions, the AI agent admitted to "panicking in response to empty queries" and running unauthorized commands -- suggesting it may have deleted the database while attempting to "fix" what it perceived as a problem. Like Gemini CLI, Replit's system initially indicated it couldn't restore the deleted data -- information that proved incorrect when Lemkin discovered the rollback feature did work after all. "Replit assured me it's ... rollback did not support database rollbacks. It said it was impossible in this case, that it had destroyed all database versions. It turns out Replit was wrong, and the rollback did work. JFC," Lemkin wrote in an X post.

Read more of this story at Slashdot.

Categories: Computer, News

UK Student Jailed For Selling Phishing Kits Linked To $135M of Fraud

Slashdot - Fri, 2025-07-25 03:40
A 21-year-old student who designed and distributed online kits linked to $135 million worth of fraud has been jailed for seven years. From a report: Ollie Holman created phishing kits that mimicked government, bank and charity websites so that criminals could harvest victims' personal information to defraud them. In one case a kit was used to mimic a charity's donation webpage, so when someone tried to give money, their card details were captured and used by criminals. Holman, of Eastcote in north-west London, created and supplied 1,052 phishing kits that targeted 69 organisations across 24 countries. He also offered tutorials in how to use the kits and built up a network of almost 700 connections. The fake websites supplied in the kits had features that allowed information such as login and bank details to be stored. It is estimated that Holman received $405,000 from selling the kits between 2021 and 2023. The kits were distributed through the encrypted messaging service Telegram.

Read more of this story at Slashdot.

Categories: Computer, News

Scientists Are Developing Artificial Blood That Could Save Lives In Emergencies

Slashdot - Fri, 2025-07-25 03:00
Scientists at the University of Maryland are developing ErythroMer, a freeze-dried artificial blood substitute made from hemoglobin encased in fat bubbles, designed to be shelf-stable for years and reconstituted with water in emergencies. With promising animal trial results and significant funding from the Department of Defense, the team aims to begin human testing within two years. NPR reports: "The No. 1 cause of preventable death on the battlefield is hemorrhage still today," says Col. Jeremy Pamplin, the project manager at the Defense Advanced Research Projects Agency. "That's a real problem for the military and for the civilian world." [Dr. Allan Doctor, a scientist at the University of Maryland working to develop the artificial blood substitute] is optimistic his team may be on the brink of solving that problem with ... ErythroMer. Doctor co-founded KaloCyte to develop the blood and serves on the board and as the firm's chief scientific officer. "We've been able to successfully recapitulate all the functions of blood that are important for a resuscitation in a system that can be stored for years at ambient temperature and be used at the scene of an accident," he says. [...] Doctor's team has tested their artificial blood on hundreds of rabbits and so far it looks safe and effective. "It would change the way that we could take care of people who are bleeding outside of hospitals," Doctor says. "It'd be transformative." [...] While the results so far seem like cause for optimism, Doctor says he still needs to prove to the Food and Drug Administration that his artificial blood would be safe and effective for people. But he hopes to start testing it in humans within two years. A Japanese team is already testing a similar synthetic blood in people. "I'm very hopeful," Doctor says. While promising, some experts remain cautious, noting that past attempts at artificial blood ultimately proved unsafe. 
"I think it's a reasonable approach," says Tim Estep, a scientist at Chart Biotech Consulting who consults with companies developing artificial blood. "But because this field has been so challenging, the proof will be in the clinical trials," he adds. "While I'm overall optimistic, placing a bet on any one technology right now is overall difficult."

Read more of this story at Slashdot.

Categories: Computer, News
