Slashdot
OpenAI Releases First Open-Weight Models Since GPT-2
OpenAI has released two open-weight language models, marking the startup's first such release since GPT-2 in 2019. The models, gpt-oss-120b and gpt-oss-20b, can run locally on consumer devices and be fine-tuned for specific purposes. Both models use chain-of-thought reasoning approaches first deployed in OpenAI's o1 model and can browse the web, execute code, and function as AI agents.
The smaller 20-billion-parameter model runs on consumer devices with 16 GB of memory, while gpt-oss-120b requires about 80 GB. OpenAI said the 120-billion-parameter model performs similarly to the company's proprietary o3 and o4-mini models. The models are available free on Hugging Face under the Apache 2.0 license, following safety testing that delayed the release first announced in March.
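For readers who want to poke at the weights, here is a minimal sketch of loading the smaller model with the Hugging Face transformers library. The repo id "openai/gpt-oss-20b" follows the naming in the story and should be verified against the actual Hugging Face listing; device_map="auto" additionally requires the accelerate package.

```python
# Minimal sketch: running gpt-oss-20b locally via Hugging Face transformers.
# The repo id below follows the naming in the story; verify it against the
# actual Hugging Face listing. device_map="auto" requires accelerate.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "openai/gpt-oss-20b"  # assumed repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",  # keep the checkpoint's native precision
    device_map="auto",   # place layers across available GPU/CPU memory
)

inputs = tokenizer("Summarize chain-of-thought reasoning in one sentence.",
                   return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```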
Read more of this story at Slashdot.
Three US Agencies Get Failing Grades For Not Following IT Best Practices
The Government Accountability Office has issued reports criticizing the Department of Homeland Security, Environmental Protection Agency, and General Services Administration for failing to implement critical IT and cybersecurity recommendations.
DHS leads with 43 unresolved recommendations dating to 2018, including seven priority matters. The EPA has 11 outstanding items, including failures to submit FedRAMP documentation and conduct organization-wide cybersecurity risk assessments. GSA has four pending recommendations.
All three agencies failed to properly log cybersecurity events and conduct required annual IT portfolio reviews. DHS's HART biometric program remains behind schedule without proper cost accounting or privacy controls, with all nine recommendations from 2023 still open.
Read more of this story at Slashdot.
Wikipedia Editors Adopt 'Speedy Deletion' Policy for AI Slop Articles
Wikipedia editors have adopted a policy enabling administrators to delete AI-generated articles without the standard week-long discussion period. Articles containing telltale LLM responses like "Here is your Wikipedia article on" or "Up to my last training update" now qualify for immediate removal.
Articles with fabricated citations -- nonexistent papers or unrelated sources such as beetle research cited in computer science articles -- also meet deletion criteria.
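The telltale-phrase criterion lends itself to simple automation. Below is a hypothetical sketch, not Wikipedia's actual tooling, of a filter that flags drafts containing the boilerplate LLM responses quoted above.

```python
# Hypothetical sketch of a telltale-phrase filter, in the spirit of the
# criteria described above. This is NOT Wikipedia's actual tooling.
import re

# Boilerplate phrases quoted in the policy, plus the kind of chatbot
# self-reference that tends to leak into pasted LLM output.
TELLTALE_PATTERNS = [
    r"here is your wikipedia article on",
    r"up to my last training update",
    r"as a large language model",
]

def looks_like_llm_output(article_text: str) -> bool:
    """Return True if the text contains a known LLM boilerplate phrase."""
    lowered = article_text.lower()
    return any(re.search(pattern, lowered) for pattern in TELLTALE_PATTERNS)

# Example: a draft like this would qualify for speedy deletion.
draft = "Here is your Wikipedia article on the history of pencils: ..."
print(looks_like_llm_output(draft))  # True
```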
Read more of this story at Slashdot.
'No One Cares' About Elite Degrees at Palantir, CEO Tells Investors
Palantir chief executive Alex Karp has told analysts and investors that the company treats Harvard, Princeton and Yale graduates the same as those without college degrees, calling employment at the data analytics firm "a new credential independent of class and background."
During the earnings call Monday where Palantir reported its first billion-dollar revenue quarter, Karp said university graduates come to the company after being "engaged in platitudes" and claimed that workers without college degrees, using Palantir products, sometimes create more value than degree holders. The company launched its Meritocracy Fellowship this spring to recruit talent outside traditional university pathways.
Read more of this story at Slashdot.
Microsoft Teases the Future of Windows as an Agentic OS
An anonymous reader shares a report: Microsoft has published a new video that appears to be the first in an upcoming series dubbed "Windows 2030 Vision," in which the company outlines its vision for the future of Windows over the next five years. Curiously, it hints at some potentially major AI-driven changes on the horizon.
This first episode features David Weston, Microsoft's Corporate Vice President of Enterprise & Security, who opens the video by saying "the world of mousing and keyboarding around will feel as alien as it does to Gen Z [using] MS-DOS."
Right out of the gate, it sounds like he's teasing the potential for a radical new desktop UX made possible by agentic AI. Weston later continues, "I truly believe the future version of Windows and other Microsoft operating systems will interact in a multimodal way. The computer will be able to see what we see, hear what we hear, and we can talk to it and ask it to do much more sophisticated things."
Read more of this story at Slashdot.
AI Is Listening to Your Meetings. Watch What You Say.
AI meeting-transcription software is inadvertently sharing private conversations with all meeting participants through automated summaries. The Wall Street Journal found a series of mishaps that the people involved confirmed on the record.
Digital marketing agency owner Tiffany Lewis discovered her "Nigerian prince" joke about a potential client was included in the summary sent to that same client. Nashville branding firm Studio Delger received meeting notes documenting their discussion about "getting sandwich ingredients from Publix" and not liking soup when their client failed to appear. Communications agency coordinator Andrea Serra found that her personal frustrations about a neighborhood Whole Foods, along with a kitchen mishap while making a sweet potato recipe, were included in official meeting recaps distributed to colleagues.
Read more of this story at Slashdot.
Nearly 100,000 ChatGPT Conversations Were Searchable on Google
An anonymous reader shares a report: A researcher has scraped nearly 100,000 ChatGPT conversations that users had set to share publicly and that Google then indexed, creating a snapshot of the sorts of things people use OpenAI's chatbot for, and are inadvertently exposing. 404 Media's testing found the dataset includes everything from the sensitive to the benign: alleged texts of non-disclosure agreements, discussions of confidential contracts, people trying to use ChatGPT to understand their relationship issues, and lots of people asking ChatGPT to write LinkedIn posts.
The news follows a July 30 Fast Company article which reported that "thousands" of shared ChatGPT chats were appearing in Google search results. People have since dug through some of the chats indexed by Google. The dataset of around 100,000 conversations provides a better sense of the scale of the problem and highlights some of the potential privacy risks of using the sharing features of AI tools. OpenAI did not dispute the figure of around 100,000 indexed chats when contacted for comment.
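The mechanics behind the exposure are mundane: a shared chat is an ordinary public web page, fetchable without authentication and therefore fair game for crawlers. A minimal sketch of that check, using a placeholder share ID and an assumed URL pattern:

```python
# Minimal sketch: a shared ChatGPT conversation is just a public web page,
# reachable without authentication, and therefore indexable by crawlers.
# The share ID below is a placeholder and the URL pattern is assumed.
import requests

share_url = "https://chatgpt.com/share/00000000-0000-0000-0000-000000000000"

resp = requests.get(share_url, timeout=10)
if resp.status_code == 200:
    print("Share page is publicly reachable and could be indexed.")
else:
    print(f"Not reachable (HTTP {resp.status_code}).")
```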
Read more of this story at Slashdot.
Deadly Titan Submersible Implosion Was Preventable Disaster, Coast Guard Concludes
The U.S. Coast Guard determined the implosion of the Titan submersible that killed five people while traveling to the wreckage of the Titanic was a preventable disaster caused by OceanGate Expeditions' inability to meet safety and engineering standards. WSJ: A 335-page report [PDF] detailing a two-year inquiry by the U.S. Coast Guard's Marine Board of Investigation found that the company that owned and operated the Titan failed to follow maintenance and inspection protocols for the deep-sea submersible.
OceanGate avoided regulatory review and managed the submersible outside of standard protocols "by strategically creating and exploiting regulatory confusion and oversight challenges," the report said. The Coast Guard opened its highest-level investigation into the event in June 2023, shortly after the implosion occurred. "There is a need for stronger oversight and clear options for operators who are exploring new concepts outside of the existing regulatory framework," Jason Neubauer, the chair of the Coast Guard Marine Board of Investigation for the Titan submersible, said in a statement.
Read more of this story at Slashdot.
An Illinois Bill Banning AI Therapy Has Been Signed Into Law
An anonymous reader shares a report: In a landmark move, Illinois state lawmakers have passed a bill banning AI from acting as a standalone therapist and placing firm guardrails on how mental health professionals can use AI to support care. Governor JB Pritzker signed the bill into law on Aug. 1.
The legislation, dubbed the Wellness and Oversight for Psychological Resources Act, was introduced by Rep. Bob Morgan and makes one thing clear: only licensed professionals can deliver therapeutic or psychotherapeutic services to another human being. [...] Under the new state law, mental health providers are barred from using AI to independently make therapeutic decisions, interact directly with clients, or create treatment plans -- unless a licensed professional has reviewed and approved them. The law also closes a loophole that allowed unlicensed persons to advertise themselves as "therapists."
Read more of this story at Slashdot.
Fraudulent Scientific Papers Are Rapidly Increasing, Study Finds
For years, whistle-blowers have warned that fake results are sneaking into the scientific literature at an increasing pace. A new statistical analysis backs up the concern. From a report: A team of researchers found evidence of shady organizations churning out fake or low-quality studies on an industrial scale. And their output is rising fast, threatening the integrity of many fields.
"If these trends are not stopped, science is going to be destroyed," said LuÃs A. Nunes Amaral, a data scientist at Northwestern University and an author of the study, which was published in the Proceedings of the National Academy of Sciences on Monday. Science has made huge advances over the past few centuries only because new generations of scientists could read about the accomplishments of previous ones. Each time a new paper is published, other scientists can explore the findings and think about how to make their own discoveries. Fake scientific papers produced by commercial "paper mills" are doubling every year and a half, according to the report. Northwestern University researchers examined over one million papers and identified networks of fraudulent studies sold to scientists seeking to pad their publication records. The team estimates the actual scope of fraud may be 100 times greater than currently detected cases. Paper mills charge hundreds to thousands of dollars for fake authorship and often target specific research fields like microRNA cancer studies.
Read more of this story at Slashdot.
Man Controls iPad With His Mind Using Synchron Brain Implant
BrianFagioli shares a report from NERDS.xyz: Synchron has just released a public demo showing something that used to feel impossible. A man with ALS is now using his iPad with nothing but his brain. No hands. No voice. No eye-tracking. Just thought. The man in the video is named Mark. He's part of Synchron's COMMAND clinical study and has an implant called the Stentrode. It sits inside his brain's blood vessels and picks up his motor intention. Those signals get sent wirelessly to an external decoder, which then tells the iPad what to do. It's all made possible by Apple's new Brain-Computer Interface Human Interface Device protocol, which lets iPadOS treat brain activity like an actual input method.
Apple's built-in Switch Control feature makes the whole thing work on the software side. The iPad even sends back screen context to the BCI decoder to make everything run more smoothly and accurately. [...] Synchron was the first company to start clinical trials with a permanently implanted BCI. The big difference here is that it doesn't require open brain surgery. The device is implanted through the blood vessels, which makes it way more practical for real-world use.
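Synchron has not published its decoder internals, so the pipeline described above (motor-intention signal in, discrete input event out) can only be caricatured. The sketch below is entirely hypothetical: the confidence threshold, the signal values, and the emit_switch_event stand-in for the BCI HID transport are all invented for illustration.

```python
# Purely hypothetical caricature of the pipeline described above: a decoded
# motor-intention signal becomes a discrete "switch" input event, which is
# what Switch Control consumes on the iPad side. The threshold, signal
# values, and event format are all invented here.
from typing import Iterable

INTENT_THRESHOLD = 0.8  # invented confidence cutoff

def emit_switch_event() -> None:
    # Stand-in for sending an input event over the BCI HID protocol.
    print("switch: select")

def decode_loop(intent_confidences: Iterable[float]) -> None:
    """Fire one switch event each time intention confidence crosses the threshold."""
    armed = True
    for confidence in intent_confidences:
        if armed and confidence >= INTENT_THRESHOLD:
            emit_switch_event()
            armed = False  # debounce until the signal drops again
        elif confidence < INTENT_THRESHOLD:
            armed = True

# Simulated decoder output: one clear intention spike -> one select event.
decode_loop([0.1, 0.2, 0.85, 0.9, 0.3])
```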
Read more of this story at Slashdot.
NASA's Lunar Trailblazer Mission Ends In Disappointment
NASA's Lunar Trailblazer mission ended prematurely after losing contact with the satellite just one day after launch, the agency announced today. Engadget reports: The NASA satellite was part of the IM-2 mission by Intuitive Machines, which launched on a SpaceX Falcon 9 rocket from Kennedy Space Center on February 26 at 7:16PM ET. The Lunar Trailblazer successfully separated from the rocket as planned about 48 minutes after launch. Operators in Pasadena, CA established communication with the satellite at 8:13PM ET, but two-way communication was lost the next day and the team was unable to recover the connection. The limited data ground teams received before the satellite went dark indicated that the craft's solar arrays were not correctly positioned toward the sun, causing its batteries to drain. "While it was not the outcome we had hoped for, mission experiences like Lunar Trailblazer help us to learn and reduce the risk for future, low-cost small satellites to do innovative science as we prepare for a sustained human presence on the Moon," said Nicky Fox, associate administrator at NASA Headquarters' Science Mission Directorate. "Thank you to the Lunar Trailblazer team for their dedication in working on and learning from this mission through to the end."
Read more of this story at Slashdot.