Computer

Journalists at 'The Atlantic' Demand Assurances Their Jobs Will Be Protected From OpenAI

Slashdot - Sat, 2024-08-03 19:34
"As media bosses scramble to decide if and how they should partner with AI companies, workers are increasingly concerned that the technology could imperil their jobs or degrade their work..." reports the Washington Post. The latest example? "Two months after the Atlantic reached a licensing deal with OpenAI, staffers at the storied magazine are demanding the company ensure their jobs and work are protected." (Nearly 60 journalists have now signed a letter demanding the company "stop prioritizing its bottom line and champion the Atlantic's journalism.") The unionized staffers want the Atlantic bosses to include AI protections in the union contract, which the two sides have been negotiating since 2022. "Our editorial leaders say that The Atlantic is a magazine made by humans, for humans," the letter says. "We could not agree more..." The Atlantic's new deal with OpenAI grants the tech firm access to the magazine's archives to train its AI tools. While the Atlantic in return will have special access to experiment with these AI tools, the magazine says it is not using AI to create journalism. But some journalists and media observers have raised concerns about whether AI tools are accurately and fairly manipulating the human-written text they work with. The Atlantic staffers' letter noted a pattern by ChatGPT of generating gibberish web addresses instead of the links intended to attribute the reporting it has borrowed, as well as sending readers to sites that have summarized Atlantic stories rather than the original work... Atlantic spokeswoman Anna Bross said company leaders "agree with the general principles" expressed by the union. For that reason, she said, they recently proposed a commitment to not to use AI to publish content "without human review and editorial oversight." Representatives from the Atlantic Union bargaining committee told The Washington Post that "the fact remains that the company has flatly refused to commit to not replacing employees with AI." The article also notes that last month the union representing Lifehacker, Mashable and PCMag journalists "ratified a contract that protects union members from being laid off because AI has impacted their roles and requires the company to discuss any such plans to implement AI tools ahead of time."

Read more of this story at Slashdot.

Categories: Computer, News

Gen X and Millennials at Higher Cancer Risk Than Older Generations

Slashdot - Sat, 2024-08-03 18:34
"Generation X and millennials are at an increased risk of developing certain cancers compared with older generations," reports the Washington Post, "a shift that is probably due to generational changes in diet, lifestyle and environmental exposures, a large new study suggests." Researchers from the American Cancer analyzed data from more than 23.5 million patients who had been diagnosed with 34 types of cancer from 2000 to 2019 — and also studied mortality data that included 7 million deaths in the U.S. from 25 types of cancer among people ages 25 to 84. [The researchers reported] that cancer rates for 17 of the 34 most common cancers are increasing in progressively younger generations. The findings included: - Cancers with the most significant increased risk are kidney, pancreatic and small intestine, which are two to three times as high for millennial men and women as baby boomers. - Millennial women also are at higher risk of liver and bile duct cancers compared with baby boomers. - Although the risk of getting cancer is rising, for most cancers, the risk of dying of the disease stabilized or declined among younger people. But mortality rates increased for gallbladder, colorectal, testicular and uterine cancers, as well as for liver cancer among younger women. "It is a concern," said Ahmedin Jemal, senior vice president of the American Cancer Society's surveillance and health equity science department, who was the senior author of the study. If the current trend continues, the increased cancer and mortality rates among younger people may "halt or even reverse the progress that we have made in reducing cancer mortality over the past several decades," he added. While there is no clear explanation for the increased cancer rates among younger people, the researchers suggest that there may be several contributing factors, including rising obesity rates; altered microbiomes from unhealthy diets high in saturated fats, red meat and ultra-processed foods or antibiotic use; poor sleep; sedentary lifestyles; and environmental factors, including exposure to pollutants and carcinogenic chemicals.

Read more of this story at Slashdot.

Categories: Computer, News

Go Tech Lead Russ Cox Steps Down to Focus on AI-Powered Open-Source Contributor Bot

Slashdot - Sat, 2024-08-03 17:34
Thursday Go's long-time tech lead Russ Cox made an announcement:

Starting September 1, Austin Clements will be taking over as the tech lead of Go: both the Go team at Google and the overall Go project. Austin is currently the tech lead for what we sometimes call the "Go core", which encompasses compiler toolchain, runtime, and releases. Cherry Mui will be stepping up to lead those areas. I am not leaving the Go project, but I think the time is right for a change... I will be shifting my focus to work more on Gaby [or "Go AI bot," an open-source contributor agent] and Oscar [an open-source contributor agent architecture], trying to make useful contributions in the Go issue tracker to help all of you work more productively. I am hopeful that work on Oscar will uncover ways to help open source maintainers that will be adopted by other projects, just like some of Go's best ideas have been adopted by other projects. At the highest level, my goals for Oscar are to build something useful, learn something new, and chart a path for other projects. These are the same broad goals I've always had for our work on Go, so in that sense Oscar feels like a natural continuation.

The post notes that new tech lead Austin Clements "has been working on Go at Google since 2014" (and Mui since 2016). "Their judgment is superb and their knowledge of Go and the systems it runs on both broad and deep. When I have general design questions or need to better understand details of the compiler, linker, or runtime, I turn to them."

It's important to remember that tech lead — like any position of leadership — is a service role, not an honorary title. I have been leading the Go project for over 12 years, serving all of you, and trying to create the right conditions for all of you to do your best work. Large projects like Go absolutely benefit from stable leadership, but they can also benefit from leadership changes. New leaders bring new strengths and fresh perspectives. For Go, I think 12+ years of one leader is enough stability; it's time for someone new to serve in this role.

In particular, I don't believe that the "BDFL" (benevolent dictator for life) model is healthy for a person or a project. It doesn't create space for new leaders. It's a single point of failure. It doesn't give the project room to grow. I think Python benefited greatly from Guido stepping down in 2018 and letting other people lead, and I've had in the back of my mind for many years that we should have a Go leadership change eventually...

I am going to consciously step back from decision making and create space for Austin and the others to step forward, but I am not disappearing. I will still be available to talk about Go designs, review CLs, answer obscure history questions, and generally help and support you all in whatever way I can. I will still file issues and send CLs from time to time, I have been working on a few potential new standard libraries, I will still advocate for Go across the industry, and I will be speaking about Go at GoLab in Italy in November...

I am incredibly proud of the work we have all accomplished together, and I am confident in the leaders both on the Go team at Google and in the Go community. You are all doing remarkable work, and I know you will continue to do that.

Read more of this story at Slashdot.

Categories: Computer, News

Could AI Speed Up the Design of Nuclear Reactors?

Slashdot - Sat, 2024-08-03 16:34
A professor at Brigham Young University "has figured out a way to shave critical years off the complicated design and licensing processes for modern nuclear reactors," according to an announcement from the university. "AI is teaming up with nuclear power."

The typical time frame and cost to license a new nuclear reactor design in the United States is roughly 20 years and $1 billion. To then build that reactor requires an additional five years and between $5 and $30 billion. By using AI in the time-consuming computational design process, [chemical engineering professor Matt] Memmott estimates a decade or more could be cut off the overall timeline, saving millions and millions of dollars in the process — which should prove critical given the nation's looming energy needs... "Being able to reduce the time and cost to produce and license nuclear reactors will make that power cheaper and a more viable option for environmentally friendly power to meet the future demand...."

Engineers deal with elements from neutrons on the quantum scale all the way up to coolant flow and heat transfer on the macro scale. [Memmott] also said there are multiple layers of physics that are "tightly coupled" in that process: the movement of neutrons is tightly coupled to the heat transfer, which is tightly coupled to materials, which is tightly coupled to corrosion, which is coupled to the coolant flow. "A lot of these reactor design problems are so massive and involve so much data that it takes months of teams of people working together to resolve the issues," he said... Memmott is finding AI can reduce that heavy time burden and lead to more power production, not only to meet rising demand but also to keep power costs down for general consumers...

Technically speaking, Memmott's research proves the concept of replacing a portion of the required thermal hydraulic and neutronics simulations with a trained machine learning model that predicts temperature profiles from variable geometric reactor parameters, and then optimizing those parameters. The result would be an optimal nuclear reactor design at a fraction of the computational expense required by traditional design methods.

For his research, he and BYU colleagues built a dozen machine learning algorithms to examine their ability to process the simulated data needed in designing a reactor. They identified the top three algorithms, then refined the parameters until they found one that worked really well and could handle a preliminary data set as a proof of concept. It worked (and they published a paper on it), so they took the model and (for a second paper) put it to the test on a very difficult nuclear design problem: optimal nuclear shield design.

The resulting papers, recently published in the academic journal Nuclear Engineering and Design, showed that their refined model can geometrically optimize the design elements much faster than the traditional method. In two days, Memmott's AI algorithm determined an optimal nuclear-reactor shield design that had taken a real-world molten salt reactor company six months. "Of course, humans still ultimately make the final design decisions and carry out all the safety assessments," Memmott says in the announcement, "but it saves a significant amount of time at the front end.... Our demand for electricity is going to skyrocket in years to come and we need to figure out how to produce additional power quickly. The only baseload power we can make in the Gigawatt quantities needed that is completely emissions free is nuclear power."

Thanks to long-time Slashdot reader schwit1 for sharing the article.
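To make the "surrogate-and-optimize" concept above concrete, here is a minimal Python sketch of the general pattern the announcement describes: train a cheap machine learning model on a batch of expensive simulation results, then let the optimizer query the model instead of the simulator. Everything in it is illustrative: the toy objective function stands in for real thermal-hydraulic and neutronics codes, and the parameter names, bounds, and the choice of gradient boosting plus differential evolution are assumptions, not Memmott's actual models or data.

    # Illustrative sketch of surrogate-assisted design optimization.
    # NOTE: the "simulation," parameter names, and bounds below are invented
    # for demonstration; they are not from Memmott's papers.
    import numpy as np
    from scipy.optimize import differential_evolution
    from sklearn.ensemble import GradientBoostingRegressor

    rng = np.random.default_rng(0)

    # Stand-in for an expensive physics simulation: maps geometric reactor
    # parameters (say, channel radius, pitch, shield thickness) to a peak
    # temperature. In practice each sample would come from hours of
    # thermal-hydraulic/neutronics computation, not a closed-form formula.
    def fake_simulation(params):
        radius, pitch, thickness = params
        return 900.0 - 40.0 * radius + 15.0 * (pitch - 1.3) ** 2 + 60.0 / thickness

    # 1. Run a modest number of "simulations" to build a training set.
    bounds = [(0.5, 2.0), (1.0, 2.0), (0.1, 1.0)]
    lows, highs = zip(*bounds)
    X = rng.uniform(lows, highs, size=(500, 3))
    y = np.array([fake_simulation(x) for x in X])

    # 2. Train a cheap surrogate that predicts temperature from geometry.
    surrogate = GradientBoostingRegressor().fit(X, y)

    # 3. Optimize the geometry against the surrogate; each evaluation now
    # costs microseconds instead of simulation hours.
    result = differential_evolution(
        lambda p: float(surrogate.predict(p.reshape(1, -1))[0]),
        bounds,
        seed=0,
    )
    print("candidate geometry:", result.x)
    print("predicted peak temperature:", result.fun)

The same skeleton would extend to the coupled, multi-physics setting the researchers describe; the point is only that once the surrogate is trained, the optimizer can explore the design space orders of magnitude faster than by calling the full simulation at every step.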

Read more of this story at Slashdot.

Categories: Computer, News

Initiative Aims To Require EU Game Publishers To Make Retired Games Playable

Slashdot - Sat, 2024-08-03 12:00
A proposed European Union law seeks to ensure that video games sold or licensed in the EU remain playable even if servers are shut down or studios close. The law would require publishers of sold and free-to-play games with microtransactions to provide resources to keep games functional, such as allowing players to host their own servers. Through a process called the "European Citizens' Initiative," the petition needs one million signatures just to have a chance at becoming law. PC Gamer reports:

"An increasing number of publishers are selling videogames that are required to connect through the internet to the game publisher, or 'phone home' to function," the petition reads. "While this is not a problem in itself, when support ends for these types of games, very often publishers simply sever the connection necessary for the game to function, proceed to destroy all working copies of the game, and implement extensive measures to prevent the customer from repairing the game in any way."

Understanding that developers and publishers can't support games forever, the initiative would expect "the publisher to provide resources for the said videogame once they discontinue it while leaving it in a reasonably functional (playable) state." That means giving players the tools to host the game on their own servers, for example, and removing the requirement for games to connect to the publisher's (defunct) servers in order to be played. This is what the developer behind Knockout City did when it pulled the plug on the game's official servers.

The initiative applies not only to games that are sold but also to free-to-play games that have microtransactions for assets (like skins) or other paid-for features. The thought is, if you purchase an item in a free game, you should have the right to continue to use it indefinitely -- which means keeping that free game playable in some form. It's important to note that even a million signatures doesn't mean an automatic win, just that the petition will go forward to the European Union as a proposal to become law.

Read more of this story at Slashdot.

Categories: Computer, News