Feed aggregator
Facebook and Instagram Offer UK Users an Ad-Stopping Subscription Fee
"Facebook and Instagram owner Meta is launching paid subscriptions for users who do not want to see adverts in the UK," reports the BBC:
The company said it would start notifying users in the coming weeks to let them choose whether to subscribe to its platforms if they wish to use them without seeing ads. EU users of its platforms can already pay a fee starting from €5.99 (£5) a month to see no ads — but subscriptions will start from £2.99 a month for UK users.
"It will give people in the UK a clear choice about whether their data is used for personalised advertising, while preserving the free access and value that the ads-supported internet creates for people, businesses and platforms," Meta said. But UK users will not have an option to not pay and see "less personalised" adverts — a feature Meta added for EU users after regulators raised concerns...
Meta said its own model would see its subscription for no ads cost £2.99 a month on the web or £3.99 a month on iOS and Android apps — with the higher fee to offset cuts taken from transactions by Apple and Google... [Meta] reiterated its critical stance on the EU on Friday, saying its regulations were creating a worse experience for users and businesses unlike the UK's "more pro-growth and pro-innovation regulatory environment".
"Meta said its own model would see its subscription for no ads cost £2.99 a month on the web or £3.99 a month on iOS and Android apps," according to the BBC, "with the higher fee to offset cuts taken from transactions by Apple and Google."
Even users not paying for an ad-free experience have "tools and settings that empower people to control their ads experience," according to Meta's announcement. These include Ad Preferences, which influences the data used to inform ads, including Activity Information from Ad Partners. "We also have tools in our products that explain 'Why am I seeing this ad?' and how people can manage their ad experience. We do not sell personal data to advertisers."
Read more of this story at Slashdot.
Will AI Bring an End to Top Programming Language Rankings?
IEEE Spectrum ranks the popularity of programming languages — but is there a problem? Programmers "are turning away from many of these public expressions of interest. Rather than page through a book or search a website like Stack Exchange for answers to their questions, they'll chat with an LLM like Claude or ChatGPT in a private conversation."
And with an AI assistant like Cursor helping to write code, the need to pose questions in the first place is significantly decreased. For example, across the total set of languages evaluated in the Top Programming Languages, the number of questions we saw posted per week on Stack Exchange in 2025 was just 22% of what it was in 2024...
However, an even more fundamental problem is looming in the wings... In the same way most developers today don't pay much attention to the instruction sets and other hardware idiosyncrasies of the CPUs that their code runs on, which language a program is vibe coded in ultimately becomes a minor detail... [T]he popularity of different computer languages could become as obscure a topic as the relative popularity of railway track gauges... But if an AI is soothing our irritations with today's languages, will any new ones ever reach the kind of critical mass needed to make an impact? Will the popularity of today's languages remain frozen in time?
That's ultimately the larger question: "How much abstraction and anti-foot-shooting structure will a sufficiently-advanced coding AI really need...?"
[C]ould we get our AIs to go straight from prompt to an intermediate language that could be fed into the interpreter or compiler of our choice? Do we need high-level languages at all in that future? True, this would turn programs into inscrutable black boxes, but they could still be divided into modular testable units for sanity and quality checks. And instead of trying to read or maintain source code, programmers would just tweak their prompts and generate software afresh.
What's the role of the programmer in a future without source code? Architecture design and algorithm selection would remain vital skills... How should a piece of software be interfaced with a larger system? How should new hardware be exploited? In this scenario, computer science degrees, with their emphasis on fundamentals over the details of programming languages, rise in value over coding boot camps.
Will there be a Top Programming Language in 2026? Right now, programming is going through the biggest transformation since compilers broke onto the scene in the early 1950s. Even if the predictions that much of AI is a bubble about to burst come true, the thing about tech bubbles is that there's always some residual technology that survives. It's likely that using LLMs to write and assist with code is something that's going to stick. So we're going to be spending the next 12 months figuring out what popularity means in this new age, and what metrics might be useful to measure.
Having said that, IEEE Spectrum still ranks programming language popularity three ways — based on use among working programmers, demand from employers, and "trending" in the zeitgeist — using seven different metrics.
Their results? Among programmers, "we see that once again Python has the top spot, with the biggest change in the top five being JavaScript's drop from third place last year to sixth place this year. As JavaScript is often used to create web pages, and vibe coding is often used to create websites, this drop in the apparent popularity may be due to the effects of AI... In the 'Jobs' ranking, which looks exclusively at what skills employers are looking for, we see that Python has also taken 1st place, up from second place last year, though SQL expertise remains an incredibly valuable skill to have on your resume."
Read more of this story at Slashdot.
Researchers (Including Google) are Betting on Virtual 'World Models' for Better AI
"Today's AIs are book smart," reports the Wall Street Journal. "Everything they know they learned from available language, images and videos. To evolve further, they have to get street smart."
And that requires "world models," which are "gaining momentum in frontier research and could allow technology to take on new roles in our lives."
The key is enabling AIs to learn from their environments and faithfully represent an abstract version of them in their "heads," the way humans and animals do. To do it, developers need to train AIs by using simulations of the world. Think of it like learning to drive by playing "Gran Turismo" or learning to fly from "Microsoft Flight Simulator." These world models include all the things required to plan, take actions and make predictions about the future, including physics and time... There's an almost unanimous belief among AI pioneers that world models are crucial to creating next-generation AI. And many say they will be critical to someday creating better-than-human "artificial general intelligence," or AGI. Stanford University professor and AI "godmother" Fei-Fei Li has raised $230 million to launch world-model startup World Labs...
Google DeepMind researchers set out to create a system that could generate real-world simulations with an unprecedented level of fidelity. The result, Genie 3 — which is still in research preview and not publicly available — can generate photo-realistic, open-world virtual landscapes from nothing more than a text prompt. You can think of Genie 3 as a way to quickly generate what's essentially an open-world videogame that can be as faithful to the real world as you like. It's a virtual space in which a baby AI can endlessly play, make mistakes and learn what it needs to do to achieve its goals, just as a baby animal or human does in the real world. That experimentation process is called reinforcement learning. Genie 3 is part of a system that could help train the AI that someday pilots robots, self-driving cars and other "embodied" AIs, says project co-lead Jack Parker-Holder. And the environments could be filled with people and obstacles: An AI could learn how to interact with humans by observing them moving around in that virtual space, he adds.
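The "experimentation process" described above can be made concrete with a toy example. This is a generic sketch of tabular Q-learning, one of the simplest forms of reinforcement learning, and has nothing to do with Genie 3's actual internals: an agent in a tiny one-dimensional world learns, purely by trial, error and reward, to walk toward a goal.

```python
# Minimal illustration of reinforcement learning (tabular Q-learning) in a
# toy 1-D world: an agent starts at position 0 and learns to reach a goal
# at position 4. A generic sketch only -- not how Genie 3 trains anything.

import random

N, GOAL = 5, 4
ACTIONS = (1, -1)                       # +1 = step right, -1 = step left
Q = {(s, a): 0.0 for s in range(N) for a in ACTIONS}
alpha, gamma, eps = 0.5, 0.9, 0.2       # learning rate, discount, exploration

random.seed(0)
for _ in range(500):                    # training episodes
    s = 0
    while s != GOAL:
        # Epsilon-greedy: mostly exploit the best-known action, sometimes explore.
        if random.random() < eps:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        s2 = min(max(s + a, 0), N - 1)  # environment transition (walls at ends)
        r = 1.0 if s2 == GOAL else 0.0  # reward only upon reaching the goal
        # Standard Q-learning update toward reward plus discounted future value.
        Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, b)] for b in ACTIONS) - Q[(s, a)])
        s = s2

# After training, the greedy policy steps right toward the goal from every state.
policy = {s: max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(GOAL)}
print(policy)
```

The point of the analogy: nothing tells the agent what "right" means; it discovers the policy by acting in a simulated environment and observing rewards, which is the same learning loop a world model like Genie 3 is meant to host at vastly greater fidelity.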
"It isn't clear whether all these bets will lead to the superintelligence that corporate leaders predict," the article concedes.
"But in the short term, world models could make AIs better at tasks at which they currently falter, especially in spatial reasoning."
Read more of this story at Slashdot.
Million-Year-Old Skull Rewrites Human Evolution, Scientists Claim
The BBC reports that a million-year-old human skull found in China suggests that the human species "began to emerge at least half a million years earlier than we thought, researchers are claiming in a new study."
It also shows that we co-existed with other sister species, including Neanderthals, for much longer than we've come to believe, they say.
The scientists claim their analysis "totally changes" our understanding of human evolution, and if correct, it would certainly rewrite a key early chapter in our history. But other experts, in a field rife with disagreement over our emergence on the planet, say the new study's conclusions are plausible but far from certain.
The discovery, published in the leading scientific journal Science, shocked the research team, which included scientists from a university in China and the UK's Natural History Museum. "From the very beginning, when we got the result, we thought it was unbelievable. How could that be so deep into the past?" said Prof Xijun Ni of Fudan University, who co-led the analysis. "But we tested it again and again to test all the models, use all the methods, and we are now confident about the result, and we're actually very excited."
Thanks to long-time Slashdot reader sinij for sharing the article.
Read more of this story at Slashdot.
California Now Has 68% More EV Chargers Than Gas Nozzles, Continues Green Energy Push
Six months ago California had 48% more public and "shared" private EV chargers than gasoline nozzles. (In March California had 178,000 public and shared private EV chargers, versus about 120,000 gas nozzles.)
Since then they've added 23,000 more public/shared charging ports — and announced this week that there are now 68% more EV charger ports than gasoline nozzles statewide. "Thanks to the state's ever-expanding charger network, 94% of Californians live within 10 minutes of an EV charger," according to the announcement from the state's energy policy agency. And the California Energy Commission staff told CleanTechnica they expect more chargers in the future. "We are watching increased private investment by consortiums like IONNA and OEMs like Rivian, Ford, and others that are actively installing EV charging stations throughout the state."
Clean Technica notes that in 2019 the state had roughly 42,000 charging ports, and now there are a little over 200,000. (And today there are about 800,000 home EV chargers.)
This week California announced another milestone: that in 2024 nearly 23% of all the state's new truck sales — that's trucks, buses, and vans — were zero-emission vehicles. (The state subsidizes electric trucks — $200 million was requested on the program's first day.)
Greenhouse gas emissions in California are down 20% since 2000, even as the state's GDP increased 78% over that same period, all while becoming the world's fourth-largest economy.
The state also continues to set clean energy records. California was powered by two-thirds clean energy in 2023, the latest year for which data is available — the largest economy in the world to achieve this level of clean energy. The state has run on 100% clean electricity for some part of the day almost every day this year.
"Last year, California ran on 100% clean electricity for the equivalent of 51 days," notes another announcement, which points out California has 15,763 MW of battery storage capacity — roughly a third of the amount projected to be needed by 2045.
Read more of this story at Slashdot.
Mistral's New Plan for Improving Its AI Models: Training Data from Enterprises
Paris-based AI giant Mistral "is pushing to improve its models," reports the Wall Street Journal, "by looking inside legacy enterprises that hold some of the world's last untapped data reserves...."
Mistral's approach will be to form partnerships with enterprises to further train existing models on their own proprietary data, a phenomenon known as post-training... [At Dutch chip-equipment company ASML], Mistral embeds its own solutions architects, applied AI engineers and applied scientists into the enterprise to work on improving models with the company's data. [While Mistral sells some models under a commercial license], this co-creation strategy allows Mistral to make money off the services side of its business and afford to give away its open source AI free of charge, while improving model performance for the customer with more industry context...
This kind of hand-holding approach is necessary for most companies to tackle AI successfully, said Arthur Mensch [co-founder and chief executive of Mistral]. "The very high-tech companies [and] a couple of banks are able to do it on their own. But when it comes to getting some [return on investment] from use cases, in general, they fail," he said. Mensch attributes that in part to a mismatch between expectations and reality. "The curse of AI is that it looks like magic. So you can very quickly make something that looks amazing to your boss," but it doesn't scale or work more broadly, he said. In other cases, enterprises simply might not know what to focus on. For example, it is a mistake to think equipping all employees with a chatbot will create meaningful gains on the bottom line, he said. Mensch said to fully take advantage of AI, companies will have to rethink organizational structures. With information flowing more easily, they could require fewer middle managers, for example.
There is a lot of work yet to do, Mensch said, but in a large sense, the future of AI development now lies inside the enterprise itself. "This is a pattern that we've seen with many of our customers: At some point, the capabilities of the frontier model can only be increased if we partner," he said.
Read more of this story at Slashdot.
Should Salesforce's Tableau Be Granted a Patent On 'Visualizing Hierarchical Data'?
Long-time Slashdot reader theodp says America's Patent and Trademark Office (USPTO) has granted Tableau (Salesforce's visual analytics platform) a patent covering "Data Processing For Visualizing Hierarchical Data":
"A provided data model may include a tree specification that declares parent-child relationships between objects in the data model. In response to a query associated with objects in the data model: employing the parent-child relationships to determine a tree that includes parent objects and child objects from the objects based on the parent-child relationships; determining a root object based on the query and the tree; traversing the tree from the root object to visit the child objects in the tree; determining partial results based on characteristics of the visited child objects such that the partial results are stored in an intermediate table; and providing a response to the query that includes values based on the intermediate table and the partial results."
A set of 15 simple drawings is provided to support the legal and tech gobbledygook of the invention claims. A person can have a manager, Tableau explains in Figures 5-6 of its accompanying drawings, and that manager can also manage and be managed by other people. Not only that, Tableau illustrates in Figures 7-10 that computers can be used to count how many people report to a manager. How does this magic work, you ask? Well, you "generate [a] tree" [Fig. 13] and "traverse a tree" [Fig. 15], Tableau explains. But wait, there's more — you can also display the people who report to a manager in multi-level or nested pie charts (aka Sunburst charts), Tableau demonstrates in Fig. 11.
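For readers wondering how unremarkable "generate a tree and traverse a tree" really is, here's a sketch of the whole idea in a few lines of first-year-CS Python. This is an illustration of the generic technique, not Tableau's patented implementation: build a tree from manager-employee (parent-child) pairs, then traverse it from a root to count how many people report to a manager, directly or indirectly.

```python
# Build a tree from parent-child (manager, employee) pairs, then traverse
# it to count all descendants of a given root. The names are hypothetical.

from collections import defaultdict

def build_tree(pairs):
    """Map each manager to a list of direct reports."""
    children = defaultdict(list)
    for manager, employee in pairs:
        children[manager].append(employee)
    return children

def count_reports(children, root):
    """Traverse the tree from `root`, counting every descendant visited."""
    total = 0
    stack = list(children[root])
    while stack:
        person = stack.pop()
        total += 1
        stack.extend(children[person])
    return total

org = build_tree([
    ("alice", "bob"), ("alice", "carol"),
    ("bob", "dave"), ("bob", "erin"),
])
print(count_reports(org, "alice"))  # 4 (bob, carol, dave, erin)
print(count_reports(org, "bob"))    # 2 (dave, erin)
```

Rendering those per-manager counts as nested rings is then exactly what off-the-shelf Sunburst chart libraries like Plotly already do.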
Interestingly, Tableau released a "pre-Beta" Sunburst chart type in late April 2023 but yanked it at the end of June 2023 (others have long-supported Sunburst charts, including Plotly). So, do you think Tableau should be awarded a patent in 2025 on a concept that has roots in circa-1921 Sunburst charts and tree algorithms taught to first-year CS students in circa-1975 Data Structures courses?
Read more of this story at Slashdot.
