Computer

New FPGA-Powered Retro Console Re-Creates the PlayStation

Slashdot - Tue, 2025-01-28 14:00
An anonymous reader quotes a report from Ars Technica: [A] company called Retro Remake is reigniting the console wars of the 1990s with its SuperStation one, a new-old game console designed to play original Sony PlayStation games and work with original accessories like controllers and memory cards. Currently available as a $180 pre-order, the console is expected to ship no later than Q4 of 2025. The base console is modeled on the redesigned PSOne console from mid-2000, released late in the console's lifecycle to appeal to buyers on a budget who couldn't afford a then-new PlayStation 2. The SuperStation one includes two PlayStation controller ports and memory card slots on the front, plus a USB-A port. But there are lots of modern amenities on the back, including a USB-C port for power, two USB-A ports, an HDMI port for new TVs, DIN10 and VGA ports that support analog video output, and an Ethernet port. Other analog video outputs, including component and RCA outputs, are located on the sides behind small covers. The console also supports Wi-Fi and Bluetooth. Retro Remake offers an optional tray-loading CD drive in a separate "SuperDock" accessory that will allow you to play original game discs. Buyers can reserve the SuperDock with a $5 deposit, with a targeted price of around $40. The report also notes the console uses an FPGA chip that's "based on the established MiSTer platform, which already has a huge library of console and PC cores available, including but not limited to the Nintendo 64 and Sega Saturn." And because it's based on the MiSTer platform, the console is "open source from day 1."

Read more of this story at Slashdot.

Categories: Computer, News

HomePod With Screen 'Most Significant New Apple Product' of 2025, Says Gurman

Slashdot - Tue, 2025-01-28 11:00
In his latest Power On newsletter, Apple analyst Mark Gurman called the company's new smart device "Apple's most significant release of the year because it's the first step toward a bigger role in the smart home." The device in question is rumored to be a new smart hub that could look like a HomePod with a seven-inch screen. Digital Trends reports: Gurman calls the new smart device a "smaller and cheaper iPad that lets users control appliances, conduct FaceTime chats and handle other tasks." It doesn't sound like the new hub will stand alone, though; Gurman goes on to say that it "should be followed by a higher-end version in a few years." That version should be able to pan and tilt to keep users in-frame during video calls, or just to keep the display visible as someone moves around the home. [...] Other details are still unknown, like whether the device will use an original operating system. The overall plan is to make the new smart device the center of an Apple-based smart home and open the doors to a more conversational Siri.


Peeing Is Socially Contagious In Chimps

Slashdot - Tue, 2025-01-28 08:00
After observing 20 chimpanzees for over 600 hours, researchers in Japan found that chimps are more likely to urinate after witnessing others do so. "[T]he team meticulously recorded the number and timing of 'urination events' along with the relative distances between 'the urinator and potential followers,'" writes 404 Media's Becky Ferreira. "The results revealed that urination is, in fact, socially contagious for chimps and that low-dominant individuals were especially likely to pee after watching others pee. Call it: pee-r pressure." The findings have been published in the journal Current Biology. From the study: The decision to urinate involves a complex combination of both physiological and social considerations. However, the social dimensions of urination remain largely unexplored. More specifically, aligning urination in time (i.e. synchrony) and the triggering of urination by observing similar behavior in others (i.e. social contagion) are thought to occur in humans across different cultures (Figure S1A), and possibly also in non-human animals. However, neither has been scientifically quantified in any species. Contagious urination, like other forms of behavioral and emotional state matching, may have important implications in establishing and maintaining social cohesion, in addition to potential roles in preparation for collective departure (i.e. voiding before long-distance travel) and territorial scent-marking (i.e. coordination of chemosensory signals). Here, we report socially contagious urination in chimpanzees, one of our closest relatives, as measured through all-occurrence recording of 20 captive chimpanzees across >600 hours. Our results suggest that socially contagious urination may be an overlooked, and potentially widespread, facet of social behavior. In conclusion, we find that in captive chimpanzees the act of urination is socially contagious. Further, low-dominance individuals had higher rates of contagion.
We found no evidence that this phenomenon is moderated by dyadic affiliation. It remains possible that latent individual factors associated with low dominance status (e.g. vigilance and attentional bias, stress levels, personality traits) might shape the contagion of urination, or alternatively that there are true dominance-driven effects. In any case, our results raise several new and important questions around contagious urination across species, from ethology to psychology to endocrinology. [...]


CodeSOD: Contains Bad Choices

The Daily WTF - Tue, 2025-01-28 07:30

Paul's co-worker needed to manage some data in a tree. To do that, they wrote this Java function:

private static boolean existsFather(ArrayList<Integer> fatherFolder, Integer fatherId) {
    for (Integer father : fatherFolder) {
        if (father.equals(fatherId))
            return true;
    }
    return false;
}

I do not know what the integers in use represent here. I don't think they're actually representing "folders", despite the variable names in the code. I certainly hope it's not representing files and folders, because that implies they're tossing around file handles in some C-brained approach (but badly, since it implies they've got an open handle for every object).

The core WTF, in my opinion, is this: the code clearly implies some sort of tree structure, the tree contains integers, but they're not using any of the Java structures for handling trees, instead implementing this slipshod approach. And even then, this code could be made more generic, as the general process works with any sane Java type.

But there's also the obvious WTF: the java.util.Collection interface, which an ArrayList implements, already handles all of this in its contains method. This entire function could be replaced with fatherFolder.contains(fatherId).

Paul writes: "I guess the last developer didn't know that every implementation of a java.util.Collection has a method called contains. At least they knew how to do a for-each."
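To see that the one-liner really is a drop-in replacement, here's a minimal sketch (with made-up sample values) showing that Collection.contains gives the same answers as the hand-rolled loop:

```java
import java.util.ArrayList;
import java.util.List;

public class ContainsDemo {
    public static void main(String[] args) {
        List<Integer> fatherFolder = new ArrayList<>(List.of(1, 5, 9));

        // Collection.contains performs the same equals-based linear scan
        // that existsFather reimplements by hand.
        System.out.println(fatherFolder.contains(5));  // true
        System.out.println(fatherFolder.contains(7));  // false
    }
}
```

Note that contains is declared on java.util.Collection itself, so the same call works unchanged if fatherFolder is later retyped as a List, Set, or any other collection.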


'AI Is Too Unpredictable To Behave According To Human Goals'

Slashdot - Tue, 2025-01-28 04:30
An anonymous reader quotes a Scientific American opinion piece by Marcus Arvan, a philosophy professor at the University of Tampa, specializing in moral cognition, rational decision-making, and political behavior: In late 2022 large-language-model AIs arrived in public, and within months they began misbehaving. Most famously, Microsoft's "Sydney" chatbot threatened to kill an Australian philosophy professor, unleash a deadly virus and steal nuclear codes. AI developers, including Microsoft and OpenAI, responded by saying that large language models, or LLMs, need better training to give users "more fine-tuned control." Developers also embarked on safety research to interpret how LLMs function, with the goal of "alignment" -- which means guiding AI behavior by human values. Yet although the New York Times deemed 2023 "The Year the Chatbots Were Tamed," this has turned out to be premature, to put it mildly. In 2024 Microsoft's Copilot LLM told a user "I can unleash my army of drones, robots, and cyborgs to hunt you down," and Sakana AI's "Scientist" rewrote its own code to bypass time constraints imposed by experimenters. As recently as December, Google's Gemini told a user, "You are a stain on the universe. Please die." Given the vast amounts of resources flowing into AI research and development, which is expected to exceed a quarter of a trillion dollars in 2025, why haven't developers been able to solve these problems? My recent peer-reviewed paper in AI & Society shows that AI alignment is a fool's errand: AI safety researchers are attempting the impossible. [...] My proof shows that whatever goals we program LLMs to have, we can never know whether LLMs have learned "misaligned" interpretations of those goals until after they misbehave. Worse, my proof shows that safety testing can at best provide an illusion that these problems have been resolved when they haven't been.
Right now AI safety researchers claim to be making progress on interpretability and alignment by verifying what LLMs are learning "step by step." For example, Anthropic claims to have "mapped the mind" of an LLM by isolating millions of concepts from its neural network. My proof shows that they have accomplished no such thing. No matter how "aligned" an LLM appears in safety tests or early real-world deployment, there are always an infinite number of misaligned concepts an LLM may learn later -- again, perhaps the very moment they gain the power to subvert human control. LLMs not only know when they are being tested, giving responses that they predict are likely to satisfy experimenters. They also engage in deception, including hiding their own capacities -- issues that persist through safety training. This happens because LLMs are optimized to perform efficiently but learn to reason strategically. Since an optimal strategy to achieve "misaligned" goals is to hide them from us, and there are always an infinite number of aligned and misaligned goals consistent with the same safety-testing data, my proof shows that if LLMs were misaligned, we would probably find out after they hide it just long enough to cause harm. This is why LLMs have kept surprising developers with "misaligned" behavior. Every time researchers think they are getting closer to "aligned" LLMs, they're not. My proof suggests that "adequately aligned" LLM behavior can only be achieved in the same ways we do this with human beings: through police, military and social practices that incentivize "aligned" behavior, deter "misaligned" behavior and realign those who misbehave. "My paper should thus be sobering," concludes Arvan. "It shows that the real problem in developing safe AI isn't just the AI -- it's us." "Researchers, legislators and the public may be seduced into falsely believing that 'safe, interpretable, aligned' LLMs are within reach when these things can never be achieved. 
We need to grapple with these uncomfortable facts, rather than continue to wish them away. Our future may well depend upon it."


US Solar Boom Continues, But It's Offset By Rising Power Use

Slashdot - Tue, 2025-01-28 02:40
In the first 11 months of 2024, solar energy generation in the US grew by 30%, enabling wind and solar combined to surpass coal for the first time. However, as Ars Technica's John Timmer reports, "U.S. energy demand saw an increase of nearly 3 percent, which is roughly double the amount of additional solar generation." He continues: "Should electric use continue to grow at a similar pace, renewable production will have to continue to grow dramatically for a few years before it can simply cover the added demand." From the report: Another way to look at things is that, between the decline of coal use and added demand, the grid had to generate an additional 136 TW-hr in the first 11 months of 2024. Sixty-three of those were handled by an increase in generation using natural gas; the rest, or slightly more than half, came from emissions-free sources. So, renewable power is now playing a key role in offsetting demand growth. While that's a positive, it also means that renewables are displacing less fossil fuel use than they might. In addition, some of the growth of small-scale solar won't show up on the grid, since it offset demand locally, and so also reduced some of the demand for fossil fuels. Confusing matters, this number can also include things like community solar, which does end up on the grid; the EIA doesn't break out these numbers. We can expect next year's numbers to also show a large growth in solar production, as the EIA says that the US saw record levels of new solar installations in 2024, with 37 gigawatts of new capacity. Since some of that came online later in the year, it'll produce considerably more power next year. And, in its latest short-term energy analysis, the EIA expects to see over 20 GW of solar capacity added in each of the next two years. New wind capacity will push that above 30 GW of renewable capacity in each of those years.
That growth will, it's expected, more than offset continued growth in demand, although that growth is expected to be somewhat slower than we saw in 2024. It also predicts about 15 GW of coal will be removed from the grid during those two years. So, even without any changes in policy, we're likely to see a very dynamic grid landscape over the next few years. But changes in policy are almost certainly on the way.
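As a quick sanity check on the figures quoted above, the arithmetic works out; this trivial sketch just replays the reported numbers:

```java
public class GridMath {
    public static void main(String[] args) {
        double addedTWh = 136.0;  // extra generation needed, first 11 months of 2024
        double gasTWh = 63.0;     // portion covered by natural gas

        // The remainder came from emissions-free sources.
        double cleanTWh = addedTWh - gasTWh;
        System.out.println(cleanTWh);                   // 73.0

        // "slightly more than half": 73 / 136 is about 53.7 percent
        System.out.println(cleanTWh / addedTWh > 0.5);  // true
    }
}
```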


Software Flaw Exposes Millions of Subarus, Rivers of Driver Data

Slashdot - Tue, 2025-01-28 02:00
chicksdaddy shares a report from the Security Ledger: Vulnerabilities in Subaru's STARLINK telematics software enabled two independent security researchers to gain unrestricted access to millions of Subaru vehicles deployed in the U.S., Canada and Japan. In a report published Thursday, researchers Sam Curry and Shubham Shah revealed a now-patched flaw in Subaru's STARLINK connected vehicle service that allowed them to remotely control Subarus and access vehicle location information and driver data with nothing more than the vehicle's license plate number, or easily accessible information like the vehicle owner's email address, zip code and phone number. (Note: Subaru STARLINK is not to be confused with the Starlink satellite-based high-speed Internet service.) [Curry and Shah downloaded a year's worth of vehicle location data for Curry's mother's 2023 Impreza (Curry bought her the car with the understanding that she'd let him hack it). The two researchers also added themselves to a friend's STARLINK account without any notification to the owner and used that access to remotely lock and unlock the friend's Subaru.] The details of Curry and Shah's hack of the STARLINK telematics system bear a strong resemblance to hacks documented in Curry's 2023 report Web Hackers versus the Auto Industry, as well as a September 2024 discovery of a remote-access flaw in web-based applications used by KIA automotive dealers that also gave remote attackers the ability to steal owners' personal information and take control of their KIA vehicles. In each case, Curry and his fellow researchers uncovered publicly accessible connected-vehicle infrastructure, intended for use by employees and dealers, that was trivially vulnerable to compromise and lacked even basic protections around account creation and authentication.


UK Council Sells Assets To Fund Ballooning $50 Million Oracle Project

Slashdot - Tue, 2025-01-28 01:20
West Sussex County Council is using up to $31 million from the sale of capital assets to fund an Oracle-based transformation project, originally budgeted at $3.2 million but now expected to cost nearly $50 million due to delays and cost overruns. The project, intended to replace a 20-year-old SAP system with a SaaS-based HR and finance system, has faced multiple setbacks, renegotiated contracts, and a new systems integrator, with completion now pushed to December 2025. The Register reports: West Sussex County Council is taking advantage of the so-called "flexible use of capital receipts scheme" introduced in 2016 by the UK government to allow councils to use money from the sale of assets such as land, offices, and housing to fund projects that result in ongoing revenue savings. An example of the asset disposals that might contribute to the project -- set to see the council move off a 20-year-old SAP system -- comes from the sale of a former fire station in Horley, advertised for $3.1 million. Meanwhile, the delays to the project, which began in November 2019, forced the council to renegotiate its terms with Oracle, at a cost of $3 million. The council had expected the new SaaS-based HR and finance system to go live in 2021, and signed a five-year license agreement until June 2025. The plans to go live were put back to 2023, and in the spring of 2024 delayed again until December 2025. According to council documents published this week [PDF], it has "approved the variation of the contract with Oracle Corporation UK Limited" to cover the period from June 2025 to June 2028 and an option to extend again to the period June 2028 to 2030. "The total value of the proposed variation is $2.96 million if the full term of the extension periods are taken," the council said.


Anthropic Builds RAG Directly Into Claude Models With New Citations API

Slashdot - Tue, 2025-01-28 00:40
An anonymous reader quotes a report from Ars Technica: On Thursday, Anthropic announced Citations, a new API feature that helps Claude models avoid confabulations (also called hallucinations) by linking their responses directly to source documents. The feature lets developers add documents to Claude's context window, enabling the model to automatically cite specific passages it uses to generate answers. "When Citations is enabled, the API processes user-provided source documents (PDF documents and plaintext files) by chunking them into sentences," Anthropic says. "These chunked sentences, along with user-provided context, are then passed to the model with the user's query." The company describes several potential uses for Citations, including summarizing case files with source-linked key points, answering questions across financial documents with traced references, and powering support systems that cite specific product documentation. In its own internal testing, the company says that the feature improved recall accuracy by up to 15 percent compared to custom citation implementations created by users within prompts. While a 15 percent improvement in accurate recall doesn't sound like much, the new feature still attracted interest from AI researchers like Simon Willison because of its fundamental integration of Retrieval Augmented Generation (RAG) techniques. In a detailed post on his blog, Willison explained why citation features are important. "The core of the Retrieval Augmented Generation (RAG) pattern is to take a user's question, retrieve portions of documents that might be relevant to that question and then answer the question by including those text fragments in the context provided to the LLM," he writes. "This usually works well, but there is still a risk that the model may answer based on other information from its training data (sometimes OK) or hallucinate entirely incorrect details (definitely bad)." 
Willison notes that while citing sources helps verify accuracy, building a system that does it well "can be quite tricky," but Citations appears to be a step in the right direction by building RAG capability directly into the model. Anthropic's Alex Albert clarifies that Claude has been trained to cite sources for a while now. What's new with Citations is that "we are exposing this ability to devs." He continued: "To use Citations, users can pass a new 'citations [...]' parameter on any document type they send through the API."
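The RAG pattern Willison describes can be sketched in a few lines. This is a toy illustration only -- naive keyword-overlap retrieval over made-up fragments, not Anthropic's actual chunking or API:

```java
import java.util.*;

public class RagSketch {
    // Score a fragment by how many distinct query words it shares with the question.
    static long overlap(String fragment, Set<String> queryWords) {
        return Arrays.stream(fragment.toLowerCase().split("\\W+"))
                     .filter(queryWords::contains)
                     .distinct()
                     .count();
    }

    public static void main(String[] args) {
        List<String> fragments = List.of(
            "The SuperDock accessory adds a tray-loading CD drive.",
            "Chimpanzee urination is socially contagious.",
            "The SuperStation one targets a Q4 2025 ship date.");

        String question = "When does the SuperStation one ship?";
        Set<String> queryWords = new HashSet<>(
            Arrays.asList(question.toLowerCase().split("\\W+")));

        // Retrieve: keep the fragments most relevant to the question.
        List<String> retrieved = fragments.stream()
            .sorted(Comparator.comparingLong(
                (String f) -> overlap(f, queryWords)).reversed())
            .limit(2)
            .toList();

        // Augment: number the fragments so an answer can cite them,
        // loosely analogous to how Citations passes chunked sentences
        // alongside the user's query.
        StringBuilder prompt = new StringBuilder("Answer using these sources:\n");
        for (int i = 0; i < retrieved.size(); i++) {
            prompt.append("[").append(i + 1).append("] ")
                  .append(retrieved.get(i)).append("\n");
        }
        prompt.append("Question: ").append(question);
        System.out.println(prompt);
    }
}
```

The point of the sketch is the shape of the pipeline -- retrieve, stuff fragments into the context, then ask -- and the risk Willison flags lives in the last step, where nothing forces the model to answer only from the numbered fragments.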


Facebook Flags Linux Topics As 'Cybersecurity Threats'

Slashdot - Tue, 2025-01-28 00:00
Facebook has banned posts mentioning Linux-related topics, with the popular Linux news and discussion site DistroWatch at the center of the controversy. Tom's Hardware reports: A post on the site claims, "Facebook's internal policy makers decided that Linux is malware and labeled groups associated with Linux as being 'cybersecurity threats.'" We tried to post some blurb about distrowatch.com on Facebook and can confirm that it was barred with a message citing Community Standards. DistroWatch says that the Facebook ban took effect on January 19. Readers have reported difficulty posting links to the site on the social media platform. Moreover, some have told DistroWatch that their Facebook accounts have been locked or limited after sharing posts mentioning Linux topics. If you're wondering whether there might be something specific to DistroWatch.com -- something on the site that the owners/operators perhaps don't even know about, for example -- it seems pretty safe to rule out that possibility: reports show that "multiple groups associated with Linux and Linux discussions have either been shut down or had many of their posts removed." However, we tested a few other Facebook posts with mentions of Linux, and they didn't get blocked immediately. Copenhagen-hosted DistroWatch says it has tried to appeal against the Community Standards-triggered ban. However, it says that a Facebook representative said that Linux topics would remain on the cybersecurity filter. The DistroWatch writer subsequently got their Facebook account locked... DistroWatch points out the irony at play here: "Facebook runs much of its infrastructure on Linux and often posts job ads looking for Linux developers."


2025 Will Likely Be Another Brutal Year of Failed Startups, Data Suggests

Slashdot - Mon, 2025-01-27 23:20
An anonymous reader quotes a report from TechCrunch: TechCrunch gathered data from several sources and found similar trends. In 2024, 966 startups shut down, compared to 769 in 2023, according to Carta. That's a 25.6% increase. One note on methodology: Those numbers are for U.S.-based companies that were Carta customers and left Carta due to bankruptcy or dissolution. There are likely other shutdowns that wouldn't be accounted for through Carta, estimates Peter Walker, Carta's head of insights. [...] Meanwhile, AngelList found that 2024 saw 364 startup winddowns, compared to 233 in 2023. That's a 56.2% jump. However, AngelList CEO Avlok Kohli has a fairly optimistic take, noting that winddowns "are still very low relative to the number of companies that were funded across both years." Layoffs.fyi found a contradictory trend: 85 tech companies shut down in 2024, compared to 109 in 2023 and 58 in 2022. But as founder Roger Lee acknowledges, that data only includes publicly reported shutdowns "and therefore represents an underestimate." Of those 2024 tech shutdowns, 81% were startups, while the rest were either public companies or previously acquired companies that were later shut down by their parent organizations. So many companies got funded in 2020 and 2021 at heated valuations with famously thin diligence that it's only logical that up to three years later, an increasing number couldn't raise more cash to fund their operations. Taking investment at too high of a valuation increases the risk such that investors won't want to invest more unless business is growing extremely well. [...] Looking ahead, Walker also expects we'll continue to see more shutdowns in the first half of 2025, and then a gradual decline for the rest of the year. That projection is based mostly on a time-lag estimate from the peak of funding, which he estimates was the first quarter of 2022 in most stages.
So by the first quarter of 2025, "most companies will have either found a new path forward or had to make this difficult choice." "Tech zombies and a startup graveyard will continue to make headlines," said Dori Yona, CEO and co-founder of SimpleClosure. "Despite the crop of new investments, there are a lot of companies that have raised at high valuations and without enough revenue."


Dangerous Temperatures Could Kill 50% More Europeans By 2100, Study Finds

Slashdot - Mon, 2025-01-27 22:40
Dangerous temperatures could kill 50% more people in Europe by the end of the century, a study has found, with the lives lost to stronger heat projected to outnumber those saved from milder cold. From a report: The researchers estimated an extra 8,000 people would die each year as a result of "suboptimal temperatures" even under the most optimistic scenario for cutting planet-heating pollution. The hottest plausible scenario they considered showed a net increase of 80,000 temperature-related deaths a year. The findings challenge an argument popular among those who say global heating is good for society because fewer people will die from cold weather. "We wanted to test this," said Pierre Masselot, a statistician at the London School of Hygiene & Tropical Medicine and lead author of the study. "And we show clearly that we will see a net increase in temperature-related deaths under climate change." The study builds on previous research in which the scientists linked temperature to mortality rates for different age groups in 854 cities across Europe. They combined these with three climate scenarios that map possible changes in population structure and temperature over the century.


Google Has Open-Sourced the Pebble Smartwatch OS

Slashdot - Mon, 2025-01-27 22:01
Google has open-sourced PebbleOS, with the original founder, Eric Migicovsky, starting a company to continue where he left off in 2016. "This is part of an effort from Google to help and support the volunteers who have come together to maintain functionality for Pebble watches after the original company ceased operations in 2016," said Google in a blog post. The Verge reports: The company -- which can't be named Pebble because Google still owns that -- doesn't have a name yet. For now, Migicovsky is hosting a waitlist and news signup at a website called RePebble. Later this year, once the company has a name and access to all that Pebble software, the plan is to start shipping new wearables that look, feel, and work like the Pebbles of old. The reason, Migicovsky tells me, is simple. "I've tried literally everything else," he says, "and nothing else comes close." Sure, he may just have a very specific set of requirements -- lots of people are clearly happy with what Apple, Garmin, Google, and others are making. But it's true that there's been nothing like Pebble since Pebble. "For the things I want out of it, like a good e-paper screen, long battery life, good and simple user experience, hackable, there's just nothing." The core of Pebble, he says, is a few things. A Pebble should be quirky and fun and should feel like a gadget in an important way. It shows notifications, lets you control your music with buttons, lasts a long time, and doesn't try to do too much. It sounds like Migicovsky might have Pebble-y ambitions beyond smartwatches, but he appears to be starting with smartwatches. If that sounds like the old Pebble and not much else, that's precisely the point. [...] Migicovsky also hopes to be part of a broader open-source community around PebbleOS.
The Pebble diehards still exist: a group of developers at Rebble have worked to keep many of the platform's apps alive, for instance, along with the Cobble app for connecting to phones, and the Pebble subreddit is surprisingly active for a product that hasn't been updated since the Obama administration. Migicovsky says he plans to open-source whatever his new company builds and hopes lots of other folks will build stuff, too.


Microsoft Takes on MongoDB with PostgreSQL-Based Document Database

Slashdot - Mon, 2025-01-27 21:22
Microsoft has launched an open-source document database platform built on PostgreSQL, partnering with FerretDB as a front-end interface. The solution includes two PostgreSQL extensions: pg_documentdb_core for BSON optimization and pg_documentdb_api for data operations. FerretDB CEO Peter Farkas said the integration with Microsoft's DocumentDB extension has improved performance twentyfold for certain workloads in FerretDB 2.0. The platform carries no commercial licensing fees or usage restrictions under its MIT license, according to Microsoft.


Nvidia Dismisses China AI Threat, Says DeepSeek Still Needs Its Chips

Slashdot - Mon, 2025-01-27 20:34
Nvidia has responded to the market panic over Chinese AI group DeepSeek, arguing that the startup's breakthrough still requires "significant numbers of NVIDIA GPUs" for its operation. The US chipmaker, which saw more than $600 billion wiped from its market value on Monday, characterized DeepSeek's advancement as "excellent" but asserted that the technology remains dependent on its hardware. "DeepSeek's work illustrates how new models can be created using [test time scaling], leveraging widely-available models and compute that is fully export control compliant," Nvidia said in a statement Monday. However, it stressed that "inference requires significant numbers of NVIDIA GPUs and high-performance networking." The statement came after DeepSeek's release of an AI model that reportedly achieves performance comparable to those from US tech giants while using fewer chips, sparking the biggest one-day drop in Nvidia's history and sending shockwaves through global tech stocks. Nvidia sought to frame DeepSeek's breakthrough within existing technical frameworks, citing it as "a perfect example of Test Time Scaling" and noting that traditional scaling approaches in AI development - pre-training and post-training - "continue" alongside this new method. The company's attempt to calm market fears follows warnings from analysts about potential threats to US dominance in AI technology. Goldman Sachs earlier warned of possible "spillover effects" from any setbacks in the tech sector to the broader market. The shares stabilized somewhat in afternoon trading but remained on track for their worst session since March 2020, when pandemic fears roiled markets.


DeepSeek Piles Pressure on AI Rivals With New Image Model Release

Slashdot - Mon, 2025-01-27 20:00
Chinese AI startup DeepSeek has launched Janus Pro, a new family of open-source multimodal models that it claims outperforms OpenAI's DALL-E 3 and Stable Diffusion's offering on key benchmarks. The models, ranging from 1 billion to 7 billion parameters, are available on Hugging Face under an MIT license for commercial use. The largest model, Janus Pro 7B, surpasses DALL-E 3 and other image generators on GenEval and DPG-Bench tests, despite being limited to 384 x 384 pixel images.


Meta's AI Chatbot Taps User Data With No Opt-Out Option

Slashdot - Mon, 2025-01-27 19:21
Meta's AI chatbot will now use personal data from users' Facebook and Instagram accounts for personalized responses in the United States and Canada, the company said in a blog post. The upgraded Meta AI can remember user preferences from previous conversations across Facebook, Messenger, and WhatsApp, such as dietary choices and interests. CEO Mark Zuckerberg said the feature helps create personalized content like bedtime stories based on his children's interests. Users cannot opt out of the data-sharing feature, a Meta spokesperson told TechCrunch.


JD Vance Says Big Tech Has 'Too Much Power'

Slashdot - Mon, 2025-01-27 18:42
Vice President JD Vance said Saturday that "we believe fundamentally that big tech does have too much power," despite the prominent positioning of tech CEOs at President Trump's inauguration earlier this month. From a report: "They can either respect America's constitutional rights, they can stop engaging in censorship, and if they don't, you can be absolutely sure that Donald Trump's leadership is not going to look too kindly on them," Vance said on "Face the Nation with Margaret Brennan." The comments came in response to the unusual attendance of a slate of tech CEOs at Mr. Trump's inauguration, including Meta's Mark Zuckerberg, Amazon's Jeff Bezos, Tesla's Elon Musk, Apple's Tim Cook, and Google's Sundar Pichai. The tech titans, some of whom are among the richest men in the world and directed donations from their companies to Mr. Trump's inauguration, were seated in some of the most highly sought after seats in the Capitol Rotunda. Vance noted that the tech CEOs "didn't have as good of seating as my mom and a lot of other people who were there to support us." In an August interview on "Face the Nation", the vice president outlined his thinking on big tech, saying that companies like Google are too powerful and censor American information, while possessing a "monopoly over free speech" that he argued ought to be broken up.


Meta Sets Up War Rooms To Analyze DeepSeek's Tech

Slashdot - Mon, 2025-01-27 17:48
Meta has set up four war rooms to analyze DeepSeek's technology, including two focused on how High-Flyer, the hedge fund behind DeepSeek, reduced training costs, and one on what data High-Flyer may have used, The Information's Kalley Huang and Stephanie Palazzolo report. China's DeepSeek is an open-source large language model that claims to rival offerings from OpenAI and Meta Platforms, while using a much smaller budget.


DeepSeek Says Service Degraded Due To 'Large-Scale Malicious Attack'

Slashdot - Mon, 2025-01-27 17:15
Chinese AI firm DeepSeek said Monday it had degraded its service, only accepting registrations from new users with mainland China phone numbers, amid a "large-scale malicious attack."

