Powered by AI
Login to get more features including translations and can subscribe to get daily digest and more.
2025-05-07

Apple is exploring AI search engines like ChatGPT for Safari, signaling a shift from Google’s $20B deal. How will this reshape search competition?

The recent detection of a massive Chinese nuclear fusion facility by US surveillance satellites has sent ripples through the international scientific and defense communities. This groundbreaking discovery reveals China's ambitious push toward mastering fusion technology, potentially revolutionizing global energy production while raising serious concerns about its military applications and international…
In an era where social media platforms are the driving force behind data generation, optimizing the data ingestion process has become crucial for machine learni

With the help of artificial intelligence, a man was 'brought back to life' at his killer's sentencing to deliver a victim's statement himself.

Microsoft says that it's embracing Google's recently launched open protocol, Agent2Agent, for allowing AI 'agents' to communicate with each other.

The hum of the AI co-pilot has become a familiar soundtrack in the world of software development. These intelligent tools, promising increased efficiency and code generation prowess, have been embraced with open arms by many. But what happens when this reliance morphs into over-dependence? What are the potential pitfalls of blindly trusting algorithms we don’t fully comprehend, especially when they occasionally – or even frequently – get it wrong? And perhaps most worryingly, what becomes of the core skills that define a truly capable software developer?

With modern tools and APIs, it no longer takes a massive budget or dedicated AI team to get results.
Clinical trials, which are the backbone of medical research, have become increasingly complex, generating massive amounts of data from diverse sources. In a wor
In this rapidly growing digital era, artificial intelligence (AI) is not just enhancing convenience but fundamentally transforming industries. One such domain e

This study proposes a semi-weakly supervised learning approach for pulmonary embolism (PE) detection on CT pulmonary angiography (CTPA) to alleviate the resource-intensive burden of exhaustive medical image annotation. Attention-based CNN-RNN models were trained on the RSNA pulmonary embolism CT dataset and externally validated on a pooled dataset (Aida and FUMPE). Three configurations included weak (examination-level labels only), strong (all examination and slice-level labels), and semi-weak (examination-level labels plus a limited subset of slice-level labels). The proportion of slice-level labels varying from 0 to 100%. Notably, semi-weakly supervised models using approximately one-quarter of the total slice-level labels achieved an AUC of 0.928, closely matching the strongly supervised model’s AUC of 0.932. External validation yielded AUCs of 0.999 for the semi-weak and 1.000 for the strong model. By reducing labeling requirements without sacrificing diagnostic accuracy, this method streamlines model development, accelerates the integration of models into clinical practice, and enhances patient care.
2025-05-06

Giving AI systems the ability to focus on particular brain regions can make them much better at reconstructing images of what a monkey is looking at from brain recordings

Nvidia warns U.S. AI hardware export rules may backfire, accelerating Huawei’s AI rise. The AI Diffusion Rule could fracture global standards and threaten America.

Tumors exhibit an increased ability to obtain and metabolize nutrients. Here, we implant engineered adipocytes that outcompete tumors for nutrients and show that they can substantially reduce cancer progression, a technology termed adipose manipulation transplantation (AMT). Adipocytes engineered to use increased amounts of glucose and fatty acids by upregulating UCP1 were placed alongside cancer cells or xenografts, leading to significant cancer suppression. Transplanting modulated adipose organoids in pancreatic or breast cancer genetic mouse models suppressed their growth and decreased angiogenesis and hypoxia. Co-culturing patient-derived engineered adipocytes with tumor organoids from dissected human breast cancers significantly suppressed cancer progression and proliferation. In addition, cancer growth was impaired by inducing engineered adipose organoids to outcompete tumors using tetracycline or placing them in an integrated cell-scaffold delivery platform and implanting them next to the tumor. Finally, we show that upregulating UPP1 in adipose organoids can outcompete a uridine-dependent pancreatic ductal adenocarcinoma for uridine and suppress its growth, demonstrating the potential customization of AMT. Adipose manipulation transplantation can reduce tumor growth and proliferation in vitro and in mouse models.

Power bursts in large AI workloads can threaten to overwhelm the grid

We present DoomArena, a security evaluation framework for AI agents. DoomArena is designed on three principles: 1) It is a plug-in framework and integrates easily into realistic agentic frameworks like BrowserGym (for web agents) and $τ$-bench (for tool calling agents); 2) It is configurable and allows for detailed threat modeling, allowing configuration of specific components of the agentic framework being attackable, and specifying targets for the attacker; and 3) It is modular and decouples the development of attacks from details of the environment in which the agent is deployed, allowing for the same attacks to be applied across multiple environments. We illustrate several advantages of our framework, including the ability to adapt to new threat models and environments easily, the ability to easily combine several previously published attacks to enable comprehensive and fine-grained security testing, and the ability to analyze trade-offs between various vulnerabilities and performance. We apply DoomArena to state-of-the-art (SOTA) web and tool-calling agents and find a number of surprising results: 1) SOTA agents have varying levels of vulnerability to different threat models (malicious user vs malicious environment), and there is no Pareto dominant agent across all threat models; 2) When multiple attacks are applied to an agent, they often combine constructively; 3) Guardrail model-based defenses seem to fail, while defenses based on powerful SOTA LLMs work better. DoomArena is available at https://github.com/ServiceNow/DoomArena.

The Federal Government has announced the training of over 200,000 Nigerians on Artificial Intelligence (AI) and emerging technologies to build a digitally skilled workforce and position the country as a continental leader in AI innovation.
In today's rapidly evolving technological landscape, artificial intelligence (AI) has emerged as a transformative force, particularly in the realm of cloud econ

In computer chess, the methods that defeated the world champion, Kasparov, in 1997, were based on massive, deep search. At the time, this was looked upon with dismay by the majority of computer-chess researchers who had pursued methods that leveraged human understanding of the special structure of chess. When a simpler, search-based approach with special hardware and software proved vastly more effective, these human-knowledge-based chess researchers were not good losers. They said that ``brute force" search may have won this time, but it was not a general strategy, and anyway it was not how people played chess. These researchers wanted methods based on human input to win and were disappointed when they did not.

AI is going to get really, really good at writing code. It is also thousands of times cheaper than humans. What does this mean for the software profession? This is a kind of companion piece to my…

It will take decades, not years, for artificial intelligence to transform society in the revolutionary ways that big developer labs and companies have been predicting, say AI researchers at Princeton University. AI, they argue, is a general-purpose technology like electricity which will not make human labour redundant
In the modern digital era, fraud in programmatic advertising has become a multi-billion-dollar challenge, threatening the integrity of digital marketing and lea

How to use AI coding assistants without letting your hard-earned engineering skills wither away.
Technology used to be loud, visible, and unavoidable. From bulky desktop computers to buzzing fax machines, it demanded our attention. But in recent years, a su

A new wave of 'reasoning' systems from companies like OpenAI is producing incorrect information more often. Even the companies don't know why.
2025-05-05

GPUs are the most popular platform for accelerating HPC workloads, such as artificial intelligence and science simulations. However, most microarchitectural research in academia relies on GPU core pipeline designs based on architectures that are more than 15 years old. This paper reverse engineers modern NVIDIA GPU cores, unveiling many key aspects of its design and explaining how GPUs leverage hardware-compiler techniques where the compiler guides hardware during execution. In particular, it reveals how the issue logic works including the policy of the issue scheduler, the structure of the register file and its associated cache, and multiple features of the memory pipeline. Moreover, it analyses how a simple instruction prefetcher based on a stream buffer fits well with modern NVIDIA GPUs and is likely to be used. Furthermore, we investigate the impact of the register file cache and the number of register file read ports on both simulation accuracy and performance. By modeling all these new discovered microarchitectural details, we achieve 18.24% lower mean absolute percentage error (MAPE) in execution cycles than previous state-of-the-art simulators, resulting in an average of 13.98% MAPE with respect to real hardware (NVIDIA RTX A6000). Also, we demonstrate that this new model stands for other NVIDIA architectures, such as Turing. Finally, we show that the software-based dependence management mechanism included in modern NVIDIA GPUs outperforms a hardware mechanism based on scoreboards in terms of performance and area.

We assess how physically realistic the ''simulation hypothesis'' for this Universe is, based on physical constraints arising from the link between information and energy, and on known astrophysical constraints. We investigate three cases: the simulation of the entire visible Universe, the simulation of Earth only, or a low resolution simulation of Earth, compatible with high-energy neutrino observations. In all cases, the amounts of energy or power required by any version of the simulation hypothesis are entirely incompatible with physics, or (literally) astronomically large, even in the lowest resolution case. Only universes with very different physical properties can produce some version of this Universe as a simulation. On the other hand, our results show that it is just impossible that this Universe is simulated by a universe sharing the same properties, regardless of technological advancements of the far future.

America must discard the belief that it is beating China in the innovation race.

Long-term, widespread AI coding might hamper learning, problem-solving and—eventually—the ability to maintain software systems.