Cybersecurity News and Vulnerability Aggregator

Cybersecurity news aggregator

Top Cybersecurity Stories Today

The Hacker News 4h ago

Huntress is warning that threat actors are exploiting three recently disclosed security flaws in Microsoft Defender to gain elevated privileges in compromised systems. The activity involves the exploitation of three vulnerabilities that are codenamed BlueHammer (requires GitHub sign-in), RedSun, and UnDefend, all of which were released as zero-days by a researcher known as Chaotic Eclipse (

The Register Just now

Bug hiding in plain sight for over a decade lands on KEV list CISA is sounding the alarm on a newly-exploited Apache ActiveMQ bug, ordering federal agencies to patch within two weeks as attackers circle a flaw that's been quietly lurking for more than a decade.…

Cloudflare 4h ago
APT

The web has always had to adapt to new standards. It learned to speak to web browsers, and then it learned to speak to search engines. Now, it needs to speak to AI agents. Today, we are excited to introduce isitagentready.com — a new tool to help site owners understand how they can make their sites optimized for agents, from guiding agents on how to authenticate, to controlling what content agents can see, the format they receive it in, and how they pay for it. We are also introducing a new dataset to Cloudflare Radar that tracks the overall adoption of each agent standard across the Internet. We want to lead by example. That is why we are also sharing how we recently overhauled Cloudflare's Developer Documentation to make it the most agent-friendly documentation site, allowing AI tools to answer questions faster and significantly cheaper. How agent-ready is the web today? The short answer: not very. This is expected, but also shows how much more effective agents can be than they are today, if standards are adopted. To analyze this, Cloudflare Radar took the 200,000 most visited domains on the Internet; filtered out categories where agent readiness isn't important (like redirects, ad-servers, and tunneling services) to focus on businesses, publishers, and platforms that AI agents might realistically need to interact with; and scanned them using our new tool. The result is a new “Adoption

Latest

Friday, April 17
r/cybersecurity Just now

We spent the last quarter (Jan-Apr 2026) analyzing C2 infrastructure concentrated within Russian ISPs and hosting providers. The goal was to move past individual IOCs and look at things from a provider/hosting layer perspective instead. Some of the more interesting findings: * 1,252 C2 servers across 165 distinct providers. C2 traffic accounts for \~88.6% of all malicious artifacts observed, dwarfing phishing (\~4.9%) and open directories (\~5.3%). * A handful of providers carry most of the weight. TimeWeb alone had 311 C2 detections. WebHost1 (140), REG.RU (138), VDSina (86), and PROSPERO OOO (80) round out the top five. * Keitaro dominates the malware family distribution with 587 unique C2 IPs. Hajime still going strong at 191, which tells you IoT botnets aren't slowing down in that region. * Offensive tooling is well represented: Tactical RMM (87), Cobalt Strike (55 combined verified/unverified), Sliver (24), Ligolo-ng (10). * Yandex.Cloud had the widest malware diversity (11 distinct families across 39 C2 endpoints), while TimeWeb had the raw volume. The post also walks through specific campaigns we observed during the window, including Latrodectus v2.3 using ClickFix fake CAPTCHAs, Lumma Stealer abusing Google Groups for distribution, SmartApeSG delivering Remcos via DLL sideloading, and others. Full writeup with the SQL queries used, provider-level breakdowns, and campaign details here: [https://hunt.io/blog/russian-malicious-infrastructure-c2-servers-mapped](https://hunt.io/blog/russian-malicious-infrastructure-c2-servers-mapped)

The Register Just now

Bug hiding in plain sight for over a decade lands on KEV list CISA is sounding the alarm on a newly-exploited Apache ActiveMQ bug, ordering federal agencies to patch within two weeks as attackers circle a flaw that's been quietly lurking for more than a decade.…

The Register Just now

Or, how public information and a €5 tracker exposed an avoidable opsec lapse Militaries around the world spend countless hours training, developing policies, and implementing best operational security practices, so imagine the size of the egg on the face of the Dutch navy when journalists managed to track one of its warships for less than the cost of some hagelslag and a coffee.…

r/cybersecurity 1h ago
CVE

Most writeups of BlueHammer describe what it does. I read the actual PoC (FunnyApp.cpp, \~100KB of C++) and the most important line isn't in the oplock setup, the NT object namespace redirect, or the Cloud Files freeze. It's a comment. The filestoleak array ships with one target active and two commented out: const wchar\_t\* filestoleak\[\] = { {L"\\\\Windows\\\\System32\\\\Config\\\\SAM"} /\*,{L"\\\\Windows\\\\System32\\\\Config\\\\SYSTEM"},{L"\\\\Windows\\\\System32\\\\Config\\\\SECURITY"}\*/ }; SAM alone is a partial dump. The hashes are encrypted with the boot key — which lives in SYSTEM. Without SYSTEM you have ciphertext. With SAM + SYSTEM you have NTLM hashes you can pass-the-hash or crack offline. SECURITY adds LSA secrets: service account credentials, cached domain logon hashes, DPAPI master keys. The complete credential package is two uncommented lines away from the published PoC. The author wrote both lines and chose what to ship.

The Hacker News 4h ago

Huntress is warning that threat actors are exploiting three recently disclosed security flaws in Microsoft Defender to gain elevated privileges in compromised systems. The activity involves the exploitation of three vulnerabilities that are codenamed BlueHammer (requires GitHub sign-in), RedSun, and UnDefend, all of which were released as zero-days by a researcher known as Chaotic Eclipse (

Cloudflare 4h ago
APT

The web has always had to adapt to new standards. It learned to speak to web browsers, and then it learned to speak to search engines. Now, it needs to speak to AI agents. Today, we are excited to introduce isitagentready.com — a new tool to help site owners understand how they can make their sites optimized for agents, from guiding agents on how to authenticate, to controlling what content agents can see, the format they receive it in, and how they pay for it. We are also introducing a new dataset to Cloudflare Radar that tracks the overall adoption of each agent standard across the Internet. We want to lead by example. That is why we are also sharing how we recently overhauled Cloudflare's Developer Documentation to make it the most agent-friendly documentation site, allowing AI tools to answer questions faster and significantly cheaper. How agent-ready is the web today? The short answer: not very. This is expected, but also shows how much more effective agents can be than they are today, if standards are adopted. To analyze this, Cloudflare Radar took the 200,000 most visited domains on the Internet; filtered out categories where agent readiness isn't important (like redirects, ad-servers, and tunneling services) to focus on businesses, publishers, and platforms that AI agents might realistically need to interact with; and scanned them using our new tool. The result is a new “Adoption

Cloudflare 4h ago

Web pages have grown 6-9% heavier every year for the past decade, spurred by the web becoming more framework-driven, interactive, and media-rich. Nothing about that trajectory is changing. What is changing is how often those pages get rebuilt and how many clients request them. Both are skyrocketing because of agents. Shared dictionaries shrink asset transfers from servers to browsers so pages load faster with less bloat on the wire, especially for returning users or visitors on a slow connection. Instead of re-downloading entire JavaScript bundles after every deploy, the browser tells the server what it already has cached, and the server only sends the file diffs. Today, we’re excited to give you a sneak peek of our support for shared compression dictionaries, show you what we’ve seen in early testing, and reveal when you’ll be able to try the beta yourself (hint: it’s April 30, 2026!). The problem: more shipping = less caching Agentic crawlers, browsers, and other tools hit endpoints repeatedly, fetching full pages, often to extract a fragment of information. Agentic actors represented just under 10% of total requests across Cloudflare's network during March 2026, up ~60% year-over-year. Every page shipped is heavier than last year and read more often by machines than ever before. But agents aren’t just consuming the web, they’re helping to build it. AI-assisted development means teams ship faster. Increasing the frequency of deploys, experiments, and iterations is great for product velocity, but terrible for caching. As agents push a one-line fix, the bundler re-chunks, filenames change, and every user on earth could re-download the entire application. Not because the code is meaningfully any different, b

Cloudflare 4h ago

Cloudflare's Wrangler CLI has published several major versions over the past six years, each containing at least some critical changes to commands, configuration, or how developers interact with the platform. Like any actively maintained open-source project, we keep documentation for older versions available. The v1 documentation carries a deprecation banner, a noindex meta tag , and canonical tags pointing to current docs. Every advisory signal says the same thing: this content is outdated, look elsewhere. AI training crawlers don’t reliably honor those signals. We use AI Crawl Control on developers.cloudflare.com , so we know that bots in the AI Crawler Category visited 4.8 million times over the last 30 days, and they consumed deprecated content at the same rate as current content. The advisory signals made no measurable difference. The effect is cumulative because AI agents don't always fetch content live; they draw on trained models. When crawlers ingest deprecated docs, agents inherit outdated foundations. Today, we’re launching Redirects for AI Training to let you enforce that verified AI training crawlers are redirected to up-to-date content. Your existing canonical tags become HTTP 301 redirects for verified AI training crawlers, automatically, with one toggle, on all paid Cloudflare plans. And because status codes are ultimately how the web communicates policy to crawlers,

Cloudflare 4h ago
AI

Running inference within 50ms of 95% of the world's Internet-connected population means being ruthlessly efficient with GPU memory. Last year we improved memory utilization with Infire , our Rust-based inference engine, and eliminated cold-starts with Omni , our model scheduling platform. Now we are tackling the next big bottleneck in our inference platform: model weights. Generating a single token from an LLM requires reading every model weight from GPU memory. On the NVIDIA H100 GPUs we use in many of our datacenters, the tensor cores can process data nearly 600 times faster than memory can deliver it, leading to a bottleneck not in compute, but memory bandwidth. Every byte that crosses the memory bus is a byte that could have been avoided if the weights were smaller. To solve this problem, we built Unweight : a lossless compression system that can make model weights up to 15–22% smaller while preserving bit-exact outputs, without relying on any special hardware. The core breakthrough here is that decompressing weights in fast on-chip memory and feeding them directly to the tensor cores avoids an extra round-trip through slow main memory. Depending on the workload, Unweight’s runtime selects from multiple execution strategies – some prioritize simplicity, others minimize memory traffic – and an autotuner picks the best one per weight matrix and batch size. This post dives into how Unweight works, but in the spirit of greater transparency and encouraging innovation in this rapidly developing space, we’re also publishing a technical paper and open sourcing the

Cloudflare 4h ago

When it comes to the Internet, performance is everything. Every millisecond shaved off a connection is a better experience for the real people using the applications and websites you build. That's why, at Cloudflare, we measure our performance constantly and share updates on a regular basis. In our last performance post , published during Birthday Week 2025, we shared that Cloudflare was the fastest network in 40% of the largest 1,000 networks in the world. At the time, we noted a nuanced reading of that figure; we were competitive in many more networks, and the gaps were often notably small. But even so, we were not satisfied with 40%. By December 2025 (our most recent available analysis), we had become the fastest provider in 60% of the top networks . Here's how we got there, and what it means. How do we measure and compare network performance? Before diving into the results, let’s review how we collect the data. We start with the 1,000 largest networks in the world by estimated population, using APNIC's data as our source. These networks represent real users in nearly every geography, giving us a broad and meaningful picture of how Internet users experience the web. To measure performance, we use TCP connection time, which is the time it takes for an end user's device to complete a TCP handshake with the endpoint they're trying to reach. We chose this metric because it most closely approximates what users actually perceive as "Internet speed." It's not so abstract that it ignores real-world constraints like congestion and distance, but it's precise enough to give us actionable data. (We've

Cloudflare 4h ago

AI is writing more code than ever. AI-assisted contributions now account for a rapidly growing share of new code across the platform. Agentic coding tools like OpenCode and Claude Code are shipping entire features in minutes. AI-generated code entering production is only going to accelerate. But the bigger shift isn't just speed — it's autonomy. Today, an AI agent writes code and a human reviews, merges, and deploys it. Tomorrow, the agent does all of that itself. The question becomes: how do you let an agent ship to production without removing every safety net? Feature flags are the answer. An agent writes a new code path behind a flag and deploys it — the flag is off, so nothing changes for users. The agent then enables the flag for itself or a small test cohort, exercises the feature in production, and observes the results. If metrics look good, it ramps the rollout. If something breaks, it disables the flag. The human doesn't need to be in the loop for every step — they set the boundaries, and the flag controls the blast radius. This is the workflow feature flags were always building toward: not just decoupling deployment from release, but decoupling human attention from every stage of the shipping process. The agent moves fast because the flag makes it safe to move fast. Today, we're announcing Flagship — Cloudflare's native feature flag service, built on OpenFeature , the CNCF open standard for feature flag evaluation. It works everywhere — Workers, Node.js, Bun, Deno, and the browser — but it's fastest on Workers, where flags are evaluated within the Cloudflare network. With the Flagship binding and OpenFeature, integration looks like this: await OpenFeature.setProviderAndWait( new FlagshipServerProvider({ binding: env.FLAGS }) ); Flagship is now available in closed beta. The problem with feature fla

r/cybersecurity 6h ago
CVE

So from what I could find there is not much information on this vulnerability. Based on [CVE-2026-27913 - Security Update Guide - Microsoft - Windows BitLocker Security Feature Bypass Vulnerability](https://msrc.microsoft.com/update-guide/vulnerability/CVE-2026-27913) only Windows servers are patched now however I would guess this affects user-end machines as well if not more. Are there any official sources saying if this is patched for users too? And if so which update did that. Thanks

Trail of Bits 6h ago
CVE

Two weeks ago, Google’s Quantum AI group published a zero-knowledge proof of a quantum circuit so optimized, they concluded that first-generation quantum computers will break elliptic curve cryptography keys in as little as 9 minutes. Today, Trail of Bits is publishing our own zero-knowledge proof that significantly improves Google’s on all metrics. Our result is not due to some quantum breakthrough, but rather the exploitation of multiple subtle memory safety and logic vulnerabilities in Google’s Rust prover code. Google has patched their proof, and their scientific claims are unaffected, but this story reflects the unique attack surface that systems introduce when they use zero-knowledge proofs. Google’s proof uses a zero-knowledge virtual machine (zkVM) to calculate the cost of a quantum circuit on three key metrics. The total number of operations and Toffoli gate count represent the running time of the circuit, and the number of qubits represents the memory requirements. Google, along with their coauthors from UC Berkeley, the Ethereum Foundation, and Stanford, published proofs for two circuits; one minimizes the number of gates, and the other minimizes qubits. Our proof improves on both. Resource Type Google’s Low-Gate

The Hacker News 6h ago

Google this week announced a new set of Play policy updates to strengthen user privacy and protect businesses against fraud, even as it revealed it blocked or removed over 8.3 billion ads globally and suspended 24.9 million accounts in 2025. The new policy updates relate to contact and location permissions in Android, allowing third-party apps to access the contact lists and a user's location in

The Hacker News 10h ago

The National Institute of Standards and Technology (NIST) has announced changes to the way it handles cybersecurity vulnerabilities and exposures (CVEs) listed in its National Vulnerability Database (NVD), stating it will only enrich those that fulfil certain conditions owing to an explosion in CVE submissions. "CVEs that do not meet those criteria will still be listed in the NVD but will not

The Register 10h ago

Pause your Mythos panic because mainstream models anyone can use already pick holes in popular software Anthropic withheld its Mythos bug-finding model from public release due to concerns that it would enable attackers to find and exploit vulnerabilities before anyone could react.…

The Hacker News 11h ago
CVE

An international law enforcement operation has taken down 53 domains and arrested four people in connection with commercial distributed denial-of-service (DDoS) operations that were used by more than 75,000 cybercriminals. The ongoing effort, dubbed Operation PowerOFF, disrupted access to the DDoS-for-hire services, took down the technical infrastructure supporting them, and obtained access to

Thursday, April 16
Troy Hunt 18h ago
CVE

Presently sponsored by: Report URI: Guarding you from rogue JavaScript! Don’t get pwned; get real-time alerts & prevent breaches #SecureYourSite I love cutting-edge tech, but I hate hyperbole, so I find AI to be a real paradox. Somewhere in that whole mess of overnight influencers, disinformation and ludicrous claims is some real "gold" - AI stuff that's genuinely useful and makes a meaningful difference. This blog post cuts straight to the good stuff, specifically how you can use AI with Have I Been Pwned to do some pretty cool things. I'll be showing examples based on OpenClaw running on the Mac Mini in the hero shot, but they're applicable to other agents that turn HIBP's data into more insightful analysis. So, let me talk about what you can do right now, what we're working on and what you'll be able to do in the future. Model Context Protocol (MCP) A quick MCP primer first: Anthropic came up with the idea of building a protocol that could connect systems to AI apps, and thus the Model Context Protocol was born: Using MCP, AI applications like Claude or ChatGPT can connect to data sources (e.g. local files, databases), tools (e.g. search engines, calculators) and workflows (e.g. specialized prompts)—enabling them to access key information and perform tasks. If I'm honest, I'm a bit on the fence as to how useful this really is ( and I'm not alone ), but creating it was a

The Register 18h ago

Bug or feature? A design flaw – or expected behavior based on a bad design choice, depending on who is telling the story – baked into Anthropic's official Model Context Protocol (MCP) puts as many as 200,000 servers at risk of complete takeover, according to security researchers.…

r/netsec 20h ago
CVE

Two day intrusion. RDP brute force with a company specific wordlist, Cobalt Strike, and a custom Rust exfiltration platform (RustyRocket) that connected to over 6,900 unique Cloudflare IPs over 443 to pull data from every reachable host over SMB. Recovered the operator README documenting three operating modes and a companion pivoting proxy for segmented networks. Personalized extortion notes addressed by name to each employee with separate templates for leadership and staff. Writeup includes screen recordings of the intrusion, full negotiation chat from their Tor portal, timeline, and IOCs.

r/netsec 21h ago
CVE

u/albinowax ’s work on request smuggling has always inspired me. I’ve followed his research, watched his talks at DEFCON and BlackHat, and spent time experimenting with his labs and tooling. Coming from a web security background, I’ve explored vulnerabilities both from a black-box and white-box perspective — understanding not just how to exploit them, but also the exact lines of code responsible for issues like SQLi, XSS, and broken access control. Request smuggling, however, always felt different. It remained something I could detect and exploit… but never fully trace down to its root cause in real-world server implementations. A few months ago, I decided to go deeper into networking and protocol internals, and now, months later, I can say that I “might” have figured out how the internet works😂 This research on HAProxy (HTTP/3, standalone mode) is the result of that journey — finally connecting the dots between protocol behavior and the actual code paths leading to the bug. (Yes, I used AI 😉 )

WIRED 22h ago

Available for free to any company that wants to use it, the “completely anonymous” app puts the pressure on porn sites and social media platforms to start blocking access by minors.

The Register 23h ago

Social engineering: 'low-cost, hard to patch, and scales well' North Korean criminals set on stealing Apple users' credentials and cryptocurrency are using a combination of social engineering and a fake Zoom software update to trick people into manually running malware on their own computers, according to Microsoft.…

The Hacker News 23h ago

Cybersecurity researchers have warned of an active malicious campaign that's targeting the workforce in the Czech Republic with a previously undocumented botnet dubbed PowMix since at least December 2025. "PowMix employs randomized command-and-control (C2) beaconing intervals, rather than persistent connection to the C2 server, to evade the network signature detections," Cisco Talos

Bleeping Computer Apr 16

AI-powered SOC tools promise automation, but most only speed up triage instead of reducing real workload. Tines shows how real gains come from end-to-end workflows that execute actions across systems, not just summarize alerts. [...]

Cloudflare Apr 16
CVE

An agent needs to be powered by a large language model. A few weeks ago, we announced that Workers AI is officially entering the arena for hosting large open-source models like Moonshot’s Kimi K2.5. Since then, we’ve made Kimi K2.5 3x faster and have more model additions in-flight. These models have been the backbone of a lot of the agentic products, harnesses, and tools that we have been launching this week. Hosting AI models is an interesting challenge: it requires a delicate balance between software and very, very expensive hardware. At Cloudflare, we’re good at squeezing every bit of efficiency out of our hardware through clever software engineering. This is a deep dive on how we’re laying the foundation to run extra-large language models. Hardware configurations As we mentioned in our previous Kimi K2.5 blog post , we’re using a variety of hardware configurations in order to best serve models. A lot of hardware configurations depend on the size of inputs and outputs that users are sending to the model. For example, if you are using a model to write fanfiction, you might give it a few small prompts (input tokens) while asking it to generate pages of content (output tokens). Conversely, if you are running a summarization task, you might be sending in hundreds of thousands of input tokens, but only generating a small summary with a few thousand output tokens. Presented with these opposing use cases, you have to make a choice — should you tune your model configuration so it’s faster at processing input tokens, or faster at generating output tokens? When we launched large language models on Workers AI, we knew that most of the use cases would be used for agents. With agents, you send in a large number of input tokens. It starts off with a large system prompt, all

The Hacker News Apr 16

You know that feeling when you open your feed on a Thursday morning and it's just... a lot? Yeah. This week delivered. We've got hackers getting creative in ways that are almost impressive if you ignore the whole "crime" part, ancient vulnerabilities somehow still ruining people's days, and enough supply chain drama to fill a season of television nobody asked for. Not

Cloudflare Apr 16
CVE

Agents have changed how we think about source control, file systems, and persisting state. Developers and agents are generating more code than ever — more code will be written over the next 5 years than in all of programming history — and it’s driven an order-of-magnitude change in the scale of the systems needed to meet this demand. Source control platforms are especially struggling here: they were built to meet the needs of humans, not a 10x change in volume driven by agents who never sleep, can work on several issues at once, and never tire. We think there’s a need for a new primitive: a distributed, versioned filesystem that’s built for agents first and foremost, and that can serve the types of applications that are being built today. We’re calling this Artifacts: a versioned file system that speaks Git. You can create repositories programmatically, alongside your agents, sandboxes, Workers, or any other compute paradigm, and connect to it from any regular Git client. Want to give every agent session a repo? Artifacts can do it. Every sandbox instance? Also Artifacts. Want to create 10,000 forks from a known-good starting point? You guessed it: Artifacts again. Artifacts exposes a REST API and native Workers API for creating repositories, generating credentials, and commits for environments where a Git client isn’t the right fit (i.e. in any serverless function). Artifacts is available in private beta and we’re aiming to open this up as a public beta by early May. // Create a repo const repo = await env.AGENT_REPOS.create(name) // Pass back the token & remote to your agent return { repo.remote, repo.token } # Clone it and use it like any regular git remote $ git clone https://x:${TOKEN}@123def456abc.artifacts.cloudflare.net/git/repo-13194.git That’s it. A bare repo, ready to go, created on the fly, that any git client can opera

The Register Apr 16

Forged metadata made AI reviewer treat hostile changes as though they came from known maintainer Security boffins say Anthropic's Claude can be tricked into approving malicious code with just two Git commands by spoofing a trusted developer's identity.…

The Hacker News Apr 16

A "novel" social engineering campaign has been observed abusing Obsidian, a cross-platform note-taking application, as an initial access vector to distribute a previously undocumented Windows remote access trojan called PHANTOMPULSE in attacks targeting individuals in the financial and cryptocurrency sectors. Dubbed REF6598 by Elastic Security Labs, the activity has been found to leverage

The Register Apr 16

Just migrate already, would you? But if you can't, Redmond will take your cash Microsoft will keep delivering security updates for old versions of Exchange Server and Skype for Business Server, after admitting that some customers aren't ready to make the move to newer products.…

The Register Apr 16

Your cybersecurity is only as good as the physical security of the servers PWNED Welcome back to Pwned, the column where we immortalize the worst vulns that organizations opened up for themselves. If you’re the kind of person who leaves your car doors unlocked with a pile of cash in the center console, this week’s story is for you.…

The Hacker News Apr 16

The Computer Emergencies Response Team of Ukraine (CERT-UA) has disclosed details of a new campaign that has targeted governments and municipal healthcare institutions, mainly clinics and emergency hospitals, to deliver malware capable of stealing sensitive data from Chromium-based web browsers and WhatsApp. The activity, which was observed between March and April

The Register Apr 16
APT

Browser fingerprinting is everywhere Google markets its Chrome browser by citing its superior safety features, but according to privacy consultant Alexander Hanff, Chrome does not protect against browser fingerprinting – a method of tracking people online by capturing technical details about their browser.…

Wednesday, April 15
The Register Apr 15
AI

Like the majority of the companies participating, it remains a mystery Last week, Anthropic surprised the world by declaring that its latest model, Mythos, is so good at finding vulns that it would create chaos if released. Now, under the title of Project Glasswing, over 50 selected companies and orgs are allowed to test the hyped up LLM to find security holes in their own products. But just how many problems have they really discovered?…

Praetorian Apr 15

What Are Shadow Admins in AD? A common problem we encounter within many customer Active Directory environments are accounts that, at first glance, may appear innocuous, but that actually have hidden administrative privileges or unrolled privileges equivalent to those of a domain administrator account. We call these accounts shadow domain admins. These accounts don’t show up when you run the net group domain admins command. They won’t appear in your PAM solution’s audit reports. But an attacker who finds one has effectively won the domain. Over the past several years, the problem has gotten significantly worse as organizations undergo digital transformation. Workloads are migrating to AWS and Azure, identity is being federated to the cloud via ADFS, and domain controllers are running as virtual machines on ESXi. The blast radius of a single compromised account now extends well beyond the traditional Active Directory boundary. A shadow admin path in 2016 might have been a service account with an overly permissive ACL. In 2026, it’s an ADFS server running on a hypervisor managed by a VMware admin who doesn’t even know they’re one hop from domain admin, and two hops from your entire AWS environment. In this post, we’ll walk through several real-world examples we routinely discover during engagements and show how Praetorian Guard’s continuous attack path mapping surfaces them before an adversary does. ADFS Servers and the Federation Layer If your organization federates identity to cloud providers using Active Directory Federation Services (ADFS), you’ve almost certainly heard of the Golden SAML technique that was exploited in the SolarWinds (Solorigate) attack. The ADFS server h

The Hacker News Apr 15

Threat actors have been observed weaponizing n8n, a popular artificial intelligence (AI) workflow automation platform, to facilitate sophisticated phishing campaigns and deliver malicious payloads or fingerprint devices by sending automated emails. "By leveraging trusted infrastructure, these attackers bypass traditional security filters, turning productivity tools into delivery

The Register Apr 15

Vuln old enough to drive lands on CISA's exploited list While Microsoft was rolling out its bumper Patch Tuesday updates this week, US cybersecurity agency CISA was readying an alert about a 17-year-old critical Excel flaw now under exploit.…

The Hacker News Apr 15
AI

Few technologies have moved from experimentation to boardroom mandate as quickly as AI. Across industries, leadership teams have embraced its broader potential, and boards, investors, and executives are already pushing organizations to adopt it across operational and security functions. Pentera’s AI Security and Exposure Report 2026 reflects that momentum: every CISO surveyed

The Hacker News Apr 15

OpenAI on Tuesday unveiled GPT-5.4-Cyber, a variant of its latest flagship model, GPT‑5.4, that's specifically optimized for defensive cybersecurity use cases, days after rival Anthropic unveiled its own frontier model, Mythos. "The progressive use of AI accelerates defenders – those responsible for keeping systems, data, and users safe – enabling them to find and fix problems

Tuesday, April 14
Krebs on Security Apr 14

Microsoft today pushed software updates to fix a staggering 167 security vulnerabilities in its Windows operating systems and related software, including a SharePoint Server zero-day and a publicly disclosed weakness in Windows Defender dubbed “ BlueHammer .” Separately, Google Chrome fixed its fourth zero-day of 2026, and an emergency update for Adobe Reader nixes an actively exploited flaw that can lead to remote code execution. Redmond warns that attackers are already targeting CVE-2026-32201 , a vulnerability in Microsoft SharePoint Server that allows attackers to spoof trusted content or interfaces over a network. Mike Walters , president and co-founder of Action1 , said CVE-2026-32201 can be used to deceive employees, partners, or customers by presenting falsified information within trusted SharePoint environments. “This CVE can enable phishing attacks, unauthorized data manipulation, or social engineering campaigns that lead to further compromise,” Walters said. “The presence of active exploitation significantly increases or

r/computerforensics Apr 14

Hey everyone! we just released version 0.9.0 of Crow-eye, and it brings some major updates we've been working hard on. A big focus for us in this version was removing the friction of dealing with forensic images. We actually added direct support for analyzing images right inside Crow-eye, so you don't need any other mounting software to get started. You can just point it at the image and let it parse. Right now we support parsing directly from: \* E01 / Ex01 \* VHDX / VHD \* VMDK \* ISO \* Raw / DD We also decided it was time to move on from the old timeline prototype. We built a brand new version of the Timeline Visualization from the ground up, making it way easier to correlate everything and actually see the full picture in one place. https://preview.redd.it/t22zt7ty68vg1.png?width=3439&format=png&auto=webp&s=7d5bc5f51cb0e93029ce0641813636a068ba3d58 And finally, something a lot of people asked for: Crow-eye is now completely cross-platform! We updated all the parsers so they no longer depend on Windows APIs for offline artifacts. This means you can now run it natively on Linux to parse offline artifacts and process those forensic images without needing a Windows machine. GitHub : [https://github.com/Ghassan-elsman/Crow-Eye](https://github.com/Ghassan-elsman/Crow-Eye) Let me know how it runs for you, what you think of the new timeline, or if you run into any bugs or issues!

r/Malware Apr 14

Hello, I downloaded a sample from Malwarebazaar. It was a .bat file around 208.38 KB. I set it up into [AnyRun](https://any.run), and started the analysis. \--- **Threat Type:** XWorm v6.5 (RAT) + Stealer sold as Malware-as-a-Service. Capabilities include credential theft, keylogging, screenshot capture, file exfiltration, and hijacking of crypto wallets and accounts. **Execution Process:** 1. `.bat` file runs -> checks for sandbox using `findstr.exe` 2. Uses `certutil.exe` to Base64-decode an embedded payload 3. `cscript.exe` executes decoded VBScript, dropping `svchost.exe` (fake) to %TEMP% 4. Payload launches, copies itself to `%APPDATA%\main.exe` and the startup folder for persistence 5. Connects to C2 and sends system fingerprint via Telegram Bot API # IOCs **Dropper SHA256:** dea6cfb3234780ceeea718787e027cc6d2de18cfead1f8cc234e0ad268987868 **Dropped Payload SHA256:** 7f2b0ffbc5b149b4f9858589763bacdebf63ea1b3a00532e9278d613f75462ea * **C2:** `23.160(.)168.174:3212` * **AES Key:** `<666666>` * **Mutex:** `XUH24Sz2TPub4OF4` * **USB drop name:** `XWorm V6.5 by c3lestial(.)fun` Full Analysis: [https://app.any.run/tasks/1cd22443-8259-49c0-8e6e-a0ca93b0371c](https://app.any.run/tasks/1cd22443-8259-49c0-8e6e-a0ca93b0371c)

The Hacker News Apr 14

Two high-severity security vulnerabilities have been disclosed in Composer, a package manager for PHP, that, if successfully exploited, could result in arbitrary command execution. The vulnerabilities have been described as command injection flaws affecting the Perforce VCS (version control software) driver. Details of the two flaws are below - CVE-2026-40176 (CVSS

Synack Apr 14

When you hire an elite Red Team, you start with an implicit signal of their talent. You review their resumes, their standing within the research community, certifications with trusted vendors like OffSec and CREST. You assume they can navigate your specific tech stack and pivot through your environment. But in offensive security, assumptions are liabilities. […] The post Validating AI Pentesting with Explicit Signals from Synack Red Team appeared first on Synack .

The Hacker News Apr 14

Google has announced the integration of a Rust-based Domain Name System (DNS) parser into the modem firmware as part of its ongoing efforts to beef up the security of Pixel devices and push memory-safe code at a more foundational level. "The new Rust-based DNS parser significantly reduces our security risk by mitigating an entire class of vulnerabilities in a risky area, while also laying

The Hacker News Apr 14

Cybersecurity researchers have unmasked a novel ad fraud scheme that has been found to leverage search engine poisoning (SEO) techniques and artificial intelligence (AI)-generated content to push deceptive news stories into Google's Discover feed and trick users into enabling persistent browser notifications that lead to scareware and financial scams. The campaign, which has been

Compass Security Apr 14
CVE

This post is part of a small blog series covering common Entra ID security findings observed during real-world assessments. Each article explores selected findings in more detail to provide a clearer understanding of the underlying risks and practical implications. Part 1: Privileged Foreign Enterprise Applications Part 2: Privileged Unprotected Groups Part 3: Weak Privileged Identity Management Configuration Conditional Access Policies Conditional Access policies are among the most important security controls in Entra ID. As the name suggests, they define under which conditions access is allowed within a tenant. They are used to enforce protections such as MFA, restrict access based on device state or location, and apply stronger controls to sensitive applications or privileged accounts. At the same time, Conditional Access is a broad and complex topic. The

Troy Hunt Apr 14

Presently sponsored by: Report URI: Guarding you from rogue JavaScript! Don’t get pwned; get real-time alerts & prevent breaches #SecureYourSite I'm starting to become pretty fond of Bruce. Actually, I've had a bit of an epiphany: an AI assistant like Bruce isn't just about auto-responding to tickets in an entirely autonomous manner; it's also pretty awesome at responding with just a little bit of human assistance. Charlotte and I both replied to some tickets today that were way too specific for Bruce to ever do on his own, but by feeding in just a little bit of additional info (such as the number of domains someone was presently monitoring), Bruce was able to construct a really good reply and "own" the ticket. So maybe that's the sweet spot: auto-reply to the really obvious stuff and then take just a little human input on everything else.

Monday, April 13
r/netsec Apr 13
CVE

The current version of RAGFlow, a widely-deployed Retrieval Augmented Generation solution, contains a post-auth vulnerability that allows for arbitrary code execution. This post includes a POC, walkthrough and patch. The TL;DR is to make sure your RAGFlow instances aren't on the public internet, that you have the minimum number of necessary users, and that those user accounts are protected by complex passwords. (This is especially true if you're using Infinity for storage.)

The Guardian Apr 13

Undisclosed number of names and contact and reservation details accessed in latest cybercrime attempt The accommodation reservation website Booking.com has suffered a data breach with “unauthorised parties” gaining access to customers’ details. The platform said it “noticed some suspicious activity involving unauthorised third parties being able to access some of our guests’ booking information”. Continue reading...

r/ReverseEngineering Apr 13

To reduce the amount of noise from questions, we have disabled self-posts in favor of a unified questions thread every week. Feel free to ask any question about reverse engineering here. If your question is about how to use a specific tool, or is specific to some particular target, you will have better luck on the [Reverse Engineering StackExchange](http://reverseengineering.stackexchange.com/). See also /r/AskReverseEngineering.

Synack Apr 13

Introducing the Glasswing-Readiness Assessment In my last post, we looked at the emergence of Anthropic’s Mythos and how it has collapsed the exploit timeline from weeks to days. But once you accept that the speed of the adversary has changed, a more difficult question remains for security leaders: What do we actually do now? The […] The post Become Mythos-Ready and Close the AI Coverage Gap with Synack appeared first on Synack .

Sunday, April 12
Saturday, April 11
Friday, April 10
Praetorian Apr 10

The Blind Spot As organizations race to deploy LLM-powered chat agents, many have adopted a layered defense model: a primary chat agent handles user interactions while a secondary supervisor agent monitors contextual input (i.e., chat messages) for prompt injection attacks and policy violations. This architecture mirrors traditional security patterns like web application firewalls sitting in front of application servers. But what happens when the supervisor only watches the front door? Indirect prompt injection is a class of attack where adversarial instructions are embedded not in the user’s direct input, but in external data sources that an LLM consumes as context: profile fields, retrieved documents, tool outputs, or database records. Unlike direct prompt injection, where a user explicitly sends malicious instructions through the chat interface, indirect injection hides the payload in data that the application fetches on the user’s behalf—often from sources the system implicitly trusts. During a recent engagement targeting a multi-model AI-integrated customer service solution, our team identified a weakness in the architecture that made it susceptible to indirect prompt injection attacks. The customer service solution consisted of an AI-enabled chat agent that processed user requests and a separate supervisor agent that monitored the chat communications for adversarial instructions and manipulation, including prompts injected into data provided to the agent via the chat window. The supervisor agent was effective in consistently detecting and blocking attempts to attack or manipulate the chat agent. However, by injecting adversarial instructions into user profile fields—such as a user’s name—that the chat agent would retrieve upon request, we were able to bypass supervisor protections and trick the chat agent into misinterpreting our user’s profile data as a prompt and executing our hidden instructions. The root cause is a fundamen

Story Overview