Cybersecurity News and Vulnerability Aggregator

Top Cybersecurity Stories Today

The Hacker News May 8

A previously undocumented Linux implant codenamed Quasar Linux RAT (QLNX) is targeting developers' systems to establish a silent foothold as well as facilitate a broad range of post-compromise functionality, such as credential harvesting, keylogging, file manipulation, clipboard monitoring, and network tunneling. "QLNX targets developers and DevOps credentials across the software supply chain,"

CERT/CC May 8

Overview: A privilege escalation vulnerability has been discovered in Linux kernel versions 4.17 (released 2018) and later. Many popular distributions and Linux-based containers are affected. The vulnerability was publicly disclosed on April 29, 2026, has been assigned CVE ID CVE-2026-31431, and is commonly referred to as "Copy Fail."

Description: The Linux kernel, since version 4.17, includes the algif_aead module, which provides user-space access to authenticated encryption with associated data (AEAD) operations via the AF_ALG interface. This module may be available as a loadable kernel module or compiled directly into the kernel, depending on the Linux distribution or custom-built install. According to the https://copy.fail disclosure statement: "An unprivileged local user can write 4 controlled bytes into the page cache of any readable file on a Linux system, and use that to gain root."

The vulnerability is caused by a logic flaw in the Linux kernel's algif_aead (AF_ALG) implementation. An unprivileged local user can reliably perform a controlled 4-byte write into the page cache of any readable file, without race conditions or timing dependencies. Critically, the corrupted page is not marked dirty, so the modified contents are never written back to disk. The underlying file remains unchanged, allowing the in-memory corruption to bypass checksum and file integrity verification mechanisms. Because subsequent reads are served from the page cache, an attacker can target a setuid binary and modify it
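
As a rough first-pass triage aid, the version check in the advisory can be sketched as follows. This is a minimal illustration only: it assumes a `uname -r` style release string, and a real check must also confirm whether algif_aead is loadable or built in (e.g. via /proc/crypto or the distribution's kernel config), not just the version number.

```python
# Hedged sketch: check whether a kernel release string falls in the range the
# advisory describes as affected by CVE-2026-31431 ("Copy Fail"): 4.17 and later.
# Version parsing here is illustrative; it does not account for vendor backports
# or patched builds, which a real triage workflow must.

def parse_release(release: str) -> tuple[int, int]:
    """Extract (major, minor) from a `uname -r` style string like '6.8.0-45-generic'."""
    core = release.split("-")[0]
    parts = core.split(".")
    return int(parts[0]), int(parts[1])

def possibly_affected(release: str) -> bool:
    """True if the kernel version is >= 4.17, the first release named by the advisory."""
    return parse_release(release) >= (4, 17)

print(possibly_affected("4.16.18"))          # pre-4.17 kernel
print(possibly_affected("6.8.0-45-generic")) # in the stated affected range
```

A result of `True` here only means "in the stated version range"; whether the algif_aead code path is actually reachable depends on the specific build.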

The Hacker News May 8

Threat hunters have flagged a previously undocumented Brazilian banking trojan dubbed TCLBANKER that's capable of targeting 59 banking, fintech, and cryptocurrency platforms. The activity is being tracked by Elastic Security Labs under the moniker REF3076. The malware family is assessed to be a major update of the Maverick, which is known to leverage a worm called SORVEPOTEL to spread via

Latest

Sunday, May 10
r/cybersecurity 2h ago

Hi everyone! A month ago I started a project called "EasySec", with the objective of helping SMEs apply cybersecurity measures, based on Ansible playbooks. Currently, these are the available roles:

- Anchore tools (Grant, Grype, Syft)
- Proxychains setup
- Lynis
- Cosign (used in the Anchore tools)
- SSL (certificate generation based on DNS with 3 providers, and also self-signed)
- CLI menu for role execution in Vagrant

I'm currently working on Keycloak and NGINX setups. I would like to receive some feedback from you, to see if I'm progressing correctly, and to gather more ideas. Thanks for reading! Repository is here: https://github.com/Vera0011/easysec

r/netsec 2h ago

Across all major ShinyHunters campaigns (AT&T/Snowflake, Salesforce, Canvas/Instructure), only one event has both a publicly stated payment amount and a known approximate settlement date: the May 2024 AT&T payment of ~5.7 BTC (~$370K), confirmed by *Wired* but never published with a transaction hash. I use that as the analytical anchor for an end-to-end on-chain analysis using only free public data.

**Pipeline (5 stages):**

1. BigQuery bulk filter on amount and time window → 500 candidates.
2. Recipient profiling via Blockstream Esplora (lifetime tx count, spend shape).
3. Sender-side cluster analysis using common-input ownership, looking for broker-aggregation patterns.
4. Depth-12 concurrent forward trace, top-K=4 fan-out.
5. Terminal attribution via OKLink, BitInfoCharts, WalletExplorer.

**Result:** A single highest-fit candidate: 5.71997804 BTC paid 2024-05-17 22:04 UTC to a fresh recipient, spent in 6 min, laundered through a 6-cycle automated peel chain, terminating at an exchange deposit cluster. The funding side shows a broker-aggregation fingerprint (4× 1.147 BTC peels in a 90-min window pre-payout). Upstream hub addresses appear reused across multiple victims of the same laundering service, active through 2025. The paper closes with the legal pathway from chain endpoint to indictment and a scoped compliance-request template.

**Limitations (explicit in §5):** Ranking under a scoring scheme, not positive ID. No off-chain ground truth. A documented OKLink vs. Arkham label conflict on the dominant terminal, resolved via behavioural audit. No formal null-distribution analysis yet. Score weights are author judgements.

**Asking for:**

1. Technical feedback / methodology critique.
2. arXiv cs.CR endorsement — endorsement code: **ZQXBSQ**

Paper: [github.com/tr4m0ryp/shinyhunters-gotta-catch-em-all/blob/main/Gotta_Catch_Em_All_ShinyHunters.pdf](https://github.com/tr4m0ryp/shinyhunters-gotta-catch-em-all/blob/main/Gotta_Catch_Em_All_ShinyHunters.pdf) Tooling and dataset released for reuse
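
Stage 4 of the pipeline (depth-limited forward trace with top-K fan-out) can be sketched as a bounded breadth-first walk over a transaction graph. The graph, addresses, and amounts below are toy values for illustration; the actual pipeline walks live on-chain data via Esplora.

```python
from collections import deque

# Hedged sketch of a depth-limited forward trace with top-K fan-out, as in
# stage 4 of the pipeline. The transaction graph here is a hypothetical toy:
# graph[addr] is a list of (next_addr, value) outputs.

def forward_trace(graph, start, max_depth=12, top_k=4):
    """BFS from `start`, following at most `top_k` largest outputs per hop,
    up to `max_depth` hops. Returns the set of reached addresses."""
    reached = {start}
    frontier = deque([(start, 0)])
    while frontier:
        addr, depth = frontier.popleft()
        if depth >= max_depth:
            continue
        # Bound the fan-out: keep only the top-K outputs by value.
        outputs = sorted(graph.get(addr, []), key=lambda o: -o[1])[:top_k]
        for nxt, _value in outputs:
            if nxt not in reached:
                reached.add(nxt)
                frontier.append((nxt, depth + 1))
    return reached

# Toy peel chain: each hop splits into a large "continue" output and a small peel.
toy = {
    "A": [("B", 5.72)],
    "B": [("C", 5.0), ("D", 0.7)],
    "C": [("E", 4.3), ("F", 0.7)],
}
print(sorted(forward_trace(toy, "A", max_depth=2, top_k=4)))
```

With `top_k=1` the walk follows only the dominant output at each hop, which is one simple way to stay on the spine of a peel chain instead of exploding into the peels.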

r/computerforensics 2h ago

I am proud to announce the release of **Crow-Eye v0.10.0**. This milestone marks the official launch of **The Eye**, a robust intelligence layer designed to integrate your own AI agents directly into **Crow-Eye**. This isn't just a regular update; it's a massive milestone for us. My goal from day one has been to build an ecosystem that doesn't just chase known signatures, but actually gives investigators the power to hunt zero-days. But as we celebrate this release and introduce our new AI layer, we need to talk about the elephant in the room.

# The Problem with AI in Forensics

There's a huge rush right now to slap AI onto cybersecurity tools, and honestly, a lot of it is dangerous. We are seeing "black box" solutions where investigators feed raw data into an LLM and just trust the answers it spits out. In DFIR, an AI hallucination can ruin a case. An answer without mathematical, binary proof is worthless. If an AI agent cannot anchor its reasoning to exact offsets, hashes, and unmanipulated timestamps, we cannot trust it. To fix this, I realized we had to architect a system where the AI is bound by the exact same strict evidentiary rules as a human analyst.

# The Starting Line: Automated Triage

Before the AI even wakes up, Crow-Eye does the heavy lifting. When you launch **The Eye**, the platform immediately runs a high-speed Automated Triage phase. It queries the underlying SQLite databases to map out the ground truth: active users, execution histories, accessed files, USB devices, and Auto Run configs. This builds a comprehensive **Initial Report**. This report isn't the final investigation; it's the baseline. It's the verified starting line before we let the AI touch the data.

# The Brain of "The Eye"

I believe you should have total control over your data and your analytical "brain." That's why The Eye is completely modular. You can plug in whatever intelligence fits your environment:

* **Cloud AI Models:** Hook up your public API keys for high-performance reasoning.
* **Offline Servers & Local Inference:** For air-gapped labs where privacy is non-negotiable.
* *Dev Note:* A lot of my testing and development for The Eye was done using **LM Studio** and Google's open-weights models (like the **Gemma** family). If you're a solo investigator, running Gemma locally on your own machine is incredibly powerful. Just a tip: push your context window as high as possible to handle the dense forensic payloads!
* **CLI Agents:** If you are a developer or researcher, you can hook up your own custom-built local agents, or seamlessly pipe in tools like **Claude Code** and the **Gemini CLI**.

# Keeping the AI Honest: The Ghassan Elsman Protocol (GEP)

Triage gives us the data, but the **Ghassan Elsman Protocol (GEP)** ensures the AI doesn't mess it up. The GEP is a strict set of rules hardcoded into the workflow to maintain a perfect chain of custody:

1. **Case Awareness:** The Initial Report is injected directly into the prompt to ground the AI in reality.
2. **Pre-Flight Ping:** Validates backend connectivity to stop silent failures.
3. **Evidence Anchoring:** Automatically tags and preserves raw hashes, IPs, and timestamps in the chat history.
4. **Chain of Custody:** Every truncation or data preservation event is meticulously logged.
5. **Non-Repudiation:** Messages are assigned deterministic, hash-linked IDs so records can't be altered.
6. **Context Pinning:** Critical evidence is locked and excluded from automated AI summarization.
7. **Tool Traceability:** Every tool the AI uses (like querying LOLBAS) is logged with exact execution counts.
8. **Machine-Readable Synthesis:** You get a clean JSON audit trail at the end to prove compliance.

# What's Next: Bridging Analysis and Anatomy

While The Eye handles the high-speed analysis, our educational hub, **Eye Describe**, covers the structural anatomy of artifacts. In upcoming updates, we are going to start building a bridge between these two tools. The goal is to gradually integrate visual references alongside the AI's findings. We want to reach a point where the AI doesn't just give you an answer, but helps point you toward the structural anatomy of the artifact it analyzed. It's an iterative, ongoing project, but we believe it is an important step toward total forensic transparency.

This is the very first release of The Eye. You might hit a few bumps connecting to certain local backends or managing specific CLI tools, but we are actively squashing bugs and refining the experience over the next few weeks. Please submit any issues you find! The latest source code and release are available right now on our GitHub. For those waiting for the compiled `.exe` version, it will be dropping very soon on our official website.

**GitHub:** [https://github.com/Ghassan-elsman/Crow-Eye](https://github.com/Ghassan-elsman/Crow-Eye)

**Good hunting.**
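
GEP rule 5 (deterministic, hash-linked message IDs) can be illustrated with a small hash chain. This is a sketch of the general technique, not Crow-Eye's actual ID scheme: the genesis value and the concatenation format below are assumptions made for illustration.

```python
import hashlib

# Hedged sketch of hash-linked message IDs for non-repudiation: each ID depends
# on the previous ID, so altering any earlier record changes every ID after it.
# The genesis string and ID format are illustrative assumptions.

def chain_ids(messages, genesis="GEP-GENESIS"):
    """Assign each message an ID = SHA-256(previous_id || message)."""
    ids, prev = [], genesis
    for msg in messages:
        mid = hashlib.sha256((prev + msg).encode()).hexdigest()
        ids.append(mid)
        prev = mid
    return ids

log = ["triage complete", "queried LOLBAS", "flagged 4688 anomaly"]
ids = chain_ids(log)

# Tampering with the first message invalidates the entire downstream chain.
tampered = chain_ids(["triage skipped"] + log[1:])
print(ids[0] != tampered[0] and ids[-1] != tampered[-1])
```

Because the IDs are deterministic, anyone holding the message log can recompute the chain and detect a record that was altered after the fact.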

The Hacker News 7h ago
CVE

Cybersecurity researchers have disclosed a critical security vulnerability in Ollama that, if successfully exploited, could allow a remote, unauthenticated attacker to leak its entire process memory. The out-of-bounds read flaw, which likely impacts over 300,000 servers globally, is tracked as CVE-2026-7482 (CVSS score: 9.1). It has been codenamed Bleeding Llama by Cyera. Ollama is a

r/netsec 10h ago

The traditional vulnerability disclosure timeline relies on a fundamental assumption: exploit development and vulnerability discovery take time. Over the last 12 months, the integration of LLMs into offensive tooling has demonstrably broken this assumption. I recently published a technical write-up arguing that the 90-day disclosure window is effectively dead, backed by three specific observations from recent incidents:

1. **Automated Diff Analysis (30-minute n-days):** The safety net between a patch release and an in-the-wild exploit is gone. Taking a recent React security patch (CVE-2026-23870), I used an LLM to analyze the diff, identify the vulnerable path, and write a working DoS PoC in roughly 30 minutes. The human reverse-engineering bottleneck has been bypassed.
2. **Vulnerability Convergence:** I recently reported a critical P0 to a vendor and was told I was the 11th reporter in 6 weeks. LLM-assisted scanners are causing independent researchers to converge on the same bugs simultaneously. An embargo no longer contains the vulnerability; it simply provides a head start to whichever threat actor also found it.
3. **The Linux Kernel (Copy Fail & Dirty Frag):** The recent kernel exploits highlight this perfectly. Copy Fail (CVE-2026-31431) went from an automated AI scan to a public PoC to nation-state weaponization in days. Shortly after, the embargo for Dirty Frag (CVE-2026-43284 / CVE-2026-43500) was broken in hours because an unrelated third party independently discovered the same bug class using similar tooling.

The defense cannot operate on monthly cycles when the offense is operating in hours. The focus needs to shift to real-time, PR-level AI scanning to match the pace.

You can read the full technical breakdown and case studies on my blog: [https://blog.himanshuanand.com/2026/05/the-90-day-disclosure-policy-is-dead/](https://blog.himanshuanand.com/2026/05/the-90-day-disclosure-policy-is-dead/)

I am curious whether the researchers here are experiencing similar convergence rates, or whether you view this as a temporary anomaly while legacy codebases are scanned with new tools.

r/blueteamsec 12h ago

I've been building a Windows event log analysis tool called EventHawk and just shipped v1.2. Sharing here for feedback from people who work in IR/forensics.

What it is: A GUI + CLI tool for parsing and analyzing .evtx files. Built around a Rust-backed parallel parser with a resource monitor that throttles workers automatically so your machine stays usable mid-parse. Supports EVTX from Windows Vista through Server 2022. Parses and filters 6M rows of event logs in 50-60 seconds. https://github.com/Mihir-Choudhary/EventHawk

Two parsing modes:

1. Normal Mode loads matched events into memory — fast and straightforward for most investigations.
2. Juggernaut Mode is for large captures: raw event XML goes to Parquet on disk, only metadata columns live in memory, and full event detail lazy-loads on row click. Scroll 10M+ events with zero disk I/O.

v1.2 rewrote Juggernaut Mode from scratch — it replaced the old multi-DuckDB connection model (OOM crashes, file lock conflicts) with a single Arrow in-memory table and filter thread. Filtering now runs as vectorized DuckDB SQL, 20-120 ms at 6M rows.

Key features:

1. 20 built-in DFIR profiles — filter at parse time. Logon/Logoff, Process Creation, Lateral Movement, PowerShell, RDP, Defender Alerts, and 13 more.
2. 273+ event ID descriptions in plain English on click. No more looking up what 4688 or 7045 means mid-investigation.
3. ATT&CK tab — every parse maps events to MITRE techniques with ID, tactic, confidence, and source. Click any technique to filter the table to events that triggered it.
4. IOC tab — auto-extracts IPs, domains, file paths, hashes, URLs, registry keys, and suspicious command lines. Click any IOC to pivot the entire event table to events containing that indicator.
5. Chains tab — correlates events into multi-step attack chains shown as an expandable tree. Click any node to jump to that event.
6. Case tab — annotate events with analyst notes, export as a formal PDF investigation report.
7. Hayabusa integration — ~3,000 community Sigma rules evaluated and merged into the ATT&CK tab.
8. Sentinel anomaly engine — build a behavioral baseline from clean logs, then score a suspect capture. Each process-create event is scored across five dimensions and classified into four tiers. Tier 3/4 findings include plain-English justifications. Built for novel malware, LOLBin abuse, and anything that slips past signatures.
9. Export in 8 formats — JSON, CSV, XML, HTML, PDF report, STIX 2.1, OpenIOC, YARA.
10. Full CLI and TUI for headless and automated use.

If the tool looks useful, a star on GitHub goes a long way ⭐ — it helps the project get visibility and keeps me motivated to keep building. Would genuinely love feedback from anyone, especially on what's missing or annoying in the existing ecosystem.
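
The Juggernaut Mode idea — lightweight metadata columns in memory, heavy event detail fetched only on demand — can be sketched in miniature. EventHawk itself uses Parquet plus an Arrow table with DuckDB SQL; the sketch below substitutes stdlib `sqlite3` as the backing store so it stays self-contained, and all table/column names are hypothetical.

```python
import sqlite3

# Hedged sketch: filter on an in-memory metadata projection, lazy-load the
# heavy XML column only when a row is actually opened. sqlite3 stands in for
# the Parquet/DuckDB detail store described in the post.

conn = sqlite3.connect(":memory:")  # stands in for the on-disk detail store
conn.execute("CREATE TABLE events (row_key INTEGER PRIMARY KEY, event_id INT, ts TEXT, xml TEXT)")
rows = [
    (1, 4688, "2026-05-08T10:00:00", "<Event>...process create...</Event>"),
    (2, 7045, "2026-05-08T10:01:00", "<Event>...service install...</Event>"),
]
conn.executemany("INSERT INTO events VALUES (?, ?, ?, ?)", rows)

# In-memory "metadata table": everything except the heavy XML column.
metadata = [(key, eid, ts) for key, eid, ts, _xml in rows]

def load_detail(row_key):
    """Lazy-load the full XML only when the analyst clicks a row."""
    (xml,) = conn.execute("SELECT xml FROM events WHERE row_key = ?", (row_key,)).fetchone()
    return xml

# Filter cheaply on metadata alone, then pull detail for a single hit.
hits = [key for key, eid, _ts in metadata if eid == 4688]
print(load_detail(hits[0]))
```

The design point is that scrolling and filtering never touch the wide column; only a row click pays the cost of fetching full event detail.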

Saturday, May 9
r/netsec May 9

I recently investigated an individual operating through Odysee and Telegram who is selling a malicious Android RAT known as EagleSpy V6.0, which appears to be a rebranded version of CraxsRAT. During the investigation:

- I was financially scammed after payment
- The seller blocked communication afterward
- The malware infrastructure was analyzed in detail

Technical analysis confirmed:

- Banking phishing overlays
- Crypto wallet credential theft
- Telegram bot exfiltration
- Remote shell execution
- Keylogging
- Camera/microphone access
- GPS tracking
- Ransomware components
- DEX packers for AV evasion
- Hidden update/backdoor mechanisms

The repository also contained evidence of real victim infrastructure and compromised device information. The malware appears capable of targeting not only victims, but potentially even buyers/operators through embedded update systems and hidden control mechanisms. Relevant reports have already been submitted to platform abuse teams.

Odysee channel involved: https://odysee.com/@justicerat:e
Telegram: @JustIcedevs

This post is intended purely as a cybersecurity awareness warning to help prevent additional victims. If moderators require technical validation or indicators of compromise, I can provide structured analysis details privately.

WIRED May 9

Plus: Meta officially kills encrypted Instagram DMs, the Trump administration targets “violent left wing extremists,” leaked documents reveal Russia's school for elite hackers, and more.

The Hacker News May 9
CVE

cPanel has released updates to address three vulnerabilities in cPanel and Web Host Manager (WHM) that could be exploited to achieve privilege escalation, code execution, and denial-of-service. The list of vulnerabilities is as follows - CVE-2026-29201 (CVSS score: 4.3) - An insufficient input validation of the feature file name in the "feature::LOADFEATUREFILE" adminbin call that could result

Friday, May 8
r/netsec May 8

Existing benchmarks for LLM-based vulnerability detection compress model performance into a single metric, which fails to reflect the distinct priorities of different stakeholders. For example, a CISO may emphasize high recall of critical vulnerabilities, an engineering leader may prioritize minimizing false positives, and an AI officer may balance capability against cost. To address this limitation, we introduce SecLens-R, a multi-stakeholder evaluation framework structured around 35 shared dimensions grouped into 7 measurement categories. The framework defines five role-specific weighting profiles: CISO, Chief AI Officer, Security Researcher, Head of Engineering, and AI-as-Actor. Each profile selects 12 to 16 dimensions with weights summing to 80, yielding a composite Decision Score between 0 and 100. We apply SecLens-R to evaluate 12 frontier models on a dataset of 406 tasks derived from 93 open-source projects, covering 10 programming languages and 8 OWASP-aligned vulnerability categories. Evaluations are conducted across two settings: Code-in-Prompt (CIP) and Tool-Use (TU). Results show substantial variation across stakeholder perspectives, with Decision Scores differing by as much as 31 points for the same model. For instance, Qwen3-Coder achieves an A (76.3) under the Head of Engineering profile but a D (45.2) under the CISO profile, while GPT-5.4 shows a similar disparity. These findings demonstrate that vulnerability detection is inherently a multi-objective problem and that stakeholder-aware evaluation provides insights that single aggregated metrics obscure.
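
The role-weighted scoring described in the abstract can be sketched as a weighted sum over dimension scores. The dimension names, weights, and normalization below are illustrative assumptions, not the paper's exact formula; the only constraints taken from the text are that a profile's weights sum to 80 and the composite Decision Score lands on a 0-100 scale.

```python
# Hedged sketch of a SecLens-R style role-weighted Decision Score.
# Dimensions and weights are hypothetical; per the paper, a profile's
# weights sum to 80 and the composite score is on a 0-100 scale.

def decision_score(dim_scores, profile_weights):
    """dim_scores: dimension -> score in [0, 1].
    profile_weights: dimension -> weight; weights must sum to 80.
    Returns the weighted sum rescaled to 0-100."""
    total_weight = sum(profile_weights.values())
    assert total_weight == 80, "a profile's weights must sum to 80"
    raw = sum(dim_scores[d] * w for d, w in profile_weights.items())
    return 100 * raw / total_weight

# Hypothetical CISO profile emphasizing recall of critical findings.
ciso = {"critical_recall": 40, "severity_ranking": 25, "coverage": 15}
model = {"critical_recall": 0.50, "severity_ranking": 0.40, "coverage": 0.45}
print(round(decision_score(model, ciso), 1))
```

Because each profile weights different dimensions, the same model's raw results can land on very different Decision Scores — which is exactly the 31-point spread the abstract reports.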

The Hacker News May 8

Cybersecurity researchers have discovered fraudulent apps on the official Google Play Store for Android that falsely claimed to offer access to call histories for any phone number, only to trick users into joining a subscription that provided fake data and incurred financial loss. The 28 apps have collectively racked up more than 7.3 million downloads, with one of them alone accounting for over

The Hacker News May 8

The dark secret of enterprise security operations is that defenders have quietly institutionalized the practice of not looking. This is not just anecdotal, but rather backed by a recent report investigating more than 25 million security alerts, including informational and low-severity, across live enterprise environments. The dataset behind these findings includes 10 million monitored

WIRED May 8

With the launch of the first 16 satellites, Russia begins construction of a network for satellite internet that aims to cover the entire country by 2030. But getting there won’t be easy.

Krebs on Security May 8

An ongoing data extortion attack targeting the widely-used education technology platform Canvas disrupted classes and coursework at school districts and universities across the United States today, after a cybercrime group defaced the service’s login page with a ransom demand that threatened to leak data from 275 million students and faculty across nearly 9,000 educational institutions. A screenshot shared by a reader shows the extortion message that appeared on the Canvas login page today. Canvas parent firm Instructure responded to today’s defacement attacks by disabling the platform, which is used by thousands of schools, universities and businesses to manage coursework and assignments, and to communicate with students. Instructure acknowledged a data breach earlier this week, after the cybercrime group ShinyHunters claimed responsibility and said they would leak data on tens of millions of students and faculty unless paid a ransom. The stated deadline for payment was initially set at May 6, but it was later pushed back to May 12. In a statement on May 6, Instructure said the investigation so far shows the stolen information includes “certain identifying information of users at affected institutions, such as names, email addresses, and student ID numbers, as well as messages among users.” The company said it found no evidence the breached data included more sensitive information, such as passwo

Thursday, May 7
WIRED May 7

Chrome users were caught off guard by a 4-GB Google AI model baked into Chrome, sparking privacy concerns. The good news: You can easily uninstall it. The bad? You might not want to.

Cloudflare May 7

This afternoon, we sent the following email to our global team. One of our core values at Cloudflare is transparency, and we believe it's important that you hear this directly from us because it’s a major moment at Cloudflare. Team: We are writing to let you know directly that we’ve made the decision to reduce Cloudflare’s workforce by more than 1,100 employees globally. The way we work at Cloudflare has fundamentally changed. We don’t just build and sell AI tools and platforms. We are our own most demanding customer. Cloudflare’s usage of AI has increased by more than 600% in the last three months alone. Employees across the company from engineering to HR to finance to marketing run thousands of AI agent sessions each day to get their work done. That means we have to be intentional in how we architect our company for the agentic AI era in order to supercharge the value we deliver to our customers and to honor our mission to help build a better Internet for everyone, everywhere. Today is a hard day. This decision unfortunately means saying goodbye to teammates who have contributed meaningfully to our mission and to building Cloudflare into one of the world’s most successful companies. We want to be clear that this decision is not a reflection of the individual work or talent of those leaving us. Instead, we are reimagining every internal process, team, and role across the company. Today’s actions are not a cost-cutting exercise or an assessment of individuals’ performance; they are about Cloudflare defining how a world-class, high-growth company operates and creates value in the agentic AI era. This is a moment we need to own as founders and leaders of the company. Matthew has personally sent out every offer letter we've extended. It is a practice he has always looked forward to because it represented our growth and the incredible talent joining our mission. It didn’t feel rig

The Hacker News May 7
CVE

Ivanti is warning that a new security flaw impacting Endpoint Manager Mobile (EPMM) has been exploited in limited attacks in the wild. The high-severity vulnerability, CVE-2026-6973 (CVSS score: 7.2), is a case of improper input validation affecting EPMM before versions 12.6.1.1, 12.7.0.1, and 12.8.0.1. It allows "a remotely authenticated user with administrative access to achieve remote code

The Hacker News May 7

Cybersecurity researchers have disclosed details of a new credential theft framework dubbed PCPJack that targets exposed cloud infrastructure and ousts any artifacts linked to TeamPCP from the environments. "The toolset harvests credentials from cloud, container, developer, productivity, and financial services, then exfiltrates the data through attacker-controlled infrastructure while attempting

The Hacker News May 7

Palo Alto Networks has disclosed that threat actors may have unsuccessfully attempted to exploit a recently disclosed critical security flaw as early as April 9, 2026. The vulnerability in question is CVE-2026-0300 (CVSS score: 9.3/8.7), a buffer overflow vulnerability in the User-ID Authentication Portal service of Palo Alto Networks PAN-OS software that could allow an unauthenticated attacker

The Hacker News May 7

Bad week. Turns out the easiest way to get hacked in 2026 is still the same old garbage: shady packages, fake apps, forgotten DNS junk, scam ads, and stolen logins getting dumped into Discord channels like it’s normal. Some of these attack chains don’t even feel sophisticated anymore. More like some tired guy with a Telegram account and too much free time. The worst part is how often this stuff

The Hacker News May 7

Having an incident response retainer, or even a pre-approved external incident response firm, is not the same as being ready for an incident. A retainer means someone will answer the phone. Operational readiness determines whether that team can do meaningful work the moment they do. That distinction matters far more than many organizations realize. In the first hours of a security incident

The Hacker News May 7

Cybersecurity researchers have discovered three packages on the Python Package Index (PyPI) repository that are designed to stealthily deliver a previously unknown malware family called ZiChatBot on Windows and Linux systems. "While these wheel packages do implement the features described on their PyPI web pages, their true purpose is to covertly deliver malicious files," Kaspersky

The Hacker News May 7
CVE

A dozen critical security vulnerabilities have been disclosed in the vm2 Node.js library that could be exploited by bad actors to break out of the sandbox and execute arbitrary code on susceptible systems. vm2 is an open-source library used to run untrusted JavaScript code inside a secure sandbox by intercepting and proxying JavaScript objects to prevent sandboxed code from accessing the host

Wednesday, May 6
The Hacker News May 6

Cybersecurity researchers have exposed a new Mirai-derived botnet that self-identifies as xlabs_v1 and targets internet-exposed devices running Android Debug Bridge (ADB) to enlist them in a network capable of carrying out distributed denial-of-service (DDoS) attacks. Hunt.io, which detailed the malware, said it made the discovery after identifying an exposed directory on a Netherlands-hosted

Cloudflare May 6

On May 5, 2026, at roughly 19:30 UTC, DENIC, the registry operator for the .de country-code top-level domain (TLD), started publishing incorrect DNSSEC signatures for the .de zone. Any validating DNS resolver receiving these signatures was required by the DNSSEC specification to reject them and return SERVFAIL to clients, including 1.1.1.1, the public DNS resolver operated by Cloudflare. The country-code top-level domain for Germany, .de, is one of the largest on the Internet. On Cloudflare Radar, it consistently ranks among the most broadly queried TLDs globally. An outage at this level of the DNS hierarchy has the potential to make millions of domains unreachable. In this post, we’ll walk through what we saw, the impact of these events, and how we applied temporary mitigations while DENIC resolved the issue.

How DNSSEC works

DNSSEC (Domain Name System Security Extensions) adds cryptographic authentication to DNS. When a zone is signed with DNSSEC, each set of records is accompanied by a digital signature, known as an RRSIG record, that lets a resolver verify the records haven’t been tampered with. Unlike encrypted DNS protocols, such as DNS over TLS (DoT) and DNS over HTTPS (DoH), DNSSEC is about integrity, not privacy. The records are visible, but their authenticity can be proven. What makes DNSSEC unique is that the signatures travel together with the records they protect. This means int
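
The accept/reject behaviour described above can be shown with a toy validator. To be clear about the assumptions: real DNSSEC uses public-key RRSIG signatures validated through a chain of trust to the root, not HMAC; the symmetric HMAC below is only a stand-in so the sketch can demonstrate the rule that a validating resolver must return SERVFAIL when a signature does not verify.

```python
import hashlib
import hmac

# Toy model of DNSSEC validation behaviour: verify an RRset against its
# signature, and return SERVFAIL on failure. HMAC stands in for real
# public-key RRSIG validation; this is not how DNSSEC crypto works.

KEY = b"zone-signing-key"  # hypothetical; real zones use asymmetric keys

def sign_rrset(rrset: bytes) -> bytes:
    return hmac.new(KEY, rrset, hashlib.sha256).digest()

def resolve(rrset: bytes, rrsig: bytes) -> str:
    """A validating resolver must reject data whose signature does not verify."""
    if hmac.compare_digest(sign_rrset(rrset), rrsig):
        return rrset.decode()
    return "SERVFAIL"

records = b"example.de. 3600 IN A 192.0.2.1"
good_sig = sign_rrset(records)
bad_sig = sign_rrset(b"some other data")  # analogous to an incorrect zone signature

print(resolve(records, good_sig))
print(resolve(records, bad_sig))  # SERVFAIL: the records cannot be authenticated
```

This is why correct records signed with a wrong signature still take a zone down for validating resolvers: the data itself is fine, but the resolver is obligated to treat it as unverifiable.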

r/netsec May 6

I recently re-tested an old Binance API finding I had reported through Bugcrowd. The original issue was about Binance API IP whitelisting and derived `listenKey` stream credentials. At the time, a `listenKey` could be created from a whitelisted IP and then used from a non-whitelisted IP to consume private user data streams. That did not allow trading, withdrawals, or account takeover. But it did allow real-time access to sensitive private stream data such as balances, orders, executions, positions, timing, and strategy behavior. The core security argument was: >A derived credential should not be more portable than the credential that created it. The report was rejected as “Social Engineering” / “Not Applicable”. I disagreed, because the relevant threat model was not “convince the user to send a token”. The realistic threat model was supply-chain compromise: malicious code running inside a trusted bot server, CI job, dependency, IDE workspace, or trading environment where API keys already exist. I re-tested the behavior on May 5, 2026. Result: The old behavior appears to be gone. Spot and Margin no longer use the old `listenKey` model. Futures still uses `listenKey`, but now appears to enforce the API key IP whitelist correctly. From a whitelisted IP the calls worked; from non-whitelisted Mullvad exits they failed with the expected IP restriction error. That is good for users. But it raises an uncomfortable disclosure-process question: If a finding is “not applicable” enough to reject, not acknowledge, and not reward — but technical enough to later fix — what should a healthy disclosure process do? 
Full technical write-up, timeline, re-test setup, and raw outputs: [https://blog.technopathy.club/binance-fixed-the-ip-whitelist-gap-the-disclosure-process-is-still-broken](https://blog.technopathy.club/binance-fixed-the-ip-whitelist-gap-the-disclosure-process-is-still-broken) I am mainly interested in the process question here: When a rejected report later disappears from production, should the program re-open it, acknowledge it, partially reward it, or leave it closed unless the researcher can prove direct causality?
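The principle the report argued for, that a derived credential should be no more portable than its parent, can be sketched as server-side logic. This is a hypothetical model of the fix the re-test observed, not Binance's actual implementation; all names and structures here are illustrative:

```python
import secrets

# Hypothetical server-side state: API keys with IP allowlists, and
# listenKeys that remember which API key created them.
API_KEYS = {"key-1": {"ip_allowlist": {"198.51.100.7"}}}
LISTEN_KEYS = {}  # listenKey -> parent API key

def create_listen_key(api_key, client_ip):
    """Issue a derived stream credential, only from a whitelisted IP."""
    if client_ip not in API_KEYS[api_key]["ip_allowlist"]:
        raise PermissionError("IP not whitelisted for this API key")
    lk = secrets.token_hex(16)
    LISTEN_KEYS[lk] = api_key
    return lk

def open_stream(listen_key, client_ip):
    """Consume the derived credential.

    The crucial check: re-validate the *parent* key's allowlist on
    every use, so the listenKey is never more portable than the key
    that created it.
    """
    parent = LISTEN_KEYS[listen_key]
    if client_ip not in API_KEYS[parent]["ip_allowlist"]:
        raise PermissionError("IP restriction: derived credential blocked")
    return f"stream opened for {parent}"

lk = create_listen_key("key-1", "198.51.100.7")  # from a whitelisted IP
open_stream(lk, "198.51.100.7")                  # allowed
# open_stream(lk, "203.0.113.9")                 # raises PermissionError
```

The pre-fix behavior described in the post corresponds to `open_stream` skipping the allowlist check entirely.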

WIRED May 6

It's not just you. Scammers, hackers, and other cybercriminals are complaining about “AI shit” flooding platforms where they discuss cyberattacks and other illegal activity.

The Hacker News May 6

For nearly 20 years, we at The Hacker News have mostly told scary stories about cyberspace — big hacks, broken systems, and new threats. But behind every headline, there’s a quieter, better story. It’s the story of leaders making tough calls under pressure, teams building smarter defenses, and security products that keep hunting threats 24/7 — even when it’s hard. Most of the time, this work is

The Hacker News May 6

Analysts recently confirmed what identity security teams have quietly feared: AI agents are being deployed faster than enterprises can govern them. In their inaugural Market Guide for Guardian Agents, Gartner states that “enterprise adoption of AI agents is accelerating, outpacing maturity of governance policy controls.” Enterprise leaders can request access to the Gartner Market Guide for

r/computerforensics May 6
APT

BSides can often be the one place where you can find the most obscure talks about a technical detail. For example, "Edge Device Memory Forensics" by Richard Tuffin or maybe "Forensic analysis of privacy focused mobile browsers" by Lorena Carthy and Ruben Jernslett. Finding them is the hard part. I built a website that tracks all BSides chapters, all 8575 videos, fetches transcripts, indexes them by technology, speakers, events, tools, protocols, standards, and much more. It is free, no login, no ads, no tracking beyond basic visits (no cookies). And I'm planning to keep it so. Check out the forensics talks at [https://allbsides.com/talks.html?q=forensics](https://allbsides.com/talks.html?q=forensics), and let me know if you find the site useful or spot anything missing. Genuinely happy to receive feedback!

r/Malware May 6

Someone hacked the deadmau5 discord server by virusing an admin. Said admin gave me the malware sample. Used Claude Sonnet 4.6 in combination with the nyxstrike MCP framework to decompile and decrypt their obfuscated code, finding a goldmine. Title speaks for itself. The discord bot token could possibly have led to their C2. But logging into the discord bot token to check for communications and see where it leads breaks 2 federal laws alone that I can think of. I did validate the token was live however, and matched it to a bot account. I also discovered the webhook and token that were in the malware; both of them have been nuked (not by me). So, I checked the domain they've been using, and they recompiled and reuploaded it. So it's 26 bytes larger. I suspect they replaced the webhook URL and the bot token with fresh ones, and suspect further that Discord nuked the previous ones themselves. Nevertheless, I have personally not seen malware like this on GitHub, so this must have been private and not some skid-level stuff. I know it was Turkish (at least the devs were). [Github](https://github.com/destiny-creates/goxlr.net-malware) link attached for the source code including the deobfuscated malware classes, and the analysis/report. Don't flame me, it's still pretty cool 😆. Cracking the ZKM encryption would have taken weeks (I'm a Python guy, not a JS guy). Nyxstrike + Sonnet 4.6 = 1.5 hours and it's cracked.

Troy Hunt May 6

Presently sponsored by: Report URI: Guarding you from rogue JavaScript! Don’t get pwned; get real-time alerts & prevent breaches #SecureYourSite It's a fascinating display of leverage: the ShinyHunters folks, with very limited resources and experience (their demographic will be teenagers to their early 20s), consistently gaining access to the data of massive brands. Not through technical ingenuity alone (although I'm sure there's a portion of that), but primarily through good ol' social engineering. That's coming through in the disclosure notices from the impacted companies, and Mandiant has a good write-up of it too : These operations primarily leverage sophisticated voice phishing (vishing) and victim-branded credential harvesting sites to gain initial access to corporate environments by obtaining single sign-on (SSO) credentials and multi-factor authentication (MFA) codes Question now is how long their run will go for. There's a very predictable ending if things keep going in this direction but right now, they show little sign of abating.

Tuesday, May 5
r/computerforensics May 5

Hi all — finally pushed this public after several months of work. Sharing here because this subreddit is where I'd want feedback from before anywhere else. **WAInsight** — https://github.com/akhil-dara/WAInsight (MIT) **Scope.** It doesn't extract data from a phone — that's a separate step with whatever acquisition workflow you already use. WAInsight starts after acquisition. Point it at a folder containing `msgstore.db` + `wa.db` + `Media/` + `Avatars/` and it ingests everything through a 29-stage pipeline into a normalised `analysis.db` (47 indexed tables), then opens a 30-page Qt desktop UI to actually work the case. **Why.** I wanted analysis to be the primary deliverable, not the report. So the UI is built around browsing every chat exactly like opening WhatsApp itself — home-style conversation list, bubbles with edits / revokes / replies / reactions / receipts / forwarded badges / mention chips / pinned-message strip — with forensic provenance one click away on every bubble. Reports are a snapshot of what was found, not the destination. **Capabilities, grouped by what you're actually trying to do:** *Reading the timeline* - Forensic ℹ button on every bubble: msgstore source IDs, every SQL row that fed the bubble, origination flags decoded, per-recipient receipt timeline (delivered / read / played, ms-precise). - Ghost-message recovery from `message_quoted_text` (deleted-for-everyone messages reconstructed inline next to the revoked bubble). - Edit history per message — every revision side-by-side. - Reply chains as click-through badges with cross-conversation "Go to original" jumps. - 60+ system events decoded (group / security / admin / privacy / business / ephemeral) instead of opaque type codes. - Calendar with per-day message counts shown flight-fare style; click+drag to range-filter. - Windowed-flat virtual scroller for chats with 5K+ messages — jumping to message #47K in a 47K-message chat is O(1). 
*Media analysis* - Folder-shaped Media Dashboard that scales to 200K+ rows at `file://` (sharded AVIF thumbs + chunked metadata + vendored UI engine, sub-millisecond bitset crossfilter). Cascading filters: conversation × sender × MIME × extension × status × date. - Perceptual visual search across the whole case — drop a screenshot, get Exact / Near-Exact / Near-Duplicate / Template-Match tiers (pHash + dHash + edge-map). - Camera-original → WhatsApp tracking: feed an original from `DCIM/`, find every chat that photo was sent in even after WhatsApp's recompression changed the SHA-256. - View-once images and voice notes downloadable from the bubble even after on-device expiry (CDN URL + media_key, AES-CBC + HMAC). - Hash-link auto-rescue: missing media that shares a SHA-256 with another message's on-disk media gets auto-resolved (tagged `recovery_method='hash_linked'`, never confused with a real local copy). - `wa.db` thumbnail blob rendered as fallback when even the bytes are gone. - HD/SD twin pairs surfaced inline with cross-jumps. - Cross-chat propagation: right-click any media → every chat that shared the same SHA-256, chronologically. Says where the bytes were *first seen*, not just where they were last forwarded. - 12-state media recovery taxonomy preserved in every report and dashboard (`original` / `downloaded` / `hash_linked` / `orphan_recovered` / etc.). - Orphaned-media browser: files in `Media/` with no surviving message row + auto-rescue against surviving message hashes. *Identity & devices* - Per-message platform attribution from `key_id` — every bubble carries an inline tag (Android / iPhone / Web/Desktop / Companion #N), confidence-scored. The classifier was its own separate research piece — collected `key_id` samples across real devices on Android, iPhone, Web, and linked companions until the rules held up. Powers the Group Report's *Device Platform Usage* breakdown and the contact's *Device Sessions* tab. 
- Unified contact registry merged from 5 sources (`jid_map` ∪ `wa_contacts` ∪ `lid_display_name` ∪ group labels ∪ mention names) so every JID resolves to one canonical identity. - Owner-aware everywhere — `sender_id IS NULL` for owner messages gets joined to `case_metadata` so owner activity never surfaces as "Unknown" anywhere in the UI or reports. *Groups & communities* - Past-participant reconstruction from 3 sources: `group_past_participant` ∪ `group_member.is_current=0` ∪ message-presence inference (catches members the roster purged after a long enough gap). - Owner can-post / can-edit banner on every Group Info page, sourced from `chat.participation_status` + admin flags. - Community LID resolution + comment-author resolution even when WhatsApp only stored the LID. - Group Edit History with profile-picture diff. *Calls* - Synthetic call reconstruction: calls that have no `message` row in their conversation get virtual rows so they render in every participant's chat timeline at the right position. Group voice chats appear inside the group's chat even when WhatsApp didn't write a message row for them. *Cross-case pivots* - Cross-Contact Analysis: pick 2+ contacts, instantly see shared groups, calls between them, file SHA-256 hashes any of them shared in common, cross @-mentions, every conversation any of them appears in. Owner is a first-class pickable contact. - FTS5 global search with sender / conversation / date / ghost filters; results panel as a sidebar inside the chat with click-to-jump highlights. *Reports & handoff* - Per-group landscape-A4 PDF/HTML report: case+evidence provenance banner with source-DB SHA-256 hashes, group identity, owner role, top contributors / forwarders, device platform split, mentions network, activity heatmap, calls, locations (with live-share start/final coords), message-type taxonomy (Type 64/82/90/92/112/116 etc. mapped to readable labels), bot activity, former members. - Per-contact report with section picker. 
- Offline HTML viewer bundle — single ZIP, opens from `file://` with no Python or server. WhatsApp-Web-style chat list, full message rendering, FTS5-equivalent search. The case officer / opposing counsel can open it in any browser. - Tagged-messages export with three modes (full / tagged-only / tagged ± N day buffer). **Forensic integrity.** Source `msgstore.db` opened with three independent guards (`?mode=ro&immutable=1` URI + `SQLITE_OPEN_READONLY` flag + `PRAGMA query_only=ON`). Source files SHA-256 hashed at ingest. Every action journaled to a hash-chained `chain_of_custody.jsonl` — each entry's hash includes the previous one, so the audit trail is tamper-evident, not just append-only. Original IDs preserved (`message.source_msg_id`, `media.source_media_row_id`, etc.) so every analysis row links back to its msgstore.db / wa.db origin. Timestamps shown local + UTC in brackets so case timezone is unambiguous. **Honest caveats.** Android-only. No automated tests yet. Schema research was done sample-by-sample so there are likely edge cases on WA versions / Business app / regional builds I haven't seen — Business app support is on the roadmap. Validated primarily against my own personal-device datasets. Built solo. PySide6 + SQLite + ~85K lines of Python. There's a deepwiki for it too (https://deepwiki.com/akhil-dara/WAInsight) if you want a deeper architectural read before cloning. Would genuinely value feedback from anyone who works WhatsApp cases regularly — especially edge cases or schema variants that break it. Issues / DMs / comments all welcome.
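The hash-chained `chain_of_custody.jsonl` lends itself to a short sketch. This is a minimal illustration of the tamper-evidence property, assuming a scheme in the same spirit as WAInsight's (its actual entry format may differ): each entry's hash covers the previous entry's hash, so editing any earlier entry invalidates every later one.

```python
import hashlib
import json

GENESIS = "0" * 64  # placeholder hash before the first entry

def append_entry(journal, action):
    """Append an audit entry whose hash covers the previous entry's hash."""
    prev = journal[-1]["hash"] if journal else GENESIS
    digest = hashlib.sha256(
        json.dumps({"action": action, "prev": prev}, sort_keys=True).encode()
    ).hexdigest()
    journal.append({"action": action, "prev": prev, "hash": digest})
    return journal

def verify(journal):
    """Walk the chain; any edited or reordered entry breaks verification."""
    prev = GENESIS
    for entry in journal:
        expected = hashlib.sha256(
            json.dumps({"action": entry["action"], "prev": prev},
                       sort_keys=True).encode()
        ).hexdigest()
        if entry["hash"] != expected or entry["prev"] != prev:
            return False
        prev = entry["hash"]
    return True

journal = []
append_entry(journal, "ingest msgstore.db")
append_entry(journal, "export group report")
assert verify(journal)
journal[0]["action"] = "tampered"  # edit an early entry...
assert not verify(journal)         # ...and the whole chain fails
```

This is stronger than an append-only log: an attacker with write access to the file cannot quietly rewrite history without recomputing every subsequent hash.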

r/Malware May 5

Pushed a new IOCX release (v0.7.1) that’s aimed at making the engine much harder to break during static analysis. The focus was adversarial behaviour: malformed binaries, corrupted PE structures, and intentionally hostile IOC‑like strings. If you work with weird samples, tooling pipelines, or large‑scale triage, this release makes IOCX more robust under hostile conditions. **New PE structural heuristics** Six new checks added to catch structural anomalies without blowing up the parser: * overlapping/misaligned sections * inconsistent optional headers (PE32 & PE32+)   * broken entrypoint mappings   * corrupted data directories   * malformed import tables   * general PE layout inconsistencies   These aren’t detections — they’re deterministic, reason‑coded structural signals to keep analysis stable. **Expanded adversarial PE corpus** Added a full suite of malformed and corrupted PEs, including: * broken RVAs / invalid addressing   * truncated Rich headers   * fake UPX names + packed‑lookalikes   * PE32/PE32+ hybrids   * “franken‑PEs” combining multiple faults   All outputs are snapshot‑validated to guarantee deterministic behaviour. **Adversarial coverage across all IOC categories** New hostile string fixtures now stress every extractor: * homoglyph + mixed‑script domains   * malformed URLs and schemes   * broken IPv4/IPv6   * noisy or near‑miss hashes   * invalid Base64   * adversarial crypto strings (incl. Base58Check)   * long/invalid Windows paths   * malformed emails   The goal: keep extraction predictable even when the input is intentionally messy. 
**Parser & extractor hardening** * stable on malformed PE structures   * structured, JSON‑safe error metadata   * improved domain/URL/crypto/hash extractors   * deterministic output across platforms **Links** GitHub: [https://github.com/iocx-dev/iocx](https://github.com/iocx-dev/iocx)   PyPI: [https://pypi.org/project/iocx/](https://pypi.org/project/iocx/) **Example** `pip install iocx` `iocx suspicious.exe -a full` If you’re doing malware triage, static analysis, or building automated pipelines that need predictable IOC extraction, v0.7.1 should be a noticeable stability bump. Happy to discuss edge cases or weird samples people want covered next.
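One of the "near-miss" classes the fixtures above target, dotted quads with out-of-range octets, is easy to reproduce: a bare regex happily matches them, while validating each candidate keeps extraction deterministic. A minimal sketch using the standard library's `ipaddress` module (IOCX's own extractor may work differently):

```python
import ipaddress
import re

# Loose candidate pattern: matches anything shaped like a dotted quad,
# including near-misses such as 999.1.1.1.
CANDIDATE = re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b")

def extract_ipv4(text):
    """Return only syntactically valid IPv4 addresses from text,
    discarding near-misses that a bare regex would accept."""
    out = []
    for candidate in CANDIDATE.findall(text):
        try:
            out.append(str(ipaddress.IPv4Address(candidate)))
        except ipaddress.AddressValueError:
            pass  # near-miss: octet out of range or otherwise invalid
    return out

sample = "c2 at 203.0.113.45, decoy 999.300.1.1, also 10.0.0.256"
extract_ipv4(sample)   # -> ['203.0.113.45']
```

The same validate-after-match pattern generalizes to the other extractor categories (hashes of the wrong length, malformed Base64, invalid Base58Check, and so on).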

Trail of Bits May 5
AI

We recently added a C/C++ security checklist to the Testing Handbook and challenged readers to spot the bugs in two code samples : a deceptively simple Linux ping program and a Windows driver registry handler. If you found the inet_ntoa global buffer gotcha or the missing RTL_QUERY_REGISTRY_TYPECHECK flag, nice work. If not, here’s a full walkthrough of both challenges, plus a deep dive into how the Windows registry type confusion escalates from a local denial of service to a kernel write primitive. Since we first released the new C/C++ security checklist, we also developed a new Claude skill, c-review . It turns the checklist into bug-finding prompts that an LLM can run against a codebase. It’s also platform and threat-model aware. Run these commands to install the skill: claude skills add-marketplace https://github.com/trailofbits/skills claude skills enable c-review --marketplace trailofbits/skills The Linux ping program challenge The Linux warmup challenge we showed you in the last blog post has an obvious command injection issue. #include <stdio.h> #include <s
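The inet_ntoa gotcha mentioned above is that the C function returns a pointer to a static buffer, so a second call silently overwrites the first result. A Python analogue can show the failure mode (the `ntoa_like` function below is purely illustrative; it deliberately mimics the C static-buffer behavior, which idiomatic Python never does):

```python
# Shared module-level buffer, standing in for C's static buffer.
_buf = bytearray(16)

def ntoa_like(addr):
    """Format a 4-tuple as dotted-quad, returning the SAME shared
    buffer on every call, just like inet_ntoa's static storage."""
    s = ".".join(str(octet) for octet in addr)
    _buf[:] = s.encode().ljust(16, b"\x00")
    return _buf  # same object every call!

a = ntoa_like((192, 0, 2, 1))
b = ntoa_like((198, 51, 100, 9))
a is b   # True: the first "result" was silently overwritten
```

In C, the fix is to use inet_ntop with a caller-supplied buffer, or to copy each result before the next call; the bug only bites when two conversions are alive at once, which is exactly the case in a ping program printing source and destination together.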

r/computerforensics May 5
CVE

I've been working on this for the last few months and just wanted to share. It's a free browser-based tool for inspecting and removing metadata from photos, videos, audio, PDFs and Office documents — and it has a small image-forensics lab built in. Live: [https://midgardmud.de/tools/exif/](https://midgardmud.de/tools/exif/) Why I built it: every other "EXIF remover" online asks you to upload your private files to a server. That's the opposite of privacy. So I wrote one that runs 100% in the browser via the File API — your file never leaves your device. F12 → Network tab → drop a 50 MB photo → you'll see zero outbound requests. What it does: • Strips metadata from JPG/PNG/WebP/GIF/HEIC/TIFF, MP4/MOV/MKV/WebM/AVI, MP3/FLAC/OGG/WAV, PDF, DOCX/XLSX/PPTX • Privacy Risk Score 0–100 with per-file breakdown so you see what's actually leaking • 4 one-click privacy profiles (Anonymous / Social-safe / Keep camera / GPS-only) • Forensics: ELA, JPEG-Ghost re-save heatmap, DQT compression fingerprint, Noise + CFA/Bayer pattern (defensible alternative to AI-image detectors), Copy-Move clone detection, embedded-thumbnail audit, RGB histogram, hex viewer, structure inspector • SHA-256 + perceptual hash (pHash) per file • ExifTool-compatible JSON export • Per-tag EXIF editor + GPS spoofing for JPEG • C2PA self-signed Content Credentials • Works fully offline as a PWA after first visit • 19 languages Stack: vanilla JS, no framework, no build step, ~12k lines. libheif WASM lazy-loaded for HEIC. Web Worker for big videos so the UI stays responsive. Happy to answer anything about how the parsers work, why I avoided React, or how the JPEG-Ghost / Copy-Move detection is implemented. Feedback very welcome.
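The pHash the tool reports belongs to the perceptual-hash family, whose point is that visually similar images hash close together. The simplest member, average hash, can be sketched in pure Python (the site's actual implementation is not shown here and almost certainly differs; this is only an illustration of why such hashes survive recompression while SHA-256 does not):

```python
def average_hash(pixels):
    """Compute a 64-bit average hash from an 8x8 grayscale grid.

    Each bit is 1 if the pixel is brighter than the grid's mean.
    Small pixel-level changes (recompression, mild edits) flip few
    bits, so similar images have a small Hamming distance.
    """
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    bits = 0
    for p in flat:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming(a, b):
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")

# Toy 8x8 gradient and a mildly "recompressed" copy:
img = [[(r * 8 + c) * 4 for c in range(8)] for r in range(8)]
tweaked = [[min(255, p + 2) for p in row] for row in img]
hamming(average_hash(img), average_hash(tweaked))  # small distance
```

A real pipeline first downsamples and grayscales the image to 8x8 (a step omitted here), and flags pairs below a Hamming-distance threshold as near-duplicates.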

Monday, May 4
Synack May 4

The Model We’ve Relied on Is Starting to Break Over the past 20 years, I’ve seen the threat landscape evolve from opportunistic attackers, to organized cybercrime, to nation-state campaigns. Each shift forced security teams to adapt. What’s happening right now is different. AI models coming out of Anthropic, OpenAI, Google, and X are rewriting the […] The post Sara AI Pentesting Is Now Generally Available: The Model Is Changing appeared first on Synack .

Synack May 4

Key Takeaways Over the past year, the conversation in security has changed faster than most programs have. AI is compressing attacker timelines. Environments are changing daily rather than quarterly. And the model most enterprises still rely on to validate security—periodic penetration testing—is starting to break under the weight of both. The real question isn’t whether […] The post The Shift to Continuous Security Validation: Why Detection Is No Longer Enough appeared first on Synack .

r/ReverseEngineering May 4

To reduce the amount of noise from questions, we have disabled self-posts in favor of a unified questions thread every week. Feel free to ask any question about reverse engineering here. If your question is about how to use a specific tool, or is specific to some particular target, you will have better luck on the [Reverse Engineering StackExchange](http://reverseengineering.stackexchange.com/). See also /r/AskReverseEngineering.

Story Overview