Cybersecurity News and Vulnerability Aggregator

Top Cybersecurity Stories Today

The Hacker News May 8

A previously undocumented Linux implant codenamed Quasar Linux RAT (QLNX) is targeting developers' systems to establish a silent foothold and facilitate a broad range of post-compromise functionality, such as credential harvesting, keylogging, file manipulation, clipboard monitoring, and network tunneling. "QLNX targets developers and DevOps credentials across the software supply chain,"

CERT/CC 19h ago

Overview
A privilege escalation vulnerability has been discovered in Linux kernel version 4.17 (released June 2018) and later. Many popular distributions and Linux-based containers are affected. The vulnerability was publicly disclosed on April 29, 2026, has been assigned CVE-2026-31431, and is commonly referred to as "Copy Fail."
Description
The Linux kernel, since version 4.17, includes the algif_aead module, which provides user-space access to authenticated encryption with associated data (AEAD) operations via the AF_ALG interface. Depending on the Linux distribution or custom-built kernel, this module may be available as a loadable kernel module or compiled directly into the kernel. According to the https://copy.fail disclosure statement: "An unprivileged local user can write 4 controlled bytes into the page cache of any readable file on a Linux system, and use that to gain root." The vulnerability is caused by a logic flaw in the Linux kernel's algif_aead (AF_ALG) implementation. An unprivileged local user can reliably perform a controlled 4-byte write into the page cache of any readable file, without race conditions or timing dependencies. Critically, the corrupted page is not marked dirty, so the modified contents are never written back to disk. The underlying file remains unchanged, allowing the in-memory corruption to bypass checksum and file-integrity verification mechanisms. Because subsequent reads are served from the page cache, an attacker can target a setuid binary and modify it
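Because the corruption lives only in the page cache while the on-disk file stays clean, one best-effort way defenders could look for divergence is to hash a file twice: once as served (likely from cache), and again after asking the kernel to drop the cached pages. A minimal sketch — note that POSIX_FADV_DONTNEED is purely advisory, so this is not a guaranteed detection for this bug:

```python
import hashlib
import os

def hash_fd(fd):
    """SHA-256 of the file's contents as currently served to reads."""
    h = hashlib.sha256()
    os.lseek(fd, 0, os.SEEK_SET)
    while chunk := os.read(fd, 65536):
        h.update(chunk)
    return h.hexdigest()

def cache_matches_disk(path):
    """Compare a (possibly cached) read against a re-read after advising the
    kernel to evict the file's cached pages. A mismatch suggests the page
    cache and the on-disk contents have diverged."""
    fd = os.open(path, os.O_RDONLY)
    try:
        cached = hash_fd(fd)  # likely served from page cache
        # Advisory only: ask the kernel to drop this file's cached pages
        os.posix_fadvise(fd, 0, 0, os.POSIX_FADV_DONTNEED)
        fresh = hash_fd(fd)   # ideally re-read from disk
    finally:
        os.close(fd)
    return cached == fresh
```

For an untampered file the two hashes agree; a clean second read after eviction is exactly what the non-dirty-page behavior described above would produce for a tampered one.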

The Hacker News 20h ago

Threat hunters have flagged a previously undocumented Brazilian banking trojan dubbed TCLBANKER that's capable of targeting 59 banking, fintech, and cryptocurrency platforms. The activity is being tracked by Elastic Security Labs under the moniker REF3076. The malware family is assessed to be a major update of Maverick, which is known to leverage a worm called SORVEPOTEL to spread via

Latest

Saturday, May 9
r/cybersecurity Just now

I'm an entry lead for LLM08: [https://www.linkedin.com/posts/rocklambros_owasp-llmsecurity-aisecurity-activity-7457476594241011712-0EzC?utm_source=share&utm_medium=member_desktop&rcm=ACoAAFcmwXkBV3xIyoq0I8IaYBBna3xA_h_bN-U](https://www.linkedin.com/posts/rocklambros_owasp-llmsecurity-aisecurity-activity-7457476594241011712-0EzC?utm_source=share&utm_medium=member_desktop&rcm=ACoAAFcmwXkBV3xIyoq0I8IaYBBna3xA_h_bN-U)

r/cybersecurity 3h ago

We built [gateweb.io](http://gateweb.io) - a local SWG with HTTPS inspection that doesn't send your traffic through someone else's cloud. Free for up to 5 users. Curious what the security community thinks about the local-first approach.

WIRED 4h ago

Plus: Meta officially kills encrypted Instagram DMs, the Trump administration targets “violent left wing extremists,” leaked documents reveal Russia's school for elite hackers, and more.

The Hacker News 7h ago
CVE

cPanel has released updates to address three vulnerabilities in cPanel and Web Host Manager (WHM) that could be exploited to achieve privilege escalation, code execution, and denial-of-service. The list of vulnerabilities is as follows - CVE-2026-29201 (CVSS score: 4.3) - An insufficient input validation of the feature file name in the "feature::LOADFEATUREFILE" adminbin call that could result

r/blueteamsec 7h ago

You can run the Docker container, access the panel, and start a web vulnerability scan: [https://github.com/CSPF-Founder/taka-docker](https://github.com/CSPF-Founder/taka-docker) It combines a rule-based detection engine with optional AI-assisted verification, an intelligent crawler, and a web UI for managing scans and reviewing results. * **Broad vulnerability coverage** - detects SQL injection, XSS, SSRF, LFI, command injection, XXE, SSTI, open redirects, CORS and CSRF issues, insecure headers and cookies, cache poisoning, request smuggling, GraphQL flaws, and more. * **AI-assisted verification** - pluggable LLM backends (Anthropic, OpenAI) review the evidence behind each finding to cut down on false positives, and can optionally suggest additional payloads to confirm a vulnerability. * Tested against multiple vulnerable web apps; it detected the majority of the vulnerabilities with near-zero false positives. **Feedback and suggestions are welcome.**

r/cybersecurity 8h ago

Hi everyone, Disclosure: I own the project linked below. I’m sharing it because I’m working on the technical side of NIS2 evidence collection, not to pitch services or solicit DMs. Project context: [https://www.softwareapp-hb.de/projekte.html](https://www.softwareapp-hb.de/projekte.html) The security engineering problem I’m looking at is this: NIS2 Article 21 requires organizations to address areas like risk management, incident handling, business continuity, supply-chain security, vulnerability handling, access control, asset management, MFA, secure communications, and cyber hygiene. In practice, a lot of “evidence” for these areas still ends up as screenshots, policy PDFs, manual exports, spreadsheets, or consultant-maintained checklists. That may satisfy some audit workflows, but from a security operations perspective it has obvious weaknesses: evidence goes stale, checks are difficult to reproduce, and there is often a gap between what the policy says and what the infrastructure actually looks like. I’m building an open-source, self-hostable platform that tries to map NIS2 requirements to concrete technical checks and produce traceable evidence from actual system state. The current design focus is not to replace GRC platforms, legal review, auditors, or an ISMS. The goal is narrower: make certain parts of the evidence layer more repeatable, technical, and defensible. Examples of evidence areas where this might be useful: * asset inventory and system classification * patch/vulnerability state * account and privilege configuration * MFA and authentication posture * backup existence and test evidence * logging and monitoring configuration * firewall and network exposure checks * incident-response process evidence * technical control mappings to NIS2 Article 21 The hard question is where automation helps and where it becomes misleading. For example, a system can verify that logging is enabled, but not necessarily that logs are reviewed effectively. 
A tool can collect patch state, but not decide whether risk acceptance was appropriate. It can validate backup configuration, but not prove that recovery objectives are realistic unless restore tests are captured properly. For people working in security engineering, SOC, vulnerability management, infrastructure, audit support, or compliance operations: Where do you think technical automation genuinely improves NIS2 evidence quality? And where do you think compliance automation creates false confidence? I’m especially interested in the boundary between measurable technical state and areas that still require human assessment, process maturity, or auditor judgment.
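As a sketch of what "traceable evidence from actual system state" could look like in practice, here is a hypothetical evidence record: run a concrete check, and wrap its raw output with a timestamp and a hash so the collection is reproducible and tamper-evident. The check ID and schema are illustrative, not the linked project's actual format:

```python
import datetime
import hashlib
import json
import subprocess

def collect_evidence(check_id, argv):
    """Run a technical check and wrap its raw output in a traceable evidence record."""
    proc = subprocess.run(argv, capture_output=True, text=True)
    return {
        "check_id": check_id,
        "command": argv,
        "collected_at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "exit_code": proc.returncode,
        # Hash of the raw output makes the evidence tamper-evident after the fact
        "raw_output_sha256": hashlib.sha256(proc.stdout.encode()).hexdigest(),
        "raw_output": proc.stdout,
    }

# e.g. capture the running kernel version as (weak) patch-state evidence
record = collect_evidence("nis2-a21-patch-state", ["uname", "-r"])
print(json.dumps({k: v for k, v in record.items() if k != "raw_output"}, indent=2))
```

The record itself still says nothing about whether the state is acceptable — which is precisely the automation boundary the post is asking about.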

r/cybersecurity 12h ago

[https://www.virustotal.com/gui/file/f6d2e7092831b983318b685132a19567ff5e6428665255738c4e5a63371bcce3/behavior](https://www.virustotal.com/gui/file/f6d2e7092831b983318b685132a19567ff5e6428665255738c4e5a63371bcce3/behavior) So I would love to understand why this is happening, as it's not an executable and only one sandbox is actually "running" it.

Friday, May 8
r/netsec 14h ago

Existing benchmarks for LLM-based vulnerability detection compress model performance into a single metric, which fails to reflect the distinct priorities of different stakeholders. For example, a CISO may emphasize high recall of critical vulnerabilities, an engineering leader may prioritize minimizing false positives, and an AI officer may balance capability against cost. To address this limitation, we introduce SecLens-R, a multi-stakeholder evaluation framework structured around 35 shared dimensions grouped into 7 measurement categories. The framework defines five role-specific weighting profiles: CISO, Chief AI Officer, Security Researcher, Head of Engineering, and AI-as-Actor. Each profile selects 12 to 16 dimensions with weights summing to 80, yielding a composite Decision Score between 0 and 100. We apply SecLens-R to evaluate 12 frontier models on a dataset of 406 tasks derived from 93 open-source projects, covering 10 programming languages and 8 OWASP-aligned vulnerability categories. Evaluations are conducted across two settings: Code-in-Prompt (CIP) and Tool-Use (TU). Results show substantial variation across stakeholder perspectives, with Decision Scores differing by as much as 31 points for the same model. For instance, Qwen3-Coder achieves an A (76.3) under the Head of Engineering profile but a D (45.2) under the CISO profile, while GPT-5.4 shows a similar disparity. These findings demonstrate that vulnerability detection is inherently a multi-objective problem and that stakeholder-aware evaluation provides insights that single aggregated metrics obscure.
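As a rough illustration of the profile-weighted scoring idea — the dimension names are made up, and the summary does not specify how weights summing to 80 normalize onto a 0-100 scale, so a weighted mean over [0, 100] per-dimension scores is assumed here:

```python
def decision_score(dimension_scores, profile_weights):
    """Weighted mean over the dimensions a profile selects.

    dimension_scores: dict of dimension -> score in [0, 100]
    profile_weights:  dict of dimension -> weight (a profile's weights sum to 80)
    Only the profile's selected dimensions contribute, so the same model
    can score very differently under different stakeholder profiles.
    """
    total_weight = sum(profile_weights.values())
    weighted = sum(profile_weights[d] * dimension_scores[d] for d in profile_weights)
    return weighted / total_weight

# Hypothetical model scores and a CISO-like profile emphasizing critical recall
scores = {"recall_critical": 90, "false_positive_rate": 40, "cost_efficiency": 70}
ciso_like = {"recall_critical": 50, "false_positive_rate": 20, "cost_efficiency": 10}
print(round(decision_score(scores, ciso_like), 1))
```

Shifting the weight mass between dimensions is what produces the 31-point swings the paper reports for the same model under different profiles.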

r/netsec 18h ago

As a maintainer, this is Cilium's take on how we secure our GitHub Actions in the OSS project. A few highlights: * SHA pinning every GitHub Action * Separating trusted vs untrusted code paths in `pull_request_target` * Isolating CI credentials from production release credentials * Cosign signing + SBOM attestations * Vendoring Go dependencies to make supply chain changes visible in review * Treating blast radius reduction as the core design principle and a few gaps: * no SLSA provenance yet * remaining mutable `@main` references * no dependency review at PR time * missing govulncheck integration
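To illustrate the SHA-pinning highlight: a mutable tag like `@v4` or `@main` can be retargeted after review, while a full 40-character commit SHA cannot. A minimal, hypothetical linter (not Cilium's actual tooling) that flags workflow `uses:` references not pinned to a full SHA:

```python
import re

# A reference is "pinned" only if it ends in a full 40-char lowercase hex commit SHA
PINNED = re.compile(r"uses:\s*\S+@[0-9a-f]{40}\b")
USES = re.compile(r"uses:\s*(\S+)")

def unpinned_actions(workflow_text):
    """Return action references that are not pinned to a full commit SHA."""
    findings = []
    for line in workflow_text.splitlines():
        m = USES.search(line)
        if m and not PINNED.search(line):
            findings.append(m.group(1))
    return findings

print(unpinned_actions(
    "      - uses: actions/checkout@v4\n"
    "      - uses: actions/setup-go@d35c59abb061a4a6fb18e82ac0862c26744d6ab5"
))
```

Here only the `@v4` reference is reported; the SHA-pinned step passes.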

r/blueteamsec 19h ago
CVE

I published SunnyDayBPF, an eBPF-based research project focused on post-syscall user-buffer telemetry deception. The research is about telemetry integrity and detection engineering. Core question: Can a user-space security or logging agent successfully read telemetry, but still observe a modified version of that data before parsing and forwarding it to a SIEM, EDR, audit backend, or detection pipeline? SunnyDayBPF focuses on the trust boundary between read-like syscall completion and user-space telemetry parsing. Repository: [https://github.com/azqzazq1/SunnyDayBPF](https://github.com/azqzazq1/SunnyDayBPF) SunnyDayBPF was originally proposed, named, and publicly documented by Azizcan Daştan. To the best of my knowledge, it is the first public research framing of post-syscall user-buffer telemetry deception with eBPF under this technique name. This is published as authorized lab research and defensive telemetry integrity analysis, not as a production bypass framework. I’d especially appreciate feedback from defenders on: * eBPF monitoring ideas * telemetry integrity validation * cross-source correlation * detection engineering approaches * limitations and prior art

The Hacker News 23h ago

Cybersecurity researchers have discovered fraudulent apps on the official Google Play Store for Android that falsely claimed to offer access to call histories for any phone number, only to trick users into joining a subscription that provided fake data and incurred financial loss. The 28 apps have collectively racked up more than 7.3 million downloads, with one of them alone accounting for over

r/Malware May 8

JDownloader is compromised! * The replaced malicious executable contains the official and benign JDownloader in its resources, along with an XOR-encrypted blob also available in resources * The encrypted blob is decrypted and executed after an 8-minute wait to avoid sandbox noise; the next stage also contains several XOR-encrypted resources and the official Python installer * After decryption, the resources contain a PyArmor-encrypted file and the PyArmor runtime * Delivers sophisticated Python remote access malware See the AnyRun execution chain, including the 8-minute wait before the payload starts: [https://app.any.run/tasks/e0cecc2d-5571-49fe-a549-cc7d1b8b5908](https://app.any.run/tasks/e0cecc2d-5571-49fe-a549-cc7d1b8b5908) IOCs: * Initial delivered installer -> 5a6636ce490789d7f26aaa86e50bd65c7330f8e6a7c32418740c1d009fb12ef3 * Stage 2 payload -> 77a60b5c443f011dc67ace877f5b2ad7773501f3d82481db7f4a5238cf895f80 * PyArmor encrypted blob: 5fdbee7aa7ba6a5026855a35a9fe075967341017d3cb932e736a12dd00ed590a * hxxps://parkspringshotel[.]com/m/Lu6aeloo.php (most likely another compromised URL) * hxxps://auraguest[.]lk/m/douV2quu.php (most likely another compromised URL)
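For context, the repeating-key XOR used by stages like these is symmetric: the same routine both encrypts and decrypts, so once the key is recovered from the loader, the blobs fall out directly. A generic sketch (the key and bytes below are illustrative, not taken from this sample):

```python
def xor_bytes(blob: bytes, key: bytes) -> bytes:
    """Repeating-key XOR: applying it twice with the same key is the identity."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(blob))

key = b"\x13\x37"                     # illustrative key, not the sample's
encrypted = xor_bytes(b"MZ\x90\x00", key)
decrypted = xor_bytes(encrypted, key)  # round-trips to the original bytes
print(decrypted)
```

In practice the key is often recoverable by XORing a known plaintext (e.g. a PE header's `MZ` magic) against the start of the blob.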

r/cybersecurity May 8

I run a few small SaaS platforms and static websites. When my websites were first launched, I didn't pay much attention because there were only very basic scanning attempts, like trying to load WordPress wp-admin.php pages. However, starting a few weeks ago, I've noticed attempts to perform SQL injections or extract server information through feedback forms, login forms, and other POST requests. These requests are coming in every hour. After checking hundreds of log entries, they seem to follow the same patterns as Burp Suite’s automated scanning features. When I double-checked with Claude, it also suggested these look like scans from Burp or ZAP. (I've attached images of two log entries: https://cln.sh/VSw3xy6Q) About once a week, in addition to these automated requests, I occasionally see attacks that aren't automated scans but seem to actually consider the website's structure. (Last week, there was a 30-minute attempt specifically trying to bypass the CAPTCHA on the login form.) I'm very interested in cybersecurity, but since I'm just a student still learning and without professional experience, I'm not very familiar with attack attempts or patterns on live services. So, I have a few questions: 1. Are attack attempts common even for small websites (less than 50 daily visitors)? 2. I understand that Cloudflare blocks most SQL injection attempts before they even reach the server. Is this feature actually effective in practice? 3. Besides these two questions, if anyone working in this field has any tips or other useful info, I’d really appreciate it if you could share. Lastly, this post might feel a bit awkward or sound like it was written by an AI. I live in a non-English speaking country and my English isn't great, so I used a translator for this post. Please bear with me.

The Hacker News May 8

The dark secret of enterprise security operations is that defenders have quietly institutionalized the practice of not looking. This is not just anecdotal, but rather backed by a recent report investigating more than 25 million security alerts, including informational and low-severity, across live enterprise environments. The dataset behind these findings includes 10 million monitored

WIRED May 8

With the launch of the first 16 satellites, Russia begins construction of a network for satellite internet that aims to cover the entire country by 2030. But getting there won’t be easy.

The Hacker News May 8
CVE

Details have emerged about a new, unpatched local privilege escalation (LPE) vulnerability impacting the Linux kernel. Dubbed Dirty Frag, it has been described as a successor to Copy Fail (CVE-2026-31431, CVSS score: 7.8), a recently disclosed LPE flaw impacting the Linux kernel that has since come under active exploitation in the wild. The vulnerability was reported to Linux kernel maintainers

Krebs on Security May 8

An ongoing data extortion attack targeting the widely-used education technology platform Canvas disrupted classes and coursework at school districts and universities across the United States today, after a cybercrime group defaced the service’s login page with a ransom demand that threatened to leak data from 275 million students and faculty across nearly 9,000 educational institutions. A screenshot shared by a reader showing the extortion message that was shown on the Canvas login page today. Canvas parent firm Instructure responded to today’s defacement attacks by disabling the platform, which is used by thousands of schools, universities and businesses to manage coursework and assignments, and to communicate with students. Instructure acknowledged a data breach earlier this week, after the cybercrime group ShinyHunters claimed responsibility and said they would leak data on tens of millions of students and faculty unless paid a ransom. The stated deadline for payment was initially set at May 6, but it was later pushed back to May 12. In a statement on May 6, Instructure said the investigation so far shows the stolen information includes “certain identifying information of users at affected institutions, such as names, email addresses, and student ID numbers, as well as messages among users.” The company said it found no evidence the breached data included more sensitive information, such as passwo

Thursday, May 7
WIRED May 7

Chrome users were caught off guard by a 4-GB Google AI model baked into Chrome, sparking privacy concerns. The good news: You can easily uninstall it. The bad? You might not want to.

Cloudflare May 7

This afternoon, we sent the following email to our global team. One of our core values at Cloudflare is transparency, and we believe it's important that you hear this directly from us because it’s a major moment at Cloudflare. Team: We are writing to let you know directly that we’ve made the decision to reduce Cloudflare’s workforce by more than 1,100 employees globally. The way we work at Cloudflare has fundamentally changed. We don’t just build and sell AI tools and platforms. We are our own most demanding customer. Cloudflare’s usage of AI has increased by more than 600% in the last three months alone. Employees across the company from engineering to HR to finance to marketing run thousands of AI agent sessions each day to get their work done. That means we have to be intentional in how we architect our company for the agentic AI era in order to supercharge the value we deliver to our customers and to honor our mission to help build a better Internet for everyone, everywhere. Today is a hard day. This decision unfortunately means saying goodbye to teammates who have contributed meaningfully to our mission and to building Cloudflare into one of the world’s most successful companies. We want to be clear that this decision is not a reflection of the individual work or talent of those leaving us. Instead, we are reimagining every internal process, team, and role across the company. Today’s actions are not a cost-cutting exercise or an assessment of individuals’ performance; they are about Cloudflare defining how a world-class, high-growth company operates and creates value in the agentic AI era. This is a moment we need to own as founders and leaders of the company. Matthew has personally sent out every offer letter we've extended. It is a practice he has always looked forward to because it represented our growth and the incredible talent joining our mission. It didn’t feel rig

The Hacker News May 7
CVE

Ivanti is warning that a new security flaw impacting Endpoint Manager Mobile (EPMM) has been exploited in limited attacks in the wild. The high-severity vulnerability, CVE-2026-6973 (CVSS score: 7.2), is a case of improper input validation affecting EPMM before versions 12.6.1.1, 12.7.0.1, and 12.8.0.1. It allows "a remotely authenticated user with administrative access to achieve remote code

The Hacker News May 7

Bad week. Turns out the easiest way to get hacked in 2026 is still the same old garbage: shady packages, fake apps, forgotten DNS junk, scam ads, and stolen logins getting dumped into Discord channels like it’s normal. Some of these attack chains don’t even feel sophisticated anymore. More like some tired guy with a Telegram account and too much free time. The worst part is how often this stuff

The Hacker News May 7

Having an incident response retainer, or even a pre-approved external incident response firm, is not the same as being ready for an incident. A retainer means someone will answer the phone. Operational readiness determines whether that team can do meaningful work the moment they do. That distinction matters far more than many organizations realize. In the first hours of a security incident

The Hacker News May 7

Cybersecurity researchers have discovered three packages on the Python Package Index (PyPI) repository that are designed to stealthily deliver a previously unknown malware family called ZiChatBot on Windows and Linux systems. "While these wheel packages do implement the features described on their PyPI web pages, their true purpose is to covertly deliver malicious files," Kaspersky

The Hacker News May 7
CVE

A dozen critical security vulnerabilities have been disclosed in the vm2 Node.js library that could be exploited by bad actors to break out of the sandbox and execute arbitrary code on susceptible systems. vm2 is an open-source library used to run untrusted JavaScript code inside a secure sandbox by intercepting and proxying JavaScript objects to prevent sandboxed code from accessing the host

Wednesday, May 6
The Hacker News May 6

Cybersecurity researchers have exposed a new Mirai-derived botnet that self-identifies as xlabs_v1 and targets internet-exposed devices running Android Debug Bridge (ADB) to enlist them in a network capable of carrying out distributed denial-of-service (DDoS) attacks. Hunt.io, which detailed the malware, said it made the discovery after identifying an exposed directory on a Netherlands-hosted

Cloudflare May 6

On May 5, 2026, at roughly 19:30 UTC, DENIC, the registry operator for the .de country-code top-level domain (TLD), started publishing incorrect DNSSEC signatures for the .de zone. Any validating DNS resolver receiving these signatures was required by the DNSSEC specification to reject them and return SERVFAIL to clients, including 1.1.1.1, the public DNS resolver operated by Cloudflare. The country-code top-level domain for Germany, .de, is one of the largest on the Internet. On Cloudflare Radar, it consistently ranks among the most broadly queried TLDs globally. An outage at this level of the DNS hierarchy has the potential to make millions of domains unreachable. In this post, we’ll walk through what we saw, the impact of these events, and how we applied temporary mitigations while DENIC resolved the issue.
How DNSSEC works
DNSSEC (Domain Name System Security Extensions) adds cryptographic authentication to DNS. When a zone is signed with DNSSEC, each set of records is accompanied by a digital signature known as an RRSIG record that lets a resolver verify the records haven’t been tampered with. Unlike encrypted DNS protocols, such as DNS over TLS (DoT) and DNS over HTTPS (DoH), DNSSEC is about integrity, not privacy. The records are visible, but their authenticity can be proven. What makes DNSSEC unique is that the signatures travel together with the records they protect. This means int
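The validation behavior described here can be sketched in miniature. Real DNSSEC uses public-key RRSIG signatures (RSA/ECDSA); the sketch below substitutes an HMAC purely as a stand-in signature to show why a bad signature forces a validating resolver to answer SERVFAIL instead of serving the records:

```python
import hashlib
import hmac

# Stand-in for the zone's signing key; real DNSSEC uses asymmetric keys,
# with the public half published as a DNSKEY record.
ZONE_KEY = b"demo-zone-key"

def sign_rrset(rrset: bytes) -> bytes:
    """Produce a stand-in 'RRSIG' over a record set."""
    return hmac.new(ZONE_KEY, rrset, hashlib.sha256).digest()

def validating_resolve(rrset: bytes, rrsig: bytes):
    """A validating resolver must refuse records whose signature fails
    to verify, returning SERVFAIL to the client instead."""
    if not hmac.compare_digest(sign_rrset(rrset), rrsig):
        return "SERVFAIL"
    return rrset

rrset = b"example.de. 300 IN A 192.0.2.1"
print(validating_resolve(rrset, sign_rrset(rrset)))   # record served
print(validating_resolve(rrset, b"\x00" * 32))        # bad signature -> SERVFAIL
```

This is exactly the position 1.1.1.1 was in: the .de records themselves were fine, but the published signatures did not verify, so the spec left validating resolvers no choice but SERVFAIL.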

r/netsec May 6

I recently re-tested an old Binance API finding I had reported through Bugcrowd. The original issue was about Binance API IP whitelisting and derived `listenKey` stream credentials. At the time, a `listenKey` could be created from a whitelisted IP and then used from a non-whitelisted IP to consume private user data streams. That did not allow trading, withdrawals, or account takeover. But it did allow real-time access to sensitive private stream data such as balances, orders, executions, positions, timing, and strategy behavior. The core security argument was: >A derived credential should not be more portable than the credential that created it. The report was rejected as “Social Engineering” / “Not Applicable”. I disagreed, because the relevant threat model was not “convince the user to send a token”. The realistic threat model was supply-chain compromise: malicious code running inside a trusted bot server, CI job, dependency, IDE workspace, or trading environment where API keys already exist. I re-tested the behavior on May 5, 2026. Result: The old behavior appears to be gone. Spot and Margin no longer use the old `listenKey` model. Futures still uses `listenKey`, but now appears to enforce the API key IP whitelist correctly. From a whitelisted IP the calls worked; from non-whitelisted Mullvad exits they failed with the expected IP restriction error. That is good for users. But it raises an uncomfortable disclosure-process question: If a finding is “not applicable” enough to reject, not acknowledge, and not reward — but technical enough to later fix — what should a healthy disclosure process do? 
Full technical write-up, timeline, re-test setup, and raw outputs: [https://blog.technopathy.club/binance-fixed-the-ip-whitelist-gap-the-disclosure-process-is-still-broken](https://blog.technopathy.club/binance-fixed-the-ip-whitelist-gap-the-disclosure-process-is-still-broken) I am mainly interested in the process question here: When a rejected report later disappears from production, should the program re-open it, acknowledge it, partially reward it, or leave it closed unless the researcher can prove direct causality?

WIRED May 6

It's not just you. Scammers, hackers, and other cybercriminals are complaining about “AI shit” flooding platforms where they discuss cyberattacks and other illegal activity.

The Hacker News May 6

For nearly 20 years, we at The Hacker News have mostly told scary stories about cyberspace — big hacks, broken systems, and new threats. But behind every headline, there’s a quieter, better story. It’s the story of leaders making tough calls under pressure, teams building smarter defenses, and security products that keep hunting threats 24/7 — even when it’s hard. Most of the time, this work is

The Hacker News May 6

Analysts recently confirmed what identity security teams have quietly feared: AI agents are being deployed faster than enterprises can govern them. In their inaugural Market Guide for Guardian Agents, Gartner states that “enterprise adoption of AI agents is accelerating, outpacing maturity of governance policy controls.” Enterprise leaders can request access to the Gartner Market Guide for

r/computerforensics May 6
APT

BSides can often be the one place where you can find the most obscure talks about a technical detail. For example, "Edge Device Memory Forensics" by Richard Tuffin or maybe "Forensic analysis of privacy focused mobile browsers" by Lorena Carthy and Ruben Jernslett. Finding them is the hard part. I built a website that tracks all BSides chapters, all 8575 videos, fetches transcripts, indexes them by technology, speakers, events, tools, protocols, standards, and much more. It is free, no login, no ads, no tracking beyond basic visits (no cookies). And I'm planning to keep it so. Check out the forensics talks at [https://allbsides.com/talks.html?q=forensics](https://allbsides.com/talks.html?q=forensics), and let me know if you find the site useful or spot anything missing. Genuinely happy to receive feedback!

The Hacker News May 6

Google has announced expanded Binary Transparency for Android as a way to safeguard the ecosystem from supply chain attacks. "This new public ledger ensures the Google apps on your device are exactly what we intended to build and distribute," Google's product and security teams said. The initiative builds upon the foundation of Pixel Binary Transparency, which Google introduced in October 2021

r/Malware May 6

Someone hacked the deadmau5 Discord server by infecting an admin with malware. Said admin gave me the malware sample. Used Claude Sonnet 4.6 in combination with the nyxstrike MCP framework to decompile and decrypt their obfuscated code, finding a goldmine. Title speaks for itself. The Discord bot token could possibly have led to their CNC. But logging into the bot token to check for communications and see where it leads breaks 2 federal laws alone that I can think of. I did validate the token was live, however, and matched it to a bot account. I also discovered the webhook and token that were in the malware; both have been nuked (not by me). So, I checked the domain they've been using, and they recompiled and reuploaded it. So it's 26 bytes larger. I suspect they replaced the webhook URL and the bot token with fresh ones, and suspect further that Discord nuked the previous ones themselves. Nevertheless, I have personally not seen malware like this on GitHub, so this must have been private and not some skid-level stuff. I know it was Turkish (at least the devs were). [GitHub](https://github.com/destiny-creates/goxlr.net-malware) link attached for the source code, including the deobfuscated malware classes and the analysis/report. Don't flame me, it's still pretty cool 😆. Cracking the ZKM encryption would have taken weeks (I'm a Python guy, not a JS guy). Nyxstrike + Sonnet 4.6 = 1.5 hours and it's cracked.

Troy Hunt May 6

It's a fascinating display of leverage: the ShinyHunters folks, with very limited resources and experience (their demographic will be teenagers to their early 20s), consistently gaining access to the data of massive brands. Not through technical ingenuity alone (although I'm sure there's a portion of that), but primarily through good ol' social engineering. That's coming through in the disclosure notices from the impacted companies, and Mandiant has a good write-up of it too: These operations primarily leverage sophisticated voice phishing (vishing) and victim-branded credential harvesting sites to gain initial access to corporate environments by obtaining single sign-on (SSO) credentials and multi-factor authentication (MFA) codes. Question now is how long their run will go for. There's a very predictable ending if things keep going in this direction but right now, they show little sign of abating.

Tuesday, May 5
r/computerforensics May 5

Hi all — finally pushed this public after several months of work. Sharing here because this subreddit is where I'd want feedback from before anywhere else.

**WAInsight** — https://github.com/akhil-dara/WAInsight (MIT)

**Scope.** It doesn't extract data from a phone — that's a separate step with whatever acquisition workflow you already use. WAInsight starts after acquisition. Point it at a folder containing `msgstore.db` + `wa.db` + `Media/` + `Avatars/` and it ingests everything through a 29-stage pipeline into a normalised `analysis.db` (47 indexed tables), then opens a 30-page Qt desktop UI to actually work the case.

**Why.** I wanted analysis to be the primary deliverable, not the report. So the UI is built around browsing every chat exactly like opening WhatsApp itself — home-style conversation list, bubbles with edits / revokes / replies / reactions / receipts / forwarded badges / mention chips / pinned-message strip — with forensic provenance one click away on every bubble. Reports are a snapshot of what was found, not the destination.

**Capabilities, grouped by what you're actually trying to do:**

*Reading the timeline*
- Forensic ℹ button on every bubble: msgstore source IDs, every SQL row that fed the bubble, origination flags decoded, per-recipient receipt timeline (delivered / read / played, ms-precise).
- Ghost-message recovery from `message_quoted_text` (deleted-for-everyone messages reconstructed inline next to the revoked bubble).
- Edit history per message — every revision side-by-side.
- Reply chains as click-through badges with cross-conversation "Go to original" jumps.
- 60+ system events decoded (group / security / admin / privacy / business / ephemeral) instead of opaque type codes.
- Calendar with per-day message counts shown flight-fare style; click+drag to range-filter.
- Windowed-flat virtual scroller for chats with 5K+ messages — jumping to message #47K in a 47K-message chat is O(1).

*Media analysis*
- Folder-shaped Media Dashboard that scales to 200K+ rows at `file://` (sharded AVIF thumbs + chunked metadata + vendored UI engine, sub-millisecond bitset crossfilter). Cascading filters: conversation × sender × MIME × extension × status × date.
- Perceptual visual search across the whole case — drop a screenshot, get Exact / Near-Exact / Near-Duplicate / Template-Match tiers (pHash + dHash + edge-map).
- Camera-original → WhatsApp tracking: feed an original from `DCIM/`, find every chat that photo was sent in even after WhatsApp's recompression changed the SHA-256.
- View-once images and voice notes downloadable from the bubble even after on-device expiry (CDN URL + media_key, AES-CBC + HMAC).
- Hash-link auto-rescue: missing media that shares a SHA-256 with another message's on-disk media gets auto-resolved (tagged `recovery_method='hash_linked'`, never confused with a real local copy).
- `wa.db` thumbnail blob rendered as fallback when even the bytes are gone.
- HD/SD twin pairs surfaced inline with cross-jumps.
- Cross-chat propagation: right-click any media → every chat that shared the same SHA-256, chronologically. Says where the bytes were *first seen*, not just where they were last forwarded.
- 12-state media recovery taxonomy preserved in every report and dashboard (`original` / `downloaded` / `hash_linked` / `orphan_recovered` / etc.).
- Orphaned-media browser: files in `Media/` with no surviving message row + auto-rescue against surviving message hashes.

*Identity & devices*
- Per-message platform attribution from `key_id` — every bubble carries an inline tag (Android / iPhone / Web/Desktop / Companion #N), confidence-scored. The classifier was its own separate research piece — collected `key_id` samples across real devices on Android, iPhone, Web, and linked companions until the rules held up. Powers the Group Report's *Device Platform Usage* breakdown and the contact's *Device Sessions* tab.
- Unified contact registry merged from 5 sources (`jid_map` ∪ `wa_contacts` ∪ `lid_display_name` ∪ group labels ∪ mention names) so every JID resolves to one canonical identity.
- Owner-aware everywhere — `sender_id IS NULL` for owner messages gets joined to `case_metadata` so owner activity never surfaces as "Unknown" anywhere in the UI or reports.

*Groups & communities*
- Past-participant reconstruction from 3 sources: `group_past_participant` ∪ `group_member.is_current=0` ∪ message-presence inference (catches members the roster purged after a long enough gap).
- Owner can-post / can-edit banner on every Group Info page, sourced from `chat.participation_status` + admin flags.
- Community LID resolution + comment-author resolution even when WhatsApp only stored the LID.
- Group Edit History with profile-picture diff.

*Calls*
- Synthetic call reconstruction: calls that have no `message` row in their conversation get virtual rows so they render in every participant's chat timeline at the right position. Group voice chats appear inside the group's chat even when WhatsApp didn't write a message row for them.

*Cross-case pivots*
- Cross-Contact Analysis: pick 2+ contacts, instantly see shared groups, calls between them, file SHA-256 hashes any of them shared in common, cross @-mentions, every conversation any of them appears in. Owner is a first-class pickable contact.
- FTS5 global search with sender / conversation / date / ghost filters; results panel as a sidebar inside the chat with click-to-jump highlights.

*Reports & handoff*
- Per-group landscape-A4 PDF/HTML report: case+evidence provenance banner with source-DB SHA-256 hashes, group identity, owner role, top contributors / forwarders, device platform split, mentions network, activity heatmap, calls, locations (with live-share start/final coords), message-type taxonomy (Type 64/82/90/92/112/116 etc. mapped to readable labels), bot activity, former members.
- Per-contact report with section picker.
- Offline HTML viewer bundle — single ZIP, opens from `file://` with no Python or server. WhatsApp-Web-style chat list, full message rendering, FTS5-equivalent search. The case officer / opposing counsel can open it in any browser.
- Tagged-messages export with three modes (full / tagged-only / tagged ± N day buffer).

**Forensic integrity.** Source `msgstore.db` opened with three independent guards (`?mode=ro&immutable=1` URI + `SQLITE_OPEN_READONLY` flag + `PRAGMA query_only=ON`). Source files SHA-256 hashed at ingest. Every action journaled to a hash-chained `chain_of_custody.jsonl` — each entry's hash includes the previous one, so the audit trail is tamper-evident, not just append-only. Original IDs preserved (`message.source_msg_id`, `media.source_media_row_id`, etc.) so every analysis row links back to its msgstore.db / wa.db origin. Timestamps shown local + UTC in brackets so case timezone is unambiguous.

**Honest caveats.** Android-only. No automated tests yet. Schema research was done sample-by-sample so there are likely edge cases on WA versions / Business app / regional builds I haven't seen — Business app support is on the roadmap. Validated primarily against my own personal-device datasets. Built solo. PySide6 + SQLite + ~85K lines of Python. There's a deepwiki for it too (https://deepwiki.com/akhil-dara/WAInsight) if you want a deeper architectural read before cloning.

Would genuinely value feedback from anyone who works WhatsApp cases regularly — especially edge cases or schema variants that break it. Issues / DMs / comments all welcome.
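The hash-chained `chain_of_custody.jsonl` mentioned under forensic integrity is a pattern worth seeing in miniature. A hedged sketch of the idea — not WAInsight's actual code, and the field names are invented (real entries would also carry timestamps): each journal entry embeds the previous entry's hash, so editing or truncating any line invalidates every hash after it.

```python
import hashlib
import json

GENESIS = "0" * 64  # stand-in hash for "no previous entry"

def append_entry(journal: list, action: str) -> None:
    """Append an action to a hash-chained journal (chain-of-custody style)."""
    prev = journal[-1]["hash"] if journal else GENESIS
    body = {"action": action, "prev": prev}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    journal.append({**body, "hash": digest})

def verify(journal: list) -> bool:
    """Recompute every link; a tamper anywhere breaks the rest of the chain."""
    prev = GENESIS
    for entry in journal:
        body = {"action": entry["action"], "prev": prev}
        expected = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        if entry["prev"] != prev or entry["hash"] != expected:
            return False
        prev = entry["hash"]
    return True
```

This is why such a trail is tamper-evident rather than merely append-only: an attacker can't rewrite one entry without recomputing every subsequent hash, which is detectable against any independently retained copy of the chain head.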

r/netsec May 5
CVE

In this blog post I introduced several novel techniques:
1. How to get all routes — no authentication needed.
2. How to get methods to fuzz from pages, not just the bootstrap JS files — the vast majority of methods live in those pages, not in the JS files that existing tools and guides point to.
3. How to parse "LWC" components, not just legacy components.

r/netsec May 5

The majority of widely used AI clients, including:
* Claude Code
* Claude Desktop
* Cursor
* LibreChat
* Amazon Q CLI

have not implemented the critical refresh-token flow of the OAuth standard. This forces developers to issue long-lived tokens, creating a serious security regression in an already-solved problem. The write-up includes a matrix of 14 major clients with notes linking to feature requests, pull requests, and multiple forum discussions. It's not all gloom and doom, though! There is a work-around that security-conscious users are applying as a stop-gap, also discussed, along with a best-practices guide for developers implementing their own MCP OAuth solution. The plan is to update this reference monthly to track whether there is any movement on these open requests.
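For reference, the refresh-token flow these clients are missing is small: RFC 6749 §6 is a single form-encoded POST to the token endpoint that trades a long-lived refresh token for a short-lived access token. A hedged client-side sketch (the endpoint URL and client ID are placeholders, and confidential clients would also authenticate):

```python
import json
import urllib.parse
import urllib.request

def build_refresh_request(token_url: str, refresh_token: str,
                          client_id: str) -> urllib.request.Request:
    """Build the RFC 6749 §6 refresh grant: exchange a refresh token for a
    fresh short-lived access token (and often a rotated refresh token)."""
    body = urllib.parse.urlencode({
        "grant_type": "refresh_token",
        "refresh_token": refresh_token,
        "client_id": client_id,
    }).encode()
    return urllib.request.Request(
        token_url,
        data=body,  # presence of data makes this a POST
        headers={"Content-Type": "application/x-www-form-urlencoded"},
    )

def refresh(token_url: str, refresh_token: str, client_id: str) -> dict:
    """Send the refresh grant and return the parsed token response."""
    req = build_refresh_request(token_url, refresh_token, client_id)
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())  # e.g. access_token / expires_in fields
```

A client that persists only the refresh token and re-derives access tokens on demand avoids exactly the long-lived-token regression the post describes.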

r/Malware May 5

Pushed a new IOCX release (v0.7.1) that’s aimed at making the engine much harder to break during static analysis. The focus was adversarial behaviour: malformed binaries, corrupted PE structures, and intentionally hostile IOC‑like strings. If you work with weird samples, tooling pipelines, or large‑scale triage, this release makes IOCX more robust under hostile conditions.

**New PE structural heuristics**
Six new checks added to catch structural anomalies without blowing up the parser:
* overlapping/misaligned sections
* inconsistent optional headers (PE32 & PE32+)
* broken entrypoint mappings
* corrupted data directories
* malformed import tables
* general PE layout inconsistencies

These aren’t detections — they’re deterministic, reason‑coded structural signals to keep analysis stable.

**Expanded adversarial PE corpus**
Added a full suite of malformed and corrupted PEs, including:
* broken RVAs / invalid addressing
* truncated Rich headers
* fake UPX names + packed‑lookalikes
* PE32/PE32+ hybrids
* “franken‑PEs” combining multiple faults

All outputs are snapshot‑validated to guarantee deterministic behaviour.

**Adversarial coverage across all IOC categories**
New hostile string fixtures now stress every extractor:
* homoglyph + mixed‑script domains
* malformed URLs and schemes
* broken IPv4/IPv6
* noisy or near‑miss hashes
* invalid Base64
* adversarial crypto strings (incl. Base58Check)
* long/invalid Windows paths
* malformed emails

The goal: keep extraction predictable even when the input is intentionally messy.

**Parser & extractor hardening**
* stable on malformed PE structures
* structured, JSON‑safe error metadata
* improved domain/URL/crypto/hash extractors
* deterministic output across platforms

**Links**
GitHub: [https://github.com/iocx-dev/iocx](https://github.com/iocx-dev/iocx)
PyPI: [https://pypi.org/project/iocx/](https://pypi.org/project/iocx/)

**Example**
`pip install iocx`
`iocx suspicious.exe -a full`

If you’re doing malware triage, static analysis, or building automated pipelines that need predictable IOC extraction, v0.7.1 should be a noticeable stability bump. Happy to discuss edge cases or weird samples people want covered next.
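As an illustration of why the homoglyph/mixed-script fixtures matter (this is the underlying idea, not IOCX's implementation): a domain label that mixes Unicode scripts — say a Cyrillic "а" inside an otherwise-Latin name — is a classic spoofing signal that a naive extractor accepts without comment. Using the first word of each character's Unicode name as a rough script bucket:

```python
import unicodedata

SCRIPT_NEUTRAL = set(".-0123456789")  # common chars that belong to no script

def scripts_in(label: str) -> set:
    """Rough per-character script buckets, taken from Unicode character names
    (e.g. 'LATIN SMALL LETTER A' -> 'LATIN'). Crude but useful as a signal."""
    scripts = set()
    for ch in label:
        if ch in SCRIPT_NEUTRAL:
            continue
        name = unicodedata.name(ch, "")
        scripts.add(name.split()[0] if name else "UNKNOWN")
    return scripts

def is_mixed_script(domain: str) -> bool:
    """Flag any domain label whose letters come from more than one script."""
    return any(len(scripts_in(label)) > 1 for label in domain.split("."))
```

A production extractor needs far more than this (confusable tables, per-script whitelists, punycode handling), which is exactly what the adversarial fixtures in the release exercise.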

Trail of Bits May 5
AI

We recently added a C/C++ security checklist to the Testing Handbook and challenged readers to spot the bugs in two code samples: a deceptively simple Linux ping program and a Windows driver registry handler. If you found the inet_ntoa global buffer gotcha or the missing RTL_QUERY_REGISTRY_TYPECHECK flag, nice work. If not, here’s a full walkthrough of both challenges, plus a deep dive into how the Windows registry type confusion escalates from a local denial of service to a kernel write primitive. Since we first released the new C/C++ security checklist, we also developed a new Claude skill, c-review. It turns the checklist into bug-finding prompts that an LLM can run against a codebase. It’s also platform and threat-model aware. Run these commands to install the skill:

```
claude skills add-marketplace https://github.com/trailofbits/skills
claude skills enable c-review --marketplace trailofbits/skills
```

**The Linux ping program challenge**

The Linux warmup challenge we showed you in the last blog post has an obvious command injection issue.

```c
#include <stdio.h>
#include <s
```
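The command-injection class the warmup demonstrates applies to any language that shells out, not just C. A hedged sketch of the unsafe-vs-safe pattern in Python (the validation rule is illustrative, not taken from the post):

```python
import ipaddress
import subprocess

def ping_unsafe(host: str) -> int:
    # DON'T: the host string is interpolated into a shell command line,
    # so a "host" like "8.8.8.8; rm -rf ~" runs the second command too.
    return subprocess.call(f"ping -c 1 {host}", shell=True)

def is_safe_ping_target(host: str) -> bool:
    """True only for a literal IP address — rejects shell metacharacters outright."""
    try:
        ipaddress.ip_address(host)
        return True
    except ValueError:
        return False

def ping_safe(host: str) -> subprocess.CompletedProcess:
    # Validate first, then pass argv as a list: no shell, no injection.
    if not is_safe_ping_target(host):
        raise ValueError(f"not an IP address: {host!r}")
    return subprocess.run(["ping", "-c", "1", host], capture_output=True)
```

Passing argv as a list (no `shell=True`) is the structural fix; the allow-list validation is defense in depth on top of it.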

r/computerforensics May 5
CVE

I've been working on this for the last few months and just wanted to share. It's a free browser-based tool for inspecting and removing metadata from photos, videos, audio, PDFs and Office documents — and it has a small image-forensics lab built in. Live: [https://midgardmud.de/tools/exif/](https://midgardmud.de/tools/exif/)

Why I built it: every other "EXIF remover" online asks you to upload your private files to a server. That's the opposite of privacy. So I wrote one that runs 100% in the browser via the File API — your file never leaves your device. F12 → Network tab → drop a 50 MB photo → you'll see zero outbound requests.

What it does:
• Strips metadata from JPG/PNG/WebP/GIF/HEIC/TIFF, MP4/MOV/MKV/WebM/AVI, MP3/FLAC/OGG/WAV, PDF, DOCX/XLSX/PPTX
• Privacy Risk Score 0–100 with per-file breakdown so you see what's actually leaking
• 4 one-click privacy profiles (Anonymous / Social-safe / Keep camera / GPS-only)
• Forensics: ELA, JPEG-Ghost re-save heatmap, DQT compression fingerprint, Noise + CFA/Bayer pattern (defensible alternative to AI-image detectors), Copy-Move clone detection, embedded-thumbnail audit, RGB histogram, hex viewer, structure inspector
• SHA-256 + perceptual hash (pHash) per file
• ExifTool-compatible JSON export
• Per-tag EXIF editor + GPS spoofing for JPEG
• C2PA self-signed Content Credentials
• Works fully offline as a PWA after first visit
• 19 languages

Stack: vanilla JS, no framework, no build step, ~12k lines. libheif WASM lazy-loaded for HEIC. Web Worker for big videos so the UI stays responsive. Happy to answer anything about how the parsers work, why I avoided React, or how the JPEG-Ghost / Copy-Move detection is implemented. Feedback very welcome.
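For anyone curious what browser-side EXIF stripping looks like at the byte level, here is a simplified sketch (not the tool's code, and real JPEGs need the entropy-coded data after SOS handled too): EXIF lives in APP1 (0xFFE1) segments between the SOI marker and the scan data, and each segment carries a big-endian length that includes the length field itself, so dropping metadata is a linear walk over segments.

```python
import struct

def strip_app1(jpeg: bytes) -> bytes:
    """Drop APP1 (EXIF/XMP) segments from a JPEG's header segment list.
    Simplified: copies everything verbatim from the SOS marker onward."""
    assert jpeg[:2] == b"\xff\xd8", "not a JPEG (missing SOI)"
    out = bytearray(b"\xff\xd8")
    i = 2
    while i + 4 <= len(jpeg):
        marker, length = struct.unpack(">HH", jpeg[i:i + 4])
        if marker == 0xFFDA:          # SOS: entropy-coded image data follows
            out += jpeg[i:]
            break
        seg = jpeg[i:i + 2 + length]  # 2-byte marker + length-prefixed payload
        if marker != 0xFFE1:          # keep every segment except APP1
            out += seg
        i += 2 + length
    return bytes(out)
```

The browser version does the same walk over an `ArrayBuffer`; the point is that metadata removal never requires decoding the image, which is why it can run instantly and entirely client-side.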

Monday, May 4
Synack May 4

The Model We’ve Relied on Is Starting to Break

Over the past 20 years, I’ve seen the threat landscape evolve from opportunistic attackers, to organized cybercrime, to nation-state campaigns. Each shift forced security teams to adapt. What’s happening right now is different. AI models coming out of Anthropic, OpenAI, Google, and X are rewriting the […] The post Sara AI Pentesting Is Now Generally Available: The Model Is Changing appeared first on Synack.

Synack May 4

Key Takeaways

Over the past year, the conversation in security has changed faster than most programs have. AI is compressing attacker timelines. Environments are changing daily rather than quarterly. And the model most enterprises still rely on to validate security—periodic penetration testing—is starting to break under the weight of both. The real question isn’t whether […] The post The Shift to Continuous Security Validation: Why Detection Is No Longer Enough appeared first on Synack.

r/ReverseEngineering May 4

To reduce the amount of noise from questions, we have disabled self-posts in favor of a unified questions thread every week. Feel free to ask any question about reverse engineering here. If your question is about how to use a specific tool, or is specific to some particular target, you will have better luck on the [Reverse Engineering StackExchange](http://reverseengineering.stackexchange.com/). See also /r/AskReverseEngineering.

Saturday, May 2
r/computerforensics May 2

*As one tends to do on Saturday mornings with coffee in hand, I was reviewing two samples attributed to the LunaStealer / LunaGrabber family. Originally I was validating that* `tiquery` *was working with the MCP configuration; however, what started as a quick TI check turned into a full static analysis session — and it gave me a good opportunity to put the MalChela MCP integration through its paces in a real workflow. This post walks through how that investigation unfolded, what the pivot points were, and what we found at the bottom of the rabbit hole.*

Story Overview