Cybersecurity News and Vulnerability Aggregator
Top Cybersecurity Stories Today
A new malware framework called PCPJack is stealing credentials from exposed cloud infrastructure while actively removing TeamPCP's access to the systems. [...]
Ivanti is warning that a new security flaw impacting Endpoint Manager Mobile (EPMM) has been exploited in limited attacks in the wild. The high-severity vulnerability, CVE-2026-6973 (CVSS score: 7.2), is a case of improper input validation affecting EPMM before versions 12.6.1.1, 12.7.0.1, and 12.8.0.1. It allows "a remotely authenticated user with administrative access to achieve remote code execution." [...]
Palo Alto Networks has disclosed that threat actors may have unsuccessfully attempted to exploit a recently disclosed critical security flaw as early as April 9, 2026. The vulnerability in question is CVE-2026-0300 (CVSS score: 9.3/8.7), a buffer overflow vulnerability in the User-ID Authentication Portal service of Palo Alto Networks PAN-OS software that could allow an unauthenticated attacker [...]
Latest
Instructure (Canvas) Breached by Shiny Hunters — 275M Records from ~9,000 Schools/Universities, Ransom Deadline May 12
Shiny Hunters breached Instructure, operator of the Canvas platform. They claim ~275 million records stolen from nearly 9,000 educational institutions, plus billions of private messages. Live Canvas websites were defaced today with a May 12 ransom demand. Instructure took affected sites offline. [https://6abc.com/post/canvas-hacked-massive-data-breach-affects-schools-using-nationwide-penn-reportedly-impacted/19059691/](https://6abc.com/post/canvas-hacked-massive-data-breach-affects-schools-using-nationwide-penn-reportedly-impacted/19059691/)
Standard Kubernetes network security is fundamentally broken by NAT blindness. When an intrusion alert fires, traditional tools show a physical node IP, leaving you guessing which of the hundreds of ephemeral pods is actually compromised. I engineered a custom SIEM pipeline that uses eBPF and Linux Traffic Control to mirror virtual CNI traffic directly to Suricata. By binding this telemetry to a deterministic O(1) Logstash memory router, the system maps transient IPs to exact pod names and namespaces in under 5 milliseconds. This architecture completely eliminates the Kubernetes blind spot, providing true zero-trust visibility across both kernel execution and East-West lateral network movement. Read the full technical architecture breakdown here: [https://medium.com/@mouhamed.yeslem.kh/engineering-a-zero-trust-kubernetes-siem-bypassing-nat-blindness-with-ebpf-tc-and-suricata-767c70a55058](https://medium.com/@mouhamed.yeslem.kh/engineering-a-zero-trust-kubernetes-siem-bypassing-nat-blindness-with-ebpf-tc-and-suricata-767c70a55058)
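The post's "deterministic O(1) memory router" boils down to a hash-map join between flow telemetry and pod metadata. A minimal stand-in sketch of that lookup (static pod data in place of a live Kubernetes API watch; all field names are illustrative, not taken from the article):

```python
# Minimal stand-in for an O(1) "memory router": a hash map from transient
# pod IPs to pod identity, built from pod metadata (faked here with static
# tuples) and used to enrich Suricata-style alerts during ingest.

def build_router(pods):
    """pods: iterable of (ip, name, namespace) tuples -> O(1) lookup table."""
    return {ip: {"pod": name, "namespace": ns} for ip, name, ns in pods}

def enrich(alert, router):
    """Attach pod identity to an alert keyed on src_ip; O(1) per event."""
    identity = router.get(alert["src_ip"])
    if identity:
        alert = {**alert, **identity}
    return alert

pods = [
    ("10.244.1.17", "payments-7d9f", "prod"),
    ("10.244.2.33", "frontend-5c2a", "prod"),
]
router = build_router(pods)
print(enrich({"src_ip": "10.244.1.17", "sig": "ET SCAN"}, router))
```

The real pipeline has to keep this map fresh as pods churn, which is where the eBPF/TC mirroring and the Logstash integration described in the article come in; the dictionary lookup itself is what keeps per-event enrichment in the low-millisecond range.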
The ShinyHunters extortion gang has breached education technology giant Instructure again, this time exploiting another vulnerability to deface Canvas login portals for hundreds of colleges and universities. [...]
A new trojan named TCLBanker, which targets 59 banking, fintech, and cryptocurrency platforms, uses a trojanized MSI installer for Logitech AI Prompt Builder to infect systems. [...]
Chrome users were caught off guard by a 4-GB Google AI model baked into Chrome, sparking privacy concerns. The good news: You can easily uninstall it. The bad? You might not want to.
This afternoon, we sent the following email to our global team. One of our core values at Cloudflare is transparency, and we believe it's important that you hear this directly from us because it’s a major moment at Cloudflare. Team: We are writing to let you know directly that we’ve made the decision to reduce Cloudflare’s workforce by more than 1,100 employees globally. The way we work at Cloudflare has fundamentally changed. We don’t just build and sell AI tools and platforms. We are our own most demanding customer. Cloudflare’s usage of AI has increased by more than 600% in the last three months alone. Employees across the company from engineering to HR to finance to marketing run thousands of AI agent sessions each day to get their work done. That means we have to be intentional in how we architect our company for the agentic AI era in order to supercharge the value we deliver to our customers and to honor our mission to help build a better Internet for everyone, everywhere. Today is a hard day. This decision unfortunately means saying goodbye to teammates who have contributed meaningfully to our mission and to building Cloudflare into one of the world’s most successful companies. We want to be clear that this decision is not a reflection of the individual work or talent of those leaving us. Instead, we are reimagining every internal process, team, and role across the company. Today’s actions are not a cost-cutting exercise or an assessment of individuals’ performance; they are about Cloudflare defining how a world-class, high-growth company operates and creates value in the agentic AI era. This is a moment we need to own as founders and leaders of the company. Matthew has personally sent out every offer letter we've extended. It is a practice he has always looked forward to because it represented our growth and the incredible talent joining our mission. It didn’t feel right [...]
The Australian Cyber Security Centre (ACSC) is warning organizations of an ongoing malware campaign using the ClickFix social engineering technique to distribute the Vidar Stealer info-stealing malware. [...]
Hi all, I’ve seen how lately companies are shifting the conversation from “our product has an AI chatbot” to “you can integrate our tool with your agent”, which I find more interesting. I haven’t interacted much with security tools, and TBH I find them a bit intimidating. However, when I saw Anthropic’s announcements of [project Glasswing](https://www.anthropic.com/glasswing) and [Claude Code Security](https://www.anthropic.com/news/claude-code-security), I started to warm up to the idea of an agent helping me fix vulnerabilities in my code. Today, I stumbled across a new [AI tool from Sysdig](https://www.sysdig.com/blog/introducing-headless-cloud-security) which, although oriented toward sysadmins, has the potential to help developers too. And I started to think:

* Is this where things are headed?
* Should I start getting more involved with the cybersecurity part of my code?

So, I have two questions for security people:

* Are AI agents really helping in the security space?
* What is your position when it comes to tools like these? Are you glad that security newbies like me can address security issues on our side, or would you fear we'd cause more harm than good?
EventHawk v1.2 - open-source Windows EVTX log analysis tool for DFIR (Juggernaut Mode, ATT&CK mapping, Sentinel anomaly engine)
I've been building a Windows event log analysis tool called EventHawk and just shipped v1.2. Sharing here for feedback from people who work in IR/forensics.

What it is: A GUI + CLI tool for parsing and analyzing .evtx files. Built around a Rust-backed parallel parser with a resource monitor that throttles workers automatically so your machine stays usable mid-parse. Supports EVTX from Windows Vista through Server 2022. Parses and filters 6M rows of event logs in 50-60 seconds. https://github.com/Mihir-Choudhary/EventHawk

Two parsing modes:

1. Normal Mode loads matched events into memory — fast and straightforward for most investigations.
2. Juggernaut Mode is for large captures: raw event XML goes to Parquet on disk, only metadata columns live in memory, and full event detail lazy-loads on row click. Scroll 10M+ events with no disk I/O until you open a row.

v1.2 rewrote Juggernaut Mode from scratch — replaced the old multi-DuckDB connection model (OOM crashes, file lock conflicts) with a single Arrow in-memory table and a filter thread. Filtering now runs as vectorized DuckDB SQL, 20-120 ms at 6M rows.

Key features:

1. 20 built-in DFIR profiles — filter at parse time. Logon/Logoff, Process Creation, Lateral Movement, PowerShell, RDP, Defender Alerts, and 13 more.
2. 273+ event ID descriptions in plain English on click. No more looking up what 4688 or 7045 means mid-investigation.
3. ATT&CK tab — every parse maps events to MITRE techniques with ID, tactic, confidence, and source. Click any technique to filter the table to the events that triggered it.
4. IOC tab — auto-extracts IPs, domains, file paths, hashes, URLs, registry keys, and suspicious command lines. Click any IOC to pivot the entire event table to events containing that indicator.
5. Chains tab — correlates events into multi-step attack chains shown as an expandable tree. Click any node to jump to that event.
6. Case tab — annotate events with analyst notes, export as a formal PDF investigation report.
7. Hayabusa integration — ~3,000 community Sigma rules evaluated and merged into the ATT&CK tab.
8. Sentinel anomaly engine — build a behavioral baseline from clean logs, then score a suspect capture. Each process-create event is scored across five dimensions and classified into four tiers. Tier 3/4 findings include plain-English justifications. Built for novel malware, LOLBin abuse, and anything that slips past signatures.
9. Export in 8 formats — JSON, CSV, XML, HTML, PDF report, STIX 2.1, OpenIOC, YARA.
10. Full CLI and TUI for headless and automated use.

If the tool looks useful, a star on GitHub goes a long way ⭐ — it helps the project get visibility and keeps me motivated to keep building. Would genuinely love feedback from anyone, especially on what's missing or annoying in the existing ecosystem.
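The "single in-memory table + vectorized SQL filter" design at the heart of v1.2 can be illustrated with a tiny stand-in (sqlite3 here in place of the Arrow table and DuckDB the tool actually uses, purely to keep the sketch stdlib-only; schema and values are made up):

```python
# Push the filter down as one SQL statement over an in-memory table instead
# of looping over rows in Python. The engine scans whole columns at once,
# which is the property that keeps multi-million-row filters fast.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE events (event_id INTEGER, channel TEXT, msg TEXT)")
con.executemany(
    "INSERT INTO events VALUES (?, ?, ?)",
    [
        (4688, "Security", "Process Create: powershell.exe"),
        (7045, "System", "Service installed: psexesvc"),
        (4624, "Security", "Logon type 3"),
    ],
)

# A "Process Creation + Service Install" profile becomes one predicate.
rows = con.execute(
    "SELECT event_id, msg FROM events WHERE event_id IN (4688, 7045)"
).fetchall()
print(rows)  # two matching rows
```

The same pattern scales from three rows to millions; only the engine underneath changes.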
Your security controls aren't failing, they're missing where most of today's work actually happens. Keep Aware shows how browser activity like copy/paste and AI prompts bypass traditional protections. [...]
Two U.S. nationals were sentenced to 18 months in prison each for operating so-called laptop farms that helped North Korean IT workers fraudulently obtain remote employment at nearly 70 American companies. [...]
On April 29, 2026, a Linux kernel local privilege escalation vulnerability was publicly disclosed under the name "Copy Fail" (CVE-2026-31431). Cloudflare’s Security and Engineering teams began assessing the vulnerability as soon as it was disclosed. We reviewed the exploit technique, evaluated exposure across our infrastructure, and validated that our existing behavioral detections could identify the exploit pattern within minutes. There was no impact to the Cloudflare environment, no customer data was at risk, and no services were disrupted at any point. Read on to learn how our preparedness paid off.

Background: our Linux kernel release process

Cloudflare operates a global Linux server infrastructure at an immense scale, with datacenters located across 330 cities. We maintain a custom Linux kernel build based on the community's Long-Term Support (LTS) versions to manage updates effectively at this volume. At any given time, we may utilize multiple LTS versions from various series, such as 6.12 or 6.18, which benefit from extended update periods. The community regularly merges and releases security and stability updates, which trigger an automated job to generate a new internal kernel build approximately every week. These builds undergo testing in our staging data centers to ensure stability before a global rollout. Following a successful release, the Edge Reboot Release (ERR) pipeline manages a systematic update and reboot of the edge infrastructure on a four-week cycle. Our control plane infrastructure typically adopts the most recent kernel, with reboots scheduled according to specific workload requirements. [...]
A 20-year-old California man was sentenced to 78 months in prison for serving as a home invader and money launderer in a criminal ring that stole over $250 million in cryptocurrency. [...]
Bad week. Turns out the easiest way to get hacked in 2026 is still the same old garbage: shady packages, fake apps, forgotten DNS junk, scam ads, and stolen logins getting dumped into Discord channels like it’s normal. Some of these attack chains don’t even feel sophisticated anymore. More like some tired guy with a Telegram account and too much free time. The worst part is how often this stuff [...]
Companies like Lovable, Base44, Replit, and Netlify use AI to let anyone build a web app in seconds—and in thousands of cases, spill highly sensitive data onto the public internet.
Palo Alto Networks warned customers that suspected state-sponsored hackers have been exploiting a critical-severity PAN-OS firewall zero-day vulnerability for nearly a month. [...]
Having an incident response retainer, or even a pre-approved external incident response firm, is not the same as being ready for an incident. A retainer means someone will answer the phone. Operational readiness determines whether that team can do meaningful work the moment they do. That distinction matters far more than many organizations realize. In the first hours of a security incident [...]
A fake version of the Claude AI website offers a malicious Claude-Pro Relay download that pushes a previously undocumented backdoor for Windows named Beagle. [...]
Cybersecurity researchers have discovered three packages on the Python Package Index (PyPI) repository that are designed to stealthily deliver a previously unknown malware family called ZiChatBot on Windows and Linux systems. "While these wheel packages do implement the features described on their PyPI web pages, their true purpose is to covertly deliver malicious files," Kaspersky [...]
A dozen critical security vulnerabilities have been disclosed in the vm2 Node.js library that could be exploited by bad actors to break out of the sandbox and execute arbitrary code on susceptible systems. vm2 is an open-source library used to run untrusted JavaScript code inside a secure sandbox by intercepting and proxying JavaScript objects to prevent sandboxed code from accessing the host [...]
A phishing campaign delivered through Google sponsored search results is targeting credentials for ManageWP, GoDaddy's platform for managing fleets of WordPress websites. [...]
To stop children from bypassing its age checks, Meta is revamping its age-verification tools with an AI system that analyzes images and videos for “visual cues,” such as height and bone structure.
Cybersecurity researchers have exposed a new Mirai-derived botnet that self-identifies as xlabs_v1 and targets internet-exposed devices running Android Debug Bridge (ADB) to enlist them in a network capable of carrying out distributed denial-of-service (DDoS) attacks. Hunt.io, which detailed the malware, said it made the discovery after identifying an exposed directory on a Netherlands-hosted [...]
Converted detection logic from LinPEAS, Sigma and Splunk into Falco rulesets. Might be useful if you're getting started with Falco. https://github.com/sammonsempes/OSS2Falco Stars welcome ⭐
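For readers new to Falco, here is a hypothetical example of the shape such a conversion targets. This rule is illustrative only, not taken from the linked repo: a LinPEAS-style "sensitive file read" check expressed as a Falco runtime rule.

```yaml
# Illustrative only -- shows the Falco rule format a Sigma/LinPEAS check
# maps onto, not an actual rule from the OSS2Falco repository.
- rule: Read sensitive credential file
  desc: Detect a non-SSH process reading /etc/shadow (LinPEAS-style check as a runtime detection)
  condition: open_read and fd.name = /etc/shadow and proc.name != sshd
  output: Sensitive file opened (file=%fd.name proc=%proc.name user=%user.name)
  priority: WARNING
  tags: [filesystem, mitre_credential_access]
```

The interesting part of a conversion like this is that static-audit logic (does this file have weak permissions?) becomes event-driven logic (who just touched it?), which is what makes Falco a different deployment model from the source tools.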
Cisco patched a Crosswork Network Controller and Network Services Orchestrator denial-of-service vulnerability that requires manually rebooting targeted systems for recovery. [...]
On May 5, 2026, at roughly 19:30 UTC, DENIC, the registry operator for the .de country-code top-level domain (TLD), started publishing incorrect DNSSEC signatures for the .de zone. Any validating DNS resolver receiving these signatures was required by the DNSSEC specification to reject them and return SERVFAIL to clients, including 1.1.1.1, the public DNS resolver operated by Cloudflare. The country-code top-level domain for Germany, .de, is one of the largest on the Internet. On Cloudflare Radar, it consistently ranks among the most broadly queried TLDs globally. An outage at this level of the DNS hierarchy has the potential to make millions of domains unreachable. In this post, we’ll walk through what we saw, the impact of these events, and how we applied temporary mitigations while DENIC resolved the issue.

How DNSSEC works

DNSSEC (Domain Name System Security Extensions) adds cryptographic authentication to DNS. When a zone is signed with DNSSEC, each set of records is accompanied by a digital signature known as an RRSIG record that lets a resolver verify the records haven’t been tampered with. Unlike encrypted DNS protocols, such as DNS over TLS (DoT) and DNS over HTTPS (DoH), DNSSEC is about integrity, not privacy. The records are visible, but their authenticity can be proven. What makes DNSSEC unique is that the signatures travel together with the records they protect. [...]
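For readers unfamiliar with the failure mode: a toy model of the validating resolver's control flow. Real DNSSEC uses RSA or ECDSA public-key signatures over the canonical RRset wire format; the HMAC below stands in purely to show why a bad signature must yield SERVFAIL rather than falling back to serving the (otherwise intact) records.

```python
# Toy model of a validating resolver's decision. The key point: when the
# signature doesn't verify, the resolver returns SERVFAIL -- it does not
# serve the data unsigned, which is exactly what made the .de incident an
# outage rather than a silent integrity gap.
import hashlib
import hmac

ZONE_KEY = b"toy-zone-key"  # stands in for the zone's DNSKEY material

def sign(rrset: bytes) -> str:
    return hmac.new(ZONE_KEY, rrset, hashlib.sha256).hexdigest()

def resolve(rrset: bytes, rrsig: str):
    if hmac.compare_digest(sign(rrset), rrsig):
        return ("NOERROR", rrset)
    return ("SERVFAIL", None)

good_sig = sign(b"example.de. A 192.0.2.1")
print(resolve(b"example.de. A 192.0.2.1", good_sig)[0])  # NOERROR
print(resolve(b"example.de. A 192.0.2.1", "bogus")[0])   # SERVFAIL
```

In the DENIC incident, the records themselves were fine; the published RRSIGs were not, so every compliant validator worldwide hit the second branch.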
I recently re-tested an old Binance API finding I had reported through Bugcrowd. The original issue was about Binance API IP whitelisting and derived `listenKey` stream credentials. At the time, a `listenKey` could be created from a whitelisted IP and then used from a non-whitelisted IP to consume private user data streams. That did not allow trading, withdrawals, or account takeover. But it did allow real-time access to sensitive private stream data such as balances, orders, executions, positions, timing, and strategy behavior. The core security argument was:

> A derived credential should not be more portable than the credential that created it.

The report was rejected as “Social Engineering” / “Not Applicable”. I disagreed, because the relevant threat model was not “convince the user to send a token”. The realistic threat model was supply-chain compromise: malicious code running inside a trusted bot server, CI job, dependency, IDE workspace, or trading environment where API keys already exist. I re-tested the behavior on May 5, 2026.

Result: The old behavior appears to be gone. Spot and Margin no longer use the old `listenKey` model. Futures still uses `listenKey`, but now appears to enforce the API key IP whitelist correctly. From a whitelisted IP the calls worked; from non-whitelisted Mullvad exits they failed with the expected IP restriction error. That is good for users. But it raises an uncomfortable disclosure-process question: If a finding is “not applicable” enough to reject, not acknowledge, and not reward — but technical enough to later fix — what should a healthy disclosure process do?
Full technical write-up, timeline, re-test setup, and raw outputs: [https://blog.technopathy.club/binance-fixed-the-ip-whitelist-gap-the-disclosure-process-is-still-broken](https://blog.technopathy.club/binance-fixed-the-ip-whitelist-gap-the-disclosure-process-is-still-broken) I am mainly interested in the process question here: When a rejected report later disappears from production, should the program re-open it, acknowledge it, partially reward it, or leave it closed unless the researcher can prove direct causality?
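The reported gap can be condensed into a toy model (everything here is made up: the whitelist, the key names, and the two enforcement variants simply contrast the old and the apparently-fixed behavior described above):

```python
# Toy model: the API key's IP whitelist is enforced when a listenKey is
# minted, but -- in the old behavior -- not when the derived listenKey is
# later used. That makes the derived credential more portable than its
# parent, which was the core of the report.
WHITELIST = {"203.0.113.10"}

def create_listen_key(api_key: str, src_ip: str) -> str:
    if src_ip not in WHITELIST:
        raise PermissionError("IP not whitelisted")
    return f"listenKey-derived-from-{api_key}"

def read_stream_old(listen_key: str, src_ip: str) -> str:
    # Old behavior: no IP check on the derived credential.
    return f"private stream for {listen_key}"

def read_stream_fixed(listen_key: str, src_ip: str) -> str:
    # Post-fix behavior: the whitelist is re-checked at use time.
    if src_ip not in WHITELIST:
        raise PermissionError("IP not whitelisted")
    return f"private stream for {listen_key}"

lk = create_listen_key("my-api-key", "203.0.113.10")
print(read_stream_old(lk, "198.51.100.7"))  # leaks under the old model
```

Under the supply-chain threat model in the post, the attacker already sits on the whitelisted host just long enough to mint `lk`, then exfiltrates and uses it from anywhere; the fixed variant closes exactly that path.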
Disc Soft Limited, the maker of DAEMON Tools Lite, confirmed that the software had been trojanized in a supply chain attack and released a new, malware-free version. [...]
The start of support for macOS malware analysis in MalChela...
It's not just you. Scammers, hackers, and other cybercriminals are complaining about “AI shit” flooding platforms where they discuss cyberattacks and other illegal activity.
The Iranian state-sponsored hacking group known as MuddyWater (aka Mango Sandstorm, Seedworm, and Static Kitten) has been linked to a ransomware attack in what has been described as a "false flag" operation. The attack, observed by Rapid7 in early 2026, has been found to leverage social engineering techniques via Microsoft Teams to initiate the infection sequence. Although the incident [...]
For nearly 20 years, we at The Hacker News have mostly told scary stories about cyberspace — big hacks, broken systems, and new threats. But behind every headline, there’s a quieter, better story. It’s the story of leaders making tough calls under pressure, teams building smarter defenses, and security products that keep hunting threats 24/7 — even when it’s hard. Most of the time, this work is [...]
Analysts recently confirmed what identity security teams have quietly feared: AI agents are being deployed faster than enterprises can govern them. In their inaugural Market Guide for Guardian Agents, Gartner states that “enterprise adoption of AI agents is accelerating, outpacing maturity of governance policy controls.” Enterprise leaders can request access to the Gartner Market Guide for
BSides can often be the one place where you can find the most obscure talks about a technical detail. For example, "Edge Device Memory Forensics" by Richard Tuffin or maybe "Forensic analysis of privacy focused mobile browsers" by Lorena Carthy and Ruben Jernslett. Finding them is the hard part. I built a website that tracks all BSides chapters, all 8575 videos, fetches transcripts, indexes them by technology, speakers, events, tools, protocols, standards, and much more. It is free, no login, no ads, no tracking beyond basic visits (no cookies). And I'm planning to keep it so. Check out the forensics talks at [https://allbsides.com/talks.html?q=forensics](https://allbsides.com/talks.html?q=forensics), and let me know if you find the site useful or spot anything missing. Genuinely happy to receive feedback!
Google has announced expanded Binary Transparency for Android as a way to safeguard the ecosystem from supply chain attacks. "This new public ledger ensures the Google apps on your device are exactly what we intended to build and distribute," Google's product and security teams said. The initiative builds upon the foundation of Pixel Binary Transparency, which Google introduced in October 2021 [...]
Someone hacked the deadmau5 discord server by virusing an admin. Said admin gave me the malware sample. Used claude sonnet 4.6 in combination with nyxstrike MCP framework to decompile and decrypt their obfuscated code, finding a goldmine. Title speaks for itself. The discord bot token could possibly have led to their CNC. But logging into the discord bot token to check for communications and see where it leads breaks 2 federal laws alone that I can think of. I did validate the token was live however, and matched it to a bot account. I also discovered the webhook and token that were in the malware; both of them have been nuked (not by me). So, I checked their domain that they've been using, and they recompiled and reuploaded it. So it's 26 bytes larger. I suspect they replaced the webhook url and the bot token with fresh ones, and suspect further that discord nuked the previous ones themselves. Nevertheless, I have personally not seen malware like this on github, so this must have been private and not some skid level stuff. I know it was turkish (at least the devs were). [Github](https://github.com/destiny-creates/goxlr.net-malware) link attached for the source code including the deobfuscated malware classes, and the analysis/report. Don't flame me, it's still pretty cool 😆. Cracking the zkm encryption would have taken weeks (I'm a python guy, not a JS guy). Nyxstrike + sonnet 4.6 = 1.5 hours and it's cracked.
Ran four attacks through a three-VM home lab (Kali, Windows 11, Ubuntu/Splunk), each mapped to a MITRE ATT&CK technique and named after a Cowboy Bebop episode. Full walkthrough with screenshots and Splunk queries in the article: [https://medium.com/@jwilliams.cyber/see-you-space-cowboy-bounty-hunting-threats-with-splunk-911ffbed051a](https://medium.com/@jwilliams.cyber/see-you-space-cowboy-bounty-hunting-threats-with-splunk-911ffbed051a) (No paywall, free to read.)
It's a fascinating display of leverage: the ShinyHunters folks, with very limited resources and experience (their demographic will be teenagers to their early 20s), consistently gaining access to the data of massive brands. Not through technical ingenuity alone (although I'm sure there's a portion of that), but primarily through good ol' social engineering. That's coming through in the disclosure notices from the impacted companies, and Mandiant has a good write-up of it too: These operations primarily leverage sophisticated voice phishing (vishing) and victim-branded credential harvesting sites to gain initial access to corporate environments by obtaining single sign-on (SSO) credentials and multi-factor authentication (MFA) codes. Question now is how long their run will go for. There's a very predictable ending if things keep going in this direction but right now, they show little sign of abating.
Hi all — finally pushed this public after several months of work. Sharing here because this subreddit is where I'd want feedback from before anywhere else. **WAInsight** — https://github.com/akhil-dara/WAInsight (MIT) **Scope.** It doesn't extract data from a phone — that's a separate step with whatever acquisition workflow you already use. WAInsight starts after acquisition. Point it at a folder containing `msgstore.db` + `wa.db` + `Media/` + `Avatars/` and it ingests everything through a 29-stage pipeline into a normalised `analysis.db` (47 indexed tables), then opens a 30-page Qt desktop UI to actually work the case. **Why.** I wanted analysis to be the primary deliverable, not the report. So the UI is built around browsing every chat exactly like opening WhatsApp itself — home-style conversation list, bubbles with edits / revokes / replies / reactions / receipts / forwarded badges / mention chips / pinned-message strip — with forensic provenance one click away on every bubble. Reports are a snapshot of what was found, not the destination. **Capabilities, grouped by what you're actually trying to do:** *Reading the timeline* - Forensic ℹ button on every bubble: msgstore source IDs, every SQL row that fed the bubble, origination flags decoded, per-recipient receipt timeline (delivered / read / played, ms-precise). - Ghost-message recovery from `message_quoted_text` (deleted-for-everyone messages reconstructed inline next to the revoked bubble). - Edit history per message — every revision side-by-side. - Reply chains as click-through badges with cross-conversation "Go to original" jumps. - 60+ system events decoded (group / security / admin / privacy / business / ephemeral) instead of opaque type codes. - Calendar with per-day message counts shown flight-fare style; click+drag to range-filter. - Windowed-flat virtual scroller for chats with 5K+ messages — jumping to message #47K in a 47K-message chat is O(1). 
*Media analysis* - Folder-shaped Media Dashboard that scales to 200K+ rows at `file://` (sharded AVIF thumbs + chunked metadata + vendored UI engine, sub-millisecond bitset crossfilter). Cascading filters: conversation × sender × MIME × extension × status × date. - Perceptual visual search across the whole case — drop a screenshot, get Exact / Near-Exact / Near-Duplicate / Template-Match tiers (pHash + dHash + edge-map). - Camera-original → WhatsApp tracking: feed an original from `DCIM/`, find every chat that photo was sent in even after WhatsApp's recompression changed the SHA-256. - View-once images and voice notes downloadable from the bubble even after on-device expiry (CDN URL + media_key, AES-CBC + HMAC). - Hash-link auto-rescue: missing media that shares a SHA-256 with another message's on-disk media gets auto-resolved (tagged `recovery_method='hash_linked'`, never confused with a real local copy). - `wa.db` thumbnail blob rendered as fallback when even the bytes are gone. - HD/SD twin pairs surfaced inline with cross-jumps. - Cross-chat propagation: right-click any media → every chat that shared the same SHA-256, chronologically. Says where the bytes were *first seen*, not just where they were last forwarded. - 12-state media recovery taxonomy preserved in every report and dashboard (`original` / `downloaded` / `hash_linked` / `orphan_recovered` / etc.). - Orphaned-media browser: files in `Media/` with no surviving message row + auto-rescue against surviving message hashes. *Identity & devices* - Per-message platform attribution from `key_id` — every bubble carries an inline tag (Android / iPhone / Web/Desktop / Companion #N), confidence-scored. The classifier was its own separate research piece — collected `key_id` samples across real devices on Android, iPhone, Web, and linked companions until the rules held up. Powers the Group Report's *Device Platform Usage* breakdown and the contact's *Device Sessions* tab. 
- Unified contact registry merged from 5 sources (`jid_map` ∪ `wa_contacts` ∪ `lid_display_name` ∪ group labels ∪ mention names) so every JID resolves to one canonical identity. - Owner-aware everywhere — `sender_id IS NULL` for owner messages gets joined to `case_metadata` so owner activity never surfaces as "Unknown" anywhere in the UI or reports. *Groups & communities* - Past-participant reconstruction from 3 sources: `group_past_participant` ∪ `group_member.is_current=0` ∪ message-presence inference (catches members the roster purged after a long enough gap). - Owner can-post / can-edit banner on every Group Info page, sourced from `chat.participation_status` + admin flags. - Community LID resolution + comment-author resolution even when WhatsApp only stored the LID. - Group Edit History with profile-picture diff. *Calls* - Synthetic call reconstruction: calls that have no `message` row in their conversation get virtual rows so they render in every participant's chat timeline at the right position. Group voice chats appear inside the group's chat even when WhatsApp didn't write a message row for them. *Cross-case pivots* - Cross-Contact Analysis: pick 2+ contacts, instantly see shared groups, calls between them, file SHA-256 hashes any of them shared in common, cross @-mentions, every conversation any of them appears in. Owner is a first-class pickable contact. - FTS5 global search with sender / conversation / date / ghost filters; results panel as a sidebar inside the chat with click-to-jump highlights. *Reports & handoff* - Per-group landscape-A4 PDF/HTML report: case+evidence provenance banner with source-DB SHA-256 hashes, group identity, owner role, top contributors / forwarders, device platform split, mentions network, activity heatmap, calls, locations (with live-share start/final coords), message-type taxonomy (Type 64/82/90/92/112/116 etc. mapped to readable labels), bot activity, former members. - Per-contact report with section picker. 
- Offline HTML viewer bundle — single ZIP, opens from `file://` with no Python or server. WhatsApp-Web-style chat list, full message rendering, FTS5-equivalent search. The case officer / opposing counsel can open it in any browser. - Tagged-messages export with three modes (full / tagged-only / tagged ± N day buffer). **Forensic integrity.** Source `msgstore.db` opened with three independent guards (`?mode=ro&immutable=1` URI + `SQLITE_OPEN_READONLY` flag + `PRAGMA query_only=ON`). Source files SHA-256 hashed at ingest. Every action journaled to a hash-chained `chain_of_custody.jsonl` — each entry's hash includes the previous one, so the audit trail is tamper-evident, not just append-only. Original IDs preserved (`message.source_msg_id`, `media.source_media_row_id`, etc.) so every analysis row links back to its msgstore.db / wa.db origin. Timestamps shown local + UTC in brackets so case timezone is unambiguous. **Honest caveats.** Android-only. No automated tests yet. Schema research was done sample-by-sample so there are likely edge cases on WA versions / Business app / regional builds I haven't seen — Business app support is on the roadmap. Validated primarily against my own personal-device datasets. Built solo. PySide6 + SQLite + ~85K lines of Python. There's a deepwiki for it too (https://deepwiki.com/akhil-dara/WAInsight) if you want a deeper architectural read before cloning. Would genuinely value feedback from anyone who works WhatsApp cases regularly — especially edge cases or schema variants that break it. Issues / DMs / comments all welcome.
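The hash-chained `chain_of_custody.jsonl` described above can be sketched in a few lines. This is a minimal stdlib illustration of the tamper-evidence idea only, not WAInsight's actual entry schema:

```python
# Hash-chained audit log: each entry's hash covers the previous entry's
# hash, so editing any line invalidates every hash after it -- the trail
# is tamper-evident, not just append-only.
import hashlib
import json

GENESIS = "0" * 64

def append_entry(log: list, action: str) -> None:
    prev = log[-1]["hash"] if log else GENESIS
    body = {"action": action, "prev": prev}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    log.append({**body, "hash": digest})

def verify(log: list) -> bool:
    prev = GENESIS
    for entry in log:
        body = {"action": entry["action"], "prev": prev}
        expected = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if entry["prev"] != prev or entry["hash"] != expected:
            return False
        prev = entry["hash"]
    return True

log = []
append_entry(log, "ingest msgstore.db sha256=...")
append_entry(log, "export group report PDF")
print(verify(log))              # True
log[0]["action"] = "tampered"
print(verify(log))              # False: chain broken from entry 0 onward
```

In practice each entry would also carry a timestamp and operator identity, and the file would be written line-by-line as JSONL so a partial write never corrupts earlier entries.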
In this blog post I introduced several novel techniques:

1. How to get all routes - no need to authenticate.
2. How to get methods to fuzz from pages and not just the bootstrap JS files - the vast majority of methods are in those pages, not in the JS files that existing tools and guides point to.
3. How to parse "LWC" components and not just legacy components.
The Apache Software Foundation (ASF) has released security updates to address several security vulnerabilities in the HTTP Server, including a severe vulnerability that could potentially lead to remote code execution (RCE). The vulnerability, tracked as CVE-2026-23918 (CVSS score: 8.8), has been described as a case of "double free and possible RCE" in the HTTP/2 protocol handling. This issue
The majority of widely used AI clients, including:

* Claude Code
* Claude Desktop
* Cursor
* LibreChat
* Amazon Q CLI

have not implemented the critical refresh-token flow of the OAuth standard. This forces developers to issue long-lived tokens, creating a serious security regression in an already solved problem. This write-up includes a matrix of 14 major clients with notes linking to feature requests, pull requests, and multiple forum discussions. It is not all gloom and doom, though: it also covers a work-around that security-conscious users are applying as a stop-gap, along with a best-practices guide for developers implementing their own MCP OAuth solution. The plan is to update this reference monthly to track whether there is any movement on these open requests.
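For context, the flow these clients are missing is small: the client posts `grant_type=refresh_token` to the token endpoint and swaps a long-lived refresh token for a short-lived access token (RFC 6749 §6). A minimal stdlib sketch of building that request (the endpoint URL, client ID, and function names here are placeholders, not any particular client's implementation):

```python
import json
from urllib import parse, request

def build_refresh_request(token_url: str, client_id: str, refresh_token: str):
    """Build the token-refresh POST from RFC 6749 section 6. Public
    clients send client_id in the body; confidential clients would
    also authenticate (e.g. client_secret or a Basic auth header)."""
    body = parse.urlencode({
        "grant_type": "refresh_token",
        "refresh_token": refresh_token,
        "client_id": client_id,
    }).encode()
    return request.Request(
        token_url,
        data=body,
        headers={"Content-Type": "application/x-www-form-urlencoded"},
        method="POST",
    )

def refresh_access_token(req) -> dict:
    """Send the request; the JSON response carries a fresh short-lived
    access_token (and, with rotation, a new refresh_token)."""
    with request.urlopen(req) as resp:
        return json.load(resp)
```

Clients that implement this never need to hold a long-lived access token at all: the refresh token stays at rest and access tokens expire in minutes.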
EMBA v2.0.1 with interactive firmware dependency map available - Check it out and let us know what you are missing
A sophisticated China-nexus advanced persistent threat (APT) group has been attributed to attacks targeting government entities in South America since at least late 2024 and government agencies in southeastern Europe in 2025. The activity is being tracked by Cisco Talos under the moniker UAT-8302, with post-exploitation involving the deployment of custom-made malware families that have been put
Topic of this article: Burp AI.
IOCX v0.7.1 — robustness update focused on malformed PEs, hostile strings, and static‑analysis hardening
Pushed a new IOCX release (v0.7.1) that’s aimed at making the engine much harder to break during static analysis. The focus was adversarial behaviour: malformed binaries, corrupted PE structures, and intentionally hostile IOC‑like strings. If you work with weird samples, tooling pipelines, or large‑scale triage, this release makes IOCX more robust under hostile conditions.

**New PE structural heuristics**

Six new checks added to catch structural anomalies without blowing up the parser:

* overlapping/misaligned sections
* inconsistent optional headers (PE32 & PE32+)
* broken entrypoint mappings
* corrupted data directories
* malformed import tables
* general PE layout inconsistencies

These aren’t detections — they’re deterministic, reason‑coded structural signals to keep analysis stable.

**Expanded adversarial PE corpus**

Added a full suite of malformed and corrupted PEs, including:

* broken RVAs / invalid addressing
* truncated Rich headers
* fake UPX names + packed‑lookalikes
* PE32/PE32+ hybrids
* “franken‑PEs” combining multiple faults

All outputs are snapshot‑validated to guarantee deterministic behaviour.

**Adversarial coverage across all IOC categories**

New hostile string fixtures now stress every extractor:

* homoglyph + mixed‑script domains
* malformed URLs and schemes
* broken IPv4/IPv6
* noisy or near‑miss hashes
* invalid Base64
* adversarial crypto strings (incl. Base58Check)
* long/invalid Windows paths
* malformed emails

The goal: keep extraction predictable even when the input is intentionally messy.
**Parser & extractor hardening**

* stable on malformed PE structures
* structured, JSON‑safe error metadata
* improved domain/URL/crypto/hash extractors
* deterministic output across platforms

**Links**

GitHub: [https://github.com/iocx-dev/iocx](https://github.com/iocx-dev/iocx)
PyPI: [https://pypi.org/project/iocx/](https://pypi.org/project/iocx/)

**Example**

`pip install iocx`
`iocx suspicious.exe -a full`

If you’re doing malware triage, static analysis, or building automated pipelines that need predictable IOC extraction, v0.7.1 should be a noticeable stability bump. Happy to discuss edge cases or weird samples people want covered next.
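To make the "overlapping sections" heuristic concrete, here is a tiny check in the same spirit: deterministic, reason-coded, and explicitly not a maliciousness verdict. The function name, signal shape, and field names are illustrative assumptions, not IOCX internals:

```python
def flag_overlapping_sections(sections):
    """Given (name, virtual_address, virtual_size) triples from a PE
    section table, emit a reason-coded signal for each adjacent pair
    whose virtual ranges overlap. A structural anomaly flag only:
    overlap is legal nowhere in a well-formed layout, but by itself
    proves nothing about intent."""
    # Sort by start address; a zero-size section still occupies one byte
    # for comparison purposes so it can't silently nest inside another.
    spans = sorted(
        (va, va + max(size, 1), name) for name, va, size in sections
    )
    signals = []
    for (s1, e1, n1), (s2, e2, n2) in zip(spans, spans[1:]):
        if s2 < e1:  # next section starts before the previous one ends
            signals.append({
                "reason": "PE_SECTION_OVERLAP",
                "sections": [n1, n2],
            })
    return signals
```

The point of the reason code is that downstream tooling can act on the signal (quarantine, re-parse with stricter bounds, log) without the parser itself ever throwing.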
Every AI tool, workflow automation, and productivity app your employees connected to Google or Microsoft this year left something behind: a persistent OAuth token with no expiration date, no automatic cleanup, and in most organizations, no one watching it. Your perimeter controls don't see it. Your MFA doesn't stop it. And when an attacker gets hold of one, they don't need a password. OAuth
We recently added a C/C++ security checklist to the Testing Handbook and challenged readers to spot the bugs in two code samples: a deceptively simple Linux ping program and a Windows driver registry handler. If you found the `inet_ntoa` global-buffer gotcha or the missing `RTL_QUERY_REGISTRY_TYPECHECK` flag, nice work. If not, here’s a full walkthrough of both challenges, plus a deep dive into how the Windows registry type confusion escalates from a local denial of service to a kernel write primitive. Since we first released the new C/C++ security checklist, we also developed a new Claude skill, `c-review`. It turns the checklist into bug-finding prompts that an LLM can run against a codebase. It’s also platform- and threat-model-aware. Run these commands to install the skill: `claude skills add-marketplace https://github.com/trailofbits/skills` and `claude skills enable c-review --marketplace trailofbits/skills`. The Linux ping program challenge: the Linux warmup challenge we showed you in the last blog post has an obvious command injection issue. #include <stdio.h> #include <s
Quick note from a scanning project I've been running. We hit 6,000 web apps with a payment-bypass probe last week, sending a minimal fake `checkout.session.completed` event to common webhook paths (`/api/webhook/stripe`, `/api/payments/webhook`, etc.) without a `Stripe-Signature` header. 1,542 returned 200. That means anyone with curl can fire a forged Stripe event at those endpoints and the server processes it as legitimate. Depending on what the handler does with it, the consequences range from "logs a fake event" to "marks attacker's account as paid" to "creates a confirmed order with no payment".

The split was roughly:

* Custom domains (real production SaaS): ~720
* Render: 198
* Vercel: 142
* Replit: 121
* Railway, Fly, Heroku, others: ~360

Why so many? The Stripe library makes signature verification a one-liner. Every framework has the canonical example. But the dev journey usually goes: build the route locally with a stub handler that just `console.log`s the event body, get the upgrade-the-user logic working, leave signature verification on the TODO, ship. Six months later nobody remembers it was ever a TODO.

The fix in Express:

```js
app.post('/api/webhook/stripe',
  express.raw({type: 'application/json'}),
  (req, res) => {
    const sig = req.headers['stripe-signature'];
    let event;
    try {
      event = stripe.webhooks.constructEvent(
        req.body, sig, process.env.STRIPE_WEBHOOK_SECRET);
    } catch (err) {
      return res.status(400).send(`Webhook Error: ${err.message}`);
    }
    // proceed with event
    res.json({received: true});
  });
```

The trap: `express.json()` globally parses the body before your handler sees it, leaving Stripe's library to compute the signature against parsed-then-stringified JSON, which never matches. Use `express.raw()` specifically on the webhook route, before any global JSON parser. FastAPI / Python: read `await request.body()` directly, not `request.json()`. Same idea.
Caveats: a 200 response doesn't prove the app actually grants the attacker something. Some endpoints log every webhook for analytics and return 200 regardless. The 1,542 number is "endpoints accepting unsigned events", not "definitely exploitable". But the misconfiguration is real on its own. Full writeup with the methodology and platform-by-platform breakdown: [https://securityscanner.dev/blog/stripe-webhook-signature-bypass-1500-apps](https://securityscanner.dev/blog/stripe-webhook-signature-bypass-1500-apps) Curious if anyone here has shipped a Stripe webhook recently and can double-check theirs.
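For the Python side, the check that `constructEvent` performs can be sketched with nothing but the stdlib: the `Stripe-Signature` header carries a timestamp plus an HMAC-SHA256 of `"<timestamp>.<raw body>"` keyed with the webhook secret. This is a deliberately simplified sketch (it reads a single `v1` value and skips some of the official library's edge-case handling; use Stripe's library in production):

```python
import hashlib
import hmac
import time

def verify_stripe_signature(payload: bytes, sig_header: str,
                            secret: str, tolerance: int = 300) -> bool:
    """Recompute HMAC-SHA256 over "<timestamp>.<raw body>" and compare
    in constant time. Rejects timestamps older than `tolerance` seconds
    to blunt replay of captured events."""
    try:
        parts = dict(p.split("=", 1) for p in sig_header.split(","))
        timestamp = int(parts["t"])
        candidate = parts["v1"]
    except (KeyError, ValueError):
        return False
    if abs(time.time() - timestamp) > tolerance:
        return False
    signed = f"{timestamp}.".encode() + payload
    expected = hmac.new(secret.encode(), signed, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, candidate)
```

The same rule as Express applies: hand this the exact raw bytes (`await request.body()` in FastAPI), because any parse-then-reserialize step changes the bytes and the MAC will never match.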
The Model We’ve Relied on Is Starting to Break Over the past 20 years, I’ve seen the threat landscape evolve from opportunistic attackers, to organized cybercrime, to nation-state campaigns. Each shift forced security teams to adapt. What’s happening right now is different. AI models coming out of Anthropic, OpenAI, Google, and X are rewriting the […] The post Sara AI Pentesting Is Now Generally Available: The Model Is Changing appeared first on Synack .
Key Takeaways Over the past year, the conversation in security has changed faster than most programs have. AI is compressing attacker timelines. Environments are changing daily rather than quarterly. And the model most enterprises still rely on to validate security—periodic penetration testing—is starting to break under the weight of both. The real question isn’t whether […] The post The Shift to Continuous Security Validation: Why Detection Is No Longer Enough appeared first on Synack .
Using a 1930s trade law, Homeland Security targeted the man—who hasn’t entered the US in more than a decade—following posts on X condemning the killings of Renee Good and Alex Pretti.
To reduce the amount of noise from questions, we have disabled self-posts in favor of a unified questions thread every week. Feel free to ask any question about reverse engineering here. If your question is about how to use a specific tool, or is specific to some particular target, you will have better luck on the [Reverse Engineering StackExchange](http://reverseengineering.stackexchange.com/). See also /r/AskReverseEngineering.
"AccountDumpling": Hunting Down the Google-Sent Phishing Wave Compromising 30,000+ Facebook Accounts
*As one tends to do on Saturday mornings with coffee in hand, I was reviewing two samples attributed to the LunaStealer / LunaGrabber family. Originally I was validating that* `tiquery` *was working with the MCP configuration; however, what started as a quick TI check turned into a full static-analysis session — and it gave me a good opportunity to put the MalChela MCP integration through its paces in a real workflow. This post walks through how that investigation unfolded, what the pivot points were, and what we found at the bottom of the rabbit hole.*
Plus: The NSA tests Anthropic’s Mythos Preview to find vulnerabilities, a Finnish teen is charged over the Scattered Spider hacking spree, and more.
Over the past two and a bit quarters, we've undertaken an intensive engineering effort, internally code-named "Code Orange: Fail Small", focused on making Cloudflare's infrastructure more resilient, secure, and reliable for every customer. Earlier this month, the Cloudflare team finished this work. While improving resiliency will never be a “job done” and will always be a top priority across our development lifecycle, we have now completed the work that would have avoided the November 18, 2025 and December 5, 2025 global outages. This work focused on several key areas: safer configuration changes, reducing the impact of failure, and revising our “break glass” procedures and incident management. We also introduced measures to prevent drift and regressions over time, and strengthened the way we communicate to our customers during an outage. Here we explain in depth what we shipped, and what it means for you. Safer configuration changes. What it means for you: In most cases, Cloudflare internal configuration changes no longer reach our network instantly and are instead rolled out progressively with real-time health monitoring. This allows our observability tools to catch problems and revert issues before they affect your traffic. In order to catch potentially dangerous deployments before they reach production, we've identified high-risk configuration pipelines, and built new tools to manage configuration changes better. For products that run on our network processing customer traffic and receive configuration changes, we no longer deploy these changes instantly across the
MalChela v4.0 is out. The desktop GUI is gone — replaced by a PWA you can reach from any browser on the network. Battery-powered Pi on the table, iPad in hand, no keyboard required. The field kit finally makes sense.
When we first launched Workers eight years ago, it was a direct-to-developers platform. Over the years, we have expanded and scaled the ecosystem so that platforms could not only build on Workers directly, but they could also enable their customers to ship code to us through many multi-tenant applications. We now see on Workers: applications where users describe what they want, and the AI writes the implementation; multi-tenant SaaS where every customer's business logic is, at runtime, some TypeScript the platform has never seen before; agents that write and run their own tools; CI/CD products where every repo defines its own pipeline. Last month, when we shipped the Dynamic Workers open beta, we gave those platforms a clean primitive for the compute side: hand the Workers runtime some code at runtime, get back an isolated, sandboxed Worker, on the same machine, in single-digit milliseconds. Durable Object Facets extended the same idea to storage — each dynamically-loaded app can have its own SQLite database, spun up on demand, with the platform sitting in front, as a supervisor. Artifacts did the same for source control: a Git-native, versioned filesystem you can create by the tens of millions, one per agent, one per session, one per tenant. So, we have dynamic deployment for storage and source control. What’s next? Today, we are bridging durable execution and dynamic deployment with Dynamic Workflows. The gap between durable and dynamic execution