Unit 42 on non-phishing vectors
Digging into the details
In the context of Unit 42’s recent threat report, the authors refer to “non-phishing techniques” including SEO poisoning, fake system prompts, and help desk manipulation. Today, we’re aiming to clarify what these vectors are, how they lead to infiltration, and how you can better manage them.
The vectors in detail
SEO poisoning
SEO poisoning is a technique where attackers engineer malicious web pages (or compromise existing sites) so that they rank high in search engine results for relevant queries. A user searching for software, documentation, or support often clicks on top results with some trust. If one of those is malicious, the user may land on a fake site that harvests credentials, installs malware, or mimics legitimate services. By manipulating search engine optimization (SEO) signals (keywords, backlinks, page structure) or using compromised domains, attackers “poison” the search results to lure victims.
Recent work also shows that SEO poisoning is evolving: attackers now aim to influence large language models or AI assistants by planting poisoned content so that AI tools return malicious or spoofed links. Because this vector relies on users pulling content (searching), it can bypass email filtering or URL filtering tied to inbound traffic.
Fake system prompts (or fake browser / system alerts)
This refers to deception layered into the user’s device or browser experience, masquerading as legitimate system or application prompts. Examples include fake update alerts, bogus security warnings, browser popups saying “your system is infected, click here to repair,” or crafted overlays that mimic operating system dialogs.
In practice, an attacker might embed scripts or use malvertising to show these prompts, or exploit vulnerabilities to insert them. A user believing the prompt to be genuine may enter credentials, approve permissions, or install malware. These prompts exploit trust in familiar UI elements and conditioning to respond to system alerts.
Help desk manipulation (or support impersonation)
In this method, attackers impersonate internal or external support personnel (IT help desk, vendor support, service desk) to trick legitimate users or internal staff into divulging credentials, resetting multi-factor authentication, or granting access.
For example, an attacker may call or email pretending to be an employee who is locked out, and ask the help desk to reset credentials. Or they may impersonate a vendor and ask for privileged access “for maintenance.” They can exploit standard operational procedures, internal trust, or gaps in verification protocols.
Unit 42 notes that some actors escalate access in minutes by interacting with IT processes, especially when permissions are loosely controlled.
In sum, these techniques differ from “classic phishing via email” because they don’t start with an unsolicited message; instead, they embed themselves into the user’s own activity (searching, responding to a device prompt, or interacting with support channels).
Comparison with industry data & caveats
Unit 42 reports that in their incident response caseload from May 2024 to May 2025, 36% of all incidents began with a social engineering method, and among those, more than one-third involved non-phishing techniques (i.e. SEO poisoning, fake prompts, help desk manipulation). Put another way: within the social engineering incidents alone, phishing accounts for about 65%, leaving roughly 35% to other vectors.
This suggests that roughly 12–13% of all incidents in their sample began with a non-phishing social engineering vector (0.36 × 0.35 ≈ 12.6%), which matches their chart breakdown (“12% SEO poisoning / malvertising and other non-phishing share”).
When we look at broader, public reporting and academic studies, the picture is somewhat consistent but with differences in scale, taxonomy, and measurement:
The Verizon DBIR (2025) shows that around 60% of confirmed breaches involve a human element (clicks, misuse, social engineering) rather than purely technical failures.
In that same DBIR, “Social Engineering” is a distinct breach pattern, but its share of all breaches is more modest (roughly 22% in the 2024 edition, for example), noticeably lower than Unit 42’s “initial access via social engineering” metric.
Public cybersecurity blogs and aggregators generally agree that phishing remains the dominant social engineering method (often claiming 50–70% of social engineering attacks), while assigning smaller proportions to newer vectors like SEO poisoning or malvertising.
Some academic work questions how effective user training is in reducing phishing success. For instance, a large-scale reproduction study (n = 12,511) reported that standard anti-phishing training had no significant effect on click rates in that setting.
Suggested solutions / mitigations
Given that attackers are increasingly exploiting human workflows, trust, and identity systems, mitigation must go beyond standalone awareness campaigns. Below are practical, structural, and behavioral defenses.
Hardening search / browser trust surfaces
Deploy solutions (browser isolation, secure browsing gateways, DNS filtering) that block or inspect suspicious sites, including those reached via search results.
Leverage reputation services and threat intelligence to detect malicious domains that may be seeded via SEO poisoning.
Monitor for domain names or hosted PDFs that combine your organization’s name with terms like “support,” “helpdesk,” or “update” (a minimal detection sketch follows this list).
Educate users to verify domain names (especially for support or update pages) and not trust search results blindly.
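To make that monitoring idea concrete, here is a minimal Python sketch that flags domains pairing an organization’s name with common lure keywords, or sitting unusually close to the legitimate domain. The org name, keyword list, and candidate feed are illustrative assumptions; in practice, candidates would come from certificate transparency logs, passive DNS, or a threat intelligence feed.

```python
# Sketch: flag lookalike / lure domains of the kind seeded via SEO poisoning.
# ORG_NAME, LURE_KEYWORDS, and the candidate list are illustrative assumptions.
from difflib import SequenceMatcher

ORG_NAME = "examplecorp"              # hypothetical organization name
LEGIT_DOMAIN = "examplecorp.com"      # hypothetical legitimate domain
LURE_KEYWORDS = ("support", "helpdesk", "update", "login", "portal")

def is_suspicious(domain: str) -> bool:
    """Heuristic: org name + lure keyword, or near-duplicate of the real domain."""
    label = domain.lower().split(".")[0]
    # Rule 1: organization name combined with a classic lure keyword.
    if ORG_NAME in label and any(kw in label for kw in LURE_KEYWORDS):
        return True
    # Rule 2: high string similarity to the legitimate domain (typosquats).
    similarity = SequenceMatcher(None, domain.lower(), LEGIT_DOMAIN).ratio()
    return similarity > 0.85 and domain.lower() != LEGIT_DOMAIN

# Candidate domains would normally come from CT logs or a threat-intel feed.
candidates = ["examplecorp-support.com", "examplec0rp.com", "weather.com"]
for d in candidates:
    if is_suspicious(d):
        print(f"review: {d}")
```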
Strengthen device and UI-level security
Restrict or harden scripting capabilities so that overlay popups or fake prompts are harder to inject.
Use security tools that can detect anomalous UI events or injected fake prompts.
Enforce that system updates come only from vetted channels and digitally signed sources (see the verification sketch after this list).
Incorporate “trusted UI” indicators (e.g. OS-integrated prompts vs. web overlays) and train users to recognize when a prompt is coming from outside the system context.
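As a sketch of the “digitally signed sources only” rule, the snippet below uses the `cryptography` package to verify an update file against a detached Ed25519 signature and a pinned public key. The file paths and the key bytes are placeholders; the pattern of refusing anything that fails verification is the point.

```python
# Sketch: accept an update only if its detached Ed25519 signature verifies
# against a key pinned at build time. Paths and key bytes are placeholders.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

PINNED_PUBKEY = bytes.fromhex(
    "00" * 32  # placeholder: the vendor's real 32-byte public key goes here
)

def verify_update(update_path: str, sig_path: str) -> bool:
    """Return True only if the update's detached signature checks out."""
    pub = Ed25519PublicKey.from_public_bytes(PINNED_PUBKEY)
    with open(update_path, "rb") as f:
        payload = f.read()
    with open(sig_path, "rb") as f:
        signature = f.read()
    try:
        pub.verify(signature, payload)   # raises InvalidSignature on mismatch
        return True
    except InvalidSignature:
        return False

# Placeholder file names; refuse to proceed on any verification failure.
if not verify_update("update.bin", "update.bin.sig"):
    raise SystemExit("refusing unsigned or tampered update")
```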
Tighten help desk and internal support protocols
Require multi-factor verification before any credential reset or privilege escalation, even for internal support requests.
Limit what support staff can do automatically; maintain “step-up” checks for high-risk actions.
Log and audit all help desk resets and access changes, and flag anomalies in volume, timing, and requester patterns (a simple flagging sketch follows this list).
Simulate red team “support impersonation” drills to test staff adherence to verification protocols.
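A minimal sketch of the logging-and-flagging idea, assuming a simple (hypothetical) reset-log schema and illustrative thresholds:

```python
# Sketch: flag anomalous help desk credential resets by volume and timing.
# The record format and thresholds are illustrative assumptions.
from collections import Counter
from datetime import datetime

resets = [
    # (requester, target_account, ISO timestamp) -- assumed log schema
    ("agent_7", "alice", "2025-05-02T03:14:00"),
    ("agent_7", "bob",   "2025-05-02T03:20:00"),
    ("agent_7", "carol", "2025-05-02T03:25:00"),
    ("agent_2", "dave",  "2025-05-02T10:05:00"),
]

MAX_RESETS_PER_DAY = 2          # tune from your historical baseline
BUSINESS_HOURS = range(8, 18)   # 08:00-17:59 local time

per_requester = Counter(r[0] for r in resets)
for requester, target, ts in resets:
    hour = datetime.fromisoformat(ts).hour
    flags = []
    if per_requester[requester] > MAX_RESETS_PER_DAY:
        flags.append("volume")
    if hour not in BUSINESS_HOURS:
        flags.append("off-hours")
    if flags:
        print(f"flag {requester} -> {target}: {', '.join(flags)}")
```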
Identity-centric defenses and monitoring
Use Identity Threat Detection and Response (ITDR) tools to spot credential misuse or abnormal identity behavior (e.g. geographic changes, odd hours).
Enforce conditional access and risk-based authentication, e.g. requiring reauthentication or MFA when context shifts (see the scoring sketch after this list).
Adopt zero trust for identities, not just networks: don’t implicitly trust credentials or internal identity flows.
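Risk-based authentication can be reduced to a small scoring sketch: rate each sign-in context against the user’s baseline and step up to MFA above a threshold. The signals, weights, and threshold below are illustrative assumptions, not a reference policy; real ITDR and conditional-access products compute far richer scores.

```python
# Sketch: risk-based step-up authentication with illustrative signals/weights.
from dataclasses import dataclass

@dataclass
class AuthContext:
    country: str
    hour: int          # local hour of the attempt
    known_device: bool

@dataclass
class UserBaseline:
    usual_country: str
    usual_hours: range

def risk_score(ctx: AuthContext, base: UserBaseline) -> int:
    score = 0
    if ctx.country != base.usual_country:
        score += 40                      # geographic change
    if ctx.hour not in base.usual_hours:
        score += 20                      # odd-hours access
    if not ctx.known_device:
        score += 30                      # unrecognized device
    return score

STEP_UP_THRESHOLD = 50  # above this, require fresh MFA

baseline = UserBaseline(usual_country="US", usual_hours=range(7, 20))
attempt = AuthContext(country="RO", hour=3, known_device=False)
if risk_score(attempt, baseline) >= STEP_UP_THRESHOLD:
    print("require MFA re-authentication")
```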
Behavioral training with realistic evaluation
Move away from static, annual awareness modules; use ongoing, scenario-based training including realistic prompts (fake UI, help desk pretexting).
Use red teaming and phishing simulations that include non-email vectors (mock update alerts, help desk calls).
Measure not just clicks but also incident reports, recognition of lures, and resilience over time (a small metrics sketch follows this list).
Be aware of the limits: as the academic study cited above showed, training alone may have limited effect, especially if it remains generic.
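For the measurement point, a small sketch that tracks report rate alongside click rate per simulation campaign; a rising report-to-click ratio is the resilience signal that click rate alone misses. The campaign figures below are placeholders, not real results.

```python
# Sketch: per-campaign resilience metrics beyond raw click rate.
# Campaign figures are placeholders for illustration only.
campaigns = [
    # (name, targeted, clicked, reported)
    ("Q1 fake-update popup", 500, 90, 40),
    ("Q2 help desk pretext", 500, 60, 110),
]

for name, targeted, clicked, reported in campaigns:
    click_rate = clicked / targeted
    report_rate = reported / targeted
    # Report-to-click ratio: > 1 means lures get reported more than clicked.
    resilience = reported / clicked if clicked else float("inf")
    print(f"{name}: click {click_rate:.0%}, report {report_rate:.0%}, "
          f"report/click {resilience:.2f}")
```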
Incident response and detection layering
Monitor lateral movement, privilege escalation, and abnormal service account use, so that even if an attacker gains entry, further damage is contained.
Model attack kill chains on the assumption that social engineering will succeed, and plan containment and response accordingly.
Incorporate behavioral anomaly detection in layers that monitor identity behavior across systems (a baseline-deviation sketch follows this list).
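One minimal way to ground that layer: compare a service account’s current activity against its historical baseline with a simple z-score. The counts and threshold here are assumptions for illustration; production tooling uses far richer behavioral models.

```python
# Sketch: flag a service account whose hourly login count deviates sharply
# from its historical baseline (simple z-score). Numbers are illustrative.
from statistics import mean, stdev

baseline_logins_per_hour = [3, 4, 2, 3, 5, 3, 4, 2]  # assumed history
current_hour_logins = 41

mu = mean(baseline_logins_per_hour)
sigma = stdev(baseline_logins_per_hour) or 1.0  # avoid division by zero
z = (current_hour_logins - mu) / sigma

if z > 3:   # more than 3 standard deviations above baseline
    print(f"anomaly: service account z-score {z:.1f}, investigate")
```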
Periodic review and governance
Continuously reassess help desk procedures, access policies, and trust boundaries.
Conduct tabletop exercises for social engineering scenarios involving help desk or system prompt deception.
Encourage a culture of questioning unusual permission requests, even if they seem internal.
Further reading…
Palo Alto Networks, 2025 Unit 42 Global Incident Response Report: Social Engineering Edition
Verizon, Data Breach Investigations Report 2025
Rozema & Davis, “Anti-Phishing Training (Still) Does Not Work: A Large-Scale Reproduction…” (2025)
ZeroFox, SEO Poisoning: How Threat Actors Are Tricking AI and LLMs
Vectra, SEO Poisoning Attacks: Detection & Defense



Hey, great read as always; it’s almost impressive how creative attackers get with these non-phishing vectors that bypass traditional filters, isn’t it?
The statistic about 12–13% of all incidents coming from non-phishing social engineering is eye-opening. What strikes me most is the help desk manipulation vector: it exploits the operational trust built into standard IT processes. The suggestion to simulate red team support impersonation attacks is excellent, but I wonder if most organizations have the maturity to run these drills without creating paranoia that hampers legitimate support workflows. The point about moving beyond static annual awareness training to ongoing scenario-based approaches really resonates, especially given the cited study showing limited training effectiveness. The identity-centric monitoring recommendations are spot on.