2025 Unit 42 Global Incident Response Report: Social Engineering Edition
An introductory overview of Unit 42's recent report
Recently, along with a wealth of other industry-critical information and resources, Palo Alto Networks' Unit 42 published its incident response report on social engineering. As an area of practice that has always fascinated me (more art than science), it immediately grabbed my attention and had me taking notes. With that in mind, over the next few weeks we as a team will dig deeper into social engineering and help you extract the insights you need most.
The report argues that social engineering has matured into one of the most reliable and high-impact vectors for intrusion. Rather than relying solely on zero-days or novel exploits, attackers are increasingly targeting identity systems, human workflows and trust relationships. In many of the incidents analyzed, social engineering served as the initial access vector, and success hinged more on process and control gaps than on advanced technical sophistication.
The authors contend that defenders must shift their mindset: social engineering should be treated as a systemic, identity-centric threat, not merely a user-education problem. Detection, identity controls, conditional access, and behavioral analytics are central to the response posture.
Key Findings & Trends
Prevalence of Social Engineering as Initial Access
Social engineering was the root cause in 36% of all incident response engagements during the covered period. While phishing remains a dominant mechanism, a significant proportion of attacks now employ non-phishing methods, including SEO poisoning, fake system prompts and direct manipulation of help desks.
High-Touch Attacks Escalating Privilege Quickly
In “high-touch” campaigns, adversaries impersonate internal staff, exploit help desk processes or use voice lures to bypass MFA or identity verification procedures. The report cites cases in which attackers escalated from initial compromise to domain administrator in under 40 minutes, using only built-in tools and social pretexts. Threat groups such as Muddled Libra, state-aligned actors (e.g. Agent Serpens), and synthetic insider campaigns from North Korean actors are highlighted.
Rise of At-Scale (Low-Touch) Deception Campaigns
Beyond targeted attacks, social engineering is increasingly automated and scalable. The “ClickFix” model illustrates this: adversaries use fake browser or system prompts, SEO-boosted malicious landing pages, malvertising, or spoofed update alerts to induce users to self-initiate compromise.
ClickFix campaigns often begin with credential harvesting or information-stealers, then escalate via loaders or remote access tools.
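From a defender's perspective, the telltale ClickFix artifact is a user-facing shell (such as Explorer's Run dialog) spawning an interpreter with a pasted, often encoded, command. The sketch below illustrates that heuristic in Python; the event field names and the specific indicator strings are illustrative assumptions, not tied to any particular EDR schema.

```python
# Minimal sketch: flag process events consistent with ClickFix-style lures,
# where a user pastes an attacker-supplied command into the Run dialog or a
# terminal. Field names and indicator lists are illustrative assumptions.

SUSPECT_PARENTS = {"explorer.exe"}  # user-facing shells
SUSPECT_CHILDREN = {"powershell.exe", "cmd.exe", "mshta.exe"}
SUSPECT_MARKERS = ("-enc", "-encodedcommand", "iex", "downloadstring")

def is_clickfix_candidate(event: dict) -> bool:
    """Heuristic: user shell spawns an interpreter with a suspicious command line."""
    parent = event.get("parent_image", "").lower()
    child = event.get("image", "").lower()
    cmdline = event.get("command_line", "").lower()
    return (
        parent in SUSPECT_PARENTS
        and child in SUSPECT_CHILDREN
        and any(marker in cmdline for marker in SUSPECT_MARKERS)
    )

# Two synthetic events: one matching the pattern, one benign admin script.
events = [
    {"parent_image": "explorer.exe", "image": "powershell.exe",
     "command_line": "powershell -EncodedCommand SQBFAFgA..."},
    {"parent_image": "services.exe", "image": "powershell.exe",
     "command_line": "powershell -File maintenance.ps1"},
]
flagged = [e for e in events if is_clickfix_candidate(e)]
```

A production rule would draw on richer telemetry (clipboard provenance, parent session type), but even this coarse parent/child pairing captures the self-initiated nature of these compromises.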
Telemetry & Attribution Insights
In social engineering–originated incidents, 66% targeted privileged accounts, and 45% used internal impersonation. Voice-based or callback methods were employed in around 23% of these cases.
Overall, the motive in nearly all social engineering cases (93%) was financial gain. The rate of data exposure following a social engineering incident was 60%, significantly higher than the baseline across all intrusion types.
Control Gaps & Detection Deficiencies
The success of these attacks is tied less to extreme sophistication and more to gaps in process, tooling and human response. Key issues include alert fatigue, misclassification of anomalous behavior, over-permissioned access, and weak identity recovery procedures.
Many organizations lacked mature Identity Threat Detection and Response (ITDR) or User and Entity Behavior Analytics (UEBA) capabilities, reducing their ability to detect lateral movement or post-compromise escalation.
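To make the UEBA idea concrete, here is a deliberately minimal sketch: baseline each account's typical sign-in hours, then flag sign-ins far outside that baseline. A real ITDR/UEBA product models many more signals (device, geolocation, session behavior, peer groups); the structure and field names below are illustrative assumptions only.

```python
# Minimal UEBA-style sketch: per-user login-hour baselines with a tolerance
# window. Event fields ("user", "hour") are illustrative assumptions.
from collections import defaultdict

def build_baselines(history: list) -> dict:
    """Map each user to the set of hours at which they normally authenticate."""
    baselines = defaultdict(set)
    for event in history:
        baselines[event["user"]].add(event["hour"])
    return baselines

def is_anomalous(event: dict, baselines: dict, tolerance: int = 1) -> bool:
    """Flag a login whose hour is not within `tolerance` hours of the baseline."""
    usual = baselines.get(event["user"], set())
    return not any(abs(event["hour"] - h) <= tolerance for h in usual)

# Synthetic history: alice normally signs in during business hours.
history = [{"user": "alice", "hour": h} for h in (9, 10, 11, 14)]
baselines = build_baselines(history)
```

With these baselines, a 3 a.m. sign-in for alice is flagged while a mid-morning one is not; a never-before-seen account is flagged by default, which mirrors how behavioral tools treat accounts without history.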
Recommendations for Defenders
To mitigate evolving social engineering risks, Unit 42 offers a set of prescriptive countermeasures:
Integrate identity, device, and session signals to detect misuse earlier, rather than relying solely on observables like malware or signature-based detections.
Extend Zero Trust to cover user identity and access paths: conditional access, just-in-time permissions, segmentation and least privilege.
Harden the human layer by training not just end users but interfaces like help desks, HR or support staff, and simulate realistic social engineering efforts (impersonation, voice lures, MFA manipulation).
Deploy behavioral analytics and identity threat detection to spot anomalies like session abuse, impersonation, credential replay or lateral movement.
Monitor and restrict use of native tooling and business process workflows (PowerShell, WMI, fast-track approvals) that can be abused without detection.
Conduct red-teaming, playbook rehearsals and organizational simulations aligned with current social engineering tactics.
Enforce network-level controls (DNS security, URL filtering) to block spoofed domains, SEO poisoning links, and malicious infrastructure.
Strengthen identity recovery paths: tighten verification, limit who can invoke resets, and monitor anomalous help-desk activity.
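The last point, monitoring anomalous help-desk activity, lends itself to simple tooling. The sketch below flags resets that either target privileged roles or come from an agent whose reset volume spikes within a review window; the thresholds, role names, and record fields are illustrative assumptions to be tuned to your environment.

```python
# Minimal sketch: review a window of help-desk password resets and flag
# privileged targets or per-agent volume spikes. All names and thresholds
# here are illustrative assumptions.
from collections import Counter

PRIVILEGED_ROLES = {"domain-admin", "helpdesk-admin"}
RESETS_PER_AGENT_LIMIT = 5  # per review window; tune to your environment

def review_resets(resets: list) -> list:
    """Return reset records annotated with a reason for escalation."""
    per_agent = Counter(r["agent"] for r in resets)
    alerts = []
    for r in resets:
        if r["target_role"] in PRIVILEGED_ROLES:
            alerts.append({**r, "reason": "privileged target"})
        elif per_agent[r["agent"]] > RESETS_PER_AGENT_LIMIT:
            alerts.append({**r, "reason": "reset volume spike"})
    return alerts

# Synthetic window: one agent with a burst of resets, one privileged reset.
resets = (
    [{"agent": "hd-01", "target_role": "user"}] * 6
    + [{"agent": "hd-02", "target_role": "domain-admin"}]
)
alerts = review_resets(resets)
```

Even this crude pairing of "who was reset" and "who did the resetting" surfaces the two patterns the report associates with help-desk abuse: attackers requesting resets on privileged accounts and compromised or coerced agents processing unusual volumes.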
This kind of advice is always in danger of becoming an abstract "ought" that hangs over you, the already very busy cybersecurity professionals on the front line. With that in mind, over the next 8 weeks we will be rolling out resources for each of these points so you can do your job better, train the uninitiated more effectively, and get a step ahead of the adversary.
Implications & Strategic Observations
This report underscores that the human and identity layer is now the central battleground. As social engineering campaigns grow more automated and realistic, traditional perimeter or signature-based defenses become less effective. Organizations must assume that someday an adversary will get in via deception; the goal is to detect and contain that intrusion before it spreads.
Shifting to identity-centric defenses, enriching visibility across workflows, and operationalizing behavioral detection will be critical.