#221: Digging into Social Engineering, part 1
Exploring Unit 42's findings
Don’t miss out!
Welcome to another _secpro!
This week, we’re poking the brain of CISO expert David Gee to deliver insights that line up nicely with his new book, A Day in the Life of a CISO. We’ve also included our popular PDF resource again, to help you improve your training sessions and help the non-specialists amongst us make the right moves in the age of AI. Check it out!
If you want more, you know what you need to do: sign up for premium and get access to everything we have on offer. Click the link above to visit our Substack and sign up there!
Cheers!
Austin Miller
Editor-in-Chief
This week’s article
Unit 42 on non-phishing vectors
Recently, along with a wealth of other industry-critical information and resources, Palo Alto Networks’ Unit 42 published its incident response report on social engineering. As an area of practice that has always fascinated me (more art than science), this immediately grabbed my attention and almost forced me to start taking notes. With this in mind, over the next few weeks the team will dig deeper into social engineering and help you extract the key insights you need.
News Bytes
Unit 42 Threat Bulletin – October 2025: Published 21 October 2025, this monthly bulletin by Unit 42 (the threat-research arm of Palo Alto Networks) surfaces multiple emerging threats. Highlights include “Shai-Hulud”, a self-propagating supply-chain worm targeting npm packages; detailed technical IOCs; and a spotlight on “Phantom Taurus”, a newly identified Chinese-nexus APT targeting government and telecom organisations across Africa, the Middle East, and Asia.
PacketWatch Cyber Threat Intelligence Report: Published 20 October 2025 by PacketWatch’s intelligence team, this bi-weekly briefing highlights: (a) the major breach incident at F5 Networks (source code + undisclosed vulnerabilities); (b) a list of critical and high-severity vulnerabilities across major platforms (Oracle, Microsoft, Veeam, SAP, 7-Zip, Ivanti); and (c) a renewed emphasis on user-targeted attacks such as credential phishing, fake CAPTCHA software, and fake downloads.
Disrupting malicious uses of AI (PDF): Released by OpenAI, this October 2025 update (PDF) details how threat actors are increasingly leveraging multiple AI tools (e.g., using one model for planning and another for execution) and integrating AI into existing cyber-attack workflows rather than inventing wholly new attack methods. The report also gives case studies of misuse (scams, code-signing abuse, social engineering) and how defence and detection are adapting.
Microsoft Digital Defense Report 2025: Lighting the path to a secure future (PDF): Published by Microsoft on 21 October 2025, this annual report provides its threat-intelligence view: a major uptick in AI-enabled adversary operations, increasing geopolitical cyber-conflict, growing supply chain risk, and the imperative for defenders to rethink traditional security models given the speed and scale of modern attacks.
ENISA Threat Landscape 2025 (PDF): Published 7 October 2025 by ENISA (the European Union Agency for Cybersecurity), this comprehensive PDF analyses 4,875 incidents (1 July 2024–30 June 2025) to map global threat trends: a shift toward mixed/campaign-style operations, AI-enabled threat activity, supply chain convergence, and increased adversary speed. Though released slightly earlier in the month, it gives context for many of this week’s incidents.
This week’s academia
From Texts to Shields: Convergence of Large Language Models and Cybersecurity (Tao Li, Ya-Ting Yang, Yunian Pan & Quanyan Zhu): This paper explores how large language models (LLMs) are increasingly converging with cybersecurity tasks: for example, using LLMs for vulnerability analysis, network and software security tasks, 5G-vulnerability assessment, generative security engineering and automated reasoning in defence scenarios. The authors highlight socio-technical challenges (trust, transparency, human-in-the-loop, interpretability) when deploying LLMs in high-stakes security settings, and propose a forward-looking research agenda to integrate formal methods, human-centred design and organisational policy in LLM-enhanced cyber-operations.
Adversarial Defense in Cybersecurity: A Systematic Review of GANs for Threat Detection and Mitigation (Tharcisse Ndayipfukamiye, Jianguo Ding, Doreen Sebastian Sarwatt, Adamu Gaston Philipo & Huansheng Ning): This survey conducts a PRISMA-style review (2021–Aug 2025) of how Generative Adversarial Networks (GANs) are being used both as attack tools and defensive tools in cybersecurity. They analyse 185 peer-reviewed studies, develop a taxonomy across four dimensions (defensive function, GAN architecture, cybersecurity domain, adversarial threat model), and identify key gaps: training instability, lack of standard benchmarks, high computational cost, limited explainability. They propose a roadmap towards scalable, trustworthy GAN-powered defences.
Securing the AI Frontier: Urgent Ethical and Regulatory Imperatives for AI-Driven Cybersecurity (Vikram Kulothungan): This article examines the ethical and regulatory challenges arising from the deployment of AI in cybersecurity. It traces the historical regulation of AI, analyses current global frameworks (e.g., the EU AI Act), and discusses key issues including bias, transparency, accountability, privacy, and human oversight. The paper proposes strategies for enhancing AI literacy, public engagement, and global harmonisation of regulation in AI-driven cyber-systems.
A Defensive Framework Against Adversarial Attacks on Machine Learning-Based Network Intrusion Detection Systems (Benyamin Tafreshian & Shengzhi Zhang): The authors propose a multi-layer defensive framework aimed at ML-based Network Intrusion Detection Systems (NIDS), which are vulnerable to adversarial evasion. Their framework integrates adversarial training, dataset balancing, advanced feature engineering, ensemble learning, and fine-tuning. On the benchmark datasets NSL-KDD and UNSW-NB15, they report an average ~35% increase in detection accuracy and ~12.5% reduction in false positives under adversarial conditions.
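To make the adversarial-training ingredient of such frameworks concrete, here is a minimal, hypothetical sketch (not the paper's implementation): a toy logistic classifier is trained on each batch plus FGSM-style perturbed copies, so it also learns from worst-case inputs. The data and hyperparameters are illustrative stand-ins for network-flow features.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(X, y, w, b, eps):
    """Shift each sample in the direction that increases the logistic loss."""
    p = sigmoid(X @ w + b)
    grad_x = (p - y)[:, None] * w[None, :]   # dLoss/dx = (p - y) * w
    return X + eps * np.sign(grad_x)

def adversarial_train(X, y, epochs=200, lr=0.1, eps=0.1):
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        # Augment each epoch with adversarially perturbed copies of the data.
        X_adv = fgsm_perturb(X, y, w, b, eps)
        Xb, yb = np.vstack([X, X_adv]), np.concatenate([y, y])
        p = sigmoid(Xb @ w + b)
        w -= lr * Xb.T @ (p - yb) / len(yb)  # gradient step on the mixed batch
        b -= lr * np.mean(p - yb)
    return w, b

# Toy "traffic" data: two separable clusters standing in for benign/malicious flows.
X = np.vstack([rng.normal(-1, 0.3, (50, 2)), rng.normal(1, 0.3, (50, 2))])
y = np.concatenate([np.zeros(50), np.ones(50)])
w, b = adversarial_train(X, y)
acc = np.mean((sigmoid(X @ w + b) > 0.5) == y)
print(f"clean accuracy after adversarial training: {acc:.2f}")
```

The paper combines this with dataset balancing, feature engineering, and ensembling; the sketch isolates only the adversarial-augmentation loop.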
Cyber Security: State of the Art, Challenges and Future (W.S. Admass et al.): This article presents an overview of the state of the art in cybersecurity: existing architectures, key challenges, and emerging trends globally. It reviews tactics, techniques, and procedures (TTPs), current defence mechanisms and future research directions.
DYNAMITE: Dynamic Defense Selection for Enhancing Machine Learning-based Intrusion Detection Against Adversarial Attacks (Jing Chen, Onat Güngör, Zhengli Shang, Elvin Li & Tajana Rosing): This paper introduces “DYNAMITE”, a framework for dynamically selecting the optimal defence mechanism for ML-based Intrusion Detection Systems (IDS) when under adversarial attack. Instead of applying a static defence, DYNAMITE uses a meta-ML selection mechanism to pick the best defence in real-time, reducing computational overhead by ~96.2% compared to an oracle and improving F1-score by ~76.7% over random defence and ~65.8% over the best static defence.
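The core idea of dynamic defence selection can be sketched in a few lines. This is a hypothetical illustration, not DYNAMITE's actual selector: the defence names, costs, and effectiveness scores are invented, and the real system learns its selection policy with meta-ML rather than using a fixed lookup.

```python
from dataclasses import dataclass, field

@dataclass
class Defense:
    name: str
    cost: float                          # relative computational overhead
    effectiveness: dict = field(default_factory=dict)  # estimated F1 per attack type

DEFENSES = [
    Defense("adversarial_training", cost=5.0,
            effectiveness={"fgsm": 0.90, "pgd": 0.85, "spoofing": 0.60}),
    Defense("input_sanitization", cost=1.0,
            effectiveness={"fgsm": 0.70, "pgd": 0.55, "spoofing": 0.80}),
    Defense("ensemble_voting", cost=3.0,
            effectiveness={"fgsm": 0.80, "pgd": 0.75, "spoofing": 0.75}),
]

def select_defense(attack_type, budget):
    """Pick the defence with the best estimated score that fits the compute budget."""
    affordable = [d for d in DEFENSES if d.cost <= budget]
    if not affordable:
        raise ValueError("no defence fits the compute budget")
    return max(affordable, key=lambda d: d.effectiveness.get(attack_type, 0.0))

choice = select_defense("pgd", budget=4.0)
print(choice.name)  # under these toy numbers: ensemble_voting
```

The point of the design is visible even in the toy: the selector trades a small decision cost for skipping expensive defences that a static policy would always run.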