Social Engineering and Red Teaming: Risks, rewards and approaches that work
Originally published in Computing Security Magazine, July 2019
Paul Harris, CEO of Pentest Ltd, a Shearwater Group plc company, explains the benefits that Open Source Intelligence (OSINT) can deliver in the ongoing battle against cyber attacks.
The report is unequivocal: “Cyber attackers are increasingly focusing their attention on people, not technical defences.” Automated and AI-driven technologies now conduct monitoring and incident response with unprecedented power and speed, yet human users remain the entry point into networks that most cyber attackers exploit.
The prevalence of malicious social engineering means that it is now a standard component of red teaming exercises. The objective is to assess the integrity of a company’s defensive mechanisms, then use those findings to improve overall security culture and incident response.
But what can mock social engineering actually reveal about a company’s staff? Performed unethically, or without clear objectives, mock social engineering could undermine employee confidence or trust in management. A certain percentage of staff will consistently click malicious links or install misrepresented programs; there’s nothing new to learn here.
ALL COMPANIES CAN BE SOCIALLY ENGINEERED…THE QUESTION IS HOW?
For some companies, revealing the scenarios a malicious actor could employ may be a more valuable exercise. A successful social engineering campaign is, after all, the result of research conducted by the attacker into key business partners, technology stacks, current deals and staff changes: everything that could be used to establish a believable pretext, gain trust or create a sense of urgency. Open Source Intelligence (OSINT) replicates this process and, combined with expert knowledge of active threat groups, allows analysts to pinpoint the ruses attackers are likely to present, or the areas of corporate policy (social media use, for example) that need tightening.
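To illustrate the kind of passive reconnaissance an OSINT analyst might start with, the sketch below pulls a target’s subdomains from public certificate transparency logs, one common way of mapping an organisation’s technology stack. It is a minimal example, not a description of any particular methodology: the target domain is a placeholder, and crt.sh is just one of several public CT-log search services a real engagement would combine with many other sources.

```python
import json
import urllib.request

def fetch_ct_entries(domain):
    # Query crt.sh's public JSON endpoint for certificates covering the
    # domain and its subdomains ('%25' is a URL-encoded '%' wildcard).
    url = f"https://crt.sh/?q=%25.{domain}&output=json"
    with urllib.request.urlopen(url, timeout=30) as resp:
        return json.load(resp)

def extract_hostnames(ct_entries, domain):
    # Each certificate entry may list several newline-separated hostnames;
    # keep only those that belong to the target domain, deduplicated.
    names = set()
    for entry in ct_entries:
        for name in entry.get("name_value", "").split("\n"):
            name = name.strip().lower()
            if name == domain or name.endswith("." + domain):
                names.add(name)
    return sorted(names)
```

Usage would be as simple as `extract_hostnames(fetch_ct_entries("example.com"), "example.com")`, substituting the domain agreed in the engagement scope. Each hostname recovered (a VPN gateway, a staging server, a webmail portal) is a clue to the pretexts an attacker could construct.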
Companies can take these findings and alert their employees to emails, phone calls or network breaches that attackers may have attempted, pre-emptively warning security teams and keeping good security policy (e.g. do NOT enable macros) at the forefront of employees’ minds, without an unnecessary game of ‘gotcha’.
SIMULATION, NOT HUMILIATION
The inevitability of social engineering prompts other companies to focus on the efficacy of incident response. Pentest (a Shearwater Group plc company) has conducted simulations in which selected employees consent to being mock-phished and then report the incident to internal security. This approach highlights failures in network segmentation, communication channels or technical capability. After conducting such exercises, Pentest clients have created new email addresses (firstname.lastname@example.org), implemented data backup procedures, or placed assets behind an internal VPN. If the worst should transpire, the network and its first-line defenders can then contain the impact of an attack.
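As a rough illustration of how such an exercise might be measured, the sketch below computes the report rate and the mean time-to-report from a hypothetical exercise log. The pseudonymous participant IDs, timestamps and record format are invented for the example, not taken from any real engagement.

```python
from datetime import datetime

# Hypothetical exercise log: pseudonymous IDs agreed with each consenting
# participant, plus when the mock phish was sent and when it was reported.
events = [
    {"id": "P1", "sent": "2019-07-01T09:00", "reported": "2019-07-01T09:12"},
    {"id": "P2", "sent": "2019-07-01T09:00", "reported": None},  # never reported
    {"id": "P3", "sent": "2019-07-01T09:05", "reported": "2019-07-01T10:35"},
]

def reporting_metrics(events):
    # Fraction of participants who reported, and mean minutes to report
    # among those who did (None if nobody reported).
    reported = [e for e in events if e["reported"]]
    rate = len(reported) / len(events)
    minutes = [
        (datetime.fromisoformat(e["reported"])
         - datetime.fromisoformat(e["sent"])).total_seconds() / 60
        for e in reported
    ]
    mean_minutes = sum(minutes) / len(minutes) if minutes else None
    return rate, mean_minutes
```

A long mean time-to-report, or a low report rate among consenting participants who knew what to look for, points at exactly the communication-channel failures the exercise is designed to expose.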
KNOWLEDGE IS ALWAYS VALUABLE
Standard social engineering tests can still yield valuable data. Identifying the percentage of vulnerable employees can inform company training programmes, or evaluate previous security investments. In these circumstances, anonymising the results of such tests is the basis of ethical practice. A company like Pentest has extensive experience in red teaming and social engineering, and is adept at presenting results in a way that protects the identity of affected employees.
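A minimal sketch of what such anonymised reporting can look like: outcomes are aggregated per department, and any group too small to protect individual identities is suppressed entirely. The field names and the five-person threshold are illustrative assumptions, not Pentest’s actual methodology.

```python
from collections import Counter

def anonymised_click_rates(results, min_group_size=5):
    # Count sends and clicks per department, discarding individual identities.
    sent = Counter(r["department"] for r in results)
    clicked = Counter(r["department"] for r in results if r["clicked"])
    # Suppress departments too small for a rate to be meaningfully anonymous:
    # a 100% click rate in a two-person team identifies both people.
    return {
        dept: clicked[dept] / total
        for dept, total in sent.items()
        if total >= min_group_size
    }
```

The output is a per-department click rate suitable for a board report or a training-budget decision, with no way to trace a figure back to a named employee.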
Overall, when considering the inclusion of social engineering in a red teaming exercise, the first step should be to identify the informational outcome the company is seeking. A red teaming provider who seeks to understand that end goal, and designs the exercise accordingly, is ideally placed to conduct mock social engineering ethically and to maximise its rewards.