
Lethal Mind Traps: Cognitive Errors in Cybersecurity
By: Dr. Sriram Birudavolu, CEO, CCoE
INTRODUCTION
Cognitive errors made in the cybersecurity domain can engender risky behaviours that end up compromising security. The perception and evaluation of risk play an important role in all areas of cybersecurity, e.g. security checks during programming, cyber investments, the security culture adopted, prioritization of risks, incident response, and configuration management controls.
As cybersecurity professionals, we're constantly on the lookout for threats to our systems and data. However, some of the most significant threats arise not from external attackers but from our own minds. Cognitive biases and mental shortcuts can lead us into “deadly cyber traps” that compromise our security and put our organizations at risk.
Let us examine a few key cognitive errors:
1. Redlining:
This refers to simply overstepping the set limits of safety parameters (i.e. crossing the “redline”). The temptation can be strong under deadline pressure, when faced with complexity and overload, or during major transitions (such as large digital transformation projects). Examples include dismissing a risk as a “false positive” without digging deeper; ignoring a “yellow” (or even a “red”) flag on a report; explaining away a minor security incident without taking the opportunity to tighten processes; skipping VAPT on a website after an upgrade because of a deadline; or simply taking a chance in the hope that the probabilities will work in one’s favour.
It is essential to set and enforce tripwires that raise alarms, or even hard gates that prevent redlines from being crossed.
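As a rough illustration, the sketch below (in Python) gates a release on a couple of redline checks; the metric names, thresholds, and the idea of treating VAPT age as a gate are illustrative assumptions, not prescriptions:
```python
# Minimal tripwire sketch: refuse to proceed when a security "redline" is crossed.
# Metric names and thresholds are illustrative assumptions.

REDLINES = {
    "open_critical_vulns": 0,    # no unpatched critical vulnerabilities allowed
    "days_since_last_vapt": 90,  # VAPT must be recent for the asset being shipped
}

def check_redlines(metrics: dict) -> list[str]:
    """Return explicit violations instead of silently 'taking a chance'."""
    violations = []
    if metrics.get("open_critical_vulns", 0) > REDLINES["open_critical_vulns"]:
        violations.append("unpatched critical vulnerabilities present")
    if metrics.get("days_since_last_vapt", 10**6) > REDLINES["days_since_last_vapt"]:
        violations.append("VAPT overdue for this release")
    return violations

def gate_release(metrics: dict) -> bool:
    """Hard gate: a deadline never overrides a tripped redline."""
    violations = check_redlines(metrics)
    for v in violations:
        print(f"REDLINE TRIPPED: {v}")  # in practice, raise an alert or ticket
    return not violations

# A release attempted under deadline pressure is blocked, not waved through.
print("release allowed:", gate_release({"open_critical_vulns": 2, "days_since_last_vapt": 200}))
```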
2. The Domino Effect:
Simple incidents can have cascading effects that rapidly escalate into a major disaster for the organization. The domino effect refers to the way in which a single vulnerability can have a ripple effect, leading to a cascade of subsequent vulnerabilities and breaches. Underestimating the impact of a single vulnerability can lead to a failure to prioritize patching and remediation.
For instance, if an endpoint is found to be compromised (or even vulnerable), it is essential to contain it (or patch it) at the earliest, or there could be lateral movement of the malware in the network, or even privilege escalation. “Nip it in the bud” should be the guiding principle, and automation can obviously play a big role here. A physical breach detected in the facility, if not attended to properly, can lead to a major compromise of the entire IT infrastructure, and unusual requests for access (whether internal or external) need to be closely monitored and tightly contained, in both scope and time, to prevent misuse. A robust Incident Response framework must be integrated into the strategy and its implementation.
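A minimal sketch of such automation, with hypothetical isolate and incident hooks standing in for a real EDR or SOAR integration:
```python
# "Nip it in the bud": auto-contain a flagged endpoint before lateral movement
# can occur. The isolate/incident functions are hypothetical stand-ins for
# whatever EDR/SOAR tooling an organization actually runs.

def isolate_endpoint(host: str) -> None:
    print(f"isolating {host} from the network")  # placeholder for an EDR call

def open_incident(host: str, finding: str) -> None:
    print(f"incident opened for {host}: {finding}")  # placeholder for IR workflow

def handle_detection(host: str, severity: str, finding: str) -> None:
    """Contain first, investigate second, to break the domino chain."""
    if severity in ("high", "critical"):
        isolate_endpoint(host)    # stop lateral movement immediately
    open_incident(host, finding)  # every detection enters the IR process

handle_detection("ws-0042", "critical", "credential-dumping tool detected")
```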
3. Situational Blindness and the Availability Trap:
Situational blindness can arise when one follows preset guidelines too closely and finds nothing amiss, yet has actually missed other environmental cues that indicate severe danger.
For example, if the current IoCs (Indicators of Compromise) and IoAs (Indicators of Attack) are not updated constantly to suit one’s domain, organization, and environment, exploits can sail right through, under the organization’s security radar. Another example is failing to consider the risks posed by emerging technologies such as Quantum Computing, AI, and IoT.
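Returning to the first example, even a simple staleness check on intelligence feeds can surface this blind spot; a rough sketch in which the feed names and the seven-day freshness threshold are invented for illustration:
```python
# Flag IoC/IoA feeds that have not been refreshed recently, so detections
# do not quietly go stale. Feed names and the threshold are illustrative.
from datetime import datetime, timedelta, timezone

MAX_AGE = timedelta(days=7)  # assumed freshness requirement

feeds_last_updated = {
    "ransomware-iocs": datetime(2024, 1, 2, tzinfo=timezone.utc),
    "phishing-domains": datetime(2024, 3, 1, tzinfo=timezone.utc),
}

now = datetime.now(timezone.utc)
for feed, updated in feeds_last_updated.items():
    if now - updated > MAX_AGE:
        print(f"STALE FEED: {feed} (last updated {updated.date()}); detections may be blind")
```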
The Availability Trap refers to the tendency to overestimate the likelihood of a threat based on how easily examples come to mind. In cybersecurity, this can lead to an overemphasis on high-profile threats while neglecting more subtle but equally damaging attacks.
Example: A company focuses its security efforts on preventing high-profile ransomware attacks, but neglects to implement robust access controls, leaving it vulnerable to insider threats.
Staffing the cyber team with experienced people, avoiding over-reliance on automation, and following mature processes can help avert these errors. The naive “let’s automate everything and replace manpower” syndrome can create many gaps and problems. AI and automation help in many areas but cannot replace everything that is gained from experience, from the sizing of servers to design-related issues to the tailoring and review of the controls being adopted. Experienced teams armed with strong processes and appropriate tools and automation will afford a 360° view and provide deep insights into security.
4. Double or Nothing:
This refers to the unfortunate phenomenon of doubling down on risky behaviour in the blind pursuit of profits or business benefits, or during cost-cutting and business loss aversion. Erroneous reasoning can lead to cutting (or holding down) investments in cybersecurity measures when evaluating costs and risks, or to a superficial analysis of history such as: “we’ve had no cyber issues so far, and our measures, though modest, seem adequate, so why invest more in cybersecurity now?”
An organization can persist in this thinking for too long while massive risks loom large, unbeknownst to its leadership, until it is suddenly hit by a serious cyber-attack. Groupthink also perpetuates this problem.
A formal, ground-up review of risks, security by design, privacy by design, and the adoption of framework-based implementation (e.g. NIST, Zero Trust Architecture, ISO 27001, Defense-in-Depth) will help avert this risk.
5. Confirmation Bias and Anchoring Trap:
Confirmation bias leads to overconfidence, resulting in the silent piling up of grave risks. An organization can get fixated on certain hypotheses and measures while ignoring others, producing biased self-validation: verification is done using data chosen to fit one’s pet hypotheses. Views and habits harden over time with repetition. Normalization of deviance refers to the way in which insecure practices become accepted as normal over time; this can lead to a culture of complacency and the neglect of security best practices.
In cybersecurity, we often rely on familiar security measures, such as firewalls and antivirus software. However, this can lead to an anchoring bias, where we overestimate the effectiveness of these measures and overlook new and emerging threats.
Example: A company relies heavily on its firewall to block malicious traffic, but neglects to implement regular software updates and patching, leaving it vulnerable to exploits.
To avoid these biases, the security strategy must be constantly upgraded to incorporate new knowledge, intelligence, and hypotheses. Tools, techniques, and procedures must be revisited regularly. Industry frameworks, regulations, and technologies evolve constantly, and appropriate changes must be incorporated. Comprehensive metrics and tools must be used, e.g. BitSight scores, Attack Surface Management tools, etc.
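To make the firewall example above concrete, here is a toy patch-latency metric of the kind such a review might track; the asset inventory and the 30-day SLA are invented:
```python
# Toy metric: patch latency per asset, so reliance on one control (the
# firewall) does not hide decay in another (patching). Data is invented.

assets = [
    {"name": "web-01", "days_since_patch": 12},
    {"name": "db-01",  "days_since_patch": 145},
    {"name": "app-03", "days_since_patch": 41},
]

SLA_DAYS = 30  # assumed patching SLA
overdue = [a for a in assets if a["days_since_patch"] > SLA_DAYS]
print(f"{len(overdue)}/{len(assets)} assets exceed the {SLA_DAYS}-day patch SLA")
for a in sorted(overdue, key=lambda a: -a["days_since_patch"]):
    print(f"  {a['name']}: {a['days_since_patch']} days unpatched")
```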
6. The Sunk-Cost Trap:
This trap is about continuing to invest in ineffective security measures. We tend to keep funding controls that are not working simply because we have already sunk so much time and money into them, throwing good money after bad.
Example: A company continues to invest in an intrusion detection system that's consistently failing to detect threats, simply because they've already spent so much money on it.
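One antidote is to let expected future value, not past spend, drive the decision; a simplified sketch in which every figure is invented:
```python
# Sunk-cost check: compare tools on expected future value alone.
# Past spend deliberately does not appear anywhere. All numbers are invented.

INCIDENTS_PER_YEAR = 20
AVG_LOSS_PER_INCIDENT = 50_000

def net_value(tool: dict) -> float:
    """Expected annual loss avoided minus annual cost; sunk cost is ignored by design."""
    benefit = tool["detection_rate"] * INCIDENTS_PER_YEAR * AVG_LOSS_PER_INCIDENT
    return benefit - tool["annual_cost"]

incumbent = {"name": "legacy IDS", "detection_rate": 0.30, "annual_cost": 200_000}
candidate = {"name": "replacement", "detection_rate": 0.85, "annual_cost": 250_000}

best = max((incumbent, candidate), key=net_value)
print(f"choose {best['name']} (net value {net_value(best):,.0f}/yr)")
```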
7. The Framing Trap: Focusing on the Wrong Security Metrics
We tend to focus on security metrics that are easy to measure, rather than those that are truly important. This can lead to a framing bias, where we optimize for the wrong metrics.
Example: A company focuses on measuring the number of malware detections, rather than the number of successful attacks or the overall risk to the organization.
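A small sketch of the difference: an activity metric (detection count) can look healthy while an outcome metric (the share of attacks that succeeded) deteriorates. The event log is invented:
```python
# An "activity" metric (detections) can look fine even while the "outcome"
# metric (successful attacks) worsens. Event data is invented.

events = [
    {"attack": True,  "detected": True,  "blocked": True},
    {"attack": True,  "detected": True,  "blocked": False},  # got through
    {"attack": True,  "detected": False, "blocked": False},  # got through
    {"attack": False, "detected": True,  "blocked": True},   # false positive
]

detections = sum(e["detected"] for e in events)
attacks = [e for e in events if e["attack"]]
succeeded = sum(1 for e in attacks if not e["blocked"])

print(f"detections (easy to measure): {detections}")
print(f"successful attacks (what matters): {succeeded}/{len(attacks)}")
```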
8. Information Overload and Decision Fatigue:
Faced with a deluge of information and choices, be it filtering false positives in a SOC or choosing an appropriate cybersecurity solution, cognitive overload can lead to poor decisions under fatigue, and these can compromise security outcomes. The volume of information processed and the number of high-pressure decisions must both be cut down to manageable levels, so that the signal-to-noise ratio is maximized. Mitigations for information overload include automation, visualization tools, tailored threat intelligence, and machine learning.
Decision fatigue can be addressed with playbooks and procedures, decision support systems, collaboration and communication, and regular, periodic rest and rotation.
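As a closing illustration, a minimal triage sketch that raises the signal-to-noise ratio by auto-closing known false positives and deduplicating repeats before an analyst decides anything; the alert fields and the benign-rule list are assumptions:
```python
# Minimal SOC triage sketch: auto-close known benign patterns and collapse
# duplicates so analysts spend decisions only on genuine signal.
from collections import OrderedDict

KNOWN_BENIGN = {"scheduled-scan", "backup-agent-beacon"}  # assumed FP signatures

def triage(alerts: list) -> list:
    unique = OrderedDict()
    for a in alerts:
        if a["rule"] in KNOWN_BENIGN:
            continue  # playbook auto-close: no human decision consumed
        key = (a["rule"], a["host"])
        if key not in unique:
            unique[key] = a  # collapse duplicates into a single decision
    return list(unique.values())

raw = [
    {"rule": "scheduled-scan", "host": "srv-1"},
    {"rule": "powershell-encoded", "host": "ws-7"},
    {"rule": "powershell-encoded", "host": "ws-7"},  # duplicate
    {"rule": "backup-agent-beacon", "host": "srv-2"},
]
print(f"{len(raw)} raw alerts -> {len(triage(raw))} for analyst review")
```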
CONCLUSION
Cognitive biases/errors and mental shortcuts can lead us into deadly cyber traps that can severely compromise an organization’s security. By recognizing these biases and errors and taking steps to mitigate them, we can improve our cybersecurity posture and reduce the risk of breaches. It is vital to stay vigilant, challenge assumptions, and prioritize robust security measures to stay ahead of emerging threats.