Organizations are living, breathing synergies, where the totality of the organization’s output exceeds the sum of its individual agents. We call these “complex systems.” Within this environment, achieving security and control over each individual element never equates to security and control over the whole. Complex systems are just too … complex.
Cybersecurity, enterprise risk management planning, internal controls and incident response protocols focus on individual agents in a vacuum. While adept at identifying obvious component risks, these practices aren’t sufficient to address “emergent risks”—those that manifest only as individual agents interact. Those interactions are seldom predictable. As a result, we often hear of natural disasters, frauds and accidents whose aftereffects are as catastrophic as the events themselves because the organizations or communities “never saw it coming.”
Emergent risks don’t exist until some set of interactions triggers them. We call this “cascading risk,” and the related effects “cascading failures.” These aren’t limited to natural and physical phenomena: employees exploit loopholes that system complexity opens in controls to commit fraud, and hackers deliberately exploit the characteristics of complex systems.
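To make “cascading failure” concrete, here is a minimal sketch of the idea. The component names, dependency map and propagation probability are invented for illustration; the point is only that a single failure can ripple through interactions no component-level control would predict:

```python
import random

# Hypothetical dependency map: component -> components that depend on it.
# Names and probabilities are illustrative, not from any real system.
DEPENDENTS = {
    "payment-gateway": ["order-service", "billing"],
    "order-service": ["warehouse", "customer-portal"],
    "billing": ["customer-portal"],
    "warehouse": [],
    "customer-portal": [],
}

def cascade(initial_failure, propagation_prob=0.5, seed=None):
    """Return the set of components that fail after a single trigger event."""
    rng = random.Random(seed)
    failed = {initial_failure}
    frontier = [initial_failure]
    while frontier:
        component = frontier.pop()
        for dependent in DEPENDENTS.get(component, []):
            # Each dependent fails with some probability when its dependency fails.
            if dependent not in failed and rng.random() < propagation_prob:
                failed.add(dependent)
                frontier.append(dependent)
    return failed

# One "low risk" component failure can take down much of the system.
print(cascade("payment-gateway", propagation_prob=1.0))
```

Running many such simulations with varying trigger points and probabilities is one cheap way to surface interaction risks that per-component risk scoring misses.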
Change drives emergent risks. New policies, procedures, innovations, system conversions, new security software, regulatory changes, a move by a competitor, and new employees and vendors are a few examples. Just as the system settles into a status quo, a change is inserted that creates more uncertainty and instability, the effects of which cannot be fully understood without applying complex systems approaches.
By viewing your organization as the complex system it is, you stand a better chance of adjusting your risk appetite in more effective ways. This takes enterprise and cybersecurity risk management beyond controls questionnaires and risk-scoring matrices and forces us to think in terms of nonlinear relationships between people, departments, outsiders, functions and technological systems—how they interact and how a failure in one along with a distraction in another can multiply into an incident. Consider these recommendations:
Risk Assessments That Incorporate a Complex Systems Approach – Characteristics of this approach include:
- expertise in complex systems modeling
- mapping dynamic relationships and visualizing metrics and components
- applying big data and data analytics to uncover risk
- applying information governance principles
- understanding that a control’s life cycle may be measured in months in dynamic organizations
- creative testing and simulations of high-impact events discovered during modeling
Testing – The testing component is critical. Run fire drills of high-impact scenarios to measure and learn from responses to them. Cybersecurity penetration testing is a good example, provided the hacking simulation uses tactics similar to real threats. Many don’t. Get creative with testing possible risk scenarios, and involve employees. The lessons learned from an actual drill far exceed the benefits of describing them on paper, which seldom mirrors reality. Also, scenarios for fraud, safety and ransomware incidents will help instill a culture of diligence better than policies and traditional methods.
Encourage the Minority Report – Emergent threats are created by shifts in policy, procedure and systems, the ripple effects of which are often not fully explored due to cognitive bias: those who introduce and implement the changes often have a vested interest in their success. This creates blind spots around risk. A “minority report” helps. This means assigning a group outside the team responsible for the changes to take the opposite stance on ideas, and giving that group resources such as time and money to find holes in them. The changes may still proceed, but the minority reporting process challenges assumptions and forces discussion of possible risks from different perspectives. Since risk in a complex system can never be fully understood, this adds an extra layer of defense. Conflict should be welcomed because it yields valuable insights.
Hotlines as Force Multipliers – A force multiplier uses existing resources to magnify an individual’s or department’s effectiveness. Risk, safety and ethics hotlines are a powerful force multiplier that uses the eyes and ears of employees, vendors, customers and even competitors and the public to spot fraud, safety risks, ethics and human resources violations and other threats. Because no complex system can be fully described by any individual agent, “crowdsourcing” risk management to all agents in the system greatly enhances the existing risk management and control structure. For more information on hotlines, view this sheet.
Use Data Analytics – The clues and symptoms of risk are embedded within your data. Knowledge of what data you have and how to analyze it is key to spotting potential risks. Some data is hard to identify even though it’s in plain sight—video surveillance, email, internet history logs, human intelligence, social media, documents, computer activity, personnel reviews and freely available third-party “open data” are all data assets that can be used to anticipate risk. Too often this data goes unused as a risk assessment weapon, to the organization’s detriment.
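As a sketch of how even simple analytics can surface risk hiding in plain sight, consider flagging outliers in activity logs. The data below (daily file-download counts per employee) and the z-score threshold are hypothetical, invented purely for illustration:

```python
from statistics import mean, stdev

# Hypothetical daily file-download counts per employee, as might be
# extracted from computer activity logs. Values are invented.
daily_downloads = {
    "alice": [12, 9, 14, 11, 10, 13, 12],
    "bob":   [8, 7, 9, 8, 10, 9, 95],   # sudden spike on the last day
    "carol": [15, 14, 16, 13, 15, 14, 15],
}

def flag_anomalies(series, z_threshold=3.0):
    """Return indices of observations more than z_threshold
    standard deviations away from the series mean."""
    mu, sigma = mean(series), stdev(series)
    if sigma == 0:
        return []
    return [i for i, x in enumerate(series) if abs(x - mu) / sigma > z_threshold]

for employee, counts in daily_downloads.items():
    anomalies = flag_anomalies(counts, z_threshold=2.0)
    if anomalies:
        print(f"{employee}: unusual activity on day(s) {anomalies}")
```

Real deployments would use richer data sources and more robust methods, but the principle is the same: the symptom was already in the data, waiting to be analyzed.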
No one can anticipate every possible risk scenario in a complex system. By embracing that fact and planning for it, however, the organization is more likely to handle the “unknown unknowns” inherent in the complex system that is your organization.
Latest posts by Lanny Morrow
- Your Organization Is a Complex System—Treat It That Way - August 22, 2017