How to recognise an unprofessional Cyber-Risk Advisory or Service


    Professional
    1. They use an absolute scale to predict or measure Cyber-Confidence & Cyber-Risk
       Testimation uses an IT-System of infinite size & complexity as its frame of reference. This ensures that all IT-Systems are measured consistently, objectively & scientifically.
       Q: What is a practical, real-world example of an IT-System of infinite size & complexity?
       A: The Internet
    2. Cyber-Confidence & Cyber-Risk are scientifically defined
       Testimation technology provides numerical predictions & measurements; for example: "Your IT-System is measured to possess 97.37% Penetration-Free Confidence; therefore, your Residual Penetration-Risk is 2.63%. If you wish to reduce the Risk, you will need to execute additional Penetration-Tests." (A sketch of this arithmetic follows this list.)
    3. "People & Process" statements are not applied for emotive leverage to manipulate your decision making
       Testimation technology minimizes the compounding of human perceptual errors by utilizing science as much as possible. It facilitates the standardization of Cyber-Confidence & Cyber-Risk assessment processes, fostering a Continuous Improvement Culture (CIC).
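    Testimation's actual formula is not reproduced here, so the following is only a minimal sketch of the arithmetic implied by the example above: Residual Penetration-Risk is the complement of Penetration-Free Confidence, & (under an assumed, purely illustrative model) confidence grows as more independent Penetration-Tests are executed. The `detection_rate` value is an invented parameter, not a Testimation figure.

```python
def residual_risk(confidence: float) -> float:
    # Residual Penetration-Risk is the complement of Penetration-Free Confidence.
    return 1.0 - confidence

def confidence_after_tests(n_tests: int, detection_rate: float = 0.05) -> float:
    # Assumed illustrative model only: each independent test has a fixed
    # probability of exposing a penetration path, so confidence rises with
    # the number of tests executed.
    return 1.0 - (1.0 - detection_rate) ** n_tests

print(f"{residual_risk(0.9737):.2%}")        # 2.63%, matching the example above
print(f"{confidence_after_tests(50):.2%}")   # executing more tests...
print(f"{confidence_after_tests(100):.2%}")  # ...yields greater Cyber-Confidence
```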

    Unprofessional
    1. They use a relative scale to assign a Risk-Score
       You are assessed subjectively. There is no science to their approach; they tell you what "they feel" based upon previous experience, or upon their own commercial data, which could easily be falsified. Typically, they will state claims which cannot be verified. You will be emotively forced into a "trust me" situation.
    2. They will claim that their relative scale is the "only way" to solve the Risk Assessment problem
       This is completely wrong. People often like to think that if they can't solve a problem themselves, the problem can't be solved. This, of course, is childish & egocentric psychology. The world around us is filled with problems that were solved by other people; just because "you" can't solve a problem doesn't mean that someone else can't solve it.
    3. They'll tell you: "Our people are good, the best!", "Our processes are good, the best!", "We have many years of experience!", "We're the biggest, for good reason!" etc.
       Let's forensically decompose these claims, shall we:
       3.1 "Good" or "the best"
           These are highly subjective statements.
       3.2 "Our people are good, the best!"
           People = Population. Large populations of "anything" obey the mathematical properties of a Normal Distribution. This means that most of their people will possess average ability, not exceptional ability. Hence, claims of "good or best people" are, statistically speaking, probably an exaggeration intended to emotionally manipulate your decision-making process (the first sketch after this list illustrates the point).
       3.3 "Many years of experience!"
           Many years of experience not solving the problem is more likely to be the case. Too much reliance upon experience can stifle innovation. When we convince ourselves that we have "seen it all", we may become complacent & thereby fall behind.
       3.4 "We're the biggest, for good reason!"
           Being "big" is not something to boast about. No company is bigger than a Government, & we are all very well aware of how inefficient Governments can be. As much knowledge exists outside a big corporation as exists within it, so being "big" means that you're paying a higher price due to greater bureaucracy. Employees of large organizations have greater opportunity to "hide" from work, & human nature being what it is, that's what they'll tend to do. A smaller organization will often produce an equivalent result at a lower cost, & more quickly.
    4. Their reporting metrics will be observational, not predictive
       Observational metrics are extremely easy to produce; these are basically reports on your current state, gathered by machines. The machines do most of the work, not the people.
    5. They may attempt to fool you with the concept of Artificial Intelligence (AI)
       AI is the latest catchphrase: machines only measure, & perform functions based upon these measurements; they cannot problem-solve unless the appropriate response has been predefined by a human being. If "what to do next" has been programmed into a machine, it is evidence of access to memory storage & functional execution, not intelligence. Don't be fooled (see the second sketch after this list). The real talent, the real engineering skill, is to measure your Cyber-Confidence with respect to vulnerabilities. Unless they can provide an actual % number associated with your situation, you're getting a lot less from them than you deserve.
    6. They'll not list their actual Test Cases, just Test Scenarios
       Quite often, they will just quote OWASP Test Scenarios as being their Test Cases. This immediately demonstrates that they don't know the difference between a Test Case & a Test Scenario. If they don't know the difference between them, then they do not understand the fundamentals of Quality Assurance (QA); this is a cause for concern, because they perform a QA role. Executing Test Cases against the OWASP Test Scenarios is definitely a good thing, but not specifying the detail of the actual Test Cases leaves your knowledge of the Penetration-Testing Effort somewhat vague. Moreover, by not specifically counting the number of Tests Executed, accurately measuring Cyber-Confidence is impossible. It is a simple fact that more Tests equals greater Cyber-Confidence, & if the number of Tests is not accurately counted, how is it possible to measure Cyber-Confidence? (The third sketch after this list shows the distinction & the counting.)
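    A minimal sketch of the statistical point in 3.2, using only Python's standard library: sample a large, hypothetical "population of ability" from a Normal Distribution & observe how much of it clusters around the average. The mean (100) & standard deviation (15) are arbitrary assumptions for illustration.

```python
import random
import statistics

# Hypothetical population: ability scores drawn from a Normal Distribution.
random.seed(1)
abilities = [random.gauss(100, 15) for _ in range(100_000)]

mu = statistics.mean(abilities)
sigma = statistics.stdev(abilities)

# Fraction of the population within one standard deviation of the average.
within_one_sigma = sum(abs(a - mu) <= sigma for a in abilities) / len(abilities)
print(f"{within_one_sigma:.0%} of the population is of roughly average ability")
# ~68% -- so a large firm claiming that *all* its people are "the best"
# is making a statistically implausible claim.
```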
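    The claim in point 5, that such machines exhibit memory storage & functional execution rather than intelligence, can be sketched in a few lines; the measurement names & responses below are invented purely for illustration.

```python
# A "response engine" of the kind described in point 5: pre-programmed
# lookup plus execution. Names & responses are invented for illustration.
PREDEFINED_RESPONSES = {
    "port_open": "raise firewall alert",
    "weak_cipher": "flag TLS configuration",
}

def respond(measurement: str) -> str:
    # Memory storage (the dict) & functional execution (the lookup):
    # anything the human programmer did not anticipate cannot be handled.
    return PREDEFINED_RESPONSES.get(measurement, "no predefined response")

print(respond("port_open"))      # anticipated -> handled
print(respond("novel_exploit"))  # not anticipated -> the machine is stuck
```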
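    To make the QA distinction in point 6 concrete, here is a small sketch: a Test Scenario (such as an OWASP category) is a broad testing goal, while a Test Case is one concrete, countable execution against that goal. The class names & example cases are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class TestCase:
    # One concrete, countable test execution.
    description: str
    passed: bool = False

@dataclass
class TestScenario:
    # A broad testing goal, e.g. an OWASP category; it is not itself a test.
    name: str
    cases: list[TestCase] = field(default_factory=list)

injection = TestScenario("Injection (OWASP)", [
    TestCase("Single-quote probe on the login form"),
    TestCase("UNION-based extraction via the search parameter"),
    TestCase("Time-based blind injection via the id parameter"),
])

# One Scenario, three countable Test Cases: only the count of executed
# Test Cases can feed a Cyber-Confidence measurement.
print(f"Scenario: {injection.name}; Test Cases executed: {len(injection.cases)}")
```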