System trustworthiness modelling

The other approach that is relevant to 5G-ENSURE involves creating a model of the system, which can then be analysed to detect potential threats and identify potential countermeasures. The analyst using such a model is then able to improve trustworthiness (by specifying countermeasures to reduce risks), or at least to highlight where users or system components may need to trust other parts of the system. This approach is especially useful if the models can capture risks (and trust) in relation to the system components involved in threats, and thus provide insight into how the system architecture and design give rise to those specific risks.

Many methods have been developed to identify and analyse threats in ICT-based systems. [▇▇▇▇▇▇▇▇ 2014] breaks the threat modelling process down into four stages: system modelling, threat identification, threat addressing, and validation. Threat identification is usually the most difficult step, and a range of methodologies have been devised for it. Three broad classes are normally distinguished:

 Asset centric methods: based on analysing the system to identify the assets that contribute to its success, then identifying ways in which those assets (or their contribution) may be compromised.

 Attacker centric methods: based on understanding who might attack the system and what means they might use, and then identifying where the system may be vulnerable to those attacks.

 Software centric methods: based on finding potential vulnerabilities in the software assets in the system, with a view to guiding implementers to avoid introducing them.

Software centric methods are the most amenable to automated analysis. For example, Microsoft’s Secure Development Lifecycle (SDL) framework [▇▇▇▇▇▇ 2009] can be supported by STRIDE [▇▇▇▇▇▇▇▇▇ 2004], a secure software design tool that helps developers identify and address threats from spoofing, tampering, repudiation, information disclosure, denial of service, and elevation of privilege (the six categories are illustrated in the sketch below). The main problem with automated software centric methods is that the vulnerability databases they use are often quite specific, e.g. based on known vulnerabilities in particular operating systems, platforms or application software. Ultimately, the goal is to help programmers avoid making errors, and today the most common approach is still based on raising awareness and providing checklists such as the OWASP Top 10 [OWASP 2013], which are used for manual analysis by software developers, or in tools like STRIDE or [ThreatModeller 2016], which helps developers identify attack paths based on a library of possible threats. Finally, software centric methods are limited to finding and addressing software vulnerabilities (i.e. programming errors) or their potential consequences. They cannot easily identify or address threats involving human factors, such as social engineering or user error, or threats from inappropriate use of (correctly implemented) system functions.

Attacker centric methods are, not surprisingly, much better at identifying threats from or involving humans. However, these approaches are much more difficult to automate, as they depend on expert knowledge of likely attackers and attack methods. It may also be difficult to decide how various attacks relate to the system being analysed, and hence where security measures could be introduced to counter specific threats.
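The six STRIDE categories referred to above can be illustrated with a minimal sketch of how identified threats might be recorded against system components and grouped by category. The component and threat names below are hypothetical examples, not taken from any of the cited tools, and Python is used purely for illustration.

```python
from collections import defaultdict
from enum import Enum


class Stride(Enum):
    """The six STRIDE threat categories."""
    SPOOFING = "Spoofing"
    TAMPERING = "Tampering"
    REPUDIATION = "Repudiation"
    INFORMATION_DISCLOSURE = "Information disclosure"
    DENIAL_OF_SERVICE = "Denial of service"
    ELEVATION_OF_PRIVILEGE = "Elevation of privilege"


# Hypothetical threats recorded against components of a simple system model:
# (component, threat description, STRIDE category).
threats = [
    ("AuthGateway", "Forged subscriber identity accepted", Stride.SPOOFING),
    ("ConfigStore", "Unauthorised change to stored policy", Stride.TAMPERING),
    ("RadioLink", "Signalling flood exhausts capacity", Stride.DENIAL_OF_SERVICE),
    ("AuthGateway", "Session tokens leaked in logs", Stride.INFORMATION_DISCLOSURE),
]

# Group the recorded threats by STRIDE category so each category can be
# reviewed in turn during a design walkthrough.
by_category = defaultdict(list)
for component, description, category in threats:
    by_category[category].append((component, description))

for category in Stride:
    print(f"{category.value}:")
    for component, description in by_category.get(category, []):
        print(f"  - {component}: {description}")
```

In practice such a list is built per data flow or per component during design review; the value of the categorisation is that empty categories prompt the reviewer to ask whether threats of that kind have genuinely been ruled out.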
Some tools do exist, such as SeaMonster [▇▇▇▇▇▇ 2008], and typically use attack trees to help analysts decide how potential system vulnerabilities (which may be software centric) could be used to attack the system. The commercial Nessus tool [Nessus], which can scan a network for potential threats from viruses, malware and hosts communicating with undesirable systems, also falls into this category, as does the MulVAL tool [Xinming 2006], a logic-based enterprise network security analyser that encodes the network topology and discovered vulnerabilities as Datalog statements in order to compute and reason over an attack tree (a minimal sketch of this style of reasoning is given below). Both Nessus and MulVAL are used by the PulSAR enabler developed in 5G-ENSURE.

Asset centric methods are the ‘gold standard’ for risk analysis purposes, because they make no assumptions about the nature of the threats that may need to be addressed. These methods include the standardised approach from [ISO 27005] and (if not limited to information systems) [ISO 31010]. Their main drawback is that they depend on analysis by a security expert with extensive knowledge of the types of threats that could potentially affect the system. Even if that expertise is available, the process (being manual) is usually carried out imperfectly, especially where threats relate to the purpose or function of the system, with which the security expert may be less familiar. Finally, a manual analysis to identify threats and appropriate responses can take a long time, and is unsuited to agile development using DevOps methods on virtualised platforms [Drissi 2013].

However, in the last decade some efforts have been made to use machine understanding to capture information about possible threats and relate this knowledge to the design of a system. [Hogganvik 2006] devised a graphical representation of security threats and risk scenarios, while the Secure Tropos language [Matulevi 2008] also supports modelling of security risks. [▇▇▇▇▇▇ et al 2011] provided a useful review of the early approaches, and concluded that the Security Ontology from Secure Business Austria [Fenz 2009] was the most complete, providing an OWL ontology for modelling system assets, threats and controls based on the German IT Grundschutz specification [IT Grundschutz 2004]. However, this model provides a description rather than a classification of security concepts: it is good for describing security issues in a system, but less useful as a basis for machine reasoning, and as a result it does not provide much assistance (except as a checklist) for threat identification and analysis.

This gap was first addressed by one of the 5G-ENSURE partners in the FP7 SERSCIS project [▇▇▇▇▇▇▇▇ 2013], which devised a model designed to support a machine inference procedure for identifying which classes of threats affect a given system. The core ontology is shown in Figure 6. Superficially it looks similar to the SBA ontology, but it is based entirely on OWL classes and has a simpler structure, so that fewer facts need to be asserted before useful knowledge can be inferred. The ontology supports a machine reasoning procedure that decides which types of threats affect a system based on its composition in terms of asset types. Where a threat affects a pattern of interacting assets, a rule base can be used to determine whether the security mechanisms used to protect those assets are sufficient to block or mitigate the threat (see the second sketch below).
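To make the MulVAL-style analysis mentioned above more concrete, here is a minimal sketch of forward-chaining over Datalog-like facts to a fixpoint. The host names, ports and predicates (connected, vulnerable_service, compromised) are invented for illustration and do not reproduce MulVAL's actual rule set or predicate names.

```python
# Minimal forward-chaining sketch of logic-based attack reasoning: network
# topology and vulnerability facts are combined by a single rule to infer
# which hosts an attacker could compromise, and via which step.

# Facts (hypothetical): reachability between hosts and vulnerable services.
connected = {("internet", "web", 443), ("web", "db", 5432)}
vulnerable_service = {("web", 443), ("db", 5432)}

# Initially the attacker controls only the external network.
compromised = {"internet"}
derivations = {}  # host -> (from_host, port) recording how it was reached

# Apply the rule to a fixpoint:
#   compromised(H2) :- compromised(H1), connected(H1, H2, P), vulnerable_service(H2, P).
changed = True
while changed:
    changed = False
    for (h1, h2, port) in connected:
        if h1 in compromised and (h2, port) in vulnerable_service and h2 not in compromised:
            compromised.add(h2)
            derivations[h2] = (h1, port)
            changed = True

for host, (via, port) in derivations.items():
    print(f"{host} compromised via {via} on port {port}")
```

The recorded derivations are what an attack tree makes explicit: each inferred compromise is justified by the chain of facts and rule applications that produced it.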
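The following sketch illustrates, in highly simplified form, the kind of inference just described for the SERSCIS model: threat classes are expressed as patterns over interacting asset types, and a matched threat is reported as unmitigated unless the affected asset carries a blocking control. The asset types, threat patterns and control names (e.g. MutualTLS) are invented for illustration and are not part of the actual SERSCIS ontology or rule base.

```python
from dataclasses import dataclass, field


@dataclass
class Asset:
    """A system asset with a type and any security controls applied to it."""
    name: str
    asset_type: str
    controls: set = field(default_factory=set)


@dataclass
class ThreatPattern:
    """A threat class defined over a pair of interacting asset types,
    blocked if the target asset carries one of the listed controls."""
    label: str
    source_type: str
    target_type: str
    blocking_controls: set


# Hypothetical system model: assets and the relations between them.
assets = {
    "client": Asset("client", "Host"),
    "service": Asset("service", "Service", {"MutualTLS"}),
    "records": Asset("records", "DataStore"),
}
relations = [("client", "uses", "service"), ("service", "stores", "records")]

# Hypothetical threat pattern library.
patterns = [
    ThreatPattern("Interception of client-service traffic", "Host", "Service", {"MutualTLS"}),
    ThreatPattern("Unauthorised access to stored data", "Service", "DataStore", {"AccessControl"}),
]

# Match each pattern against every related pair of assets; a threat applies
# when the asset types match, and it is blocked only if the target asset has
# at least one of the pattern's blocking controls.
for src_name, _, tgt_name in relations:
    src, tgt = assets[src_name], assets[tgt_name]
    for p in patterns:
        if src.asset_type == p.source_type and tgt.asset_type == p.target_type:
            blocked = bool(tgt.controls & p.blocking_controls)
            status = "blocked" if blocked else "UNMITIGATED"
            print(f"{p.label}: {src.name} -> {tgt.name} [{status}]")
```

In the real approach the asset classes, threat patterns and controls come from the OWL ontology and rule base rather than being hard-coded, which is what allows the inference to be reused across systems composed of the same asset types.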
In FP7 SERSCIS, the ontology was also used to construct a Bayesian belief graph describing the effect of threats on the behaviour of system assets; this graph was then used to diagnose which threat(s) might be the cause of any run-time misbehaviour (a simplified sketch of this diagnostic step is given below).
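As an illustration of this diagnostic use, the sketch below applies Bayes' rule over a toy noisy-OR model with two candidate threats and one observed misbehaviour. The threat names and probabilities are invented, and the structure is far simpler than the Bayesian belief graph actually used in SERSCIS.

```python
from itertools import product

# Hypothetical prior probabilities that each threat is active.
priors = {"DoS attack": 0.05, "Component failure": 0.10}

# Noisy-OR model: each active threat independently causes the observed
# misbehaviour with the given probability (illustrative numbers only).
cause_prob = {"DoS attack": 0.9, "Component failure": 0.6}
leak = 0.01  # probability of misbehaviour with no threat active


def p_misbehaviour(active):
    """P(misbehaviour | set of active threats) under the noisy-OR model."""
    p_none = 1.0 - leak
    for t in active:
        p_none *= 1.0 - cause_prob[t]
    return 1.0 - p_none


threats = list(priors)
posterior = {t: 0.0 for t in threats}
evidence = 0.0

# Enumerate every combination of active/inactive threats, weighting each by
# its prior probability and by how likely it is to produce the misbehaviour.
for states in product([False, True], repeat=len(threats)):
    active = {t for t, on in zip(threats, states) if on}
    p_joint = p_misbehaviour(active)
    for t, on in zip(threats, states):
        p_joint *= priors[t] if on else 1.0 - priors[t]
    evidence += p_joint
    for t in active:
        posterior[t] += p_joint

# Posterior probability that each threat is active, given the misbehaviour.
for t in threats:
    print(f"P({t} | misbehaviour observed) = {posterior[t] / evidence:.3f}")
```

Ranking the candidate threats by their posterior probability gives the operator a prioritised list of likely causes to investigate when a system asset starts misbehaving.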
