In previous posts we discussed two of the most critical phases in “The Lifecycle of a Threat Pattern”: analysis and design. In the analysis phase the objective is to fully understand the asset in scope by digging deeper into its context and formulating the set of residual risks to which the asset might be exposed. In Engineering the Design of a Threat Pattern, Davide Veneziano illustrated the importance of a design phase to effectively identify the conceptual models and prototypes that will eventually be implemented.
The concepts articulated in the analysis and design phases come to life through the deployment of specific controls in the security ecosystem. However, long gone are the days when a single security solution could address the majority of the security challenges organizations face. Today the security ecosystem is characterized by a multitude of distributed technologies that collectively form the foundation of the corporate security strategy.
Security monitoring capabilities exist across a number of different platforms and, whether the final objective is to detect or to prevent a specific risk from materializing, careful consideration must be given to the choice of the target security platforms.
“Layered detection” is one of the primary principles to take into account: a main element of the threat pattern is configured on the asset itself (or on the monitoring platform closest to it), while additional capabilities are implemented in other threat detection technologies, integrating and extending detection. This enables monitoring for attacks at different levels, reducing the dwell time should a threat actor succeed in establishing a foothold.
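A minimal sketch of the layered-detection idea, in Python. The layer names and rules below are purely illustrative, not a real product configuration: each layer contributes its own detection logic, so an event missed by the rule closest to the asset can still be caught elsewhere.

```python
# Hypothetical detection layers, ordered from closest to the asset outward.
# Each entry pairs a layer name with a simple predicate over a parsed event.
LAYERS = [
    ("web_server",    lambda e: e.get("status") == 500 and "sqlmap" in e.get("agent", "")),
    ("waf",           lambda e: "union select" in e.get("payload", "").lower()),
    ("net_analytics", lambda e: e.get("bytes_out", 0) > 10_000_000),
]

def detect(event):
    """Return the names of every layer whose rule matches the event."""
    return [name for name, rule in LAYERS if rule(event)]

# Both the web-server and WAF layers flag this event; even if one rule
# were missing or bypassed, the other layer would still raise it.
hits = detect({"status": 500, "agent": "sqlmap/1.7",
               "payload": "id=1 UNION SELECT ...", "bytes_out": 0})
```

The earliest matching layer drives the alert, while the additional matches corroborate it, which is what shortens the window a threat actor has to operate unnoticed.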
Let’s walk through a scenario where a web server exposes an authentication system to its users. The different elements of the threat pattern are applied across the tiers of the service architecture, followed by the deployment of additional controls in the security platforms responsible for monitoring the data being processed, such as the web application firewall, database security monitoring and network analytics platforms. In this case, misuse by a threat actor creates artifacts and traces in all of these technologies in the form of HTTP server-side error messages, SQL statements issued by the backend, unexpected business logic behaviors, requests to admin interfaces, calls to system functions for privilege escalation, file uploads, outbound traffic spikes and so on. All of these can be carefully monitored to implement the logic of the threat pattern.
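One way to exploit those scattered artifacts is to correlate them: a client that leaves traces in several independent technologies is far more suspicious than one that trips a single rule. The sketch below assumes normalized event dictionaries with hypothetical `src_ip` and `source` fields; real field names depend on the platforms involved.

```python
from collections import defaultdict

def correlate(events, min_sources=2):
    """Group artifacts by client IP and flag IPs seen across several
    technologies (e.g. WAF, database monitoring, network analytics)."""
    by_ip = defaultdict(set)
    for e in events:
        by_ip[e["src_ip"]].add(e["source"])
    return {ip: sources for ip, sources in by_ip.items()
            if len(sources) >= min_sources}

events = [
    {"src_ip": "203.0.113.7",  "source": "waf",        "detail": "HTTP 500 burst"},
    {"src_ip": "203.0.113.7",  "source": "db_monitor", "detail": "unexpected SQL from app tier"},
    {"src_ip": "198.51.100.2", "source": "waf",        "detail": "single 404"},
]
suspects = correlate(events)
# only 203.0.113.7 produced artifacts in more than one technology
```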
Another key aspect to consider is the data-analysis techniques leveraged in implementing the pattern. Each and every threat pattern has unique objectives; there is obviously no single technique that fits all. The data collection and analysis phases depend heavily on the inherent data characteristics, the security platform being used and the scenario to be detected, investigated and reported. The data-analysis technique also depends on the type of data collected. For example, in some scenarios the collection must be comprehensive enough to cover not only the logs generated by all the components involved, but also the network traffic, to augment visibility and enhance detection. All of this data needs to be married with internal contextual information, such as the residual risks, and with external threat intelligence.
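The "marrying" step can be pictured as a simple enrichment join. In this sketch, `residual_risks` stands in for the risk ratings produced during the analysis phase and `intel_feed` for an external indicator list; both structures, and the field names, are assumptions for illustration.

```python
def enrich(event, residual_risks, intel_feed):
    """Marry a raw event with internal context and external threat intel.

    residual_risks: hypothetical mapping of asset name -> risk rating
                    coming out of the analysis phase.
    intel_feed:     hypothetical set of known-bad indicators (here, IPs).
    """
    enriched = dict(event)  # keep the original event untouched
    enriched["asset_risk"] = residual_risks.get(event["asset"], "unknown")
    enriched["known_bad_ip"] = event["src_ip"] in intel_feed
    return enriched

enriched = enrich(
    {"asset": "auth-portal", "src_ip": "203.0.113.7"},
    residual_risks={"auth-portal": "high"},
    intel_feed={"203.0.113.7"},
)
# the event now carries both the internal risk rating and the intel verdict
```

With the context attached, downstream logic can prioritize an otherwise ordinary event because it touches a high-risk asset or a known-bad source.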
The data-analysis techniques need to be implemented with both in-depth and in-breadth approaches in mind to actually detect the wide range of threats to be monitored, looking for anomalies, unknown threats and suspicious behaviors. Ultimately, the choice of the analysis technique is mainly driven by the data sources, and the inherent value of the collected data is only realized through the technique applied to it.
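As one concrete, deliberately simple example of an anomaly-oriented technique, a metric such as outbound traffic volume can be compared against its historical baseline. This z-score test is a sketch, not a recommendation of a specific algorithm; production anomaly detection would account for seasonality and drift.

```python
from statistics import mean, pstdev

def is_anomalous(history, value, threshold=3.0):
    """Flag `value` if it deviates more than `threshold` standard
    deviations from the historical baseline."""
    mu, sigma = mean(history), pstdev(history)
    if sigma == 0:
        # flat baseline: any deviation at all is anomalous
        return value != mu
    return abs(value - mu) / sigma > threshold

# e.g. requests per minute, or MB of outbound traffic per interval
baseline = [110, 95, 102, 98, 105, 100, 99]
is_anomalous(baseline, 104)  # within normal variation: False
is_anomalous(baseline, 900)  # a sudden outbound peak: True
```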
Additional principles and best practices that security engineers should consider in order to be effective and keep up with the rapidly changing landscape include:
- Keep implementation of the threat pattern simple (KISS principle);
- “Don’t Repeat Yourself” when it comes to writing code in multi-tier architectures (DRY principle);
- “You aren’t going to need it” so follow the results of the analysis and design phase to avoid the introduction of unneeded functions (YAGNI principle);
- Make it easy to maintain and extend (SOLID principles).

At the same time, the continuing evolution of business needs and the pressure on the IT ecosystem contribute to making the ever-changing landscape even more challenging.
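To make the DRY principle concrete in a multi-tier threat pattern, the indicator logic can live in one shared function reused by every platform-specific rule, instead of being duplicated per tier. The token list and rule functions below are hypothetical and intentionally crude.

```python
# Shared indicator logic: defined once, reused by every tier's rule (DRY).
SQLI_TOKENS = ("union select", "or 1=1", "sleep(")

def looks_like_sqli(text):
    """Single predicate shared across tiers; update it in one place."""
    lowered = text.lower()
    return any(token in lowered for token in SQLI_TOKENS)

def waf_rule(http_request):
    # WAF tier inspects the query string of the incoming request
    return looks_like_sqli(http_request["query_string"])

def db_rule(sql_statement):
    # database-monitoring tier inspects statements issued by the app
    return looks_like_sqli(sql_statement)

waf_rule({"query_string": "id=1 UNION SELECT password FROM users"})  # True
db_rule("SELECT * FROM users WHERE id = 42")                         # False
```

When a new indicator is added to `SQLI_TOKENS`, every tier picks it up at once, which also keeps the implementation simple (KISS) and easy to extend.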
Threat actors need to be right only once, while defenders must be right every time. Even if all attacks share some common elements, there is no single, consistent attack methodology used by all threat actors. They are elusive, adjusting their techniques and workflows to circumvent defenses and effectively compromise the target.
The principles and practices outlined above can empower the organization to implement effective threat patterns that remain current and better aligned with business requirements.