Measuring attack paths in web applications

Appsec / October 29, 2022 • 7 min read

Tags: Code Review

Recently, after our penetration test against their web application, a customer asked us what percentage of possible attack paths we had covered. It was a difficult question to answer because, a) the customer wanted us to focus on SQL injection and XSS (long story why) and b) it was a legacy application from 2003 containing a lot of code. The short answer I gave was that since the test was focused on SQLi and XSS, some attack paths were naturally not considered. The customer understood this and accepted it.

In the days after the test, I started to think more about measuring attack paths. I understand that a statement such as "the test covered 45% of possible attack paths within the codebase" is valuable to a manager or C-level executive. However, I believe such a statement hides too much nuance and can actually have a negative effect on any decisions derived from it. That is the purpose of this article: to get into the details and unravel the nuance.

Throughout this article, I will refer to the measurement of attack paths as attack coverage.

Defining an attack path

In the context of the customer I previously described, the application being tested was a legacy ASP.NET application, most likely containing over 100k lines of code in total. Before diving into attack coverage, let’s define an attack path. The way I see it, an attack path represents a context where a vulnerability of some sort can be exploited to perform some action, resulting in some consequence. The vulnerability and the consequence may have high or low severity and high or low impact, depending on the context of the vulnerability. With this definition in mind, how do we determine our attack coverage? First we have to, once again, define where and how to look for vulnerabilities. This is where experience and expertise come into play, but for the sake of simplicity, let’s assume that vulnerabilities (can) occur where input from users is consumed. Keep in mind, though, that vulnerabilities may not necessarily be present in the first function call; they can reveal themselves several function calls later.

Consider the following request example:
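A plain-text sketch of such a request (the endpoint comes from the discussion below; the headers and body fields are invented for illustration):

```http
PUT /api/user HTTP/1.1
Host: app.example.com
Content-Type: application/json

{"id": 1337, "email": "user@example.com", "role": "user"}
```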


We could say that the endpoint /api/user is an attack path, since it accepts user input. However, it is technically not the endpoint itself that is the attack path, it is the HTTP verbs that it accepts. The example is a PUT request, used for modifying existing data, and there are a few vulnerability classes that can be investigated against it. But we can also examine DELETE requests, which will delete users. POST requests are used for creating users: can we overwrite existing users? Can we create a user without verifying our email? Finally, GET requests can be used to retrieve users: how much data can we read from a user?

It is clear that for every endpoint, there exist sub-attack paths (HTTP verbs) with various exploitation possibilities, each resulting in unique consequences.

Why enumerating attack paths is not simple

Enumerating attack paths can be done statically, by counting the number of supported HTTP verbs for each endpoint that accepts user input. This method assumes a whitebox approach, which may not always be available. If it is not, enumeration has to be performed blackbox, with zero insight into the codebase. In that case, attack paths can be enumerated dynamically, by crawling the application to determine all possible areas that accept user input. The downside of this approach is that some attack paths can be missed and therefore not included in the attack coverage.
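As a minimal sketch of the static variant: the naive count is simply one attack path per (endpoint, verb) pair. The route table below is invented for illustration.

```python
# Hypothetical route table, as might be extracted from a codebase review.
# Endpoints and verbs are invented for illustration.
ROUTES = {
    "/api/user":    ["GET", "POST", "PUT", "DELETE"],
    "/api/login":   ["POST"],
    "/api/reports": ["GET", "POST"],
}

def count_attack_paths(routes):
    """Count one attack path per (endpoint, verb) pair.

    This is the naive static count described above: it treats every
    supported verb on a user-input endpoint as a distinct path, and it
    knows nothing about chained or order-dependent paths.
    """
    return sum(len(verbs) for verbs in routes.values())

print(count_attack_paths(ROUTES))  # 7 (endpoint, verb) pairs
```

The limitation is visible in the docstring: this number is a floor, not a ceiling, for the reasons discussed below.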

However, simply stating that endpoints consuming user input are where application vulnerabilities live is wrong, because vulnerabilities do not have to be explicit. SQL injections, for example, are (sometimes) very clear and unequivocal; other issues, such as logic errors, may not be.

Without going into a detailed explanation of what logic errors are, let’s assume for the sake of argument that a logic error is not an application vulnerability, but rather an implementation based on a flawed or incomplete design.

An application’s endpoints can each be perfectly secure, while the application as a whole is entirely broken when seen from a design perspective. The code may be highly secure, but since its implementation is based on a faulty design, logic errors can be exploited.
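As an invented illustration of such a logic error: the function below validates its input correctly and contains no injection flaw, yet the design never records that a discount code has been spent, so it can be applied repeatedly.

```python
# Invented checkout sketch: the input validation is fine, but the design
# never tracks whether the discount was already used.

def apply_discount(order, code):
    """Validate the code and apply a 10% discount to the order total."""
    if code != "SAVE10":          # assume this is the only valid code
        raise ValueError("invalid discount code")
    order["total"] = round(order["total"] * 0.9, 2)
    return order

order = {"total": 100.0}
apply_discount(order, "SAVE10")
apply_discount(order, "SAVE10")   # second call should fail, but doesn't
print(order["total"])             # 81.0 -- a logic error, not an injection
```

No scanner looking for injection sinks would flag this; only an understanding of the intended design reveals the flaw.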

Therefore, it is hard to reliably enumerate endpoints statically and say we have X attack paths, because there exists a number of unknown attack paths that are not easily enumerated. For example, consider two low-severity vulnerabilities that are insignificant by themselves but, if combined, suddenly form a highly significant vulnerability. This type of attack path is often hidden and not obvious, depending on how deep it is buried. It does not even have to be a combination of vulnerable endpoints that leads to a severe vulnerability; it can also be a set of requests executed in a specific order.

Testing methodology

In addition to the enumeration problems described in the previous section, testing methodology is another issue. If 50 REST API endpoints have been discovered and the pursuit of vulnerabilities is based on the OWASP Top 10, some attack paths will almost certainly be missed. But let’s say, for the sake of simplicity, that these 50 endpoints only contain vulnerabilities described in the OWASP Top 10: how can each vulnerability be discovered? Simply spraying "><script>alert(1)</script> everywhere may miss several contexts that would otherwise execute arbitrary JavaScript. Using a polyglot XSS payload might increase the chances, but it is not bulletproof. In a whitebox test, however, enumerating all outputs where user input is rendered is a very good way to determine whether XSS vulnerabilities are present. Some template languages provide safe and unsafe functions for displaying output. By evaluating whether the unsafe functions are used within the codebase, it is possible to say with high confidence that XSS vulnerabilities do not exist. This depends on several factors, of course, but holds in general.
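A minimal sketch of that whitebox evaluation, assuming a Jinja2-style template language where the `| safe` filter and `Markup()` are the unsafe output constructs (the patterns would differ per template engine):

```python
import re

# Markers for unsafe output sinks, assuming Jinja2-style templates.
UNSAFE_PATTERNS = [r"\|\s*safe\b", r"\bMarkup\("]

def find_unsafe_sinks(source):
    """Return the line numbers where an unsafe output construct appears."""
    hits = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        if any(re.search(p, line) for p in UNSAFE_PATTERNS):
            hits.append(lineno)
    return hits

template = """<p>{{ username }}</p>
<div>{{ bio | safe }}</div>
<span>{{ comment }}</span>"""
print(find_unsafe_sinks(template))  # [2]
```

An empty result across the codebase is what supports the "XSS likely does not exist here" claim; any hit is a sink that needs manual review.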

The same process can be applied when looking for SQL injections. If all SQL queries are performed with prepared statements, it can be said with a high degree of certainty that the codebase does not contain SQL injections (at this point in time). In a blackbox approach, nothing can be said with certainty; uncovering vulnerabilities requires more time and effort, and thereby a higher degree of expertise from the tester.
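A minimal contrast between the two query styles, using an in-memory SQLite database with an invented schema:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
conn.execute("INSERT INTO users VALUES ('alice', 0), ('admin', 1)")

payload = "' OR '1'='1"

# Vulnerable: attacker-controlled input is spliced into the SQL string,
# turning the WHERE clause into "name = '' OR '1'='1'".
vulnerable = conn.execute(
    "SELECT name FROM users WHERE name = '" + payload + "'"
).fetchall()

# Prepared statement: the driver binds the value, so the quote
# characters have no SQL meaning.
safe = conn.execute(
    "SELECT name FROM users WHERE name = ?", (payload,)
).fetchall()

print(vulnerable)  # both rows leak
print(safe)        # [] -- the literal payload matches no user
```

Grepping a codebase for string-built queries versus `?` placeholders is exactly the kind of check that lets a whitebox tester make the "no SQLi at this point in time" claim.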

This leads me to believe that in order to reliably determine attack coverage, a whitebox approach is required. A whitebox approach is not guaranteed to provide 100% attack coverage, but it will provide the information needed to produce a more accurate figure.


To summarize, it is hard to reliably enumerate endpoints statically and say we have X attack paths. In addition, any identified attack path is a snapshot in time and may change in the future, both in severity and in the type of vulnerability needed to reach it. New attack paths may also be introduced at any time.

Even though attack coverage is difficult to determine and contains a lot of nuance, it is still possible to say that we covered XX% of endpoints and found Y vulnerabilities. The issue with that statement, as I have described in this article, is that it does not convey how the endpoints were enumerated and what types of tests were performed. I would argue that this is a general problem in the cyber security space, which leads us to this famous quote:

Reports that say that something hasn’t happened are always interesting to me, because as we know, there are known knowns; there are things we know we know. We also know there are known unknowns; that is to say we know there are some things we do not know. But there are also unknown unknowns—the ones we don’t know we don’t know. And if one looks throughout the history of our country and other free countries, it is the latter category that tends to be the difficult ones.

Perhaps a strange quote to end on, but let me elaborate. Enumerating attack paths and identifying vulnerabilities can in some cases be automated and statically verified; other types of vulnerabilities require manual inspection. Both methods require a level of understanding of the particular context, and that level of understanding is hard to quantify. But I would argue that at any level of expertise in a given area or context, there exists a knowledge gap of unknown unknowns: vulnerabilities that we don’t know that we don’t know.

Future of AppSec

We are seeing more and more improvements in the machine learning field. I hope that one day, ML tools can begin to understand context when “reading” code: understanding what goes in and what goes out, and how that connects to the bigger picture. Hopefully, this can be combined with static analysis tools such as Semgrep to reliably identify vulnerabilities. The future of application security will not be dynamic or static analysis; it will be context-driven analysis. You heard it here first, *drops the mic*.