If you ask most application security consultants how to prevent username enumeration, the answer will almost always be: “Return the same information, regardless of whether or not the user account exists.” This is a great start, but it isn’t a guaranteed solution.
Consider the following authentication logic. It’s a contrived example, but it illustrates the point:
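The original snippet is not reproduced here, so the following is a minimal Python sketch of the logic described in the next few paragraphs; the user store, `compute()`, and the message constant are all illustrative names, not the original code.

```python
# Illustrative sketch of the vulnerable logic; USERS, compute(), and the
# message constant are assumed names, not the original code.

NOT_EXISTS = "Invalid username or password."
USERS = {"alice": "s3cret"}  # toy in-memory user store

def compute():
    # Stand-in for the expensive no-op loop (the original iterates
    # 100 million times; scaled down here so the sketch runs quickly).
    total = 0
    for _ in range(1_000_000):
        total += 1
    return total

def login(username, password):
    if username not in USERS:
        return NOT_EXISTS      # fast path: unknown user, compute() never runs
    compute()                  # slow path: only reached for valid usernames
    if USERS[username] == password:
        return "Welcome!"
    return NOT_EXISTS          # identical message on a wrong password
```

Note that both failure branches return the exact same string; only the elapsed time differs.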
We first check whether the username exists; if not, we render an error message stored in a constant. By storing the error message in a constant, we ensure that the returned message is always identical, as recommended.
If the username exists, we need to compare the passwords to ensure the user has provided the correct one. We do this by calling the compute function, which is designed to iterate 100 million times, doing exactly nothing. Why? Who knows; developers do weird things all the time.
After executing compute, we return the NOT_EXISTS error message. Doing so ensures that the HTTP response is identical for authentication attempts with valid and invalid usernames.
In this case, we quickly notice that the HTTP request takes substantially longer to execute when we provide a valid username. And there we have it: username enumeration!
Essentially, an attacker can leverage the time differential to determine (in this case, with relative accuracy) whether or not a username exists. Similar attacks are used for exploiting blind SQL injections.
In our example, sending a request with an invalid username resulted in a 1,006ms HTTP response time, whereas sending a request with a valid username resulted in a 4,515ms response time—a noticeable difference.
In this example we’re operating in a local environment, which is relatively stable. Anyone who’s ever used the internet can attest to the fact that most websites are not quite this stable. In fact, there are many more variables involved than just the target website. Response times can easily vary upwards of a second without users noticing.
So the question becomes, how do we notice a time difference in an unstable environment? Well, let’s make several requests for a given username and take the average response time. By taking the average we can attempt to normalize the data, weeding out the possible outliers. In doing so, however, we increase the number of requests that must be sent and, therefore, the overall run time.
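The averaging step can be sketched as a small helper that times a request function repeatedly and returns the mean. `send_request` here is a hypothetical stand-in for an HTTP call to the login endpoint; the function and parameter names are my own, not the PoC’s.

```python
import time

def average_response_time(send_request, samples=5):
    """Time `send_request` several times and return the mean, in ms.

    `send_request` is a stand-in for an HTTP request to the login
    endpoint; averaging over `samples` calls smooths out outliers
    caused by network jitter.
    """
    timings = []
    for _ in range(samples):
        start = time.perf_counter()
        send_request()
        timings.append((time.perf_counter() - start) * 1000.0)
    return sum(timings) / len(timings)
```

More samples give a more stable average, but each extra sample is another request against the target, so run time grows linearly.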
I have created a basic proof of concept script to aid in this process. The PoC (available here) attempts to determine a site’s exploitability and then attempts to exploit the site using a dictionary file of email addresses.
The PoC leverages a user-defined margin when determining the existence of a username. For example, if the web application typically responds in 300–350ms, we have a potential difference of 50ms and an average of 325ms. Setting the margin to 50 will flag any response with a response time of 375ms or greater as a potential username.
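The margin check itself reduces to a simple threshold test. The sketch below is my own illustration of that logic, using the numbers from the example above; it is not code from the PoC.

```python
def flag_candidates(baseline_ms, margin_ms, observed):
    """Flag usernames whose average response time exceeds baseline + margin.

    `observed` maps username -> average response time in ms. With the
    example values from the text (325ms baseline, 50ms margin), anything
    at or above 375ms is flagged as a potential valid username.
    """
    threshold = baseline_ms + margin_ms
    return [user for user, avg in observed.items() if avg >= threshold]
```

A margin that is too small floods the results with false positives from ordinary jitter; one that is too large misses real timing differences.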
Obviously this type of attack will not always provide accurate results. The success of this strategy depends highly on the stability of the server and the internet connection between the hosts.
For some more information on username enumeration and the PoC, check out my SecCasts video:
Username enumeration is a very prevalent vulnerability, and it is not always remediated by simply ensuring that the application responds the same in situations where the username exists and does not exist.
In this blog post, we used an extreme example to demonstrate the feasibility of a time-based attack. In the real world, it is unlikely that we will run into such an obvious situation. Even so, thanks to high-cost password hashing algorithms like bcrypt and scrypt, such an attack remains plausible (though with smaller response time margins than 50ms). In improper implementations of these algorithms, it is still quite possible to exploit the time differential to discover valid usernames.
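A proper implementation performs the expensive hash on every attempt, even when the username does not exist, so both code paths cost roughly the same. The sketch below illustrates that idea using the standard library’s PBKDF2 as a stand-in for bcrypt/scrypt; all names here are illustrative, not from any particular framework.

```python
import hashlib
import hmac
import os

# Mitigation sketch: hash against a dummy credential for unknown users so
# the expensive work runs on every attempt, valid username or not.
# PBKDF2 stands in for bcrypt/scrypt; names are illustrative.

_DUMMY_SALT = os.urandom(16)
_DUMMY_HASH = hashlib.pbkdf2_hmac("sha256", b"dummy", _DUMMY_SALT, 100_000)

USERS = {}  # username -> (salt, pbkdf2 hash); toy in-memory store

def verify(username, password):
    # Unknown users get the dummy salt/hash, so the PBKDF2 call below
    # always executes and both paths take comparable time.
    salt, stored = USERS.get(username, (_DUMMY_SALT, _DUMMY_HASH))
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    # Constant-time comparison; unknown users always fail because the
    # dummy hash was derived from a random salt, never from their input.
    return username in USERS and hmac.compare_digest(candidate, stored)
```

The key property is that the per-attempt cost no longer depends on whether the username exists, which is exactly the signal the timing attack relies on.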