09 Jul, 2015

Intro to BurpSuite, Part VI: Burp Sequencer

by Ken Toler

Welcome to the next edition of the Intro to BurpSuite series. This time around I wanted to draw attention to one of the more advanced features of the BurpSuite toolset: Burp’s built-in Sequencer. The Sequencer tool has a lot to offer, but it is often overlooked and seen as a complex instrument to be used only by the most intelligent security engineers. If you’ve been following along in the series and have a few application assessments under your belt, this is a good addition to your mental toolkit and will expand your capabilities as a security analyst or penetration tester.

First, let’s talk about what Sequencer actually does

In the simplest sense and as noted by PortSwigger’s website, it is a tool for analyzing the degree of randomness in security-critical tokens. If you’ve done any work in the field, you’ve probably come across findings from some of the more popular static analysis tools that note “insecure randomness” as a vulnerability. Burp Sequencer can help you understand this vulnerability and the consequences of an improper implementation. Hopefully, after reading this article you’ll have a better understanding of both.

One of the most popular ways random tokens are used is in session cookies. These are values that identify a user and their current session for the purposes of authorization in a web application.
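
For instance, a server typically hands the browser its session token in a Set-Cookie header along these lines (an illustrative value, not one generated by the demo app):

Set-Cookie: session_id=9f3b2c8a71d44e0fa6c2; Path=/; HttpOnly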

In an effort to minimize confusion, I’ve put together a small demonstration application that shows three ways simple session tokens can be generated. With the app, we can fire Burp off with abandon and see just how it analyzes the tokens. This post will focus on the application’s Weak tokenizer.

Setting up the Demo Application

Application setup is relatively easy, but you will need to be somewhat familiar with Ruby and Ruby on Rails and have a basic understanding of Git. You can find the application here:

https://github.com/relotnek/tokenizer

To get started, clone the repository:

git clone https://github.com/relotnek/tokenizer.git

Navigate to the root directory of the application and run the following commands:

bundle install
rake db:setup
rails server

The application will now be available on your local machine at:

http://localhost:3000

Sometimes, developers will look for an easy way to include some basic information in a session token, appending a random value to that information in order to track important attributes of a user that can be reused later. The random portion of the token also ensures that an attacker can’t simply guess what the session token will be and bypass the authentication process. In this example, we’ll use Burp to identify this method of token generation and determine how you can use it to your advantage during a pentest.
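
To make that pattern concrete, here’s a minimal Ruby sketch of the kind of token generation we’re about to hunt for. This illustrates the general approach rather than the demo app’s actual code, and the helper name weak_token is made up:

require 'base64'

# A short random prefix followed by Base64-encoded user attributes.
def weak_token(username, password)
  random_part = Array.new(8) { ('a'..'z').to_a.sample }.join # 8 random lowercase letters
  user_part   = Base64.strict_encode64(username + password)  # recoverable user info
  random_part + user_part
end

weak_token('user123', 'password')
# => e.g. "qmzkrbwtdXNlcjEyM3Bhc3N3b3Jk"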

First, let’s just take a look at logging into the application.

Enter any username and password and log in.

You can see that a token is generated on the next page.

If we do this repeatedly, we can see that our token changes each time.

You can probably see the flaw here, but if not, we can use the Burp Sequencer to help us.

Configure Sequencer

So with the app’s functionality in mind, let’s capture the request using the proxy and examine the parameters that are sent to the application.

You will notice that there is a username and a password field submitted by the form. If you’d like to follow along exactly, note that I am using the username user123 and the password password.

Right-click on the request window and select Send to Sequencer.

On the Sequencer screen, you will see that our request is sitting in the table at the top under the heading Select Live Capture Request.

Since we only have one request in the queue, it’s a pretty easy game from here.

Take a look at the next section, Token Location within Response.

This is where we tell the Sequencer tool where to look for the token that is generated by the application. In a normal application, the tool will attempt to identify cookies and other tokens it sees in the response. Here, it selects the authenticity token generated by the application, but that isn’t the one we’re interested in. You can also tell Sequencer to look in custom locations, and since our token is in the body of the HTML, that’s where we’re going to look.
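
For reference, the token shows up in the page body in markup along these lines (the exact HTML will vary; this is just an illustration):

<p>Your token is: qmzkrbwtdXNlcjEyM3Bhc3N3b3Jk</p>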

Click the Custom Location radio button and select Configure.

You will notice the HTML of the response is shown in the bottom portion of the window. Go ahead and scroll down until you find the token in the body and highlight it with your mouse cursor.

Burp Sequencer makes some smart expression decisions; you will see that the start and end of your token are defined automatically. Now we’re ready to start grabbing tokens.

Click OK at the bottom of the window. Let’s tweak Sequencer’s settings so we can capture as many tokens as possible.

Under the Threads configuration option, make sure that it’s bumped up to 20.

This is an appropriate number for our demo environment: it’s local to the machine, and even if we do break something, we have the peace of mind that the application isn’t important. In a live pentest, you may want to consider the “low-and-slow” approach, depending on the circumstances of the application.

Then click Start Live Capture.

On the next screen, check the Auto Analyze box so that Sequencer analyzes the tokens in pseudo-real-time and you can watch the changes happen.

This next part may take a while, so feel free to grab your beverage of choice until you have about 7000 tokens.

About 10 Minutes in…

Now that we have a suitable number of tokens, let’s take a look at what Sequencer analyzes for us.

You can click Stop once you have enough tokens for all of the available analysis options. We’re going to take a cursory look at each one.

On the first page, we’re greeted with a nice summary of overall entropy and significance levels which, if you’re really into statistics, might make sense to you (or it might not). Don’t worry too much about this at the moment; if you want to dig deeper into entropy and significance levels, PortSwigger has a great write-up on these concepts on its website.

What we really want to look at in this case are some of the ah-ha triggers in the token. These manifest themselves in the character-level analysis, so let’s head over to that tab.

Character Analysis

Whoa! We see red, so that’s bad, right?

Well, yes, it is, and this is the Weak tokenizer, so what can we glean from this information?

We see that the first 8 characters are pretty random and they’re green, so Sequencer’s magic sauce says that’s fine. But those last 20 characters are pretty low on the bar graph, so let’s take a look at our tokens.

You can click Save tokens at the top of the window to save all of the tokens you’ve captured to a file. This way, you can take what you see in the graph and apply it to the actual data.

Looking at the actual tokens file you’ve saved, you can see pretty clearly that those last 20 characters are exactly the same.
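
You don’t have to eyeball this, either. A few lines of Ruby against the saved file make the point (assuming one token per line and that you saved the file as tokens.txt):

tokens   = File.readlines('tokens.txt', chomp: true)
suffixes = tokens.map { |t| t[-20..-1] }.uniq
puts suffixes.length # => 1, i.e. the last 20 characters never change
puts suffixes.first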

Now this is a pretty rudimentary example, but can you figure out what they mean?

Try another username and password and see if anything changes.

If you take those last 20 characters and throw them into a Base64 decoder (Burp’s Decoder tab works nicely), you will see that we’ve got the username and password.
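
You can do the same from Ruby, using the illustrative suffix from earlier (your captured value will differ):

require 'base64'

Base64.decode64('dXNlcjEyM3Bhc3N3b3Jk')
# => "user123password"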

Okay, so what? We have the username and password, so we can log in, but let’s suppose it wasn’t that obvious. What else can we gather from this character analysis?

Character Set

Take a closer look at the character count on the Character Set tab and you’ll see something a little interesting. The left-hand columns are all exactly the same size, and the y-axis of the bar graph tells us that the maximum value here is 26. There are 26 letters in the alphabet, and judging from our saved tokens file, it doesn’t look like there are any numbers or symbols. So we can safely guess that our random value is being generated from the lowercase letters of the English alphabet.

This means that if we wanted to guess a user’s token, we would just have to encode their username and password and brute-force an 8-character value with a known character set.
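
For a sense of scale, here’s a quick back-of-the-envelope calculation in Ruby, along with a sketch of what enumerating candidate tokens might look like (illustrative only; in practice each candidate would be tested against the application):

require 'base64'

keyspace = 26**8               # 208_827_064_576 possible random prefixes
bits     = Math.log2(keyspace) # ~37.6 bits of entropy -- weak for a session token
puts "#{keyspace} candidates (~#{bits.round(1)} bits)"

# Enumerate candidate tokens for a known username and password.
charset = ('a'..'z').to_a
suffix  = Base64.strict_encode64('user123' + 'password')
charset.repeated_permutation(8).first(3).each { |p| puts p.join + suffix }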

Now that we know all this, we’re ready to perform the same exercise on a more difficult example. The Average tokenizer doesn’t require any user input; it just generates random “session” tokens. See if you can use this technique to figure out its character set.

The goal here is to introduce you to the simpler side of Sequencer so that you can use it without feeling intimidated. There’s a TON of information in this tool that you can use for FIPS compliance and cryptographic analysis, but that’s not ALL you can use it for. Sometimes running tokens through the tool lets you see weaknesses that would otherwise stay hidden from the naked eye when trawling through thousands of tokens by hand.

I hope this has lifted a piece of the veil for Sequencer as a weapon in your arsenal. Keep an eye out for more posts where we dig into the crypto.