What about Concurrency?
One of the more interesting and useful features in Go is its first-class language support for concurrency. Using goroutines, it’s easy to spin up a concurrent thread of execution, making Go a great choice for networked or responsive applications. However, it’s also important to be aware of some gotchas that come with threaded programming and that aren’t present in other common web frameworks (Rails, Django, etc.).
Before we delve into some of the potential problems, let’s briefly cover how goroutines work in a common scenario:
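The original code sample isn’t reproduced in this copy. A minimal sketch of the common scenario (the greet function and the messages are our own invention) might look like this:

```go
package main

import "fmt"

// greet sends a greeting back to the caller over a channel.
func greet(name string, ch chan string) {
	ch <- "hello, " + name
}

func main() {
	ch := make(chan string)

	// A named function launched as a goroutine.
	go greet("world", ch)

	// An anonymous function launched as a goroutine.
	go func() {
		ch <- "hello from an anonymous goroutine"
	}()

	// Each receive blocks until a goroutine sends, so neither message is lost.
	fmt.Println(<-ch)
	fmt.Println(<-ch)
}
```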
As you can see, goroutines are launched with the go keyword and can run either named functions or anonymous functions. The go statement is nonblocking, and the new goroutine becomes eligible for scheduling immediately. To pass information between goroutines, we use channels, a Go construct that lets goroutines send values to one another. A send on an (unbuffered) channel blocks until a corresponding receive occurs, so there’s no risk of losing messages.
In Go versions before 1.5, goroutines all run on a single OS thread by default (GOMAXPROCS defaults to 1), meaning they are concurrent but not actually parallel. Only one goroutine runs at a time, and the runtime’s scheduler switches between them to ensure they all make progress.
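The example the next paragraph refers to is also missing from this copy. A rough reconstruction follows (the firstResult name is ours; note that since Go 1.14 the scheduler preempts busy loops, so the hang described below no longer reproduces on modern toolchains):

```go
package main

import (
	"fmt"
	"runtime"
)

// firstResult starts one goroutine that answers immediately and one
// that loops forever, then waits for the first answer.
func firstResult() int {
	ch := make(chan int)

	go func() {
		ch <- 1 // returns a 1 on its channel right away
	}()

	go func() {
		for {
			// Infinite loop: on old cooperative schedulers this
			// goroutine could monopolize the only thread.
		}
	}()

	return <-ch
}

func main() {
	// Force a single thread, as in the Go <1.5 default. With the old
	// scheduler the program could hang here; with two threads (or with
	// Go 1.14+ preemption) it exits as soon as the first goroutine returns.
	runtime.GOMAXPROCS(1)
	fmt.Println(firstResult())
}
```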
This simple example illustrates the difference. The first goroutine immediately returns a 1 value on its channel, while the second spins in an infinite loop. With a single thread, the program can hang: the scheduler may never switch away from the busy loop, so the first goroutine’s result is never delivered. With two threads, the goroutines run in parallel and the program can exit as soon as the first one returns.
Now that we’ve demonstrated the basics of goroutines, let’s move on to basic race conditions. Let’s take a simple online bank application as an example. The bank is only capable of sending money from me to you, but it does come with an online display of the total balances. Each time the request is made to send money, the bank transfers the cash and outputs the new balances:
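The bank example itself is not included in this copy. A sketch of what it plausibly looked like (the User type, Transfer method, route path, and starting balances are our reconstruction) follows:

```go
package main

import (
	"fmt"
	"net/http"
)

// User is an account holder with a dollar balance.
type User struct {
	Username string
	Balance  float64
}

// Transfer moves amount from u to other, refusing overdrafts. The
// balance check and the withdrawal are separate steps, which is what
// makes this code racy under concurrent requests.
func (u *User) Transfer(amount float64, other *User) error {
	if u.Balance < amount { // check...
		return fmt.Errorf("insufficient funds")
	}
	u.Balance -= amount // ...then act
	other.Balance += amount
	return nil
}

func main() {
	me := &User{Username: "me", Balance: 500}
	you := &User{Username: "you"}

	// Every request is served in its own goroutine, so two requests can
	// interleave between Transfer's check and its withdrawal.
	http.HandleFunc("/transfer", func(w http.ResponseWriter, r *http.Request) {
		if err := me.Transfer(50, you); err != nil {
			http.Error(w, err.Error(), http.StatusForbidden)
			return
		}
		fmt.Fprintf(w, "me: $%.2f, you: $%.2f\n", me.Balance, you.Balance)
	})
	http.ListenAndServe(":8080", nil)
}
```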
This is a common design for Go web applications. We define a simple object for our system to work with (User) and a method on that object to transfer money. Then, using the net/http package, we create a simple HTTP route that transfers $50. Under normal operation, the code runs as planned and transfers $50 every time the route is accessed, and once a user’s account balance hits $0.00, all further transfers should be prevented. However, if we send many requests very quickly, we can withdraw more money than an account holds and drive the balance negative!
This is a textbook race condition. In this code, the check of the account balance is separate from the operation that withdraws from it. If one request has finished checking the balance but has not yet decremented it, another goroutine checking the balance will find that it still hasn’t reached $0! This is referred to as a “check-then-act” race condition because of the order of the operations, and it is surprisingly common. Simply by reading the code, it’s not immediately obvious that a problem even exists. Such is the nature of concurrency bugs.
Doing it right
So how do we avoid being exploited by this problem? We clearly can’t just remove the check, so instead we have to ensure that nothing can happen between the check and the action (changing the balances). As in other languages, we can do this relatively easily with a lock that guards the balance updates, ensuring only a single goroutine operates on them at a time (in other words, a mutex):
But using channels, we can create a more elegant example with event loops. We delegate a background goroutine to listen on a channel and process transfer operations as they come in. Since this goroutine operates on channel inputs sequentially, there is no risk of race conditions and no need for a state variable:
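Again the original listing is missing. A reconstruction under the same assumptions (transferOp and startTeller are our names) might be:

```go
package main

import "fmt"

// User is an account holder with a dollar balance.
type User struct {
	Username string
	Balance  float64
}

// transferOp is one money-transfer order; Result reports whether it succeeded.
type transferOp struct {
	From, To *User
	Amount   float64
	Result   chan bool
}

// startTeller launches the background event loop. Because it processes
// orders one at a time, the check and the withdrawal can never interleave.
func startTeller(ops chan transferOp) {
	go func() {
		for op := range ops {
			if op.From.Balance >= op.Amount {
				op.From.Balance -= op.Amount
				op.To.Balance += op.Amount
				op.Result <- true
			} else {
				op.Result <- false
			}
		}
	}()
}

func main() {
	me := &User{Username: "me", Balance: 500}
	you := &User{Username: "you"}

	ops := make(chan transferOp)
	startTeller(ops)

	// An HTTP handler would simply submit an order and wait for the result.
	res := make(chan bool)
	ops <- transferOp{From: me, To: you, Amount: 50, Result: res}
	fmt.Println(<-res)                                             // true
	fmt.Printf("me: $%.2f, you: $%.2f\n", me.Balance, you.Balance) // me: $450.00, you: $50.00
}
```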
We’ve now created a fairly reliable system for avoiding race conditions. But we’ve exposed ourselves to another problem: denial of service (DoS). If the money-transfer operation slows down, incoming requests must wait for the event loop to read their orders off the channel. That backlog can easily build up faster than it is drained and turn into a full-blown DoS attack on the site!
Go provides some basic mechanisms to deal with this in the form of buffered channels, but they aren’t always enough. A better solution is to combine timeouts with Go’s excellent select statement:
In this snippet, we update our event-loop code so that a money-transfer order waits no longer than 10 seconds. If the event loop doesn’t signal on the result channel within 10 seconds, we simply return a message telling the user the request was received but may take a while to process. With this method, we’ve limited the damage a denial-of-service attack can do and created a robust system that processes money transfers without race conditions!
Go is a powerful language with great built-in security and concurrency features. The standard library makes writing secure, high-performance applications easier, but doing it correctly is not always obvious. It’s still important to approach development with a security-focused mindset and to think carefully through your application’s architecture.