There are many load generators available for stress-testing a web site. The problem is that most of them, such as the Apache benchmark tool ab, only stress-test at a fixed load level. Stress testing at a fixed load level is helpful for finding the maximum capacity a service can handle. However, for testing HPA and other autoscaling systems, we need load generators that can produce variable load over time. Ideally, we would record a load profile as a time series of QPS, and the load generator would then replay that profile by reading the recorded QPS series.

I decided to go searching for a good variable load generator. An initial search on the internet did not yield any obvious project that could take a series of QPS numbers and hit replay. I did go through the “awesome http benchmark” list from Denis Denisov on GitHub:

https://github.com/denji/awesome-http-benchmark

I was looking for a load generator project that could produce good variable load with relatively small changes. Going through the list, I found two projects that have an interesting rate limiting interface:

Vegeta: https://github.com/tsenart/vegeta

And fasthttploader: https://github.com/hagen1778/fasthttploader

Load Generator 1: Vegeta’s Rate Limiting Interface


Let’s take a look at Vegeta first. Vegeta has a “pacer” interface to control the rate of HTTP requests. How the worker interacts with the “pacer” interface is simple: before sending out each request, the worker thread calls the “pacer” interface to determine how long it needs to wait. By inserting extra wait time, the “pacer” controls the rate at which tick tokens are generated.

Now let’s take a closer look at the caller of the pacer interface to understand how it works, in attack.go around line 289:

https://github.com/tsenart/vegeta/blob/master/lib/attack.go#L267

    289                 for {
    290                         elapsed := time.Since(began)
    291                         if du > 0 && elapsed > du {
    292                                 return
    293                         }
    294 
    295                         wait, stop := p.Pace(elapsed, count)
    296                         if stop {
    297                                 return
    298                         }
    299 
    300                         time.Sleep(wait)
    301 

The for loop at line 289 generates the attack tokens.

Line 295 calls the Pacer interface method to determine the wait time.

Line 300 does the actual sleeping.

Later in the same function:

    303                                 select {
    304                                 case ticks <- struct{}{}:
    305                                         count++
    306                                         continue
    307                                 case <-a.stopch:
    308                                         return

Line 304 is where the request token is generated and sent to the ticks channel. Each event on the ticks channel corresponds to one request to the server.

Within the same function there are two places with duplicated code.

    317                         select {
    318                         case ticks <- struct{}{}:
    319                                 count++
    320                         case <-a.stopch:
    321                                 return

The code at line 317 does the same thing as the code at line 304; these two sections could be merged.

The interface called on line 295 is actually hard to use. The reason is that, from the caller’s point of view, it just wants to emit HTTP request tokens at a given rate of X requests per second. An easy strategy is to sleep for 1/X seconds between tokens, which yields X requests per second. However, this simple method assumes that the rest of the for loop is always scheduled promptly and takes negligible time. If the goroutine gets scheduled out anywhere other than the time.Sleep call at line 300, that extra time is added to the delay between request tokens as well. Due to this scheduling behavior, the actual request rate generated will be slightly lower than X.
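To make the drift concrete, here is a small back-of-the-envelope sketch (the 2 ms per-iteration overhead is an assumed figure for illustration, not a measurement): if every iteration adds overhead d on top of the 1/X sleep, the achieved rate is 1/(1/X + d), which is always below X.

```go
package main

import "fmt"

// achievedQPS models the naive sleep-based pacer: each token costs a
// 1/target-second sleep plus some fixed scheduling overhead per
// iteration, so the achieved rate is 1/(1/target + overhead).
func achievedQPS(target, overhead float64) float64 {
	return 1.0 / (1.0/target + overhead)
}

func main() {
	// Assume 2ms of scheduling/bookkeeping overhead per iteration
	// (an illustrative figure, not a measurement).
	fmt.Printf("target 100 QPS, achieved ~%.1f QPS\n", achievedQPS(100, 0.002))
	// prints: target 100 QPS, achieved ~83.3 QPS
}
```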


Sure enough, if you take a look at any of the existing pacer implementations, there is some logic similar to this:

https://github.com/tsenart/vegeta/blob/master/lib/pacer.go#L54

     53         expectedHits := uint64(cp.Freq) * uint64(elapsed/cp.Per)
     54         if hits < expectedHits {
     55                 // Running behind, send next hit immediately.
     56                 return 0, false
     57         }

It basically says that if the number of request tokens generated so far is less than expected, then instead of returning the normal 1/X waiting time, return 0 so the next token is generated immediately. The pacer function returns either 0 or 1/X as the waiting time. This is effectively a feedback control loop with two distinct controller outputs.

So vegeta’s pacer interface is tricky to use.

Load Generator 2: fasthttploader’s Rate Limiting Interface

Now, let’s take a look at fasthttploader.

Fasthttploader has a ratelimiter interface that is much better designed than vegeta’s pacer interface.

https://github.com/hagen1778/fasthttploader/blob/master/ratelimiter/ratelimiter.go

In the ratelimiter, there is no wait call at all. Fasthttploader uses Go’s time.Ticker to calculate the number of request tokens based on how much time has elapsed. That is much cleaner, and it is not affected by scheduling behavior: if the ticker callback is scheduled late, it simply adds correspondingly more tokens to the Go channel.

https://github.com/hagen1778/fasthttploader/blob/master/ratelimiter/ratelimiter.go#L36

     36 func (l *Limiter) start() {
     37         var surplus float64
     38         for {
     39                 select {
     40                 case <-l.doneCh:
     41                         return
     42                 case <-l.ticker.C:
     43                         now := time.Now()
     44                         l.mu.Lock()
     45                         tokens := (now.Sub(l.lastEvent).Seconds() * l.limit) + surplus
     46                         l.mu.Unlock()
     47 
     48                         n := int(tokens) - len(l.ch)
     49                         if n <= 0 {
     50                                 continue
     51                         }
     52                         surplus = tokens - float64(int(tokens))
     53                         for i := 0; i < n; i++ {
     54                                 l.ch <- struct{}{}
     55                         }
     56 
     57                         l.mu.Lock()
     58                         l.lastEvent = now
     59                         l.mu.Unlock()
     60                 }
     61         }

 

There is no time.Sleep() call at all.

Also, the rate control interface is just a simple call:

https://github.com/hagen1778/fasthttploader/blob/master/loader.go#L113

    113         throttle.SetLimit(cfg.qps)
    114         client.RunWorkers(cfg.c)

The caller just needs to set the requests per second, and the ratelimiter executes at that rate. There is no need to worry about the catch-up behavior of a feedback control loop. Another advantage is that calling SetLimit() is thread safe and can be done outside of the main running loop. We used this to implement a simple PlayLSV method that reads a new rate from standard input at each interval and applies it with SetLimit():

      8 func (l *Limiter) PlayLSV(interval int) {
      9         ticker := time.NewTicker(time.Duration(interval) * time.Second)
     10         go func() {
     11                 for {
     12                         <-ticker.C
     13                         var rate float64 = 0
     14                         _, err := fmt.Scan(&rate)
     15                         if err != nil {
     16                                 ticker.Stop()
     17                                 break
     18                         }
     19                         l.SetLimit(rate)
     20                 }
     21         }()
     22 }

In this exercise, we compared the variable load generation interfaces provided by the vegeta and fasthttploader projects. I enjoy the simple-to-use rate limiting design in fasthttploader. We also implemented a very simple variable load generation feature by reading load rate values from an LSV file.
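For reference, the LSV (line-separated values) profile is just a plain text file with one QPS value per line, piped to the loader’s standard input; a 300 600 900 1500 600 200 sequence would look like:

```
300
600
900
1500
600
200
```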

Our experiments also showed that fasthttploader can produce higher load than vegeta, thanks to its use of the higher-performance fasthttp package rather than the standard Go http package.

Here is a report generated by fasthttploader given the 300 600 900 1500 600 200 variable load sequence at a 10 second interval.

To learn more about Opsani, read our datasheet here.