
OP is a friend of mine, and when I first heard of his problem I wondered if there might be an analytical solution to quantify the difference between intelligent vs naive routing. I took this problem as an opportunity to teach myself a bit of Queueing Theory[1], which is a fascinating topic! I'm still very much a beginner, so bear with me and I'd love to get any feedback or suggestions for further study.

For this example, let's assume our queueing environment is a grocery store checkout line: our customers enter, line up in order, and are checked out by one or more registers. The basic way to think about these problems is to classify them across three parameters:

- arrival time: do customers enter the line in a way that is Deterministic (events happen at fixed intervals), random/Markovian (interarrival times are exponentially distributed, i.e. arrivals follow a Poisson process), or General (events are drawn from an arbitrary probability distribution)?

- checkout time: same question for customers getting checked out, is that process D or M or G?

- R = # of registers

So the simplest example would be D/D/1, where - for example - every 3 seconds a customer enters the line and every 1.5 seconds a customer is checked out by a single register. Not very exciting. At a higher level of complexity, M/M/1, we have a single register where customers arrive as a Poisson process at rate _L and are checked out at rate _U, with exponentially distributed checkout times (both rates in units of # per time interval). (You can also model this as an infinite Markov chain where your current node is the # of people in the queue, and you transition to a higher node with rate _L and to a lower node with rate _U.) For this system, a customer's average time spent waiting in the queue (before being checked out) is 1/(_U - _L) - 1/_U.

The intelligent routing system routes each customer to the next available checkout counter; equivalently, each checkout counter grabs the first person in line as soon as it frees up. So we have a system of type M/G/R, where our checkout time is Generally distributed and we have R>1 servers. Unfortunately, this type of problem is analytically intractable, as of now. There are approximations for waiting times, but they depend on all sorts of thorny higher moments of the general distribution of checkout times. But if instead we assume the checkout times are exponentially distributed, we have an M/M/R system. In this system, the average time spent waiting in queue per customer is C(R, _L/_U)/(R _U - _L), where C(a,b) is an involved function called the Erlang C formula [2].

How can we use our framework to analyze the naive routing system? I think the naive system is equivalent to an M/M/1 case with arrival rate _L_dumb = _L/R. The insight here is that in a system where customers are instantaneously and randomly assigned to one of R registers, each register should have the same queue characteristics and wait times as the system as a whole. And each register has an arrival rate of 1/R times the global arrival rate. So our average queue time per customer in the dumb routing system is 1/(_U - _L/R) - 1/_U.

In OP's example, we have on average 9000 customers arriving per minute, or _L = 150 customers/second. Our mean checkout time is 306ms, or _U ~= 3. Evaluating for different R values gives the following queue times (in ms):

    # Registers        51     60     75    100    150    200    500   1000   2000   4000
    dumb routing   16,667  1,667    667    333    167    111     37     18      9      4
    smart routing     333     33     13      7      3      2      1      0      0      0

which are reasonably close to the simulated values. In fact, we would expect the dumb router to be comparatively even worse for the longer-tailed Weibull distribution they use to model request times, because it makes bad outcomes (e.g. two consecutive 99th-percentile requests landing on the same register) even more costly. This observation seems to agree with some of the comments as well [3].
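
If you want to play with these numbers yourself, here's a rough Python sketch of the two formulas above (this is not OP's simulation - it just plugs _L = 150/s and _U = 3/s into the M/M/1 and M/M/R expressions, using the standard numerically stable recursion for Erlang C - so expect the same ballpark rather than identical values):

    LAM, MU = 150.0, 3.0   # _L = 150 customers/sec, _U ~= 3 checkouts/sec (306ms mean)

    def erlang_c(r, a):
        """Erlang C: probability an arriving customer has to wait in an M/M/R
        queue with offered load a = _L/_U (computed via the Erlang B recursion)."""
        b = 1.0
        for k in range(1, r + 1):
            b = a * b / (k + a * b)
        rho = a / r
        return b / (1.0 - rho * (1.0 - b))

    def smart_wait_ms(r):
        """Smart routing: M/M/R queue wait, C(R, _L/_U) / (R*_U - _L)."""
        return 1000.0 * erlang_c(r, LAM / MU) / (r * MU - LAM)

    def dumb_wait_ms(r):
        """Dumb routing: each register is an independent M/M/1 with arrival rate
        _L/R, so the queue wait is 1/(_U - _L/R) - 1/_U."""
        return 1000.0 * (1.0 / (MU - LAM / r) - 1.0 / MU)

    for r in (51, 60, 75, 100, 150, 200, 500, 1000, 2000, 4000):
        print("R=%5d  dumb=%9.0f ms  smart=%7.2f ms" % (r, dumb_wait_ms(r), smart_wait_ms(r)))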

[1] http://en.wikipedia.org/wiki/Queueing_theory

[2] http://en.wikipedia.org/wiki/Erlang%27s_C_formula#Erlang_C_f...

[3] http://news.ycombinator.com/item?id=5216385



As someone building a Heroku Cedar-esque PaaS[1], here's the problem with your analogy: back in the Aspen/Bamboo days (and what a lot of people still think of as "PaaS"), Heroku was like this (i.e. your app was the one-at-a-time cashier, and Heroku's "routing mesh" set up checkout lanes and routed customers to your cashiers intelligently).

Now, however, Heroku lets you build your own checkout lane, so you can run apps with single-response-thread Rails, multi-response-thread (e.g. unicorn) Rails, and async long-polling/SSE etc. apps w/ ruby/node.js/scala/go/erlang/etc that can handle huge numbers of simultaneous connections. Throw websockets into the mix here too (we do). And you can even mix & match within an app, distributing requests to different stacks of code based on URL or the time of day, which may have different internal response/queuing characteristics (e.g. we have a Rails app w/ a Grape API w/ a handful of URLs mapped in via Rack::Stream middleware rather than going through Rails).

So to get back to your analogy, Heroku is automating the setup of the "lanes", but each supermarket is allowed to use its own blueprint and cashier and checkout process, and basically just do whatever they want within a lane. Maybe some "lanes" are more like a restaurant where 25 customers spend an average of 45 minutes at a time juggled between 6 waiters while others are still bottlenecked supermarket checkouts, with everything in between. Maybe one type of customer ties up the cashier/waiter so much that he can only handle 10 others instead of 100 normally. And it could all change every time the store opens (a deployment with new code occurs), or based on what the specific customers are buying.

The point is simply that there's not a "next available checkout counter" in this situation, because all apps are not single-threaded Rails apps anymore. Which doesn't mean there aren't better solutions than dumb routing, but it does get a bit more complicated than the supermarket checkout.

[1] http://www.pogoapp.com/


We're discussing Rails on Heroku specifically which, when not running unicorn, should be a "next available checkout counter" situation. Ideally it should be possible to make this an optional behavior that you can choose to turn on for Rails apps.


I agree there should be a better way - it's just important to understand that Rails doesn't get any special treatment on a PaaS done correctly, so it's important to come up with a generic solution.

I think part of the solution would be a customizable option (i.e. how many requests each dyno can handle simultaneously), probably combined with intelligently monitoring/balancing proxy load so new requests always go to the least-loaded dyno.

Buildpacks could probably be used to parse the Gemfile/etc, see what mix of webrick/unicorn/rails/sinatra/rack-stream/goliath etc. you're using, and set a semi-intelligent default. But apps are increasingly unlike a checkout line. Apps are more like the supermarket, which is harder.


Rails doesn't need to be treated specially. All that is needed is a "maximum number of simultaneous connections to pass to this backend" setting coupled with load balancing by available slots rather than purely randomly.

The issue here isn't that Rails needs to be treated specially - this problem applies to varying extents in any type of backend where some types of requests might turn out to be computationally heavy or require lots of IO. You can't magic away that: a request that takes 8 CPU seconds will take 8 CPU seconds. If you start piling more requests onto that server, response times will increase, even if some will keep responding, and if another 8-CPU-second request hits too soon, chances increase that a third one will, and a fourth, and before you know it you might have a pileup where available resources for new requests on a specific instance are rapidly diminishing and response times shoot through the roof.

Pure random distribution is horrible for that reason pretty much regardless.
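
To make the "available slots" idea concrete, here's a toy sketch (all names and numbers made up, nothing Heroku-specific): each backend declares how many requests it can handle at once, and the balancer sends each new request to the backend with the most free slots instead of to a random one:

    # Toy sketch of slot-based balancing (hypothetical names, not a real API).
    backends = [
        {"name": "dyno-1", "slots": 1, "active": 0},   # single-threaded backend
        {"name": "dyno-2", "slots": 1, "active": 1},
        {"name": "dyno-3", "slots": 8, "active": 5},   # concurrent backend
    ]

    def route(backends):
        # Pick the backend with the most free slots; hold the request if all are full.
        slack, best = max(((b["slots"] - b["active"], b) for b in backends),
                          key=lambda t: t[0])
        if slack <= 0:
            return None            # everything full: queue at the balancer instead
        best["active"] += 1        # decrement again when the response completes
        return best["name"]

    print(route(backends))         # -> dyno-3 (3 free slots)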

Now, doing "intelligent" routing is a lot easier for servers with some concurrency, as you can "just" have check requests and measure latency for the response and pick servers based on current low latency and get 90% there and that will be enough for most applications. Sure, the lower the concurrency, the more you risk having multiple heavy queries hit the same server and slow things down, and this request grows dramatically with the number of load balancers randomly receiving inbound requests to pass on, but at least you escape the total pileup more often.

But that's also a clue to one possible approach for non-concurrent servers: group them into buckets handled by a single active load balancer at a time, and have front ends that identify the right second-layer load balancer. Shared state is now reduced to having the front-end load balancers know which second-layer load balancers are the currently active ones for each type of backend. It costs you an extra load balancer layer with corresponding overhead. But don't you think OP would prefer an extra 10ms per request over the behaviour he's seen?


I'm sure OP could prefer the extra 10ms, but then everyone else who can deal with random dispatching right now has to pay a 10ms penalty because OP built his stuff on a technology that can deal with only one request at a time on a server, which boggles the mind to begin with.


Why? The system could easily be built so that it by default only aggregates those services where the configuration indicates they can handle a concurrency below a certain level, and does random balancing of everything else.

The "everyone else who can deal with random dispatching right now" is a much smaller group than you think. Anyone who has long running requests that grind the CPU or disk when running, will be at high risk of seeing horribly nasty effects from random dispatching, no matter whether their stack in ideal conditions have no problem handling concurrent requests.

It's just less immediately apparent, as any dynos that start aggregating multiple long running requests will "just" get slower and slower instead of blocking normally low-latency requests totally.


"The system could easily be built so that it by default only aggregates those services where the configuration indicates they can handle a concurrency below a certain level, and does random balancing of everything else."

Let me know when you are done with that.


I've built fairly large haproxy based infrastructures, thank you very much. Doing this is not particularly challenging.

Actually what I'd probably do for a setup like this would be to balance by the Host: header, and simply have the second layer be a suitable set of haproxy instances balancing each by least connections.

Immediately vastly better than random.


Haproxy doesn't support dynamic configurations as far as I know, which is a serious problem if you're letting lots of people add/change domains and scale backends up/down dynamically. A Heroku haproxy would probably need to be restarted multiple times a second due to config changes. Nginx can do dynamic backends with lua & redis, but it can't use the built-in upstream backend balancing/failover logic if you do.


While it doesn't support dynamic configurations, it does support hot reconfiguration (the new daemon signals the old processes to gracefully finish up and shut down), and reconfigures very rapidly. You still don't want to restart it multiple times a second, but you don't need to:

A two layer approach largely prevents this from being a problem. You can afford total overkill in terms of the number of haproxies as they're so lightweight - running a few hundred individual haproxy instances with separate configs even on a single box is no big deal.

The primaries would rarely need to change configs. You can route sets of customers to specific sets of second-layer backends with ACLs on short substrings of the hostname (e.g. two-letter combinations), so that you know which set of backends each hostname you handle maps to, and then further balance on the full host header within that set to enable the second layer to balance on least-connections to get the desired effect.

That lets you "just" rewrite the configs and hot-reconfigure the subset of second layer proxies handling customers that falls in the same set on modifications. If your customer set is large enough, you "just" break out the frontend into a larger number of backends.

Frankly, part of the beauty of haproxy is that it is so light that you could probably afford a third layer - a static primary layer grouping customers into buckets, a dynamic second layer routing individual hostnames (requiring reconfiguration when adding/removing customers in that bucket) to a third layer of individual customer-specific haproxies.

So while you would restart some haproxy multiple times a second, the restarts could trivially be spread out over a large pool of individual instances.
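
For the curious, here's a rough illustration of that layering in Python rather than haproxy config (all names invented): a static first layer buckets on a short hostname prefix, and only the second-layer proxy for that bucket needs a reload when one of its customers changes:

    def first_layer_bucket(host):
        """Static layer: a two-letter prefix of the hostname picks the second-layer proxy."""
        return host.split(".")[0][:2].lower()        # "myapp.example.com" -> "my"

    def second_layer_backend(host, backends_by_host):
        """Dynamic layer: the full Host: header maps to that customer's own backend
        pool, which then balances on least connections."""
        return backends_by_host[host.lower()]

    backends_by_host = {"myapp.example.com": ["dyno-1", "dyno-2"]}
    print(first_layer_bucket("myapp.example.com"))                   # -> my
    print(second_layer_backend("myapp.example.com", backends_by_host))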

Alternatively, "throwing together" a second or third layer using iptables either directly or via keepalived - which does let you do dynamic reconfiguration trivially, and also supportes least-connections load balancing - is also fairly easy.

But my point was not to advocate this as the best solution for somewhere like Heroku - it doesn't take a very large setup before a custom solution starts to pay off.

My point was merely that even with an off-the-shelf solution like haproxy, throwing together a workable solution that beats random balancing is not all that hard - there's a large number of viable solutions - so there really is no excuse not to for someone building a PaaS.


You're right. They'd have to instead build a load-balancer that solves the problem, and that's too darn hard.


> it's just important to understand that Rails doesn't get any special treatment on a PaaS done correctly, so it's important to come up with a generic solution.

It's kind of weird to describe not optimizing the entire platform provided to apps as "PaaS done correctly". Making a PaaS more generic has a certain kind of value in terms of broadening the audience and enabling heterogeneous systems to be implemented on it, but if you are doing that by sacrificing the optimization of the individual application platforms available, you are losing some of what makes a PaaS valuable as opposed to roll-your-own platform support on top of a generic IaaS.

It's especially problematic to say that worsening support for the main existing app framework in use on an established PaaS and giving existing customers orders of magnitude less value for their money is doing something right.

> I think part of the solution would be customizable option

That's probably a good idea, though the default for existing apps should not have changed, especially without clear up-front notice.

> But apps are increasingly unlike a checkout line.

Existing apps are, for the most part, exactly as much like a checkout line as they were before the unannounced change.


> it's just important to understand that Rails doesn't get any special treatment on a PaaS done correctly

Why is it only "done correctly" if it does not account for specific properties of the technology used by a particular customer?


Because PaaS is a generic technology for running and scaling applications with a multitude of different language/framework/stacks, and many/most of those apps do not share the specific properties of single-threaded Rails (including many Ruby/Rails apps!)

And Rails 4 is going to bake-in "live streaming", making single-threaded app servers even more of an edge case.


That sounds a lot more like IaaS to me. PaaS should be providing the entire platform, hence the name.

The entire promise of the space is that the customer only has to worry about his own code and perhaps tweaking a few knobs.


That's like saying Craigslist did it correctly and AirBnB didn't because AirBnB is only tailoring to a specific segment of the world's supply and demand market.

Rails is very widely used. How can you consider that an edge case?


It's like saying EC2 should tailor its virtualization to Fedora 16, or Mac OS X should tailor its windowing system to Photoshop CS4, or Apache should tailor mod_proxy to Joomla. There may be specific attributes of popular applications that need to be adapted to, but those adaptations need to be built in a generic way and exposed through a standard API.

Since even many Rails apps now do not follow a single threaded request-response model, that model of running a web application needs to be considered as one case of many, and building a platform that supports many/all use-cases as well as possible is more complicated than building a platform that fits one use case like a leather glove.


> Mac OS X should tailor its windowing system to Photoshop CS4

I think statements like these obscure away the very tight coupling Heroku has historically had with Rails. While certainly Heroku now perhaps envisions itself as a do-it-all PaaS, there's no denying Rails at one point (and, numerically, perhaps still) was their bread and butter.

While I don't have numbers to support or refute the assertion that "most Rails apps are primarily single threaded", my suspicion is that this is in fact still the case.


> or Mac OS X should tailor its windowing system to Photoshop CS4

I'm taking this example out specifically, because it illustrates my point quite nicely.

If there were a sizeable community of people who only wanted to use a computer for Photoshop, and tailoring the windowing system to them made it a significant usability improvement for those people, then it would be a completely imaginable situation that upon first opening your brand new Mac, it'd ask you whether you're one of those Photoshop people and want the special windowing system setup.

Well, ok, haha Apple and customizing anything for anyone, ever. But many other vendors might make such a choice.

The apparent stubborn refusal of many PaaS services, including Heroku and yours, to particularly tailor to a very common configuration of Rails sounds like a hole in the market, to me. As a customer, I don't care whether this is "incorrect" because Rails does not conform to some yet-to-be-defined standard or simply because the PaaS doesn't have their shit together. The customer experience is the same: I'm running a blog tutorial app, and the performance sucks.


Microsoft have definitely done exactly this. Windows versions have famously been "rebroken" in development to keep bug-for-bug compatibility so they didn't break big third party applications.


The 'P' stands for platform. Providing the platform as a service absolutely means catering to the specific needs of the platform. There is no generic platform. If you want to support multiple platforms, then you support multiple platforms. You don't stop supporting any platform at all.


This whole "we can't optimize for Rails anymore" argument seems like a red herring. Dumb (random) routing is dumb routing. It doesn't matter if you have a single-threaded Rails or Django stack or highly concurrent Node.js or Erlang serving requests: if you distribute the requests randomly, you're not going to use your resources efficiently, and the Heroku "promise" of spinning up a 2nd dyno and getting 100% more concurrency is just not true.

All it changes is the details of the analysis, not the core finding. It makes the problem less severe, but it's still pretty bad, and it's worse the more uneven your traffic is (in terms of how long each request takes to service).

All apps, even Unicorn, JRuby, Node.js, Erlang, etc. would benefit from something better than random routing.


Since when does Heroku support WebSockets?


It doesn't yet, but that doesn't really change the situation if you've already got long-polling/SSE. I mentioned it because we support it and it seems like a big part of the model the web is moving towards (which is significantly less request-response based).


Just a little thing: why can't we just say "servers" and "requests" instead of "registers" and "customers"? Stores don't get 9000 customers per minute and don't have 500 registers. Everybody here would understand servers and requests.


Though I'm aware of the basics of request routing, forming a real-world analogy that's more tangible definitely helped me grok the explanation.


Distributed load balancing is a tough problem with two pieces to it. One is the queueing theory part.

The other is the systems side to it. If you have multiple customers and multiple checkout lines, and if your customers act independently without seeing the lines (no feedback from servers, network failures and delays, implementation complexity), what do you do?

It isn't a trivial problem. The easy route is paying Cisco millions of dollars for load balancers, but those only scale so far.

The bigger internet companies spend years of development time trying to make distributed load balancing work, but the issues there are a bit more complicated than a few customers walking to checkout lines.


You are certainly right that it becomes an M/G/1 model and thus that M/M/1 and M/D/1 will give reasonable approximations. The reason that it's an M/?/1 model is partly what you're saying, but also partly because the random assignment of the incoming requests acts as a random filter on a Poisson process. Poisson processes are just defined by not having history, and the random filter is not history-dependent either, so the output from the filter -- the input to any node -- is still a Poisson process.

What's interesting to me here is: Suppose rather than doing this with a random variable, you do it with a summing filter, a simple counter:

    def on_request(r):
        global n                      # round-robin counter shared across requests
        nodes[n].handle(r)
        n = (n + 1) % len(nodes)
That's a really simple solution but it should tend to average out the Poisson input - each node sees only one out of every len(nodes) arrivals, so its interarrival times are Erlang-distributed and look increasingly deterministic as the node count grows - which gives you something closer to a D/G/1 problem.


Queuing theory is cool, but I'm not 100% sure it actually applies here in a meaningful sense (although, disclaimer: I'm no more experienced here than you). A lot of queuing theory assumes that you must route a request to a handler immediately as you receive it, and that reassigning a request is a very expensive process.

This intuitively explains why queueing theory is very big in router design - imagine that you send a packet through 10 hops and at the last hop it experiences a significant delay; does the packet then turn around and go back through another router? Which hop does it pick to look for a different path through? What happens if the packet gets delayed going backwards looking for another route? Does it reverse _again_ looking for a quicker route to another router? Answer: it doesn't, routers deliver messages with the "best effort" (in protocol terminology) they can, and high-level latency trends are adapted for through the routing algorithms themselves (read: not adapted for each individual packet). This keeps transport much simpler and therefore faster.

In the case of load balancing, if the "(re)assignment cost" (my terminology) of a request is sufficiently small, then it doesn't make sense to pre-distribute requests; you can afford to wait until you're 100% sure a worker is ready. If a request takes 40ms to process and 0.5ms to distribute/assign to a worker, then waiting for feedback (which would also take 0.5ms) from a worker would incur a slowdown of (40 + 0.5 + 0.5)/40 = 1.025, i.e. about a 2.5% penalty, versus if you pre-assigned before a worker was finished. This seems like a no-brainer if it would keep the width of the distribution of your latencies down.

Edit: thinking about this more, if you have an asynchronous worker model, queueing theory comes back into play. If a worker stops processing Request A to wait for a network response and takes up Request B, and then the network responds while Request B is still working, moving Request A mid-handling to another free machine may be very hard/expensive, if not entirely impossible. As themgt brings up, it sounds like Heroku enabled an asynchronous model in a recent stack and may have dropped the feedback loop that allows for intelligent routing because there's no obvious way to implement it in an asynchronous model.

That being said, you could still have a feedback loop when a worker is completely idle. It's certainly very hard to reason about a worker currently executing any task, but it is very easy to reason about a worker that isn't executing anything. Therefore, it should be straightforward (theoretically) to do intelligent routing to empty workers and then keep the random routing for busy workers in such a way that if there is an idle worker no request should ever go to a busy worker. A more advanced version would keep track of the number of open connections to each worker and simply distribute to the worker with the fewest number of active open connections.

I just checked and nginx actually has a directive (least_conn) to do exactly this, but it's not enabled by default! ELB apparently does something similar (see: https://forums.aws.amazon.com/thread.jspa?messageID=135549&#...).
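
A rough sketch of that policy (idle workers first, otherwise fewest open connections - essentially what least_conn does), with made-up worker names and counts:

    import random

    # Hypothetical open-connection counts per worker, tracked by the balancer.
    active = {"worker-1": 0, "worker-2": 3, "worker-3": 1}

    def pick_worker(active):
        idle = [w for w, n in active.items() if n == 0]
        if idle:
            return random.choice(idle)       # never queue behind a busy worker
        return min(active, key=active.get)   # otherwise fewest open connections

    w = pick_worker(active)
    active[w] += 1                           # decrement when the response completes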


Yeah, you want leastconn + app/router affinity. Affinity is the statement that all of your requests for an app go through one router (to avoid distributed connection state tracking).

In the past, I've accomplished this by having the next layer up consistent hash the api keys onto the router list. If you don't control the top layer (ELB), you need to add a dumb layer just for the hashing.

HAProxy works great for this extra layer. In practice, all you end up doing is adding a "balance hdr(host)" directive (see http://haproxy.1wt.eu/download/1.5/doc/configuration.txt) to get the hashing right, and you're spending <1ms inside HAProxy.
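
In case it's useful, here's a bare-bones sketch of what "consistent hash the keys onto the router list" can look like (illustrative Python only, with invented router names - the real thing lives in the balancer config):

    import bisect, hashlib

    def _h(key):
        return int(hashlib.md5(key.encode()).hexdigest(), 16)

    class Ring:
        """Consistent-hash ring: each Host:/api key maps to one router, and
        adding/removing a router only moves a small share of the keys."""
        def __init__(self, routers, replicas=100):
            self._ring = sorted((_h("%s:%d" % (r, i)), r)
                                for r in routers for i in range(replicas))
            self._hashes = [h for h, _ in self._ring]

        def router_for(self, key):
            i = bisect.bisect(self._hashes, _h(key)) % len(self._ring)
            return self._ring[i][1]

    ring = Ring(["router-1", "router-2", "router-3"])
    print(ring.router_for("myapp.example.com"))   # always lands on the same router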


Maybe this was after you did your work, but ELB currently supports affinity, see: http://aws.amazon.com/about-aws/whats-new/2010/04/08/support...


You want affinity by Host: header, not by cookie/session.


haproxy also has a few balance algos[1] that it can be configured to use. I would think something like static-rr would even be somewhat better than random.

[1]: http://cbonte.github.com/haproxy-dconv/configuration-1.5.htm...


>as an opportunity to teach myself a bit of Queueing Theory[1]

My dear Sir, you are a brave man. I tried the same 1.5 years back on HN - http://news.ycombinator.com/item?id=3329676


OT: I went to the website, had my eyes assaulted with multiple font colours and sizes and left immediately without even trying to read.

People, there is a compromise between Google "brain dead" simplicity and the MySpace "psycho" look: something that is easy to read but still functional.


I can't stop myself from saying this. You wrote all this instead of doing what?


Normally I would just downvote you and move on, but in this case your comment is frustrating enough that I have to say something. I found the comment you responded to (by nsrivast) quite fascinating. A well-written but brief analysis of the problem, with sources attached for further reading -- what's not to like? In-depth and thoughtful comments like that are what keep me coming back to this site, and are what make the community great.

I for one am very thankful that nsrivast took the time to write something so technical and detailed. However, I found your response to be in extremely poor taste. It added nothing to the conversation, and IMHO was rude and unnecessary.


really?


Yes. Really. Some people[1] spend their time writing lengthy and technical posts about specific technical issues. The rest of the world benefits from this. And the person writing the post benefits too, because trying to write something like that makes you smarter. Perhaps you should try it sometime.

[1] - I occasionally do this.


your handle is humbledrone and your opinions are humbled. Boy am I impressed!


Maybe he's off work? Maybe he's "decompressing" from a hard problem at his job? Maybe you should STFU and enjoy the free information you are getting?



