Saturday, April 14, 2007

Multiple HTTPService Requests

HTTPService requests can be run in async mode, but what happens if the state of your database requires serialization of requests? Or what if you need to debug your server-side code through some kind of log file? Async server requests can make debugging a problem even if you have a debugging system that lets you step through your server-side code. I found that my async HTTPService requests were sometimes failing for odd reasons until they were run serially, one after another, through a single HTTPService object. I chose to run the requests one after another both to facilitate debugging and to maintain my database state. Keep in mind that when requests are run async, a subsequent request may be executed out of order, which matters if the requests need to run in a particular order.

Build a REST serialization mechanism that queues up requests as deep as required and then executes them serially, since that is the way the REST backend needs to receive the requests in order to maintain database state.
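A minimal sketch of that serialization idea, written in TypeScript rather than Flex-era ActionScript so the pattern is easy to follow; the RequestQueue name and enqueue method are illustrative, not part of any Flex API. Each queued task starts only after every previously queued one has settled:

```typescript
// Illustrative serial request queue: tasks run strictly one after another,
// in the order they were enqueued, regardless of how long each takes.
type Task<T> = () => Promise<T>;

class RequestQueue {
  private chain: Promise<unknown> = Promise.resolve();

  // Enqueue a request; it runs only after all previously queued ones finish.
  enqueue<T>(task: Task<T>): Promise<T> {
    const result = this.chain.then(task, task); // run even if a prior task failed
    this.chain = result.catch(() => undefined); // keep the chain alive on errors
    return result;
  }
}
```

A caller would wrap each backend call, e.g. `queue.enqueue(() => fetch(url))`, so the REST backend only ever sees one in-flight request at a time, in order.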

I am using several HTTPService calls to plain old REST APIs on the backend. If the end user clicks some of the UI controls multiple times quickly, many calls to these services get queued up, and it can take a long time for all the data to be retrieved. For example, clicking on a list box item sends requests to refresh all the data, so if the user were to use the keyboard to quickly scroll through the list box, a great many calls would get queued up. I tried setting concurrency="last" on the HTTPServices, but its only apparent effect is that the UI changes only when the last dataset is received; all the service calls still get queued, and there is a long delay after making many calls. I also tried calling disconnect() and cancel() on the HTTPService before making any new backend call, but that did not appear to prevent many calls from getting queued up.
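What concurrency="last" actually buys you can also be implemented by hand with a generation counter: every new request invalidates the ones before it, and a stale response is simply discarded when it finally arrives. This is a hedged sketch in TypeScript (not Flex code); LatestOnly, its run method, and the apply callback are made-up names for illustration:

```typescript
// Illustrative "last wins" guard: only the most recently issued request
// is allowed to update the UI; earlier responses are ignored on arrival.
class LatestOnly<T> {
  private generation = 0;

  async run(task: () => Promise<T>, apply: (value: T) => void): Promise<void> {
    const myGen = ++this.generation; // this call is now "the latest"
    const value = await task();
    if (myGen === this.generation) apply(value); // discard out-of-date results
  }
}
```

Note this only suppresses stale UI updates, matching the behavior described above: the superseded requests still go over the wire unless you also cancel them.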

The maximum number of concurrent HTTP connections allowed to a web server is controlled by the browser. The HTTP 1.1 specification suggests a limit of 2 connections per host, though this requires further consideration if persistent connections are to be used. Some browsers can be configured to accept more, but your users will more than likely have the default settings. IE honors the 2-connections-per-host suggestion (to change this you have to edit the registry; see MaxConnectionsPerServer). Firefox sets this value to 8 but still limits persistent connections to 2 (you can change these settings via about:config). If you were dealing with a closed network or an intranet application, you might be able to change your company's IT policy and roll out different default settings, but for public applications you are stuck with the defaults. There are lots of other tricks that one can use to optimize HTTP requests (e.g. idempotent GET requests can benefit from pipelining), but my point is that there is a bit more to consider than just firing off a bunch of simultaneous requests.

Yes, once one of the previous requests completes (with either a fault or a result), the next outstanding request can proceed. Note that you can make use of multiple CNAMEs to increase the number of requests made concurrently per host (e.g. Google Maps uses mt0.google.com through mt3.google.com to get 8 concurrent connections to load map data). Persistent / "reusable" / "keep alive" connections are the default behavior in HTTP 1.1, so your connections are already being re-used for multiple requests. The lifetime of a persistent connection, however, is typically much shorter than, say, the life of a J2EE session. Keep-alive is merely a hack to optimize the case where multiple requests are made to the same host in a short period of time. If you had a UI that comprised 25 individual assets, you wouldn't want to re-establish 25 connections and perform the usual HTTP handshake for each request (and it would be even worse for HTTPS-based connections). HTTP that makes use of keep-alive should still be considered stateless. A completely different approach to "persistent connections" is the style used by COMET servers and chunked encoding: the client immediately establishes a connection to the server and the server holds on to the connection until new data is available to push back to the client. This does, however, impact the number of simultaneous connections that a server can handle, because it ties up threads on the server.
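The browser's per-host cap behaves like a counting semaphore: up to N requests in flight, the rest wait their turn. The same discipline can be applied in application code, which is also one answer to limiting how many calls hit the backend at once. This is a hedged TypeScript sketch (not Flex/browser API code); ConnectionPool and maxInFlight are invented names for illustration:

```typescript
// Illustrative client-side cap on in-flight requests, analogous to the
// browser's 2-connections-per-host limit described above.
class ConnectionPool {
  private active = 0;
  private waiting: Array<() => void> = [];

  constructor(private maxInFlight: number) {}

  private acquire(): Promise<void> {
    if (this.active < this.maxInFlight) {
      this.active++;
      return Promise.resolve();
    }
    // At capacity: park the caller until a slot frees up.
    return new Promise((resolve) => this.waiting.push(resolve));
  }

  private release(): void {
    const next = this.waiting.shift();
    if (next) next(); // hand the slot directly to the next waiter
    else this.active--;
  }

  async run<T>(task: () => Promise<T>): Promise<T> {
    await this.acquire();
    try {
      return await task();
    } finally {
      this.release();
    }
  }
}
```

With `new ConnectionPool(1)` this degenerates into the serial queue discussed earlier; with 2 it mirrors the browser's default per-host behavior.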

1 comment:

Sebi said...

I'm looking at a specific case where I'd like to limit the number of HTTP connections available to mx:HTTPService to 1 but keep the concurrency to multiple (to queue the requests) while allowing only one connection at a time to pass through. Any ideas?

http://tech.groups.yahoo.com/group/flexcoders/message/71851