Take a time-out to learn about asynchronous calling

on June 14, 2012

You call a microflow from your browser, wait about a minute for it to finish, and get the error ‘Connection dropped by intermediate host. HTTP error 504: timeout’. The strange thing is, you can’t reproduce it locally. Sound familiar? We have been getting some reports of this issue, which is actually easy to solve.

HTTP basics

What happens when you execute a microflow from your browser? The web is based on HTTP, which is a request-response protocol. Your browser is the client, the other party is the server (usually a machine in a data center). For every request from the client, the server must send a single response. The sooner the better, as the client is actively waiting for it.

However, this request-response scheme has a couple of drawbacks. Suppose you’ve sent a request and are waiting for the response. You are getting impatient and want to know how the process is coming along. Unfortunately, there is no way to ask the server, because each request gets only one response. The only solution is to spread the communication between client and server over multiple request-response pairs. Work is underway on better protocols, like SPDY and WebSockets, which will solve a lot of the problems that JavaScript-rich sites, such as Mendix apps, face.

We are, however, getting off track. Let’s look at how microflows are executed in Mendix using a single request-response message pair.

Calling microflows – the simple implementation

When you think about a microflow call, you’ll see that it has input from the client and output from the server. It makes sense, then, to use HTTP’s request-response architecture to build a simple implementation of executing a microflow. Your request will look like this: “Execute microflow X with parameters this and that and return the output”. While the server is executing the microflow, your browser waits for the HTTP response that contains the output. This whole process will probably happen within a fraction of a second, and once the response is calculated and sent by the server and received by the client, the deal is done. So far so good.
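To make that concrete, here is a minimal sketch of what the client side of such a synchronous call boils down to. The endpoint, payload and response shape are made up for illustration and are not the actual Mendix client API; the point is simply that one request goes out and the browser waits for the one response.

```typescript
// Minimal sketch of a synchronous microflow call: one request, one response.
// The endpoint and payload are hypothetical, not the real Mendix client API.
async function callMicroflowSync(
  name: string,
  params: Record<string, unknown>
): Promise<unknown> {
  const response = await fetch("/api/execute-microflow", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ microflow: name, params }),
  });
  // Logically the client just waits here until the server has finished the
  // whole microflow and sent back the result -- or until an intermediate host
  // gives up and answers with a 504 instead.
  if (!response.ok) {
    throw new Error(`HTTP error ${response.status}`);
  }
  return response.json();
}
```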

But what if your microflow takes a long time to complete? Like one that changes thousands of objects in the database? Or one that calculates the smallest distance a travelling salesman can travel to visit all cities? This can take hours! Meanwhile, the browser that prompted the microflow is still waiting for the HTTP-response. If you were the browser you might be starting to wonder whether the server ever received your request.

You might be interested to know that you can easily verify the way this works. If you open the developer tools of your browser you can get a network packet overview. This happens to be my Chrome browser talking to a Mendix Server:

[Screenshot: Chrome network log showing a synchronous microflow call]

The upper request in the log is in fact a microflow call, which lasts for about six seconds.

Asynchronicity to the rescue

While the simple implementation of calling microflows is fast, easy to understand and has virtually no overhead, it is not suitable for microflows that need more than just a couple of seconds to execute. Generally speaking, if your microflow can take longer than five seconds, you should call it in an asynchronous fashion.

How? Well, like this:

[Screenshot: setting the call type to asynchronous in the Modeler]

Microflow trigger -> Right click -> Microflow settings

Note that asynchronous calls are assumed to take more than a couple of seconds, so they always show a progress bar.

“Ok, great”, you’ll say. “But why did it go wrong in the first place and why does this fix it?”

The interfering web server

Whenever you run a Mendix Runtime in a hosted environment, you will want to run a general-purpose web server like Apache or Nginx in front of the Mendix Runtime Server. The web server catches all HTTP requests from the client and serves static files by itself (Nginx is especially good at that). All other requests are forwarded (reverse-proxied) to the Mendix Runtime. When the Runtime replies to the web server, the web server passes the reply on to the client.

This works flawlessly when the Runtime replies within a couple of seconds. After that, however, most web servers start to get impatient. After 60 seconds* without a response from the Mendix Runtime, the web server takes matters into its own hands and replies to the client with an HTTP timeout.

When you deploy your project directly from the Modeler, there is no intermediate web server to raise timeouts. All requests go to the Runtime directly. Therefore, not surprisingly, you will not come across timeout errors in local deployment.

* depending on your configuration
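
Where exactly those 60 seconds come from depends on the web server in front of the Runtime. As a rough illustration, assuming Nginx is used as the reverse proxy, a minimal configuration might look like the sketch below; the address is made up, and Nginx’s proxy_read_timeout happens to default to 60 seconds.

```nginx
# Sketch of a typical Nginx reverse-proxy block in front of the Mendix Runtime.
# If the Runtime stays silent longer than proxy_read_timeout while handling a
# request, Nginx gives up and returns a gateway timeout (504) to the client.
location / {
    proxy_pass         http://127.0.0.1:8000;   # hypothetical Runtime address
    proxy_read_timeout 60s;                      # the default value
}
```

Raising the timeout is tempting, but it only postpones the problem; calling the microflow asynchronously, as described below, removes it.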

What asynchronous calling does

Instead of the microflow call and its output being handled by a single HTTP request/response pair, asynchronous calling uses multiple message pairs. The first will give the command (execute microflow X). The server will reply with “Ok, I started X”. From that point on, the client will send “Is X done yet?” every couple of seconds. The reply to these requests will be either “No” or “Yes, here are the results for X!”
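
In client-side terms, the pattern looks roughly like the sketch below. The endpoints, the job-id handshake and the response fields are all assumptions made for illustration, not the actual Mendix client protocol, but the start-then-poll structure is the same.

```typescript
// Sketch of an asynchronous call with polling. Every individual request
// returns quickly, so no reverse proxy ever hits its timeout.
// Endpoints and response fields are hypothetical, not the real Mendix API.

interface PollResult {
  finished: boolean;
  result?: unknown;
}

async function callMicroflowAsync(
  name: string,
  params: Record<string, unknown>
): Promise<unknown> {
  // 1. "Execute microflow X" -- the server replies immediately with a job id.
  const start = await fetch("/api/start-microflow", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ microflow: name, params }),
  });
  const { jobId } = await start.json();

  // 2. "Is X done yet?" -- ask again every couple of seconds until it is.
  while (true) {
    await new Promise((resolve) => setTimeout(resolve, 2000));
    const poll = await fetch(`/api/poll-microflow?jobId=${jobId}`);
    const status: PollResult = await poll.json();
    if (status.finished) {
      return status.result; // "Yes, here are the results for X!"
    }
    // "No" -- keep polling.
  }
}
```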

See for yourself:

[Screenshot: the async call has just started]

[Screenshot: the server says it's not finished]

[Screenshot: the microflow takes a lot of time, we're at five minutes here]

[Screenshot: the microflow is finished, the results are in!]

Conclusion

Now you know when to trigger your microflows in a synchronous way and when to use the asynchronous way.

Asynchronous calling, pros:

  • HTTP messages are handled very quickly.
  • The client is always aware of what the Runtime is doing.

Cons:

  • Overhead of multiple messages per microflow call.
  • A time window in which the microflow is finished, but the client has not yet fetched the results.

If you are not working with a lot of data or heavy computations, synchronous calling will probably suffice. Keep in mind, though, that data tends to accumulate, and microflows that were once quick might slow down. Once you notice that your calls take longer than five seconds, change the call type to asynchronous in the next release of your app.

Bonus question

There is a scenario where long outstanding HTTP-requests (of about 50 seconds) are very useful. Actually, if you go to https://home.mendix.com and inspect the network traffic with your browser’s developer tools, you will see some of them come by with stunning regularity. Can you guess what their use is?

About Jouke Waleson

Product Manager and Team Lead of the Mendix Cloud team.

Comments

  • Note that in both cases the user has to wait 5 minutes to receive output. So in some cases it can be a nice alternative to let the microflow wrap another microflow, which is called using one of the Community Commons ‘async’ microflows.

    The original microflow can then notify the user that the action has started successfully, since the microflow itself returns immediately. Progress can then be polled using the Microflow Timer widget if necessary. This can be a nice alternative if your action is either ‘fire-and-forget’, or if you want to display more detailed progress info, such as which item is currently processed.

  • Hi Michel,

    Could you elaborate some more on the sentence “Progress can then be polled using the Microflow Timer widget if necessary”? How do I set this up in the Modeler? Is there a page somewhere in the documentation or on the forum that explains this in more detail?

  • Hi Theo, 

    There is no documentation for that, but the idea is simple: if you have a complex process, create an entity, for example ‘MyProgress’, with an integer attribute expressing the progress.

    When starting your batch process, create a new MyProgress and show this object in your manually created progress form (a data view with, for example, the progress bar widget from the App Store). In the batch process (which should be started asynchronously using the Community Commons library), update the same MyProgress object each time a few items have been processed.

    In the progress form, add a Microflow Timer widget with an interval of 1000 ms. Now set its microflow property to a new microflow that just refreshes the MyProgress object passed into it.

    That should be all. It’s quite a bit of extra work compared to a simple progress dialog, but the solution is very flexible as well. You can now easily add ‘abort’ functionality, etc.

  • Fantastic! I have implemented a progress monitoring framework like this in Oracle PL/SQL years ago. Why didn’t I come up with this myself? 🙂

    It’s fun to see that design patterns that I used in other technologies like Oracle PL/SQL and Oracle Forms still work in Mendix.

  • Michel, what are the chances of this being implemented into the Model proper? 🙂

  • Model as in: your project Model or as in the Mendix Platform?

    An experienced Mendix Developer should be able to build this in less than an hour I think.