The Sprintr Like Button: Tales of a 16-fold Performance Gain



March 30, 2012

Everyone likes to like. And everybody likes speed. However, in Sprintr, the like feature turned out to be the enemy of speed. In the next Sprintr deployment, we will increase the speed of loading messages by 400% and finally add an unlike button as well. Compared to the first like button implementation, messages in Sprintr are now loaded 16 times faster. In this post we will explain how we achieved this performance gain. Your Mendix app might benefit from these techniques as well!

TL;DR

In Sprintr we gained a 16-fold performance improvement in loading messages by using the following, generally applicable techniques:

  • Remove all (!) virtual attributes and replace them with object events, custom widgets or security.
  • Introduce visibility-by-security to show or hide attributes based on XPath constraints.
  • Apply data denormalization; use multiple copies of an object to simplify security.

Virtual attributes: Caught by reality


Image 1 – Naive Message model

Image 1 displays the Sprintr domain model for the message and like mechanism. Likes are stored in the Like entity, which connects a Message with a User. If a connection can be made from a Message through a Like to a User, that user has liked the message. Pretty straightforward.

In the original model, the caption of the like button was implemented using a virtual attribute. The attached microflow searches for a Like that connects the current user and the current Message. If such an object is found, the microflow returns ‘Unlike’. Otherwise, the user apparently hasn’t liked the message yet, so we return ‘Like’. As a special case, if the current user is the composer of the message, an empty caption is returned, since a user should not be able to like his/her own message.

Sadly, virtual attributes are costly in terms of performance when retrieved in large numbers, so the message walls started to become quite slow. The sluggishness of virtual attributes is primarily caused by the fact that they are re-evaluated on every retrieve. So for every message that is retrieved, the database performs an additional query that joins the user, message and like tables. This becomes expensive if done thousands of times.
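The cost difference is the classic N+1 query problem. The sketch below illustrates it in plain Python with an in-memory stand-in for the database; all names and numbers are hypothetical, not Sprintr's actual data:

```python
# Hypothetical in-memory data mimicking the Message and Like tables.
messages = [{"id": m, "text": f"msg {m}"} for m in range(1000)]
likes = {(user, msg) for user in range(50) for msg in range(0, 1000, 7)}

current_user = 3

# Virtual-attribute style: one extra "query" per retrieved message (N+1).
def caption_per_message(msg_id):
    # Each call stands in for a separate database round-trip.
    return "Unlike" if (current_user, msg_id) in likes else "Like"

captions_naive = [caption_per_message(m["id"]) for m in messages]

# Join style: fetch this user's likes once, then decide per message in memory.
liked_by_me = {msg for (user, msg) in likes if user == current_user}
captions_joined = ["Unlike" if m["id"] in liked_by_me else "Like"
                   for m in messages]

assert captions_naive == captions_joined
```

Both produce identical captions, but the first issues one lookup per message while the second needs a single join-like pass.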

Furthermore, virtual attributes render database retrieval schemas useless. Normally the client requests only the attributes of an entity that it needs at that moment: a datagrid showing 3 columns of an entity with 20 attributes will only fetch those 3 attributes from the database. However, when there is a virtual attribute, the core pre-fetches all attributes of an object, since they might be needed by the microflow behind the virtual attribute. So as soon as a single virtual attribute exists in an entity, all attributes are retrieved from the database and sent to the client instead of just a few. In Sprintr, the message wall became about four times slower and consumed three times more bandwidth when the virtual attributes were added to the model.

Generally speaking, one should avoid virtual attributes wherever possible, especially if they are used in larger datasets such as grids or graphs, or when the calculation itself is expensive. In my experience, 80% of the virtual attributes in an arbitrary model can easily be avoided. Many virtual attributes just combine some other attributes and can be rendered client-side by the Format String widget instead (see this post for an explanation). Other attributes can be calculated during a commit event or by an update microflow that is triggered in the right places. Although this introduces some extra complexity, performance will greatly benefit.
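The commit-event pattern can be sketched as follows: instead of recomputing a derived value on every retrieve, store it whenever the object changes. This is a minimal illustration in Python, with hypothetical names, not an actual Mendix event handler:

```python
# Sketch: a derived attribute computed once on commit instead of on retrieve.
class Member:
    def __init__(self, first, last):
        self.first_name = first
        self.last_name = last
        self.full_name = ""   # persisted attribute, not virtual
        self._on_commit()

    def _on_commit(self):
        # Runs in the commit event, so retrieves stay a plain SELECT
        # with no extra computation per row.
        self.full_name = f"{self.first_name} {self.last_name}"

    def rename(self, first, last):
        self.first_name, self.last_name = first, last
        self._on_commit()

m = Member("Ada", "Lovelace")
assert m.full_name == "Ada Lovelace"
m.rename("Grace", "Hopper")
assert m.full_name == "Grace Hopper"
```

The trade-off is exactly the one mentioned above: the write path must remember to trigger the update everywhere the inputs can change.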

Two kinds of virtual attributes are hard to factor out: attributes that depend on the current time (for example: “message posted 4 hours ago”) and attributes that somehow depend on the current user, such as the caption of the like button in Sprintr. In Sprintr, we solved the ‘ago’ issue by doing that calculation client-side, using the Format String widget.
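The ‘ago’ calculation itself is simple once it runs on the client: only the stored timestamp travels over the wire, and the relative label is derived locally. A sketch (in Python for illustration; in Sprintr this logic lives in the browser inside the Format String widget):

```python
from datetime import datetime, timedelta

def ago(posted: datetime, now: datetime) -> str:
    """Render a timestamp as a relative 'ago' label."""
    delta = now - posted
    if delta < timedelta(minutes=1):
        return "just now"
    if delta < timedelta(hours=1):
        return f"{delta.seconds // 60} minutes ago"
    if delta < timedelta(days=1):
        return f"{delta.seconds // 3600} hours ago"
    return f"{delta.days} days ago"

now = datetime(2012, 3, 30, 12, 0)
assert ago(datetime(2012, 3, 30, 8, 0), now) == "4 hours ago"
```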

Security to the rescue

The like button caption is trickier to optimize, but we were finally able to factor out this virtual attribute by introducing a dedicated access rule on the attribute. To display the ‘like’ button, the value of the caption attribute is fixed to ‘Like’ in the domain model. The visibility of that caption can then be defined by assigning the following security constraints to the attribute:

[not(Sprintr.Like_Message/Sprintr.Like/Sprintr.Like_User='[%CurrentUser%]')]
[Sprintr.Message_Composer != '[%CurrentUser%]']

Or, in plain English: you are only allowed to see ‘Like’ if you have not liked the message yet and you are not the composer of the message.

The advantage of this approach is that it is far faster than using virtual attributes, as the security constraints translate into efficient SQL joins and do not result in additional queries. Furthermore, no like-state synchronization needs to be modeled, since security is always applied. This solution removes the bandwidth overhead of virtual attributes and is at least four times faster.

This visibility-by-security approach allowed us to toggle the visibility of several attributes in an efficient way and has been applied to several other message mechanisms as well: a conditional edit button, the vote status of ideas and the voted/not-voted status of polls. However, this approach makes the number of security rules grow rapidly; we ended up with 11 rules on a single entity.

This approach becomes tricky when the visibility of sensitive data needs to be governed by complex security rules: not only does the condition itself need to be expressed, but the general visibility rules for the message as a whole need to be repeated for this attribute as well. Since the basic message security in Sprintr is quite complex, the new rules became complex as well (a message can be visible because you are someone’s colleague, a project participant or a feedback submitter, and all these cases have their own edge cases). In the end, about 109 lines of security XPath were defined on messages, and querying the dashboard messages took about 3 seconds. Which, in my opinion, is actually surprisingly fast given the amount of data and the complexity of the query. But there had to be a way to do better.

Data Denormalization: More data, more speed!

So we experimented with a method used by many web-scale frameworks: data denormalization. The idea is that by copying all the user-related data for each user, we can simplify the security. A single message then no longer exists once in the database; instead, there is a copy for each user that can read the message. First results showed that this approach allowed messages to load 4 times faster. So, what did we do?


Image 2 – Denormalized messages

Basically, we added an additional entity, MessageProxy, to our domain model. This message proxy contains an association to a user and to a message. Upon every change or submit of a message, we create or update a message proxy for every user able to read the message. This introduces a lot of administrative logic when users join or leave a project et cetera, but as a result our security became as simple as ‘you are allowed to read a message proxy if it is yours, and you are allowed to read a message if you have a message proxy of that message’.
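The fan-out on commit can be sketched like this (plain Python; `readers_of` stands in for Sprintr's real, much more complex visibility rules, and all names are hypothetical):

```python
# Sketch of the denormalization step: on every message commit, create or
# update one proxy per user who may read the message.

def readers_of(message, users):
    # Stand-in for the real visibility rules (colleagues, projects, ...).
    return [u for u in users if u != message["composer"]]

def commit_message(message, users, proxies):
    for user in readers_of(message, users):
        key = (message["id"], user)
        proxy = proxies.setdefault(key, {"message": message["id"],
                                         "user": user})
        # User-specific state lives on the proxy, e.g. the button caption.
        proxy["caption"] = "Unlike" if user in message["liked_by"] else "Like"

users = ["alice", "bob", "carol"]
msg = {"id": 1, "composer": "alice", "liked_by": {"bob"}}
proxies = {}
commit_message(msg, users, proxies)

assert proxies[(1, "bob")]["caption"] == "Unlike"
assert proxies[(1, "carol")]["caption"] == "Like"
assert (1, "alice") not in proxies  # the composer gets no like button
```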

In the rendering, each message is a combination of a proxy and its related message: common attributes such as timestamps are stored on the message, while user-specific data such as the like/unlike button caption is stored on the proxy and can easily be updated when the user votes.

This approach does have its downsides: a message might take up 100 times more space in the database (if many users can read it), and writes are much slower, since changing a message might require updating 100 proxy objects as well. Regarding the storage issue, we can afford not to care too much: it is not that much data in the end, especially compared to document or file uploads. The slower writes would hurt UI responsiveness, but that can easily be solved by performing the proxy updates asynchronously in the background (using the Community Commons functions executeMicroflowAsync or executeMicroflowInBatches). Only the message proxy of the current user is updated synchronously; otherwise the user would still see the previous proxy version.
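This split write path can be sketched as follows: the current user's proxy is updated in the request itself, while the remaining proxies are refreshed by background workers (standing in for executeMicroflowAsync). All names and values are hypothetical:

```python
from concurrent.futures import ThreadPoolExecutor

def update_proxy(proxies, key, **fields):
    proxies.setdefault(key, {}).update(fields)

def on_like(proxies, msg_id, current_user, other_readers, pool, like_count):
    # Synchronous: the liker must see the new state right away.
    update_proxy(proxies, (msg_id, current_user),
                 caption="Unlike", likes=like_count)
    # Asynchronous: other readers' copies may lag a moment.
    return [pool.submit(update_proxy, proxies, (msg_id, u), likes=like_count)
            for u in other_readers]

proxies = {}
with ThreadPoolExecutor(max_workers=2) as pool:
    futures = on_like(proxies, 1, "alice", ["bob", "carol"], pool,
                      like_count=5)
    for f in futures:
        f.result()  # wait here only to keep the example deterministic

assert proxies[(1, "alice")]["caption"] == "Unlike"
assert proxies[(1, "carol")]["likes"] == 5
```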

Data denormalization resulted in another 400% performance boost in loading messages (in fact, even more, as the model had grown more complex in the meantime), since it greatly simplifies security.