Speed in a World of Latency
We want to avoid this! Credit: Randall Munroe, XKCD, Latency

At General Task, we want to build the best place to find what’s next in your workday. This means many things (dealing with meetings, pull requests, etc.), but a major part of the problem is pulling in tasks from each of the task managers your work uses (e.g. Linear, JIRA, Trello). It also means providing the user with a snappy, satisfying experience in our app. These two goals are very much at odds with one another, because the external services introduce inherent latency: they are run by outside parties, and every interaction with them requires a network request.

Essentially, our service cannot be faster than the task services themselves. However, the latency of some of these services is far too high for our use case, so we must use several tricks to speed up the experience from the user’s perspective.

Multithreading

First, we want to reduce the “real” latency. As stated above, we cannot be faster than the service itself, but we can and should bring the load time as close as possible to the service’s own latency. This is more difficult than it sounds, because loading all of a particular user’s tasks is rarely a single query. Instead, depending on the service, we must fetch the tasks themselves, comment data, user data, and organization data. Naively making these requests in series would result in 4x the latency of a single request!

Luckily, we built our backend in Go. Go makes it extraordinarily easy to create and run lightweight threads (goroutines). With minor modifications to our logic, we can switch from the serial approach to a parallel one, fetching each piece of data in its own goroutine. The latency to fetch data is then no longer the sum of the individual request latencies, but simply the latency of the slowest request.
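To make this concrete, here is a minimal sketch of the serial and parallel approaches. This is not our actual sync code: the fetch function, the list of services, and the 500ms latency are stand-ins, and we use the golang.org/x/sync/errgroup package to collect errors from the goroutines.

```go
package main

import (
	"context"
	"fmt"
	"time"

	"golang.org/x/sync/errgroup"
)

// fetch simulates one round trip to an external service.
func fetch(ctx context.Context, what string, latency time.Duration) error {
	select {
	case <-time.After(latency):
		return nil
	case <-ctx.Done():
		return ctx.Err()
	}
}

// loadSerial makes the four requests one after another, so the total
// latency is the SUM of the four round trips.
func loadSerial(ctx context.Context) error {
	for _, what := range []string{"tasks", "comments", "users", "organization"} {
		if err := fetch(ctx, what, 500*time.Millisecond); err != nil {
			return err
		}
	}
	return nil
}

// loadParallel runs each request in its own goroutine, so the total
// latency is roughly the latency of the SLOWEST request.
func loadParallel(ctx context.Context) error {
	g, ctx := errgroup.WithContext(ctx)
	for _, what := range []string{"tasks", "comments", "users", "organization"} {
		what := what // capture the loop variable (needed before Go 1.22)
		g.Go(func() error {
			return fetch(ctx, what, 500*time.Millisecond)
		})
	}
	return g.Wait() // returns the first non-nil error, if any
}

func main() {
	ctx := context.Background()

	start := time.Now()
	_ = loadSerial(ctx)
	fmt.Println("serial:  ", time.Since(start)) // ~2s: four 500ms round trips

	start = time.Now()
	_ = loadParallel(ctx)
	fmt.Println("parallel:", time.Since(start)) // ~500ms: the max, not the sum
}
```

Running this prints roughly 2 seconds for the serial version and roughly 500ms for the parallel one: the sum of the latencies versus the maximum.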

This seems like a minor change, but it has taken the latency of our refresh calls from 9+ seconds down to under 3 today!

Optimistic Updates

Second, we want to reduce the user’s “perceived” latency even further. This is particularly important when modifying tasks in General Task. We don’t want the user to wait seconds after modifying an external task to see the outcome; that would be a terrible user experience. With that said, there is nothing we can do about the ~1 second delay between sending the modification request and receiving the response.

In the early days, we accepted this fact for what it was. We wanted to show the user a state of the world that was consistent with the external sources at all times, and the cost was a ~1 second lag whenever modifying outside sources. However, these requests failed less than 0.1% of the time, usually due to user error. We were sacrificing the entire feel of our product for an edge case in which we could not make the expected updates.

To speed up perceived modification, we used optimistic updates: we show the user the state of the universe as if each of their requests had already succeeded. For example, if a user creates a task, we show that task as fully created, even though the request has yet to go through on the external site. While the task does not yet exist externally (until the request completes), we still let the user interact with it like any other task (i.e. we support modification, deletion, etc.). If the request turns out to be unsuccessful, we show the user a small popup to alert them of the issue.
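The pattern is easiest to see in code. Below is a minimal sketch of optimistic task creation. In our product this logic lives in the frontend; the sketch is written in Go for consistency with the example above, and the Task type, the syncCreate function, and the one-second delay are all hypothetical stand-ins.

```go
package main

import (
	"errors"
	"fmt"
	"sync"
	"time"
)

type Task struct {
	ID    string
	Title string
}

var (
	mu         sync.Mutex
	localTasks []Task // the state the user sees; updated immediately
)

// syncCreate stands in for the ~1 second round trip to the external service.
func syncCreate(t Task) error {
	time.Sleep(time.Second)
	if t.Title == "" {
		return errors.New("external service rejected the task")
	}
	return nil
}

// createTaskOptimistically shows the task to the user right away, then
// reconciles with the external service in the background, rolling back
// (and, in the real app, showing a small popup) if the request fails.
func createTaskOptimistically(t Task) {
	mu.Lock()
	localTasks = append(localTasks, t) // optimistic: assume the request succeeds
	mu.Unlock()

	go func() {
		if err := syncCreate(t); err != nil {
			mu.Lock()
			for i := range localTasks {
				if localTasks[i].ID == t.ID {
					localTasks = append(localTasks[:i], localTasks[i+1:]...)
					break
				}
			}
			mu.Unlock()
			fmt.Println("couldn't create task:", err)
		}
	}()
}

// snapshot returns the IDs of the tasks the user currently sees.
func snapshot() []string {
	mu.Lock()
	defer mu.Unlock()
	ids := make([]string, 0, len(localTasks))
	for _, t := range localTasks {
		ids = append(ids, t.ID)
	}
	return ids
}

func main() {
	createTaskOptimistically(Task{ID: "1", Title: "Write blog post"}) // will succeed
	createTaskOptimistically(Task{ID: "2", Title: ""})                // will be rejected
	fmt.Println("user sees immediately:", snapshot()) // both tasks, instantly

	time.Sleep(2 * time.Second) // let the background syncs finish in this demo
	fmt.Println("after reconciliation:", snapshot()) // the rejected task is gone
}
```

The key property is that the user-visible state changes before the network round trip, and is only rolled back in the rare failure case.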

This simple approach allowed us to massively reduce the perceived latency of each call. As an added bonus, this logic is handled by our frontend, not our backend, which means we do not even need to account for the latency of our own backend! Even though our product lives in a world of latency, we are able to provide an instantaneous experience for the user.

Conclusion

It is not always possible to reduce latency to zero. However, with a few tricks and a good understanding of the systems you’re working with, you can reduce the latency your users experience to effectively nothing. If you’re curious to try out the high-performance task management experience we discussed, check it out at https://generaltask.com!
