Happy 25th Anniversary, Python

25 years ago, on 20th February 1991, Python 0.9.0 was released publicly… I absolutely love it and use it every day, and it seems to be as successful as ever…


To another great 25 years! Cheers!

 

Do not spawn processes on user requests

I’ve recently been playing an online game, launched not long ago, that uses the following idea.

When a user starts a match, the server spawns a process that acts as the opponent, generating the actions against the user.

The game had a rough launch, with a lot of problems due to it being played by a lot of people. And, IMHO, a lot of the problems can be traced back to that idea.

I can see it’s a seductive one. If a user generates an interaction with the service that takes time (for example, a match in this game), spawn a process/thread on the server that generates the responses in “real time”. The user will then be notified, through polling or push, and can react to it. The process will receive the new information from the user and adjust the responses.

I know it is seductive because I had the same idea once, and I was very lucky to have someone around with more experience who showed me how it would break under pressure. It’s not a sane architecture to scale.

Some bad ideas:

  • No limit on the number of processes, meaning the servers can be overwhelmed by context switching. Once you have several thousand processes running on a server, you are in a bad place.
Replication out of control

  • It is the very definition of state on the server. You need to keep track of which processes are started on which servers (so no two servers perform the same job). High availability is impossible, as losing one server means destroying the state of all its processes. For scalability, always aim for stateless servers: read all the needed data, store the resulting data.
  • Start-up times. Each time a process starts, there’s some time to boot. This can be a problem if processes are constantly being started and stopped, adding overhead to the system. Even starting a thread is not free (and will probably require internal setup work like connecting to the DB, reading from the cache, etc.).
  • Connection explosion. If each process needs to connect to other parts of the infrastructure (DB, logging, cache, etc.), you can have a problem with the number of connections.
  • Process monitoring. What if a process gets stuck? A request can be cancelled easily by a web server (if a request takes more than X, kill it), but an individual process or thread is more complicated and requires specific tooling.

Alternative: Pool of workers

Generate a defined number of processes that can perform the individual actions that make up a match. Each process will get an action from a queue, execute it, and store the resulting state. Any process can produce an action for any user.

A group of workers can be very efficient

For example, if a match is a set of 20 actions, each one happening every minute, the start match request will introduce 20 actions in a queue, to be extracted at the proper time, introducing the proper delay on each action. Note that the queue needs a way of delivering delayed messages; not every messaging queue can do it (in particular, RabbitMQ doesn’t have good support for it). Beanstalkd or Amazon SQS support it.

Or, alternatively, introduce a single action that ends by inserting the next step in the queue with the adequate delay. The action can be as simple as checking whether it should change something and, if not, ending.

The processes will extract the next action from the queue and execute it, as in the sketch below. Note that this minimises the time a worker spends waiting for a new task: each worker is kept busy as long as any user has a pending task ready to be executed.
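
To make the idea more concrete, here is a minimal single-machine sketch using only the standard library. In a real deployment the queue would be an external service with native delayed delivery (Beanstalkd, SQS…) and the action would read and store the match state; the names and numbers here are just illustrative assumptions.

import queue
import threading
import time

# Items are tuples (execute_at, match_id, step); the smallest timestamp is served first.
actions = queue.PriorityQueue()

def start_match(match_id, steps=20, delay=60):
    # The "start match" request just enqueues the delayed actions and returns.
    now = time.time()
    for step in range(steps):
        actions.put((now + step * delay, match_id, step))

def worker():
    while True:
        execute_at, match_id, step = actions.get()
        wait = execute_at - time.time()
        if wait > 0:
            # A proper delayed queue would hold the message for us instead.
            time.sleep(wait)
        # Read the stored match state, apply the action, store the resulting state.
        print('match {} step {} executed'.format(match_id, step))
        actions.task_done()

# A fixed, limited pool of workers shared by every match.
for _ in range(4):
    threading.Thread(target=worker, daemon=True).start()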

The number of processes is limited, so you won’t have an explosion. You can test the system and get a good idea of the limit: the point where your throughput is not good enough to execute the actions within a reasonable delay, so you can stop users from starting a new match. This is a better fallback option than allowing everyone to start one and then not giving a good experience.

A priority queue can be put in place, in that case, to inform the user: “You will be able to start your match in ~3 minutes”.

Or you can add more processes/servers to increase the throughput in a predictable manner.

Alternative: Whole match pregeneration

Another alternative is actually generating the whole set of actions and returning them in the first go, displaying them at the proper times on the client side. If any adjustment is required due to the actions of the user, redo all the results from that time on.

This match is proceeding as I have foreseen it

For example, a match starts and returns the 20 server actions to the client, which shows them to the user one each minute. In the 3rd minute, the user performs an action, which makes the server recalculate the remainder of the match and return another 17 actions. This is a good strategy if generating actions in advance is possible and few interactions from the user are expected.
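
A rough sketch of the idea (the action contents and the random choice are placeholders, not the real game logic):

import random

def generate_actions(total_steps=20, from_step=0):
    # Pregenerate one server action per minute, from a given step onwards.
    return [{'minute': step, 'action': random.choice(['attack', 'defend', 'wait'])}
            for step in range(from_step, total_steps)]

plan = generate_actions()                  # 20 actions returned when the match starts
# ... at minute 3 the user does something that changes the outcome ...
plan[3:] = generate_actions(from_step=3)   # recalculate and resend the remaining 17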

The bottom line

The main word here is stateless. It is a basic component of a scalable system, and it’s always worth keeping in mind when designing a system to be used by more than a couple of users.

All you need is cache

Cache is all you need

What is a cache

More than a formal definition, I think that the best way of thinking about a cache is as a result from an operation (data) that gets saved (cached) for future use.

The cached value should be identifiable with a key that is reasonably small. Normally this is the call name and its parameters, in some hashed form.

A proper cache has the following three properties:

  1. The result is always replicable. The value can be scrapped without remorse.
  2. Obtaining the result from the cache is faster than generating it.
  3. The same result will be used more than once.

The first property implies that the cache is never the True Source of Data. A cache that’s the True Source of Data is not a cache, it’s a database, and needs to be treated as such.

The second one implies that retrieving from the cache is actually useful. If getting the result from the cache is slower (or only marginally better) than getting it from the True Source of Data, the cache can (and should) be removed. A good candidate for caching is a slow I/O operation or a computationally expensive call. When in doubt, measure and compare.

The third property simply warns against storing values that will be used only once, so the cached value would never be read again. For example, big parts of online games are uncacheable because they change so often that they are read fewer times than they are written.

The simplest cache

The humblest cache is a well known technique called memoization, which simply stores in process memory the results of a call, to serve them from there on subsequent calls with the same parameters. For example:

NUMBER = 100
def leonardo(number):

    if number in (0, 1):
        return 1

    return leonardo(number - 1) + leonardo(number - 2) + 1

for i in range(NUMBER):
    print('leonardo[{}] = {}'.format(i, leonardo(i)))  

This terribly inefficient code will return the first 100 Leonardo numbers. But each number is calculated recursively, so by storing the results we can greatly speed things up. The key to store each result under is simply the number.

cache = {}

def leonardo(number):

    if number in (0, 1):
        return 1

    if number not in cache:
        result = leonardo(number - 1) + leonardo(number - 2) + 1
        cache[number] = result

    return cache[number]

for i in range(NUMBER):
    print('leonardo[{}] = {}'.format(i, leonardo(i)))

Normally, though, we’d like to limit the total size of the cache, to prevent our program from running wild in memory. This restricts the size of the cache to only 10 elements, so we’ll need to delete values from the cache to allow new values to be cached:

def leonardo(number):

    if number in (0, 1):
        return 1

    if number not in cache:
        result = leonardo(number - 1) + leonardo(number - 2) + 1
        cache[number] = result

    ret_value = cache[number]

    while len(cache) > 10:
        # Maximum size allowed, 10 elements
        # this is extremely naive, but it's just an example
        key = next(iter(cache))
        del cache[key]

    return ret_value

Of course, in this example the cached values never change, which may not always be the case. There’s further discussion about this issue below.

Cache keys

Cache keys deserve a small note. They are not usually complicated, but the key point is that they need to be unique. A non-unique key, which may be produced by improper hashing, will produce cache collisions, returning the wrong data. Be sure that this doesn’t happen.
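
For instance, a simple way (an illustration, not any particular library’s scheme) of building a key from the call name and its parameters:

import hashlib

def cache_key(func_name, *args, **kwargs):
    # repr() keeps different parameters distinct; hashing keeps the key small.
    raw = '{}:{!r}:{!r}'.format(func_name, args, sorted(kwargs.items()))
    return hashlib.sha256(raw.encode()).hexdigest()

print(cache_key('leonardo', 42))  # the same call always produces the same key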

Python support

Just for the sake of being useful: in Python 3 there is support for a decorator to cache calls, so the previous code can look like this:

from functools import lru_cache

@lru_cache(maxsize=10)
def leonardo(number):

    if number in (0, 1):
        return 1

    return leonardo(number - 1) + leonardo(number - 2) + 1

so you can use it instead of implementing your own.
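
The decorated function also exposes a couple of helpers, handy to check that the cache is actually pulling its weight:

leonardo(99)
print(leonardo.cache_info())   # hits, misses, maxsize and current size
leonardo.cache_clear()         # empty the cache if needed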

The stereotypical web app cache

In the context of web apps, everyone normally thinks of memcached when they think of cache.

Memcached will, in this stereotypical usage, use some allocated memory to cache database results or full HTML pages, identified by an appropriate unique key, speeding up the whole operation. There are a lot of tools integrated with web frameworks, and it can be clustered, increasing the total amount of memory and the reliability of the system.

In a production environment, with more than one server, the cache can be shared among different servers, so the generation of content only happens once in the whole cluster and can then be read by every consumer. Just be sure to honour the first property, making it possible to obtain the value from the True Source of Data at any point, from any server.

This is a fantastic setup, and worth using in services. Memcached can also be replaced by other tools like Redis, but the general operation is similar.
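
As an illustration, a minimal cache-aside pattern using the python-memcached client; the memcached address, the key layout and the fetch_user_profile call are assumptions for the example:

import memcache

mc = memcache.Client(['127.0.0.1:11211'])

def fetch_user_profile(user_id):
    # Hypothetical expensive call to the True Source of Data (the database).
    return {'id': user_id, 'name': 'user {}'.format(user_id)}

def get_user_profile(user_id):
    key = 'profile:{}'.format(user_id)
    profile = mc.get(key)
    if profile is None:
        profile = fetch_user_profile(user_id)  # regenerate from the True Source of Data
        mc.set(key, profile, time=300)         # keep the copy around for 5 minutes
    return profile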

But there are more ways to cache!

Assuming a typical distributed deployment of a production web service, there are a lot of places where a cache can be introduced to speed things up.

The described service will have one DB (or a cluster) containing the True Source of Data, several servers with a web server channeling requests to several backend workers, and a load balancer on top of that as the entry point of the service.


Typically, the farther away from the True Source of Data we introduce a cache, the less work we produce for the system and the more efficient the cache is.

Let’s describe possible caches from closest to the True Source of Data to farthest away.

Cache inside the DataBase

(other than the internal cache of the database itself)

Some values can be stored directly in the database, deriving them from the True Source of Data, in a more manageable form.

A good example of that are periodic reports. If some data is produced during the day, and a report is generated every hour, that report can be stored in the database as well. Subsequent accesses will read the already-compiled report, which should be less expensive than crunching the numbers again.
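
A small sketch of that idea with sqlite3; the table layout and the compile_report function are assumptions for illustration:

import sqlite3

db = sqlite3.connect('service.db')
db.execute('CREATE TABLE IF NOT EXISTS hourly_reports (hour TEXT PRIMARY KEY, body TEXT)')

def compile_report(hour):
    # Hypothetical: crunch the raw data produced during that hour.
    return 'report for {}'.format(hour)

def get_report(hour):
    row = db.execute('SELECT body FROM hourly_reports WHERE hour = ?', (hour,)).fetchone()
    if row:
        return row[0]  # already compiled, cheap to read
    body = compile_report(hour)
    db.execute('INSERT INTO hourly_reports VALUES (?, ?)', (hour, body))
    db.commit()
    return body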


Another useful way of caching values is to use replication. This can be supported by databases, making it possible to read from different nodes at the same time, increasing throughput.

For example, using Master-Slave replication in MySQL, the True Source of Data is on the Master, but that information gets replicated to the slaves, which can be used to increase the read throughput.

Here the third property of cache shows up, as this is only useful if we read the data more often than we write it. Write throughput is not increased.

Cache in the Application Level

The juiciest part of a service is normally at this level, and here is where the most alternatives are available.

From the raw results of the database queries, to the completed HTML (or JSON, or any other format) resulting from the request, to any other meaningful intermediate result, this is where the application of caches can be most creative.

Memory caches can be set either internally per worker, per server, or  externally for intermediate values.

  • Cache per worker. This is the fastest option, as the overhead will be minimal, being internal memory of the process serving the requests. But it will be multiplied by the number of workers per box, and will need to be generated individually. No extra maintenance needs to be done, though.
  • External cache. An external service, like memcached. This will share the cache among the whole service, but access will be bound by network delays. There are extra maintenance costs in setting up the external service.
  • Cache per server. An intermediate option: normally, setting up on each server a cache service like memcached. Local, faster access shared among all workers on the same box, with the small overhead of using a protocol.

Another possibility worth noting in some cases is to cache on the hard drive, instead of in RAM. Reading from the local hard drive can be faster than accessing external services, in particular if the external service is very slow (like a connection to an external network) or if the data needs to be heavily processed before being used. Hard drive caches can also be helpful for high volumes of data that won’t fit in memory, or for reducing startup time, if starting a worker requires complex operations that produce a cacheable outcome.
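
A tiny sketch of a hard drive cache, hashing the key to get a file name; the directory and the JSON format are assumptions:

import hashlib
import json
from pathlib import Path

CACHE_DIR = Path('/tmp/app-cache')
CACHE_DIR.mkdir(parents=True, exist_ok=True)

def _path_for(key):
    return CACHE_DIR / hashlib.sha256(key.encode()).hexdigest()

def disk_cache_get(key):
    path = _path_for(key)
    if path.exists():
        return json.loads(path.read_text())
    return None  # a miss: regenerate from the True Source of Data and store it

def disk_cache_set(key, value):
    _path_for(key).write_text(json.dumps(value))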

Cache in the Web Server

Widely available web servers like Apache or Nginx have integrated caches. This is typically less flexible than application-layer caching, and needs to fit into common patterns, but it’s simple to set up and operate.

There’s also the possibility of returning an empty response with status code 304 Not Modified, indicating that the data hasn’t changed since the last time the client requested it. This can also be triggered from the application layer.
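
A minimal sketch of doing that from the application layer with Flask (the framework choice and the report contents are assumptions; any framework that exposes the request headers works the same way):

import hashlib
from flask import Flask, Response, request

app = Flask(__name__)

@app.route('/report')
def report():
    body = 'report contents, generated or taken from a cache'
    etag = hashlib.md5(body.encode()).hexdigest()
    if request.headers.get('If-None-Match') == etag:
        return Response(status=304)  # the client copy is still valid, send nothing
    return Response(body, headers={'ETag': etag})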

Static data should be, as much as possible, stored as files and returned directly from the web server, as web servers are optimised for that use case. This allows the strategy of storing responses as static files and serving them through the web server. This, in an offline fashion, is the strategy behind static website generators like Nikola or Jekyll.

For sites that deal with huge numbers of requests that should return the same data, like online newspapers or Wikipedia, a cache server like Varnish can be set up to cache them, and it may be able to act as a load balancer as well. This level of cache can store the data already compressed with gzip, for maximum performance.

Cache in the Client

Of course, the fastest request is the one that doesn’t happen, so any information that can be stored in the client, avoiding a call at all, will greatly speed up an application. To achieve real responsiveness this needs to be taken into account. This is a different issue than caching, but I translated an article a while ago about tips and tricks for improving user experience on web applications here.

The dreaded cache invalidation

The elephant in the room when talking about cache is “cache invalidation”. This can be an extremely difficult problem to solve in distributed environments, depending on the nature of the data.

The basic problem is very easy to describe: “What happens when the cache contains different data than the True Source of Data?”

Sometimes this won’t be a problem. In the first example, the cached Leonardo numbers just can’t be different from the True Source of Data. If the value is cached, it will be the correct value. The same would happen with prime numbers, a calendar for 2016, or last month’s report. If the cached data is static, happy days.

But most of the data that we’d like to cache is not really static. Good candidates for caching are values that rarely change: for example, your Facebook friends, or your schedule for today. These are relatively static, but they can change (a friend can be added, a meeting cancelled). What happens then?

The most basic approach is to refresh the cache periodically, for example deleting the cached value after a predetermined time. This is very straightforward and normally supported natively by cache tools, which allow storing a value with an expiration time. For example, assuming the user has a cached copy of friends’ avatars locally available, only ask again every 15 minutes. Sure, there will be up to 15 minutes during which a friend’s new avatar won’t be available and the old one will be displayed, but that’s probably not a big deal.

On the other hand, the position on a leaderboard for a competitive video game, or the result of a live match in the World Cup, is probably much more sensitive to such a delay.

Even worse, we’ve seen that some options involve having more than one cache (cache per server or per worker; or redundant copies for reliability purposes). If two caches contain different data, the user may alternate between old and new data, which will be confusing at best and produce inconsistent results at worst.

This is a very real problem in applications working with eventually consistent databases (like the mentioned Master-Slave configuration). If a single operation involves writing a value and then reading the same value, the read could return a different value (the old one), potentially creating inconsistent results or corrupting the data. Two very close operations modifying the same data from two users could also produce this effect.

Periodically refreshing the cache can also produce bad effects in a production environment, like all the refreshes synchronising to happen at the same time. This is typical in systems that refresh the data for the day at exactly 00:00: at that moment all workers will try to refresh all the data at once, orchestrating a perfectly coordinated distributed attack against the True Source of Data. It is better to avoid perfectly round numbers and use some randomness instead, or to set expiry times relative to the last time the data was requested from the True Source of Data, avoiding synchronised access.
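
For example, something as small as adding jitter to the expiry time avoids the stampede (the numbers are just an illustration):

import random

def cache_ttl(base_seconds=15 * 60, jitter_seconds=120):
    # Each cached value expires at a slightly different moment, so the
    # refreshes don't all hit the True Source of Data at once.
    return base_seconds + random.uniform(0, jitter_seconds)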

This avalanche effect can also happen when the cache cluster changes (adding or removing nodes, for example when one node fails). These operations can invalidate, or make unavailable, high numbers of cached entries, producing an avalanche of requests to the True Source of Data. There are techniques to mitigate this, like Consistent Hash Rings, but they can be a nightmare to deal with in production.

Manually invalidating the cache when the data changes in the True Source of Data is a valid strategy, but it needs to invalidate the entries in all the caches, which is normally only feasible for external cache services: you simply can’t access the internal memory of a worker on a different server. Also, depending on the rate of invalidation relative to reads of the cached value, it can be counterproductive, as it will produce an overhead of calls to the cache services. It also normally requires more development work, as it needs a better knowledge of the data flow and of when the value in the cache is no longer valid. Sometimes that’s very subtle and not evident at all.

Conclusion

Caching is an incredibly powerful tool to improve performance in software systems. But it can also be a huge pain due to all those subtle issues.

So, some tips to deal with cache:

  • Understand the data and how it’s consumed by the user. A value that changes more often than it gets read is not a good cache candidate.
  • Ensure the system has a proper cache cycle. At the very least, understand how the cache flows and what the implications of a cache failure are.
  • There are a lot of ways and levels to cache. Use the most adequate ones to make caching efficient.
  • Cache invalidation can be very difficult. Sorry about that.

Gorgon: A simple task multiplier analysis tool (e.g. loadtesting)

Load testing is something very important in my job. I spend a decent amount of time checking how performant some systems are.

There are some good tools out there (I’ve used Tsung extensively, and ab is brilliant for small checks), but I found that it’s difficult to create flows, where you produce several requests in succession and the input of each depends on the values returned by previous calls.

Also, load test tools are normally focused on HTTP requests, which is fine most of the time, but sometimes limiting.

So, I got the idea of creating a small framework to take a Python function, replicate it N times and measure the outcome, without the hassle of dealing manually with processes, threads, or spreading it out on different machines.

The source code can be found on GitHub and it can be installed through PyPI. It is Python 3.4 and Python 2.7 compatible.

pip install gorgon
Gorgons were mythological monsters whose hair was made of snakes.

Gorgon

To use Gorgon, just define the function to be repeated. It should be a function with a single parameter that will receive a unique number. For example:

    
    def operation_http(number):
        # Imports inside your function
        # are required for cluster mode
        import requests
        result = requests.get(get_transaction_id_url)
        unique_id = get_id_from(result)
        result = requests.get(make_transaction(unique_id))
        if process_result(result) == OK:
            return 'SUCCESS'
        return 'FAIL'

There’s no need to limit the operation to HTTP requests or other I/O operations:

    def operation_hash(number):
        import hashlib
        # This is just an example of a 
        # computationally expensive task
        m = hashlib.sha512()
        for _ in range(4000):
            m.update('TEXT {}'.format(number).encode())
        digest = m.hexdigest()
        result = 'SUCCESS'
        if number % 5 == 0:
            result = 'FAIL'
        return result

Then, create a Gorgon with that operation and generate one or more runs. Each run will run the function num_operations times.

        from gorgon import Gorgon
        NUM_OPS = 4000
        test = Gorgon(operation_http)
        test.go(num_operations=NUM_OPS, num_processes=1, 
                num_threads=1)
        test.go(num_operations=NUM_OPS, num_processes=2, 
                num_threads=1)
        test.go(num_operations=NUM_OPS, num_processes=2, 
                num_threads=4)
        test.go(num_operations=NUM_OPS, num_processes=4, 
                num_threads=10)

You can get the results of the whole suite with small_report (simple aggregated results) or with html_report (graphs).

    Printing small_report result
    Total time:  31s  226ms
    Result      16000      512 ops/sec. Avg time:  725ms Max:  3s  621ms Min:   2ms
       200      16000      512 ops/sec. Avg time:  725ms Max:  3s  621ms Min:   2ms

Example of graphs: just dump the result of html_report to an HTML file and take a look with a browser (it uses the Google Chart API).

Gorgon HTML report example

Cluster

By default, Gorgon uses the local computer to create all the tasks. To distribute the load even more, and use several nodes, add machines to the cluster.

        NUM_OPS = 4000
        test = Gorgon(operation_http)
        test.add_to_cluster('node1', 'ssh_user', SSH_KEY)
        test.add_to_cluster('node2', 'ssh_user', SSH_KEY, 
                             python_interpreter='python3.3')
        ...
        # Run the test now as usual, over the cluster
        test.go(num_operations=NUM_OPS, num_processes=1, 
                num_threads=1)
        test.go(num_operations=NUM_OPS, num_processes=2, 
                num_threads=1)
        test.go(num_operations=NUM_OPS, num_processes=2, 
                num_threads=4)
        print(test.small_report())

Each of the nodes of the cluster should have Gorgon installed on the default Python interpreter, unless the parameter python_interpreter is set. Using the same Python interpreter on all the nodes and the controller is recommended.
The paramiko module is a dependency in cluster mode for the controller, but not for the nodes.

As a limitation, all the code to be tested needs to be contained in the operation function, including any imports of external modules. Remember to install all the dependencies for the code on the nodes.

Available on GitHub

The source code and more info can be found on GitHub, and it can be installed through PyPI. So, if any of this sounds interesting, go there and feel free to use it! Or change it! Or make suggestions!

Happy loadtesting!

Compendium of Wondrous Links vol X


More interesting reads worth checking out


Tech


About development

  • I’m still confused by this “learning to code is cool” idea, as this article discusses. I’m not sure this is a bad time to be a beginner. Yes, it’s true that too many options are confusing, but the amount and quality of instructional material at the moment is absolutely incredible. Beginners right now are a thousand times more capable of doing stuff than 20 years ago, just because of the increase in productivity and clarity.
  • Tools don’t solve the web problems. Related to the first about the constant new tools for working on a web development, and their problems.
  • This tweet chain describes quite well the constant roller coaster of developing code.
  • Be friends with failure. The master has failed more times than the beginner has even tried.

Leonardo numbers

I have my own set of numbers!

Because Fibonacci numbers are quite abused in programming, here is a similar concept.


L(0) = L(1) = 1

L(n) = L(n - 2) + L(n - 1) + 1

My first impulse is to describe them in a recursive way:

NUMBER = 100

def leonardo(n):
    if n in (0, 1):
        return 1
    return leonardo(n - 2) + leonardo(n - 1) + 1 

for i in range(NUMBER):
    print('leonardo[{}] = {}'.format(i, leonardo(i)))

But this is not a very efficient way to calculate them, as each number recalculates all the previous ones, recursively.

Here memoization works beautifully:


cache = {}

def leonardo(n):
    if n in (0, 1):
        return 1

    if n not in cache:
        result = leonardo(n - 1) + leonardo(n - 2) + 1
        cache[n] = result

    return cache[n]

for i in range(NUMBER):
    print('leonardo[{}] = {}'.format(i, leonardo(i)))

Take into account, though, that it uses more memory, and that calculating the Nth element still requires calculating all the previous ones.

I saw this on Programming Praxis, and I really like the solution proposed by Graham in the comments, using a generator.

def leonardo_numbers():
    a, b = 1, 1
    while True:
        yield a
        a, b = b, a + b + 1

The code is really clean.
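
For example, printing the first numbers with it:

from itertools import islice

for i, value in enumerate(islice(leonardo_numbers(), 10)):
    print('leonardo[{}] = {}'.format(i, value))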

Compendium of Wondrous Links vol IX


Welcome back to this totally non-regular compilation of interesting reads. Enjoy!


 

 

Do you want to see the whole series?