Gorgon: A simple task multiplier analysis tool (e.g. loadtesting)

Load testing is very important in my job. I spend a decent amount of time checking how performant some systems are.

There are some good tools out there (I’ve used Tsung extensively, and ab is brilliant for small checks), but I found it difficult to create flows, where you produce several requests in succession and the input of each call depends on the values returned by previous ones.

Also, load testing tools are normally focused on HTTP requests, which is fine most of the time, but sometimes it is limiting.

So, I got the idea of creating a small framework to take a Python function, replicate it N times and measure the outcome, without the hassle of dealing manually with processes, threads, or spreading it out on different machines.

The source code can be found on GitHub and it can be installed through PyPI. It is compatible with Python 3.4 and Python 2.7.

pip install gorgon
Gorgons were mythological monsters with snakes for hair.

Gorgon

To use Gorgon, just define the function to be repeated. It should be a function with a single parameter, which will receive a unique number. For example:

    
    def operation_http(number):
        # Imports inside your function
        # are required for cluster mode
        import requests
        result = requests.get(get_transaction_id_url)
        unique_id = get_id_from(result)
        result = requests.get(make_transaction(unique_id))
        if process_result(result) == OK:
            return 'SUCCESS'
        return 'FAIL'

There’s no need to limit the operation to HTTP requests or other I/O operations:

    def operation_hash(number):
        import hashlib
        # This is just an example of a 
        # computationally expensive task
        m = hashlib.sha512()
        for _ in range(4000):
            m.update('TEXT {}'.format(number).encode())
        digest = m.hexdigest()
        result = 'SUCCESS'
        if number % 5 == 0:
            result = 'FAIL'
        return result

Then, create a Gorgon with that operation and generate one or more runs. Each run will execute the function num_operations times.

        from gorgon import Gorgon
        NUM_OPS = 4000
        test = Gorgon(operation_http)
        test.go(num_operations=NUM_OPS, num_processes=1, 
                num_threads=1)
        test.go(num_operations=NUM_OPS, num_processes=2, 
                num_threads=1)
        test.go(num_operations=NUM_OPS, num_processes=2, 
                num_threads=4)
        test.go(num_operations=NUM_OPS, num_processes=4, 
                num_threads=10)

You can get the results of the whole suite with small_report (simple aggregated results) or with html_report (graphs).
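
For example, printing the simple aggregated report after the runs (a minimal usage sketch; the same call also appears in the cluster example below):

    print(test.small_report())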

    Printing small_report result
    Total time:  31s  226ms
    Result      16000      512 ops/sec. Avg time:  725ms Max:  3s  621ms Min:   2ms
       200      16000      512 ops/sec. Avg time:  725ms Max:  3s  621ms Min:   2ms

Example of the graphs: just dump the result of html_report as HTML to a file and take a look with a browser (it uses the Google Chart API).
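
A minimal sketch of saving the report, assuming html_report() returns the HTML as a string:

    # Assumption: html_report() returns the full HTML page as a string
    with open('report.html', 'w') as report_file:
        report_file.write(test.html_report())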

Gorgon HTML report example

Cluster

By default, Gorgon uses the local computer to create all the tasks. To distribute the load even more, and use several nodes, add machines to the cluster.

        NUM_OPS = 4000
        test = Gorgon(operation_http)
        test.add_to_cluster('node1', 'ssh_user', SSH_KEY)
        test.add_to_cluster('node2', 'ssh_user', SSH_KEY, 
                             python_interpreter='python3.3')
        ...
        # Run the test now as usual, over the cluster
        test.go(num_operations=NUM_OPS, num_processes=1, 
                num_threads=1)
        test.go(num_operations=NUM_OPS, num_processes=2, 
                num_threads=1)
        test.go(num_operations=NUM_OPS, num_processes=2, 
                num_threads=4)
        print(test.small_report())

Each node of the cluster should have Gorgon installed for its default Python interpreter, unless the python_interpreter parameter is set. Using the same Python interpreter on all the nodes and the controller is recommended.
The paramiko module is a dependency for the controller in cluster mode, but not for the nodes.

As a limitation, all the code to be tested needs to be contained in the operation function, including any imports of external modules. Remember to install all the dependencies for the code on the nodes.

Available on GitHub

The source code and more info can be found on GitHub, and it can be installed through PyPI. So, if any of this sounds interesting, go there and feel free to use it! Or change it! Or make suggestions!

Happy load testing!

The most annoying thing about online advertising

Right now there is a lot of discussion about how invasive and intrusive online advertising is, including the effects it has on performance.

The funny part?

I am still getting advertising that hardly qualifies as “targeted” or “interesting”.


By now the whole internet should have a lot of contextual information on where I spend my time, what pages I read, and what interests me. What kind of ads do I see? The same products I see on broadcast TV. Just the usual cars, insurance and cleaning products. Plus the spammy “you’re the 10,000,000th visitor!”, lose-weight ads and celebrity click-bait.

Sure, a lot of them are localised, so I get information about things happening in the country I live in. And sometimes I see software products, though most of them are not really the kind I’m interested in.

But I have found far more interesting products advertised through “reach your audience” channels, like podcasts or even sponsored feeds.

If capturing all kinds of information about someone in a creepy, invasive way doesn’t produce highly relevant, attractive results, what kind of future does advertising have?

Typewriters

I have to say that sometimes I am incredibly surprised by some things. The latest one has been seeing an old typewriter transformed into a working USB keyboard.


This baffles me, because I am old enough to remember a world with typewriters.

Well, I’m not that old. I only used a typewriter very briefly, in my school years, but I was close enough to people who used them, most particularly my grandfather.

My grandfather was a journalist and writer, and for most of his life he used a typewriter for quite a long time every single day. I remember the sound vividly. And all the inconveniences.

The most obvious one is how unforgiving each page is. Any small correction or typo would make you redo a whole page; 80% of his time was just copying the same text over again. To avoid this, you could hire someone to do it, presenting them an annotated draft, but that was expensive and didn’t completely remove the risk of introducing new typos.

Paper is also a very bad way of preserving information. Keeping a good reference of unfinished work is difficult, especially for old drafts. There have been too many cases of lost work just because the original manuscript was lost or destroyed, to the point of being a cliché in movies.

And then there are all the physical inconveniences. A typewriter weighs a lot, needs ink, is uncomfortably loud, needs a supply of paper, and is full of moving parts that can break.

I understand that some typewriters are gorgeous, and worth displaying as sculptures. But I don’t get why anyone would want to use one on a regular basis nowadays.


Oh, and my grandfather did come to write on a computer. He was probably the person least inclined towards technology I’ve ever met, but he saw the potential and abandoned the typewriter. Though it took him a while to adjust, he said he couldn’t have written his latest books without it.

Compendium of Wondrous Links vol X


More interesting reads worth checking out:


Tech


About development

  • I’m still confused by this “learning code is cool” idea, as this article says. I’m not sure that this is a bad time to be a beginner. Yes, it’s true that too many options are confusing, but the amount and quality of instructional material at the moment is absolutely incredible. Beginners right now are a thousand times more capable of doing stuff than 20 years ago, just because of the increase in productivity and clarity.
  • Tools don’t solve the web’s problems. Related to the first point, about the constant stream of new tools for web development, and their problems.
  • This tweet chain describes quite well the constant roller coaster of developing code.
  • Be friends with failure. The master has failed more times than the beginner has even tried.

Leonardo numbers

I have my own set of numbers!

Because Fibonacci numbers are quite abused in programming, here is a similar concept: the Leonardo numbers.


L(0) = L(1) = 1
L(n) = L(n-2) + L(n-1) + 1

My first impulse is to describe them in a recursive way:

def leonardo(n):
    if n in (0, 1):
        return 1
    return leonardo(n - 2) + leonardo(n - 1) + 1

NUMBER = 20  # how many Leonardo numbers to print (example value)
for i in range(NUMBER):
    print('leonardo[{}] = {}'.format(i, leonardo(i)))

But this is not a very efficient way of calculating them, as each call recursively recalculates all the previous numbers.

Here, memoization works beautifully:


cache = {}

def leonardo(n):
    if n in (0, 1):
        return 1

    if n not in cache:
        result = leonardo(n - 1) + leonardo(n - 2) + 1
        cache[n] = result

    return cache[n]

for i in range(NUMBER):
    print('leonardo[{}] = {}'.format(i, leonardo(i)))
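
As a side note, the same memoization pattern can be written with functools.lru_cache from the standard library; this variant is just a sketch, not part of the original post:

from functools import lru_cache

@lru_cache(maxsize=None)
def leonardo(n):
    # The decorator caches every computed value,
    # so each Leonardo number is calculated only once
    if n in (0, 1):
        return 1
    return leonardo(n - 2) + leonardo(n - 1) + 1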

That comes at the cost of using more memory, and calculating the Nth element without calculating all the previous ones is still costly.

I saw this on Programming Praxis, and I really like the solution proposed by Graham in the comments, using an iterator:

def leonardo_numbers():
    a, b = 1, 1
    while True:
        yield a
        a, b = b, a + b + 1

The code is really clean.
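
To get the first few values out of the generator, itertools.islice works nicely (a small usage sketch, not from the original post):

from itertools import islice

NUMBER = 10  # example value: how many Leonardo numbers to print
for i, value in enumerate(islice(leonardo_numbers(), NUMBER)):
    print('leonardo[{}] = {}'.format(i, value))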

Compendium of Wondrous Links vol IX


Welcome back to this totally non-regular compilation of interesting reads. Enjoy!


Do you want to see the whole series?

ffind v0.8 released

Good news everyone!

The new version of ffind (0.8) is available on GitHub and PyPI. This version includes performance improvements, a man page, and fuzzy search support.

Enjoy!