Notes about ShipItCon 2017

Disclaimer: I personally know and have worked with a good portion of the conference organizers and speakers. I label them with an asterisk*.

ShipItCon finally took place last Friday. Given the short time since it was announced, and this being the first edition, it’s quite impressive that it was so well organized. The venue was very good (and fairly unusual for a tech conference), and all the usual things that are easy to take for granted (food, space, projector, sound, etc) worked like clockwork. Kudos to the organizers.

The conference was oriented towards releasing online services, with special emphasis on Continuous Integration/Delivery. I think that focusing a conference on this kind of topic is challenging, as talks need to be generic enough in terms of tools, but narrow enough to be useful. Conferences about a specific technology (like PyCon, RubyConf or LinuxCon) come with a focus built in.

What follows are some notes, ideas and follow-up articles that I took down. Obviously, they are biased towards the kind of things I find more interesting. I’ll try to link the presentation slides if/once they’re available.

  • The keynote by the Romero family was a great story and addressed a lot of points specific to the game industry (like the design challenges). It was also the exception in shipping something other than a service: a game (on Steam and iOS). I played a little Gunman Taco Truck over the weekend!
    • “Ship a game while on a ship”. They released part of the game while on the Queen Elizabeth cruise, crossing the Atlantic.
  • Release often and use feature toggles, detaching the code release from the feature release. This point was made in Frederick Meyer’s talk, and I’ve heard it recently in other places as well (a minimal toggle is sketched right after these notes).
    • Friday night releases make me cringe, but they can make sense if the weekend is the lowest activity point for your customers.
    • Dependency trees grow to be more and more complex, to the point no one understands them anymore and only automated tools can plot them.
    • Challenges in handling data in CI. Use production data? A subset? Fake data? Redacted data? Performance analysis can be tricky.
    • Automate what you care about
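
As a rough illustration of the toggle idea: a feature toggle can be as simple as a flag checked at runtime, so the code can be deployed dark and the feature released later by flipping configuration. A minimal sketch, where the function names and the environment-variable config source are made up for the example:

import os

def old_checkout(cart):
    return 'old flow'  # the current, battle-tested path

def new_checkout(cart):
    return 'new flow'  # merged and deployed, but dark until toggled

def new_checkout_enabled():
    # Hypothetical toggle: read from the environment, so the feature
    # can be released by flipping config, without a new code release
    return os.environ.get('FEATURE_NEW_CHECKOUT', 'off') == 'on'

def checkout(cart):
    if new_checkout_enabled():
        return new_checkout(cart)
    return old_checkout(cart)
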
  • The need for early testing, including integration/system/performance testing, was the theme of Chloe Condon’s talk. Typically, a lot of testing is performed at the “main branch” level (after a feature is merged back) that could be prepared in advance, giving better and faster feedback to developers. Test early, test often.
    • She presented Codefresh, which seems an interesting cloud CI tool aimed at working with containers.
  • Lauri Apple talked about communication and how important READMEs and documentation are for projects, both internal and external. WHAT to build is a key aspect that shouldn’t be overlooked.
    • READMEs should include a roadmap, as well as info on how to install, run and configure the code.
    • This project offers help, review and advice for READMEs. I’ll definitely submit ffind for a review (after I review it and polish it a little bit myself).
    • She talked about the Open Organization Maturity Model, a framework for assessing how open organizations are.
    • A couple of projects at Zalando caught my eye:
      • Patroni, an HA template for PostgreSQL
      • Zalenium, to distribute a Selenium Grid over Docker to speed up Selenium tests.
      • External DNS, to help configure external DNS access (like AWS Route 53 or Cloudflare) for a Kubernetes cluster.
  • If it hurts, do it more frequently. A great quote for Continuous Delivery and automated pipelines. Darin Egan talked about mindfulness principles, how the status quo gets challenged, and how driving change means opposing inertia.
  • The main point of Ingrid Epure‘s talk was the integration of security practices into the development process and the differences between academia and engineering practices.
    • Linters can play a part in enforcing security practices, as well as automating formatting to keep format differences out of the review process.
    • Standardizing the logs is also a great idea, using Canonical Log Lines for online visibility. I’ve talked before about the need to increase logging and to generate logs during the development process.
  • Eric Maxwell talked about the need to standardise the “upper levels” of apps, mainly logging and metrics, making applications (Modern Applications) more aware of their environment (choreography vs orchestration) and abstracted from the underlying infrastructure.
    • He presented habitat.sh, a tool aimed at working with these principles.
    • Packaging the application code and letting the tool do the heavy lifting on the “plumbing”.
  • Eugene Kenny discussed the deployment pipeline at Intercom, and the differences between “the ideal pipeline” and “the reality” of making dozens of deployments every day.
    • For example, fully testing and deploying only the latest change in the pipeline, speeding up deployments at the expense of less separation between changes.
    • Or allow locking the pipeline when things are broken.
    • Follow up article: Continuous Deployment at Instagram
  • Observability is an indispensable property of online services: the ability to check what’s going on in production systems. Damien Marshall* had this concept of graphulsion that I can only share.

He gave some nice ideas on observability through the whole life cycle:

Development:

  • Make reporting logs and metrics simple
  • Account for the effort to do observability work
  • Standardize what to report. The three most useful metrics are Request Rate, Error Rate and Duration per Request (a sketch of reporting them follows these lists).

Deployment:

  • Do capacity planning. Know approximately the limits of your system and calculate the utilization of the system (% of that limit)
  • Ship the observability

Production:

  • Make metrics easy to use
  • Centralise dashboard views across different systems
  • Good alerting is hard. Start and keep it simple.
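
As a rough sketch of what standardized reporting could look like, all three metrics can be derived from one timing record per request. The wrapper below is hypothetical, with plain counters standing in for a real metrics client (statsd, Prometheus, etc):

import time
from collections import defaultdict

metrics = defaultdict(float)  # stand-in for a real metrics client

def observe_request(handler, request):
    # Wrap a request handler and record the three standard metrics
    start = time.time()
    try:
        return handler(request)
    except Exception:
        metrics['error_count'] += 1  # error rate = errors / requests
        raise
    finally:
        metrics['request_count'] += 1  # request rate, over time
        metrics['total_duration'] += time.time() - start  # duration per request
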


  • Riot Games uses custom service generation to create skeletons, standardise good practices and reduce development time. Adam Comeford talked about those practices and how they implemented them.
    • Thinking inside the container.
    • docker-gc is a tool to reduce the size of image repos, as they tend to grow very big very quickly.
  • Jacopo Scrinzi talked about defining Infrastructure as Code, making infrastructure changes through the same process as code (reviewed, subject to source control, etc). In particular, using Terraform and Atlas (now Terraform Enterprise) to make automatic deployments, following CI practices for the infrastructure.
    • Using modules in Terraform simplifies and standardises common systems.
  • The last keynote was about Skypilot, an initiative inside Demonware to deploy a game fully using Docker containers over Marathon/Mesos, in the cloud. It was given by Tom Shaw* and the game was last year’s release of Skylanders. As I’ve worked at Demonware, I know how big an undertaking it is to prepare the launch of a game on dedicated hardware (and how much of that hardware sits underused to avoid risks), so this is a huge improvement.


As the amount of notes I took suggests, I found the conference very interesting and full of ideas that are worth following up on. I really look forward to a ShipItCon 2018 full of great content.


ffind v1.2.0 released!

The new version of ffind, v1.2.0, is available on GitHub and PyPI. This version includes the ability to configure defaults through environment variables and to force case-insensitive searches.

You can upgrade with

    pip install ffind --upgrade

This will be the last version to support Python 2.6.

Happy searching!

ffind v1.0.2 released!

The new version of ffind (1.0.2) is available on GitHub and PyPI. This version includes the ability to execute Python modules and scripts directly, plus some other minor improvements.

Happy developing!

Do not spawn processes on users requests

I’ve been playing an online game that launched recently, and it uses the following idea.

When a user starts a match, the game spawns a process on the server that acts as the opponent, generating actions against the user.

The game had a rough launch, with a lot of problems due to it being played by a lot of people. And, IMHO, a lot of those problems can be traced back to that idea.

I can see it’s a seductive one. If a user generates an interaction with the service that takes time (for example, a match in this game), spawn a process/thread on the server that generates the responses in “real time”. The user is then notified, through polling or push, and can react; the process receives the new information from the user and adjusts its responses.

I know it’s seductive because I had the same idea once, and I was very lucky to have someone around with more experience who showed me how it would break under pressure. It’s not a sane architecture to scale.

Some bad ideas:

  • No limit on processes, meaning the servers can be overwhelmed by context switching. Once you have several thousand processes running on a server, you are in a bad place.
Replication out of control

  • The very definition of state on the server. You need to keep track of processes started on different servers (so no two servers perform the same job). High Availability is impossible, as losing one server means destroying the state in all its processes. For scalability, always aim for stateless servers: read all the data, store the resulting data.
  • Start-up times. Each time a process starts, it takes some time to boot. This can be a problem if processes are constantly being started and stopped, adding overhead to the system. Even starting a thread is not free (and will probably require internal start-up work like connecting to the DB, reading from the cache, etc).
  • Connection explosion. If each process needs to connect to other parts of the infrastructure (DB, logging, cache, etc), you can have a problem with the sheer number of connections.
  • Process monitoring. What if a process gets stuck? A request can be cancelled easily by a web server (if a request takes more than X, kill it), but killing an individual process or thread is more complicated and requires specific tooling.

Alternative: Pool of workers

Generate a defined number of processes that can perform the individual actions that make up a match. Each process will get an action from a queue, execute it, and store the resulting state. Any process can produce an action for any user.

A group of workers can be very efficient

For example, if a match is a set of 20 actions, one happening every minute, the start-match request will introduce 20 actions into a queue, to be extracted at the proper times, with the proper delay on each action. Note that the queue needs a way of delivering delayed messages, and not every message queue has one (in particular, RabbitMQ doesn’t have good support for it). Beanstalkd or Amazon SQS support it.

Or, alternatively, a single action that ends by inserting the next step into the queue with the adequate delay. The action can be as simple as checking whether it should change something and, if not, ending.

The processes extract the next action from the queue and execute it. Note that this minimises the time a worker waits for a new task: each worker is active as much as possible, as long as any user has a pending task ready to be executed.

The number of processes is limited, so you won’t have an explosion. You can test the system and get a good idea of its limit, the point where throughput is no longer enough to execute the actions within a reasonable delay, so you can stop users from starting new matches. This is a better fallback than allowing everyone to start one and then not delivering a good experience.

A priority queue can be put in place, in that case, to inform the user: “You will be able to start your match in ~3 minutes”.

Or you can add more processes/servers to increase the throughput in a predictable manner.
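
The skeleton of this design is small enough to sketch with the standard library alone. Here the delayed queue is in-process and the workers are threads; in a real service the queue would be an external one with delay support (Beanstalkd, SQS) and the workers would be separate processes:

import heapq
import itertools
import threading
import time

queue_lock = threading.Condition()
delayed_actions = []  # heap of (execute_at, tie_breaker, action)
tie_breaker = itertools.count()

def schedule(action, delay_seconds):
    # Queue an action to be executed after the given delay
    with queue_lock:
        when = time.time() + delay_seconds
        heapq.heappush(delayed_actions, (when, next(tie_breaker), action))
        queue_lock.notify()

def worker():
    # Take the next due action from the queue and execute it
    while True:
        with queue_lock:
            while not delayed_actions or delayed_actions[0][0] > time.time():
                queue_lock.wait(timeout=0.1)
            _, _, action = heapq.heappop(delayed_actions)
        action()  # execute outside the lock and store the resulting state

# A fixed pool: the number of workers is known and limited
for _ in range(4):
    threading.Thread(target=worker, daemon=True).start()

# A match as 20 actions, one per minute, queued at start-match time
for step in range(20):
    schedule(lambda s=step: print('match action', s), delay_seconds=60 * step)
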

Alternative: Whole match pregeneration

Another alternative is to generate the whole set of actions up front and return them in one go, displaying them at the proper times on the client side. If any adjustment is required due to the actions of the user, redo all the results from that point on.

This match is proceeding as I have foreseen it

For example, a match starts, and the server returns all 20 actions to the client, which shows them to the user one per minute. In the 3rd minute, the user performs an action, which makes the server recalculate the remainder of the match and return another 17 actions. This is a good strategy if generating actions in advance is possible and few interactions from the user are expected.
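
A sketch of the idea, with made-up action names; making the generation deterministic (seeded) keeps recalculating the remainder cheap:

import random

ACTIONS = ['advance', 'attack', 'defend']

def generate_match(seed, start_minute=0, total_minutes=20):
    # Deterministic given the seed, so the server can cheaply redo the
    # remainder of the match whenever the user intervenes
    rng = random.Random('{}:{}'.format(seed, start_minute))
    return [(minute, rng.choice(ACTIONS))
            for minute in range(start_minute, total_minutes)]

# Match start: return all 20 actions to the client in one response
match = generate_match(seed=42)

# Minute 3: the user acts, so recalculate the remaining 17 actions
match[3:] = generate_match(seed=42, start_minute=3)
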

The bottom line

The key word here is stateless. It is a basic component of a scalable system, and it’s always worth keeping in mind when designing a system that will be used by more than a couple of users.

All you need is cache

Cache is all you need

What is cache

More than a formal definition, I think the best way of thinking about a cache is as a result from an operation (data) that gets saved (cached) for future use.

The cached value should be identifiable by a reasonably small key. Normally this is the call name and its parameters, in some hashed form.

A proper cache has the following three properties:

  1. The result is always replicable. The value can be scrapped without remorse.
  2. Obtaining the result from the cache is faster than generating it.
  3. The same result will be used more than once.

The first property implies that the cache is never the True Source of Data. A cache that’s the True Source of Data is not a cache, it’s a database; and it needs to be treated as such.

The second one implies that retrieving from the cache is useful. If getting the result from the cache is slower (or only marginally faster) than getting it from the True Source of Data, the cache can (and should) be removed. A good candidate for caching is a slow I/O operation or a computationally expensive call. When in doubt, measure and compare.

The third property simply warns against storing values that will be used only once, so the cached value would never be read again. For example, big parts of online games are uncacheable because they change so often that they are read fewer times than they are written.

The simplest cache

The humblest cache is a well-known technique called memoization, which simply stores the results of a call in process memory, to serve them from there on subsequent calls with the same parameters. For example,

NUMBER = 100
def leonardo(number):

    if number in (0, 1):
        return 1

    return leonardo(number - 1) + leonardo(number - 2) + 1

for i in range(NUMBER):
    print('leonardo[{}] = {}'.format(i, leonardo(i)))  

This terribly performing code will return the first 100 Leonardo numbers. But each number is calculated recursively, so by storing the results we can greatly speed it up. The key to store each result under is simply the number.

cache = {}

def leonardo(number):

    if number in (0, 1):
        return 1

    if number not in cache:
        result = leonardo(number - 1) + leonardo(number - 2) + 1
        cache[number] = result

    return cache[number]

for i in range(NUMBER):
    print('leonardo[{}] = {}'.format(i, leonardo(i)))

Normally, though, we’d like to limit the total size of the cache, to keep our program from running wild in memory. The following restricts the size of the cache to only 10 elements, so we’ll need to delete values from the cache to make room for new ones:

def leonardo(number):

    if number in (0, 1):
        return 1

    if number not in cache:
        result = leonardo(number - 1) + leonardo(number - 2) + 1
        cache[number] = result

    ret_value = cache[number]

    while len(cache) > 10:
        # Maximum size allowed, 10 elements
        # this is extremely naive, but it's just an example
        key = next(iter(cache))  # dict keys can't be indexed in Python 3
        del cache[key]

    return ret_value

Of course, in this example every cached value never changes, which may not be the case. There’s further discussion about this issue below.

Cache keys

Cache keys deserve a small note. They are not usually complicated, but they do need to be unique. A non-unique key, which may be produced by improper hashing, will produce cache collisions, returning the wrong data. Be sure that this doesn’t happen.
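
For illustration, a hypothetical key builder could serialise the call canonically and hash it, so keys stay small while remaining unique per call:

import hashlib
import json

def cache_key(func_name, *args, **kwargs):
    # Canonical serialisation (sort_keys) so identical calls always
    # produce identical keys; hashing keeps the key small
    payload = json.dumps([func_name, args, kwargs], sort_keys=True, default=str)
    return hashlib.sha256(payload.encode('utf-8')).hexdigest()

# cache_key('leonardo', 10) always returns the same key;
# cache_key('leonardo', 11) returns a different one
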

Python support

Just for the sake of being useful: in Python 3 there is support for a decorator that caches calls, so the previous code can look like this.

from functools import lru_cache

@lru_cache(maxsize=10)
def leonardo(number):

    if number in (0, 1):
        return 1

    return leonardo(number - 1) + leonardo(number - 2) + 1

so you can use it instead of implementing your own.

The stereotypical web app cache

In the context of web apps, everyone normally thinks of memcached when thinking of caches.

Memcached will, in this stereotypical usage, use some allocated memory to cache database results or full HTML pages, identified by appropriate unique keys, speeding up the whole operation. There are a lot of tools integrating it with web frameworks, and it can be clustered, increasing the total amount of memory and the reliability of the system.

In a production environment with more than one server, the cache can be shared among the different servers, so the generation of content happens only once in the whole cluster and the result can then be read by every consumer. Just be sure to honour the first property, making it possible to obtain the value from the True Source of Data at any point, from any server.

This is a fantastic setup, and worth using in services. Memcached can also be replaced by other tools like Redis, but the general operation is similar.
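
The classic pattern looks something like this; a sketch assuming the pymemcache client library and a hypothetical generate_page function:

from pymemcache.client.base import Client

client = Client(('localhost', 11211))

def generate_page(page_id):
    return '<html>expensive page {}</html>'.format(page_id)  # stand-in

def get_page(page_id):
    key = 'page:{}'.format(page_id)
    cached = client.get(key)  # fast path: already generated
    if cached is not None:
        return cached.decode('utf-8')
    page = generate_page(page_id)
    client.set(key, page, expire=15 * 60)  # keep it for 15 minutes
    return page
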

But there are more ways to cache!

Assuming a typical distributed deployment for a production web service, there are a lot of places where a cache can be introduced to speed things up.

The service described here has one DB (or a cluster) containing the True Source of Data, several servers with a web server channeling requests to several backend workers, and a load balancer on top of that as the entry point of the service.


Typically, the farther away from the True Source of Data we introduce a cache, the less work we generate for the system and the more efficient the cache is.

Let’s describe the possible caches, from closest to the True Source of Data to farthest away.

Cache inside the DataBase

(other than the internal cache of the database itself)

Some values can be stored directly in the database, derived from the True Source of Data, in a more manageable form.

A good example of this is periodic reports. If some data is produced during the day, and a report is generated every hour, that report can be stored in the database as well. Subsequent accesses will read the already-compiled report, which should be less expensive than crunching the numbers again.
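
A small sketch of the pattern, using sqlite3 and a hypothetical crunch_numbers aggregation:

import sqlite3

conn = sqlite3.connect('reports.db')
conn.execute('CREATE TABLE IF NOT EXISTS hourly_report'
             ' (hour TEXT PRIMARY KEY, body TEXT)')

def crunch_numbers(hour):
    return 'report for {}'.format(hour)  # stand-in for the expensive part

def get_hourly_report(hour):
    row = conn.execute('SELECT body FROM hourly_report WHERE hour = ?',
                       (hour,)).fetchone()
    if row:
        return row[0]  # already compiled, no crunching needed
    body = crunch_numbers(hour)
    conn.execute('INSERT INTO hourly_report VALUES (?, ?)', (hour, body))
    conn.commit()
    return body
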


Another useful way of caching values is to use replication. Databases can support this, making it possible to read from different nodes at the same time and increasing read throughput.

For example, using Master-Slave replication on MySQL, the True Source of Data is on the Master, but the information gets replicated to the slaves, which can be used to increase the read throughput.

Here the third property of cache shows up, as this is only useful if we read the data more often than we write it. Write throughput is not increased.

Cache in the Application Level

The juiciest part of a service is normally at this level, and this is where the most alternatives are available.

From the raw results of the database queries, to the completed HTML (or JSON, or any other format) resulting from the request, or any other meaningful intermediate result, here is where the application of caches can be most creative.

Memory caches can be set either internally per worker, per server, or  externally for intermediate values.

  • Cache per worker. This is the fastest option, as the overhead will be minimal, being internal memory of the process serving the requests. But it will be multiplied by the number of workers per box, and will need to be generated individually. No extra maintenance needs to be done, though.
  • External cache. An external service, like memcached. This shares the cache across the whole service, but access times are bounded by the network. There are extra maintenance costs in setting up the external service.
  • Cache per server. An intermediate option: setting up a cache service like memcached on each server. Local, faster access shared among all the workers on the same box, with the small overhead of using a protocol.

Another possibility worth noting in some cases is to cache on the hard drive instead of in RAM. Reading from the local hard drive can be faster than accessing external services, in particular if the external service is very slow (like a connection to an external network) or if the data needs to be heavily processed before being used. Hard drive caches can also be helpful for high volumes of data that won’t fit in memory, or for reducing start-up time, if starting a worker requires complex operations that produce a cacheable outcome.

Cache in the Web Server

Widely available web servers like Apache or Nginx have integrated caches. These are typically less flexible than application-layer caching and need to fit common patterns, but they are simple to set up and operate.

There’s also the possibility of returning an empty response with status code 304 Not Modified, indicating that the data hasn’t changed since the last time the client requested it. This can also be triggered from the application layer.
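
As a sketch of triggering it from the application layer (using Flask purely as an example framework; the timestamp and generate_report are made up):

from flask import Flask, request

app = Flask(__name__)

LAST_MODIFIED = 'Wed, 23 Aug 2017 10:00:00 GMT'  # hypothetical timestamp

def generate_report():
    return 'the full, expensive-to-generate body'  # stand-in

@app.route('/report')
def report():
    # If the client already has the current version, answer with an
    # empty 304 instead of regenerating and resending the body
    if request.headers.get('If-Modified-Since') == LAST_MODIFIED:
        return '', 304
    return generate_report(), 200, {'Last-Modified': LAST_MODIFIED}
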

Static data should be, as much as possible, stored as files and returned directly by the web server, as web servers are optimised for that use case. This allows the strategy of storing responses as static files and serving them through the web server. This, in an offline fashion, is the strategy behind static website generators like Nikola or Jekyll.

For sites that deal with huge numbers of requests that should return the same data, like online newspapers or Wikipedia, a cache server like Varnish can be set up to cache them, and it may be able to act as a load balancer as well. This level of cache may serve the data already compressed with gzip, for maximum performance.

Cache in the Client

Of course, the fastest request is the one that doesn’t happen, so any information that can be stored in the client, avoiding a call altogether, will greatly speed up an application. Achieving real responsiveness requires taking this into account. This is a different issue from caching proper, but I translated an article a while ago about tips and tricks for improving user experience in web applications here.

The dreaded cache invalidation

The elephant in the room when talking about caches is “cache invalidation”. This can be an extremely difficult problem to solve in distributed environments, depending on the nature of the data.

The basic problem is very easy to describe: “What happens when the cache contains different data than the True Source of Data?”

Sometimes this won’t be a problem. In the first example, the cached Leonardo numbers simply can’t differ from the True Source of Data. If the value is cached, it is the correct value. The same goes for prime numbers, a calendar for 2016, or last month’s report. If the cached data is static, happy days.

But most of the data we’d like to cache is not really static. Good candidates for caching are values that rarely change: for example, your Facebook friends, or your schedule for today. These are relatively static, but they can change (a friend can be added, a meeting cancelled). What happens then?

The most basic approach is to refresh the cache periodically, deleting the cached value after a predetermined time. This is very straightforward and normally supported natively by cache tools, which allow storing a value together with an expiration time. For example, assuming the user has a locally cached copy of friends’ avatars, only ask for them again every 15 minutes. Sure, for up to 15 minutes a friend’s new avatar won’t be shown and the old one will be displayed instead, but that’s probably not a big deal.

On the other hand, the position on a leaderboard in a competitive video game, or the result of a live match in the World Cup, is probably much more sensitive to such a delay.

Even worse, we’ve seen that some options involve having more than one cache (cache per server or per worker; or redundant copies for reliability purposes). If two caches contain different data, the user may alternate between old and new data, which is confusing at best and produces inconsistent results at worst.

This is a very real problem in applications working with eventually consistent databases (like the Master-Slave configuration mentioned above). If a single operation involves writing a value and then reading the same value back, the read could return a different (older) value, potentially creating inconsistent results or corrupting the data. Two very close operations modifying the same data on behalf of two users can produce the same effect.

Periodically refreshing the cache can also produce bad effects in production environments, like all the refreshes synchronising to happen at the same time. This is typical in systems that refresh the day’s data at exactly 00:00. At exactly that time, all the workers will try to refresh all the data at once, orchestrating a perfectly coordinated distributed attack against the True Source of Data. It is better to avoid perfectly round numbers and use some randomness instead, or to set expiry times relative to the last time the data was requested from the True Source of Data, avoiding synchronised access.
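
Adding jitter can be as simple as this, reusing the expire parameter (and the client, key and page names) from the memcached sketch above:

import random

BASE_TTL = 15 * 60  # fifteen minutes

def ttl_with_jitter():
    # Spread expiries over a two-minute window so cached values
    # don't all refresh at exactly the same moment
    return BASE_TTL + random.randint(0, 120)

client.set(key, page, expire=ttl_with_jitter())
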

This avalanche effect can also happen when the cache cluster changes (adding or removing nodes, for example when one node fails). These operations can invalidate, or make unavailable, large amounts of cached content, producing an avalanche of requests to the True Source of Data. There are techniques to mitigate this, like consistent hash rings, but they can be a nightmare if first faced in production.

Manually invalidating the cache when the data changes in the True Source of Data is a valid strategy, but it needs to invalidate the value in all the caches, which is normally only feasible for external cache services. You simply can’t access the internal memory of a worker on a different server. Also, depending on the ratio of invalidations to reads of a cached value, it can be counterproductive, as it produces an overhead of calls to the cache services. It also normally requires more development work, as it needs better knowledge of the data flow and of when a value in the cache is no longer valid. Sometimes that’s very subtle and not evident at all.

Conclusion

Caching is an incredibly powerful tool to improve performance in software systems. But it can also be a huge pain due to all those subtle issues.

So, some tips for dealing with caches:

  • Understand the data and how it’s consumed by the user. A value that changes more often than it gets read is not a good cache candidate.
  • Ensure the system has a proper cache cycle. At the very least, understand how the cache flows and what the implications of a cache failure are.
  • There are a lot of ways and levels to cache. Use the most adequate to make caching efficient.
  • Cache invalidation can be very difficult. Sorry about that.

ffind v0.8 released

Good news everyone!

The new version of ffind (0.8) is available on GitHub and PyPI. This version includes performance improvements, a man page and fuzzy search support.

Enjoy!

Future as a developer and the ever changing picture

A few weeks ago I came across a couple of articles by Marco Arment that share the theme of the current pace of change within the development community as a source of stress, making it difficult to stay up to date. After all, one gets tired of learning a new framework or language every six months. It gets to a point where it’s not fun or interesting anymore.

It seems like two different options are presented, that are available for developers after some time:

  • Keep up, meaning that you adopt rapidly each new technology
  • Move to other areas, typically management

Both are totally valid options, though I’ve already said in this blog that I don’t like it when good developers move to different areas (to me it’s like a surgeon deciding she’s had enough after a few years and moving on to manage the hospital). Obviously, each person has every right to choose their own career path.

But I think it’s mostly based on a biased and incorrect view of the field of technology and the real pace of change.

In recent years, there has been an explosion of technologies, in particular for the web. Ruby on Rails almost feels like it was introduced at the same time as COBOL. NodeJS seemed to be in fashion for a while. The same with MongoDB or jQuery.

We all know that being stressed is not a great way to learn

In the last 6 or 7 years there has been an incredible explosion in terms of open source fragmentation. Probably because of GitHub (and other online repos) and the increase in communication over the Internet, the bar to create a web framework and offer it to the world has been lowered so much that a lot of projects that previously would have gone unnoticed have gained exposure. The general effect is positive, but it comes with the negative side that every year there is a revolution in technologies, forcing everyone to catch up and learn the brand new tool that is supposedly the best for current development, increasing the churn of buzzwords.

But all this is mostly an illusion. We developers tend to laugh at the common “minimum 3+ years of experience in Swift”, but we still buy into the notion that we should be experts in a particular language, DB or framework from day one. And, of course, in the one in demand today, or we are just outdated dinosaurs who should retire.

Software development is a young field, full of young people. That’s great in a lot of aspects, but we need to appreciate experience, even if it comes from using a different technology. It may not look like it, but there are still a lot of projects done in “not-so-fancy” technologies. That includes really old stuff like Fortran or COBOL, but also C++, Java, Perl, PHP or Ruby.

Technologies get established by a combination of features, maturity, community and a little luck. But once they are established, they’re quite resilient and don’t go away easily. They stay useful for quite a long time. Right now it’s not that difficult to pick a tool that is almost guaranteed to be around for the next 10-15 years. Also, most of the really important stuff is totally technology agnostic: things like writing clean code, structure, debuggability, communication, team work, transforming abstract ideas into concrete implementations, etc. That simply does not go away.

Think about this: iOS development started in 2008. Smartphones are radically different beasts from the ones available 6 years ago, probably the environment that has changed the most. The basics are the same, though. And even though Swift has been introduced this year, it’s based on the same principles. Every year there have been tweaks, changing APIs, new functionalities. But the basic ideas are still the same. Today a new web development using LAMP is totally viable. Video games still rely on C++ and OpenGL. Java is still heavily used. I constantly use ideas mainly developed in the 70s, like the UNIX command line or Vim.

Just because every day we get tons of news about new startups building applications on new paradigms doesn’t mean those don’t coexist with “older” technologies.

Of course, there are new tricks to learn, but that’s a day-by-day additive effort. Real revolutions and changes of paradigm are rare, and normally not a good sign. Changing from MySQL to PostgreSQL shouldn’t be considered a major career change. Seeking some stability in the tools you use should be seen as a good move.

We developers love to stress the part about learning something new every day and constantly challenging ourselves, but that should be put in perspective, allowing time to breathe. We’ve created a lot of pressure on ourselves in terms of constantly pushing new ideas, investigating side projects and devoting 100% of our time to software. That’s not only unrealistic; it’s not good.

You only have to breathe. And just worry about doing good work and enjoying learning.