Memcached 1.6.15 released: high-performance distributed caching system

Memcached is a high-performance multithreaded event-based key/value cache-store intended to be used in a distributed system.

memcached allows you to take memory from parts of your system where you have more than you need and make it accessible to areas where you have less than you need.

memcached also allows you to make better use of your memory. Consider two deployment scenarios:

  1. Each node is completely independent (top).
  2. Each node can make use of memory from other nodes (bottom).

The first scenario illustrates the classic deployment strategy, but it is wasteful in two ways: the total cache size is only a fraction of the actual capacity of your web farm, and considerable effort is required to keep the cache consistent across all of those nodes.

With memcached, all of the servers look into the same virtual pool of memory. This means that a given item is always stored on, and always retrieved from, the same location in your entire web cluster.
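As a rough sketch of that property (illustrative only, with a hypothetical server list and a plain modulo hash rather than the consistent hashing real memcached clients typically use), each web server maps a key to the same cache node by hashing it against the shared pool:

    import hashlib

    # Hypothetical pool of memcached nodes shared by every web server.
    SERVERS = ["cache1:11211", "cache2:11211", "cache3:11211"]

    def server_for(key: str) -> str:
        """Deterministically map a key to one node in the shared pool.

        Every web server hashes against the same server list, so a given
        item is always stored on and fetched from the same node.
        """
        digest = hashlib.md5(key.encode("utf-8")).hexdigest()
        return SERVERS[int(digest, 16) % len(SERVERS)]

    # Any web server in the cluster resolves these keys to the same nodes.
    print(server_for("user:1234:profile"))
    print(server_for("session:abcdef"))

Because every node computes the same mapping, there is no need to keep duplicate copies of an item consistent across the farm.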

Also, as the demand for your application grows to the point where you need more servers, the amount of data that must be regularly accessed generally grows as well. A deployment strategy where these two aspects of your system scale together just makes sense.

The example above uses only two web servers for simplicity, but the property holds as the number increases. If you had fifty web servers, you would still have a usable cache size of 64MB in the first scenario, but in the second you would have 3.2GB of usable cache.
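To make the arithmetic concrete, here is a toy calculation assuming each of the fifty web servers contributes 64MB of cache memory:

    servers = 50
    per_server_mb = 64

    # Scenario 1: each node caches independently, so any single request
    # can only ever hit that node's own 64MB (and items are duplicated).
    independent_cache_mb = per_server_mb

    # Scenario 2: all nodes look into one shared virtual pool.
    pooled_cache_mb = servers * per_server_mb

    print(independent_cache_mb)  # 64 MB of usable cache
    print(pooled_cache_mb)       # 3200 MB, i.e. roughly 3.2GB of usable cache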

Changelog v1.6.15

Fixes

  • proxy: Fix buffer overflow and prevent recv() of 0 byte
  • proxy: allow await() to be called recursively
  • proxy: mcp.request(cmd, [val | resp])
  • proxy: hacky method of supporting noreply/quiet
  • proxy: add ring_hash builtin
  • proxy: fix logger entry memory corruption
  • storage: parameterize the compaction thread sleep
  • proxy: pull chunks into individual c files
  • proxy: documentation updates
  • proxy: “stats settings” for proxy
  • proxy: await improvements
  • proxy: trivial support for SO_KEEPALIVE on backend
  • mcmc: upstream update for SO_KEEPALIVE
  • proxy: fix crash on stats proxy sans user stats
  • proxy: enable backend_total stat
  • proxy: track in-flight requests
  • proxy: add some basic logging for backend errors
  • proxy: logging improvements + lua mcp.log()
  • proxy: add stats for commands seen

Download

Suggested Reading: How to install & use memcached on CentOS