December 5, 2020

Memcached 1.6.9 released, a high-performance distributed caching system

3 min read

Memcached is a high-performance, multithreaded, event-based key/value cache store intended to be used in a distributed system.

memcached allows you to take memory from parts of your system where you have more than you need and make it accessible to areas where you have less than you need.

memcached also allows you to make better use of your memory. If you consider the diagram to the right, you can see two deployment scenarios:

  1. Each node is completely independent (top).
  2. Each node can make use of memory from other nodes (bottom).

The first scenario illustrates the classic deployment strategy; however, you’ll find that it’s wasteful both in the sense that the total cache size is only a fraction of the actual capacity of your web farm and in the amount of effort required to keep the cache consistent across all of those nodes.

With memcached, you can see that all of the servers are looking into the same virtual pool of memory. This means that a given item is always stored in, and always retrieved from, the same location in your entire web cluster.

Also, as demand for your application grows to the point where you need more servers, the amount of data that must be regularly accessed generally grows too. A deployment strategy where these two aspects of your system scale together just makes sense.

The illustration to the right only shows two web servers for simplicity, but the property remains the same as the number increases. If you had fifty web servers, you’d still have a usable cache size of 64MB in the first example, but in the second, you’d have 3.2GB of usable cache.
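The "same location for a given item" property comes from the client, not the server: each web server maps a key to a node with the same hash function, so they all agree where an item lives. A minimal sketch of that idea (the node names are made up, and real clients such as libmemcached typically use consistent hashing like ketama instead of simple modulo, so that adding a node remaps only a fraction of the keys):

```python
import hashlib

# Hypothetical pool of cache nodes shared by every web server.
SERVERS = ["cache1:11211", "cache2:11211", "cache3:11211"]

def server_for(key: str) -> str:
    """Map a key to one node so every web server agrees on its location."""
    digest = hashlib.md5(key.encode()).digest()
    index = int.from_bytes(digest[:4], "big") % len(SERVERS)
    return SERVERS[index]

# The same key always maps to the same node, no matter which
# web server performs the lookup.
assert server_for("user:42") == server_for("user:42")
```

Because the mapping is deterministic, the fifty 64MB caches in the example behave as one 3.2GB pool rather than fifty copies of the same hot items.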

Changelog v1.6.9


  • crawler: remove bad mutex unlock during error
  • idle_timeout: avoid long hangs during shutdown
  • extstore: use fcntl locking on disk file
  • portability fix for getsubopt
  • illumos build fixes + require libevent2
  • core: generalize extstore’s deferred IO queue
  • fix connection limit tests
  • logger: fix spurious watcher hangups
  • watcher.t: reduce flakiness
  • Extend test CA validity to 500 years
  • adjust “t/idle-timeout.t” to be more forgiving

New Features

  • arm64: Re-add arm crc32c hw acceleration for extstore
  • restart mode: expose memory_file path in stats settings
  • ‘shutdown graceful’ command for raising SIGUSR1
  • Introduce NAPI ID based worker thread selection (see doc/napi_ids.txt)
  • item crawler hash table walk mode

The background item crawler thread is used by default to walk the LRUs and actively reclaim expired items. It can also be used by end users to examine the cache via the lru_crawler metadump command.

The metadump command can be told to walk specific LRUs, in case you are curious what is taking up memory in lower or higher slab classes. On the downside, LRUs will naturally reorder as things happen. There is also an issue where items that are very frequently accessed are invisible to the LRU crawler.

New with this release, if you invoke the crawler via lru_crawler metadump hash, the crawler will instead visit each bucket in the hash table. This ensures each item is visited exactly once, but the search cannot be limited in the same way.

Note this does not in any way snapshot memory. If items are deleted or added to the hash table after the walk starts, they may or may not be seen.
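The difference between the two walk modes can be sketched with a toy model (the structures here are illustrative only, not memcached's actual internals). A hash-table walk visits buckets in a fixed order, so item accesses cannot reorder the traversal the way they reorder an LRU:

```python
# Toy model: the cache's hash table as an array of buckets,
# each holding a chain of items.
buckets = [["a", "d"], ["b"], [], ["c", "e"]]

def metadump_hash(buckets):
    """Visit every bucket in order, yielding each live item exactly once.

    Unlike an LRU walk, this order does not change when items are
    accessed, so frequently-hit items cannot slip past the crawler.
    """
    for bucket in buckets:
        for item in bucket:
            yield item

# Every item is seen exactly once.
assert sorted(metadump_hash(buckets)) == ["a", "b", "c", "d", "e"]
```

As the note above says, this is still not a snapshot: items inserted into or deleted from a bucket after the crawler has passed it simply will not be seen.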


Suggested Reading: How to install & use memcached on CentOS