Memcached

From Wikipedia, the free encyclopedia

The system uses a client–server architecture. The servers maintain a key–value associative array; the clients populate this array and query it. Keys are up to 250 bytes long and values can be at most 1 megabyte in size.

Clients use client-side libraries to contact the servers which, by default, expose their service at port 11211. Each client knows all servers; the servers do not communicate with each other. If a client wishes to set or read the value corresponding to a certain key, the client’s library first computes a hash of the key to determine the server to use. Then it contacts that server. The server will compute a second hash of the key to determine where to store or read the corresponding value.
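As a rough sketch of that client-side step (not the algorithm of any particular client library; MD5 and the server list below are placeholders), the first hash might be used like this:

    import hashlib

    # placeholder list of memcached servers known to this client
    SERVERS = ["10.0.0.1:11211", "10.0.0.2:11211", "10.0.0.3:11211"]

    def pick_server(key: str) -> str:
        # First hash, computed by the client: choose which server owns the key.
        digest = int(hashlib.md5(key.encode("utf-8")).hexdigest(), 16)
        return SERVERS[digest % len(SERVERS)]

    # The chosen server then computes its own, second hash of the key to
    # decide where in its internal memory to store or look up the value.
    print(pick_server("user:42:profile"))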

The servers keep the values in RAM; if a server runs out of RAM, it discards the oldest values. Therefore, clients must treat Memcached as a transitory cache; they cannot assume that data stored in Memcached is still there when they need it. MemcacheDB, Couchbase Server, Tarantool and other database servers provide persistent storage while maintaining Memcached protocol compatibility.
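In practice application code follows a cache-aside pattern: try the cache, fall back to the authoritative store on a miss, then repopulate the cache. A minimal sketch, assuming the third-party pymemcache client and a made-up load_user_from_db lookup:

    from pymemcache.client.base import Client

    cache = Client(("localhost", 11211))

    def load_user_from_db(user_id: int) -> bytes:
        # placeholder for the authoritative data store
        return f"user-{user_id}".encode("utf-8")

    def get_user(user_id: int) -> bytes:
        key = f"user:{user_id}"
        value = cache.get(key)                 # None if evicted, expired, or never set
        if value is None:
            value = load_user_from_db(user_id)
            cache.set(key, value, expire=300)  # best-effort: may be evicted before then
        return value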

If all client libraries use the same hashing algorithm to determine servers, then clients can read each other’s cached data; this is desirable, because a value cached by one client is then visible to every other client.

A typical deployment will have several servers and many clients. However, it is possible to use Memcached on a single computer, acting simultaneously as client and server.

read the code there

consistent hash ring

http://www.martinbroadhurst.com/Consistent-Hash-Ring.html

Consistent hashing is used in distributed storage systems like Amazon Dynamo, memcached, Project Voldemort and Riak.

How can you find a server in a distributed system to store or retrieve a value identified by a key, while at the same time being able to cope with server failures and network partitions?

Simply finding a server for a value is easy; just number your set of s servers from 0 to s – 1. When you want to store or retrieve a value, hash the value’s key, take the hash modulo s, and that gives you the server.

The problem comes when servers fail or become unreachable through a network partition. At that point the servers no longer fill the hash space: the modulus s has changed, so almost every key now maps to a different server, and the only option is to invalidate the caches on all servers, renumber them, and start again. Given that, in a system with hundreds or thousands of servers, failures are commonplace, this solution is not feasible.
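A quick way to see this (Python sketch; MD5 is an arbitrary choice of hash): compare where keys land with 5 servers and with 4 servers after one failure. Roughly 80% of the keys move, so almost the whole cache becomes useless.

    import hashlib

    def naive_lookup(key: str, num_servers: int) -> int:
        # naive scheme: hash the key and take it modulo the number of servers
        h = int(hashlib.md5(key.encode()).hexdigest(), 16)
        return h % num_servers

    keys = [f"key-{i}" for i in range(10000)]
    moved = sum(naive_lookup(k, 5) != naive_lookup(k, 4) for k in keys)
    print(f"{100 * moved / len(keys):.0f}% of keys map to a different server")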

solution:

In consistent hashing, the servers, as well as the keys, are hashed, and it is by this hash that they are looked up. The hash space is large, and is treated as if it wraps around to form a circle – hence the name hash ring. The process of creating a hash for each server is equivalent to placing it at a point on the circumference of this circle. When a key needs to be looked up, it is hashed, which again corresponds to a point on the circle. In order to find its server, one then simply moves round the circle clockwise from this point until the next server is found. If no server is found from that point to the end of the hash space, the first server is used – this is the “wrapping round” that makes the hash space circular.
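A bare-bones version of that clockwise lookup, as a Python sketch (a sorted list of server points plus bisect; MD5 again just as an example hash):

    import bisect
    import hashlib

    def point(value: str) -> int:
        return int(hashlib.md5(value.encode()).hexdigest(), 16)

    servers = ["10.0.0.1", "10.0.0.2", "10.0.0.3"]
    ring = sorted((point(s), s) for s in servers)   # server points on the circle

    def lookup(key: str) -> str:
        # move clockwise from the key's point to the next server point
        i = bisect.bisect_left(ring, (point(key), ""))
        if i == len(ring):        # ran off the end of the hash space: wrap around
            i = 0
        return ring[i][1]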

==============================================================================

instead of hashing each node only once, hash the node together with each of its replicas (virtual nodes) and put all of the resulting points into one big hashtable

add a node, that means adding the node and all of its replicas to the big hashtable

{ hash(node + replica1) => node, hash(node + replica2) => node, … }

remove a node, that means removing the node plus all of its replicas from the big hashtable

get a node for a key: hash the key, then search the big hashtable for the lowest point greater than or equal to the key’s hash; if no such point is found, wrap around and return the first one (a sketch putting these steps together follows at the end of these notes).

read the code there
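Putting the steps above together, a minimal sketch of a ring with replicas (my own naming and an arbitrary replica count, not the code from the linked article):

    import bisect
    import hashlib

    class HashRing:
        # consistent hash ring with "replicas" (virtual nodes) per server

        def __init__(self, nodes=(), replicas=100):
            self.replicas = replicas
            self._points = []   # sorted hashes of node+replica points
            self._ring = {}     # hash(node + replica) => node, the big hashtable
            for node in nodes:
                self.add(node)

        def _hash(self, value):
            return int(hashlib.md5(value.encode()).hexdigest(), 16)

        def add(self, node):
            # add a node: insert the node plus every one of its replicas
            for i in range(self.replicas):
                h = self._hash(f"{node}:{i}")
                self._ring[h] = node
                bisect.insort(self._points, h)

        def remove(self, node):
            # remove a node: delete the node plus all of its replica points
            for i in range(self.replicas):
                h = self._hash(f"{node}:{i}")
                del self._ring[h]
                self._points.remove(h)

        def get(self, key):
            # get a node: lowest point >= hash(key), wrapping round to the first
            if not self._points:
                raise KeyError("ring is empty")
            i = bisect.bisect_left(self._points, self._hash(key))
            if i == len(self._points):
                i = 0
            return self._ring[self._points[i]]

    ring = HashRing(["10.0.0.1", "10.0.0.2", "10.0.0.3"])
    print(ring.get("user:42"))   # one of the three servers
    ring.remove("10.0.0.2")      # only keys that mapped to 10.0.0.2 move elsewhere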