Varnish cache

Varnish
Developed by: Poul-Henning Kamp, Linpro
Latest release: 1.1.2 / December 20, 2007
OS: Unix
Genre: open source
License: BSD
Website: http://www.varnish-cache.org/

Varnish is an HTTP accelerator designed for content-heavy dynamic web sites. In contrast to other HTTP accelerators, many of which began life as client-side proxies or origin servers, Varnish was designed from the ground up as an HTTP accelerator. The Varnish web site claims that Varnish is ten to twenty times faster than the popular Squid cache on the same hardware.

History

The project was initiated by the online branch of the Norwegian tabloid newspaper Verdens Gang. The architect and lead developer is Danish independent consultant Poul-Henning Kamp, with management, infrastructure and additional development being provided by the Norwegian Linux consulting company Linpro.

Varnish is open source (specifically, it is distributed under a two-clause BSD license), but commercial support is available from Linpro, amongst others. Currently, the rights to the code are jointly held by Verdens Gang and Linpro, but work is under way to transfer them to an independent foundation.

Architecture

Varnish is heavily threaded, with each client connection being handled by a separate worker thread. When the configured limit on the number of active worker threads is reached, incoming connections are placed in an overflow queue; only when this queue reaches its configured limit will incoming connections be rejected.

The principal configuration mechanism is VCL (Varnish Configuration Language), a domain-specific language used to write hooks that are called at critical points in the handling of each request. Most policy decisions are left to VCL code, making Varnish far more configurable and adaptable than most other HTTP accelerators. When a VCL script is loaded, it is translated to C, compiled to a shared object by the system compiler, and linked directly into the running accelerator.
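
As an illustration (a minimal sketch, not taken from the Varnish documentation), a VCL script in the 1.x syntax might declare a backend and a vcl_recv hook as follows; the backend address and the cookie policy are assumptions chosen purely for the example:

  backend default {
      set backend.host = "127.0.0.1";   # assumed address of the origin server
      set backend.port = "8080";
  }

  sub vcl_recv {
      # Only GET and HEAD requests are candidates for caching;
      # everything else is handed straight to the backend.
      if (req.request != "GET" && req.request != "HEAD") {
          pass;
      }
      # Example policy decision: bypass the cache for requests carrying cookies.
      if (req.http.Cookie) {
          pass;
      }
      lookup;
  }

Subroutines that are omitted, or that fall through without reaching an action keyword, revert to Varnish's built-in default behaviour.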

A number of run-time parameters control things such as the minimum and maximum number of worker threads and various timeouts. A command-line management interface allows these parameters to be modified, and new VCL scripts to be compiled, loaded and activated, without restarting the accelerator.
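
For example, assuming varnishd was started with its management interface listening on localhost port 6082 (set with the -T option), a management session might raise the thread limits and switch to a freshly compiled VCL configuration, where vcl.load compiles the script and vcl.use activates it; the parameter names, configuration name and file path below are illustrative:

  telnet localhost 6082
  param.set thread_pool_min 10
  param.set thread_pool_max 1000
  vcl.load newpolicy /etc/varnish/newpolicy.vcl
  vcl.use newpolicy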

In order to reduce the number of system calls in the fast path to a minimum, log data is stored in shared memory, and the task of filtering, formatting and writing log data to disk is delegated to a separate application.
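
In the Varnish distribution this role is filled by companion tools such as varnishlog and varnishncsa, which attach to the shared memory segment rather than receiving log data from varnishd itself. A sketch of typical usage follows; the log file path, and the exact options available in a given release, should be treated as assumptions:

  varnishlog                                   # display log records as they are written
  varnishncsa -w /var/log/varnish/access.log   # write an NCSA/Apache-style access log to disk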

Performance

While Varnish is designed to reduce contention between threads to a minimum, its performance will only be as good as that of the system's pthreads implementation. Additionally, a poor malloc implementation may add unnecessary contention and thereby limit performance.

When the requested document is in cache, response time is typically measured in microseconds[citation needed]. This is significantly better than most HTTP servers[citation needed], so even sites consisting mostly of static content will benefit from Varnish.

Load balancing

As of December 2007, Varnish's load balancing feature is still experimental. It will allow incoming requests to be distributed among several backend servers.
