FastCGI


FastCGI is a binary protocol for interfacing interactive programs with a web server. It is a variation on the earlier Common Gateway Interface. FastCGI's main aim is to reduce the overhead related to interfacing between web server and CGI programs, allowing a server to handle more web page requests per unit of time.

History

The Common Gateway Interface (CGI) is a protocol for interfacing external applications with web servers. CGI applications run in separate processes, which are created at the start of each request and torn down at the end. This "one new process per request" model makes CGI programs very simple to implement, but limits efficiency and scalability. At high loads, the operating system overhead for process creation and destruction becomes significant. The CGI process model also prevents resources such as database connections and in-memory caches from being reused across requests.
To address the scalability shortcomings of CGI, Open Market developed FastCGI and first introduced it in their web server product in the mid-1990s. Open Market originally developed FastCGI in part as a competitive response to Netscape's proprietary, in-process application programming interfaces for developing Web applications.
Although first developed by Open Market, FastCGI was soon implemented by several other web server makers. Its approach, however, competed against other methods of speeding up and simplifying communication between the server and its subprograms. Apache HTTP Server modules such as mod_perl and mod_php appeared around the same time and quickly gained popularity. All of these various methods, including CGI, remain in common use.

Implementation details

Instead of creating a new process for each request, FastCGI uses persistent processes to handle a series of requests. These processes are owned by the FastCGI server, not the web server.
To service an incoming request, the web server sends environment variable information and the page request to a FastCGI process over either a Unix domain socket, a named pipe, or a Transmission Control Protocol connection. Responses are returned from the process to the web server over the same connection, and the web server then delivers that response to the end user. The connection may be closed at the end of a response, but both web server and FastCGI service processes persist.
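The exchange on that connection consists of small binary records, each prefixed with a fixed 8-byte header carrying a record type and a request ID. The following sketch, based on the FastCGI 1.0 specification, shows how a web server might frame the start of a request and its CGI-style environment variables in Python; the socket transport and the response records are omitted, and the particular variables and values are only illustrative.

    import struct

    # Record types and role value from the FastCGI 1.0 specification
    FCGI_BEGIN_REQUEST = 1
    FCGI_PARAMS = 4
    FCGI_STDIN = 5
    FCGI_RESPONDER = 1   # role carried in the begin-request body

    def fcgi_record(rec_type, request_id, content=b""):
        # Wrap content in the fixed 8-byte FastCGI record header.
        header = struct.pack("!BBHHBx",
                             1,             # protocol version
                             rec_type,      # record type
                             request_id,    # identifies the request on this connection
                             len(content),  # content length
                             0)             # padding length
        return header + content

    def fcgi_name_value(name, value):
        # Encode one CGI environment variable as a FastCGI name-value pair.
        def enc_len(n):
            # lengths under 128 take one byte; longer ones take four bytes with the high bit set
            return struct.pack("!B", n) if n < 128 else struct.pack("!I", n | 0x80000000)
        return enc_len(len(name)) + enc_len(len(value)) + name + value

    # What a web server might write to the Unix socket or TCP connection:
    request_id = 1
    begin_body = struct.pack("!HB5x", FCGI_RESPONDER, 0)    # role, flags, reserved bytes
    params = (fcgi_name_value(b"REQUEST_METHOD", b"GET")
              + fcgi_name_value(b"SCRIPT_NAME", b"/index.php"))
    message = (fcgi_record(FCGI_BEGIN_REQUEST, request_id, begin_body)
               + fcgi_record(FCGI_PARAMS, request_id, params)
               + fcgi_record(FCGI_PARAMS, request_id)       # empty record closes the parameter stream
               + fcgi_record(FCGI_STDIN, request_id))       # empty record closes the request body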
Each individual FastCGI process can handle many requests over its lifetime, thereby avoiding the overhead of per-request process creation and termination. Processing multiple requests concurrently can be done in several ways: by multiplexing several requests over a single connection, by using multiple connections, or by a mix of both. Multiple FastCGI servers can be configured, increasing stability and scalability.
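Multiplexing is possible because every record header carries a request ID, so records belonging to different requests can be interleaved on one connection. Reusing the fcgi_record and fcgi_name_value helpers from the sketch above, the fragment below illustrates the idea; whether an application actually accepts multiplexed connections can be queried with a management record asking for the FCGI_MPXS_CONNS variable defined by the specification.

    FCGI_GET_VALUES = 9   # management record type; management records use request ID 0

    # Ask the application whether it supports multiplexed connections.
    query = fcgi_record(FCGI_GET_VALUES, 0,
                        fcgi_name_value(b"FCGI_MPXS_CONNS", b""))

    # Two requests sharing one connection: their records may be interleaved and are
    # distinguished only by the request ID carried in each record header.
    interleaved = (fcgi_record(FCGI_BEGIN_REQUEST, 1, begin_body)
                   + fcgi_record(FCGI_BEGIN_REQUEST, 2, begin_body)
                   + fcgi_record(FCGI_PARAMS, 1, fcgi_name_value(b"SCRIPT_NAME", b"/a"))
                   + fcgi_record(FCGI_PARAMS, 2, fcgi_name_value(b"SCRIPT_NAME", b"/b")))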
Web site administrators and programmers may find that separating web applications from the web server, as FastCGI does, has many advantages over embedded interpreters. This separation allows server and application processes to be restarted independently, an important consideration for busy web sites. It also enables per-application security policies to be enforced by the hosting service, an important requirement for ISPs and web hosting companies. Different types of incoming requests can be distributed to specific FastCGI servers that are equipped to handle those types of requests efficiently.

Web servers that implement FastCGI

FastCGI is supported by many widely used web servers, including the Apache HTTP Server, nginx, and lighttpd. On the application side, FastCGI can be implemented in any language that supports network sockets. Since "FastCGI is a protocol, not an implementation," it is not tightly bound to any one language, and application programming interfaces exist for many languages.
Web frameworks such as Ruby on Rails, Catalyst, Django, Kepler and Plack can be deployed either with embedded interpreters or over FastCGI.
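As an illustration of such a binding, the sketch below serves a minimal WSGI application over FastCGI using the third-party Python package flup, one of several available bindings; the socket path is an arbitrary example, and a front-end web server would be configured to forward matching requests to that socket.

    # Minimal WSGI application exposed over FastCGI via the third-party "flup" package.
    # (Illustrative sketch; flup is only one of several Python bindings for the protocol.)
    from flup.server.fcgi import WSGIServer

    def application(environ, start_response):
        # The CGI-style environment arrives from the web server as FCGI_PARAMS records.
        start_response("200 OK", [("Content-Type", "text/plain")])
        return [b"Hello from a persistent FastCGI process\n"]

    if __name__ == "__main__":
        # Listen on a Unix domain socket (path chosen for this example); passing a
        # (host, port) tuple as bindAddress would select a TCP connection instead.
        WSGIServer(application, bindAddress="/tmp/fastcgi-example.sock").run()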