Network file system

A network file system is any computer file system that supports sharing of files, printers and other resources as persistent storage over a computer network. The first file servers were developed in the 1970s, and in 1985 Sun Microsystems created the Network File System (NFS), which became the first widely used network file system. Other notable network file systems are the Andrew File System (AFS) and the Common Internet File System (CIFS), also known as Server Message Block (SMB).

Clients and servers

A file server provides file services to clients. A client interface for a file service is a set of primitive file operations, such as creating a file, deleting a file, reading from a file, and writing to a file. The primary hardware component that a file server controls is a set of local secondary-storage devices on which files are stored, and from which they are retrieved according to requests from clients.
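
As a concrete sketch, such a client interface could be expressed as an abstract class with one method per primitive operation. The class and method names below are hypothetical illustrations, not taken from any particular protocol:

    from abc import ABC, abstractmethod

    class FileService(ABC):
        """Hypothetical client interface to a file server, with one
        method per primitive file operation."""

        @abstractmethod
        def create(self, path: str) -> None:
            """Create an empty file at path."""

        @abstractmethod
        def delete(self, path: str) -> None:
            """Remove the file at path."""

        @abstractmethod
        def read(self, path: str, offset: int, length: int) -> bytes:
            """Return up to length bytes of the file, starting at offset."""

        @abstractmethod
        def write(self, path: str, offset: int, data: bytes) -> int:
            """Write data into the file at offset; return the number of bytes written."""

A real protocol such as NFS defines these operations in far more detail, but the shape is the same: a small set of primitives that a client invokes and a server carries out against its local storage.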

Distribution

A distributed file system (DFS) is a network file system whose clients, servers, and storage devices are dispersed among the machines of a distributed system or intranet. Service activity occurs across the network, and instead of a single centralized data repository the system has multiple, independent storage devices. In some DFSs servers run on dedicated machines, while in others a machine can be both a server and a client. A DFS can be implemented as part of a distributed operating system, or else by a software layer whose task is to manage the communication between conventional operating systems and file systems. The distinctive feature of a DFS is that the system has many autonomous clients and servers.

Transparency

Ideally, a DFS should appear to its users to be a conventional, centralized file system. The multiplicity and dispersion of its servers and storage devices should be made invisible. That is, the client interface used by programs should not distinguish between local and remote files. It is up to the DFS to locate the files and to arrange for the transport of the data.
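
In practice this means that, on a client where a remote volume is mounted into the ordinary directory tree, a program uses exactly the same calls for local and remote files. A minimal Python sketch, assuming a remote file system is mounted at the hypothetical mount point /mnt/dfs:

    def append_line(path: str, line: str) -> None:
        # The open/write interface is identical for local and remote
        # paths; the DFS client code beneath the file-system interface
        # locates the file and moves the data.
        with open(path, "a") as f:
            f.write(line + "\n")

    append_line("/tmp/local.log", "stored on a local disk")
    append_line("/mnt/dfs/shared.log", "stored on a remote server")  # hypothetical mount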

Performance

The most important performance measurement of a DFS is the amount of time needed to satisfy service requests. In conventional systems, this time consists of disk-access time and a small amount of CPU-processing time. In a DFS, a remote access carries the additional overhead of the distributed structure: the time to deliver the request to the server, the time to deliver the response to the client, and, in each direction, the CPU overhead of running the communication protocol software. The performance of a DFS can be viewed as one dimension of its transparency: ideally, it would be comparable to that of a conventional file system.
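
The difference can be made visible by timing the same small read against a local file and a file on a mounted remote volume. A minimal sketch, again assuming the hypothetical mount point /mnt/dfs, and ignoring client-side caching, which in real systems hides much of this cost:

    import time

    def time_read(path: str, size: int = 4096) -> float:
        """Return the wall-clock time for one read of size bytes."""
        start = time.perf_counter()
        with open(path, "rb") as f:
            f.read(size)
        return time.perf_counter() - start

    # The remote read additionally pays for delivering the request to
    # the server, delivering the response to the client, and the
    # protocol-processing CPU time in each direction.
    print("local :", time_read("/tmp/local.dat"))
    print("remote:", time_read("/mnt/dfs/remote.dat"))  # hypothetical mount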

Concurrent file updates

A DFS should allow multiple client processes on multiple machines to access and update the same files, so updates to a file from one client must not interfere with access and updates from other clients. Concurrency control or locking may either be built into the file system or be provided by an add-on protocol.
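
On POSIX clients, one common mechanism is advisory byte-range locking through fcntl, which network file systems such as NFS support via an accompanying lock protocol. A minimal sketch of a read-modify-write update guarded by an exclusive lock (the shared path is hypothetical, and the fcntl module is Unix-only):

    import fcntl

    # Cooperating clients on different machines take an exclusive
    # advisory lock before updating the shared file, so concurrent
    # read-modify-write cycles cannot interleave and lose updates.
    with open("/mnt/dfs/counter.txt", "r+") as f:   # hypothetical shared file
        fcntl.lockf(f, fcntl.LOCK_EX)               # blocks until the lock is granted
        value = int(f.read().strip() or "0")
        f.seek(0)
        f.write(str(value + 1))
        f.truncate()
        fcntl.lockf(f, fcntl.LOCK_UN)               # release for other clients

Because the lock is advisory, it only coordinates clients that agree to use it; the file system does not stop readers or writers that ignore the lock.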

List of network file systems

Client-server file systems

Network File System (NFS)
Common Internet File System (CIFS), also known as Server Message Block (SMB)

Distributed file systems

Andrew File System (AFS)
