A Linux kernel module and pvfs-client process allow the file system to be mounted and used with standard utilities.
The client library provides high-performance access via the Message Passing Interface (MPI).
PVFS version 0 was based on Vesta, a parallel file system developed at IBM T. J. Watson Research Center.[2]
Starting in 1994, Rob Ross re-wrote PVFS to use TCP/IP, departing from many of the original Vesta design points.
PVFS version 1 was targeted at a cluster of DEC Alpha workstations networked using switched FDDI.
Like Vesta, PVFS striped data across multiple servers and allowed I/O requests based on a file view that described a strided access pattern.
Ross showed that the best scheduling strategy depended on a number of factors, including the relative speed of the network and the details of the file view.[4]
In late 1994, Ligon met with Thomas Sterling and John Dorband at Goddard Space Flight Center (GSFC) and discussed their plans to build the first Beowulf computer.
Over the next several years Ligon and Ross worked with the GSFC group including Donald Becker, Dan Ridge, and Eric Hendricks.
In 1997, at a cluster meeting in Pasadena, CA, Sterling asked that PVFS be released as an open-source package.
Ross completed his PhD in 2000 and moved to Argonne National Laboratory. The design and implementation were carried out by Ligon, Carns, Dale Witchurch, and Harish Ramachandran at Clemson University; Ross, Neil Miller, and Rob Latham at Argonne National Laboratory; and Pete Wyckoff at the Ohio Supercomputer Center.
The new design featured object servers, distributed metadata, views based on MPI, support for multiple network types, and a software architecture for easy experimentation and extensibility.
Carns completed his PhD in 2006 and joined Axicom, Inc., where PVFS was deployed on several thousand nodes for data mining.
In 2008 Carns moved to Argonne and continues to work on PVFS along with Ross, Latham, and Sam Lang.[8]
In 2008 Clemson began developing extensions for supporting large directories of small files, security enhancements, and redundancy capabilities.
PVFS uses a networking layer named BMI (Buffered Message Interface), which provides a non-blocking message interface designed specifically for file systems.
BMI has multiple implementation modules for a number of different networks used in high-performance computing, including TCP/IP, Myrinet, InfiniBand, and Portals.
Another key design point is the PVFS protocol, which describes the messages passed between client and server, though the protocol is not strictly a component.
The request processor consists of the server process's main loop and a number of state machines.
The state machines are written in a simple language developed for PVFS that manages concurrency within the server and client.