| Developer(s) | Craig Barratt |
|---|---|
| Stable release | 3.2.1 / April 25, 2011 |
| Preview release | 3.2.0beta1 / January 24, 2010 |
| Written in | Perl |
| Operating system | Cross-platform |
| Type | Backup |
| License | GPL 2 |
| Website | backuppc.sourceforge.net |
BackupPC is a free disk-to-disk backup software suite with a web-based frontend. The cross-platform server runs on any Linux, Solaris, or UNIX-based system. No client software is necessary: the server itself acts as a client for several protocols that are handled by services native to the client OS. In 2007, BackupPC was mentioned as one of the three best-known open-source backup programs,[1] even though it is one of the tools that are "so amazing, but unfortunately, if no one ever talks about them, many folks never hear of them".[2]
Data deduplication reduces the disk space needed to store backups in the disk pool. BackupPC can also serve as a disk-to-disk-to-tape (D2D2T) solution if its archive function is used to copy the disk pool to tape. BackupPC is not a block-level backup system like Ghost4Linux; it performs file-based backup and restore, so it is not suitable for backing up disk images or raw disk partitions.[3]
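The pooling idea behind this deduplication can be sketched as content-addressed storage: each file is keyed by a hash of its contents, and identical files become hard links to a single pooled copy. The following Python sketch illustrates the concept only; the function names are hypothetical and BackupPC's actual pool layout and hashing scheme differ.

```python
import hashlib
import os
import shutil

def pool_link(src_path, pool_dir, backup_path):
    """Deduplicate src_path into a content-addressed pool, then hard-link
    the backup-tree entry to the pooled copy.

    Illustrative sketch only -- not BackupPC's real implementation."""
    with open(src_path, "rb") as f:
        digest = hashlib.md5(f.read()).hexdigest()
    os.makedirs(pool_dir, exist_ok=True)
    pooled = os.path.join(pool_dir, digest)
    if not os.path.exists(pooled):
        # First time this content is seen: copy it into the pool.
        shutil.copyfile(src_path, pooled)
    os.makedirs(os.path.dirname(backup_path), exist_ok=True)
    # Every later identical file costs only one extra directory entry.
    os.link(pooled, backup_path)
    return pooled
```

Backing up two hosts that hold an identical file results in a single pooled copy with several hard links, which is what keeps the pool small.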
BackupPC incorporates a Server Message Block (SMB) client that can back up network shares of computers running Windows. Notably, under such a setup the BackupPC server can sit behind a NAT'd firewall while the Windows machine operates on a public IP address. While this may not be advisable for SMB traffic, it is more useful for web servers running SSH with GNU tar and rsync available, as it allows the BackupPC server to be located in a subnet separate from the web server's DMZ.
It is published under the GNU General Public License.
BackupPC supports NFS, SSH, SMB, and rsync.[4]
It can back up Unix-like systems with native ssh and tar or rsync support, such as Linux, BSD, and Mac OS X, as well as Microsoft Windows shares, with minimal configuration.[5]
On Windows, third-party implementations of tar, rsync, and SSH (such as those provided by Cygwin) are required to use those protocols.[6]
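The transfer method is selected per host in BackupPC's Perl-syntax configuration file. The variable names below come from BackupPC 3.x's `config.pl`; the path and values are illustrative examples, not a recommended setup.

```perl
# Example per-host override, e.g. /etc/BackupPC/pc/myhost.pl
# (the config directory varies by distribution)
$Conf{XferMethod}     = 'rsync';             # or 'tar', 'smb', 'rsyncd'
$Conf{RsyncShareName} = ['/home', '/etc'];   # paths to back up on the client
$Conf{RsyncClientCmd} = '$sshPath -q -x -l root $host $rsyncPath $argList+';
```

Switching `XferMethod` is all that is needed to move a host between SMB, tar, and rsync transports, which is why the trade-offs between them matter in practice.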
The choice between tar and rsync is dictated by the hardware and bandwidth available to the client. Clients backed up by rsync use considerably more CPU time than client machines using tar or SMB. Clients using SMB or tar use considerably more bandwidth than clients using rsync. These trade-offs are inherent in the differences between the protocols. Using tar or SMB transfers each file in its entirety, using little CPU but maximum bandwidth. The rsync method calculates checksums for each file on both the client and server machines in a way that enables a transfer of just the differences between the two files; this uses more CPU resources, but minimizes bandwidth.[7]
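The bandwidth saving described above comes from rsync's delta-transfer idea: the receiver advertises checksums of the blocks it already has, and only non-matching blocks are sent in full. The Python sketch below shows a deliberately simplified version that compares fixed-aligned blocks only; real rsync additionally uses a rolling weak checksum so matches can be found at any byte offset.

```python
import hashlib

BLOCK = 4096  # rsync negotiates a block size; fixed here for simplicity

def block_sums(data):
    """Checksum each fixed-size block of the receiver's existing file."""
    return [hashlib.md5(data[i:i + BLOCK]).hexdigest()
            for i in range(0, len(data), BLOCK)]

def delta(new_data, old_sums):
    """Build a delta: ('copy', idx) for blocks the receiver already has,
    ('data', bytes) for blocks that must cross the wire in full."""
    ops = []
    for i in range(0, len(new_data), BLOCK):
        chunk = new_data[i:i + BLOCK]
        idx = i // BLOCK
        if idx < len(old_sums) and hashlib.md5(chunk).hexdigest() == old_sums[idx]:
            ops.append(('copy', idx))    # only a tiny reference is transmitted
        else:
            ops.append(('data', chunk))  # literal bytes are transmitted
    return ops

def apply_delta(old_data, ops):
    """Reconstruct the new file from the old data plus the delta."""
    out = bytearray()
    for kind, val in ops:
        if kind == 'copy':
            out += old_data[val * BLOCK:(val + 1) * BLOCK]
        else:
            out += val
    return bytes(out)
```

Computing checksums on both ends is the extra CPU cost; sending only the `('data', ...)` operations is the bandwidth saving.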
BackupPC uses a combination of hard links and compression to reduce the total disk space used for files. At the first full backup, all files are transferred to the backend, optionally compressed, and then compared. Identical files are hard-linked, so each duplicate costs only one additional directory entry. For example, backing up ten Windows XP laptops holding 10 GB each, of which 8 GB per machine is the same set of Office and Windows binary files, would appear to require 100 GB, but would consume only 28 GB (10 × 2 GB of unique data + one 8 GB shared set).[8] Compressing the data on the back-end reduces that requirement further.
When browsing backups, incremental backups are automatically filled in against the previous full backup, so every backup appears as a full and complete dump of the data.
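Conceptually, this filling is a merge: start from the full backup's file list and overlay each incremental in order, letting newer versions of a file win. A minimal Python sketch, assuming backups are modeled as simple path-to-content mappings (a simplification; BackupPC also tracks attributes and deletions, which are ignored here):

```python
def filled_view(full, incrementals):
    """Merge a full backup with its later incrementals so the most recent
    backup browses like a complete dump.

    full         -- dict mapping path -> content for the full backup
    incrementals -- list of such dicts, oldest first
    Simplified sketch: deletions and file attributes are not handled."""
    view = dict(full)
    for incr in incrementals:
        view.update(incr)  # a newer copy of a path shadows the older one
    return view
```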
When backing up a remote SMB share, speeds of 3–4 Mbit/s are normal. A local disk used as a backup destination achieves speeds of 10+ Mbit/s, depending on CPU performance, since a faster CPU speeds up compression and md5sum generation. Speeds of over 13 MB/s are attainable on a gigabit LAN when backing up a Linux client using rsync over SSH, even when the backup destination is non-local.