In general, redundancy means exceeding what is necessary or normal. In computing, however, the term is used more specifically and refers to duplicate devices or components kept available for backup purposes. The goal of redundancy is to prevent, or quickly recover from, the failure of a specific component or system.
There are many types of redundant devices. The most common in personal computing is a backup storage device. While most other computer components can be easily replaced, if a hard drive fails, it may not be possible to recover personal data. Therefore, it is important to regularly back up your data to a secondary hard drive. In enterprise situations, a RAID configuration can be used to mirror data across two drives in real time.
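The mirroring idea behind a RAID 1 setup can be sketched in a few lines of Python. This is an illustration only, not a real RAID implementation: ordinary files stand in for the two physical drives, and the function names (`mirrored_write`, `read_with_failover`) are hypothetical.

```python
import os
import tempfile

def mirrored_write(data: bytes, primary: str, mirror: str) -> None:
    """Write the same data to both 'drives', as RAID 1 mirroring does."""
    for path in (primary, mirror):
        with open(path, "wb") as f:
            f.write(data)

def read_with_failover(primary: str, mirror: str) -> bytes:
    """Read from the primary copy; fall back to the mirror if it fails."""
    try:
        with open(primary, "rb") as f:
            return f.read()
    except OSError:
        with open(mirror, "rb") as f:
            return f.read()

# Demonstration: two files in a temp directory stand in for two drives.
tmp = tempfile.mkdtemp()
drive_a = os.path.join(tmp, "drive_a.bin")
drive_b = os.path.join(tmp, "drive_b.bin")

mirrored_write(b"important personal data", drive_a, drive_b)
os.remove(drive_a)  # simulate a failed primary drive
recovered = read_with_failover(drive_a, drive_b)
```

Because every write goes to both copies, losing either drive costs no data, which is exactly the trade-off RAID 1 makes: half the usable capacity in exchange for redundancy.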
Another type of redundant device is a secondary power supply. High-traffic web servers and other critical systems may have multiple power supplies, so a backup can take over if the primary one fails. While an uninterruptible power supply (UPS) is not technically a redundant device, its internal battery provides power redundancy for a few minutes if electricity is lost.
Computer networks often implement redundancy as well. From local area networks to Internet backbone connections, it is common to have redundant data paths, so that if one path goes down, the connections between systems are not broken. For example, an FDDI network has a duplicate data "ring" that is used automatically when the primary data path is interrupted. Network redundancy can be accomplished either by adding extra physical connections or by using networking software that automatically reroutes data when needed.
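The software side of this, automatic rerouting, can be sketched as a simple failover loop. This is a minimal illustration under assumed names: the route labels and the `send` function are hypothetical, and real routing protocols are far more involved.

```python
def send(packet: str, routes: list, link_up: dict) -> str:
    """Try each path in priority order; reroute to the next when a link is down."""
    for route in routes:
        if link_up.get(route, False):
            return f"sent via {route}"
    raise ConnectionError("no available path")

# The primary ring is interrupted, so traffic shifts to the secondary ring,
# much like FDDI's automatic fallback to its duplicate ring.
link_state = {"primary ring": False, "secondary ring": True}
result = send("hello", ["primary ring", "secondary ring"], link_state)
```

The key point is that the caller never changes: redundancy is handled transparently by the routing layer, which is why a single failed link does not break the connection between systems.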
Updated: November 23, 2011