Failing badly and failing well are concepts in systems security and network security describing how a system reacts to failure. The terms were popularized by Bruce Schneier, a cryptographer and security consultant.
Failing badly
A system that fails badly is one that has a catastrophic result when failure occurs. A single point of failure can thus bring down the whole system. Examples include:
Databases protected only by a password. Once this security is breached, all data can be accessed.
Fracture-critical structures, such as buildings or bridges, that depend on a single column or truss whose removal would cause a chain-reaction collapse under normal loads.
Security checks which concentrate on establishing identity, not intent.
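The password-protected database above can be illustrated with a minimal sketch. All names (`RECORDS`, `dump_all`, the password itself) are hypothetical; the point is that a single check gates the entire data set, so one breach exposes everything at once.

```python
# Hypothetical sketch of a system that fails badly: one password is
# the only barrier, and a single breach yields every record.

RECORDS = {"alice": "ssn-111", "bob": "ssn-222", "carol": "ssn-333"}

def dump_all(password: str) -> dict:
    """Single point of failure: the whole database hinges on one secret."""
    if password != "s3cret":
        raise PermissionError("access denied")
    return RECORDS  # one successful check exposes all data
```

Compare this with the compartmentalized designs in the next section, which bound the damage any single failure can cause.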
Failing well
A system that fails well is one that compartmentalizes or contains its failure. Examples include:
Compartmentalized hulls in watercraft, ensuring that a hull breach in one compartment will not flood the entire vessel.
Databases that do not allow downloads of all data in one attempt, limiting the amount of compromised data.
Structurally redundant buildings conceived to resist loads beyond those expected under normal circumstances, or resist loads when the structure is damaged.
Computer systems that restart or proceed to a stopped state when an invalid operation occurs.
Concrete structures which show fractures long before breaking under load, thus giving early warning.
Armoured cockpit doors on airplanes, which confine a potential hijacker within the cabin even if they are able to bypass airport security checks.
Internet connectivity provided by more than one vendor or discrete path, known as multihoming.
Star or mesh networks, which can continue to operate when a node or connection has failed.
Ductile materials, such as "under-reinforced concrete", when overloaded, fail gradually – they yield and stretch, giving some warning before ultimate failure.
Making a backup copy of all important data and storing it in a separate place, so that the data can be recovered from one location if the other is damaged.
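The download-limited database mentioned above can be sketched as follows. The names (`PAGE_LIMIT`, `RECORDS`, `fetch_page`) are illustrative assumptions, not a specific product's API; the design point is that even a fully authenticated caller can expose only a bounded number of records per request.

```python
# Hypothetical sketch of a database access layer that fails well:
# no single call can return more than PAGE_LIMIT rows, capping the
# amount of data compromised by any one breach.

PAGE_LIMIT = 2
RECORDS = [{"id": i, "ssn": f"ssn-{i:03d}"} for i in range(10)]

def fetch_page(offset: int) -> list:
    """Compartmentalized read: exposure per request is bounded."""
    if offset < 0 or offset >= len(RECORDS):
        raise ValueError("invalid offset")  # fail-stop on invalid input
    return RECORDS[offset:offset + PAGE_LIMIT]
```

The `ValueError` branch also illustrates the fail-stop behaviour listed above: on an invalid operation the system halts the request rather than continuing in an unknown state.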
Designing a system to 'fail well' has also been argued to be a better use of limited security funds than the typical quest to eliminate all potential sources of errors and failure.
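The multihoming example listed earlier can be sketched in the same spirit. The provider names and the `reachable` health-check stand-in are assumptions for illustration; real multihoming works at the routing layer, but the failover logic is the same: loss of one path does not sever connectivity.

```python
# Hypothetical sketch of multihomed connectivity: try each upstream
# provider in turn, so the failure of one path is contained.

PROVIDERS = ["isp-a.example", "isp-b.example"]  # assumed endpoints

def send(payload: str, reachable: set) -> str:
    """Return the provider that carried the payload, trying each path."""
    for provider in PROVIDERS:
        if provider in reachable:  # stand-in for a real health check
            return provider
    raise ConnectionError("all upstream paths failed")
```

Only when every provider is down does the system fail outright, and even then it fails loudly rather than silently dropping traffic.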