A bad sector in computing is a disk sector on a disk storage unit that is permanently damaged. Once a sector is damaged, all information stored on it is lost. When a bad sector is found and marked, the operating system skips it in the future.
Details
A bad sector is usually the result of physical damage to the storage medium. Bad sectors are also a threat to information security in the sense of data remanence: data held in a sector that is later marked bad or remapped may remain physically present on the medium, beyond the reach of normal overwriting. Because physical damage often spans a region of the disk rather than a single sector, it frequently affects parts of many different files.
Operating system
Bad sectors may be detected by the operating system or the disk controller. Most file systems contain provisions for sectors to be marked as bad, so that the operating system avoids them in the future. Disk diagnostic utilities, such as CHKDSK, Disk Utility, or badblocks, can actively look for bad sectors at the user's request.
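The read-only test performed by such utilities amounts to stepping through the device and recording which sectors cannot be read. The following Python sketch illustrates that idea; it is not the actual badblocks implementation, the device path "/dev/sdX" is a placeholder, the sector and chunk sizes are assumptions, and reading the raw device normally requires administrator privileges.

    import os

    def scan_for_bad_sectors(device_path, sector_size=512, chunk_sectors=256):
        # Step through the device in large chunks; when a chunk fails to read,
        # retry sector by sector to pinpoint the unreadable sectors.
        bad = []
        fd = os.open(device_path, os.O_RDONLY)
        try:
            size = os.lseek(fd, 0, os.SEEK_END)        # device size in bytes
            chunk = sector_size * chunk_sectors
            for offset in range(0, size, chunk):
                try:
                    os.pread(fd, min(chunk, size - offset), offset)
                except OSError:
                    for start in range(offset, min(offset + chunk, size), sector_size):
                        try:
                            os.pread(fd, sector_size, start)
                        except OSError:
                            bad.append(start // sector_size)
        finally:
            os.close(fd)
        return bad

    if __name__ == "__main__":
        print(scan_for_bad_sectors("/dev/sdX"))        # placeholder device path

A real utility does more than this (progress reporting, optional write tests, handling of drives whose logical sector size is not 512 bytes), but the core loop is the same read-and-record pass.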
Disk controller
When a sector is found to be bad or unstable by the firmware of a disk controller, the disk controller remaps the logical sector to a different physical spare sector. Typically, automatic remapping of sectors only happens when a sector is written to. In the normal operation of a hard drive, the detection and remapping of bad sectors should take place transparently to the rest of the system and before data is lost. Disk hardware maintains two defect lists for this purpose: the P-list (primary defect list, recorded at the factory) and the G-list (grown defect list, filled in as defects appear during use). Utilities can read the Self-Monitoring, Analysis, and Reporting Technology (SMART) information to tell how many sectors have been reallocated and how many spare sectors the drive may still have. Because reads and writes to G-list sectors are transparently redirected to spare sectors elsewhere on the disk, accessing them requires extra head movement and slows down drive access even if the data on the drive is defragmented. Once the G-list fills up, the storage unit must be replaced.
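As a rough illustration, the reallocation-related SMART attributes can be read with a small wrapper around the smartctl utility from smartmontools. This is only a sketch: it assumes smartmontools is installed, that the device is /dev/sda (a placeholder), and that the attribute table uses its usual ten-column layout.

    import subprocess

    # Standard SMART attribute IDs related to sector remapping:
    #   5   Reallocated_Sector_Ct  - sectors already remapped to spares (the G-list)
    #   197 Current_Pending_Sector - unstable sectors awaiting remapping
    ATTRS = {"5": "Reallocated_Sector_Ct", "197": "Current_Pending_Sector"}

    def reallocation_counts(device="/dev/sda"):
        # Run `smartctl -A` and pull the raw values of the attributes above
        # out of its table.
        out = subprocess.run(["smartctl", "-A", device],
                             capture_output=True, text=True).stdout
        counts = {}
        for line in out.splitlines():
            fields = line.split()
            if len(fields) >= 10 and fields[0] in ATTRS:
                counts[ATTRS[fields[0]]] = int(fields[9])   # RAW_VALUE column
        return counts

    if __name__ == "__main__":
        print(reallocation_counts())

A steadily growing reallocated-sector count, or a non-zero pending-sector count, is the usual sign that the drive is consuming its spare sectors and approaching the point where it should be replaced.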
Use in copy protection
In the 1980s, many software vendors mass-produced floppy disks with deliberately introduced bad sectors for distribution to users of home computers. The disk drives of these computers would not read such a sector normally: the header information might be duplicated so that different data was returned on each pass from different physical sectors bearing the same headers, or the data in the sector might be made unreadable by the head, among other techniques described below. Because the home computer equipment could only write "good" sectors, attempts to copy the disk were flawed either because:
A sector was deliberately made "bad" so that the disk controller would attempt to read it several times, generally requiring one complete revolution of the media for each attempt. If the disk was legitimate, reading was slow and eventually completed with an error. If it was a copy, made without the deliberate bad sector, the read completed quickly and reported success, which itself proved that the disk was a copy.
The same header information was present more than once on the same track for that sector, typically half a revolution apart, depending on the rotational speed of the disk and the interleaving expected by the operating system. The head would therefore read the "same" sector with different contents, since two copies lay diametrically opposite and the head would return either one depending on when it was asked. Because of variations in spin speed, the request was generally repeated three or four times until differing results were obtained. If the same data came back every time, the disk was a copy; if different data was obtained, it was an original. In both cases the data was read successfully, so an XOR of the two readings could be compared against a known string of bytes: the data not only had to differ, it had to differ in an exact bit pattern (a simplified version of this check is sketched below).
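The second check can be illustrated with a short, purely hypothetical Python sketch: the protection code reads the same logical sector several times and accepts the disk only if the readings differ by a known XOR pattern. The reader function and all names below are placeholders, not any vendor's actual code.

    def looks_like_original(read_sector, sector_number, expected_xor, attempts=4):
        # read_sector(n) stands in for a raw sector read. On an original disk the
        # duplicated headers return different data on different revolutions, and
        # the XOR of two readings must match a known bit pattern; a copy returns
        # the same data every time.
        readings = {read_sector(sector_number) for _ in range(attempts)}
        if len(readings) < 2:
            return False                               # identical every time: a copy
        a, b = sorted(readings)[:2]
        xor = bytes(x ^ y for x, y in zip(a, b))
        return xor == expected_xor                     # must differ in the exact pattern

    # Toy demonstration: a fake reader alternating between two variants, as the
    # duplicated headers on an original disk would.
    variants = [b"\x00" * 16, b"\xff" * 16]
    counter = iter(range(100))
    fake_read = lambda n: variants[next(counter) % 2]
    print(looks_like_original(fake_read, 0, expected_xor=b"\xff" * 16))  # prints True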
These techniques could generally be circumvented without much difficulty, because the code that looked for the bad sectors, along with the comparison against the known bit pattern, was usually contained in the bootstrap loader on the disk itself; by reverse engineering and rewriting the bootstrap loader, the checks could simply be removed.