List of interface bit rates


This is a list of interface bit rates, a measure of information transfer rates, or digital bandwidth capacity, at which digital interfaces in a computer or network can communicate over various kinds of buses and channels. The distinction between a computer bus, which typically spans a short distance, and a larger telecommunications network can be arbitrary. Many device interfaces or protocols are used both inside many-device boxes, such as a PC, and single-device boxes, such as a hard drive enclosure. Accordingly, this page lists internal ribbon-cable and external communications-cable standards together in one sortable table.

Factors limiting actual performance, criteria for real decisions

Most of the listed rates are theoretical maximum throughput measures; in practice, the actual effective throughput is almost inevitably lower in proportion to the load from other devices, physical or temporal distances, and other overhead in data link layer protocols, among other factors. The maximum goodput may be even lower due to higher layer protocol overhead and data packet retransmissions caused by line noise or interference such as crosstalk, or by packets lost in congested intermediate network nodes. All protocols sacrifice some throughput; the more robust ones, which handle many failure conditions resiliently, tend to trade away more of the maximum throughput in exchange for higher total long-term transfer rates.
Device interfaces where one bus transfers data via another will be limited to the throughput of the slowest interface, at best. For instance, SATA revision 3.0 controllers on one PCI Express 2.0 channel will be limited to the 5 Gbit/s rate and have to employ more channels to get around this problem. Early implementations of new protocols very often have this kind of problem. The physical phenomena on which the device relies will also impose limits; for instance, no spinning platter shipping in 2009 saturates SATA revision 2.0, so moving from this 3 Gbit/s interface to USB 3.0 at 4.8 Gbit/s for one spinning drive will result in no increase in realized transfer rate.
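The bottleneck behaviour described above can be sketched numerically. A minimal illustration, using the SATA-over-PCIe figures from this paragraph (the function name and structure are illustrative, not part of any standard):

```python
# End-to-end throughput is bounded by the slowest interface in the chain.
# Rates below are nominal line rates in Gbit/s.
def effective_rate(*link_rates_gbps):
    """A chained transfer can go no faster than its slowest link."""
    return min(link_rates_gbps)

# A SATA revision 3.0 drive (6 Gbit/s) behind a single PCIe 2.0 lane
# (5 Gbit/s): the PCIe lane caps the whole chain.
print(effective_rate(6.0, 5.0))  # 5.0
```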
Contention in a wireless or noisy spectrum, where the physical medium is entirely out of the control of those who specify the protocol, requires measures that also use up throughput. Wireless devices, BPL, and modems may produce a higher line rate or gross bit rate, due to error-correcting codes and other physical layer overhead. It is extremely common for throughput to be far less than half of theoretical maximum, though the more recent technologies employ preemptive spectrum analysis to avoid this and so have much more potential to reach actual gigabit rates in practice than prior modems.
Another factor reducing throughput is deliberate policy decisions made by Internet service providers that are made for contractual, risk management, aggregation saturation, or marketing reasons. Examples are rate limiting, bandwidth throttling, and the assignment of IP addresses to groups. These practices tend to minimize the throughput available to every user, but maximize the number of users that can be supported on one backbone.
Furthermore, chips implementing the fastest rates are often not yet available. AMD, for instance, does not support the 32-bit HyperTransport interface on any CPU it has shipped as of the end of 2009. Additionally, WiMAX service providers in the US typically support only up to 4 Mbit/s as of the end of 2009.
Choosing service providers or interfaces based on theoretical maxima is unwise, especially for commercial needs. A good example is large scale data centers, which should be more concerned with price per port to support the interface, wattage and heat considerations, and total cost of the solution. Because some protocols such as SCSI and Ethernet now operate many orders of magnitude faster than when originally deployed, scalability of the interface is one major factor, as it prevents costly shifts to technologies that are not backward compatible. Underscoring this is the fact that these shifts often happen involuntarily or by surprise, especially when a vendor abandons support for a proprietary system.

Conventions

By convention, bus and network data rates are denoted either in bits per second or bytes per second. In general, parallel interfaces are quoted in B/s and serial in bit/s. The more commonly used is shown below in bold type.
On devices like modems, bytes may be more than 8 bits long because they may be individually padded out with additional start and stop bits; the figures below will reflect this. Where channels use line codes, quoted rates are for the decoded signal.
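The effect of framing bits can be illustrated with the common 8-N-1 asynchronous serial setting (one start bit, eight data bits, one stop bit), which is an assumed example here rather than a rate taken from the tables:

```python
# Asynchronous serial framing pads each data byte with start/stop bits.
# Under 8-N-1, each 8-bit byte occupies 10 bit times on the wire.
def bytes_per_second(baud, start_bits=1, data_bits=8, stop_bits=1):
    bits_per_byte = start_bits + data_bits + stop_bits
    return baud / bits_per_byte

# A 9600 baud link therefore carries at most 960 payload bytes per second.
print(bytes_per_second(9600))  # 960.0
```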
The figures below are simplex data rates, which may conflict with the duplex rates vendors sometimes use in promotional materials. Where two values are listed, the first value is the downstream rate and the second value is the upstream rate.
All quoted figures are in metric decimal units. Note that these are not the traditional binary prefixes used for memory sizes. Decimal prefixes have long been established in data communications, predating 1998, when the IEC and other organizations introduced the binary prefixes and attempted to standardize their use across all computing applications.
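The gap between the decimal prefixes used in this list and the binary prefixes used for memory sizes can be quantified with a short sketch (the dictionary names are illustrative):

```python
# Data-communication rates use decimal (SI) prefixes; memory sizes
# traditionally used binary (IEC) ones. The gap widens with magnitude.
SI = {"k": 10**3, "M": 10**6, "G": 10**9}
IEC = {"Ki": 2**10, "Mi": 2**20, "Gi": 2**30}

print(SI["M"])              # 1000000   (1 Mbit/s = 1,000,000 bit/s)
print(IEC["Mi"])            # 1048576   (1 MiB    = 1,048,576 bytes)
print(IEC["Gi"] / SI["G"])  # 1.073741824 (about a 7% gap at giga scale)
```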

Bandwidths

The figures below are grouped by network or bus type, then sorted within each group from lowest to highest bandwidth; gray shading indicates a lack of known implementations.
As stated above, all quoted bandwidths are for each direction. Therefore for duplex interfaces, the stated values are simplex speeds, rather than total upstream+downstream.

Time signal station to radio clock

Teletypewriter (TTY) or telecommunications device for the deaf (TDD)

Modems (narrowband and broadband)

Narrowband (plain old telephone service, POTS: 4 kHz channel)

Broadband (hundreds of kHz to GHz wide)

Mobile telephone interfaces

Wide area networks

Local area networks

Wireless networks

802.11 networks in infrastructure mode are half-duplex; all stations share the medium. In infrastructure or access point mode, all traffic has to pass through an access point. Thus, when two stations on the same access point communicate with each other, every frame is transmitted twice: from the sender to the access point, then from the access point to the receiver. This approximately halves the effective bandwidth.
802.11 networks in ad hoc mode are still half-duplex, but devices communicate directly rather than through an access point. In this mode all devices must be able to "see" each other, instead of only having to be able to "see" the access point.
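The halving effect of access-point relaying can be made concrete with a small sketch, here using a nominal 54 Mbit/s 802.11g medium rate as an assumed example:

```python
# In 802.11 infrastructure mode, a frame between two stations on the same
# access point is sent twice (station -> AP, then AP -> station), so the
# usable station-to-station rate is roughly half the shared medium rate.
def station_to_station_rate(medium_rate_mbps, via_access_point=True):
    return medium_rate_mbps / 2 if via_access_point else medium_rate_mbps

print(station_to_station_rate(54))         # 27.0 (infrastructure mode)
print(station_to_station_rate(54, False))  # 54   (ad hoc, direct)
```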

Wireless personal area networks

Computer buses

Main buses

The LPC protocol includes high overhead. While the gross data rate equals 33.3 million 4-bit transfers per second, the fastest transfer type, firmware read, achieves only a fraction of that rate; the next fastest bus cycle, 32-bit ISA-style DMA write, yields still less, and other transfer types may be slower again.
Uses 128b/130b encoding, meaning that about 1.54% of each transfer is used by the interface instead of carrying data between the hardware components at each end of the interface. For example, a single link PCIe 3.0 interface has an 8 Gbit/s transfer rate, yet its usable bandwidth is only about 7.88 Gbit/s.
Uses 8b/10b encoding, meaning that 20% of each transfer is used by the interface instead of carrying data between the hardware components at each end of the interface. For example, a single link PCIe 1.0 has a 2.5 Gbit/s transfer rate, yet its usable bandwidth is only 2 Gbit/s.
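The line-code arithmetic in the two notes above follows a single formula: a code carrying k payload bits in n transmitted bits leaves the fraction k/n of the line rate usable. A minimal sketch (function name is illustrative):

```python
# A k-bit-in-n-bit line code leaves line_rate * k / n as usable bandwidth.
def usable_bandwidth(line_rate_gbps, k, n):
    return line_rate_gbps * k / n

# PCIe 1.0, 8b/10b: 20% overhead.
print(usable_bandwidth(2.5, 8, 10))               # 2.0 Gbit/s
# PCIe 3.0, 128b/130b: about 1.54% overhead.
print(round(usable_bandwidth(8.0, 128, 130), 2))  # 7.88 Gbit/s
```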

Portable

Storage

Uses 8b/10b encoding
Uses 64b/66b encoding
Uses 128b/150b encoding

Peripheral

Media access control (MAC) to PHY

PHY to XPDR

Dynamic random-access memory

The table below shows values for PC memory module types.
These modules usually combine multiple chips on one circuit board.
SIMM modules connect to the computer via an 8-bit- or 32-bit-wide interface. RIMM modules used by RDRAM are 16-bit- or 32-bit-wide.
DIMM modules connect to the computer via a 64-bit-wide interface.
Some other computer architectures use different modules with a different bus width.
In a single-channel configuration, only one module at a time can transfer information to the CPU.
In multi-channel configurations, multiple modules can transfer information to the CPU at the same time, in parallel.
FPM, EDO, SDR, and RDRAM memory was not commonly installed in a dual-channel configuration. DDR and DDR2 memory is usually installed in single- or dual-channel configuration. DDR3 memory is installed in single-, dual-, tri-, and quad-channel configurations.
Bit rates of multi-channel configurations are the product of the module bit-rate and the number of channels.
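The multi-channel arithmetic can be sketched directly: peak bandwidth is the transfer rate times the bus width in bytes times the number of channels. DDR3-1600 on a standard 64-bit DIMM is used as the example:

```python
# Peak module bandwidth = transfers/s * bus width in bytes * channels.
def memory_bandwidth_gbs(transfer_rate_mts, bus_bits=64, channels=1):
    """Transfer rate in MT/s; result in GB/s (decimal units)."""
    return transfer_rate_mts * 1e6 * (bus_bits // 8) * channels / 1e9

# DDR3-1600 on a 64-bit DIMM, dual channel:
print(memory_bandwidth_gbs(1600, channels=2))  # 25.6 GB/s
```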
The clock rate at which DRAM memory cells operate. The memory latency is largely determined by this rate. Note that until the introduction of DDR4 the internal clock rate saw relatively slow progress. DDR/DDR2/DDR3 memory uses 2n/4n/8n prefetch buffer to provide higher throughput, while the internal memory speed remains similar to that of the previous generation.
The "memory speed/clock" advertised by manufacturers and suppliers usually refers to this rate. Note that modern types of memory use a DDR bus with two transfers per clock.

Graphics processing units' RAM

RAM memory modules are also utilised by graphics processing units; however, memory modules for those differ somewhat from standard computer memory, particularly with lower power requirements, and are specialised to serve GPUs: for example, GDDR3 was fundamentally based on DDR2. Every graphics memory chip is directly connected to the GPU. The total GPU memory bus width varies with the number of memory chips and the number of lanes per chip. For example, GDDR5 specifies either 16 or 32 lanes per "device", while GDDR5X specifies 64 lanes per chip. Over the years, bus widths rose from 64-bit to 512-bit and beyond: e.g. HBM is 1024 bits wide.
Because of this variability, graphics memory speeds are sometimes compared per pin. For direct comparison to the values for 64-bit modules shown above, video RAM is compared here in 64-lane lots, corresponding to two chips for those devices with 32-bit widths.
In 2012, high-end GPUs used 8 or even 12 chips with 32 lanes each, for a total memory bus width of 256 or 384 bits. Combined with a transfer rate per pin of 5 GT/s or more, such cards could reach 240 GB/s or more.
RAM frequencies used for a given chip technology vary greatly. Where single values are given below, they are examples from high-end cards. Since many cards have more than one pair of chips, the total bandwidth is correspondingly higher. For example, high-end cards often have eight chips, each 32 bits wide, so the total bandwidth for such cards is four times the value given below.
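The figures in the two paragraphs above follow the same calculation: total GPU memory bandwidth is the bus width in bytes times the per-pin transfer rate. A minimal sketch:

```python
# GPU memory bandwidth = bus width in bytes * per-pin transfer rate.
def gpu_bandwidth_gbs(bus_width_bits, transfer_rate_gtps):
    """Bus width in bits, transfer rate in GT/s; result in GB/s."""
    return (bus_width_bits / 8) * transfer_rate_gtps

# Twelve 32-bit chips give a 384-bit bus; at 5 GT/s per pin:
print(gpu_bandwidth_gbs(384, 5))  # 240.0 GB/s
print(gpu_bandwidth_gbs(256, 5))  # 160.0 GB/s (eight-chip, 256-bit card)
```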

Digital audio

Digital video interconnects

Data rates given are from the video source to receiving device only. Out of band and reverse signaling channels are not included.
Uses 8b/10b encoding
Uses 16b/18b encoding
Uses 128b/132b encoding