Communication protocol


A communication protocol is a system of rules that allows two or more entities of a communications system to transmit information via any kind of variation of a physical quantity. The protocol defines the rules, syntax, semantics and synchronization of communication and possible error recovery methods. Protocols may be implemented by hardware, software, or a combination of both.
Communicating systems use well-defined formats for exchanging various messages. Each message has an exact meaning intended to elicit a response from a range of possible responses pre-determined for that particular situation. The specified behavior is typically independent of how it is to be implemented. Communication protocols have to be agreed upon by the parties involved. To reach an agreement, a protocol may be developed into a technical standard. A programming language describes the same for computations, so there is a close analogy between protocols and programming languages: protocols are to communication what programming languages are to computations. An alternate formulation states that protocols are to communication what algorithms are to computation.
Multiple protocols often describe different aspects of a single communication. A group of protocols designed to work together is known as a protocol suite; when implemented in software they are a protocol stack.
Internet communication protocols are published by the Internet Engineering Task Force. The IEEE handles wired and wireless networking and the International Organization for Standardization handles other types. The ITU-T handles telecommunication protocols and formats for the public switched telephone network. As the PSTN and Internet converge, the standards are also being driven towards convergence.

Communicating systems

History

One of the first uses of the term protocol in a data communication context occurs in a memorandum entitled A Protocol for Use in the NPL Data Communications Network, written by Roger Scantlebury and Keith Bartlett in April 1967.
On the ARPANET, the starting point for host-to-host communication in 1969 was the 1822 protocol, which defined the transmission of messages to an IMP. The Network Control Program for the ARPANET was first implemented in 1970. The NCP interface allowed application software to connect across the ARPANET by implementing higher-level communication protocols, an early example of the protocol layering concept.
Networking research in the early 1970s by Robert E. Kahn and Vint Cerf led to the formulation of the Transmission Control Program. Its specification was written by Cerf with Yogen Dalal and Carl Sunshine in December 1974, still a monolithic design at this time.
The International Networking Working Group agreed on a connectionless datagram standard, which was presented to the CCITT in 1975 but was not adopted by the ITU or by the ARPANET. International research, particularly the work of Rémi Després, contributed to the development of the X.25 standard, based on virtual circuits, which was adopted by the ITU-T in 1976. Computer manufacturers developed proprietary protocols such as IBM's Systems Network Architecture, Digital Equipment Corporation's DECnet and Xerox Network Systems.
TCP software was redesigned as a modular protocol stack. Originally referred to as IP/TCP, it was installed on SATNET in 1982 and on the ARPANET in January 1983. The development of a complete protocol suite by 1989 laid the foundation for the growth of TCP/IP as a comprehensive protocol suite and as the core component of the emerging Internet.
International work on a reference model for communication standards led to the OSI model, published in 1984. For a period in the late 1980s and early 1990s, engineers, organizations and nations became polarized over the issue of which standard, the OSI model or the Internet protocol suite, would result in the best and most robust computer networks.

Concept

The information exchanged between devices through a network or other media is governed by rules and conventions that can be set out in communication protocol specifications. The nature of the communication, the actual data exchanged and any state-dependent behaviors are defined by these specifications. In digital computing systems, the rules can be expressed by algorithms and data structures. Protocols are to communication what algorithms or programming languages are to computations.
Operating systems usually contain a set of cooperating processes that manipulate shared data to communicate with each other. This communication is governed by well-understood protocols, which can be embedded in the process code itself. In contrast, because there is no shared memory, communicating systems have to communicate with each other using a shared transmission medium. Transmission is not necessarily reliable, and individual systems may use different hardware or operating systems.
To implement a networking protocol, the protocol software modules are interfaced with a framework implemented on the machine's operating system. This framework implements the networking functionality of the operating system. When protocol algorithms are expressed in a portable programming language the protocol software may be made operating system independent. The best-known frameworks are the TCP/IP model and the OSI model.
At the time the Internet was developed, abstraction layering had proven to be a successful design approach for both compiler and operating system design and, given the similarities between programming languages and communication protocols, the originally monolithic networking programs were decomposed into cooperating protocols. This gave rise to the concept of layered protocols which nowadays forms the basis of protocol design.
Systems typically do not use a single protocol to handle a transmission. Instead they use a set of cooperating protocols, sometimes called a protocol suite. Some of the best known protocol suites are TCP/IP, IPX/SPX, X.25, AX.25 and AppleTalk.
Protocols can be arranged in groups based on functionality; for instance, there is a group of transport protocols. The functionalities are mapped onto the layers, each layer solving a distinct class of problems relating to, for instance: application-, transport-, internet- and network interface-functions. To transmit a message, a protocol has to be selected from each layer. The selection of the next protocol is accomplished by extending the message with a protocol selector for each layer.
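To make the selector idea concrete, here is a minimal Python sketch; the one-byte selector values and the handler names are hypothetical, standing in for real-world selectors such as an EtherType field, an IP protocol number, or a TCP/UDP port.

    # Hypothetical one-byte protocol selectors, standing in for real selectors
    # such as an EtherType field, an IP protocol number, or a TCP/UDP port.
    HANDLERS = {
        0x01: lambda payload: f"transport protocol A got {payload!r}",
        0x02: lambda payload: f"transport protocol B got {payload!r}",
    }

    def send(selector: int, payload: bytes) -> bytes:
        # Extend the message with a selector so the receiver can demultiplex.
        return bytes([selector]) + payload

    def receive(packet: bytes) -> str:
        # Read the selector and dispatch the payload to the protocol above.
        selector, payload = packet[0], packet[1:]
        return HANDLERS[selector](payload)

    print(receive(send(0x01, b"hello")))   # handed to transport protocol A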

Basic requirements

Getting the data across a network is only part of the problem for a protocol. The data received has to be evaluated in the context of the progress of the conversation, so a protocol must include rules describing the context. Rules of this kind are said to express the syntax of the communication. Other rules determine whether the data is meaningful for the context in which the exchange takes place; such rules are said to express the semantics of the communication.
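The distinction can be made concrete with a small Python sketch, assuming a hypothetical TYPE:BODY message format: the regular expression expresses a syntactic rule, while the state table expresses a semantic rule.

    import re

    # Syntactic rule: a message has the form TYPE:BODY (a hypothetical format).
    SYNTAX = re.compile(r"^(HELLO|DATA|ACK):(.*)$")

    # Semantic rule: which message types are meaningful in each conversation state.
    MEANINGFUL = {
        "idle": {"HELLO"},
        "started": {"DATA", "ACK"},
    }

    def evaluate(message: str, state: str) -> str:
        match = SYNTAX.match(message)
        if match is None:
            return "syntax error"       # malformed: violates the syntax rules
        if match.group(1) not in MEANINGFUL[state]:
            return "semantic error"     # well-formed but meaningless in this context
        return "accepted"

    print(evaluate("DATA:abc", "idle"))  # semantic error: conversation not started
    print(evaluate("HELLO:", "idle"))    # accepted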
Messages are sent and received on communicating systems to establish communication. Protocols should therefore specify rules governing the transmission. In general, much of the following should be addressed (several items are illustrated in the sketch after the list):
- Data formats for data exchange
- Address formats for data exchange
- Address mapping
- Routing
- Detection of transmission errors
- Acknowledgements
- Loss of information - timeouts and retries
- Direction of information flow
- Sequence control
- Flow control
- Queueing
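Several of these items appear in the following Python sketch of a hypothetical stop-and-wait style sender (not any particular protocol): a checksum for detection of transmission errors, a sequence number for sequence control, and timeouts with retries to cope with loss of information. The unreliable_send and wait_for_ack callables are assumed placeholders for a real transmission medium.

    import zlib

    def make_frame(seq: int, payload: bytes) -> bytes:
        # 2-byte sequence number (sequence control) followed by a 4-byte CRC;
        # the receiver recomputes the CRC to detect transmission errors.
        header = seq.to_bytes(2, "big")
        checksum = zlib.crc32(header + payload).to_bytes(4, "big")
        return header + checksum + payload

    def send_reliably(seq, payload, unreliable_send, wait_for_ack, retries=3):
        # unreliable_send and wait_for_ack are assumed placeholders standing in
        # for a real transmission medium and its acknowledgement channel.
        frame = make_frame(seq, payload)
        for _ in range(retries):
            unreliable_send(frame)
            if wait_for_ack(seq, timeout=1.0):  # acknowledgement arrived in time
                return True
            # timeout: assume the frame or its acknowledgement was lost; retry
        return False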

Protocol design

Systems engineering principles have been applied to create a set of common network protocol design principles. The design of complex protocols often involves decomposition into simpler, cooperating protocols. Such a set of cooperating protocols is sometimes called a protocol family or a protocol suite, within a conceptual framework.
Communicating systems operate concurrently. An important aspect of concurrent programming is the synchronization of the software for receiving and transmitting messages in the proper sequence. Concurrent programming has traditionally been a topic in operating systems theory texts. Formal verification seems indispensable because concurrent programs are notorious for the hidden and sophisticated bugs they contain. A mathematical approach to the study of concurrency and communication is referred to as communicating sequential processes. Concurrency can also be modeled using finite state machines, such as Mealy and Moore machines, which are in use as design tools in digital electronics systems encountered in the form of hardware used in telecommunication or electronic devices in general.
The literature presents numerous analogies between computer communication and programming. By analogy, the transfer mechanism of a protocol is comparable to a central processing unit. The framework introduces rules that allow the programmer to design cooperating protocols independently of one another.

Layering

In modern protocol design, protocols are layered to form a protocol stack. Layering is a design principle that divides the protocol design task into smaller steps, each of which accomplishes a specific part, interacting with the other parts of the protocol only in a small number of well-defined ways. Layering allows the parts of a protocol to be designed and tested without a combinatorial explosion of cases, keeping each design relatively simple.
The communication protocols in use on the Internet are designed to function in diverse and complex settings. Internet protocols are designed for simplicity and modularity and fit into a coarse hierarchy of functional layers defined in the Internet Protocol Suite. The first two cooperating protocols, the Transmission Control Protocol (TCP) and the Internet Protocol (IP), resulted from the decomposition of the original Transmission Control Program, a monolithic communication protocol, into this layered communication suite.
The OSI model was developed internationally, based on experience with networks that predated the Internet, as a reference model for general communication with much stricter rules of protocol interaction and rigorous layering.
Typically, application software is built upon a robust data transport layer. Underlying this transport layer is a datagram delivery and routing mechanism that is typically connectionless in the Internet. Packet relaying across networks happens over another layer that involves only network link technologies, which are often specific to certain physical layer technologies, such as Ethernet. Layering provides opportunities to exchange technologies when needed; for example, protocols are often stacked in a tunneling arrangement to accommodate the connection of dissimilar networks, as when IP is tunneled across an Asynchronous Transfer Mode network.
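A toy Python sketch of such a tunneling arrangement is shown below; the header strings and delimiter are invented for illustration, the point being only that the inner packet, header and all, travels as opaque payload of the outer protocol.

    def wrap(header: bytes, payload: bytes) -> bytes:
        # "|" is a toy delimiter; real protocols use length or type fields.
        return header + b"|" + payload

    # The inner packet, header and all, is carried as opaque payload of the
    # outer protocol, so the outer network never inspects the inner protocol.
    inner = wrap(b"IP src=10.0.0.1 dst=10.0.0.2", b"application data")
    tunnel = wrap(b"OUTER src=A dst=B", inner)
    print(tunnel)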

Protocol layering

Protocol layering forms the basis of protocol design. It allows the decomposition of single, complex protocols into simpler, cooperating protocols. The protocol layers each solve a distinct class of communication problems. Together, the layers make up a layering scheme or model.
Computations deal with algorithms and data; communication involves protocols and messages; so the analog of a data flow diagram is some kind of message flow diagram. To visualize protocol layering and protocol suites, a diagram of the message flows in and between two systems, A and B, is shown in figure 3. The systems, A and B, both make use of the same protocol suite. The vertical flows are in-system and the horizontal message flows are between systems. The message flows are governed by rules and data formats specified by protocols. The blue lines mark the boundaries of the protocol layers.

Software layering

Having established the protocol layering and the protocols, the protocol designer can now proceed with the software design. The software has a layered organization and its relationship with protocol layering is visualized in figure 5.
The software modules implementing the protocols are represented by cubes. The information flow between the modules is represented by arrows. The red arrows denote virtual message flows between peer modules; the blue lines mark the layer boundaries.
To send a message on system A, the top module interacts with the module directly below it and hands over the message to be encapsulated. The lower module reacts by encapsulating the message in its own data area, filling in its header data in accordance with the protocol it implements, and interacting with the module below it by handing over this newly formed message whenever appropriate. The bottom module directly interacts with the bottom module of system B, so the message is sent across. On the receiving system B the reverse happens, so ultimately the message is delivered in its original form to the top module of system B.
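The following Python sketch models this interaction with toy string headers; the layer names and header format are illustrative, not those of any real stack.

    LAYERS = ["application", "transport", "network", "link"]   # top to bottom

    def send_down(message: str) -> str:
        # Each module below the top encapsulates the message from the module
        # above by prepending its own header.
        for layer in LAYERS[1:]:
            message = f"{layer}-hdr[{message}]"
        return message                      # the bottom module sends this across

    def receive_up(frame: str) -> str:
        # On the receiving system the reverse happens: each module strips the
        # header written by its peer and hands the rest upward.
        for layer in reversed(LAYERS[1:]):
            prefix = f"{layer}-hdr["
            assert frame.startswith(prefix) and frame.endswith("]")
            frame = frame[len(prefix):-1]
        return frame                        # original message, delivered on top

    wire = send_down("hello")
    print(wire)               # link-hdr[network-hdr[transport-hdr[hello]]]
    print(receive_up(wire))   # hello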


On protocol errors, a receiving module discards the piece it has received and reports the error condition back to the original source of the piece on the same layer, by handing the error message down or, in the case of the bottom module, by sending it across.


The division of the message or stream of data into pieces and the subsequent reassembly are handled in the layer that introduced the division; the reassembly is done at the destination.
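A minimal Python sketch of the division and reassembly follows; the fragment format (index, total count, piece) is hypothetical, but it shows why reassembly can be confined to the layer that introduced the division, even when fragments arrive out of order.

    def divide(message: bytes, size: int) -> list:
        # Each fragment carries its index and the total count, so the same
        # layer at the destination can reassemble the original message.
        pieces = [message[i:i + size] for i in range(0, len(message), size)]
        return [(index, len(pieces), piece) for index, piece in enumerate(pieces)]

    def reassemble(fragments: list) -> bytes:
        total = fragments[0][1]
        assert len(fragments) == total, "a fragment was lost"
        ordered = sorted(fragments)          # indices restore the original order
        return b"".join(piece for _, _, piece in ordered)

    frags = divide(b"a message too long for one packet", size=8)
    print(reassemble(list(reversed(frags))))  # out-of-order arrival, intact result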
Program translation is divided into four subproblems: compiler, assembler, link editor, and loader. As a result, the translation software is layered as well, allowing the software layers to be designed independently. Noting that the ways to conquer the complexity of program translation could readily be applied to protocols because of the analogy between programming languages and protocols, the designers of the TCP/IP protocol suite were keen on imposing the same layering on the software framework. This can be seen in the TCP/IP layering: a Pascal program is compiled into an assembler program, which is assembled into object code, which is linked together with library object code by the link editor to produce relocatable machine code, which is passed to the loader to fill in the memory locations and produce executable code to be loaded into physical memory. Program translation thus forms a linear sequence, because each layer's output is passed as input to the next layer, and the translation process involves multiple data representations. The same thing is seen in protocol software, where multiple protocols define the data representations of the data passed between the software modules.
The modules below the application layer are generally considered part of the operating system. Passing data between these modules is much less expensive than passing data between an application program and the transport layer. The boundary between application layer and transport layer is called the operating system boundary.

Strict layering

Strictly adhering to a layered model, a practice known as strict layering, is not always the best approach to networking. Strict layering can have a serious impact on the performance of the implementation, so there is at least a trade-off between simplicity and performance.
While the use of protocol layering is today ubiquitous across the field of computer networking, it has been historically criticized by many researchers. One principal objection is that abstracting the protocol stack in this way may cause a higher layer to duplicate the functionality of a lower layer, a prime example being error recovery on both a per-link basis and an end-to-end basis.

Design patterns for application layer protocols

Commonly recurring problems in the design and implementation of communication protocols can be addressed by patterns from several different pattern languages: Pattern Language for Application-level Communication Protocols, Service Design Patterns, Patterns of Enterprise Application Architecture, and Pattern-Oriented Software Architecture: A Pattern Language for Distributed Computing. The first of these pattern languages focuses on the design of protocols and not their implementations. The others address implementation issues, or both design and implementation.

Formal specification

Formal methods of describing communication syntax include Abstract Syntax Notation One (ASN.1) and Augmented Backus-Naur Form (ABNF).
Finite state machine models and communicating finite-state machines are used to formally describe the possible interactions of the protocol.
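As an illustration, the following Python sketch describes a toy connection protocol as a finite state machine: the transition table enumerates the possible interactions, and any event outside the table is a protocol error. The states and events are hypothetical, loosely reminiscent of a TCP-style handshake.

    # (state, event) -> next state; any pair missing from the table is a
    # protocol error. States and events are hypothetical.
    TRANSITIONS = {
        ("closed", "connect"): "syn_sent",
        ("syn_sent", "syn_ack"): "established",
        ("established", "data"): "established",
        ("established", "close"): "closed",
    }

    def step(state: str, event: str) -> str:
        if (state, event) not in TRANSITIONS:
            raise ValueError(f"protocol error: {event!r} not valid in {state!r}")
        return TRANSITIONS[(state, event)]

    state = "closed"
    for event in ["connect", "syn_ack", "data", "close"]:
        state = step(state, event)
        print(event, "->", state)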

Protocol development

For communication to occur, protocols have to be selected. The rules can be expressed by algorithms and data structures. Hardware and operating system independence is enhanced by expressing the algorithms in a portable programming language. Source independence of the specification provides wider interoperability.
Protocol standards are commonly created by obtaining the approval or support of a standards organization, which initiates the standardization process. This activity is referred to as protocol development. The members of the standards organization agree to adhere to the work result on a voluntary basis. Often the members are in control of large market-shares relevant to the protocol and in many cases, standards are enforced by law or the government because they are thought to serve an important public interest, so getting approval can be very important for the protocol.

The need for protocol standards

The need for protocol standards can be shown by looking at what happened to the Binary Synchronous Communication (BSC) protocol invented by IBM. BSC is an early link-level protocol used to connect two separate nodes. It was originally not intended to be used in a multinode network, but doing so revealed several deficiencies of the protocol. In the absence of standardization, manufacturers and organizations felt free to 'enhance' the protocol, creating incompatible versions on their networks. In some cases, this was deliberately done to discourage users from using equipment from other manufacturers. There are more than 50 variants of the original bi-sync protocol. One can assume that a standard would have prevented at least some of this from happening.
In some cases, protocols gain market dominance without going through a standardization process. Such protocols are referred to as de facto standards. De facto standards are common in emerging markets, niche markets, or markets that are monopolized. They can hold a market in a very negative grip, especially when used to scare away competition. From a historical perspective, standardization should be seen as a measure to counteract the ill-effects of de facto standards. Positive exceptions exist; a 'de facto standard' operating system like GNU/Linux does not have this negative grip on its market, because the sources are published and maintained in an open way, thus inviting competition. Standardization is therefore not the only solution for open systems interconnection.

Standards organizations

Some of the standards organizations of relevance for communication protocols are the International Organization for Standardization, the International Telecommunication Union, the Institute of Electrical and Electronics Engineers, and the Internet Engineering Task Force. The IETF maintains the protocols in use on the Internet. The IEEE controls many software and hardware protocols in the electronics industry for commercial and consumer devices. The ITU is an umbrella organization of telecommunication engineers designing the public switched telephone network, as well as many radio communication systems. For marine electronics the NMEA standards are used. The World Wide Web Consortium produces protocols and standards for Web technologies.
International standards organizations are supposed to be more impartial than local organizations with a national or commercial self-interest to consider. Standards organizations also do research and development for standards of the future. In practice, the standards organizations mentioned cooperate closely with each other.

The standardization process

The standardization process starts off with ISO commissioning a sub-committee workgroup. The workgroup issues working drafts and discussion documents to interested parties in order to provoke discussion and comments. This will generate a lot of questions, much discussion and usually some disagreement on what the standard should provide and whether it can satisfy all needs. All conflicting views should be taken into account, often by way of compromise, in order to progress to a draft proposal of the working group.
The draft proposal is discussed by the member countries' standard bodies and other organizations within each country. Comments and suggestions are collated and national views are formulated before the members of ISO vote on the proposal. If the proposal is rejected, the objections and counter-proposals have to be considered in order to create a new draft proposal for another vote. After a lot of feedback, modification, and compromise, the proposal reaches the status of a draft international standard, and ultimately an international standard.
The process normally takes several years to complete. By that time, the original paper draft created by the designer will differ substantially from the standard.
International standards are reissued periodically to handle the deficiencies and reflect changing views on the subject.

OSI standardization

A lesson learned from ARPANET, the predecessor of the Internet, was that protocols need a framework to operate. It is therefore important to develop a general-purpose, future-proof framework suitable for structured protocols and their standardization. This would prevent protocol standards with overlapping functionality and would allow clear definition of the responsibilities of a protocol at the different levels. This gave rise to the Open Systems Interconnection (OSI) reference model, which is used as a framework for the design of standard protocols and services conforming to the various layer specifications.
In the OSI model, communicating systems are assumed to be connected by an underlying physical medium providing a basic transmission mechanism. The layers above it are numbered from one upward; the nth layer is referred to as the (n)-layer. Each layer provides service to the layer above it using the services of the layer immediately below it. The layers communicate with each other by means of an interface, called a service access point. Corresponding layers at each system are called peer entities. To communicate, two peer entities at a given layer use an (n)-protocol, which is implemented by using services of the (n-1)-layer. When systems are not directly connected, intermediate peer entities are used. An address uniquely identifies a service access point. The address naming domains need not be restricted to one layer, so it is possible to use just one naming domain for all layers.
For each layer, there are two types of standards: protocol standards defining how peer entities at a given layer communicate, and service standards defining how a given layer communicates with the layer above it.
In the original version of RM/OSI, the layers are, from lowest to highest: the physical layer, the data link layer, the network layer, the transport layer, the session layer, the presentation layer, and the application layer.
In contrast to the TCP/IP layering scheme, which assumes a connectionless network, RM/OSI assumed a connection-oriented network. Connection-oriented networks are more suitable for wide area networks and connectionless networks are more suitable for local area networks. Using connections to communicate implies some form of session and circuits, hence the session layer. The constituent members of ISO were mostly concerned with wide area networks, so development of RM/OSI concentrated on connection-oriented networks and connectionless networks were only mentioned in an addendum to RM/OSI.
At the time, the IETF had to cope with this and the fact that the Internet needed protocols that simply were not there. As a result, the IETF developed its own standardization process based on "rough consensus and running code".
Nowadays, the IETF has become a standards organization for the protocols in use on the Internet. RM/OSI has extended its model to include connectionless services and because of this, both TCP and IP could be developed into international standards.

Taxonomies

Classification schemes for protocols usually focus on the domain of use and function. As an example of domain of use, connection-oriented protocols and connectionless protocols are used on connection-oriented networks and connectionless networks respectively. An example of function is a tunneling protocol, which is used to encapsulate packets in a high-level protocol so that the packets can be passed across a transport system using the high-level protocol.
A layering scheme combines both function and domain of use. The dominant layering schemes are the ones proposed by the IETF and by ISO. Although the underlying assumptions of the two layering schemes are different enough to warrant distinguishing them, it is common practice to compare the two by relating common protocols to the layers of each scheme.
The layering scheme from the IETF is called Internet layering or TCP/IP layering.
The layering scheme from ISO is called the OSI model or ISO layering.
In networking equipment configuration, a term-of-art distinction is often drawn: The term "protocol" strictly refers to the transport layer, and the term "service" refers to protocols utilizing a "protocol" for transport. In the common case of TCP and UDP, services are distinguished by port numbers. Conformance to these port numbers is voluntary, so in content inspection systems the term "service" strictly refers to port numbers, and the term "application" is often used to refer to protocols identified through inspection signatures.