This manual commences with an overview of wireless communications and how radio works, followed by a practical discussion of Ethernet as this is always a key ingredient in a successful wireless implementation strategy. It will give you a clear understanding of the choices available to you in designing and implementing your own wireless and associated Ethernet networks.
Practical Wireless, Ethernet and TCP/IP Networking - Introduction
Communication networks evolved due to the need to exchange and share information amongst a group of machines. During the last century many kinds of communication networks have been developed, such as telephone networks, computer networks and cable TV networks.
With the need for data exchange superseding voice and picture transmission, computer networks have become the most prevalent of all communication networks. Depending on the distances between the computers, computer networks can be further differentiated into:
- Local Area Networks (LANs), covering a room, building or campus
- Metropolitan Area Networks (MANs), covering a city or metropolitan area
- Wide Area Networks (WANs), covering national and international distances
1.2 The OSI model
A communication framework that has had a tremendous impact on the design of LANs/WLANs is the Open Systems Interconnection (OSI) model of the International Organization for Standardization (ISO). The objective of this model is to provide a framework for the co-ordination of standards development, and to allow existing as well as evolving standards activities to be set within that common framework.
The various technologies described in this manual relate to different layers of the OSI model. For example, Ethernet and Wi-Fi operate at the Physical and Data Link layers, IP at the Network layer, and TCP and UDP at the Transport layer.
For that reason a quick review of the OSI model basics is a necessity.
1.2.1 Open and closed systems
The interconnection of two or more devices with some form of digital communication is the first step towards establishing a network. In addition to the hardware requirements as discussed above, the software problems of communication must also be overcome. Where all the devices on a network are from the same manufacturer, the hardware and software problems are usually easily overcome because all the system components have usually been designed within the same guidelines and specifications.
Proprietary networks that comprise hardware and software from only one vendor are called closed systems. In most cases these systems were developed at a time before standardization, or when it was considered unlikely that equipment from other manufacturers would be included in the network.
In contrast, ‘open’ systems conform to specifications and guidelines that are ‘open’ to all. This allows equipment from any manufacturer that complies with that standard to be used interchangeably on the network. The benefits of open systems include wider availability of equipment, lower prices and easier integration with other components.
1.2.2 The OSI concept
Faced with the proliferation of closed network systems, the ISO defined a ‘Reference Model for Communication between Open Systems’ (ISO 7498) in 1978. This has since become known as the OSI model. The OSI model is essentially a data communications management structure that breaks data communications down into a manageable hierarchy (‘stack’) of seven layers. Each layer has a defined purpose and interfaces with the layers above it and below it.
By laying down functions and services for each layer, some flexibility is allowed so that the system designers can develop protocols for each layer independently of each other. By conforming to the OSI standards, a system is able to communicate with any other compliant system, anywhere in the world.
The OSI model supports a client/server model, and since there must be at least two nodes in order to communicate, each layer also appears to converse with its peer layer at the other end of the communication channel in a virtual (‘logical’) manner. The isolation of each layer’s processes, together with standardized interfaces and peer-to-peer virtual communication, is fundamental to a layered model such as the OSI model. This concept is shown in Figure 1.1.
Figure 1.1 OSI layering concept
The actual functions within each layer are provided by entities (abstract devices such as programs, functions, or protocols) that implement the services for a particular layer on a single machine. A layer may have more than one entity; for example, a protocol entity and a management entity. Entities in adjacent layers interact through the common upper and lower boundaries by passing information through Service Access Points (SAPs). A SAP can be compared to a predefined ‘postbox’ where one layer collects data from the adjacent layer. The relationship between layers, entities, functions and SAPs is shown in Figure 1.2.
Figure 1.2 Relationship between layers, entities, functions and SAPs
In the OSI model, the entity in the next higher layer is referred to as the N+1 entity and the entity in the next lower layer as N–1. The services available to the higher layers are the result of the services provided by all the lower layers.
The functions and capabilities expected at each layer are specified in the model. However, the model does not prescribe how this functionality should be implemented. The focus in the model is on the ‘interconnection’ and on the information that can be passed over this connection. The OSI model does not concern itself with the internal operations of the systems involved.
When the OSI model was being developed, a number of principles were used to determine exactly how many layers this communication model should encompass. These principles are:
- A layer should be created where a different level of abstraction is needed
- Each layer should perform a well-defined function
- The function of each layer should be chosen with a view to defining internationally standardized protocols
- The layer boundaries should be chosen to minimize the information flow across the interfaces
- The number of layers should be large enough that distinct functions need not be grouped together in the same layer, yet small enough that the architecture does not become unwieldy
The use of these principles led to seven layers being defined, each of which has been given a name in accordance with its purpose. Figure 1.3 below shows the seven layers.
Figure 1.3 The OSI reference model
The service provided by any layer is expressed in the form of a service primitive with the data to be transferred as a parameter. A service primitive is a fundamental service request made between protocols. For example, layer W may sit on top of layer X. If W wishes to invoke a service from X, it may issue a service primitive in the form of X.Connect.request to X.
Typically, each layer in the transmitting stack, with the exception of the lowest, adds Protocol Control Information (PCI) – the ‘header’ – to the data before passing it across the interface to the next layer. This interface defines which primitive operations and services the lower layer offers to the upper one. The headers are used for peer-to-peer layer communication between the stacks, and some layer implementations use the headers to invoke functions and services at the adjacent (N+1 or N–1) layers.
At the transmitting stack, the user application (e.g. the client) invokes the process by passing data, primitive names and control information to the uppermost layer of the protocol stack. The stack then passes the data down through the layers of the stack, adding headers (and possibly trailers), and invoking functions in accordance with the rules of the protocol at each layer.
At each layer, the ‘data’ received at a certain layer (including headers from the layers above it) is referred to as a Service Data Unit or SDU. This is normally prefixed with the first letter of the name of the layer. For example, the Transport layer receives a TSDU from the Session layer. The Transport layer then processes it, adds a header, and creates a Transport Protocol Data Unit or TPDU.
At the receiving site, the opposite occurs with the headers being stripped from the data as it is passed up through the layers of the receiving stack. Generally speaking, layers in the same stack communicate with parameters passed through primitives, and peer layers communicate with the use of the headers across the network.
At this stage it should be quite clear that there is no physical connection or direct communication between the peer layers of the communicating applications. Instead, all physical communication takes place across the lowest (Physical) layer of the stack: downwards through the protocol stack on the transmitting node and upwards through the receiving stack. Figure 1.4 shows the full architecture of the OSI model, whilst Figure 1.5 shows the effects of the addition of headers to the respective SDUs at each layer. The net effect of this extra information is to reduce the effective bandwidth of the communications channel, since some of the available capacity is used to carry control information rather than user data (see also Figure 1.6).
Figure 1.4 Peer layer interaction in the OSI model
Figure 1.5 OSI message passing
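The encapsulation process described above can be sketched in a few lines of illustrative Python. Each layer on the transmitting side prepends its own header (PCI) to the SDU it receives, forming that layer’s PDU, and the receiving side strips the headers off in reverse order. The layer names are from the model; the header markers themselves are invented for illustration only.

```python
# Sketch only: OSI-style encapsulation. Each layer prepends a (made-up)
# header to the SDU it receives; the receiver strips them in reverse order.

LAYERS = ["Application", "Presentation", "Session",
          "Transport", "Network", "Data Link"]

def transmit(data: bytes) -> bytes:
    """Pass user data down the stack, adding a header at each layer."""
    pdu = data
    for layer in LAYERS:
        header = f"[{layer[0]}H]".encode()  # e.g. b"[TH]" for Transport
        pdu = header + pdu                  # PDU = PCI + SDU
    return pdu

def receive(pdu: bytes) -> bytes:
    """Pass the received PDU up the stack, stripping each header."""
    for layer in reversed(LAYERS):          # outermost header first
        header = f"[{layer[0]}H]".encode()
        assert pdu.startswith(header), f"missing {layer} header"
        pdu = pdu[len(header):]
    return pdu
```

Note how the header added first (Application) ends up innermost, while the Data Link header, added last, is outermost on the wire – exactly the nesting shown in Figure 1.5.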
1.2.3 OSI layer services
The services provided at each layer of the stack are as follows.
The Application layer is the uppermost layer in the OSI reference model and is responsible for giving applications access to the protocol stack. Examples of Application-layer tasks include file transfer, electronic mail (e-mail) services, and network management. In order to accomplish its tasks, the Application layer passes program requests and data to the Presentation layer, which is responsible for encoding the Application layer’s data in the appropriate form.
The Presentation layer is responsible for presenting information in a manner suitable for the applications or users dealing with the information. Functions such as data conversion from EBCDIC to ASCII (or vice versa), the use of special graphics or character sets, data compression or expansion, and data encryption or decryption are carried out at this layer. The Presentation layer provides services for the Application layer above it, and uses the Session layer below it. In practice, the Presentation layer rarely appears in pure form, and it is the least well defined of the OSI layers. Application- or Session-layer programs often encompass some or all of the Presentation layer functions.
The Session layer is responsible for synchronizing and sequencing the dialog and packets in a network connection. This layer is also responsible for ensuring that the connection is maintained until the transmission is complete, and that the appropriate security measures are taken during a ‘session’. The Session layer is used by the Presentation layer above it, and uses the Transport layer below it.
In the OSI reference model, the Transport layer is responsible for providing data transfer at an agreed-upon level of quality, such as at specified transmission speeds and error rates. To ensure delivery, some Transport layer protocols assign sequence numbers to outgoing packets. The Transport layer at the receiving end checks the packet numbers to make sure all have been delivered and to put the packet contents into the proper sequence for the recipient.
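The use of sequence numbers described above can be illustrated with a small sketch. The packet representation here (a list of sequence-number/payload pairs) is hypothetical, not taken from any real Transport protocol:

```python
# Sketch only: a Transport-layer receiver using sequence numbers to detect
# missing segments and restore the original byte order of the data.

def reassemble(packets, expected_count):
    """packets: list of (seq_number, payload) pairs, possibly out of order."""
    received = dict(packets)
    missing = [n for n in range(expected_count) if n not in received]
    if missing:
        # A real protocol would request retransmission of these segments.
        raise ValueError(f"retransmission needed for segments {missing}")
    # Concatenate payloads in sequence-number order.
    return b"".join(received[n] for n in range(expected_count))
```

For example, `reassemble([(2, b"c"), (0, b"a"), (1, b"b")], 3)` restores the payloads to `b"abc"` even though the segments arrived out of order.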
The Transport layer provides services for the Session layer above it, and uses the Network layer below it to find a route between source and destination. The Transport layer is crucial in many ways, because it sits between the upper layers, which are strongly application-dependent, and the lower ones, which are network-based.
The layers below the Transport layer are collectively known as the ‘subnet’ layers. Depending on how well (or poorly) they perform their functions, the Transport layer has to intervene less (or more) in order to maintain a reliable connection.
The Network layer is the third layer from the bottom up, or the uppermost ‘subnet layer’. It is responsible for the following tasks:
- Logical (network) addressing of hosts
- Routing of packets from source to destination across intermediate nodes
- Fragmentation and reassembly of packets to suit the frame sizes of the underlying networks
- Congestion control
Data link layer
The Data Link layer is responsible for creating, transmitting, and receiving data packets. It provides services for the various protocols at the Network layer, and uses the Physical layer to transmit or receive material. The Data Link layer creates packets appropriate for the network architecture being used. Requests and data from the Network layer are part of the data in these packets (or frames, as they are often called at this layer). These frames are passed down to the Physical layer from where they are transmitted to the Physical layer on the destination host via the medium. Network architectures (such as Ethernet and Wi-Fi) typically encompass the Physical layer and the lower half of the Data Link layer.
The IEEE 802 networking working groups have refined the Data Link layer into two sub-layers:
- The Logical Link Control (LLC) sub-layer (IEEE 802.2)
- The Media Access Control (MAC) sub-layer
The LLC sub-layer provides an interface for the Network layer protocols, and controls the logical communication with its peer at the receiving side. The MAC sub-layer controls physical access to the medium.
The Physical layer is the lowest layer in the OSI model. This layer gets data packets from the Data Link layer above it, and converts the contents of these packets into a series of electrical signals that represent ‘0’ and ‘1’ values in a digital transmission. These signals are sent across a transmission medium to the Physical layer at the receiving end. At the destination, the Physical layer converts the electrical signals back into a series of bit values. These values are grouped into frames and passed up to the Data Link layer.
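The Physical layer’s job of serializing a frame’s bytes into individual bit values, and regrouping them at the far end, can be sketched as follows. This is an illustration only; real Physical layers use line coding schemes (e.g. Manchester encoding) rather than character strings of ‘0’ and ‘1’:

```python
# Sketch only: the bit-level view the Physical layer deals in -- a frame's
# bytes serialized into individual bit values and regrouped at the far end.

def to_bits(frame: bytes) -> str:
    """Serialize each byte into its 8-bit binary representation."""
    return "".join(f"{byte:08b}" for byte in frame)

def from_bits(bits: str) -> bytes:
    """Regroup a stream of bit values into bytes, 8 bits at a time."""
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))
```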
The required mechanical and electrical properties of the transmission medium are defined at this level. These include:
- The type of connectors and their pin assignments
- Signal voltage levels and timing
- The data signaling rate
- Maximum transmission distances
The medium itself is, however, not specified here. For example, Fast Ethernet dictates Cat5 cable, but the cable itself is specified in TIA/EIA-568-B.
1.2.4 Ethernet
Ethernet is, at present, the dominant LAN technology. It provides a set of physical media definitions, a scheme for sharing that physical media (CSMA/CD or full duplex), and a simple frame format and hardware source/destination addressing scheme (MAC addresses) for moving packets of data between devices on a LAN. On its own, however, Ethernet lacks the more complex features required of a fully functional industrial network. For that reason, all installed Ethernet networks support one or more communication protocols that run on top of it, and provide more sophisticated data transfer and network management functionality. It is the higher layer protocols that determine what level of functionality is supported by the network, what types of devices may be connected to the network, and how devices interoperate on the network.
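The simplicity of Ethernet’s frame format and MAC addressing is easy to demonstrate: the first 14 bytes of an Ethernet II frame carry the destination MAC address (6 bytes), the source MAC address (6 bytes) and a 2-byte EtherType, and can be decoded as in this illustrative sketch:

```python
import struct

# Sketch only: decoding the fixed 14-byte Ethernet II header -- destination
# MAC, source MAC and EtherType -- from a raw frame.

def parse_ethernet_header(frame: bytes):
    if len(frame) < 14:
        raise ValueError("frame too short for an Ethernet header")
    # "!" = network (big-endian) byte order; 6s = 6 raw bytes; H = 16-bit int
    dst, src, ethertype = struct.unpack("!6s6sH", frame[:14])
    fmt = lambda mac: ":".join(f"{b:02x}" for b in mac)
    return fmt(dst), fmt(src), hex(ethertype)
```

Applied to a broadcast frame, this yields the familiar `ff:ff:ff:ff:ff:ff` destination address and, for an IP payload, the EtherType `0x800`.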
For many years users have steered away from the use of Ethernet in industrial applications, mainly because of its perceived lack of determinism. This was due to the CSMA/CD medium access method, which is essentially stochastic in nature. Other issues that affected its industrial application included connectors and cabling, packaging, power supplies, switching requirements, speed, power over the cable requirements and provision for redundancy.
Modern Ethernet systems, however, differ radically from the old cable-based legacy systems. Switched Ethernet systems now operate in full duplex mode, which, for all practical purposes, eliminates collisions. Many vendors offer industrial devices, with features such as IP67 environmental rating, rail mounting, redundant DC power supplies, VLAN capability, prioritized switching (IEEE 802.1p/Q) and redundant ring operation.
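The stochastic behaviour of legacy CSMA/CD referred to above stems from its truncated binary exponential backoff algorithm: after the n-th collision a station waits a random number of slot times drawn from 0 to 2^min(n, 10) − 1, and gives up after 16 attempts. A sketch of the slot-count calculation:

```python
import random

# Sketch only: truncated binary exponential backoff as used by classic
# (half-duplex) CSMA/CD Ethernet. The random wait is why legacy Ethernet's
# delivery time is stochastic rather than deterministic.

SLOT_TIME_US = 51.2  # slot time for 10 Mbps Ethernet, in microseconds

def backoff_slots(collision_count: int) -> int:
    """Return the number of slot times to wait after the n-th collision."""
    if collision_count > 16:
        raise RuntimeError("excessive collisions: frame discarded")
    k = min(collision_count, 10)        # exponent is capped ("truncated")
    return random.randrange(2 ** k)     # uniform over 0 .. 2**k - 1
```

Because the wait is random and unbounded retries are not allowed, no upper bound on delivery time can be guaranteed under load, which is precisely the determinism problem that full-duplex switched Ethernet eliminates.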
Industry often expects device power to be delivered over the same wires as those used for communicating with the devices. The IEEE 802.3af (Power over Ethernet) standard allows a source device (a hub or a switch) to supply up to 15.4 W per port, at a nominal 48 V DC, to field devices. Other Ethernet developments include Virtual LANs (IEEE 802.1Q), prioritized switching (IEEE 802.1p) and redundant switched rings.
1.2.5 The TCP/IP protocol suite
The TCP/IP protocol suite consists of several protocols that provide routing services, end-to-end verification of transmitted data, and interfacing services to the stack for clients and servers.
TCP is a connection-oriented Transport (OSI layer 4) protocol that runs on the two end hosts, i.e. the client host and the server host. It is a very reliable protocol, using a three-way handshake to establish connections, acknowledgements and timeouts plus retransmissions to ensure correct delivery of data, and sliding windows to prevent data buffer overruns on the receiving side. This reliability comes at a cost in terms of protocol overhead, such as header size.
UDP is a much simpler transport protocol. It is connectionless and provides a very simple capability to send ‘datagrams’ between two devices. It does not guarantee that the data will get from one device to another, does not perform retries, and does not even know if the target device has received the data successfully.
Application layers that implement their own handshaking or connection management between two devices and, therefore, only need a minimal transport service, will use UDP. UDP is smaller, simpler and faster than TCP due to its minimal capabilities and use of resources. In an industrial automation application, UDP is typically used for network management functions, applications that do not require reliable data transmission, applications that are willing to implement their own reliability scheme, such as flash memory programming of network devices, and for input/output (I/O) operations.
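UDP’s connectionless, ‘fire-and-forget’ character can be seen directly at the socket interface. The sketch below sends a single datagram over the loopback interface; note that there is no connection setup and no acknowledgement. (Delivery on loopback is reliable in practice, but UDP itself makes no such guarantee.)

```python
import socket

# Sketch only: UDP's connectionless model in Python sockets. sendto()
# returns as soon as the datagram is handed off -- no handshake, no ACK.

def udp_loopback_demo(payload: bytes) -> bytes:
    receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    receiver.bind(("127.0.0.1", 0))            # OS picks a free port
    sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        sender.sendto(payload, receiver.getsockname())
        data, _addr = receiver.recvfrom(2048)  # would block forever if lost
        return data
    finally:
        sender.close()
        receiver.close()
```

A TCP equivalent would additionally need `listen()`, `accept()` and `connect()` calls before any data could flow, which is exactly the connection-establishment overhead the text contrasts with UDP.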
1.2.6 Wireless LANs
Traditional networks have been based on physical media, using extensive copper and fiber cabling to provide data, voice, and video transmission. However, the use of physical media is costly, unsuitable for rugged terrain, and limits mobility.
In 1971 a group of researchers at the University of Hawaii created the first packet-based radio communications network, ALOHANET. It was essentially the very first WLAN and consisted of several computers that communicated via a bi-directional star topology and spanned four of the Hawaiian Islands, with the central computer based on Oahu.
In recent years more and more vendors have been developing wireless systems to support LAN, MAN and WAN infrastructures. The result has been the emergence of wireless networks. Current wireless technologies include Wi-Fi (IEEE 802.11a/b/g), IEEE 802.16 (WiMAX), small-dish satellite (VSAT), mobile wireless, and wireless PANs (Bluetooth, wireless USB and ZigBee), with Wi-Fi by far the most popular WLAN technology.
There are numerous benefits in using wireless technologies, irrespective of the communication solution to be implemented. These include a high degree of mobility, accessibility and reduced installation costs.