IP OVER SONET full report
IP OVER SONET
P V Sailaja, IV ECE
G Pulla Reddy Engineering College
With internet use continuing to explode, with an increasing number of users switching to IP-based networks, and with data traffic about to surpass voice traffic, network service providers have been looking for a faster, more efficient, and less expensive transport technology to handle the heavy volumes of traffic they are experiencing.
With this in mind, many providers have decided to carry IP traffic, directly over SONET (synchronous optical network), rather than via frame relay, ATM backbones, or leased lines.
The explosive growth in Internet traffic has created the need to transport IP on high-speed links. In the days of low traffic volume between IP routers, bandwidth partitioning over a common interface made it attractive to carry IP over frame relay or ATM. As traffic volumes have grown, it has become more desirable to carry IP traffic directly over SONET, at least in the core backbone, where pair-wise demand is very high. Currently the focus of IP transport continues to be data oriented. However, a significant trend in the industry, driven by the emerging demand for real-time IP services, is the development of routers with sophisticated quality-of-service mechanisms.
In this paper we focus on IP transport over SONET: the problems associated with data transmission over the network and the techniques used to solve them for efficient IP transmission. Later we focus on the problem of rapid bandwidth re-allocation and its solution.
Table of Contents
1. Introduction
2. About SONET
a. SONET Multiplexing
b. SONET Framing
c. SONET Hierarchy
d. SONET Features
3. IP over SONET
a. Protocols for Carrying Network Data
b. Packet over SONET/SDH (POS)
c. Asynchronous Transfer Mode (ATM)
d. Overhead Associated with Network Data Protocols
e. Introduction to Next Generation Protocols
f. Problems Currently Existing in the Data Network
g. Issues Associated with Transporting Data over Legacy Voice Networks
h. Solutions to These Problems - Concatenation
(i) Contiguous Concatenation
(ii) Virtual Concatenation
i. Implementation Issues Associated with Virtual Concatenation
j. Solving the Problem of Rapid Bandwidth Re-allocation
4. Conclusion
5. References
The technology was developed in the mid-1980s and has been used in a number of telecommunications networks since the early 1990s. However, before the huge growth of internet traffic, there simply was not enough IP traffic to make efficient use of SONET's huge bandwidth. This was a particular problem because IP over SONET uses the entire link between the routers, so no other traffic could be multiplexed with the IP packets. To make full use of the bandwidth, users mapped IP traffic into their higher volume of ATM traffic before transporting it via SONET.
However, many data-network experts contend that ATM is not the most efficient way to transport IP traffic and cannot meet future demand for bandwidth because it has too much overhead. Users may prefer ATM when sending audio, video, voice, or other IP data types that require quality of service (QoS). However, IP directly over SONET can and does carry all types of traffic and is ideal for data-only networks because it eliminates the ATM layer of the network stack and thus has lower overhead.
This consideration has become much more important to carriers as the volume of IP traffic has increased. Internet service providers (ISPs), local telephone carriers, and even large corporate users have thus begun sending IP directly over SONET as a fast, cost-effective, reliable, fault-tolerant, and more easily configurable alternative.
In the face of emerging proprietary optical transmission protocols, SONET was conceived by MCI and developed by Bellcore in the mid-1980s to create an open standard for synchronous data transmission on optical media. The standard was approved in 1988 by the Comite Consultatif International Telegraphique et Telephonique (the International Telegraph and Telephone Consultative Committee, CCITT), the predecessor of today's International Telecommunication Union, and in 1989 by the American National Standards Institute.
SONET multiplexing combines low-speed digital signals such as DS1, DS1C, E1, DS2, and DS3 with required overhead to form a building block called Synchronous Transport Signal Level One (STS-1). Fig1 on the next page shows the STS-1 frame, which is organized as 9 rows by 90 columns of bytes. It is transmitted row first, with the most significant bit (MSB) of each byte transmitted first.
Fig 2 shows the STS-1 frame divided into two parts to physically segregate the layers, where each square represents an 8-bit byte. The first three columns comprise the transport overhead (TOH), while the remainder is called the synchronous payload envelope (SPE). The TOH dedicates three rows for the section overhead (SOH) and six rows for the line overhead (LOH). The SPE contains one column for the path overhead (POH), leaving the remaining 86 columns for payload data (49.536 Mb/s).
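The rates quoted above follow directly from the frame geometry. A minimal sketch, using only the figures given in the text (9 rows by 90 columns, 8000 frames per second, 3 TOH columns, 1 POH column):

```python
# Deriving STS-1 rates from the frame geometry described above.
FRAME_RATE = 8000        # SONET frames per second (125 us per frame)
ROWS, COLS = 9, 90       # STS-1 frame geometry
TOH_COLS = 3             # transport overhead columns
POH_COLS = 1             # path overhead column inside the SPE

line_rate = ROWS * COLS * 8 * FRAME_RATE                          # whole frame
spe_rate = ROWS * (COLS - TOH_COLS) * 8 * FRAME_RATE              # SPE
payload = ROWS * (COLS - TOH_COLS - POH_COLS) * 8 * FRAME_RATE    # 86 columns

print(line_rate)   # 51840000 -> 51.84 Mb/s
print(payload)     # 49536000 -> 49.536 Mb/s, matching the figure above
```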
SONET packages a signal into containers. It then adds the section overhead so that the signal and the quality of transmission are all traceable. The containers have two names depending on size: virtual tributary (VT) or a synchronous payload envelope (SPE). The path overhead contains data to control the facility (end to end) such as for path trace, error monitoring, far-end error, or virtual container (VC) composition.
SONET traffic is packaged in VCs and transported in synchronous transport signals (STS). An STS exists on each section (i.e., the link between two nodes). An STS is made up of the payload plus extra information called the line or section overhead. The line/section overhead contains data for node-to-node control (protection switching, error monitoring) and, in addition, provides extra channels (network management, maintenance, and a phone link).
The SONET line transmission rates are 51.84, 155.52, and 622.08 Mbps; 2.5 Gbps; and 10 Gbps. The SONET transmission rates are shown in Fig 3.
Fig 3: Transmission rates
The following features make SONET suitable for efficient broadband services:
1. Network management
2. Bandwidth management
3. Network simplification
4. Mid-fiber meet
IP OVER SONET
Internet Protocol (IP) traffic is a primary driver of the explosive growth of the Internet. The Internet has been growing at an exponential rate over the past several years. This growth has created an insatiable demand for bandwidth to carry IP traffic. This demand poses a challenge for transport network providers, who need to both increase the capacity of existing network infrastructure and deploy new networks to service this demand. The goal of these providers is to offer low-cost IP transport solutions with value-added services such as dynamic bandwidth allocation, guaranteed service availability, and Quality of Service (QoS) control.
Protocols for Carrying Network Data
There are a variety of protocols for transmitting IP across networks. Some of the more common protocols in widespread use include Packet over SONET (POS), Asynchronous Transfer Mode (ATM),and several varieties of Ethernet. Next-generation framing protocols such as Generic Framing Procedure (GFP) will transmit IP both over SONET and directly over fiber, and thus may become widely deployed in future high-speed optical networks.
One of SONET's greatest strengths is its inherent failure restoration, which is specified to occur within 60 ms from time of detection. This is specified by SONET's Automatic Protection Switching (APS) protocol. In contrast, Layer 3 restoration could take up to several seconds, typically 6 to 10 seconds for IP routing protocols such as Open Shortest Path First (OSPF), Intermediate System-to-Intermediate System (IS-IS), and Border Gateway Protocol (BGP). This protection capability and redundancy make SONET the transport mechanism of choice for mission-critical data.
Packet over SONET/SDH (POS)
Packet over SONET is a widely used method of carrying IP over optical networks. It uses a simple but robust framing mechanism (Fig 4) to delineate the IP data as it is carried down the fiber. POS is a widely deployed and relatively inexpensive method for carrying data on optical fiber. Packet over SONET operates over the three lower layers of the OSI model: the Physical, Data Link, and Network layers.
Packet over SONET/SDH routers give service providers a single, economical network access device for delivering both time-division multiplexed (TDM) and IP services on SONET access networks. POS enables the use of SONET's speed and excellent management capabilities for reliable data transport.
Some of the more important applications of POS include leveraging existing SONET/SDH infrastructure for data services, lighting dark fiber and aggregating traffic from edge routers, and consolidating the multi-service and IP-optimized networks typically run in parallel by major carriers.
Fig 4: Packet over SONET frame format (HDLC framing)
Asynchronous Transfer Mode (ATM)
ATM is a packet (cell) switched technology that can handle both variable-rate traffic (data) and fixed-rate traffic (for example, voice or video). ATM is based on the concept of transmitting all information in small fixed-size packets called cells. Cells are 53-bytes long, of which 5 bytes are header and 48 bytes are payload (Fig 5). ATM networks are connection oriented, and are based on speeds of 155.52 and 622.08 Mbps. ATM is termed 'asynchronous', as it is not synchronous (tied to a master clock) like SONET systems. In ATM, cells can arrive at any time from any source, and gaps between cells are possible. Idle cells fill these gaps. ATM has been designed to be independent of the transmission medium. The ATM layer defines the layout of a cell and the header field format, and deals with establishing and releasing virtual circuits.
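As a rough illustration of the cell layout just described, the following sketch unpacks the 5-byte header fields of a 53-byte cell, using the standard UNI header layout (GFC/VPI/VCI/PTI/CLP/HEC); the sample cell bytes are made up for illustration:

```python
# Illustrative sketch: unpacking the 5-byte header of a 53-byte ATM cell
# (UNI format). The sample cell below is fabricated, not captured traffic.

def parse_atm_header(cell: bytes) -> dict:
    """Extract header fields from a 53-byte ATM cell (UNI format)."""
    assert len(cell) == 53, "ATM cells are exactly 53 bytes"
    h = cell[:5]
    return {
        "gfc": h[0] >> 4,                                          # generic flow control
        "vpi": ((h[0] & 0x0F) << 4) | (h[1] >> 4),                 # virtual path id
        "vci": ((h[1] & 0x0F) << 12) | (h[2] << 4) | (h[3] >> 4),  # virtual channel id
        "pti": (h[3] >> 1) & 0x7,                                  # payload type
        "clp": h[3] & 0x1,                                         # cell loss priority
        "hec": h[4],                                               # header error control
        "payload": cell[5:],                                       # 48-byte payload
    }

cell = bytes([0x00, 0x10, 0x00, 0x50, 0x00]) + bytes(48)
fields = parse_atm_header(cell)
print(fields["vpi"], fields["vci"])   # 1 5
```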
Fig 5: ATM cell frame format
The most widely used method to carry the ATM cells is over SONET/SDH, based upon the SONET STS-3c frame. The SONET frame SPE consists of a nine-octet path overhead portion; the remainder contains ATM cells as shown in Fig 6. The ATM cells may cross a payload boundary.
Fig 6: ATM cells carried into a SONET frame
Overheads Associated with Network-Data Protocols
The traditional mechanisms for carrying IP over SONET are shown in Fig 7.
Fig 7: Traditional IP over SONET transport mechanisms
For Packet over SONET (IP/Ethernet-PPP-HDLC-SONET), the average bandwidth efficiency is 97%, while ATM (IP/Ethernet-ATM-SONET) has an efficiency of approximately 80%. Packet over SONET/SDH offers significantly higher efficiency than ATM because the overhead ATM requires (the cell header, IP-over-ATM encapsulation, and segmentation and re-assembly [SAR] functionality) is eliminated. ATM is a cell-switching technology based upon fixed-size frames, which makes cells easier to buffer and switch at high speeds than the variable-length frames used by Packet over SONET. ATM is a virtual-circuit network, where circuits are temporarily established for a short duration. ATM enables quality of service (QoS) by assigning priorities to different types of data. Priority assignment is very important in carrying delay-sensitive data, such as streaming video or voice; POS does not have this QoS ability. Although ATM is less efficient than POS, there are several overriding reasons why ATM remains an established and widely used protocol. ATM is still the only way for most users to get guaranteed performance promises, which makes the protocol important in networks carrying mixed (data, voice, and video) traffic types. For example, for organizations wanting to use video conferencing as an alternative to travel, ATM can provide reliable connections with guaranteed QoS.
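The efficiency figures above can be sanity-checked with a back-of-envelope calculation. The ~300-byte average packet size and the ~9 bytes of PPP/HDLC framing are assumptions for illustration, not figures from the report:

```python
# Back-of-envelope sketch of the POS vs ATM efficiency comparison above.

# ATM: every 53-byte cell carries at most 48 bytes of payload.
atm_cell_eff = 48 / 53                 # ~0.906 before encapsulation losses
# AAL5 adds an 8-byte trailer plus padding up to a 48-byte cell boundary;
# with encapsulation and typical packet-size mixes the overall efficiency
# drops to roughly the 80% quoted above.

# POS: PPP + HDLC framing adds roughly 9 bytes per packet (assumed here).
avg_packet = 300                       # hypothetical average IP packet size
pos_eff = avg_packet / (avg_packet + 9)  # ~0.97

print(round(atm_cell_eff, 3))   # 0.906
print(round(pos_eff, 2))        # 0.97
```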
Introduction to Next Generation Protocols
Generic Framing Procedure (GFP) is a protocol that has been proposed by the ANSI subcommittee T1X1. GFP utilizes a frame delineation method in which the length of a frame and its CRC are contained at the start of the frame. This means that the network-processing engine can look at a stream of frames and skip from one header to the next easily, without having to de-stuff all bytes in a frame to find the delineating characters.
Fig 8: GFP frame format showing length/CRC-based cell delineation
Fig 8 shows the GFP frame format, where the processing engine jumps from frame to frame based upon the length construct. This capability permits frame processing at higher speeds, as well as more efficient allocation of buffer space to hold frames and other desirable characteristics. GFP is not yet widely deployed, but holds promise for the next generation of high data networks implemented over optical fiber.
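The header-hopping idea behind GFP's delineation can be sketched in a few lines. This is a toy model only: a real GFP core header also carries a CRC-16 (cHEC) over the length field, which is omitted here, and the frame layout is simplified:

```python
# Minimal sketch of length-based frame delineation as used by GFP: each
# frame begins with a 2-byte payload length, so the receiver can jump from
# header to header without byte de-stuffing. The cHEC check is omitted.
import struct

def frames(stream: bytes):
    """Walk a stream of [2-byte length][payload] frames."""
    pos = 0
    while pos + 2 <= len(stream):
        (length,) = struct.unpack_from(">H", stream, pos)
        payload = stream[pos + 2 : pos + 2 + length]
        yield payload
        pos += 2 + length          # jump straight to the next header

stream = b"".join(struct.pack(">H", len(p)) + p
                  for p in (b"hello", b"IP packet", b"xxx"))
print(list(frames(stream)))   # [b'hello', b'IP packet', b'xxx']
```

Contrast this with HDLC-style framing (used by POS), where the receiver must scan every byte for flag characters and undo escape sequences.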
Problems Currently Existing in the Data Network
These traditional methods for carrying IP data over SONET pose several problems. For service providers, one of the biggest headaches is that users require different bandwidths. Each customer has a different 'right-sized' bandwidth requirement, and the service provider has the unenviable task of combining and carrying these bandwidths over the network as inexpensively as possible. Fig 9 shows an example of variable customer bandwidth requirements.
Fig 9: This example shows how each customer may have different bandwidth requirements
The standard SONET payload sizes covering this bandwidth range are STS1 (50 Mbps), STS3c (155 Mbps), and STS12c (620 Mbps). Each user is assigned the next bandwidth slot bigger than their payload requirement. For payloads that are significantly smaller than the slot to which a customer is assigned, there can be inefficient use of bandwidth. Traditional SONET requires a total of 3.72 Gb/s bandwidth to support these customers, as shown in Fig 10.
Fig 10: Payloads assigned to each customer using traditional SONET mapping
The second problem is that customers frequently want to vary their bandwidth requirement, and they want the provider to implement this change quickly. For example, when a customer is planning to host a web cast or videoconference, their bandwidth requirement will rise temporarily, and then return to its original level. In this case, the customer wants 'bandwidth on demand' where they can access (and pay for) the bandwidth they need, when they need it. However, they do not want to pay for excess bandwidth when they don't need it. Current SONET networks require that a technician manually provision the network equipment to change a customer's bandwidth allocation.
In the example shown in Fig 11, the network equipment boxes (NEs) are only capable of processing OC3 channels. The OC12 payload needs to be broken down into multiple OC3 channels to be carried across the network.
Issues Associated with Transporting Data over Legacy Voice Networks
SONET was originally conceived to operate with the voice-optimized PDH system. As IP and ATM data networks evolved, their interfaces were designed to accommodate the legacy network infrastructure.
Existing SONET networks provide ample line and switching capacity to support today's clear-channel, high-bandwidth interfaces. However, the SONET infrastructure suffers from awkward internal granularity when carrying data. This transport infrastructure is not optimized for carrying data, creating a challenge in transporting high-bandwidth signals across the network.
For example, after information has been routed across the network core, the high-data-rate fat pipe emerging from the core must be converted into smaller channels for transport through the edge network infrastructure. An OC-48 (2.5 Gbps) stream exiting the core SONET network and entering the OC-3 network edge would be demultiplexed into sixteen OC-3 signals.
There is a mismatch of granularity in the lower end, where finer granularity is required between SPEs, and also in the higher end, where today's bandwidth demands exceed the STS-3c level. Today's data pipes hold greater capacity (in other words, are fatter) than traditional STS-3c signals. Several multiplexing schemes are available to combine STS signals to fit in these fat pipes. The results have not always been optimal.
Solutions to These Problems - Concatenation
The technique of concatenation was developed to transport a payload whose bandwidth is larger than the capacity of the carrier link. In concatenation, multiple virtual containers are associated so that their total capacity forms a single logical container. This is distinct from channelized transport, in which each container carries a separate payload. The ITU-T, in its recommendation G.707, defines concatenation as "a procedure whereby a multiplicity of Virtual Containers is associated, one with another, with the result that their combined capacity can be used as a single container across which bit sequence integrity is maintained." There are two types of concatenation: contiguous and virtual. The objective of concatenation is to provide multiple right-sized channels over a SONET/SDH network - that is, channels optimally sized for each customer's requirements.
In contiguous concatenation, a concatenation indicator is used in the pointer associated with each concatenated frame to indicate that the SPEs associated with the pointers are concatenated. For this type of concatenation to operate, every intermediate node through which the concatenated string passes must be configured to support this mode. If a legacy network is to support contiguous concatenation, expensive hardware and software upgrades are required.
Virtual concatenation is another technique for carrying data across the network. In virtual concatenation, each SPE within a concatenated group representing the data packet for transmission is given an identifier. This identifier is provided as part of the SONET path overhead (POH) information in the SPE and indicates the SPE's sequence and position within the group.
In virtual concatenation, intermediate nodes need no special support; path-termination hardware is required only at each end of the concatenated link. This makes virtual concatenation a more cost-effective transport mechanism for carrying data than contiguous concatenation.
At the receiver, the SPEs are reassembled in the correct order to recover the data. To compensate for different arrival times of the received data, known as differential delay, the receiving circuits must contain some buffer memory so that the data can be properly realigned. To create right-sized bandwidth using Virtual Concatenation, a customer can specify the total bandwidth he needs in increments of STS1/VC3 (~50 Mb/s), STS3c/VC4 (~150 Mb/s), or STS12c/VC4-4c (~600 Mb/s). Fig 12 shows the concatenation options.
Base Signal | Concatenation Options | Total Concatenated Bandwidth
STS-1      | 1v, 2v, 3v, 4v, 5v, 6v, 7v, 8v | ~50 Mb/s - ~400 Mb/s
STS-3c     | 1v, 2v, 3v, 4v, 5v, 6v, 7v, 8v | ~150 Mb/s - 1.2 Gb/s
STS-12c    | 1v, 2v                         | ~620 Mb/s - 1.2 Gb/s
Fig 12: Virtual concatenation options and the resulting total concatenated bandwidth
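Right-sizing a channel then reduces to picking the smallest number of members that covers the customer's request. A minimal sketch using the ~50 Mb/s STS-1 increment from the text:

```python
# Sketch: choose the smallest STS-1-Xv virtually concatenated group that
# covers a requested bandwidth, using the ~50 Mb/s STS-1 granularity
# described above.
import math

STS1_MBPS = 50  # approximate STS-1 payload granularity

def members_needed(requested_mbps: float) -> int:
    """Smallest number of STS-1 members covering the request."""
    return math.ceil(requested_mbps / STS1_MBPS)

for demand in (250, 100, 620):
    n = members_needed(demand)
    print(f"{demand} Mb/s -> STS-1-{n}v ({n * STS1_MBPS} Mb/s)")
```

For a 250 Mb/s customer this yields STS-1-5v, the five-slot assignment used in the example that follows.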
If we look at the earlier example of assigning right-sized bandwidth to each customer, we see we can assign the bandwidth as shown in fig 13.
Using virtual concatenation, the six customers from the earlier example can be carried in a single OC-48 SPE.
Fig 14: With Virtual concatenation, you can combine the bandwidth requirements for each customer in a single OC-48 SPE, offering significantly greater efficiency than traditional means
The efficiency improvements offered by virtual concatenation enable significant cost savings for network operators. As seen in the example of Fig 14, by using virtual concatenation the network operator needs to deploy and support only 2.5 Gbps of bandwidth (a single OC-48 link) to serve these customers, and only the end equipment needs to be upgraded to support it. In contrast, traditional methods of bandwidth assignment would require the operator to deploy and maintain 3.72 Gbps of bandwidth, creating an overhead of almost 50%. The actual mapping of customer bandwidth into the OC-48 SPE is shown in Fig 15. Customer 1, with a bandwidth of 250 Mbps, is assigned 5 x STS-1 (50 Mb/s) slots. This is sufficient to carry Customer 1's payload, with minimal wasted bandwidth.
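The comparison can be reproduced numerically. The six demand figures below are hypothetical (only Customer 1's 250 Mbps is given in the text); they are chosen so that traditional mapping lands on the 3.72 Gb/s total quoted above:

```python
# Hedged sketch comparing traditional container assignment with virtual
# concatenation. Demands other than the 250 Mb/s customer are hypothetical.
import math

STANDARD = [50, 155, 620]                  # STS-1, STS-3c, STS-12c (Mb/s)
demands = [250, 400, 300, 600, 500, 350]   # hypothetical customer demands

# Traditional: each customer gets the next standard container size up.
traditional = sum(min(s for s in STANDARD if s >= d) for d in demands)

# Virtual concatenation: pack ~50 Mb/s STS-1 members per customer.
vcat = sum(math.ceil(d / 50) * 50 for d in demands)

print(traditional)   # 3720 -> 3.72 Gb/s of deployed bandwidth
print(vcat)          # 2400 -> fits in one OC-48 (~2.5 Gb/s)
```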
Fig 15: The technique of mapping customer bandwidth requirements into STS-48 SPE
Implementation Issues Associated with Virtual Concatenation
When Virtual Concatenation carries signals across the network, the original payload is separated into many smaller payloads and sent across the network, often by different paths. When the receiving end receives these smaller payloads, it must wait until all payloads arrive before re-assembling the original larger payload. To recover the virtually concatenated smaller frames, the receiving end equipment must know the order of each SPE within the channel and the time stamp of the frame. This information is carried in the H4 byte in each POH. There is a delay, known as differential delay, between when the first and last virtually concatenated payloads arrive at the receiving end, as shown in Fig 16.
Fig16: Differential delay associated with virtual concatenation payloads arriving at a receiver
To account for differential delay, the virtual-concatenation-enabled framer on the receiving side uses buffer memory to store the received SPEs until all have arrived, and then reassembles them.
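The buffering logic can be sketched as a toy reassembler: member SPEs arrive out of order and at different times, are buffered per frame time, and are released in sequence order once the group is complete. The class and field names are illustrative, not from any real framer API:

```python
# Toy sketch of differential-delay compensation at a VCAT receiver.
# Real framers index members by the H4-byte sequence/multiframe info;
# here we use plain (frame_time, seq) keys for illustration.
from collections import defaultdict

class VcatReassembler:
    def __init__(self, group_size: int):
        self.group_size = group_size
        self.buffer = defaultdict(dict)   # frame_time -> {seq: payload}

    def receive(self, frame_time: int, seq: int, payload: bytes):
        """Buffer one member SPE; return realigned data once complete."""
        self.buffer[frame_time][seq] = payload
        group = self.buffer[frame_time]
        if len(group) == self.group_size:             # all members arrived
            del self.buffer[frame_time]
            return b"".join(group[s] for s in sorted(group))
        return None                                   # still waiting

rx = VcatReassembler(group_size=3)
print(rx.receive(0, 2, b"C"))          # None - waiting on members 0 and 1
print(rx.receive(0, 0, b"A"))          # None
print(rx.receive(0, 1, b"B"))          # b'AB' + b'C' -> b'ABC'
```

The buffer depth bounds the differential delay the receiver can tolerate, which is why deeper memory is one of the main hardware costs of virtual concatenation.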
Solving the Problem of Rapid Bandwidth Re-allocation
Earlier we looked at the problem where customers frequently want to vary their bandwidth requirement, and they want their provider to implement this change quickly. Virtual Concatenation allows easy bandwidth re-allocation within an SPE, as shown in the example in Fig 17.
Fig 17: Bandwidth re-allocation between users in an SPE
In this case, we have four customers using an OC-48 SPE. Customer 1 terminates his service agreement, and is no longer carried on this SPE. Simultaneously, Customers 2 and 4 increase their demand for bandwidth. Virtual concatenation allows rapid bandwidth re-allocation, without the delays or manual reconfiguration previously required. Virtual concatenation eliminates the need for a technician to manually reconfigure the equipment, thus enabling more fluid bandwidth allocation and more flexible service-level agreements.
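The re-allocation in Fig 17 amounts to simple slot accounting within the SPE: freed STS-1 slots are handed to the customers whose demand grew. The slot counts and the split of the freed slots below are hypothetical:

```python
# Sketch of the Fig 17 scenario: Customer 1 leaves the OC-48 SPE and the
# freed STS-1 slots go to Customers 2 and 4. All slot counts are made up.
slots = {"cust1": 5, "cust2": 8, "cust3": 10, "cust4": 6}  # STS-1 slots

freed = slots.pop("cust1")   # Customer 1 terminates service
slots["cust2"] += 3          # Customer 2 grows by 3 slots
slots["cust4"] += freed - 3  # Customer 4 takes the remaining freed slots

print(slots, sum(slots.values()))   # total slot count is unchanged
```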
The changing nature of data being transported over networks will help drive the demand for IP over SONET. To handle the increasing volume of data, network carriers are looking for the efficiency and speed that IP over SONET offers. Meanwhile, as IP becomes the dominant internetworking protocol, IP over SONET will become even more significant. And since many carriers already have SONET in their network infrastructures, it would be cost-effective for them to adopt IP over SONET. Moreover, advances in dense wavelength division multiplexing (DWDM) will soon make the highly scalable IP over SONET an even more effective technology.
What is SONET?
Defines a digital hierarchy of synchronous signals
Maps asynchronous signals (DS1, DS3) to synchronous format
Defines electrical and optical connections between equipment
Allows for interconnection of different vendors’ equipment
Provides overhead channels for interoffice OAM&P