
able basis for control decisions (processor load, queue length, call arrivals, call categories, etc.)? How should control algorithms using these data be designed? Should different services be controlled individually or by a common algorithm? In a distributed environment, should the overload control mechanism be implemented in each processor or should this mechanism be centralised?

Other basic requirements on mechanisms for overload control are: The amount of processor capacity needed to reject calls must be small, so that as much of the processor capacity as possible can be used for useful work. Call delays should not increase due to the overload control mechanism during periods of normal load. The overload control mechanism should be straightforward, robust and predictable, and system performance should not be too sensitive to the choice of parameter settings.
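As an illustration of these requirements, the sketch below shows a hypothetical acceptance test of the kind such a mechanism might use: the decision to reject is based on a smoothed processor-load estimate that the system already maintains, so the cost of rejecting a call is a single comparison, and at normal load the test never delays a call. The threshold and smoothing factor are assumed values chosen for illustration, not figures from this article.

```python
# Minimal sketch of a cheap call-acceptance throttle (illustrative only).
# LOAD_THRESHOLD and SMOOTHING are assumed values, not taken from the article.

LOAD_THRESHOLD = 0.85   # start rejecting new calls above 85 % processor load
SMOOTHING = 0.2         # weight given to the newest load sample

class OverloadThrottle:
    def __init__(self):
        self.load_estimate = 0.0  # exponentially smoothed processor load (0..1)

    def update_load(self, measured_load: float) -> None:
        """Called periodically with the latest processor-load sample."""
        self.load_estimate = (SMOOTHING * measured_load
                              + (1.0 - SMOOTHING) * self.load_estimate)

    def accept_call(self) -> bool:
        """Cheap accept/reject decision: one comparison per new call."""
        return self.load_estimate < LOAD_THRESHOLD
```

At normal load the estimate stays well below the threshold, so the mechanism adds no delay; under overload each rejected call costs only the comparison above, leaving most of the processor capacity for the calls that are accepted.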

There are other major internal functions in the control system that may interact with the overload control mechanism. The monitor that handles the variety of tasks and sees to it that time-critical tasks are given a higher priority than others is perhaps the most striking example. It is important to grasp the impact and limitations that such a monitor imposes on overload control mechanisms. This in turn emphasises that the design of overload control mechanisms should be done in parallel with the design of the control system itself and not as late “patches”. Good real-time performance can best be achieved by careful consideration of real-time issues early in the design process.
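To make this interaction concrete, the sketch below models a hypothetical monitor as a strict priority scheduler: time-critical tasks always run before overload-control bookkeeping, so under heavy load the control task can be starved unless it is designed in from the start. All names and the scheduling policy are illustrative assumptions, not a description of any particular system.

```python
import heapq

# Hypothetical monitor modelled as a strict priority scheduler.
# Lower number means higher priority.
TIME_CRITICAL = 0
OVERLOAD_CONTROL = 1
BACKGROUND = 2

class Monitor:
    def __init__(self):
        self._queue = []   # entries are (priority, sequence number, task)
        self._seq = 0

    def submit(self, priority: int, task) -> None:
        heapq.heappush(self._queue, (priority, self._seq, task))
        self._seq += 1

    def run_one(self) -> None:
        """Run the highest-priority pending task, if any."""
        if self._queue:
            _, _, task = heapq.heappop(self._queue)
            task()

# If time-critical work is always pending, a task submitted at
# OVERLOAD_CONTROL priority never runs -- the overload detector itself is
# starved. Reserving it a guaranteed slot (e.g. every N dispatches) is one
# way to avoid this, which is easiest to arrange when overload control is
# designed together with the monitor.
```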

2 The new challenge

Most existing overload control schemes in public switches derive from pure telephone switch applications. These schemes are normally not well suited to cope with data traffic, nor with a mix of ISDN traffic including packet switched signalling. In many of today's ISDN switches, overload control schemes handle D-channel traffic and signalling only in a rudimentary way. The great variety of traffic load mixes to be handled by the switches may argue in favour of individual solutions for different exchanges based on individually controlled services. For obvious reasons, however, such designs are far from attractive: very detailed data collection is required, and many control parameters must be defined and adjusted to a varying traffic mix. Indeed, it seems that great efforts should be spent searching for control principles that are not only efficient but also as robust and simple as possible.

Future control systems will in many cases be implemented in terms of a distributed architecture with a number of processing elements. In general, each processor has its own overload detection and control mechanism, since different overload situations may affect different processors. It is then important to consider the global efficiency of the control system. In the case of a fully distributed architecture with loosely coupled processors, where each processor may be able to handle any task or service request to the system, the load sharing procedure has a strong impact on the overload control mechanism. In earlier papers [1] we have shown that certain load sharing principles, like Shortest Queue First, may under some circumstances lead to undesired behaviour of the control system. Heavy oscillations might appear in the job queues at each processor, which leads to a large increase in the mean waiting times. Again, good real-time performance can best be achieved by careful consideration of real-time issues early in the design process.
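As a sketch of per-processor detection of the kind described above, the code below watches the local job queue and switches an overload flag on and off with two thresholds (hysteresis), so that the control does not flip with every small fluctuation in queue length. The threshold values are assumptions chosen for illustration, not figures from this article or from [1].

```python
# Hypothetical per-processor overload detector with hysteresis (illustrative).
ONSET_THRESHOLD = 40      # queue length at which overload is declared
ABATEMENT_THRESHOLD = 20  # queue length at which overload is cancelled

class OverloadDetector:
    def __init__(self):
        self.overloaded = False

    def observe(self, queue_length: int) -> bool:
        """Update the overload state from the local job-queue length."""
        if not self.overloaded and queue_length >= ONSET_THRESHOLD:
            self.overloaded = True
        elif self.overloaded and queue_length <= ABATEMENT_THRESHOLD:
            self.overloaded = False
        return self.overloaded
```

With a single threshold, small fluctuations around it would make the processor switch rapidly between accepting and rejecting work; the gap between the two thresholds damps this, which matters all the more when a load sharing procedure such as Shortest Queue First can move bursts of jobs between processors.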

Figure 3 SSP, STP and SCP nodes in the signalling network and the transport network

In Intelligent Networks the definitions of services reside in a few nodes in the network called Service Control Points (SCP). (See Figure 3.) It is very important to protect the SCPs from being overloaded. A network where call control, service control and user data reside in different nodes requires the support of a powerful signalling system capable of supporting real-time, transaction oriented exchange of information between the nodes dedicated to different tasks. SS No. 7 is used for this purpose. Due to the construction of the involved communication protocols, the throttling of new sessions must be done at their origin, i.e. not at the SCPs, but in the SSPs (Service Switching Points). To manage the transfer of information between different nodes in an IN, SS No. 7 uses a set of protocols and functions on OSI level 7 called Transaction Capabilities Application Part (TCAP). TCAP makes it possible to set up dialogues between nodes and to interchange information. Quite a lot of processing time must be spent on unwrapping the SS No. 7 protocol and TCAP before the contents of a message can be read. For example, in the freephone service, about half of the total processing time in the SCP is used to unwrap the protocols. Under overload, the SCP will spend its real-time capacity on just unwrapping the protocol of arriving messages and discarding queries. Thus the SCP cannot protect itself from being overloaded. A node can only protect itself if the processing time needed to decide whether a query should be accepted or not is small. Traditional approaches to load control will not be sufficient for the SCP nodes. Some of these difficulties can be solved by placing throttles in the nodes that initiate the work to be done by the SCP nodes instead of placing the throttles in the SCP nodes themselves.
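One well-known throttle that can be placed in the originating nodes is call gapping, where an SSP forwards at most one new query to the SCP per gap interval and rejects the rest locally, before any SS No. 7 or TCAP processing is spent in the SCP. The sketch below is an illustration of that idea; the gap interval is an assumed parameter and the code is not taken from any particular implementation.

```python
import time

# Hypothetical call-gapping throttle placed in the SSP (illustrative only).
class CallGap:
    def __init__(self, gap_interval: float):
        self.gap_interval = gap_interval  # seconds between forwarded queries
        self._next_allowed = 0.0

    def forward_query(self) -> bool:
        """Return True if a new query may be sent to the SCP now."""
        now = time.monotonic()
        if now >= self._next_allowed:
            self._next_allowed = now + self.gap_interval
            return True
        return False    # reject locally; the SCP never sees this query

# Example: with a 0.1 s gap the SSP offers the SCP at most ten new queries
# per second, regardless of how many calls arrive at the SSP.
gap = CallGap(gap_interval=0.1)
```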

The flow control and rerouting in SS No. 7 are believed to have less influence on the overload control implementation, since these work on a shorter time scale than the overload control. Within the coming three to five years we will see STPs (Signal Transfer Points, i.e. switches in the signalling network) switching hundreds of thousands of packets per second, corresponding to some thousands of new sessions per second. If not properly protected from overload, the precious real-time capacity of the control system in the STPs will fade away.

A part of the SCP capacity might be reserved to one or several service providers or customers for a certain period of time. In other cases this reservation is permanent. Under periods of overload it is important to portion out the SCP capacity according to the reservations made. The service provider that can give
