Packet switching features delivery of variable-bit-rate data streams (sequences of packets) over a shared network which allocates transmission resources as needed using statistical multiplexing or dynamic bandwidth allocation techniques. When traversing network adapters, switches, routers, and other network nodes, packets are buffered and queued, resulting in variable delay and throughput depending on the network's capacity and the traffic load on the network.
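The buffering and variable delay described above can be sketched with a toy model of statistical multiplexing: several senders share one output link, packets wait in a FIFO buffer, and the delay each packet sees depends on how much traffic arrived ahead of it. The link rate and the function name here are illustrative assumptions, not from any real network stack.

```python
from collections import deque

LINK_RATE = 2  # packets the shared link can transmit per time step (assumed value)

def statistical_multiplex(arrivals):
    """Toy shared-link model: arrivals[t] is a list of packets arriving at step t.

    Returns (packet, queuing_delay) pairs in transmission order."""
    queue = deque()
    delays = []
    for t, packets in enumerate(arrivals):
        for p in packets:
            queue.append((p, t))           # buffer the packet with its arrival time
        for _ in range(LINK_RATE):         # drain up to LINK_RATE packets this step
            if queue:
                p, arrived = queue.popleft()
                delays.append((p, t - arrived))
    return delays

# A burst of four packets at t=0 on a 2-packet/step link: the first two go out
# immediately, the last two are delayed by one step.
print(statistical_multiplex([["a", "b", "c", "d"], [], []]))
# [('a', 0), ('b', 0), ('c', 1), ('d', 1)]
```

This is the trade-off the paragraph describes: the link is never reserved for one sender, but delay varies with the instantaneous load.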
Packet switching contrasts with another principal networking paradigm, circuit switching, a method which sets up a limited number of dedicated connections of constant bit rate and constant delay between nodes for exclusive use during the communication session. In cases where traffic fees are charged (as opposed to flat rate), for example in cellular communication services, circuit switching is characterized by a fee per unit of connection time, even when no data is transferred, while packet switching is characterized by a fee per unit of information transmitted (characters, packets, messages, …).
Packet mode communication may be utilized with or without intermediate forwarding nodes (packet switches or routers). Packets are normally forwarded by intermediate network nodes asynchronously using first-in-first-out buffering, but may be forwarded according to some scheduling discipline for fair queuing, traffic shaping, or for differentiated or guaranteed quality of service, such as weighted fair queuing or leaky bucket. In case of a shared physical medium (radio, 10BASE5 or thick Ethernet, …), the packets may be delivered according to a multiple access scheme.
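Of the scheduling disciplines mentioned, the leaky bucket is the simplest to sketch: a bounded buffer that accepts bursty arrivals (dropping the overflow) and releases packets at a constant rate, shaping the traffic into a steady stream. The capacity and leak rate below are assumed values for illustration.

```python
CAPACITY = 3   # maximum packets the bucket can hold (assumed)
LEAK_RATE = 1  # packets released per time step (assumed)

def leaky_bucket(arrivals):
    """Leaky-bucket shaper sketch: arrivals[t] = packets arriving at step t.

    Returns a (sent, dropped) pair per time step."""
    bucket = 0
    log = []
    for n in arrivals:
        accepted = min(n, CAPACITY - bucket)  # packets beyond capacity are dropped
        dropped = n - accepted
        bucket += accepted
        sent = min(bucket, LEAK_RATE)         # leak at the constant output rate
        bucket -= sent
        log.append((sent, dropped))
    return log

# A burst of 5 packets: 2 are dropped at once, and the rest trickle out
# at the steady rate of 1 per step.
print(leaky_bucket([5, 0, 0, 0]))
# [(1, 2), (1, 0), (1, 0), (0, 0)]
```

Weighted fair queuing would instead keep one queue per flow and interleave them; the leaky bucket trades that fairness for a hard cap on the output rate.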
Message switching is called a store-and-forward technology because the entire message must be received and stored before being sent on to the next system. It is the precursor of packet switching. In this type of switching, messages are sent in their entirety, one hop at a time. Each message is treated as a separate entity and carries its own addressing information: each switch reads this addressing information, stores a copy of the message, and then forwards it to the next switch accordingly. Email is a good example of this type of technology (the entire email message is sent to an SMTP (Simple Mail Transfer Protocol) server before being relayed to another one).
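The cost of store-and-forward delivery is easy to quantify: because a switch cannot begin transmitting until it has received the whole message, the full message transmission time is paid again on every hop. A minimal sketch, ignoring propagation and processing delay (an assumption for clarity):

```python
def store_and_forward_delay(message_bits, link_rate_bps, hops):
    """End-to-end delay for whole-message store-and-forward over `hops` links.

    Propagation and processing delays are ignored (simplifying assumption)."""
    transmission_time = message_bits / link_rate_bps  # time to send the whole message once
    return hops * transmission_time                   # paid in full on every hop

# A 1 Mbit message over 3 links of 1 Mbit/s each takes 3 s end to end,
# because each switch must receive all of it before forwarding.
print(store_and_forward_delay(1_000_000, 1_000_000, 3))  # 3.0
```

Breaking the message into packets, as packet switching does, lets downstream links start forwarding early packets while later ones are still arriving, which is precisely why it superseded message switching.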
Some advantages of message switching include data channels shared among communication devices, which improves the use of bandwidth. In this type of switching mechanism, messages are stored temporarily at the message switches when network congestion becomes a problem, and priorities can be used to manage the traffic. Through message switching, broadcast addressing uses the bandwidth more efficiently when a message needs to be delivered to multiple destinations. Also, incoming messages are not lost when all outgoing routes are busy; instead, they are stored in a queue with the other messages for the same route and are sent when the route becomes free.
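The last two properties above, priority handling and holding messages while a route is busy, can be sketched with a small per-route priority queue. The class and route names are made up for the example; a lower priority number means more urgent.

```python
import heapq

class MessageSwitch:
    """Sketch of a message switch (assumed design, for illustration only):
    messages for a busy route are queued rather than lost, and higher-priority
    messages are sent first once the route becomes free."""

    def __init__(self):
        self.queues = {}  # route -> heap of (priority, seq, message)
        self.seq = 0      # tie-breaker so equal priorities stay in arrival order

    def enqueue(self, route, message, priority):
        heapq.heappush(self.queues.setdefault(route, []),
                       (priority, self.seq, message))
        self.seq += 1

    def route_free(self, route):
        """Called when the route frees up; returns the next queued message."""
        q = self.queues.get(route)
        return heapq.heappop(q)[2] if q else None

sw = MessageSwitch()
sw.enqueue("route-A", "bulk transfer", priority=2)
sw.enqueue("route-A", "urgent alert", priority=1)
print(sw.route_free("route-A"))  # urgent alert
print(sw.route_free("route-A"))  # bulk transfer
```

Nothing is dropped while the route is busy; the queue simply drains in priority order, which is the behavior the paragraph attributes to message switches.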
I hope I was able to clear up this topic. If you still have any doubt or query regarding it, please feel free to share it with us in the comment box below. You are also most welcome to share your views and suggestions with us.