J. Parallel Distrib. Comput. 64 (2004) 1185 – 1210 www.elsevier.com/locate/jpdc
A novel scheme for supporting integrated unicast and multicast traffic in ad hoc wireless networks
R.S. Sisodia, I. Karthigeyan, B.S. Manoj, C. Siva Ram Murthy∗
Department of Computer Science and Engineering, Indian Institute of Technology Madras, Chennai 600036, India
Received 12 June 2003; received in revised form 30 December 2003
Abstract
Current multicast routing protocols for mobile ad hoc networks can be classified into two categories: tree-based protocols and mesh-based protocols. Mesh-based protocols have high packet delivery ratios compared to tree-based protocols, but incur more control overhead. Though the control overhead involved in tree-based protocols is low, the performance of such protocols in terms of packet delivery ratio decreases with increasing mobility. This is due to the lack of proper tree maintenance mechanisms. We propose an efficient multicast routing protocol, called the preferred link-based multicast protocol, that uses a preferred link approach for forwarding JoinQuery packets. This approach reduces control overhead during the forwarding of JoinQuery packets: only a subset of a node's neighbors, termed preferred nodes, are made eligible for further forwarding of JoinQuery packets. These preferred nodes are selected using our preferred link-based algorithm. We conducted extensive simulation experiments to evaluate the proposed protocol and compared it with some of the existing protocols. The simulation results show that our protocol performs better than the other multicast protocols in terms of packet delivery ratio and control overhead. Further, we have extended our multicast protocol to support unicast and multicast traffic simultaneously. This unified approach to routing is analyzed in order to study the impact of unicast traffic on multicast traffic, and vice versa. Performance evaluation of the unified approach, which we term preferred link-based unified routing, was carried out through extensive simulation experiments.
© 2004 Published by Elsevier Inc.
Keywords: Mobile ad hoc networks; Multicast routing protocols; Preferred link-based routing; Tree-based multicast routing
1. Introduction
Mobile ad hoc networks (MANETs) are infrastructureless networks in which nodes keep moving all the time, resulting in dynamically changing network topologies. Nodes act as routers/switches as well as end-points. Since MANETs do not rely on support from fixed infrastructure, they can be deployed quickly. They can be used for a variety of applications such as immediate collaborative computing, search and rescue operations, and military applications. The majority of these applications require multicast support to establish
∗ Corresponding author. Fax: +91-44-2257-8352.
E-mail addresses: [email protected] (R.S. Sisodia), [email protected] (I. Karthigeyan), [email protected] (B.S. Manoj), [email protected] (C.S.R. Murthy).
doi:10.1016/j.jpdc.2004.06.004
communication from one or more source nodes to multiple receiving nodes. Designing a multicast protocol for ad hoc networks is a challenging task due to issues such as mobility of nodes, limited bandwidth availability, error-prone wireless links, a shared broadcast radio channel, and the hidden and exposed terminal problems [8]. We propose an efficient multicast routing protocol for MANETs which uses a preferred link-based approach. We further extend our protocol to handle unicast and multicast traffic in a unified manner. The organization of our paper is as follows. In Section 2, we describe some of the existing unicast and multicast routing protocols for MANETs. We give the motivation behind our work in Section 3. In Section 4, we describe the network model and the assumptions made on this model. We present a detailed description of our multicast protocol in Section 5. The algorithm for computing preferred links is
described in detail in Section 6. In Section 7, we discuss the important properties of our preferred link-based algorithm. We discuss additional enhancements that could be made to our protocol in Section 8. In Section 9, we present the simulation results for our proposed protocol. In Section 10, we briefly introduce the unified routing concept and related issues. In Section 11, we describe our unified approach in detail. We present the simulation results for our hybrid unified approach in Section 12. Finally, we conclude our paper in Section 13.
2. Unicast and multicast routing in MANETs
Routing 1 protocols for MANETs can be broadly classified as table-driven routing protocols and on-demand routing protocols. In the table-driven routing approach, all nodes maintain consistent global topology information for efficient and loop-free routing. Since the routes are readily available at the nodes, the call setup time is very short. But table-driven routing protocols require periodic exchange of control information, resulting in high control overhead. Hence, these protocols do not scale well and are inefficient in performance. DSDV [20], CGSR [5], and WRP [17] are some of the existing table-driven routing protocols. In the on-demand routing approach, reactive techniques are used: routes are established on the fly whenever required. Periodic information is not required, and hence on-demand protocols involve less control overhead. But since the routes need to be set up on demand, a significant initial call setup time is involved. Some of the existing on-demand routing protocols are DSR [12], AODV [21], TORA [19], ABR [27], and SSA [7].
Multicast protocols 2 can be classified based on different criteria, such as the connectivity mechanism of member nodes, tree construction responsibility, reliance on the underlying unicast routing protocol, and the requirement of global or local topology information for tree construction. A multicast protocol is termed a sender-initiated multicast protocol when the multicast tree construction is initiated by the multicast source. When multicast members take the responsibility of connecting to the source, the protocol is termed a receiver-initiated multicast protocol. In sender-initiated multicast protocols, the multicast source floods a JoinQuery in the network and gets responses from the potential members. In receiver-initiated multicast protocols, the members initiate a JoinQuery to connect to the multicast source.
The multicast source, or one of the intermediate nodes, initiates a JoinReply message to the member node.
1 Routing by default refers to unicast routing throughout this paper unless otherwise specified.
2 Multicast protocol refers to multicast routing protocol.
Multicast protocols can also be classified based on how members of a multicast group are connected. They can be connected either as a tree or in the form of a mesh. In
tree-based multicast protocols, the members are connected through a tree structure. Tree-based protocols primarily focus on how to construct the tree with minimum control overhead and minimum cost. The cost metric can be the shortest distance of each member from the source, or the distance to the nearest forwarding non-group member node of the tree. Throughout this paper, the term distance refers to distance in terms of hop count. Another important issue is how to quickly reconfigure the tree during link breaks. Tree-based protocols generate less control overhead, but have a lower packet delivery ratio when compared to mesh-based protocols. The lower packet delivery ratio is due to the presence of only a single path to the source, which may break frequently in highly dynamic MANETs, resulting in dynamic partitioning of the multicast tree. Tree-based protocols are highly efficient (efficiency is defined as the ratio of the total number of data packets received by the nodes to the total number of data packet transmissions in the network) because of the absence of multiple redundant paths to the multicast source node. Some of the tree-based multicast protocols are the bandwidth-efficient multicast routing protocol (BEMRP) [18], the multicast zone routing protocol (MZRP) [6], the multicast core extraction distributed ad hoc routing protocol (MCEDAR) [25], the differential destination-based multicast protocol (DDM) [11], the ad hoc multicast routing protocol utilizing increasing id-numbers (AMRIS) [29], and the ad hoc multicast routing protocol (AMRoute) [2]. The main characteristic of mesh-based protocols is that a group member may have multiple paths to the same multicast source, and to other members of the multicast group. This redundancy in paths gives these protocols robustness during path breaks caused by the movement of intermediate or end nodes. This robustness in turn results in a high packet delivery ratio.
However, because of the presence of multiple paths, the number of data packet and control packet transmissions is higher than in tree-based protocols. The on-demand multicast routing protocol (ODMRP) [14], the forwarding group multicast routing protocol (FGMP) [4], and the core-assisted mesh protocol (CAMP) [9] are some of the existing mesh-based multicast protocols.
2.1. Key design issues for multicast protocols in MANETs
Limited bandwidth availability, an error-prone shared broadcast channel, continuous mobility of nodes with limited energy resources, the hidden terminal effect [8], and limited security make the design of a multicast routing protocol for MANETs a challenging task. The following are some of the important issues involved.
Robustness: Due to the mobility of the nodes, link failures are quite common in MANETs. Data packets sent by the source may be dropped, resulting in low packet delivery. Hence, a multicast routing protocol should be robust enough to sustain the mobility of the nodes and achieve a high packet delivery ratio.
Efficiency: In the ad hoc network environment, where bandwidth is scarce, the efficiency of the multicast protocol is very important. Efficiency is defined as the ratio between the total number of data packets received by the receivers and the total number of (data and control) packets transmitted in the network.
Control overhead: Keeping track of the members in a multicast group requires the exchange of control packets, which consumes a considerable amount of bandwidth. Since bandwidth is limited in MANETs, a multicast protocol should be designed in such a way that the total number of control packets transmitted for maintaining the multicast group is kept to a minimum.
Quality of Service (QoS): One of the main applications of MANETs is in battlefields. Hence, provisioning QoS is an important issue in multicast routing protocols. The main parameters taken into consideration for providing the required QoS are throughput, delay, and delay jitter.
Dependency on the unicast routing protocol: If a multicast routing protocol needs the support of a particular unicast routing protocol, it would be difficult for it to work in heterogeneous networks. Hence, it is desirable that the multicast routing protocol be independent of any specific unicast routing protocol.
Resource management: MANETs consist of a group of mobile nodes, each having limited battery power and memory. A MANET multicast routing protocol should use minimum power by minimizing the number of packet transmissions. To reduce memory usage, it should use minimum state information.
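As an illustration, the efficiency metric defined above can be written as a small function (a sketch; the function and argument names are ours, not from the paper):

```python
def multicast_efficiency(data_received: int, data_sent: int,
                         control_sent: int) -> float:
    """Ratio of data packets received by the receivers to the total
    number of (data and control) packets transmitted in the network.
    Returns 0.0 when nothing has been transmitted."""
    total_transmitted = data_sent + control_sent
    return data_received / total_transmitted if total_transmitted else 0.0
```

For example, 90 deliveries achieved with 100 data and 20 control transmissions gives an efficiency of 0.75.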
3. Motivation
Current tree-based multicast protocols need to address several important issues. Though they involve less control overhead compared to mesh-based protocols, they still have to flood JoinQuery packets throughout the network. This reduces the overall efficiency of the protocol. In tree-based protocols, each member node is connected to the source through a single path only. Therefore, a link break may result in several member nodes getting disconnected from the multicast group. If such disconnections are not detected quickly, a significant drop in packet delivery ratio occurs. Current tree-based protocols are not adaptive to link state characteristics such as link stability, link load, and neighbor connectivity. In tree-based protocols, members join the multicast tree based on criteria such as the shortest distance from the multicast source or the nearest multicast member. The selection of the shortest distance from the multicast source or the nearest tree node results in delay during connection setup. In this paper, we propose an adaptive distributed multicast protocol that addresses the above issues. Our protocol restricts the flooding of JoinQuery packets throughout the network by selectively allowing only certain nodes to
forward the JoinQuery packets. It also makes sure that the JoinQuery reaches all nodes in the network. In our protocol, the two hop local topology information maintained at nodes is used to quickly recover from path breaks. Our protocol is a distributed protocol that can adapt to the local topology characteristics. It also exploits link characteristics such as neighbor connectivity for efficient multicast routing. The path setup process in our protocol does not wait until the shortest available route is found. Instead, it gradually adapts towards the shortest distance between members and the multicast source. Hence, our protocol reduces the connection setup time without compromising on the optimized path. In our protocol, a node selects a subset of nodes from its neighbors list (NL). This subset, which we refer to as the preferred list (PL), contains the node IDs of the neighbors that are allowed to forward the JoinQuery packet. Selection of this subset may be based on link or node characteristics. The JoinQuery packet carries the preferred list. All neighbors receive the JoinQuery packet because of the broadcast radio channel, but only the neighbors present in the preferred list forward it. In this way the packet is forwarded by at most K neighbors, where K is the maximum number of neighbors allowed in the PL. The parameters for selecting the preferred list can be chosen based on link characteristics, node characteristics, or a combination of both. The link characteristics used for computing the preferred list can be link stability, residual bandwidth on the link, link load, link delay, or channel quality. The node-based parameters can be node degree, node mobility, residual battery life, or the number of sessions passing through a node. The combination of these properties makes the protocol more adaptive toward the diverse requirements of ad hoc wireless networks.
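The preferred-list idea described above can be sketched as follows, assuming each neighbor has already been assigned a numeric score by whichever link or node metric is in use (the function and variable names are illustrative, not from the paper):

```python
from typing import Dict, List

K = 3  # maximum number of preferred neighbors allowed in the PL

def compute_preferred_list(neighbor_scores: Dict[str, float],
                           k: int = K) -> List[str]:
    """Rank neighbors by a link/node metric (higher is better) and keep
    at most k of them; this subset is the preferred list carried in the
    JoinQuery, and only these neighbors forward the packet."""
    ranked = sorted(neighbor_scores, key=neighbor_scores.get, reverse=True)
    return ranked[:k]
```

Any neighbor not in the returned list simply discards the JoinQuery it overhears on the broadcast channel.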
4. Network model
The network is a collection of N nodes, where each node can move randomly at any time. Each node periodically transmits a small control packet, known as a hello packet or beacon, to its neighbors. All nodes in the transmission region of a node i are the neighbors of node i, denoted by Ni. Each node i also maintains the list of the neighbors of its neighbors, denoted by NNi. The list of all neighbors and their neighbors is maintained in a table called the neighbor's neighbor table (NNT); this essentially constitutes the node's two hop local topology information. A node uses the beacon packets it receives from its neighboring nodes to keep the NNT up-to-date. A beacon transmitted by a node i contains the beacon source address i and the neighbor list Ni of node i. Our protocol assumes promiscuous mode support at the MAC layer. When a node operates in the promiscuous mode, it can listen to and extract information out of packets that it hears, even those not actually intended for it. Hence, in the promiscuous mode, a unicast
packet is received and processed by all nodes that are neighbors of the transmitting node. The data packet transmission also exploits the promiscuous mode facility to reduce collisions due to the hidden terminal problem.
4.1. Beaconing mechanism
A node transmits a beacon packet after every Tbeacon time period. Each node determines its connectivity with its neighbors every TCkBcon time period (which is an integral multiple of Tbeacon), by means of the beacons received. The reason for checking link breaks only after multiple beacon periods is that beacons are broadcast packets and are prone to collisions. There is a trade-off between maintaining consistent information about the neighbors, the beacon overhead, and the time for detecting path breaks based on beacons. Hence, the Tbeacon and TCkBcon values must be selected properly and should be adaptive to network dynamics. A node skips its current scheduled beacon broadcast if any packet is transmitted during the current Tbeacon period. Here the transmitted packet serves as a beacon, provided all nodes are in the promiscuous mode. This helps in decreasing the beacon overhead when the network load is high, because more data/control packet transmissions result in more beacon transmissions being skipped.
4.2. Data structures and formats of packets
In this section we describe the various data structures and the formats of packets used in our scheme.
4.2.1. NNT
Each node maintains information about its neighbors and their neighbors in a table called the neighbor's neighbor table (NNT). In this way a node has access to its local two hop topology information, which it uses for efficient routing and tree maintenance.
4.2.2. Preferred link table (PLT)
Each node computes this table each time it receives a new JoinQuery for which it is an eligible forwarding node (the eligibility criteria are described later, in the section on the tree construction phase).
This table contains the node's neighbors that are eligible for further forwarding the JoinQuery packets, sorted according to a preferred order. The criteria for sorting these nodes can be based on link characteristics such as link stability, residual bandwidth on the link, link load, and channel quality, or node characteristics such as node degree, node mobility, and battery life. For each JoinQuery packet to be forwarded, the first K entries from the computed PLT are put into the packet to notify the intended subset of neighbors. Here K is a global parameter that indicates the maximum number of neighbors allowed to further forward the JoinQuery packet.
4.2.3. Connect table (CT)
This table is used to store the tree information. Each entry in this table contains the tuple (GroupID, McastSrcID, MemberID, Downlink, Second downlink, Uplink, Second uplink). Here GroupID identifies the entry for the corresponding multicast group. McastSrcID and MemberID are the end-points of the path. Uplink and Second uplink refer to the one hop and two hop away nodes, respectively, on the path leading to the multicast source, while Downlink and Second downlink refer to the one hop and two hop away nodes, respectively, towards the multicast member node. This information is used for quick repair of the tree and for re-routing of packets when a path break occurs.
4.2.4. Join query buffer table (JQBufferTable)
This table is used to restrict the forwarding of multiple JoinQuery packets by a node when two or more members of the same group initiate JoinQuery transmissions simultaneously. In such cases, the intermediate nodes store the (McastSrcID, GroupID) pair of the first forwarded JoinQuery. This prevents subsequent JoinQuery packets received for the same (McastSrcID, GroupID) pair, initiated by the same or other member nodes, from being forwarded. The node buffers a JoinQuery for a very short duration TJQbuffer. The buffering time TJQbuffer should always be less than the JoinQuery timeout period TJQTimeOut; otherwise JoinQuery failures may occur. When a node receives the JoinReply packet from the multicast source or from other multicast members, the node sends reply packets back to all member nodes whose JoinQuery packets have been buffered in the JQBufferTable. Buffering of JoinQuery packets helps in reducing the control overhead in certain scenarios. Consider the example shown in Fig. 1. Here the network is partitioned into two sub-graphs G1 and G2, which are bridged by a single link. A path between two nodes residing in the two different partitions always passes through this link.
If two member nodes n1 and n2 of G1 initiate JoinQuery packets for the multicast source McastSrc in G2, then only the first JoinQuery that reaches the boundary node of G1 (i.e., node n3) is forwarded. The subsequent JoinQuery packets are buffered for up to a time period of min(TJQbuffer, TJQTimeOut − TraverseHop × MaxLinkDelay). Here, TraverseHop is the number of hops traversed by the JoinQuery packet and MaxLinkDelay is the maximum propagation delay on any link. When the JoinReply message corresponding to the first JoinQuery arrives at node n3, it forwards JoinReply packets to both nodes n1 and n2.
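The buffering-time rule above can be expressed directly (a sketch; the parameter names are transliterations of the paper's symbols):

```python
def jq_buffer_time(t_jq_buffer: float, t_jq_timeout: float,
                   traverse_hops: int, max_link_delay: float) -> float:
    """Duration for which a duplicate JoinQuery is buffered:
    min(T_JQbuffer, T_JQTimeOut - TraverseHop * MaxLinkDelay).
    The second term keeps the buffered query from outliving the
    originator's JoinQuery timeout."""
    return min(t_jq_buffer, t_jq_timeout - traverse_hops * max_link_delay)
```

For a query that has already traversed many hops, the remaining-timeout term dominates and shortens the buffering period accordingly.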
5. Description of our multicast protocol
Our preferred link-based multicast protocol (PLBM) is a tree-based, receiver-initiated protocol. Hence, each member node is itself responsible for getting connected to the multicast source. The main advantage of a receiver-initiated multicast protocol is that the responsibility of maintaining
Fig. 1. An illustration of the use of JQBuffer. (The figure shows two sub-graphs, G1 containing nodes n1, n2, and n3, and G2 containing node n4 and the multicast source McastSrc, bridged by a single link.)
the multicast tree is lifted off from the source node. We use a hard state approach for maintaining the tree; hence, the overhead of the periodic flooding used in the soft state approach is eliminated. We now describe our protocol by viewing the multicast group as a group containing only one multicast source. In Section 8.1, we describe how our algorithm can be extended to a group having multiple multicast sources.
5.1. Multicast tree construction phase
In this phase, all member nodes of the multicast group try to get connected to the multicast source by initiating JoinQuery transmissions. This phase starts when a member node tries to connect to the multicast source for the first time.
5.1.1. JoinQuery initiation
A member initiates a JoinQuery only if the following conditions are satisfied.
1. The node is currently not connected to the multicast source.
2. No buffered JoinQuery entry for the same multicast group, initiated by other members of the group, is present in its JQBufferTable. If such a JoinQuery has been buffered, it implies that the member node is awaiting a JoinReply from the multicast source, meant for itself or for other members. The JoinQuery buffering mechanism prevents a node from forwarding multiple JoinQuery packets to the same multicast source.
3. No tree node is present in the node's NNT.
4. At least one eligible neighbor is present in its NNT for transmitting the JoinQuery packet.
If all the above conditions are satisfied, the node computes the preferred list using the preferred link-based algorithm (PLBA), which is described in Section 6. A preferred link table (PLT) is maintained at each node, wherein the neighbors of the current node are kept in a preferred order. If no preferred neighbors are present, the member does not initiate a JoinQuery, and retries periodically, every TJQTimeOut period, until it gets connected to the multicast source.
The member node selects the first K entries from the PLT and transmits a JoinQuery packet carrying this
list of preferred neighbors. These K neighbors are said to be eligible neighbors for forwarding the JoinQuery packet. If tree nodes (the multicast source, connected member nodes, or forwarding nodes) are present in the member node's NNT, the node sends a JoinConfirm message directly to the tree node whose distance (number of hops) from the multicast source is minimum. Flooding is not used in such cases. When multiple nodes are at the same distance from the source, one of them is selected randomly as the forwarding node. The tree node to which the current member node connects can also be selected based on criteria such as the most stable node or the least overloaded node. The JoinQuery packet can be transmitted in a number of ways. In the simplest case, it is broadcast to all neighbors. But this broadcast is prone to collisions and can result in JoinQuery failure. Though unicast transmission ensures reliable delivery, sending the JoinQuery as a unicast packet to each preferred node involves very high control overhead. Our protocol utilizes the promiscuous mode capability of nodes for forwarding JoinQuery packets: the JoinQuery is sent as a unicast packet to only one of the preferred nodes; all other preferred nodes receive the JoinQuery packet in the promiscuous mode.
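The four initiation conditions of Section 5.1.1 can be summarized in a small predicate (an illustrative sketch; the flag names are ours, not from the paper):

```python
def may_initiate_join_query(connected: bool,
                            buffered_jq_for_group: bool,
                            tree_node_in_nnt: bool,
                            has_eligible_neighbor: bool) -> bool:
    """A member initiates a JoinQuery only when: (1) it is not yet
    connected, (2) no JoinQuery for the same group is buffered,
    (3) no tree node is within its two hop NNT (otherwise it sends a
    JoinConfirm directly), and (4) at least one eligible neighbor
    exists to carry the query."""
    return (not connected
            and not buffered_jq_for_group
            and not tree_node_in_nnt
            and has_eligible_neighbor)
```

If the predicate is false only because no eligible neighbor exists, the member retries after every TJQTimeOut period.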
5.1.2. JoinQuery forwarding
When a node receives a JoinQuery packet, it first checks whether it is eligible to forward the packet. A node is considered eligible to forward the JoinQuery only if it is in the preferred list (PL) field of the received JoinQuery packet. Those neighbors that are not eligible discard the received JoinQuery packet. In this way, at most K neighbors of a node are eligible to forward a JoinQuery. An eligible node checks its connectivity to the tree. If it is connected, the node prepares a JoinReply packet and sends it back to the JoinQuery source node. It also starts a timer that expires after the JOIN_CONFIRM_TIMEOUT period, in order to wait for a JoinConfirm packet from the corresponding JoinQuery source node. If an eligible neighbor is not yet connected to the group, but its JQBufferTable has an entry for the same multicast group, then the current JoinQuery is buffered and not forwarded. After receiving a JoinReply for any one of the previous JoinQuery entries belonging to the same group, the node forwards the JoinReply packet to the nodes corresponding to the other buffered JoinQuery entries (which belong to the same group but were initiated by other members). If the node is not connected and no JoinQuery packets are buffered, then it is eligible to forward the JoinQuery to its preferred neighbors. The forwarding is done in the following order. If the multicast source is present in the node's NNT, the JoinQuery is directly forwarded to the source as a unicast packet. If one or more tree nodes are present in the NNT, then the JoinQuery is forwarded as a unicast packet to the tree node which is at the shortest distance from the multicast source. If neither the multicast source nor tree nodes
are present in the NNT, the node computes the PLT using the PLBA algorithm. It selects the first K neighbors from the PLT as preferred neighbors, which are eligible to further forward the JoinQuery packet. The old preferred list of the received JoinQuery packet is replaced by the newly computed preferred list. If no eligible nodes are present, the JoinQuery is discarded and marked as sent in the JQBufferTable. Any further duplicate JoinQuery packets are simply discarded.
5.1.3. JoinQuery at destination
When a JoinQuery reaches a tree node (i.e., the multicast source, a forwarding node, or a member node) which is eligible to process the received JoinQuery packet, that node sends back a JoinReply packet to the JoinQuery source node. It then marks the JoinQuery as processed. All subsequent duplicate JoinQuery packets received are discarded. Our protocol selects the first JoinQuery and does not wait for multiple JoinQuery packets. This reduces the delay in route selection, and the protocol adapts dynamically to the best available path. After sending a JoinReply, the node starts a timer for a JOIN_CONFIRM_TIMEOUT period, and waits for a JoinConfirm message from the corresponding member node. The JoinReply packet follows the JoinQuery's traversed route in the reverse order.
5.1.4. JoinReply at intermediate node
When an intermediate node receives a JoinReply packet, it checks whether it has already processed a JoinReply with the same (GroupID, SourceID, SeqNum). If such a JoinReply message has already been forwarded or processed, the current JoinReply packet is discarded. If the JoinReply has not yet been processed, it is marked as processed by storing the (GroupID, SourceID, SeqNum) tuple, and the intermediate node forwards the JoinReply to the next node on the path towards the JoinQuery source node. The JoinReply message is forwarded to the next node as a unicast packet; all neighboring nodes process the JoinReply using the promiscuous mode.
A neighbor node is eligible to forward the JoinReply only if it has not yet processed the JoinReply and it is on the JoinReply path. The main aim of storing the JoinReply processing information is to prevent multiple JoinReply packets from reaching the member node, thereby eliminating transmissions of multiple JoinReply packets.
5.1.5. JoinReply at member node (JoinQuery source node)
When the first JoinReply reaches the JoinQuery source, the node confirms its connectivity by sending back a JoinConfirm packet. After sending the JoinConfirm packet, the source node rejects subsequent JoinReply packets with the same (GroupID, SourceID) pair having the same or a lower SeqNum. As only the first JoinReply packet is selected by the member node to connect to the multicast source, the intermediate nodes allow only one JoinReply to reach the member node; they reject the duplicate JoinReply packets.
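The JoinQuery handling order described in Sections 5.1.2 and 5.1.3 can be sketched as a decision function (the action labels and parameter names are illustrative, not part of the protocol specification):

```python
def jq_action(in_preferred_list: bool, connected_to_tree: bool,
              buffered_same_group: bool, source_in_nnt: bool,
              tree_node_in_nnt: bool) -> str:
    """Decision order a node applies to a received JoinQuery, checked
    top to bottom as in the text."""
    if not in_preferred_list:
        return "discard"                      # not an eligible forwarder
    if connected_to_tree:
        return "send_join_reply"              # tree node answers directly
    if buffered_same_group:
        return "buffer"                       # await the earlier JoinReply
    if source_in_nnt:
        return "unicast_to_source"            # source within two hops
    if tree_node_in_nnt:
        return "unicast_to_nearest_tree_node"
    return "forward_to_preferred_neighbors"   # recompute PLT, replace PL
```

The last branch is where the PLBA is invoked and the packet's old preferred list is replaced by the newly computed one.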
5.1.6. JoinConfirm forwarding
A node, after receiving a JoinConfirm packet, marks itself as connected to the multicast tree, and stores path information about the next two hops on both sides, using the path information carried by the JoinConfirm packet. The path information towards the member side is termed downlink side information. It consists of the next downlink node, the node next to the downlink node (second downlink node), and the member node information. Similarly, the intermediate node also stores the path information towards the multicast source node, termed uplink side information. This consists of the uplink node, the node next to the uplink node (second uplink node), the multicast source, and the destination of the JoinConfirm packet. This information is stored in the CT. It helps the protocol adapt to the dynamics of the network, and is used for path optimization, tree reconfiguration during link breaks, and packet re-routing. An intermediate node, after forwarding a JoinConfirm message, starts a timer which expires after the PACKET_AWAIT_TIMER period. The purpose of this timer is to counter the inconsistencies that occur due to the loss of a JoinConfirm packet. If an intermediate node does not receive any data packet within the PACKET_AWAIT_TIMER period, the node deletes the corresponding entries from its CT. Once a JoinConfirm packet sent by a member node reaches its destination (the multicast source or a connected intermediate node), the tree nodes (multicast source node and forwarding nodes) start sending data packets to the member node.
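The Connect Table tuple of Section 4.2.3, which the JoinConfirm processing above populates, might be modeled as follows (an illustrative sketch; the field names mirror the text, and node IDs are assumed to be integers):

```python
from dataclasses import dataclass

@dataclass
class ConnectEntry:
    """One Connect Table (CT) entry: the two hop path context stored
    on both sides of a tree node, used for quick tree repair and
    packet re-routing after a link break."""
    group_id: int
    mcast_src_id: int
    member_id: int
    downlink: int          # one hop towards the member
    second_downlink: int   # two hops towards the member
    uplink: int            # one hop towards the multicast source
    second_uplink: int     # two hops towards the multicast source
```

Keeping the second uplink/downlink nodes lets a node bypass a single moved neighbor without re-flooding a JoinQuery.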
5.2. Routing data packets
Forwarding nodes (the multicast source and intermediate tree nodes) transmit data packets either as unicast packets or as broadcast packets. A data packet is transmitted as a unicast packet when there is only a single downlink node. In general, if multiple forwarding nodes are present in the downlink tree, the packets would usually be transmitted as broadcast packets. But in our algorithm, instead of transmitting the data packet in the broadcast mode, which is more prone to collisions, it is transmitted as a unicast packet in the promiscuous mode. Transmitting a data packet using the promiscuous mode has the following advantages.
Fewer collisions: As unicast data packets are protected by the RTS-CTS control packet exchange, the probability of collisions is very low. A data packet is sent only if a CTS is received in response to the RTS sent by the sender. Since RTS and CTS packets contain transmission information, neighbor nodes that hear the RTS or CTS refrain from transmitting when the actual transmission takes place. Hence, data packets suffer fewer collisions and are delivered more reliably.
Removes inconsistencies: If a node does not receive a CTS packet in response to multiple retransmissions of RTS packets, the intended receiving node is assumed to have moved
away. In this way the neighbor information is updated, which helps in maintaining consistent information in the NNT. Another advantage of using the promiscuous mode is that, when a non-tree node receives a data packet in the promiscuous mode from a node that is part of the multicast tree, it can gain limited information about the multicast tree, such as the two-hop tree nodes, the distance of tree nodes from the multicast source, and the actual node to which the packet was sent. A node piggybacks the downlink nodes list while forwarding data packets for that group. This serves two purposes. First, it explicitly specifies the downlink nodes that are eligible to further forward the data packet, thereby eliminating the possibility of loop formation; it also informs the neighbors about the tree nodes within their local topology. Second, it helps in removing inconsistencies in the multicast tree information. For example, when a node transmits packets to more than one downlink node and some of the downlink nodes have moved away, the nodes that are neighbors of both the transmitting node and a moved node inform the uplink node about the moved nodes. Similarly, any inconsistency that occurs due to collisions of packets is also removed by making use of this piggybacked neighbors list. All intermediate forwarding nodes and member nodes, after successfully receiving a data packet with sequence number p, cancel the current PACKET_AWAIT_TIMER and start a new timer awaiting a data packet with sequence number p + 1.
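The two roles of the piggybacked downlink list can be modelled with two small helpers (names and signatures are hypothetical):

```python
def should_forward(my_id, piggybacked_downlinks):
    """A node forwards a data packet further only if the sender explicitly
    listed it as a downlink node. This rules out loop formation even though
    other tree nodes overhear the packet promiscuously."""
    return my_id in piggybacked_downlinks

def stale_downlinks(piggybacked_downlinks, heard_neighbors):
    """A neighbor of both the transmitting node and a moved downlink node can
    compare the piggybacked list against the neighbors it still hears, and
    report the missing ones to the uplink node (inconsistency removal)."""
    return [n for n in piggybacked_downlinks if n not in heard_neighbors]
```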
5.3. Multicast tree maintenance phase
The most important phase in any tree-based multicast protocol for MANETs is the tree maintenance phase. Since only a single path exists between any group member and the multicast source, a link break may result in multiple member nodes getting disconnected from the source. The key issues here are how to quickly detect a link break and how to quickly reconfigure the tree. Quick detection of link breaks in a multicast tree greatly affects the performance of the protocol in terms of its packet delivery ratio. For example, if a tree link very near to the multicast source breaks, it significantly affects the packet delivery ratio of the entire sub-tree under the moved node, which may contain many member nodes. In our multicast protocol, link breaks can be detected in two ways. As our multicast protocol is beacon based, a node can detect a link break if it has not received a beacon for a TCkBcon period. The selection of the TCkBcon period is very important. If it is too long, link break detection is delayed, resulting in loss of data packets. If it is too short, it results in the detection of false link breaks, as beacons are also prone to collisions. With a beacon periodicity of 1 beacon per second, TCkBcon should be 3–4 s. The worst case detection time of link breaks based on beacons is ≈ 2 × TCkBcon.
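Beacon-based detection reduces to a per-neighbor timestamp check. A minimal sketch, assuming 1 beacon/s and TCkBcon = 4 s as in the simulation settings (the class and method names are our own):

```python
T_CK_BCON = 4.0  # seconds; 3-4 s is suggested for a 1 beacon/s periodicity

class BeaconMonitor:
    def __init__(self):
        self.last_beacon = {}  # neighbor id -> time the last beacon was heard

    def on_beacon(self, neighbor, now):
        self.last_beacon[neighbor] = now

    def broken_links(self, now):
        """Neighbors silent for more than T_CK_BCON are assumed to have moved
        away. The worst case detection time is about 2 * T_CK_BCON, on our
        reading because the check itself runs once per T_CK_BCON period."""
        return [n for n, t in self.last_beacon.items() if now - t > T_CK_BCON]
```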
The other mechanism used to detect a link break is based on unicast packet transmission characteristics. Transmission of each unicast packet is preceded by the RTS-CTS control packet exchange. A link is assumed to be broken if the sender node does not receive any CTS packet in response to multiple retransmissions of RTS packets. Since in our multicast protocol each data packet is transmitted as a unicast packet to one of the nodes in the preferred neighbors list (the other preferred nodes receive the data packet in the promiscuous mode), a link break can be detected quickly. As we maintain the two-hop local topology information in the NNT and the two-hop tree information in the CT, the broken link is bypassed quickly using the information available in the NNT and CT. The end node of the broken link towards the multicast source is termed the uplink node, while the one towards the member node is termed the downlink node. When the downlink node of a broken link does not get re-connected to the uplink node, the link break is detected after some time, either from the absence of beacons from the downlink node concerned, or from a data packet timeout (after the PACKET_AWAIT_TIMER period). When a node detects a link break it takes the following actions.
• It deletes the neighbor (moved uplink node) and its neighbors' information from its NNT and CT.
• If the node is a multicast group member or has multiple downlink nodes or member nodes, it tries to repair the tree by re-connecting to any of the tree nodes in its local two-hop topology. If no tree nodes are available, it initiates a JoinQuery and sends a StickToMe message to its downlink nodes. The purpose of this message is to keep intact the subtree for which the current downlink node is the root. This helps in reducing the flooding of JoinQuery packets by multiple nodes when a single link breaks.
• If only a single downlink node is present, the node informs the single downlink member node about the link break, which then re-connects to the multicast tree following the procedure described in the tree construction phase.
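The recovery rules above can be condensed into one decision function; the role labels returned are illustrative, not protocol fields:

```python
def on_link_break(is_member, num_downlinks, tree_node_in_two_hops):
    """Decide the recovery action after a link break, following the steps in
    Section 5.3. Assumes the moved node has already been purged from the NNT
    and CT; the string labels are our own shorthand."""
    if is_member or num_downlinks > 1:
        if tree_node_in_two_hops:
            return "reconnect-locally"       # splice onto a nearby tree node
        return "joinquery+stick-to-me"       # re-flood, keep the subtree intact
    if num_downlinks == 1:
        return "notify-downlink-member"      # the member re-joins on its own
    return "prune"                           # nothing left to serve
```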
5.4. Tree deletion
When a multicast session ends, the multicast source node initiates the DeleteTree message to delete the whole tree. This message is flooded throughout the network so that stale entries regarding the multicast tree are deleted from the CTs of nodes. All tree nodes, after receiving the DeleteTree message, remove the entries from their CT and rebroadcast the message. Once a tree is deleted, each node records the multicast GroupID. This stored GroupID information is later used to restrict the flooding of JoinQuery packets by group members. These members are those nodes which have not yet received the DeleteTree message or those nodes that
might have been isolated due to network partitioning and therefore cannot receive the DeleteTree message.
5.5. Tree optimization
During the multicast session, if any tree node finds another tree node which is at a shorter distance from the multicast source than its current uplink node, it tries to optimize its path. This is done by sending a PruneMe message to the current uplink node and a JoinConfirm packet to the node having the shorter distance to the source. Similarly, when a data packet with a lower hop count is received by a node (in promiscuous mode) through another node, the node optimizes the path by disconnecting from its current uplink node and re-connecting to the node nearer to the source.
6. Algorithm for computing preferred links
We propose an algorithm termed the preferred link-based algorithm (PLBA), which is used by our multicast protocol for computing preferred links. This algorithm selects neighbors based on their degree, i.e., their number of neighbors. Preference is given to neighbors whose degree is high. As higher degree neighbors cover more nodes, only a few of them are required to cover all nodes of the NNT. This reduces the number of broadcasts. The main motivation behind the PLBA algorithm is to involve fewer nodes in the flooding of JoinQuery packets in the network.
6.1. Neighbor degree-based preferred link algorithm (NDPL)
Let d be the node that calculates the preferred link table (PLT). TP is the traversed path, consisting of the IDs of the nodes through which the JoinQuery packet has traversed so far. OLDPL is the preferred list of the received JoinQuery packet. The NNT of node d is denoted by NNTd. N(i) denotes the neighbors of node i, including node i itself. The include list (INL) is the set of all neighbors reachable by transmitting the JoinQuery after execution of the algorithm, and the exclude list (EXL) is the set of all neighbors that remain unreachable after execution of the algorithm.
Step 1: In this step, the node marks its neighbors that are not eligible for forwarding the JoinQuery. Neighbors are marked as not eligible if they have already forwarded the JoinQuery, have already been marked as not eligible by previous forwarding nodes, or are covered by other nodes. A node being covered means that the JoinQuery reaches it through other nodes.
Step 1a: If a node i of TP is a neighbor of node d, mark all neighbors of node i as reachable.
∀i [if i ∈ TP ∧ i ∈ N(d)], INL = INL ∪ N(i). /* mark N(i); already received from i */
Step 1b: If a node i of OLDPL is a neighbor of node d, mark all neighbors of node i as reachable.
∀i [if i ∈ OLDPL ∧ i ∈ N(d)], INL = INL ∪ N(i). /* mark overlapping neighbors */
Step 1c: If neighbor i of node d has a neighbor n present in TP, mark all neighbors of node i as reachable.
∀n [if n ∈ TP ∧ n ∈ N(i) ∧ i ∈ N(d)], INL = INL ∪ N(i). /* mark N(i); already received from i */
Step 1d: If neighbor i of node d has a neighbor n present in OLDPL and n < d, mark all neighbors of node i as reachable.
∀n [if n ∈ OLDPL ∧ n ∈ N(i) ∧ i ∈ N(d) ∧ (n < d)], INL = INL ∪ N(i). /* mark overlapping neighbors */
Step 2: If neighbor i of node d is not in INL, put node i in the preferred link table PLT and mark all neighbors of node i as reachable. If node i is present in INL, mark those neighbors of node i that are not in INL as unreachable, as N(i) may not be included in this step. The neighbors i of node d are processed in decreasing order of their degree.
∀i, i ∈ N(d): [if i ∉ INL], INL = INL ∪ N(i), PLT = PLT ∪ {i}, EXL = EXL − {n | n ∈ N(i)};
otherwise, EXL = EXL ∪ {n | n ∈ N(i) ∧ n ∉ INL}.
After Step 2, the JoinQuery is guaranteed to reach all neighbors of node d. If EXL is not empty, some neighbors' neighbors n of node d are currently unreachable; they are covered in Step 3.
Step 3: If neighbor i of node d has a neighbor n present in EXL, insert i in the PLT and mark all neighbors of node i as reachable. Delete all neighbors of node i from EXL. Neighbors are processed in decreasing order of their degrees.
While EXL ≠ ∅: for a neighbor i with n ∈ EXL ∩ N(i), INL = INL ∪ N(i), PLT = PLT ∪ {i}, EXL = EXL − {n | n ∈ N(i)}.
After Step 3, all nodes in NNTd are reachable. But some nodes may have been inserted into the PLT redundantly due to overlapping of neighbors in Steps 2 and 3.
Step 4: Apply reduction steps to remove overlapping neighbors from the PLT without compromising reachability.
Step 4a: Remove each neighbor i from the PLT if N(i) is covered by the remaining neighbors of the PLT. Here the minimum degree neighbor is examined first each time.
PLT = PLT − {i} if N(i) ⊆ N(PLT − {i}).
Step 4b: Remove each neighbor i from the PLT whose N(i) is covered by node d itself.
PLT = PLT − {i} if N(i) ⊆ N(d).
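Putting Steps 1–4 together, a minimal sketch of NDPL follows, assuming set-based NNT storage and degree ties broken by node ID (neither of which the paper specifies):

```python
def ndpl(d, nnt, tp, oldpl, k=None):
    """Sketch of NDPL (Section 6.1).
    d     : node computing its preferred link table
    nnt   : two-hop topology, nnt[i] = set of neighbors of i (i included)
    tp    : traversed path of the received JoinQuery
    oldpl : preferred list carried by the received JoinQuery
    k     : optional bound on the preferred list size"""
    N = lambda i: nnt.get(i, {i})
    by_degree = lambda nodes: sorted(nodes, key=lambda x: (-len(N(x)), x))
    inl, exl, plt = set(), set(), []

    # Step 1: mark neighbors already covered through TP or OLDPL.
    for i in tp:
        if i in N(d):                                  # 1a
            inl |= N(i)
    for i in oldpl:
        if i in N(d):                                  # 1b
            inl |= N(i)
    for i in N(d) - {d}:
        if any(n in N(i) for n in tp):                 # 1c
            inl |= N(i)
        if any(n in N(i) and n < d for n in oldpl):    # 1d
            inl |= N(i)

    # Step 2: admit uncovered neighbors, highest degree first.
    for i in by_degree(N(d) - {d}):
        if i not in inl:
            inl |= N(i)
            plt.append(i)
            exl -= N(i)
        else:
            exl |= {n for n in N(i) if n not in inl}

    # Step 3: cover any two-hop nodes left in EXL.
    for i in by_degree(N(d) - {d}):
        if not exl:
            break
        if exl & N(i):
            inl |= N(i)
            if i not in plt:
                plt.append(i)
            exl -= N(i)

    # Step 4a: drop neighbors covered by the rest of PLT, minimum degree first.
    for i in sorted(plt, key=lambda x: (len(N(x)), x)):
        rest = set().union(*[N(j) for j in plt if j != i])
        if N(i) <= rest:
            plt.remove(i)
    # Step 4b: drop neighbors whose coverage node d already provides.
    plt = [i for i in plt if not (N(i) <= N(d))]
    return plt if k is None else plt[:k]
```

On a small topology this behaves as the text describes: high-degree neighbors are preferred, and neighbors whose coverage is already provided by the traversed path or by other preferred nodes are skipped.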
6.2. Description of PLBA The PLBA algorithm maximally utilizes the two hop topology information that it obtains using the NNT. We
exploit the fact that higher degree nodes can reach more nodes. We now briefly explain each step of PLBA. In Step 1a, the current forwarding node excludes all its neighbors that have already forwarded the JoinQuery. In Step 1b, the current forwarding node is prevented from forwarding the JoinQuery to those neighbors that are shared by nodes in the preferred list. Step 1c prevents broadcast to those nodes that are neighbors of nodes on the path traversed by the JoinQuery. Step 1d prevents multiple copies of the JoinQuery from reaching a node through nodes that are neighbors of the previous node's preferred list. Step 2 starts inserting neighbor nodes into the preferred list in decreasing order of their degrees. Our algorithm does not put the higher degree nodes blindly into the preferred list. Instead, after putting in each node, we exclude those neighbors that are neighbors of the currently included node. Step 3 includes in the PLT all those nodes whose neighbors are still not covered by the nodes in the partial preferred list. This step guarantees reachability to all the nodes in the NNT, provided the preferred list size is unbounded. If the size is fixed as K, some neighbors' neighbors may be unreachable when it is not possible to reach all nodes of the NNT. Step 4a eliminates redundant nodes that may have been included in Step 3. The sequence (or order) in which this step is applied is very important. To obtain the best results, the neighbor with the minimum degree is selected from the preferred list and checked for redundant neighbors; the algorithm then proceeds to the neighbor with the next lowest degree, and so on. Step 4b excludes those nodes from the PLT that are neither the destination nor have any new outgoing links. This algorithm is called only when no tree node is present in the CT. Since our algorithm tries to send the JoinQuery to disjoint neighbors having high degrees, the number of nodes that further forward the JoinQuery packet is small.
This is because many neighbors are rejected due to overlapping neighbors; hence the control overhead is reduced efficiently. If the tree nodes are not within two hops, the emphasis is on including those nodes that have the maximum number of new outgoing links.
6.3. Example
We illustrate our algorithm using the example shown in Fig. 2. In this example, node 18 is the multicast source and nodes 1, 6, and 12 are the members. Let us assume that nodes 1, 6, and 12 connect to the multicast source in that order. The maximum size of the preferred list (PL) is taken as 3.
6.3.1. Tree construction
When node 1 wants to connect to the multicast source, it floods JoinQuery packets into the network, as no tree node or multicast source is present in its NNT. Instead of sending the JoinQuery to all its neighbors, node 1 computes the preferred
Fig. 2. Multicast example.
neighbors using PLBA. The preferred neighbors are nodes 3 and 4. Node 2 is rejected, as all its neighbors have already been covered; hence it is eliminated in Step 4a of PLBA. When nodes 3 and 4 receive the JoinQuery from node 1, they in turn compute their preferred neighbors using PLBA. Node 3 sends the JoinQuery to nodes 10 and 11, while node 4 forwards it to nodes 5, 8, and 9. The JoinQuery is dropped at nodes 11 and 5, as no preferred node is available at these nodes. Nodes 8, 9, and 10 further forward the JoinQuery to nodes 16, 16, and 9, respectively. Finally, a single JoinQuery reaches the multicast source, which then sends a JoinReply to node 1 (member M1) through path 18-16-9-4-1 (path 1) or 18-16-8-4-1 (path 2). Node 1 finally confirms its connectivity to the multicast source node, i.e., node 18. When member node 6 (member M2) wants to connect to the multicast source, it first checks whether any tree node (forwarding node or connected member node) is in its NNT. As node 4 is a forwarding node for the multicast group, node 6 (member M2) connects directly to the multicast source by sending a JoinConfirm to node 4, without flooding a JoinQuery. For member M3 (node 12), no tree nodes are present in its NNT. It initiates a JoinQuery that is flooded in a limited manner using our PLBA algorithm. It forwards the JoinQuery to nodes 13 and 11. Nodes 13 and 11 have tree nodes in their NNTs and hence forward the JoinQuery directly. Node 9 receives the JoinQuery through node 10, forwarded by node 13. Similarly, node 4 receives the JoinQuery forwarded by node 11 through node 3. Node 12 (member M3) receives two JoinReply packets, from nodes 4 and 9. It connects to the forwarding node whose JoinReply arrives first, by sending a JoinConfirm message.
6.3.2. Tree maintenance
Consider the case when node 9 moves away. This causes a link break between node 16 and node 9, which is detected quickly. When node 9 moves, data packets are dropped, and hence node 16 re-routes the data packets using the two-hop topology information from its NNT and the two-hop path information from its CT. Node 9 is bypassed, and the new path between the multicast source and member M1 (node 1) is 18-16-8-4-1. When node 10, which is connected to the tree through node 9, does not receive data packets within the PACKET_AWAIT_TIMER period, it assumes a link break. Each forwarding node tries to locally repair the tree using its NNT and CT information. In case no tree node is present in its NNT and more than one downlink node is present in its CT, it broadcasts the JoinQuery. Otherwise, the link break information is propagated to the corresponding member node (member M3). Member M3 re-floods the JoinQuery using the PLBA algorithm in order to re-connect to the multicast tree.
7. Properties of preferred links algorithm
7.1. Reachability of PLBA
Lemma 1. Given that every node in the network has an accurate NNT and mobility does not alter the topology during the path finding process, our algorithm can find a route from the source node (SRC) to the destination node (DEST), provided such a path exists and |PL| is unbounded.
Let N(d) be the neighbor list of node d. OLDPL and TP are the preferred list and traversed path of the received JoinQuery. Let pathLength represent the length of the path SRC − IN1 − IN2 − · · · − INp−1 − DEST from SRC to DEST, where INi is the ith intermediate node in the path.
Proof by induction.
Induction basis: pathLength ≤ 2.
• When DEST is 1 hop away, DEST ∈ N(SRC); hence a path to DEST is found.
• When DEST is 2 hops away, DEST ∈ NNT(SRC); hence a path to DEST is found.
Induction step: PLBA finds a path to DEST when pathLength = k + 1, given that it finds paths with pathLength = k. The JoinQuery reaches all nodes with pathLength = k. Every node at pathLength = k builds a PL that includes all nodes of NNT(Nk) (here Nk refers to the nodes at path length k from the source node) that are not already covered by the nodes on the current traversed path TP (denoted by N(TP)) or by OLDPL. Steps 2 and 3 of PLBA, described in Section 6, guarantee that the JoinQuery reaches all nodes n such that n ∈ NNT(Nk) ∧ n ∉ N(TP) ∧ n ∉ OLDPL. Since Nk ∪ NNT(Nk) ⊇ Nk+1, all nodes at pathLength k + 1 are reached.
7.2. Upper bound of K
7.2.1. Approximate reachability in PLBA
Lemma 2. A high value for the maximum number of neighbors may provide high reachability, but each JoinQuery packet may have to carry more information, thereby increasing the control overhead. An optimal value for the maximum |PL| should not be very high, and at the same time should provide reasonable reachability. We have found that if the maximum |PL| is restricted to 6, the probability of including all neighbors in the NNT of a node is
1 − [6 × R² × (π/3 − √3/2)] / [π × (2R)²] ≈ 0.91,
which is quite high.
If |PL| is 6, the probability that nodes in the NNT are not included by the PL in PLBA corresponds to the area of the shaded region in Fig. 4. The area of the shaded region ABCA is R² × (π/3 − √3/2). In the worst case, there are six such regions for any node, so the total unreachable area for any node in PLBA with |PL| = 6 is 6 × (area of the shaded region). Hence, the maximum probability that these nodes are unreachable is [6 × R² × (π/3 − √3/2)] / [π × (2R)²] ≈ 0.09.
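The ≈ 0.91 figure follows directly from this geometry (shaded-region area R²(π/3 − √3/2), six such regions, NNT disc of radius 2R) and can be checked numerically:

```python
import math

R = 1.0  # uniform transmission range; the ratio is independent of R

shaded = R**2 * (math.pi / 3 - math.sqrt(3) / 2)    # area of region ABCA
unreachable = 6 * shaded / (math.pi * (2 * R)**2)   # six regions over the 2R disc
reachable = 1 - unreachable

print(round(reachable, 2))  # 0.91
```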
7.2.2. Relaxing K in PL
In this section we analyze the upper bound of K, where K indicates the maximum |PL| required to include all nodes from the NNT. We assume that if the distance DIST between two nodes is R, where R is the uniform transmission range of every node, they are not neighbors, but if the distance is R − ε, they are neighbors; ε is an infinitesimal value. With this assumption, in Step 2 of our algorithm, the worst case maximum number of neighbors included in the PL is 6 (from Section 7.2.1). Consider the following scenarios. If the distance DIST between the node and one of its neighbors is ε, after Step 2 of the algorithm |PL| is 1. If the distance DIST between the node and all its neighbors satisfies ε < DIST < R − ε, the maximum value of K is 6. In the worst case, all neighbors are located on the periphery of the node's transmission range and the maximum |PL| is 6. For example, in Fig. 3, if the input to Step 2 of the algorithm is in the order 2, 3, 4, 5, 6, 7, all neighbors are included in the PL in that order and |PL| is 6. This is because the maximum number of neighbors required to cover the whole transmission area of a node is 6. The value of K needed to include all neighbors in the NNT in a very dense 4 network (as density → ∞) is unbounded. For example, in Fig. 4, to cover all the neighbors' neighbors in the shaded region
4 Here the density of a network indicates the number of nodes per unit area, assuming uniform distribution.
Fig. 3. Worst case scenario of K.
Fig. 4. Unreachable area of a node.
we have to include neighbors in the PL, and their number may be unbounded. Since |PL| cannot be unbounded in practice, we provide an approximation approach for the reachability of nodes in the NNT in order to restrict the value of |PL|.
7.3. Time complexity analysis
In this section, we analyze the time complexity of PLBA by considering the worst case possibility in each step. PLBA being a distributed algorithm, the worst case processing complexity depends on the processing load at each node. Since there are four major steps in the algorithm, and since the worst case input differs for each step, we consider the worst case instances for each step separately. Since the steps of the algorithm are executed sequentially, the time complexity of the algorithm is the highest of the time complexities of the steps. Let M be the number of nodes in the network. In Steps 1a to 1d of the PLBA algorithm, the worst case scenario may occur at the first intermediate node when all nodes except the destination are present in its NNT. In that case the node has to search the complete NNT to check the nodes present in the PL or TP of the received JoinQuery. Since |PL| = K and |TP| = MaxTTL are bounded, the maximum number of iterations is log2(M − 3) × max(|PL|, |TP|) ≈ O(log2(M − 3)) = O(log2 M).
In Step 2, the worst case occurs when all nodes except the destination are neighbors of the source node, which implies that the neighbor list of the source has M − 3 nodes. After Step 2, INL also contains the same number of nodes. Therefore the maximum number of iterations is (M − 3) × log2(M − 3) ≈ O(M × log2 M). The worst case scenario in Step 3 is when all nodes except the destination are present in the NNT of the source and each neighbor has the same degree. As the maximum number of neighbors in the PL after Step 2 is 6, the maximum number of neighbors that can be present in EXL is (M − 1) − (6 × (M − 1)/Avgd), where Avgd is the degree of each neighbor. Therefore, the maximum number of iterations is log2(M − 3) × ((M − 1) − (6 × (M − 1)/Avgd)) ≈ O(M × log2 M). Similarly, Step 4a and Step 4b have maximum iteration counts of |PL| × log2(|N(i)|) × log2(M − 3) and |PL| × log2(|N(i)|) × log2(|N(Src)|), respectively, where N(i) and N(Src) are the neighbor list of a node i and the neighbor list of the source.
Assuming a limit on the maximum path length given by TTL, in a network with M nodes, on average every node on the path can have M/TTL neighbors. In such a scenario, the regular JoinQuery flooding mechanism can result in O(M/TTL × TTL) = O(M) broadcasts of the JoinQuery. Using the PLBA algorithm, at every hop the number of neighbor nodes selected for further forwarding of the JoinQuery is limited to PLmax (= 6 for 91.6% two-hop reachability), and hence the broadcast complexity becomes O(TTL × PLmax) = O(1) with respect to M. The worst case scenarios can make our algorithm more computation intensive (O(M log2 M)) compared to traditional flooding based approaches (O(1)). But since bandwidth is much more precious, and since the power consumed for packet transmissions is greater than that required for computations, PLBA is superior to traditional flooding based approaches.
8. Further improvements
8.1. Multiple source support in a group
Every member of the multicast group is a receiver of data packets from every multicast source in the group. The data packets originated by a multicast source are also transmitted to the other multicast sources in the group, which are members of the group with respect to the transmitting source. The presence of multiple sources does not affect the performance of our protocol; it only increases the number of entries in the CT. This is because the protocol has to keep track of each set of members that are eligible to receive packets from each multicast source.
8.2. Elimination of three phase connectivity
When a member initiates a JoinQuery, it computes a preferred list. If the preferred list contains only one preferred neighbor, the preferred neighbor sets a FLAG in the JoinReply packet. This FLAG indicates that the member need not initiate a JoinConfirm packet, because the member connects to the tree through its only preferred neighbor. In this case, the JoinConfirm packet is initiated by the member's single preferred neighbor. This concept extends to multiple hops if all the intermediate nodes from the member to the current node are connected by a single preferred node chain. If an intermediate node is in the chain of single preferred nodes and a tree node is present in its CT, it does the following.
• Sends a JoinConfirm to the connected tree node.
• Sends a JoinReply to the member node with the FLAG set, indicating that the member node need not initiate a JoinConfirm transmission.
9. Performance study
We evaluated the performance of the proposed protocol through extensive simulation studies. The simulation tool used was GloMoSim [28], developed at the University of California, Los Angeles. The MAC protocol used was IEEE 802.11 DCF, with the free-space propagation model and the radio capture model. The nodes move in a 1000 m × 1000 m area. The mobility model considered was random waypoint. According to this model, a node randomly selects a destination in the physical terrain and moves towards it at a speed uniformly chosen between predefined minimum and maximum speeds. Once it reaches its destination, the node stays there for a certain time period (pause time), and then repeats the process by selecting another destination and moving towards it. The pause time was taken to be 30 s. The initial network topology consists of nodes randomly distributed within the terrain area. The radio transmission range was taken as 250 m and the channel capacity as 2 Mbits/s. The constant bit rate (CBR) model was used for data flow, with a data packet size of 512 bytes and a network traffic load of 10 packets per second. The various parameters are shown in Table 1. The JoinQuery timeout period is 1 s. Data packets are not transmitted when a multicast source is not connected to any of the member nodes, and intermediate nodes do not buffer data packets during link breaks. Each simulation was run for 600 s, and each multicast session ran for 200 s. A multicast session randomly selects disjoint nodes and starts at any time during the first 400 s of the simulation period. All members join the multicast source at the same time and leave the multicast group 200 s after their time of joining. Final results were averaged over more than 100 simulation runs. In all simulation experiments, a single multicast group with one multicast source is considered.
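The two ratio metrics defined in Section 9.1 reduce to simple computations; a sketch with variable names of our own choosing:

```python
def packet_delivery_ratio(packets_received_by_members, group_size, packets_sent):
    """Average fraction of the source's data packets received per member;
    the multicast source itself is not counted as a member (Section 9.1)."""
    avg_received = sum(packets_received_by_members) / (group_size - 1)
    return avg_received / packets_sent

def routing_efficiency(total_received_by_members, total_data_transmissions):
    """Data packets received by members per data packet transmission anywhere
    in the network; can exceed 1 for large groups close to the source."""
    return total_received_by_members / total_data_transmissions
```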
We compare our simulation results with the bandwidth efficient multicast routing protocol (BEMRP) [18] and the on-demand multicast routing protocol (ODMRP) [14]. Like our PLBM protocol, BEMRP is also a tree-based receiver-initiated multicast protocol that uses a hard state approach for maintaining the multicast tree. ODMRP is a sender-initiated mesh-based protocol that uses a soft state approach for maintenance of the multicast tree. We chose ODMRP for our comparison studies because we wanted to compare the performance of our protocol with a mesh-based protocol, especially on the packet delivery ratio metric, and with a protocol that uses a soft state tree maintenance approach. Also, ODMRP, which shows good packet delivery ratio, is one of the best multicast protocols designed for MANETs [16].

Table 1
Simulation parameters

Parameter            Value               Parameter            Value
MAC layer            802.11              Pkt size             512 bytes
Mobility model       Random way point    Session duration     200 s
Radio model          Radio capture       Simulation time      10 min
Propagation model    Free space          Tbeacon              1 s
Channel capacity     2 Mbps              CBR pkt rate         10 pkts/s
Simulation area      1000 × 1000 m       Transmission range   250 m
TCkBcon              4 s                 Broadcast jitter     100 ms

9.1. Metrics
The performance metrics used are as follows.
Data packet delivery ratio: The ratio of the average number of data packets received by the member nodes to the number of data packets transmitted by the multicast source:
Packet delivery ratio = ((Σ Rm)/(N − 1))/Ts,
where N is the group size, Rm is the number of packets received by member m, and Ts is the number of data packets transmitted by the multicast source. Here, the multicast source is not counted as a member.
Control packets transmitted per data packet received: This gives a measure of the control overhead of the protocol. Its value should be kept as low as possible.
Routing efficiency: The ratio of the total number of data packets received by the members to the total number of data packet transmissions in the network. It indicates the bandwidth utilization of the protocol for data transmission. When the multicast group size is small, or when the multicast receiver nodes are located far away from the multicast source, the routing efficiency is less than one. But when the group size is very large, or when the receiver nodes are located very near (say, within one hop of) the multicast source node, the routing efficiency tends to go above one.
9.2. Effect of mobility
We studied the effect of mobility on the performance of our protocol. As shown in Fig.
5, the packet delivery ratio decreases with increasing mobility for all the protocols. But the rates of decrease in packet delivery ratio for PLBM and ODMRP are lower than that for BEMRP. The packet delivery ratios of PLBM and ODMRP are better than that of BEMRP, and PLBM shows a higher packet delivery ratio than ODMRP. The packet delivery ratio decreases with increasing mobility because more link breaks occur, resulting in more multicast tree partitions. When members re-connect after link breaks, more control packets collide with data packets due to increased JoinQuery flooding. PLBM shows a better packet delivery ratio for two reasons: less control overhead due to limited flooding, and quick tree reconfiguration during link breaks using the NNT and CT information. ODMRP also shows a high packet delivery ratio due to the presence of multiple paths between a member node and the multicast source. Fig. 6 reveals that both PLBM and BEMRP show better efficiency (number of data packets received per data packet transmitted) than ODMRP. The better routing efficiencies of PLBM and BEMRP are attributed to their tree structure, in which only a single path exists between members and the multicast source. In ODMRP, multiple paths may exist between a member and the source node, which results in more data packet transmissions along the multiple paths. At high mobility, the efficiency of BEMRP decreases much faster than that of PLBM, as fewer packets reach the member nodes. Fig. 7 shows the ratio of the number of control packets transmitted to the number of data packets received. ODMRP and PLBM maintain almost constant ratios with increasing mobility. In BEMRP, the ratio increases drastically because of its inability to quickly reconfigure the tree during link breaks, which results in a lower packet delivery ratio and more flooding of JoinQuery packets.

Fig. 5. Pkt delivery ratio vs. mobility for a 50-node network.

Fig. 6. Efficiency vs. mobility for a 50-node network.
Fig. 7. Ctrl pkt per data pkt rcvd vs. mobility for a 50-node network.
9.3. Effect of group size

In this simulation experiment, we varied the group size while keeping all other parameters constant. The group size is varied from 5 members to 20 members. The mobility is fixed at 9 m/s, a moderate speed within our speed range of 3–15 m/s. Fig. 8 shows that with increasing group size, the packet delivery ratio increases for both ODMRP and PLBM. The increase for ODMRP is due to the increase in loops among the members, which makes it more immune to mobility. The packet delivery ratio of PLBM is the highest because more tree nodes are present in the NNT of every node, which increases the success ratio of tree reconfiguration during link breaks. BEMRP shows a decrease in packet delivery ratio because, as the number of members in the group increases, flooding in the network also increases, causing more collisions. Since BEMRP does not maintain any local topology information, tree breaks are not repaired locally, resulting in frequent re-flooding of JoinQuery packets.
As shown in Fig. 9, the efficiency increases for all the protocols with increasing group size. PLBM and BEMRP show better efficiency compared to ODMRP due to their
R.S. Sisodia et al. / J. Parallel Distrib. Comput. 64 (2004) 1185 – 1210
Fig. 8. Pkt delivery ratio vs. group size for a 50-node network.
Fig. 10. Ctrl pkt per data pkt rcvd vs. group size for a 50-node network.
Fig. 9. Efficiency vs. group size for a 50-node network.
Fig. 11. Pkt delivery ratio vs. network density for a 50-node network.
tree structures. The increase in efficiency is due to the fact that more members become forwarding nodes and more members are present in the forwarding nodes' capture area. When the group size is large, the efficiency of BEMRP increases less sharply compared to PLBM and ODMRP. PLBM exhibits the highest efficiency of the three protocols because more member nodes connect to already connected tree nodes using NNT information. Hence, the number of members per forwarding node in PLBM is higher than in ODMRP and BEMRP, which do not maintain local topology information. Fig. 10 shows that PLBM has the lowest ratio of control packets transmitted per data packet received. The ratio improves slightly with increasing group size because the increase in the number of data packets received exceeds the increase in the number of control packets transmitted.
9.4. Effect of network density

In this simulation experiment, we have studied the effect of increasing the number of nodes in the network. As shown in Fig. 11, as the network size increases, the packet delivery ratio also increases, because increasing the number of nodes in the network reduces the chances of network partitioning. When a network is partitioned, some members may not be able to connect to the source, which reduces the number of packets received by the members. At very low node density, collision of control packets (JoinQuery, JoinReply, and JoinConfirm) breaks paths between a member and the source. This may isolate the member node for some time and hence reduces the packet delivery ratio. ODMRP and PLBM still perform better than BEMRP: in ODMRP this is due to the presence of multiple paths, while in PLBM it is due to the transmission of control packets (JoinQuery) in unicast mode when target tree nodes are within two hops. PLBM also re-routes JoinConfirm control packets through alternate routes when they are dropped at an intermediate node, thereby preventing isolation of members from the tree. Fig. 12 shows that all three protocols show a decrease in efficiency with increasing distance (path length) between
Fig. 12. Efficiency vs. network density for a 50-node network.
Fig. 13. Ctrl pkt per data pkt rcvd vs. network density for a 50-node network.
members and the multicast source. The efficiency of BEMRP is the lowest. Fig. 13 shows that as the network size increases, PLBM performs far better than ODMRP and BEMRP in terms of control packets transmitted per data packet received. This is due to the ability of PLBM to maintain a high packet delivery ratio while incurring less control overhead.
10. PLBA for unified routing

PLBA was originally proposed for routing unicast traffic in ad hoc wireless networks [26]. In the first part of this paper, we used PLBA for multicast routing; so far, we have used PLBA for routing unicast and multicast traffic separately. In this section we discuss a unified approach for simultaneously routing both unicast and multicast traffic using PLBA. We term this approach Preferred Link Based Unified routing (PLBU). Though the performance of ODMRP for unicast traffic was analyzed in [1,15], the performance of ODMRP in the presence of both unicast and multicast traffic has not yet been evaluated. Similarly, the multicast extensions of several unicast protocols such as AODV [21], ABR [27], CEDAR [24], and ZRP [10] also do not evaluate the performance of the protocols in the presence of both types of traffic. In a realistic scenario, both unicast and multicast traffic are present in the network at the same time. The basic approach is to have separate unicast and multicast routing protocols for routing each type of traffic. But using separate unicast and multicast routing protocols has several disadvantages. The actual traffic in an ad hoc wireless network would consist of simultaneous unicast and multicast sessions. Separate unicast and multicast protocols generate separate control packets; these overheads, which are redundant in most cases, lead to wastage of bandwidth and a decrease in the overall efficiency of the system. Further, a unicast session (for example, a voice session) may need to be converted into a multicast session at any time, which would be very complex if separate unicast and multicast protocols were used. PLBU overcomes these disadvantages by integrating unicast and multicast routing: it handles unicast as well as multicast traffic seamlessly. The advantages of a unified approach are as follows.
• Impact of unicast and multicast traffic on each other can be analyzed and the protocol can be tuned for better performance.
• The need for a separate protocol for each type of traffic is eliminated, which reduces complexity and the memory requirements in resource constrained networks such as MANETs.
• The path established by multicast sessions can be used by unicast sessions, and vice versa. This reduces the control overhead in bandwidth constrained MANETs.
10.1. Key design issues for the unified protocol

To design a unified protocol, the following issues need to be addressed.

Centralized vs. distributed operation: The operation of the unified protocol should be distributed. As nodes in MANETs are not fixed, a node acting as a central coordinator may itself be in motion. If a centralized approach is used, locating the moved coordinator or frequently re-selecting coordinators would result in high control overhead.

Global topology vs. local topology: A unified protocol with global topology information would pro-actively find paths to all the nodes in the network, resulting in low delay during route establishment. But maintaining consistent global topology information at all nodes requires frequent flooding of control messages, which consumes large bandwidth. Maintaining local topology information helps in finding better paths using the available local information, such as link stability, channel quality, link load, or node degree (number of neighbors).
Use of route cache information: A protocol that uses route cache information has less routing control overhead due to the availability of routing information at intermediate nodes. Routing using route cache information requires the information maintained in the route cache to be consistent with the actual network topology. The disadvantage is that if a single cached path is used by many connections, that particular path gets overloaded.

Use of source routing: In the source routing approach, each data packet carries the complete path to be traversed to reach the destination. This eliminates the need to keep session information at intermediate nodes. The disadvantage of source routing is that it increases the data packet size and hence consumes more bandwidth.

As described in [3,13,22,23], distributed routing protocols are more efficient and scalable for MANETs. Simulation studies of PLBM also show that the availability of local topology information reduces control overhead considerably. In the following section we describe our proposed hybrid protocol, which uses a distributed approach and requires only local topology information for routing both unicast and multicast traffic together. Our protocol does not require global topology information; routing at a node takes place using the node's 2-hop neighbor information. PLBU uses route cache information to minimize control overhead. Since source routing is not possible for multicast sessions, to maintain homogeneity PLBU does not use source routing.

11. Description of unified routing using PLBA

PLBU has two phases, a connect phase and a reconfiguration phase. For a unicast session, the source initiates the connect phase to connect to the destination, whereas for a multicast session, receivers invoke the connect phase to connect to the multicast source. A node that initiates a connect phase is termed a connectSource node.
As in PLBM, we use a receiver-initiated approach for multicast tree construction.

11.1. Connect phase

In the connect phase, the connectSource node floods route probe packets in the network and in response gets route reply packets. A connectSource node floods the route probe packet only if all the following conditions are true.
• A path to the destination is not available in the route cache table.
• The destination is not in the node's NNT.
• The destination is not in the buffer table of the node.
Once a node (source or intermediate node) sends a route probe packet, it makes an entry for the packet in its buffer table. If it receives route probe packets from other nodes for the same destination, it buffers those packets. When the node receives a reply for the route probe packet sent, it forwards the reply to the source node of each buffered
route probe packet. Buffering of route probe packets reduces the control overhead in scenarios such as multiple unicast sessions being started for the same destination, or a multicast source also being the destination node for other unicast sessions. The route probe packet contains a flag indicating the session type, i.e., whether the session is a unicast or a multicast session. The identifier of the session is (SrcID, DestID) for unicast sessions and (McastSrcID, GroupID) for multicast sessions. The route probe packet also contains the PL, which indicates the subset of neighbors of the transmitting node that are eligible to further forward the packet, a TTL field used to restrict the scope of the route probe packet, and the traversed path field TP, consisting of the list of nodes through which the route probe packet has traversed so far. The packet also carries a unique sequence number SeqNum, generated by the source of the route probe packet, to prevent loops and to avoid multiple forwarding of the same route probe packet. On receiving a route probe packet, a node checks its eligibility to further forward the packet. If it is not in the PL of the received packet, it discards the packet. Otherwise, if the destination is present in its NNT, it directly unicasts the route probe packet to the destination. For unicast sessions, intermediate nodes cannot use their route cache information to send route reply packets for route probe packets that they receive from other nodes. This prevents the generation of multiple route reply packets by different nodes for a single unicast session. Another reason is that a route probe packet is also initiated when the current path gets broken; in such cases the information in the route cache table of an intermediate node may be incorrect. However, a node can use the route cache for routing packets generated by itself.
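The flooding conditions and the per-node forwarding check just described can be summarized in a short sketch. This is an illustrative model, not the authors' implementation; the class and method names (RouteProbe, Node, should_flood, on_route_probe) are assumptions introduced here.

```python
from dataclasses import dataclass

@dataclass
class RouteProbe:
    seq_num: int          # unique sequence number set by the connectSource
    session_id: tuple     # (SrcID, DestID) or (McastSrcID, GroupID)
    is_multicast: bool    # session-type flag
    pl: set               # preferred links: neighbors eligible to forward
    ttl: int              # scope limit for the probe
    traversed_path: list  # TP field: nodes visited so far

class Node:
    def __init__(self, node_id):
        self.node_id = node_id
        self.route_cache = {}   # dest -> known path
        self.nnt = set()        # node IDs in the 2-hop neighbor topology
        self.buffer_table = {}  # dest -> buffered probes awaiting a reply
        self.seen = set()       # (origin, seq_num) pairs already handled

    def should_flood(self, dest):
        """A connectSource floods only if all three conditions hold."""
        return (dest not in self.route_cache
                and dest not in self.nnt
                and dest not in self.buffer_table)

    def on_route_probe(self, probe, dest):
        """Decide how to handle a received route probe packet."""
        origin = probe.traversed_path[0] if probe.traversed_path else None
        key = (origin, probe.seq_num)
        # discard if not eligible, already seen (loop), or scope exhausted
        if self.node_id not in probe.pl or key in self.seen or probe.ttl <= 0:
            return "discard"
        self.seen.add(key)
        probe.traversed_path.append(self.node_id)
        probe.ttl -= 1
        if dest in self.nnt:
            return "unicast_to_destination"  # target within two hops
        return "forward_to_preferred_links"
```

A duplicate probe (same origin and SeqNum) or a probe whose PL excludes this node is discarded, matching the loop-prevention rule above.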
On receiving a route probe packet, the destination node sends a route reply packet back to the connectSource node. The destination selects the path traversed by the first route probe packet it receives and discards all subsequent route probe packets received through different paths. The route reply packet contains the session type, session identifier, and the path to be traversed. An intermediate node, on receiving a route reply packet, enters the information in its route cache table and sends a route reply packet for each buffered route probe packet. A route cache table entry contains all possible routing information derived from a route. Each entry is identified by the tuple (SrcID, DestID, OriginalSrc, OriginalDest, path). Here, SrcID and DestID are the end points of the path, and OriginalSrc and OriginalDest represent the end points of the actual path from which the current path is derived. This pair is used to delete all the derived paths that become obsolete once the actual path gets broken. Once a route reply packet reaches the connectSource node, the node sends a connectConfirm message and becomes connected to the destination. Since PLBU does not allow intermediate nodes to send route reply packets for unicast sessions using their route cache information, the source node of a unicast session receives only a single route reply packet,
which is from the destination node. For a multicast session, a connectSource (receiver) node may receive multiple route reply packets from different nodes. The node that sends the route reply packet is either a tree node or a node whose route cache table has a route to at least one of the tree nodes. The connectSource node sends the connectConfirm message only for the first route reply packet it receives; subsequent route reply packets are discarded.

11.2. Routing data packets

PLBU does not differentiate between unicast and multicast traffic, and hence it finds routes for both types of sessions in almost the same way. The only difference is that a multicast connect session has to confirm its connectivity to one of the tree nodes, as it may receive route reply packets from many tree nodes. This difference could be eliminated by allowing only the multicast source to reply to the route probe packet, but at the cost of high control overhead. The transmission of data packets is also done in the same manner for both types of traffic. Source routing is not used; the data packet is transmitted in a hop-by-hop manner. A data packet carries the path it has traversed so far. This traversed path information is used by the intermediate nodes to refresh their existing route cache entries, which helps in maintaining an up-to-date view of the path in the route cache tables of intermediate nodes. The cache table is used by multicast sessions to get paths to the multicast source nodes, thereby helping in reducing the control overhead. The routing of a data packet is done as follows. The source node sends the data packet to the next node, whose address is obtained from the route cache table. The reason for not using source routing is to maintain homogeneity while routing data packets for multicast sessions, where source routing is not possible. As mentioned earlier, the data packet carries the path traversed by it so far.
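The hop-by-hop forwarding and cache-refresh behavior described above might be sketched as follows. The function name and the dict-based cache are illustrative assumptions, and the reverse-route refresh assumes symmetric links.

```python
def handle_data_packet(node_id, packet, route_cache):
    """Relay one data packet hop by hop (no source routing).

    `packet` is a dict with 'dest' and 'traversed' (path so far);
    `route_cache` maps a destination to the full path from this node.
    """
    # append this node to the traversed-path field of the packet
    packet["traversed"].append(node_id)
    # the traversed path yields fresh reverse routes toward every upstream
    # node, keeping the intermediate cache consistent (symmetric links)
    for i, upstream in enumerate(packet["traversed"][:-1]):
        route_cache[upstream] = list(reversed(packet["traversed"][i:]))
    # the next hop comes from the local route cache, not from the packet
    path = route_cache.get(packet["dest"])
    return path[1] if path and len(path) > 1 else None
```

For example, a node 3 holding the cached path 3-10-14-15 relays a packet destined for 15 to node 10, while learning a reverse route to the packet's source from its traversed-path field.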
After receiving a data packet, a forwarding node appends its address to the traversed path field of the packet and forwards the packet to the next-hop node, whose ID is available in the route cache table. Data packets belonging to multicast sessions are routed in the same way. When a data packet transmitted by the source arrives at an intermediate node, it is forwarded as a unicast packet. The data packet carries the list of nodes to which it is intended, and all downlink nodes process the data packet in the promiscuous mode. This list gives the intermediate nodes information about the current multicast session and thereby helps in removing inconsistencies in the multicast tree information maintained at the node.

11.3. Maintaining local information of ongoing sessions

The main motivation behind maintaining route cache information is to reduce the control overhead. Lack of route
cache information at intermediate nodes does not affect the functioning of PLBU; it is just an enhancement made to our protocol for reducing control overhead. The key information that PLBU maintains about any session at the intermediate nodes is the two-hop path information towards both the sender and the receiver, which is used to quickly reconfigure a broken path. In what follows, we describe the mechanism by which this two-hop path information is maintained. Each node maintains at most two hops of local topology information, which is used to reconfigure a broken path locally. The one-hop and two-hop nodes towards the source node are termed the uplink and previous-to-uplink nodes, respectively. Similarly, the one-hop and two-hop nodes towards the destination are termed the downlink and second downlink nodes, respectively. This information can be maintained independent of the route cache table. Since the data packet carries the path it has traversed so far, the uplink and previous-to-uplink node information maintained at an intermediate node is updated whenever a data packet is received at the node. The downlink node information is extracted from the route reply packet and is refreshed every time the current downlink node successfully receives a data packet from the current node. When the downlink node transmits a data packet to the second downlink node, the current node refreshes the second downlink node information it maintains, using the broadcast nature of the radio channel. This information, also termed a passive acknowledgment, is very useful when ACK packets are not used by the MAC protocol. The unicast source and the multicast source are viewed as the root nodes of the tree network, and the unicast destination and multicast group members are the leaf nodes of that tree. Packet loss occurs for two reasons: movement of the downlink node to another location, or link overload.
A downlink node is assumed to be overloaded or to have moved away only after the MAC layer of the current node has sent an RTS packet multiple times (in our case, 7 times) to the downlink node and no CTS packet is received in response to those RTS packets.
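The two-hop session state and the failure-detection rule described in this subsection might be modeled as below. The class and method names are illustrative assumptions; only the retry threshold of 7 comes from the text.

```python
MAX_RTS_RETRIES = 7  # RTS attempts before the downlink is presumed lost

class SessionState:
    """Two-hop per-session state at an intermediate node (illustrative)."""

    def __init__(self):
        self.uplink = None           # one hop toward the source
        self.prev_to_uplink = None   # two hops toward the source
        self.downlink = None         # one hop toward the destination
        self.second_downlink = None  # two hops toward the destination
        self.rts_failures = 0

    def on_data_received(self, traversed_path):
        # the traversed path on a data packet refreshes the upstream view
        if len(traversed_path) >= 1:
            self.uplink = traversed_path[-1]
        if len(traversed_path) >= 2:
            self.prev_to_uplink = traversed_path[-2]

    def on_overheard(self, sender, receiver):
        # overhearing the downlink relay the packet onward serves as a
        # passive acknowledgment and reveals the second downlink node
        if sender == self.downlink:
            self.second_downlink = receiver
            self.rts_failures = 0

    def on_rts_unanswered(self):
        # the downlink is assumed overloaded or moved away only after
        # MAX_RTS_RETRIES consecutive RTS transmissions draw no CTS
        self.rts_failures += 1
        return self.rts_failures >= MAX_RTS_RETRIES
```

The passive acknowledgment resets the failure counter, so a downlink is declared lost only after seven consecutive unanswered RTS attempts.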
11.4. Reconfiguration phase

Dynamic movement of nodes in MANETs causes frequent link breaks. These link breaks, if not repaired locally, result in flooding of route probe packets by the connectSource nodes for re-establishing the path. Local route repair can be performed if local topology information is available at each node. The usual approach in MANETs is to inform the source node(s) about the link break; the source then floods the network to find an alternate path. Other approaches are to bypass the broken link using local one-hop topology information regarding nearby alternate nodes, or to use a local broadcast route maintenance scheme. Since our protocol maintains two-hop local topology information, the broken link is bypassed efficiently.
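A minimal sketch of such a two-hop bypass, under the assumption that each node knows its neighbors' neighbor sets (the function and parameter names are ours, not the paper's):

```python
def bypass_broken_link(two_hop_topology, broken, target):
    """Pick a neighbor, other than the broken downlink, that can still
    reach `target` (the second downlink), using two-hop topology.

    `two_hop_topology` maps each neighbor of the current node to the set
    of that neighbor's own neighbors, as known locally.
    """
    for neighbor, reachable in two_hop_topology.items():
        if neighbor != broken and target in reachable:
            return neighbor  # relay via this node, bypassing the break
    return None  # no local repair possible; fall back to a connect phase
```

If no common neighbor reaches the second downlink, the repair fails locally and the connectSource must re-initiate the connect phase, as described above.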
[Fig. 14 depicts a topology with multicast source McastSrc (node 18), group members M1, M2, and M3, and three unicast sessions S1-D1, S2-D2, and S3-D3. For the path S1-D1 = 2-3-10-14-15, the cached (Src-Dest) pairs derived at the nodes on the path are 2-3, 2-10, 2-14, 2-15; 3-2, 3-10, 3-14, 3-15; 10-2, 10-3, 10-14, 10-15; 14-2, 14-3, 14-10, 14-15; and 15-2, 15-3, 15-10, 15-14.]
Fig. 14. An example of unified routing.
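The derived-path cache entries listed in Fig. 14 can be generated mechanically from one discovered path. The sketch below is illustrative (the function names are ours, not the paper's) and assumes bidirectional links, so reversed segments are cached as well.

```python
def derived_entries(path):
    """All (src, dest) cache entries derivable from one discovered path,
    each tagged with the original end points for bulk invalidation."""
    entries = {}
    for i, a in enumerate(path):
        for j, b in enumerate(path):
            if i == j:
                continue
            # forward sub-path, or its reversal for the opposite direction
            seg = path[i:j + 1] if i < j else list(reversed(path[j:i + 1]))
            entries[(a, b)] = {"path": seg, "original": (path[0], path[-1])}
    return entries

def invalidate(cache, original):
    """Purge every entry derived from a broken original path, using the
    (OriginalSrc, OriginalDest) tag kept in each cache entry."""
    return {k: v for k, v in cache.items() if v["original"] != original}
```

Applied to the path 2-3-10-14-15 of session S1-D1, this yields exactly the twenty derived pairs shown in the figure, and one call to `invalidate` removes all of them when the original path breaks.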
11.5. Example

Fig. 14 shows how unified unicast and multicast traffic routing is done. In this example, the multicast group has three members, M1, M2, and M3. McastSrc (node 18) is the multicast source. The multicast session is established in the same fashion as described in the multicast example in Section 2. The dark dotted lines represent the multicast tree created by PLBU. Simultaneously, three unicast sessions are started between node 2 and node 15 (S1-D1), node 1 and node 14 (S2-D2), and node 6 and node 10 (S3-D3). The first unicast session is established between S1 and D1, i.e., nodes 2 and 15. Here the route probe packets are forwarded according to the PLBU algorithm. The path chosen is 2-3-10-14-15. The paths derived from this path are shown in Fig. 14. As PLBU functions in the promiscuous mode, node 1 also gets to know about the path between S1 and D1 and therefore derives all other possible paths. Using this cached information, the route for the second unicast session (between nodes S2 and D2) is established without any route probe packet flooding. In the case of the third unicast session, source S3 does not have any cached path to the destination D3, and hence it floods the route probe packet in the network. The path selected is 6-5-4-10. In our protocol, only unicast data sessions use route cache information, while multicast sessions do not. The promiscuous mode provides nodes with more routing information, but at the cost of higher processing overhead. A multicast session can also make use of cached information as follows.
• If any of the cached paths contains a route to the multicast source, then that route can be used, and hence a JoinConfirm packet is directly sent to the source.
• If control packets such as route reply packets carry information regarding intermediate nodes that are members of the multicast group, or information about intermediate tree nodes, then this information, after being recorded in the route cache, can be used to connect to one such node without flooding control packets. A member node could thus directly connect to an intermediate tree node.
Unicast sessions can obtain more routing information through multicast packets.
• Information can be derived from multicast control packets such as route reply and connectConfirm. For example, in Fig. 14, when multicast source node M1 sends a route reply packet, the path information (from the multicast receiver node to node M1) carried by the packet is retrieved and stored in the cache table, which could later be used for routing unicast sessions originating from the node.
• A multicast data packet carries the path traversed by it so far. Nodes receiving this packet (including those in the promiscuous mode) retrieve the traversed path information and store it in their route cache. Such information also aids unicast sessions.

12. Performance study
12.1. Simulation environment

We evaluated the performance of our unified unicast and multicast routing protocol (PLBU) through various simulation studies. The simulation environment was the same as that used for the experiments described in Section 9. The route probe packet timeout period was set to 1 s. Multicast sessions do not use route cache information, while unicast sessions are allowed to use it. The reason for preventing multicast sessions from using route cache information is that, if such information is used, the point at which a new multicast receiver node joins the multicast tree may not be optimal. Further, since our PLBA is efficient, there is no pressing need to further optimize the scheme by using the route cache. Data packets are not transmitted when a multicast source is not connected to any of the member nodes. Similarly, for unicast sessions, data packets are not transmitted until the source gets connected to the destination. Intermediate nodes do not buffer data packets during link breaks. The simulation is run for 600 s, and each multicast session and unicast session runs for 200 s. Both types of sessions start at any time during the first 400 s of the simulation period, with the source and receiver nodes of the sessions selected randomly. All members join the multicast group at the same time and leave the group 200 s after their time of joining. Final results are averaged over more than 100 simulation iterations. For all simulations, a single multicast group with one multicast source is considered. In the graphs discussed below, PLB_Multicast denotes the performance curves for the multicast sessions, and PLB_Unicast represents the performance curves for the unicast sessions.
Fig. 15. Pkt delivery ratio vs. mobility at low load.
12.2. Effect of mobility

Fig. 15 shows the results for varying mobility at low load. Here the number of unicast sessions is 4, and one multicast session with a group size of 10 is initiated. With increasing mobility, the packet delivery ratio for both unicast and multicast sessions gradually decreases. This is because of the occurrence of more path breaks at high mobility, which leads to more packets being dropped at the intermediate nodes. Fig. 16 shows the variation in packet delivery ratio with mobility at high load. Here the number of unicast sessions is increased to 13. The packet delivery ratio decreases with increasing mobility, but unicast sessions suffer less than multicast sessions. This is due to two reasons. First, the unicast sessions use the route cache, and at high load the chances that the required route is already available in the route cache are very high. Second, due to the high load, control overhead increases because of more flooding by the increased number of sessions; a link break or session failure affects only one destination in a unicast session, while for a multicast session multiple destinations (multicast group members) may be affected. At low load the packet delivery ratio decreases by only 3–4 percent for both unicast and multicast sessions, while at high load the unicast packet delivery ratio decreases by 5–6 percent and the multicast packet delivery ratio decreases by 20–25 percent. The variation in control overhead with varying mobility at low and high load is shown in Figs. 17 and 18, respectively. Control overhead refers to the total number of control packets transmitted during the simulation period of 200 s. As we have seen, at low load both unicast and multicast sessions show an increase in control overhead. This is because of the increase in the number of path breaks due to high mobility,
Fig. 16. Pkt delivery ratio vs. mobility at high load.
We evaluated our scheme for varying mobility and for varying load. Simulation experiments were also performed by varying the group size and by varying the number of unicast sessions.
Fig. 17. Control overhead vs. mobility at low load.
which results in increased flooding by the unicast sources and multicast members. At high load, when the mobility is low (Fig. 18), the combined control overhead of the 13 unicast sessions is less than the control overhead of the single multicast session (with group size 10). This is due to the increased success of unicast sessions in finding routes directly using information from the route cache; this is not the case with the multicast session, as it does not use route cache information for routing. But when mobility increases, route cache entries become stale very quickly, and hence the combined unicast control overhead becomes higher than the control overhead of the multicast session. The variation in the ratio of control packets transmitted per data packet received for low and high load conditions is shown in Figs. 19 and 20, respectively. At low load, when the mobility is low, the ratio is higher for multicast traffic than for unicast traffic. But at high mobility, the ratio for unicast sessions becomes higher than that for the multicast session. This indicates the advantage of establishing a single multicast session rather than multiple unicast sessions. At high load, the ratio for the multicast session is higher than that for the unicast sessions under all mobility conditions.
Fig. 20. Ctrl pkt per data pkt rcvd vs. mobility at high load.
Fig. 18. Control overhead vs. mobility at high load.
Fig. 19. Ctrl pkt per data pkt rcvd vs. mobility at low load.
Fig. 21. Efficiency vs. mobility at low load.
The main reason behind this is the decrease in the packet delivery ratio of the multicast session at high load. This decrease in the number of packets received by the multicast members affects the ratio for the multicast session, which increases from 0.08–0.13 at low load to 0.25–0.40 at high load. The increase in the ratio of control packets transmitted per data packet received is smaller for unicast traffic than for multicast traffic. From these results we can conclude that at high load, unicast sessions are more efficient than multicast sessions. The efficiency variation for unicast and multicast traffic with varying mobility is shown in Figs. 21 and 22, for low load and high load conditions, respectively. Here, efficiency is defined as the number of data packets received per data packet transmitted in the network. As shown in both figures, the efficiency of the multicast session is higher than that of the unicast sessions. This is because, for the multicast session, there are 9 receiver nodes that receive data packets, and so the number of data packets received is in most cases higher than the number of data packets transmitted by the single multicast source node. But in the case of unicast sessions, a single transmitted data packet is received by a
0.8 0.7 0.6 0.5 0.4 0.3
2
4
6
8 10 Mobility (m/s)
12
14
16
Fig. 22. Efficiency vs. mobility at high load.
single receiver node only, and so the number of data packets received is never greater than the number of data packets transmitted.
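The quantities plotted in these graphs are simple ratios over per-session counters. The following sketch shows how they relate; the function and counter names are illustrative (the paper's GloMoSim instrumentation is not shown):

```python
def session_metrics(pkts_originated, n_receivers, pkts_received,
                    pkts_transmitted, ctrl_transmitted):
    """Per-session evaluation metrics, as defined in the text.

    pkts_transmitted counts every per-hop data transmission in the
    network; pkts_received counts deliveries at receiver nodes
    (9 receivers for a 10-member multicast group, 1 for unicast).
    """
    pdr = pkts_received / (pkts_originated * n_receivers)
    ctrl_per_data = ctrl_transmitted / pkts_received
    # Efficiency can exceed 1 for multicast: one tree-link transmission
    # is counted as a reception at each downstream member it reaches.
    efficiency = pkts_received / pkts_transmitted
    return pdr, ctrl_per_data, efficiency

# Multicast session, 9 receivers: receptions can outnumber transmissions.
_, _, eff_m = session_metrics(1000, 9, 8100, 7000, 3000)
# Unicast session, one receiver over a multi-hop path: efficiency < 1.
_, _, eff_u = session_metrics(1000, 1, 950, 3200, 250)
```

The hypothetical counter values merely reproduce the qualitative behavior discussed above: multicast efficiency above 1, unicast efficiency below 1.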
Fig. 24. Pkt delivery ratio vs. unicast load, with group size 20.
Fig. 25. Control overhead vs. unicast load, with group size 10.
12.3. Effect of unicast load

In this set of simulation experiments, we kept the multicast group size fixed and evaluated the performance of our unified protocol by varying the unicast load. The node mobility range was fixed at 3–15 m/s. Figs. 23 and 24 show the variation in packet delivery ratio with varying number of unicast sessions for multicast group sizes of 10 and 20 members, respectively. As the unicast load increases, the packet delivery ratio of both unicast and multicast sessions decreases. This is due to the increased flooding caused by the growing number of unicast sessions. This flooding causes more collisions, resulting in increased control overhead, which in turn causes more collisions with data packets. The increase in the number of unicast sessions also increases the number of data packets, which collide with other data packets as well, thereby further reducing the packet delivery ratio. Figs. 25 and 26 show the variation in control overhead with increasing number of unicast sessions for group sizes of 10 and 20 nodes (9 and 19 receiver member nodes), respectively. It can be seen from both figures that the control overhead for the unicast sessions is less than that of the
Fig. 23. Pkt delivery ratio vs. unicast load, with group size 10.
Fig. 26. Control overhead vs. unicast load, with group size 20.
multicast session. For example, in Fig. 25, the control overhead for the multicast session with 9 member nodes is higher than the combined control overhead of the 13 unicast sessions. This is due to the availability of a greater number of routes in the route cache, which reduces flooding for unicast sessions. Since multicast sessions do not use route cache information, flooding does not get reduced in the case of the lone multicast session. Also, as expected, the control overhead of the multicast session with a group size of 20 is higher than that of the multicast session with a group size of 10. From Fig. 26 it can be seen that when the network load is high (10–13 unicast sessions), the rate of increase in control overhead for unicast sessions becomes high. This is due to the increase in the number of collisions of multicast control packets with unicast control packets, which results in route request failures. Figs. 27 and 28 show the variation in the ratio of control packets transmitted per data packet received with increasing number of unicast sessions, for multicast group sizes of 10 and 20 members, respectively. The unicast sessions show a better ratio compared to the multicast sessions. This is due to the much better packet delivery ratio and lower control
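The asymmetry described here can be illustrated with a toy route-discovery routine: a unicast source consults its route cache and floods only on a miss, while a receiver-initiated multicast join always floods a JoinQuery. This is a simplified sketch under assumed data structures, not the protocol's actual implementation:

```python
flood_count = 0  # counts network-wide query floods

def flood(src, dst):
    """Stand-in for a JoinQuery-style network-wide flood."""
    global flood_count
    flood_count += 1
    return [src, dst]  # pretend a path was discovered

route_cache = {}  # dest -> cached path (hypothetical unicast cache)

def unicast_route(src, dst):
    # A cache hit avoids the flood entirely, cutting control overhead.
    if dst not in route_cache:
        route_cache[dst] = flood(src, dst)
    return route_cache[dst]

def multicast_join(member, source):
    # No cache is consulted: every (re)join floods toward the source.
    return flood(member, source)

unicast_route('u1', 'd'); unicast_route('u2', 'd')    # second call: cache hit
multicast_join('m1', 's'); multicast_join('m2', 's')  # both calls flood
```

Only three of the four discoveries flood the network, mirroring why the lone multicast session can generate more control traffic than many unicast sessions combined.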
Fig. 27. Ctrl pkt per data pkt rcvd vs. unicast load, with group size 10.
Fig. 29. Efficiency vs. unicast load, with group size 10.
Fig. 28. Ctrl pkt per data pkt rcvd vs. unicast load, with group size 20.
Fig. 30. Efficiency vs. unicast load, with group size 20.
overhead for unicast sessions. The better packet delivery ratio is due to the fact that, in the case of a unicast session, a path break results in only one destination node being unable to receive data packets, while in the case of a multicast session multiple members may be affected. The use of route cache information for unicast routing, which reduces flooding, also contributes to the lower ratio of the unicast sessions. As the group size increases, the multicast sessions show improved ratios. Figs. 29 and 30 show the variation in the efficiency of unicast and multicast traffic with varying load, for group sizes of 10 and 20 nodes, respectively. As defined earlier, efficiency is the number of data packets received per data packet transmitted in the network. As shown in the graphs, the efficiencies of both unicast and multicast traffic remain almost constant with increasing unicast traffic load. The multicast session shows better efficiency because many receivers exist for a single multicast session, while for a unicast session a packet has to travel multiple hops to reach the single destination node.

12.4. Effect of multicast load
In this set of experiments, we kept the number of unicast sessions constant and studied the performance of our protocol by varying the multicast group size. A moderate mobility of 9 m/s was used. We evaluated the performance of our protocol by fixing the number of unicast sessions at 7 and 13. Figs. 31 and 32 show the variation in packet delivery ratio with varying group size, for networks with 7 and 13 unicast sessions, respectively. From these figures it can be seen that the packet delivery ratio of the multicast sessions decreases considerably with increasing group size. Further, the drop in packet delivery ratio is more significant when the number of unicast sessions is increased to 13 (Fig. 32) than when the number of unicast sessions is 7 (Fig. 31). In the case of the unicast sessions, however, there is very little variation in the packet delivery ratio. From these results we can conclude that multicast sessions are more affected by increasing group size, while unicast sessions remain more or less unaffected.
Fig. 31. Pkt delivery ratio vs. multicast load, with 7 unicast sessions.
Fig. 33. Control overhead vs. multicast load, with 7 unicast sessions.
Fig. 32. Pkt delivery ratio vs. multicast load, with 13 unicast sessions.
Fig. 34. Control overhead vs. multicast load, with 13 unicast sessions.
Figs. 33 and 34 show the variation in control overhead with varying group size, for the number of unicast sessions fixed at 7 and 13, respectively. As evident from the graphs, the unicast control overhead remains unaffected by varying group size when the number of unicast sessions is 7, while it increases slightly when there are 13 unicast sessions. The increase in control overhead in the 13-unicast-session case is due to the increase in route request failures that occur because of packet collisions. These collisions occur due to the increased number of control packet transmissions. On the other hand, the multicast control overhead increases proportionally with increasing group size. This is due to the increase in the number of receivers with the increase in group size, which in turn increases flooding in the network. This increased flooding occurs since we have used a receiver-initiated approach in which members flood JoinQuery packets in order to connect to the multicast source. Figs. 35 and 36 show the variation in the ratio of the number of control packets transmitted to the number of data packets received, with varying group size, for 7 and 13 unicast sessions, respectively. As the group size increases, both the number of control packets transmitted and the number of data packets received increase. The increase in control overhead is due to increased flooding, while the increase in the number of data packets received is due to the increase in the number of receivers. The ratio for unicast traffic increases slightly when the number of unicast sessions is 7, while it remains almost constant in the 13-unicast-session case. The reason behind this is that when the group size increases, in the 13-unicast-session case, the increase in the number of packets received is small compared to the increase in control packets transmitted; when the number of unicast sessions is 7, the two increases are more or less the same. Figs. 37 and 38 show the variation in the efficiency of the multicast and unicast sessions with varying group size, for 7 and 13 unicast sessions, respectively. As shown in the figures, the efficiency of the multicast session improves steadily with increasing group size, while that of the unicast sessions remains almost constant. The main reason is the increase in the number of intermediate tree nodes for the multicast session. The other reason for this increasing efficiency is that more data packets are received due to the increase in the number of multicast members; also, since path breaks are locally repaired very quickly, packet loss is very low. The efficiency of unicast transmission is independent of the increase in multicast group size.

Fig. 35. Ctrl pkt per data pkt rcvd vs. multicast load, with 7 unicast sessions.
Fig. 36. Ctrl pkt per data pkt rcvd vs. multicast load, with 13 unicast sessions.
Fig. 37. Efficiency vs. multicast load, with 7 unicast sessions.
Fig. 38. Efficiency vs. multicast load, with 13 unicast sessions.

13. Conclusion
We have proposed an efficient receiver-initiated tree-based multicast protocol called the Preferred Link Based Multicast protocol (PLBM), which uses two-hop local topology information for efficient multicast routing. During forwarding of JoinQuery packets, our protocol uses a preferred link-based approach in which a subset of neighbors is selected. The selection is made using our Preferred Link Based Algorithm (PLBA). This algorithm selects disjoint neighbors that have higher neighbor degrees and discards overlapping nodes. The two-hop topology is also used during tree maintenance. PLBA can be easily integrated with any existing multicast protocol. The JoinQuery forwarding mechanism is very important in multicast protocols for MANETs, and PLBA can enhance the performance of almost any existing multicast protocol if it is used for the JoinQuery forwarding phase. We evaluated the performance of our protocol and compared it with two existing protocols, ODMRP and BEMRP. Simulation results show that our protocol performs better than the other two protocols in terms of both packet delivery ratio and control overhead. Also, when the mobility is high or when the multicast group size is large, our protocol is more efficient in terms of bandwidth consumed by control packets, compared to the other two protocols. Another advantage of PLBM is that it provides better flexibility and adaptability through the preferred link concept. The criterion for selecting the preferred list need not be restricted to the neighbor degree alone; any other node or link characteristic (such as link stability, link load, residual bandwidth, and link delay) can be used for computing the preferred links. We have also proposed a unified mechanism, the PLBU protocol, for routing unicast and multicast traffic together in a homogeneous manner. The unified approach makes minimum differentiation between unicast and multicast traffic.
Through extensive simulation studies we have analyzed the behavior of our unified protocol in the presence of simultaneous multicast and unicast sessions.
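The neighbor-selection idea summarized above can be sketched as a greedy pass over the two-hop topology: neighbors are considered in decreasing neighbor-degree order, and a neighbor is discarded when the nodes already selected fully cover its neighbor set. This is an illustrative reconstruction of the selection criterion, not the exact PLBA; the table format and tie-breaking are assumptions:

```python
def preferred_links(neighbors, nbr_table, k=4):
    """Pick up to k preferred neighbors for JoinQuery forwarding.

    nbr_table maps each neighbor to the set of its own neighbors
    (the two-hop topology maintained at the node). k = 4 matches
    the preferred-list size K used in the simulations.
    """
    covered = set()
    chosen = []
    # Higher neighbor degree first, per the selection criterion above.
    for n in sorted(neighbors, key=lambda x: len(nbr_table[x]), reverse=True):
        if not (nbr_table[n] - covered):
            continue  # fully overlapped by chosen neighbors: discard
        chosen.append(n)
        covered |= nbr_table[n]
        if len(chosen) == k:
            break
    return chosen

two_hop = {'a': {'x', 'y', 'z'}, 'b': {'x', 'y'}, 'c': {'w'}}
# 'b' adds no new coverage once 'a' is chosen, so it is discarded.
print(preferred_links(['a', 'b', 'c'], two_hop))  # prints ['a', 'c']
```

Because only the chosen subset forwards the JoinQuery, overlapping neighbors are pruned and the flood is localized, which is the source of the control-overhead savings reported above.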
Acknowledgements

This work was supported by the Department of Science and Technology, New Delhi, India. The authors wish to thank the anonymous reviewers for their comments and suggestions that helped improve the quality of this paper.
References

[1] S.H. Bae, S.J. Lee, M. Gerla, Unicast performance analysis of the ODMRP in a mobile Ad hoc network testbed, in: Proceedings of IEEE ICCCN 2000, October 2000, pp. 148–153.
[2] E. Bommaiah, M. Liu, A. McAuley, R. Talpade, AMRoute: Ad hoc multicast routing protocol, Internet-Draft, draft-talpade-manet-amroute-00.txt, August 1998.
[3] J. Broch, D.A. Maltz, D.B. Johnson, Y.C. Hu, J. Jetcheva, A performance comparison of multi-hop wireless Ad hoc network routing protocols, in: Proceedings of MOBICOM'98, 1998.
[4] C.C. Chiang, M. Gerla, L. Zhang, Forwarding group multicasting protocol for multi-hop, mobile wireless networks, ACM-Baltzer J. Cluster Comput. 1 (2) (1998) 187–196 (Special Issue on Mobile Computing).
[5] C.C. Chiang, H.K. Wu, W. Liu, M. Gerla, Routing in clustered multihop, mobile wireless networks with fading channel, in: Proceedings of IEEE SICON'97, April 1997, pp. 197–211.
[6] V. Devarapalli, A.A. Selcuk, D. Sidhu, MZR: A multicast protocol for mobile Ad hoc networks, Internet-Draft, draft-vijay-manet-mzr-01.txt, July 2001.
[7] R. Dube, C.D. Rais, K.Y. Wang, S.K. Tripathi, Signal stability based adaptive routing for Ad hoc mobile networks, IEEE Personal Comm. (1997) 36–45.
[8] C.L. Fullmer, J.J. Garcia-Luna-Aceves, Solutions to hidden terminal problems in wireless networks, in: Proceedings of ACM SIGCOMM, September 1997, pp. 39–49.
[9] J.J. Garcia-Luna-Aceves, E.L. Madruga, The core-assisted mesh protocol, IEEE J. Selected Areas in Comm. 17 (8) (1999) 1380–1394.
[10] Z.J. Haas, The routing algorithm for the reconfigurable wireless networks, in: Proceedings of ICUPC'97, October 1997.
[11] L. Ji, M.S. Corson, Differential destination multicast (DDM) specification, Internet-Draft, draft-ietf-manet-ddm-00.txt, July 2000.
[12] D.B. Johnson, D.A. Maltz, Dynamic source routing in Ad hoc wireless networks, Mobile Computing, Kluwer Academic Publishers, Dordrecht, The Netherlands, 1996, pp. 153–181 (Chapter 5).
[13] S.J. Lee, M. Gerla, C.K. Toh, A simulation study of table-driven and on-demand routing protocols for mobile Ad hoc networks, IEEE Network Magazine 13 (4) (1999) 48–54.
[14] S.J. Lee, M. Gerla, C.C. Chiang, On demand multicast routing protocol, in: Proceedings of IEEE WCNC'99, September 1999, pp. 1298–1302.
[15] S.J. Lee, W. Su, M. Gerla, Exploiting the unicast functionality of the on-demand multicast routing protocol, in: Proceedings of IEEE WCNC'2000, September 2000.
[16] S.J. Lee, W. Su, J. Hsu, M. Gerla, R. Bagrodia, A performance comparison study of Ad hoc wireless multicast protocols, in: Proceedings of IEEE INFOCOM'00, March 2000, pp. 565–574.
[17] S. Murthy, J.J. Garcia-Luna-Aceves, An efficient routing protocol for wireless networks, ACM Mobile Networks Appl. J. 1 (2) (1996) 183–197 (Special Issue on Routing in Mobile Communication Networks).
[18] T. Ozaki, J.B. Kim, T. Suda, Bandwidth efficient multicast routing protocol for Ad hoc networks, in: Proceedings of IEEE ICCCN'99, October 1999, pp. 10–17.
[19] V.D. Park, M.S. Corson, A highly adaptive distributed routing algorithm for mobile wireless networks, in: Proceedings of IEEE INFOCOM'97, April 1997, pp. 1405–1413.
[20] C.E. Perkins, P. Bhagwat, Highly dynamic destination-sequenced distance-vector routing (DSDV) for mobile computers, in: Proceedings of ACM SIGCOMM'94, 1994, pp. 234–244.
[21] C.E. Perkins, E.M. Royer, Ad hoc on-demand distance vector routing, in: Proceedings of IEEE Workshop on Mobile Computing Systems and Applications, February 1999, pp. 90–100.
[22] E.M. Royer, C.K. Toh, A review of current routing protocols for Ad hoc mobile wireless networks, IEEE Personal Communications Magazine (1999) 46–55.
[23] R. Samir, C. Robert, Y. Jiangtao, S. Rimli, Comparative performance evaluation of routing protocols for mobile Ad hoc networks, in: Proceedings of IEEE IC3N'98, October 1998.
[24] P. Sinha, R. Sivakumar, V. Bharghavan, CEDAR: A core extraction distributed Ad hoc routing algorithm, IEEE J. Selected Areas Comm. 17 (8) (1999) 1454–1466.
[25] P. Sinha, R. Sivakumar, V. Bharghavan, MCEDAR: Multicast core extraction distributed Ad hoc routing, in: Proceedings of IEEE WCNC'99, September 1999, pp. 1313–1317.
[26] R.S. Sisodia, B.S. Manoj, C. Siva Ram Murthy, A preferred link based routing protocol for wireless Ad hoc networks, J. Comm. Networks 4 (1) (2002) 14–21.
[27] C.K. Toh, Associativity-based routing for Ad hoc mobile networks, Wireless Personal Comm. 4 (2) (1997) 1–36.
[28] UCLA Computer Science Department, Parallel Computing Laboratory and Wireless Adaptive Mobility Laboratory, GloMoSim: A Scalable Simulation Environment for Wireless and Wired Network Systems.
[29] C.W. Wu, Y.C. Tay, C.K. Toh, Ad hoc multicast routing protocol utilizing increasing id-numberS (AMRIS) functional specification, Internet-Draft, draft-ietf-manet-amris-spec-00.txt, November 1998.
Rajendra Singh Sisodia obtained his B.Tech. degree in Computer Science and Engineering in 1999 from Barkatullah University, Bhopal, and his M.S. (by research) degree in Computer Science and Engineering from the Indian Institute of Technology (IIT), Madras, India, in 2002. He is currently a software engineer with Philips Research Lab, Bangalore. His research interests include Ad hoc wireless networks, optical networks, and software engineering.

I. Karthigeyan obtained his B.Tech. degree in Computer Science and Engineering from the University of Madras, India, in 2000, and his M.S. (by research) degree in Computer Science and Engineering from the Indian Institute of Technology (IIT), Madras, India, in 2004. He is currently working as a Software Engineer at Lucent Technologies, INS India Development Center, Bangalore, India. His research interests include wireless networks and optical networks.

B.S. Manoj received his Ph.D. degree in Computer Science and Engineering from the Indian Institute of Technology, Madras, India, in July 2004. He worked as a Senior Engineer with Banyan Networks Pvt. Ltd., Chennai, India, from 1998 to 2000, where his primary responsibility included the design and development of protocols for real-time traffic support in data networks. He had been an Infosys doctoral student in the Department of Computer Science and Engineering at the Indian Institute of Technology Madras, India. He is a recipient of the Indian Science Congress Association Young Scientist Award for the year 2003. Since January 2004, he has been a Project Officer at the Department of Computer Science and Engineering, Indian Institute of Technology Madras, India. His current research interests include Ad hoc wireless networks, next generation wireless architectures, and wireless sensor networks.

C. Siva Ram Murthy received the B.Tech.
degree in Electronics and Communications Engineering from Regional Engineering College (now National Institute of Technology), Warangal, India, in 1982, the M.Tech.
degree in Computer Engineering from the Indian Institute of Technology (IIT), Kharagpur, India, in 1984, and the Ph.D. degree in Computer Science from the Indian Institute of Science, Bangalore, India, in 1988. He joined the Department of Computer Science and Engineering, IIT, Madras, as a Lecturer in September 1988, and became an Assistant Professor in August 1989 and an Associate Professor in May 1995. He has been a Professor with the same department since September 2000. He has held visiting positions at the German National Research Centre for Information Technology (GMD), Bonn, Germany, the University of Stuttgart, Germany, the University of Freiburg, Germany, the Swiss Federal Institute of Technology (EPFL), Switzerland, and the University of Washington, Seattle, USA. He has to his credit over 100 research papers in international journals and over 80 international conference publications. He is the co-author of the textbooks Parallel Computers: Architecture and Programming (Prentice-Hall of India, New Delhi, India), New Parallel Algorithms for Direct Solution of Linear Equations (John Wiley & Sons, Inc., New York, USA),
Resource Management in Real-time Systems and Networks, (MIT Press, Cambridge, Massachusetts, USA), WDM Optical Networks: Concepts, Design, and Algorithms, (Prentice Hall, Upper Saddle River, New Jersey, USA), and Ad Hoc Wireless Networks: Architectures and Protocols, (Prentice Hall, Upper Saddle River, New Jersey, USA). His research interests include parallel and distributed computing, real-time systems, lightwave networks, and wireless networks. Dr. Murthy is a recipient of the Sheshgiri Kaikini Medal for the Best Ph.D. Thesis from the Indian Institute of Science, the Indian National Science Academy (INSA) Medal for Young Scientists, and Dr. Vikram Sarabhai Research Award for his scientific contributions and achievements in the fields of Electronics, Informatics, Telematics & Automation. He is a co-recipient of Best Paper Awards from the 1st Inter Research Institute Student Seminar (IRISS) in Computer Science, the 5th IEEE International Workshop on Parallel and Distributed Real-Time Systems (WPDRTS), and the 6th International Conference on High Performance Computing (HiPC). He is a Fellow of the Indian National Academy of Engineering.