Article

MFVL HCCA: A Modified Fast-Vegas-LIA Hybrid Congestion Control Algorithm for MPTCP Traffic Flows in Multihomed Smart Gas IoT Networks

Mumajjed Ul Mudassir 1,2,* and M. Iram Baig 2

1 Electrical and Computer Engineering Department, Air University, Islamabad 44000, Pakistan
2 Electrical Engineering Department, University of Engineering and Technology, Taxila 47050, Pakistan; iram.baig@uettaxila.edu.pk
* Correspondence: mumajjed@mail.au.edu.pk

Abstract: Multihomed smart gas meters are Internet of Things (IoT) devices that transmit information wirelessly to a cloud or remote database via multiple network paths. The information is utilized by the smart gas grid for accurate load forecasting and several other important tasks. With the rapid growth in such smart IoT networks and data rates, reliable transport layer protocols with efficient congestion control algorithms are required. The small Transmission Control Protocol/Internet Protocol (TCP/IP) stacks designed for IoT devices still lack efficient congestion control schemes. Multipath transmission control protocol (MPTCP) based congestion control algorithms are among the recent research topics. Many coupled and uncoupled congestion control algorithms have been proposed by researchers. The default congestion control algorithm for MPTCP is coupled congestion control by using the linked-increases algorithm (LIA). In battery powered smart meters, packet retransmissions consume extra power and low goodput results in poor system performance. In this study, we propose a modified Fast-Vegas-LIA hybrid congestion control algorithm (MFVL HCCA) for MPTCP by considering the requirements of a smart gas grid. Our novel algorithm operates in uncoupled congestion control mode as long as there is no shared bottleneck and switches to coupled congestion control mode otherwise. We have presented the details of our proposed model and compared the simulation results with the default coupled congestion control for MPTCP. Our proposed algorithm in uncoupled mode shows a decrease in packet loss up to 50% and an increase in average goodput up to 30%.

Keywords: multipath TCP; Internet of Things; congestion control; smart gas meter; smart gas network

Citation: Mudassir, M.U.; Baig, M.I. MFVL HCCA: A Modified Fast-Vegas-LIA Hybrid Congestion Control Algorithm for MPTCP Traffic Flows in Multihomed Smart Gas IoT Networks. Electronics 2021, 10, 711. https://doi.org/10.3390/electronics10060711

Academic Editor: Seok-Joo Koh

Received: 11 February 2021; Accepted: 10 March 2021; Published: 18 March 2021

1. Introduction

The goal of the Internet of Things (IoT) is to connect different devices and sensors to the Internet. According to a prediction by the Statista research department, the number of active IoT-connected devices such as sensors, nodes, and gateways will reach up to 30.9 billion units worldwide by 2025 [1]. The smart city is the new trend of the era, and the IoT is playing a major role in the design and deployment of smart city infrastructure. The design of a smart city is divided into multiple domains, subsystems, and blocks, and it is really difficult to implement an efficient design for a smart city by using the IoT. Some of the important domains of a smart city are electricity and natural gas management, water management, irrigation, waste material, parking space, and intelligent street lighting [2,3].
The datasets used in the design and simulation of such domains are publicly available on the Internet [4,5].

1.1. Smart Gas Networks

All over the world, natural gas is being used in homes and industries for heating and fueling purposes. Many research and development organizations are conducting their research in the area of IoT-based smart grids for natural gas [6,7]. Gas utilities gather
real-time data for load forecasts, efficient gas transportation, pressure gauging, gas theft detection, gas leaks, pipe corrosion, and remote cut-off in emergency situations. To achieve this, communication networks with advanced infrastructure are required; smart meters and sensors use these networks for data transmission. All the components are connected to form a smart grid [8]. Once the network is established [9], data can be collected by gas utilities even from old non-smart gas meters by attaching some extra devices and sensors [10]. Then, the data is analyzed with the help of artificial intelligence software for further actions [11].

1.2. Smart Gas Meters

A smart gas meter is an IoT device that consists of a processing unit connected with different sensors, wireless communication modules, a real-time clock, power management units, electric motors to control valves, a display unit, and a battery [12]. In addition to measuring gas flow, it also wirelessly connects to a smart gas grid over wide area networks, allowing data access, remote location monitoring, infrastructure maintenance, automatic billing, and load forecasting [13]. It can request a remote gas cut-off after detecting emergency situations by sensing earthquakes, gas leakage, etc. Batteries are used as the power source for smart meters, and low power modules are preferred in the design of a smart meter to enhance battery life [14]. Multiple wireless interface standards can be used in a smart meter design to enable it to simultaneously connect to multiple heterogeneous or homogeneous networks.

1.3. Selection of Appropriate Protocol in an IoT Network

The Internet runs on hundreds of protocols; many protocols are supported by IoT and many are still under development. When designing an IoT system, the system requirements should be defined very precisely, and then the right protocol should be chosen to address them. Currently, due to an increase in memory size and processing power, small embedded devices and modules are capable of running large programs and algorithms. The development of new wireless communication standards such as IEEE 802.11ah (Wi-Fi HaLow) for IoT has also enabled devices to communicate at much higher data rates over long distances [15,16]. In any communication network with many devices, network congestion is the main issue that causes poor data rates and packet loss [17]. In the IoT, the Transmission Control Protocol (TCP) has traditionally been avoided as a transport-layer protocol due to the extra overhead associated with it. However, recent trends and developments in IoT devices and networks are favoring TCP for congestion control and end-to-end reliable delivery of data [18].

1.4. Multipath Transmission Control Protocol (MPTCP) in Multihomed Devices

Modern IoT devices are equipped with multiple network interfaces, capable of simultaneously connecting to multiple network links and different Internet Protocol (IP) addresses. These links can be used for concurrent transfer of data; if any link fails, others can be used for successful data delivery. The multipath transmission control protocol (MPTCP) is embedded in all such modern multihomed devices. Multihoming is defined as the ability of a host or device to simultaneously connect to multiple heterogeneous or homogeneous networks [19]. Multihomed devices with MPTCP protocol support divide the application's data into multiple streams and then utilize multiple network paths simultaneously for data transmission/reception.
Load balancing, congestion control, and dynamic switching are handled by the protocol in order to improve throughput and quality of service [20].
1.5. Smart Gas Grid Infrastructure

In Figure 1, we present the structure of an IoT-based smart natural gas grid that uses multihomed smart gas meters with dual low power Wi-Fi HaLow interfaces capable of simultaneously connecting to two gateways of different Internet service providers (ISPs) for parallel transfer of data to the server. The information received from these smart meters, and from some other data sources, for example, weather information, is then used by the smart grid for short-term load forecasting (STLF). Deep learning methods are used for accurate load forecasts. Gas distribution management makes decisions according to the forecasts for intelligent distribution of gas to different areas. In such IoT networks, where end-to-end reliable transmission of data from smart meters to a server is required, TCP is always preferred over the User Datagram Protocol (UDP), and MPTCP is used in multihomed devices.

Figure 1. An Internet of Things (IoT)-based smart gas network.

1.6. Problem Analysis

With the growing number of devices in IoT networks, network congestion also increases. For smooth data transfer, a reliable transport layer protocol with an efficient congestion control algorithm is required. The Internet of Things uses small TCP/IP stacks with very limited capabilities, and many vulnerabilities and flaws have been found in such stacks. Growing technological developments are producing powerful small devices and large IoT networks with high data rates, and therefore network congestion is a critical issue. MPTCP is a protocol used for multipath data transfer. Many congestion control algorithms have been proposed by different researchers for MPTCP, and the performance evaluation of these algorithms is still under debate. There is a lot of research going on to design new congestion control algorithms for MPTCP.
The default coupled congestion control by using the linked-increases algorithm (LIA) of MPTCP ensures fairness in the case of a shared bottleneck but suffers from low throughput otherwise, due to its coupled architecture. In the case of a smart gas grid, the goal of a congestion control algorithm is to increase goodput for timely transmission of important data and to decrease packet retransmissions to save power. To fulfill this requirement, we have proposed a novel congestion control algorithm for MPTCP and compared our results with the default coupled congestion control algorithm of MPTCP.

1.7. Contribution

Our main contribution in this research is the design of a hybrid congestion control algorithm for MPTCP. The default MPTCP scheduler is used, which distributes packets among subflows by observing round trip time delays and congestion windows. As soon as there is
a space available in the queue of a subflow, packets are injected into the pipe. The proposed modified Fast-Vegas-LIA hybrid congestion control algorithm (MFVL HCCA) is based on modified Fast TCP, modified TCP Vegas, and the LIA congestion control algorithms. It uses a shared bottleneck detection method and works in uncoupled mode or coupled mode accordingly. We simulated our design in Network Simulator 2.34 (NS-2.34) and compared the results with the default coupled congestion control LIA of MPTCP.

The remainder of this paper is organized as follows: In Section 2, we give an overview of the background and related research work; in Section 3, we describe the details of the proposed model; the results and discussions are presented in Section 4; and our conclusions are stated in Section 5.

2. Background and Related Work

Many protocols for the IoT are available, and research is still going on to discover new protocols for this area. Researchers use different protocol stacks after carefully observing the requirements of an IoT system to fulfill its needs [21]. Some of the application layer protocols for IoT devices that require TCP at the transport layer for reliable communication are the Extensible Messaging and Presence Protocol (XMPP), the Message Queuing Telemetry Transport (MQTT) protocol, and the Advanced Message Queuing Protocol (AMQP). The MQTT protocol is widely used for data transmission between devices and servers; however, any other suitable protocol can also be used [22].

For memory constrained devices, some TCP/IP protocol stacks with limited functionality are available. These stacks are open source, and many embedded system developers and programmers from all around the world are making improvements to their code and functionality [23]. Millions of IoT devices are using these stacks. According to a recent report by Forescout Research Labs, 33 vulnerabilities have been found in the uIP, FNET, picoTCP, and Nut/Net stacks. An attacker can exploit these flaws to take full control of a device, execute code remotely, and steal data [24]. Therefore, these tiny open-source protocol stacks are no longer trustworthy. With the increase in processing speed, battery capacity, and memory size of IoT devices, more advanced algorithms and standard protocols are now required to fulfill the requirements of IoT systems. By 2025, billions of IoT devices will be connected to the Internet, sending tens of zettabytes of data [1].

Some of the most commonly used physical layer protocols for IoT are 802.15.4, 802.11 (Wi-Fi), Bluetooth Low Energy, and ZigBee Smart. A new low power and long range Wi-Fi standard, 802.11ah, with the name "Wi-Fi HaLow", has been developed for IoT devices [15]. It has a throughput range from hundreds of kilobits per second (kbps) to tens of megabits per second (Mbps) [25]. Table 1 shows the characteristics of different physical layer standards for IoT [26].

Table 1. Characteristics of different physical layer standards for IoT.

| | Wi-Fi HaLow | LoRaWAN | Sigfox | NB-IoT | Bluetooth | Z-Wave | Zigbee |
| Idle power consumption | Low | Low | Low | Low | Low | Low | Low |
| Data rate | 150 kbps–86.7 Mbps | | | | | | |
In multihomed devices, MPTCP is used at the transport layer. MPTCP transmits data over multiple paths by using subflows; the aim of this scheme is to increase throughput and robustness [28]. One common application of multipath communication in mobile phones is to use Wi-Fi and 3G paths simultaneously to transfer data in parallel, so that if one network path fails, the other can be used for data transfer [29]. However, an IoT device can also have multiple interfaces of the same standard available, for example, Wi-Fi, to connect to multiple Wi-Fi networks of different ISPs for concurrent transfer of data. MPTCP also performs load balancing among different paths. In MPTCP, separate congestion windows are maintained on each path; however, in order to prevent harm to fairness, especially in the case of a shared bottleneck link, executing congestion control independently on each path is avoided by most algorithms. Many congestion control algorithms have been proposed to couple together all the subflows of a single multipath flow in order to achieve fairness and efficiency [30]. A congestion control algorithm can be delay based, loss based, or hybrid; most of the congestion control algorithms for MPTCP are loss based. To ensure fairness and better performance over the Internet, three design goals for MPTCP congestion control have been defined: a multipath flow should perform at least as well as a single path TCP flow; if more than one subflow shares a single bottleneck link, the multipath subflows should not harm other TCP flows; and a multipath flow should utilize the less congested path more than the congested one [31].

Several protocol designs from various authors are available for multipath data transfer. A protocol, pTCP, was proposed to transfer data concurrently through multiple paths [32]. In [33], the authors considered the wireless link as the bottleneck to ensure protocol fairness and proposed a method to utilize the total bandwidth available on multiple paths of a multihomed mobile host. In [34], the authors proposed the design of a multipath TCP and discussed, in detail, an algorithm for detecting shared congestion at the bottleneck link by using fast retransmit events of different paths. In [35], the authors proposed a concurrent multipath transfer method (CMT) using the Stream Control Transmission Protocol (SCTP); CMT-SCTP is an improved version of SCTP for implementing multipath transfer in multihomed hosts. All these schemes use uncoupled congestion control by running separate congestion control on each subflow. In addition, some of the protocols show a high degree of unfairness in the case of a shared bottleneck.

In order to solve the issue of unfairness when multiple subflows of an MPTCP connection share the same bottleneck link, coupled congestion control algorithms have been proposed. In coupled congestion control algorithms, the congestion window of each subflow is updated by keeping in view the total congestion window of all subflows. The aim is to ensure bottleneck fairness and overall fairness in the network. In the case of a shared bottleneck link, these algorithms work efficiently, but the possibility of a common bottleneck link shared by multiple flows is very rare. In the absence of a shared bottleneck link, these algorithms only result in the underutilization of the available bandwidth.
Several loss-based coupled congestion control schemes, such as LIA [31], the balanced linked adaptation algorithm (BALIA) [36], and the opportunistic linked-increases algorithm (OLIA) [37], have been proposed. All these algorithms only control the increase mechanism of the congestion window in the congestion avoidance phase, and the remaining phases are like those of TCP Reno. In loss-based algorithms, the congestion window is adjusted on packet loss detection, so packet retransmissions are high, and packet losses only give a rough estimate of network congestion. Another loss-based algorithm, Dynamic LIA (D-LIA) [38], has been proposed that makes the decrease mechanism of the congestion window less aggressive by adjusting it dynamically, but this behavior is not efficient and only adds more packet losses. A delay-based congestion control algorithm, weighted Vegas (WVegas) [30], based on TCP Vegas, has also been proposed. The WVegas algorithm performs fine-grained load balancing by using packet queuing delay as the congestion signal. This algorithm shows low packet losses and better intra-protocol fairness; the authors tried to improve fairness and load balancing more than throughput. In order to resolve the issue of underutilization
of bandwidth by WVegas in large bandwidth delay product networks, another delay-based algorithm, MPFast [39], has been proposed. The MPFast algorithm uses Fast TCP as the congestion control algorithm for multipath transfer; however, the aggressive behavior of Fast TCP causes more packet losses in congested networks. In [28], the authors proposed machine learning methods for MPTCP path management to select high quality paths. In [20], a new MPTCP scheme with an application distributor for low memory multihomed IoT devices was proposed; the aim of the scheme was to solve the buffer blocking issues in multihoming. In [40], the authors proposed an energy efficient congestion control scheme, emReno, based on multipath TCP, to shift traffic from one path to a low cost energy path. Table 2 shows the comparison of existing MPTCP congestion control algorithms with the proposed MFVL HCCA.

Table 2. Comparison of existing multipath transmission control protocol (MPTCP) congestion control algorithms with the modified Fast-Vegas-LIA hybrid congestion control algorithm (MFVL HCCA).

MFVL HCCA (proposed) — Standardized: No. Mode of operation: Coupled and uncoupled (both). Network congestion detection method: Packet queuing delay and packet loss. Merits: Increased throughput, reduced packet loss, TCP friendliness. Demerits and research gaps: Improvements in the shared bottleneck detection method are required when the subflows suffer different network delays. Suitability for smart meters in a smart gas grid: High.

LIA [31] — Standardized: Yes (RFC 6356). Mode of operation: Coupled only. Detection: Packet loss. Merits: TCP friendliness in the case of a shared bottleneck link. Demerits: Reduced throughput, underutilization of bandwidth in coupled mode, increased packet loss rate due to the loss-based detection method and additive increase multiplicative decrease (AIMD) of the congestion window. Suitability: Low.

OLIA [37] — Standardized: No. Mode of operation: Coupled only. Detection: Packet loss. Merits: Responsive, non-flappy, TCP friendliness, less traffic transmission over the congested path. Demerits: Reduced throughput, less aggressive behavior due to the coupled control mode, increased packet loss. Suitability: Low.

BALIA [36] — Standardized: No. Mode of operation: Coupled only. Detection: Packet loss. Merits: TCP friendliness, responsiveness. Demerits: Reduced throughput due to coupled control. Suitability: Low.

D-LIA [38] — Standardized: No. Mode of operation: Coupled only. Detection: Packet loss. Merits: Increased throughput. Demerits: Increased packet retransmissions due to the dynamic decrease of the congestion window in the case of packet loss; the dynamic slow decrease causes more packet losses as compared with multiplicative decrease. Suitability: Low.

WVegas [30] — Standardized: No. Mode of operation: Coupled only. Detection: Packet queuing delay. Merits: Better intra-protocol fairness, better load balancing, reduced packet losses. Demerits: Reduced throughput and underutilization of bandwidth due to less aggressive behavior; less efficient for high bandwidth delay product networks. Suitability: Low.

MPFast [39] — Standardized: No. Mode of operation: Coupled only. Detection: Packet queuing delay. Merits: Increased throughput for large bandwidth delay product networks. Demerits: Increased packet loss due to the aggressive behavior of Fast TCP on the more congested link. Suitability: Low.

emReno [40] — Standardized: No. Mode of operation: Semi-coupled. Detection: Packet loss. Merits: Reduced energy consumption. Demerits: Unfriendliness, reduced throughput, increased packet loss due to the aggressive behavior of TCP Reno. Suitability: Low.

CMT-SCTP [35] — Standardized: No. Mode of operation: Uncoupled. Detection: Packet loss. Merits: Increased throughput. Demerits: Unfriendliness, increased packet loss due to aggressive behavior based on standard TCP. Suitability: Low.

3. Proposed Modified Fast-Vegas-LIA Hybrid Congestion Control Algorithm (MFVL HCCA) Design

In this section we present our proposed algorithm design. The MFVL algorithm is a hybrid congestion control algorithm which uses the following algorithms as submodules:
• Modified Fast TCP congestion control algorithm (MFast);
• Modified TCP Vegas congestion control algorithm (MVegas);
• Shared bottleneck detection method and coupled congestion control LIA.

After discussing the details of these submodules, we explain the functionality of our main algorithm.

3.1. Modified TCP Vegas Congestion Control Algorithm (MVegas)

Brakmo and Peterson proposed a delay-based algorithm called TCP Vegas for reliable transfer of data. According to the authors, it can achieve a much higher throughput than the loss-based algorithm TCP Reno, and fewer packet retransmissions are required [41]. However, a drawback of TCP Vegas is its inability to receive a fair bandwidth share when competing with TCP Reno and some other TCP variants; TCP Vegas always consumes less bandwidth as compared with the others [42]. In [43], the authors proposed modifications to TCP Vegas to overcome this problem. Hence, some modifications are required in the original TCP Vegas to improve its performance in terms of bandwidth consumption. TCP Vegas adjusts its congestion window size by calculating the difference between the expected and actual throughput. A greater difference is the result of increased round trip time (RTT) delay and indicates that the network is congested. TCP Vegas starts with slow start and then switches to congestion avoidance mode.

3.1.1. Slow Start

Vegas uses a threshold γ during slow start, with a default value of 1. When the difference (expected throughput − actual throughput) is less than γ, the congestion window (cwnd) is increased by 1 every other RTT. Hence, the cwnd in slow start grows exponentially, but at a slower rate than in TCP Reno. When the difference is larger than γ, or the value of cwnd becomes equal to the slow start threshold for the congestion window (ssthresh), the congestion avoidance phase starts. Upon leaving the slow start phase, the cwnd is decreased by 1/8 of its current value in order to prevent network congestion.

3.1.2. Congestion Avoidance

During the congestion avoidance phase, two threshold constants, α and β, are used. In TCP Vegas, the congestion control algorithm tries to maintain the number of packets in network queues between a minimum and a maximum value in order to prevent network congestion and avoid packet loss. The minimum value is represented by α and the maximum value by β; the default value of α is 1 and that of β is 3 in NS-2.34. The cwnd is updated according to Equation (1). Equation (2) gives the difference between the expected throughput and the actual throughput, where RTT is the observed RTT and baseRTT is the minimum observed RTT.

\[
cwnd =
\begin{cases}
cwnd + 1 & \text{if } diff < \alpha \\
cwnd - 1 & \text{if } diff > \beta \\
cwnd & \text{otherwise}
\end{cases}
\tag{1}
\]

\[
diff_t = \frac{cwnd_t}{baseRTT} - \frac{cwnd_t}{rtt}
\tag{2}
\]
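For reference, the congestion-avoidance step of Equations (1) and (2) can be written as a short routine. This is a minimal sketch; the function names, the once-per-RTT invocation, and the packet-based units are assumptions made for illustration and are not taken from the NS-2.34 implementation.

```python
def vegas_diff(cwnd, base_rtt, rtt):
    """Eq. (2): difference between expected and actual throughput."""
    return cwnd / base_rtt - cwnd / rtt

def vegas_update(cwnd, base_rtt, rtt, alpha=1, beta=3):
    """Eq. (1): standard Vegas congestion-avoidance window update, applied once per RTT."""
    diff = vegas_diff(cwnd, base_rtt, rtt)
    if diff < alpha:
        return cwnd + 1   # fewer than alpha packets queued: probe for more bandwidth
    if diff > beta:
        return cwnd - 1   # more than beta packets queued: back off gently
    return cwnd           # between alpha and beta: hold the window steady
```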
In our proposed design, we have made some modifications, only to the congestion avoidance phase. Let diff_t and diff_(t−1) represent the differences at the current time t and the previous time (t − 1), and let φ represent the ratio of the differences:

\[
\varphi = \frac{diff_{t-1}}{diff_t}
\tag{3}
\]

If diff_t is less than diff_(t−1), this shows a gradual decline in network congestion and, correspondingly, φ holds a value greater than 1. If diff_t is greater than diff_(t−1), this shows a gradual increase in network congestion and, consequently, φ will have a value less than 1. In order to make Vegas a bit more aggressive, for the purpose of consuming the shared bandwidth efficiently, we have introduced two increasing factors α′ and β′. Their current values at time t are represented by α′_t and β′_t. On reset (start), in our proposed model, they are set to constant values of 1 and 2, respectively. Their values are increased or decreased dynamically at run time by sensing the current and previous difference between throughputs. The updated values after the dynamic change are represented by α′_(t+1) and β′_(t+1), respectively. Our proposed model tries to keep the number of packets in network queues between (α + α′_t) and (β + β′_t) by adjusting the cwnd accordingly. The difference between throughputs is calculated, and if diff_t is less than (α + α′_t), this shows room for more packets to be injected into the network; consequently, the cwnd is updated according to Equation (4):

\[ cwnd = cwnd + 1 \tag{4} \]

The value of α′_(t+1) is calculated by using Equation (5):

\[ \alpha'_{t+1} = \min(k \cdot \alpha,\ \rho) \tag{5} \]

where

\[ \rho = \alpha'_t + \varphi \tag{6} \]

A value of φ > 1 indicates that there is a decrease in network congestion over time, and as a result, the proposed model increases α′ and β′ dynamically by adding the value of φ to their current values. The updated α′_(t+1) can have a maximum value equal to (k · α), where k is a constant whose value lies between 1 and 2. With k = 1, α′_(t+1) can achieve a maximum value of 1, because the default value of α is 1, and with k = 2, α′_(t+1) can have a maximum value of 2. In the proposed design, k was set to 2 after carefully evaluating it through different experiments. We observed that increasing it further increases the maximum value of α′_(t+1), resulting in a more aggressive increase in the congestion window and thus more packet drops. Decreasing k below 1 results in a decreased maximum value of α′_(t+1), which results in underutilization of the available bandwidth due to less aggressive growth of the congestion window. The value of β′_(t+1) is calculated by using Equation (7) (β′_(t+1) can have a maximum value equal to β):

\[ \beta'_{t+1} = \min(\beta,\ \omega) \tag{7} \]

where

\[ \omega = \beta'_t + \varphi \tag{8} \]

Any value greater than this maximum can result in higher packet loss due to a more aggressive increase in the congestion window. If diff_t > β + β′_t, this indicates that the network is currently congested; hence, the congestion window is decreased according to Equation (9):

\[ cwnd = cwnd - 1 \tag{9} \]

and β′_(t+1) is calculated according to Equation (10):

\[ \beta'_{t+1} = \max(\alpha + 1,\ \omega') \tag{10} \]

where

\[ \omega' = \beta'_t - \varphi \tag{11} \]

β′_(t+1) can only be decreased to a minimum value of α + 1; reducing it further would make it equal to or less than α, which would result in the underutilization of the available bandwidth and network queues. For the sake of avoiding severe degradation in the utilization of available bandwidth, we decided not to decrease α′ dynamically in the case of network congestion, as it is a factor that decides the minimum number of packets to be kept in network queues.
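A minimal sketch of the adaptive update rules in Equations (3)–(11) is given below; the class and attribute names are illustrative assumptions, and the sketch reflects the description above rather than the actual NS-2.34 patch.

```python
class MVegasState:
    """Per-subflow state for the modified Vegas congestion avoidance (Eqs. (3)-(11))."""
    def __init__(self, alpha=1.0, beta=3.0, k=2.0):
        self.alpha, self.beta, self.k = alpha, beta, k
        self.alpha_p = 1.0        # alpha' on reset
        self.beta_p = 2.0         # beta' on reset
        self.prev_diff = None     # diff_(t-1)

    def on_rtt(self, cwnd, diff_t):
        """Update cwnd and the adaptive thresholds once per RTT.

        diff_t is the throughput difference of Eq. (2).
        Returns the new congestion window (in packets).
        """
        # Eq. (3): ratio of previous to current difference (1.0 when undefined).
        phi = (self.prev_diff / diff_t) if (self.prev_diff and diff_t > 0) else 1.0

        if diff_t < self.alpha + self.alpha_p:
            cwnd += 1                                     # Eq. (4)
            self.alpha_p = min(self.k * self.alpha,       # Eq. (5)
                               self.alpha_p + phi)        # rho, Eq. (6)
            self.beta_p = min(self.beta,                  # Eq. (7)
                              self.beta_p + phi)          # omega, Eq. (8)
        elif diff_t > self.beta + self.beta_p:
            cwnd -= 1                                     # Eq. (9)
            self.beta_p = max(self.alpha + 1,             # Eq. (10)
                              self.beta_p - phi)          # omega', Eq. (11)
        # otherwise the window is left unchanged, as in plain Vegas;
        # alpha' is never decreased, as explained above.

        self.prev_diff = diff_t
        return cwnd
```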
3.1.3. Loss Recovery

When packet loss is detected (due to a timeout), ssthresh is set to half of the cwnd, cwnd is reset to 2, and the slow start phase starts again. When three duplicate ACKs are received, fast retransmission and fast recovery are executed. After a fast retransmit, cwnd is set to 3/4 of the current cwnd, and the congestion avoidance phase is executed again.

3.2. Modified Fast TCP Congestion Control Algorithm

Another congestion control algorithm, Fast TCP [44], uses queuing delay along with packet loss to detect network congestion. Fast TCP updates its window according to Equation (12), under which the network moves toward equilibrium rapidly in larger steps and slows down by taking smaller steps near equilibrium. Fast TCP adjusts its window in three phases: slow start (SS), multiplicative increase (MI), and exponential convergence (EC). Fast TCP uses the same slow start algorithm as TCP Reno with only a slight variation by using a threshold gamma; it exits slow start when the number of packets present in the network queue exceeds gamma. In order to achieve equilibrium, Fast TCP uses MI; as a protection measure, it increases or decreases its window only on alternate RTTs in both the MI and EC phases. When a packet loss is detected, the window is reduced to half and the loss recovery phase starts. In the EC phase, the window is increased exponentially. The new window size is calculated by using Equation (12):

\[
w \leftarrow \min\left(2w,\ (1-\gamma)\,w + \gamma\left(\frac{baseRTT}{RTT}\,w + \alpha(w, qdelay)\right)\right)
\tag{12}
\]

where γ ∈ [0, 1], baseRTT is the current minimum RTT, qdelay represents the (average) end-to-end queuing delay, and α(w, qdelay) is a constant that represents the number of packets each flow tries to maintain in the network buffer(s) at equilibrium [45].

In our algorithm, we modified this behavior of Fast TCP by introducing a controlling factor σ. The modified equation is given as follows:

\[
w \leftarrow \min\left(2w,\ (1-\gamma)\,w + \sigma\,\gamma\left(\frac{baseRTT}{avgRTT}\,w + \alpha(w, qdelay)\right)\right)
\tag{13}
\]

where σ is a constant whose value can be adjusted from 0 to 1. Towards 0, Fast TCP behaves less aggressively, and towards 1, it moves towards its natural aggressive behavior. In our design, we used σ = 0.5, after testing different values in different simulation scenarios. By increasing the value of this controlling factor, more throughput can be achieved, but unfairness and packet drops also increase.
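A minimal sketch of the modified window update in Equation (13), using the default values later listed in Table 6; the function name and arguments are illustrative assumptions, and α(w, qdelay) is treated as a fixed constant as in the description above.

```python
def mfast_window(w, base_rtt, avg_rtt, alpha=100, gamma=0.5, sigma=0.5):
    """Modified Fast TCP window update, Eq. (13), evaluated once per update interval.

    sigma scales the aggressiveness: values near 1 approach the original
    Fast TCP behaviour of Eq. (12) (with avgRTT in place of RTT), while
    smaller values grow the window more conservatively.
    """
    target = (base_rtt / avg_rtt) * w + alpha        # equilibrium-seeking term
    new_w = (1 - gamma) * w + sigma * gamma * target
    return min(2 * w, new_w)                         # never more than double per step
```

With σ = 1 and the instantaneous RTT in place of avgRTT, the same routine reproduces the unmodified update of Equation (12).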
3.3. Shared Bottleneck Detection and Coupled Congestion Control

In [34], the authors proposed a shared congestion detection method using fast retransmits. If two subflows or paths share the same bottleneck congestion, they are said to be "correlated"; otherwise they are "independent". We have assumed that the latency of the paths is equal. Every time a fast retransmit event happens at any subflow, the time of the event in that specific subflow is recorded in a list. In this way, two lists, A and B, with timestamps (a1, a2, . . . , am) and (b1, b2, . . . , bn) from subflows S1 and S2 are obtained. Then, timestamps from A and B are compared in such a way that if |ai − bj| < interval, then (ai, bj) is said to be a match. A match shows that packets were lost and that a fast retransmit event took place on both paths at about the same time; therefore, both paths probably share the same congested link. The maximum number of matched pairs (ai, bj) is represented by match(A, B), and min(m, n) gives the minimum number of recorded fast retransmit timestamps from lists A and B.
Two subflows are considered to be sharing the same congested bottleneck link if the detection metric exceeds a threshold δ:

\[
Detection = \frac{match(A, B)}{\min(m, n)} > \delta
\tag{14}
\]

The authors in [34], after performing several experiments, showed that by using interval = 200 ms and δ = 0.5, shared congestion could be detected successfully. We used the same method to detect a shared bottleneck. If a shared bottleneck is detected, our proposed model switches to coupled congestion control mode and selects the default coupled congestion control LIA of MPTCP [31] for both subflows. LIA uses the following steps for increasing the cwnd in the congestion avoidance phase only (the remaining phases behave like a standard TCP algorithm):

• cwnd_count_i is maintained for each subflow i;
• cwnd_count_i is the number of segments acked since the last increment of cwnd_i;
• cwnd_i is incremented by 1 (and after the increment, cwnd_count_i is set to 0) when

\[
cwnd\_count_i > \max\left(\frac{alpha\_scale \cdot cwnd_{total}}{alpha},\ cwnd_i\right)
\tag{15}
\]

where

\[
alpha = alpha\_scale \cdot cwnd_{total} \cdot \frac{\max_i\left(cwnd_i / rtt_i^2\right)}{\left(\sum_i cwnd_i / rtt_i\right)^2}
\tag{16}
\]

and alpha_scale is a precision parameter. According to [31], setting alpha_scale to 512 works well in most cases. We used the same algorithm in our model without any modifications.
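A minimal sketch of this increase rule (Equations (15) and (16)) is shown below; the function names and the per-ACK calling convention are illustrative assumptions, not the NS2 module code. Windows are in segments and RTTs in seconds.

```python
def lia_alpha(cwnds, rtts, alpha_scale=512):
    """Eq. (16): aggressiveness parameter over all subflows of one MPTCP flow."""
    cwnd_total = sum(cwnds)
    best = max(c / (r * r) for c, r in zip(cwnds, rtts))
    denom = sum(c / r for c, r in zip(cwnds, rtts)) ** 2
    return alpha_scale * cwnd_total * best / denom

def lia_on_ack(i, cwnds, rtts, cwnd_count, alpha_scale=512):
    """Per-ACK bookkeeping for subflow i during congestion avoidance (Eq. (15))."""
    cwnd_count[i] += 1
    threshold = max(alpha_scale * sum(cwnds) / lia_alpha(cwnds, rtts, alpha_scale),
                    cwnds[i])
    if cwnd_count[i] > threshold:
        cwnds[i] += 1        # linked increase across subflows
        cwnd_count[i] = 0
```

Because the increment happens only after max(cwnd_total/alpha, cwnd_i) acknowledged segments, the aggregate growth of all subflows stays no more aggressive than a single TCP flow, which is the fairness goal of LIA.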
3.4. Main Functionality of MFVL HCCA and Modes of Operation

Here, we discuss the main flow of our algorithm. Figure 2a shows the IoT protocol stack with MPTCP as the transport layer protocol. The system architecture is shown in Figure 2b. A smart meter with two Wi-Fi HaLow interfaces is connected to a remote database server via multipath connectivity. The multipath flow is divided into two subflows, and two gateways from different ISPs are used for multipath connectivity. Our proposed congestion control algorithm maintains separate congestion windows for each subflow. The MFVL HCCA operates in the following two modes:

1. Uncoupled mode;
2. Coupled mode.

On reset, the MFVL algorithm starts in uncoupled mode and runs the modified TCP Vegas congestion control algorithm for each subflow. On every packet loss, packet retransmission takes place. For a duration of T seconds, the number of retransmitted packets belonging to a particular subflow is counted, and the timestamps are also saved in an array. After the time T, shared bottleneck detection is performed by using the timestamps of packet retransmissions for each subflow. The probability of a bottleneck link being shared by two subflows belonging to the same multipath flow is very low, but if a shared bottleneck is detected, the MFVL algorithm switches to coupled congestion control mode and LIA is selected as the coupled congestion control algorithm for both subflows. If no shared bottleneck is detected, the MFVL algorithm keeps working in uncoupled congestion control mode. The numbers of retransmitted packets of Subflow 1 and Subflow 2 are compared: the modified Fast algorithm is selected for the subflow with fewer packet retransmissions, and the modified Vegas algorithm is selected for the subflow with more packet retransmissions. After time T, the shared bottleneck detection and selection process is repeated by observing packet retransmissions.
Figure 2. (a) IoT protocol stack with the proposed modified Fast-Vegas-LIA hybrid congestion control algorithm (MFVL HCCA) for the multipath transmission control protocol (MPTCP); (b) system architecture; (c) proposed MFVL HCCA flowchart.
3.4.1. Algorithm Limitations

The shared bottleneck detection method used in the proposed algorithm works well when the two subflows (paths) experience the same network latency, but in the case of different network delays, time synchronization is an issue. Hence, further improvements in the design are required. In the future, we plan to improve the efficiency of the shared bottleneck detection method by reducing the time required to detect a shared congested link and by handling multiple subflows experiencing different network delays.

3.4.2. Algorithm Explanation

The notations used in the proposed algorithm are given in Table 3 along with their definitions.

Table 3. Notations used in the proposed algorithm.

| Notation | Definition |
| t | Simulation time in seconds |
| T | A constant value for the time duration over which packet retransmissions are counted; in simulations we used T = 50 s |
| Interval | A time interval used in the shared bottleneck detection calculation; in simulations we used interval = 200 ms |
| bn_thresh | The bottleneck detection threshold δ; in simulations we used bn_thresh = 0.5 |
| P1_R | A variable to store the number of packet retransmissions belonging to Subflow 1 (Path 1) |
| P2_R | A variable to store the number of packet retransmissions belonging to Subflow 2 (Path 2) |
| P1[ ] | An array to store packet retransmission timestamps belonging to Subflow 1 (Path 1) |
| P2[ ] | An array to store packet retransmission timestamps belonging to Subflow 2 (Path 2) |
| Match | A variable to store the number of matched packet retransmission pairs from both subflows |
| Bneck_Detect | A variable to store the result of the bottleneck detection calculation |
| i, j | Loop and array index variables |
| i_max, j_max | Variables to store the maximum values of i and j |

Algorithm 1 shows the pseudocode. The algorithm starts with initial values of T = 50 s, bn_thresh = 0.5, interval = 200 ms, and all other variables equal to 0. At the start, the modified TCP Vegas is selected as the congestion control algorithm for each subflow. For a simulation time t = 0 to t = T s, on each packet retransmission event belonging to Subflow 1 (Path 1), the variable P1_R and the index i are incremented and the timestamp of the event is stored in array P1[i]. On each packet retransmission event belonging to Subflow 2 (Path 2), the variable P2_R and the index j are incremented and the timestamp of the event is stored in array P2[j]. When the loop ends, i_max holds the maximum value of i and j_max holds the maximum value of j. The values of P1_R and P2_R give the number of packet retransmissions that took place during time T for Subflows 1 and 2, respectively. The arrays P1[ ] and P2[ ] hold i_max and j_max stored timestamp values, respectively; these are the timestamps of the packet retransmission events belonging to Subflow 1 and Subflow 2. In the next step, using nested loops, the matched pairs from the two arrays P1[ ] and P2[ ] are detected: each element of P1[ ] is compared with each element of P2[ ] one by one by subtracting their values, and if the absolute value of the difference is less than the interval, the pair is said to be matched. The value of the interval in our case is 200 ms; therefore, all pairs from the two arrays P1[ ] and P2[ ] whose values differ by less than 200 ms are considered matched. The Match variable is incremented on detecting each matched pair. A matched pair indicates that retransmissions took place on both subflows at almost the same time.
A greater number of matches indicates that both subflows are sharing the same congested bottleneck link. For bottleneck detection, the number of matched pairs is divided by the minimum number of array elements. If the result is greater than bn_thresh (the bottleneck detection threshold), there is a high probability that a shared bottleneck is present; otherwise, it is absent. In the case of a shared bottleneck, the default coupled congestion control LIA is used for both subflows (paths).
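A minimal sketch of the matching and detection step just described, following Equation (14); the function name and list-based bookkeeping are illustrative assumptions rather than the simulator code.

```python
def shared_bottleneck(p1_times, p2_times, interval=0.2, bn_thresh=0.5):
    """Decide whether two subflows share a bottleneck from retransmission timestamps.

    p1_times, p2_times: retransmission timestamps (seconds) collected over one
    observation window T for Subflow 1 and Subflow 2.
    Returns True if the match ratio of Eq. (14) exceeds bn_thresh.
    """
    if not p1_times or not p2_times:
        return False                      # nothing to compare in this window
    match = sum(1 for a in p1_times
                  for b in p2_times
                  if abs(a - b) < interval)
    detection = match / min(len(p1_times), len(p2_times))
    return detection > bn_thresh
```

In the MFVL loop, this check is evaluated once per observation window of T seconds using the retransmission timestamps collected on each subflow.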
If a shared bottleneck is not detected, the numbers of packet retransmissions on the two subflows are compared. The modified TCP Vegas congestion control algorithm is selected for the subflow with more packet retransmissions, and the modified Fast TCP congestion control algorithm is selected for the subflow with fewer packet retransmissions. The algorithm then jumps to the label "Again", and from t = 0 to t = T s the whole procedure is repeated for the next cycle. The flowchart of the proposed algorithm is shown in Figure 2c.

Algorithm 1. MFVL HCCA
1: Inputs: T, Interval, bn_thresh   // T = 50 s, Interval = 200 ms, bn_thresh = 0.5
2: Outputs: P1_R, P2_R, P1[ ], P2[ ], Match, Bneck_Detect
3: Start:
4: Run Modified Vegas Congestion Control on subflow 1 (Path 1)
5: Run Modified Vegas Congestion Control on subflow 2 (Path 2)
6: Again:
7: Initialize: t = 0, i = 0, j = 0, i_max = 0, j_max = 0, P1_R = 0, P2_R = 0, P1[0] = 0, P2[0] = 0
8: For (t = 0 to t = T s) do
9:   if (packet retransmission takes place at subflow 1) then
10:    i++
11:    P1_R = P1_R + 1
12:    P1[i] = t
13:  End if
14:  if (packet retransmission takes place at subflow 2) then
15:    j++
16:    P2_R = P2_R + 1
17:    P2[j] = t
18:  End if
19: End For
20: i_max = i
21: j_max = j
22: For (i = 1 to i = i_max) do
23:   For (j = 1 to j = j_max) do
24:     If (|P1[i] − P2[j]| < Interval) then
25:       Match++
26:     End if
27:   End For
28: End For
29: Bneck_Detect = Match/min(i_max, j_max)
30: If (Bneck_Detect > bn_thresh) then
31:   Run Coupled Congestion Control on subflow 1 and subflow 2
32: End if
33: If (P1_R < P2_R) then
34:   Run Modified Vegas Congestion Control on subflow 2
35:   Run Modified Fast Congestion Control on subflow 1
36: End if
37: if (P1_R > P2_R) then
38:   Run Modified Vegas Congestion Control on subflow 1
39:   Run Modified Fast Congestion Control on subflow 2
40: End if
41: Go to Again

4. Simulations, Results, and Discussions

All the simulations were done using NS-2.34 with a wired-cum-wireless network topology. For the MPTCP simulation, the available MPTCP module [46] with coupled congestion control LIA (MPTCP-CC-LIA) for NS2 was used. Modifications were made to the original Fast TCP and TCP Vegas codes in NS-2.34, and these modified codes were then used to implement the proposed MFVL HCCA for MPTCP. Different experiments were performed to compare the performance of our proposed model with MPTCP-CC-LIA. The final results were plotted using GNUPLOT.

Figure 3 shows the network connections. Node S is a multihomed source node with two interfaces, S_0 and S_1, for wireless connections to gateways. A multipath flow is divided into two subflows: packets transmitted through S_0 represent Subflow 1, and packets transmitted through S_1 represent Subflow 2. The selected parameters for the wireless connections are based on the 802.11 standard of NS2.
The MPTCP agent with a File Transfer Protocol (FTP) traffic generator is connected to the source node. The source node connects wirelessly to two different gateway nodes, GW1 and GW2, via interfaces S_0 and S_1. The gateways are further connected to routers through wired links. Node D is the destination multihomed node with two wireless interfaces, D_0 and D_1; it is connected wirelessly to two gateway nodes, GW3 and GW4, through D_0 and D_1, respectively. The connection between R1 and R2 is the bottleneck link for Path 1, with a data rate of 3 Mbps, 50 ms delay, and a queue limit of 100 packets. R3–R4 is the bottleneck link for Path 2, with a data rate of 1 Mbps, 50 ms delay, and a queue limit of 100 packets. The remaining wired connections have a data rate of 10 Mbps and 10 ms delay. Nodes N1 and N3 are used to inject background traffic to create path congestion; constant bit rate (CBR) traffic generators with UDP agents are used. The packet size of CBR and TCP is set to 1000 bytes. The maximum window size for each interface is set to 100 packets. Null agents are attached to nodes N2 and N4, and the UDP agents are connected to the null agents. GW1-R1-R2-GW3 represents Path 1, and GW2-R3-R4-GW4 represents Path 2.

In the presented simulation scenarios, no shared bottleneck path is used, because we are more interested in analyzing the results of our proposed algorithm in uncoupled mode, which uses the modified Vegas and modified Fast algorithms. In coupled mode, the algorithm works according to MPTCP LIA; therefore, the behavior is the same as that of the default coupled congestion control of MPTCP.

Figure 3. Simulation topology.
To implement our proposed design, the required modifications were made in ns-defaults.tcl and other NS2 C files, and NS2 was recompiled using "make". Various experiments were run using the same simulation topology. The simulation time, represented by T, was set to 50 s. The CBR data rate was initially set to 1 Mbps, and the same CBR start-stop pattern was used for all experiments. The CBR start-stop pattern is shown in Figure 4 and the corresponding throughput in Figure 5. We wrote different awk scripts to calculate the average goodput, the average packet drop rate, and the goodput and packet drop versus CBR data rate. The simulation parameters are given in Table 4. The default values used for the modified Vegas and modified Fast algorithms are shown in Tables 5 and 6, respectively.
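The goodput metric used below counts only data actually delivered to the destination, so retransmitted packets do not inflate it. A minimal sketch of this kind of post-processing, assuming a simplified per-packet event log rather than the exact NS-2 trace format or the awk scripts used in this study:

```python
def average_goodput_and_loss(events, duration):
    """Compute average goodput (Mbps) and packet loss rate from per-packet events.

    events: iterable of (kind, size_bytes) tuples, where kind is 'recv' for a
    packet delivered to the sink, 'send' for a packet put on the wire, and
    'drop' for a packet dropped at a queue.  duration: observation time in seconds.
    """
    recv_bytes = sum(size for kind, size in events if kind == "recv")
    sent = sum(1 for kind, _ in events if kind == "send")
    dropped = sum(1 for kind, _ in events if kind == "drop")
    goodput_mbps = recv_bytes * 8 / duration / 1e6
    loss_rate = dropped / sent if sent else 0.0
    return goodput_mbps, loss_rate
```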
Figure 4. Constant bit rate (CBR) traffic generator start-stop pattern for each path with a CBR data rate of 1 Mbps.

Figure 5. Aggregate throughput of CBR traffic when source node S is using MPTCP with coupled congestion control linked-increases algorithm (MPTCP-CC-LIA).

Table 4. Simulation parameters for wired links.

| Simulation Parameter | Value |
| CBR packet size | 1000 bytes |
| TCP packet size | 1000 bytes |
| Queue type (wired links) | Drop Tail |
| Queue size (packets) | 100 |
| R1–R2 | Bottleneck for Path 1 |
| R3–R4 | Bottleneck for Path 2 |
Table 5. Modified TCP Vegas parameters.

| Parameter | Value on Reset |
| α | 1 |
| β | 3 |
| γ | 1 |
| α′ | 1 |
| β′ | 2 |

Table 6. Modified Fast TCP parameters.

| Parameter | Value on Reset |
| α | 100 |
| γ | 0.5 |
| MI threshold | 0.00075 |
| σ | 0.5 |

4.1. Setup One

In the first setup, we implemented our multipath uncoupled congestion control algorithm as follows:

(a) Using modified TCP Vegas (MVegas) congestion control on both subflows;
(b) Using modified Fast TCP (MFast) congestion control on both subflows.

We compared its performance with MPTCP-CC-LIA. The results in Figure 6 show the average goodput graphs of three different multipath flows; the goodput of a single multipath flow is the aggregate goodput of its individual subflows. The multipath flow with MFast (MPTCP-MFast) achieves better average goodput than MPTCP-MVegas and MPTCP-CC-LIA. The CBR traffic generator starts generating background traffic at t = 5 s and stops at t = 15 s; it starts again at t = 35 s and stops at t = 45 s. The CBR traffic pattern for both paths is the same. In the simulation, the FTP traffic for both paths starts at t = 1.5 s and stops at t = 50 s. When the FTP traffic starts, MVegas and MFast keep increasing their congestion windows until packet drops are observed. The more congested path (Path 2) experiences more packet drops when CBR is started in the background.

Figure 6. Average goodput result of Setup 1.
The packet drop starts after t = 5 s and rises sharply. On detecting a packet loss, the congestion window of the affected subflow is decreased, and therefore the aggregate goodput also decreases. Figure 7 shows the average packet drop rate corresponding to each multipath flow; the packet drop rate of a single multipath flow is the aggregate packet drop rate of its individual subflows. The average packet drop rate of MPTCP-MFast is also higher as compared with MPTCP-MVegas and MPTCP-CC-LIA.

Figure 7. Average packet loss rate of Setup 1.

The congestion window plots of the multipath flows MPTCP-MVegas and MPTCP-MFast are shown in Figures 8 and 9, respectively; the congestion window graph belonging to each subflow is shown. Path 1 is less congested than Path 2; Subflow 1 utilizes Path 1 and Subflow 2 utilizes Path 2. The results show that MPTCP-MVegas is less aggressive on both subflows as compared with MPTCP-MFast. This behavior of MVegas is very beneficial for avoiding packet drops on the path with more congestion, whereas MPTCP-MFast behaves aggressively on the more congested path, which results in more packet drops. By comparing the congestion windows of MFast and MVegas for the more congested path (Path 2), we observe that when the background CBR starts at t = 5 s, both windows are already in the congestion avoidance phase, with the window size of MFast more than double that of MVegas. On detecting packet drops, MFast decreases its window and reduces it to almost zero at t = 7 s, whereas MVegas reduces the congestion window at a very slow rate by using cwnd = cwnd − 1. The multiplicative increase in the cwnd of MFast gives rise to more packet drops. After t = 15 s, both go into slow start again.
Figure 8. Congestion windows of individual subflows belonging to MPTCP modified TCP Vegas (MPTCP-MVegas).