Internet distance prediction provides pair-wise latency information with limited measurements. Recent studies have revealed that the quality of existing prediction mechanisms, from the application perspective, falls short of satisfactory. In this paper, we explore the root causes of and remedies for this problem. Our experience with different landmark selection schemes shows that although selecting nearby landmarks can increase the prediction accuracy for short distances, it can degrade the prediction accuracy for longer distances. Such uneven prediction quality significantly impacts application performance. Instead of trying to select the landmark nodes in some “intelligent” fashion, we propose a hierarchical prediction approach with straightforward landmark selection. Hierarchical prediction utilizes multiple coordinate sets at multiple distance scales, with the “right” scale chosen for each prediction. Experiments with Internet measurement datasets show that this hierarchical approach is extremely promising for increasing the accuracy of network distance prediction.
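As a rough illustration of the idea (not the actual prediction system described above), the following Python sketch keeps one coordinate set per distance scale, uses the coarsest set to obtain a rough estimate, and then predicts with the finest scale whose range covers that estimate. The class name, coordinate representation, and scale-selection rule are all illustrative assumptions.

```python
import math

class HierarchicalPredictor:
    """Minimal sketch: one coordinate set per distance scale; the coarsest
    set picks the scale, then that scale's coordinates give the prediction."""

    def __init__(self, scales):
        # scales: list of (max_distance_ms, {host: coordinate_vector}),
        # ordered from finest (smallest range) to coarsest (global).
        self.scales = scales

    @staticmethod
    def _euclidean(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    def predict(self, src, dst):
        # Rough estimate from the coarsest (global) coordinate set.
        _, global_coords = self.scales[-1]
        rough = self._euclidean(global_coords[src], global_coords[dst])
        # Use the finest scale whose range covers the rough estimate.
        for max_dist, coords in self.scales:
            if rough <= max_dist and src in coords and dst in coords:
                return self._euclidean(coords[src], coords[dst])
        return rough

# Toy usage with 2-D coordinates (values are illustrative only).
scales = [
    (50.0,         {"a": (0.0, 0.0),  "b": (33.0, 44.0)}),   # fine scale (<= 50 ms)
    (float("inf"), {"a": (10.0, 0.0), "b": (40.0, 40.0)}),   # global (coarsest) scale
]
print(HierarchicalPredictor(scales).predict("a", "b"))  # 55.0
```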
A multitude of overlay network designs for resilient routing, multicasting, quality of service, content distribution, storage, and object location have been recently proposed. Overlay networks offer several attractive features, including ease of deployment, flexibility, adaptivity, and an infrastructure for collaboration among hosts. In this paper, we explore cooperation among co-existing, possibly heterogeneous, overlay networks. We design Synergy, a utility-based overlay internetworking architecture that fosters overlay cooperation. Our architecture promotes fair peering relationships to achieve synergism. Results from Internet experiments with cooperative forwarding overlays indicate that our Synergy prototype improves delay, throughput, and loss performance, while maintaining the autonomy and heterogeneity of individual overlay networks.
We design and evaluate an adaptive traffic conditioner to improve application performance over the differentiated services assured forwarding per-hop behavior. The conditioner is adaptive because the marking algorithm changes based upon the current number of flows traversing an edge router. If there are only a few flows, the conditioner maintains and uses state information to intelligently protect critical TCP packets. On the other hand, if there are many flows going through the edge router, the conditioner marks packets using only flow characteristics indicated in the TCP packet headers, without requiring per-flow state. Simulation results indicate that this adaptive conditioner improves the throughput of data-intensive applications, such as large FTP transfers, and achieves low packet delays and response times for Telnet and WWW traffic.
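A minimal Python sketch of the adaptive mode switch described above; the packet attributes, flow-count threshold, and marking rules are illustrative assumptions, not the paper's exact algorithm.

```python
def mark_packet(pkt, flow_table, flow_count, flow_threshold=100):
    """Sketch of an adaptive marker. `pkt` is assumed to expose flow_id,
    seq, and is_syn (hypothetical attributes, not a real packet API).

    Few flows  -> stateful mode: keep per-flow state and protect critical
                  TCP packets (e.g., SYNs and apparent retransmissions).
    Many flows -> stateless mode: mark using TCP header hints only.
    """
    if flow_count <= flow_threshold:
        # Stateful mode: track the highest sequence number seen per flow.
        state = flow_table.setdefault(pkt.flow_id, {"highest_seq": -1})
        retransmission = pkt.seq <= state["highest_seq"]
        state["highest_seq"] = max(state["highest_seq"], pkt.seq)
        return "IN" if (pkt.is_syn or retransmission) else "OUT"
    # Stateless mode: no per-flow table lookups or updates.
    return "IN" if pkt.is_syn else "OUT"
```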
Multipoint-to-multipoint communication can be implemented by combining point-to-multipoint and multipoint-to-point connection algorithms. In an ATM multipoint-to-point connection, multiple sources send data to the same destination on a shared tree. Traffic from multiple branches is merged into a single stream after every merge point. It is sometimes impossible for the network to determine any source-specific characteristics, since all sources in the multipoint connection may use the same connection identifiers. The challenge is to develop a fair rate allocation algorithm without per-source accounting, which, in this case, is not equivalent to per-connection or per-flow accounting.
We define fairness objectives for multipoint connections, and we design and simulate an O(1) fair ATM-ABR rate allocation scheme for point-to-point and multipoint connections sharing the same links. Simulation results show that the algorithm performs well and exhibits many desirable properties. We list key modifications necessary for any ATM-ABR rate allocation scheme to fairly accommodate multiple sources.
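For context, the following Python sketch computes a textbook max-min fair allocation over a single link, treating each multipoint connection as one entity with a single fair share. It illustrates one possible fairness objective for connections sharing a link; it is not the O(1) switch algorithm described above, and the demands and capacity are illustrative.

```python
def max_min_fair_shares(capacity, demands):
    """Max-min fair allocation of `capacity` among connections.

    demands: {connection_id: requested_rate}. A multipoint connection
    appears as a single entry, so all of its sources share one fair share
    (one possible fairness objective, not the only one)."""
    remaining = capacity
    shares = {}
    unsatisfied = dict(demands)
    while unsatisfied:
        fair = remaining / len(unsatisfied)
        # Fully satisfy every demand below the current fair share, then recompute.
        done = {c: d for c, d in unsatisfied.items() if d <= fair}
        if not done:
            shares.update({c: fair for c in unsatisfied})
            break
        for c, d in done.items():
            shares[c] = d
            remaining -= d
            del unsatisfied[c]
    return shares

# Example: a 10 Mbps link shared by two point-to-point connections and one
# multipoint connection -> {'p2p-1': 2, 'p2p-2': 4, 'mpt-1': 4}.
print(max_min_fair_shares(10, {"p2p-1": 2, "p2p-2": 4, "mpt-1": 10}))
```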
Collecting benefits using current FSSA systems is time-consuming, frustrating, and complex for needy citizens and social workers alike. The current process requires citizens to visit several offices in and outside their hometowns to receive the benefits they are entitled to. In many cases, dealing with this process prevents underprivileged citizens from devoting adequate time to improving their prospects of becoming self-supporting, with consequent harm to their health and safety.
Business-to-Business (B2B) technologies pre-date the Web; they have existed for at least as long as the Internet, and B2B applications were among the first to take advantage of advances in computer networking. The Electronic Data Interchange (EDI) business standard is an illustration of such early adoption. The ubiquity and affordability of the Web have made it possible for the masses of businesses to automate their B2B interactions. However, several issues related to scale, content exchange, autonomy, and heterogeneity still need to be addressed. In this paper, we survey the main techniques, systems, products, and standards for B2B interactions, and we propose a set of criteria for assessing the different B2B interaction techniques, standards, and products.
Overlay networks among cooperating hosts have recently emerged as a viable solution to several challenging problems, including multicasting, routing, content distribution, and peer-to-peer services. Application-level overlays, however, incur a performance penalty over router-level solutions. This paper quantifies and explains this performance penalty for overlay multicast trees via: 1) Internet experimental data; 2) simulations; and 3) theoretical models. We compare a number of overlay multicast protocols with respect to overlay tree structure and underlying network characteristics. Experimental data and simulations illustrate that the mean number of hops and mean per-hop delay between parent and child hosts in overlay trees generally decrease as the level of the host in the overlay tree increases. Overlay multicast routing strategies, overlay host distribution, and Internet topology characteristics are identified as three primary causes of the observed phenomenon. We show that this phenomenon yields overlay tree cost savings: our results reveal that the normalized cost L(n)/U(n) is proportional to n^0.9 for small n, where L(n) is the total number of hops in all overlay links, U(n) is the average number of hops on the source-to-receiver unicast paths, and n is the number of members in the overlay multicast session. This can be compared to an IP multicast cost proportional to n^0.6 to n^0.8.
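As a concrete reading of the normalized cost metric (the numbers below are illustrative, not drawn from the paper's datasets), the following Python sketch computes L(n)/U(n) for a small overlay multicast tree.

```python
def normalized_tree_cost(overlay_link_hops, unicast_hops):
    """Compute the normalized cost L(n)/U(n) defined above.

    overlay_link_hops: router-level hop count of each overlay (parent-child)
                       link in the multicast tree.
    unicast_hops:      hop count of the direct source-to-receiver unicast
                       path, one entry per receiver.
    """
    L = sum(overlay_link_hops)                 # total hops over all overlay links
    U = sum(unicast_hops) / len(unicast_hops)  # mean source-to-receiver unicast hops
    return L / U

# Toy example: a 4-receiver tree whose overlay links traverse 5, 7, 3, and 4
# router hops, with direct unicast paths of 9, 11, 8, and 10 hops.
print(normalized_tree_cost([5, 7, 3, 4], [9, 11, 8, 10]))  # 2.0
```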
Although it is well known that TCP throughput is suboptimal in multihop wireless networks, little performance data is available for TCP in realistic wireless environments. In this paper, we present the results of an extensive experimental study of TCP performance on a 32-node wireless mesh network testbed deployed on the Purdue University campus. Contrary to prior work, which considered a single topology with equal-length links and only 1-hop neighbors within transmission range of each other, our study considers more realistic heterogeneous topologies. We vary the maximum TCP window size in conjunction with two important MAC-layer parameters: the use of RTS/CTS and the MAC data rate. Based on our TCP throughput results, we give recommendations on configuring TCP and MAC parameters, which in many cases contradict previous proposals (which had themselves contradicted each other).
We design and implement an efficient on-line approach, FlowMate, for clustering flows (connections) emanating from a busy server according to shared bottlenecks. Clusters can be periodically input to load balancing, congestion coordination, aggregation, admission control, or pricing modules. FlowMate uses in-band (passive) end-to-end delay measurements to infer shared bottlenecks. Delay information is piggybacked on feedback from the receivers, or, if this is not possible, TCP or application-level round-trip time estimates are used. We simulate FlowMate and examine the effects of network load, traffic burstiness, network buffer sizes, and packet drop policies on clustering correctness, evaluated via a novel accuracy metric. We find that coordinated congestion management techniques are fairer when integrated with FlowMate. We also implement FlowMate in the Linux kernel v2.4.17 and evaluate its performance on the Emulab testbed, using both synthetic and tcplib-generated traffic. Our results demonstrate that clustering of medium to long-lived flows is accurate, even with bursty background traffic. Finally, we validate our results on the PlanetLab Internet testbed.
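A simplified Python sketch of the underlying idea: grouping flows whose delay samples are strongly correlated. The Pearson test, threshold, and greedy grouping below are illustrative stand-ins for FlowMate's actual shared-bottleneck inference.

```python
import math

def pearson(a, b):
    """Pearson correlation between two equal-length delay sample series."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    var_a = sum((x - ma) ** 2 for x in a)
    var_b = sum((y - mb) ** 2 for y in b)
    return cov / math.sqrt(var_a * var_b) if var_a and var_b else 0.0

def cluster_flows(delay_samples, threshold=0.6):
    """Greedily group flows whose delay samples correlate with a cluster's
    representative flow. delay_samples maps each flow_id to a list of
    time-aligned delay samples (all lists assumed equal length)."""
    clusters = []
    for flow, samples in delay_samples.items():
        for cluster in clusters:
            representative = delay_samples[cluster[0]]
            if pearson(representative, samples) >= threshold:
                cluster.append(flow)
                break
        else:
            clusters.append([flow])  # start a new cluster
    return clusters
```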
To date, the measurement of user-perceived degradation of quality of service during denial of service (DoS) attacks has remained an elusive goal. Current approaches mostly rely on lower-level traffic measurements such as throughput, utilization, loss rate, and latency. They fail to monitor all the traffic parameters that signal service degradation for diverse applications, and to map application quality-of-service (QoS) requirements into specific parameter thresholds. To objectively evaluate an attack’s impact on network services, its severity, and the effectiveness of a potential defense, we need precise, quantitative, and comprehensive DoS impact metrics that are applicable to any test scenario.
We propose a series of DoS impact metrics that measure the QoS experienced by end users during an attack. The proposed metrics consider QoS requirements for a range of applications and map them into measurable traffic parameters with acceptable thresholds. Service quality is derived by comparing measured parameter values with corresponding thresholds, and aggregated into a series of appropriate DoS impact metrics. We illustrate the proposed metrics using extensive live experiments, with a wide range of background traffic and attack variants. We successfully demonstrate that our metrics capture the DoS impact more precisely than the measures used in the past.
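As a hedged illustration of how such a metric could be computed (the application categories, threshold values, and function names below are hypothetical, not the paper's actual definitions), the following Python sketch flags application transactions whose measured parameters violate their QoS thresholds and aggregates them into a failure percentage.

```python
# Hypothetical per-application QoS thresholds (illustrative values only):
# a transaction "fails" if any measured parameter violates its bound.
THRESHOLDS = {
    "web":  {"loss_max": 0.02, "delay_ms_max": 4000, "goodput_kbps_min": 50},
    "voip": {"loss_max": 0.03, "delay_ms_max": 150,  "goodput_kbps_min": 30},
}

def transaction_ok(app, measured):
    """Check one transaction's measured parameters against its thresholds."""
    t = THRESHOLDS[app]
    return (measured["loss"] <= t["loss_max"]
            and measured["delay_ms"] <= t["delay_ms_max"]
            and measured["goodput_kbps"] >= t["goodput_kbps_min"])

def percentage_of_failed_transactions(transactions):
    """Aggregate impact in the spirit described above: the percentage of
    application transactions whose QoS fell below threshold during the
    attack. transactions: list of (app, measured_params) tuples."""
    if not transactions:
        return 0.0
    failed = sum(0 if transaction_ok(app, m) else 1 for app, m in transactions)
    return 100.0 * failed / len(transactions)
```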