9K App - Advanced Network Applications
The High Performance Computing community has already demonstrated that jumbo packets, IP packets exceeding 1500 bytes, offer an undeniable performance advantage (typically a factor of 2 at 1 Gbps and much more at higher rates), particularly when transferring large amounts of data. While the use of jumbo packets might seem a wizard's tuning trick, it represents a logical and immediately valuable benefit to grid computing, and it becomes a requirement for scalability as networks move to 10 Gbps and beyond. However, although the entire core and many of our GigaPoP networks are now jumbo enabled, very few of our campus networks support jumbo packets. Through performance analysis, we are building the case for incrementally jumbo-enabling research networks and offering insight into the different levels of performance benefit anticipated for distributed applications such as collaborative visualization, massive data transfer, and distributed file systems. Concurrently, we are working closely with the academic community in Canada and the U.S., from the corporate perspective, to raise awareness of subtle path MTU discovery issues.
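
As a rough illustration of why packet size matters at these rates, the short Python sketch below (our own back-of-the-envelope arithmetic, not part of the pilot; header sizes assume plain IPv4 and TCP with no options, and Ethernet framing overhead is ignored) compares the per-packet processing load and the header overhead of standard and jumbo MTUs at 1 and 10 Gbps.

    IP_TCP_HEADERS = 20 + 20  # bytes; assumed 20-byte IPv4 + 20-byte TCP, no options

    def packets_per_second(link_bps, mtu_bytes):
        # Packets per second needed to keep the link full with maximum-size
        # packets, ignoring Ethernet framing and inter-frame gaps.
        return link_bps / (mtu_bytes * 8)

    def payload_fraction(mtu_bytes):
        # Fraction of each packet that carries application payload.
        return (mtu_bytes - IP_TCP_HEADERS) / mtu_bytes

    for rate_name, rate_bps in (("1 Gbps", 1e9), ("10 Gbps", 1e10)):
        for mtu in (1500, 9000):
            print(f"{rate_name}, MTU {mtu}: "
                  f"~{packets_per_second(rate_bps, mtu):,.0f} pkt/s, "
                  f"{payload_fraction(mtu):.1%} payload per packet")

At 1 Gbps the host must handle roughly 83,000 packets per second at a 1500-byte MTU versus about 14,000 at 9000 bytes, and the ratio is the same at 10 Gbps with six times the absolute load, which is where the per-packet processing cost on end hosts begins to dominate.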
Phase one of our initial pilot project explores the effect of packet size variation on key mission-critical advanced network applications for the HPC community. The primary test explores the ability to interact dynamically with visualizations hosted on remote servers, as a function of packet dynamics over a well-characterized path. The test bed uses TCP transport across gigabit Ethernet sites of increasing RTT, with SGI's Vizserver platform on an Onyx 3000 server and both SGI and Linux workstations running Vizserver clients. Preliminary tests indicate network performance on the order of 400 megabits per second, with significant potential for improvement. Further testing is required to characterize the effect of modifying parameters and to identify the hierarchy of bottlenecks, leading to effective optimization.
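
One bottleneck we expect to rank highly on paths of increasing RTT is the TCP window rather than the MTU itself. The sketch below (our own illustration, using a hypothetical 30 ms wide-area RTT) estimates the bandwidth-delay product that the socket buffers must cover to sustain gigabit rates.

    def window_bytes(target_bps, rtt_seconds):
        # Minimum data in flight (bytes) needed to sustain target_bps
        # over a path with the given round-trip time.
        return target_bps * rtt_seconds / 8

    # Hypothetical example: a 1 Gbps target over a 30 ms wide-area path.
    print(f"window needed: {window_bytes(1e9, 0.030) / 1e6:.2f} MB")
    # -> 3.75 MB, well above typical default TCP socket buffer sizes.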
Phase two preliminary testing is set to explore the packet dynamics of NFS version 4 as a function of packet size and critical tuning parameters over TCP. We similarly expect that increased packet size may have a dramatic effect on performance. In a 2004 white paper, SGI reports on the order of 25 megabytes per second over a local gigabit Ethernet LAN using optimized NFS with an MTU of 1512 bytes. One of the key limiting factors for NFS has been end-node fragmentation and reassembly, which is severely detrimental to performance. We plan to demonstrate optimized NFS performance over a 9000-byte MTU gigabit Ethernet WAN with a block size of 8192 bytes, producing an unfragmented flow between hosts and vastly improved throughput, possibly approaching that of a local SAN over Fibre Channel. In addition, we will examine the effective utilization levels of both the network and the end hosts, to identify the relative contribution each makes to the overall apparent application performance.
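
The following sketch (our own illustration, assuming 20-byte IPv4 and TCP headers and ignoring RPC record-marking overhead) shows why an 8192-byte block pairs naturally with a 9000-byte MTU: the block that needs six segments at a 1500-byte MTU travels in a single unfragmented packet at 9000 bytes.

    import math

    IP_TCP_HEADERS = 40  # bytes; assumed 20-byte IPv4 + 20-byte TCP, no options

    def segments_for_block(block_bytes, mtu_bytes):
        # Number of full-MSS TCP segments needed to carry one NFS block.
        mss = mtu_bytes - IP_TCP_HEADERS
        return math.ceil(block_bytes / mss)

    for mtu in (1500, 9000):
        print(f"MTU {mtu}: {segments_for_block(8192, mtu)} segment(s) "
              f"per 8192-byte block")
    # MTU 1500 -> 6 segments; MTU 9000 -> 1 segment, so each block moves
    # between hosts in a single packet with no fragmentation or reassembly.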
The project team would like to acknowledge the generous participation
and supporting infrastructure from BCNET, CANARIE, CA*net4, HEPnet Canada,
IRMACS, Netera, Simon Fraser University, WestGrid, University of
Alberta Subatomic Physics and University of Victoria Physics.