Multi-Protocol Label Switching (MPLS) capabilities, which may eventually provide bandwidth
management for the UltraLight infrastructure. In 2007 we also began working with Internet2 and
the DCN project and were able to demonstrate similar end-to-end protected connections using
layer-2 "circuits" in place of MPLS tunnels. Demonstrations of this capability, involving the
UltraLight network between Chicago and Michigan, were shown at SC07 in the Internet2 booth
in conjunction with Terapaths. UltraLight's goal is to deploy the basic network services in a way
that empowers the end-users (or sites) to arrange the routing at both ends of the channel. One of
the challenges of the project is to combine an intelligent "optical core" with
intelligent Layer 3 edges. The use of a multi-layer protocol helps to interface connection-oriented
networks with routed networks.
UltraLight continues to develop and deploy a Local-host Information Service Agent (LISA)43 as
an additional monitoring and end-host tuning tool that gathers information about end-host
systems, since many "network" problems are really problems with underpowered or
mis-configured hosts. This
agent was again tested and used in the SC07 demonstrations and has proved to be a
vital component in managing and tuning the UltraLight network system.
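To illustrate the kind of end-host check an agent like LISA performs, the sketch below reads Linux TCP buffer tuning parameters from /proc and flags values too small for high bandwidth-delay-product WAN transfers. This is a hedged illustration only: the threshold and the specific checks are assumptions for this example, not LISA's actual logic.

```python
# Illustrative sketch (NOT LISA's actual code): an end-host check that
# reads Linux TCP tuning parameters and flags values too small for
# long-distance, high-bandwidth transfers. Threshold is an assumption.

def read_sysctl(path):
    """Return the fields of a /proc/sys entry, or None if unreadable."""
    try:
        with open(path) as f:
            return f.read().split()
    except OSError:
        return None

def check_tcp_buffers(min_max_bytes=16 * 1024 * 1024):
    """Warn if the maximum TCP buffer sizes look too small for a
    high-BDP WAN path (e.g. 10 Gbps at 100 ms RTT needs ~125 MB
    of window). Returns a list of (parameter, finding) pairs."""
    findings = []
    for name in ("net/ipv4/tcp_rmem", "net/ipv4/tcp_wmem"):
        vals = read_sysctl("/proc/sys/" + name)
        if vals is None:
            findings.append((name, "unreadable"))
        elif int(vals[2]) < min_max_bytes:  # third field is the maximum
            findings.append((name, "max of %s bytes is small" % vals[2]))
    return findings
```

A real agent would report such findings back to a central monitoring service rather than just returning them locally.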
Finally, the Virtual Intelligent Networks for Computing Infrastructure (VINCI) project44 provides
globally aware services for developing and gluing together a managed network. The main
objective of this project is to enable end user applications to efficiently use and coordinate shared,
hybrid network resources, and to correlate them with available processing power in order to
dynamically generate optimized workflows in complex distributed systems.
Storage
One of the goals of UltraLight is to enable high performance disk-to-disk data transfers across the
UltraLight network. This is a critical capability for data-intensive science; because the end
hosts' resources (CPU power, bus bandwidth, I/O and memory bandwidth on the motherboard)
are shared between transmission and read/write tasks, disk-to-disk transfer rates currently lag
behind memory-to-memory rates. We continue to investigate a number of possible hardware
configurations that may be able to achieve our goal of 1 GByte/sec disk-to-disk transfers across
the WAN.
In 2006 a partnership with REDDnet, led by Vanderbilt, was forged. This partnership enables the
L-Store storage system45 to be deployed across the UltraLight network, providing a
single file-system image across many storage "depot" servers (which are themselves distributed
over the wide area network). L-Store caches are deployed at the Caltech, Florida and Michigan
UltraLight sites. L-Store exploits a high degree of parallelism, both in data transfers across the
network and in reading and writing to disk. In addition, L-Store can perform efficient erasure
coding in software with minimal data movement, enabling the construction of high-quality
storage clusters out of commodity parts and, hence, pushing "depot" costs down to around a
thousand dollars per terabyte.
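The principle behind software erasure coding can be seen in its simplest form, XOR parity, where k data blocks plus one parity block survive the loss of any single block. The toy sketch below shows this special case; it is an illustration of the general idea only, not L-Store's actual (more general, Reed-Solomon-style) scheme.

```python
# Toy illustration of erasure coding via XOR parity: k data blocks plus
# one parity block tolerate the loss of any single block. Real systems
# use more general codes tolerating multiple losses.

def encode(blocks):
    """Return the data blocks plus one XOR parity block.
    All blocks must be the same length."""
    parity = bytearray(len(blocks[0]))
    for blk in blocks:
        for i, b in enumerate(blk):
            parity[i] ^= b
    return list(blocks) + [bytes(parity)]

def recover(stripe, lost_index):
    """Rebuild the block at lost_index by XOR-ing all the others."""
    length = len(next(s for i, s in enumerate(stripe) if i != lost_index))
    out = bytearray(length)
    for i, blk in enumerate(stripe):
        if i == lost_index:
            continue
        for j, b in enumerate(blk):
            out[j] ^= b
    return bytes(out)
```

Because any one block is redundant, a failed commodity "depot" can be rebuilt from the surviving ones, which is what makes cheap hardware viable for high-quality storage.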
41 On-demand Secure Circuits and Advance Reservation System (OSCARS), http://www.es.net/oscars/index.html
42 TeraPaths: A QoS Enabled Collaborative Data Sharing Infrastructure for Peta-scale Computing Research, http://www.atlasgrid.bnl.gov/terapaths/
43 Local Information Service Agent (LISA), http://monalisa.cacr.caltech.edu/monalisa__Service_Applications__LISA.html
44 VINCI: Virtual Intelligent Networks for Computing Infrastructures, http://monalisa.cacr.caltech.edu/monalisa__Service_Applications__Vinci.html
45 Logistical Storage (L-Store), http://www.lstore.org/