FTW: International OpenFlow/SDN Testbeds


An Experience Report on Extending Dynamic Layer-2 Services to Campuses

Time 04/01/15 02:10PM-02:30PM

Room Bayview Ballroom 214

Session Abstract

The objective of this work is to experiment with deploying dynamic Layer-2 (L2) path-based networking services, based on OpenFlow/SDN, on university campuses, and to enable and evaluate inter-domain wide-area path-based services. We used the Dynamic Network System (DYNES) equipment that was deployed as part of the NSF MRI DYNES project at 40 universities and 11 regional network providers. The per-site DYNES equipment consists of a File Data Transfer (FDT) server, an Inter-Domain Controller (IDC) host, a perfSONAR (pS) host, and one Ethernet switch, which was OpenFlow-enabled in the later deployments. The FDT server runs applications, the IDC host runs the control-plane software, and the pS host runs active-measurement tools for monitoring network performance.
For our experiments, we worked with several individuals at the following universities and used their DYNES equipment: (i) U. Virginia (UVA), (ii) MAX GigaPoP (MAX), (iii) Indiana University (IU), (iv) U. Wisconsin-Madison (UWisc), (v) University of New Hampshire (UNH), (vi) Internet2 Lab (I2Lab), (vii) Rutgers University, (viii) U. Colorado (CU), and (ix) U. Chicago, and at one regional research and education network (REN), Northern Crossroads (NoX). Open Exchange Software Suite (OESS), an SDN controller for OpenFlow switches, and On-Demand Secure Circuits and Advance Reservation System (OSCARS), which enables inter-domain dynamic circuit reservation and provisioning, were installed on the IDC hosts at each DYNES site.
At each DYNES site, the OpenFlow (or, in some cases, non-OpenFlow) switch, OESS, and OSCARS were configured. For example, IP addresses needed to be configured to enable topology discovery by OESS, and the OESS UI was used to set the remote-link information for the data-plane port facing the peering network. The UVA DYNES OSCARS instance needed to be configured with a server certificate, and the certificate owner and issuer information had to be manually communicated to Internet2's administrator for configuration of Internet2's AL2S OSCARS. These certificates are used in the authentication process for inter-domain L2 path requests.
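
To illustrate the certificate details that had to be exchanged, the sketch below prints the owner (subject) and issuer of a PEM-encoded server certificate. It is a minimal example, not part of the OSCARS tooling: it assumes the third-party Python cryptography package, and the certificate path is hypothetical.

    from cryptography import x509  # third-party: pip install cryptography

    # Hypothetical location of the OSCARS server certificate on the IDC host.
    CERT_PATH = "/etc/oscars/server-cert.pem"

    with open(CERT_PATH, "rb") as f:
        cert = x509.load_pem_x509_certificate(f.read())

    # The subject (owner) and issuer distinguished names are the two
    # pieces of information that were communicated to Internet2's
    # administrator for the AL2S OSCARS configuration.
    print("Owner (subject):", cert.subject.rfc4514_string())
    print("Issuer:", cert.issuer.rfc4514_string())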
We then requested that static VLANs be provisioned through the routers/switches on the path between the DYNES switch on each campus and the Internet2 AL2S switch port to which the corresponding regional provider network was connected. This required manual interaction with administrators on campuses and at regional providers.
After these basic configuration steps were completed, we used the OESS Web interface to provision VLANs from the UVA DYNES switch port that was connected to the FDT server to the corresponding ports at the other universities. Next, we configured VLANs and private IP addresses at the FDTs, in effect creating end-to-end VLANs, as sketched below. Finally, we ran nuttcp and GridFTP tests between FDTs at 3-4 Gbps across the dynamically established wide-area campus-to-campus VLANs. Since this traffic ran on campus links that carried production traffic, rate limits were placed on these VLANs.
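
The per-FDT step can be sketched as follows, assuming Linux FDT hosts with the iproute2 tools and the nuttcp client installed; the interface name, VLAN ID, and private addresses are illustrative, not the values used in the experiments.

    import subprocess

    # Illustrative values; the actual NIC name, VLAN tag, and private
    # addressing plan varied per site.
    PARENT_IF = "eth1"          # FDT data-plane NIC (assumption)
    VLAN_ID = 3001              # VLAN tag provisioned via OESS (assumption)
    LOCAL_IP = "10.10.1.1/24"   # private IP on the end-to-end VLAN (assumption)
    REMOTE_IP = "10.10.1.2"     # FDT at the far-end campus (assumption)

    def run(cmd):
        """Echo and run a command, failing loudly on error."""
        print("+", " ".join(cmd))
        subprocess.run(cmd, check=True)

    vlan_if = f"{PARENT_IF}.{VLAN_ID}"

    # Create the 802.1Q subinterface and assign the private IP,
    # completing the end-to-end VLAN at this FDT.
    run(["ip", "link", "add", "link", PARENT_IF, vlan_if,
         "type", "vlan", "id", str(VLAN_ID)])
    run(["ip", "addr", "add", LOCAL_IP, "dev", vlan_if])
    run(["ip", "link", "set", vlan_if, "up"])

    # Memory-to-memory throughput test toward the far-end FDT, which
    # runs "nuttcp -S"; 30-second transfer with one-second reports.
    run(["nuttcp", "-T30", "-i1", REMOTE_IP])

A GridFTP test follows the same pattern, with globus-url-copy run between the FDTs' private VLAN addresses.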
Lessons learned include an understanding of the equipment and configuration steps required to enable dynamic L2 service in each domain, and an understanding of how to configure end hosts to run existing applications, without modification, across end-to-end L2 paths. We also developed methods for debugging path-setup failures, and identified areas where improvements are required in the controllers and in the administrative processes. Our data-plane experiments in moving large datasets across inter-domain L2 paths showed that the policing mechanisms providers use to control the rate of packet entry into these L2 paths can have adverse effects on TCP throughput if the sending rate is not limited at the end host. We conclude that this service is viable and can be made available to scientists by following a set of next steps to extend it to the dedicated data transfer nodes that are typically available at supercomputing sites for WAN transfers.
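
As one example of limiting the sending rate at the end host, a token-bucket queueing discipline can be attached to the FDT's VLAN subinterface so that TCP never bursts above the policed rate. This is a minimal sketch using the Linux tc tool from Python; the device name and parameters are illustrative, and the rate must be set at or below the provider's policer.

    import subprocess

    DEVICE = "eth1.3001"  # end-to-end VLAN subinterface (assumption)
    RATE = "3gbit"        # at or below the provider's policing rate (assumption)

    # Token-bucket filter: smooths the sender so that policer drops, and
    # the TCP congestion-window collapses they trigger, are avoided.
    subprocess.run(
        ["tc", "qdisc", "add", "dev", DEVICE, "root",
         "tbf", "rate", RATE, "burst", "1m", "latency", "50ms"],
        check=True,
    )

Application-level pacing, such as nuttcp's -R rate-limit option, is an alternative when the queueing discipline cannot be changed.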

Speakers

Speaker Malathi Veeraraghavan, University of Virginia

Presentation Media