Fall 2006 Internet2 Member Meeting


The following advanced networking applications demonstrations were featured at the Fall 2006 Internet2 Member Meeting.

Demo Location:
Hyatt Regency McCormick Place
Conference Center 12, A-D

Demo Times:
Tuesday, 5 December, 12:00pm-5:00pm,
Wednesday, 6 December, 8:00am-5:00pm

New HD Technology - Cameras and Codecs - illustrated using ResearchChannel's Global N-Way HD Conferencing system (iHD1500)


Developed by:
ResearchChannel

Demonstrators on site:
Michael Wellings
James DeRoest
Mark Essig

Remote Partners:
iHDTV_Trusted_Partners group

Michael Wellings

iHDTV™ is a suite of software modules that work with commercially available components to capture, packetize and transport high-definition video in various formats over Internet Protocol networks, spanning the range of HDTV quality levels, with the goal of providing wider access to high-definition content. Developed by ResearchChannel along with the University of Washington, iHDTV™ was first demonstrated in 1999.

Since that time, the software suite has been expanded to enable and expedite efficient streaming at data rates of up to 1.5 Gbps, which is sufficient to transfer uncompressed 1080i high-definition video. The iHDTV™ code is designed to require minimal overhead to achieve low latency. Its modular design can easily support new devices and flexible modes of operation, and the code is highly configurable.
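The 1.5 Gbps figure can be sanity-checked with simple arithmetic. The sketch below is illustrative only (it is not part of the iHDTV™ code): it computes the SMPTE 292M HD-SDI serial rate and the active-picture payload of 1080i video at 30 frames per second with 10-bit 4:2:2 sampling.

```python
# Back-of-the-envelope check: why ~1.5 Gbps suffices for uncompressed 1080i.
# SMPTE 292M HD-SDI multiplexes two 10-bit sample streams (luma + chroma)
# at a 74.25 MHz sample clock.
SAMPLE_CLOCK_HZ = 74.25e6   # HD-SDI sample rate per stream
STREAMS = 2                 # Y (luma) and Cb/Cr (chroma, multiplexed)
BITS_PER_SAMPLE = 10

serial_rate_bps = SAMPLE_CLOCK_HZ * STREAMS * BITS_PER_SAMPLE
print(f"HD-SDI serial rate: {serial_rate_bps / 1e9:.3f} Gbps")  # 1.485 Gbps

# Active picture only: 1920x1080 pixels, 4:2:2 sampling (2 samples/pixel),
# 30 interlaced frames per second.
active_bps = 1920 * 1080 * 2 * BITS_PER_SAMPLE * 30
print(f"Active 1080i payload: {active_bps / 1e9:.3f} Gbps")  # ~1.244 Gbps
```

The active payload alone is about 1.24 Gbps; carrying the full serial stream plus packet overhead explains the stated 1.5 Gbps requirement.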

ResearchChannel’s iHDTV™ project is designed to explore how research and academic endeavors — and the world in general — would change if studio-quality HD video could be sent over a general-purpose Internet. More fundamentally, the project explores the intersection of network, video and server technologies where near real-time distribution of extremely high-quality images is required.

ResearchChannel has been experimenting with ways to develop and integrate technology and techniques for the distribution of high-quality video over the Internet. Over time, the definition of high-quality video, and the corresponding requirements for network capacity, have increased dramatically.

UW's Internet HDTV project is pushing the envelope on several fronts. Multiple system integration challenges have been encountered, from PC performance problems to Gigabit Ethernet incompatibilities. As the project continues, ResearchChannel hopes to gain insights that will help guide current network quality-of-service debates, questions about the relationship between quality and latency, and limitations of current PC architectures. ResearchChannel is also exploring applications besides HD television (such as collaboration, telemedicine and interactive visualization) that would benefit from iHDTV™ technology.

The most recent development in iHDTV™ is an open source collaboration with several ResearchChannel participants and industry partners to further develop the capability and scope of the iHDTV™ software tool set.

Role of Internet2:
Internet2 Abilene will be used for some of the demo traffic.

Gigaconference and Beyond: Multiple Vendor HD Videoconferencing Interoperability


OSU, Polycom, Radvision, Tandberg, Lifesize, Codian, and others.

Bob Dixon

Jonathan Tyman

The Internet2 Commons and its partners will demonstrate many different production-ready tools that researchers and educators use for collaborating over advanced networks. The Internet2 Commons helps drive adoption of real-time communications technologies while promoting interoperable development. The technologies demonstrate full-spectrum, real-time communications, from presence, chat, voice, and video to sharing of presentations, large data sets, and computer simulations. This is a chance to see, learn, compare, and maybe find the toolsuite that best fits your requirements and capabilities.

Among the most exciting recent technical advances in IP communications are the addition of high definition to H.323 videoconferencing, as demonstrated by the Gigaconference, and the maturation of collaboration toolsuites, as demonstrated by Collaboranza2!

Role of Internet2:
Most of the presenters are involved in the Internet2 Commons and work toward the common goal of raising the technical bar and the adoption rate for videoconferencing and collaboration over IP.



NG-911: An IP-Enabled Public Safety Test Bed


Texas A&M University
Columbia University

Walt Magnussen

Funding Organization:
U.S. Department of Commerce, National Telecommunications and Information Administration (NTIA)

The NG-911 project, funded by the U.S. Department of Commerce's NTIA, establishes a test bed for functionality testing of prototype IP-enabled Public Safety Access Points (PSAPs). This two-year project is a collaborative effort between universities, industry, state and local governments, the National Emergency Number Association (NENA) and Internet2. The project involves installing PSAPs in Texas and Virginia and subsequently testing them against the evolving NENA i3 (IP) standards. The supporting infrastructure development is underway at Columbia University, while the deployment and testing is taking place at Texas A&M University. NG-911 is a $1.3 million, two-year project made possible through generous contributions from the NTIA, the Texas and Virginia state 911 offices, Cisco and Nortel. It began in October 2004 and is scheduled to end in October 2006.

An Application for Exploring Volumetric Data Sets


University of Wisconsin - La Crosse

Demonstrators on site:
Steven Senger

Remote Partners:
The SUMMIT Group at Stanford University Medical School

Steven Senger

Using a tabletop vertical projection system, this application creates a stereoscopic virtual environment for visualizing volumetric data sets. The system implements a novel user-interaction paradigm characterized by the local application of computationally expensive algorithms under the continuous influence of the user, and it incorporates haptic feedback from visualized structures. This paradigm allows users to apply their expert knowledge in interpreting and guiding the visualization. The application uses multiple data streams, with different reliability requirements, between the server and client. These streams transport image data, haptic data and geometry data that can be used to interactively construct surface mesh models of visualized structures.
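The idea of per-stream reliability requirements can be sketched in a few lines. The names and policy below are hypothetical, not taken from the demo's code; they simply illustrate the usual trade-off the description implies: loss-tolerant, latency-critical streams favor an unreliable datagram transport, while data that must arrive intact rides a reliable byte stream.

```python
# Hypothetical sketch: mapping data streams with different reliability
# requirements onto transports. Stream names and the policy are
# illustrative assumptions, not the demo's actual design.
from dataclasses import dataclass

@dataclass
class Stream:
    name: str
    loss_tolerant: bool     # can the receiver conceal occasional loss?
    latency_critical: bool  # does freshness matter more than completeness?

def choose_transport(s: Stream) -> str:
    # Loss-tolerant, latency-critical streams go over datagrams;
    # everything else needs reliable, ordered delivery.
    if s.loss_tolerant and s.latency_critical:
        return "UDP"
    return "TCP"

streams = [
    Stream("image", loss_tolerant=True, latency_critical=True),      # rendered frames
    Stream("haptic", loss_tolerant=True, latency_critical=True),     # force feedback
    Stream("geometry", loss_tolerant=False, latency_critical=False), # mesh models
]
for s in streams:
    print(s.name, "->", choose_transport(s))
```

A dropped video frame or haptic sample is superseded by the next one, but a surface mesh built interactively from geometry data cannot tolerate missing pieces, hence the split.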

Funding Organizations:
National Science Foundation
National Library of Medicine

Role of Internet2:
This application, by virtue of its tightly-coupled interaction between multiple users and remote computation through multiple data streams, requires the predictable bandwidth and latency characteristics of Internet2.

Delivering Programs via Internet2, CLARA and GDLN (World Bank)

World Bank GDLN

Demonstrators on site:
W. Edward Johansen
Jan Zenetis
Mike McGill
Mike Gill
Arif Khan

Remote Partners:
Egyptian Orthopaedic Association
Argentinian Association of Orthopaedics and Traumatology
University of Athens
University of Toronto
Columbia University (tba)
National Library of Medicine
Egyptian University Network

W. Edward Johansen

This demonstration will feature a series of netcasts. Shoulder arthroscopic surgery, trauma care, road safety, and an archaeological dig at Faiyum are among the featured programs. These will be netcast in conjunction with the Internet2 Commons at a bandwidth of 2.0 Mbps. Tandberg will lend the needed equipment.

Role of Internet2:
Each of these remote sites has participated in previous Internet2 programs. The Internet2 Commons will be used.

A Collaborative Cyberinfrastructure for Event Driven Coastal Modelling


Southeastern Universities Research Association (SURA) in association with its member universities

Larry Flournoy
Gerry Creager
Gabriel Allen
Sara Graves

Remote partners:
Bedford Institute of Oceanography
Gulf of Maine Ocean Observing System
U of Florida
U of Miami

Larry Flournoy

Funding Organizations:

The SURA Coastal Ocean Observing & Prediction (SCOOP) program is building cyberinfrastructure (CI) to enable advanced event-driven ensemble forecasting of the coastal impacts of storms and hurricanes in the southeastern United States, integrating real-time distributed data and computer models for the East and Gulf coasts. This demonstration showcases the SCOOP system, which employs a service-oriented architecture to integrate data archive and transport services, a metadata catalog, resource and application management, distributed computational resources, and the scientific simulation codes modeling coastal phenomena.

Triggered by scientist interaction through a portal and by advisories from the National Hurricane Center, the demonstration shows how the SCOOP system can automatically build, schedule and deploy an appropriate ensemble of models for water level and wave properties, driven by real-time wind data. It highlights the ability to re-prioritize resource utilization based on data-driven criteria, for example storm intensity or proximity to the coast. Depending on the urgency of the situation and the strength of the storm, resources are scheduled with different levels of priority, with SCOOP models preempting already-running jobs in the extreme case.

Various products built from model data and real-time observations are shown through aggregating sites such as OpenIOOS.org. A distributed operational system such as SCOOP allows modelers to run multiple ensembles to increase the accuracy of a storm-path prediction and to deliver the forecast earlier, allowing for more efficient evacuation and preparation. The demonstration run will last approximately 20 to 30 minutes, use two desktop systems, and be repeated as necessary.
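Data-driven re-prioritization of the kind described can be sketched with a simple priority queue. All names, weights and the priority formula below are illustrative assumptions, not SCOOP's actual scheduler; the sketch only shows the general mechanism: stronger storms closer to the coast receive more urgent priorities and run ahead of routine work.

```python
# Illustrative sketch of data-driven job prioritization; the priority
# formula and field names are assumptions, not taken from SCOOP.
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class ModelRun:
    priority: int                    # lower value = runs first
    name: str = field(compare=False)

def priority(category: int, miles_to_coast: float) -> int:
    # Stronger storms nearer the coast get smaller (more urgent) values.
    urgency = category * 10 + max(0, 50 - miles_to_coast / 10)
    return -int(urgency)

queue: list[ModelRun] = []
heapq.heappush(queue, ModelRun(priority(2, 400), "routine ensemble"))
heapq.heappush(queue, ModelRun(priority(5, 80), "landfall ensemble"))

# The category-5 storm near the coast is scheduled ahead of routine work.
print(heapq.heappop(queue).name)  # landfall ensemble
```

A real scheduler would additionally preempt running jobs when an extreme event arrives, as the description notes; the queue above only decides dispatch order.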

Role of Internet2:
This distributed application depends on Internet2 connectivity at most of the member organizations in order for the ensemble modeling (many iterations) to be done in time to allow emergency services organizations to evacuate or otherwise prepare for hurricanes and significant weather events. Using terascale computing facilities connected by the Internet2 network allows modeling methods to be employed that were not possible even 18 months ago. While the actual distributed computations can be done over the commodity Internet, it is impossible to gather data, distribute data, compute, distribute results, visualize, analyze, and repeat the cycle multiple times on a short enough time cycle to be effective.

The Ohio State University Electron Microscope Application

The Ohio State University

Prasad Calyam,
Pankaj Shah

Prasad Calyam

Funding Organization:
The Ohio Board of Regents

OARnet is partnering with The Ohio State University's Center for Accelerated Maturation of Materials (CAMM) to make a set of the world's most powerful electron microscopes remotely accessible to researchers and industries over the Internet. Because the electron microscopes are valuable and expensive resources ($450,000 to $4 million), remote access has the potential to drastically shorten the development process for new materials and thus significantly reduce costs for industry partners, research labs and students at other university campuses.

Remote access to the OEM application is an exciting network-dependent application, the likes of which have never been seen or supported before. The following challenges illustrate how complex and demanding it is. The OEM application requires high-resolution video image transfers over network paths between CAMM and the remote end-user sites. These transfers consume significant computing resources for real-time video processing and require approximately 30 Mbps of network bandwidth per session, making this application more demanding than high-definition videoconferencing or even IPTV. In addition, the OEM application requires stringent, real-time control of the mouse and keyboard to manipulate the application controls; improper control could result in physical damage to the electron microscope that could cost in excess of $100,000 to repair. Even more demanding is the fact that the end-user Quality of Experience (QoE) is extremely sensitive compared to the QoE demands of any other network-dependent application.
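The stated 30 Mbps per session makes it easy to see why ordinary access links are the bottleneck. The link capacities below are common examples chosen for illustration, not figures from OARnet; only the 30 Mbps per-session requirement comes from the description above.

```python
# Rough capacity check: concurrent remote-microscopy sessions per link.
# Only the 30 Mbps/session figure is from the source; link sizes are
# illustrative assumptions.
SESSION_MBPS = 30  # stated requirement per OEM session

for link_name, link_mbps in [("100 Mbps Fast Ethernet", 100),
                             ("1 Gbps campus uplink", 1000)]:
    sessions = link_mbps // SESSION_MBPS
    print(f"{link_name}: up to {sessions} concurrent sessions")
```

A 100 Mbps campus link supports only three concurrent sessions before saturating, which is why high-capacity research networks matter for this application.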

Role of Internet2:
OARnet is ensuring that the networking capabilities of TFN are fully leveraged for a successful remote-access experience with the OEM application within Ohio. Remote access can also be leveraged by industry, research lab and university partners using the Internet2 network infrastructure anywhere in the US and in other parts of the world.

The START Collaboratory: Enabling 21st Century Astronomy Research in the Classroom



The Collaboratory Project, Northwestern University
The Sloan Digital Sky Survey, Johns Hopkins University
Hands On Universe, The Lawrence Hall of Science at the University of California, Berkeley

Demonstrators on site:
Gary Greenberg

Remote Partners:
Jordan Raddick, The Sloan Digital Sky Survey, Johns Hopkins University
Carl Pennypacker, Hands On Universe, The Lawrence Hall of Science at the University of California, Berkeley

Gary Greenberg

Funding Organization:
National Science Foundation
Strategic Technologies for the Internet (STI)

The START Collaboratory integrates access to gigabytes of searchable data and images from the Sloan Digital Sky Survey and SkyServer tools into Web-based collaborative research journals that can be shared and discussed online. From these research journals, students can request observations from Internet-controlled telescopes. These observations can be viewed with the START Web visualization and measurement tool. What distinguishes this approach is its ability to bring real data and real tools to students in ways that engage them in authentic research, generating useful scientific results just as professional astronomers do: learning science by doing science.

Role of Internet2:
While the user's connection to the START Collaboratory is over the commodity Internet, the START Collaboratory uses Web services to integrate data, tools and applications available on Internet2 into a Web-based collaborative research environment. Processing, queries and calculations are done on servers and returned to online collaborative research journals. Large FITS files are moved across Internet2 from remote telescopes, databases and servers to the START VisTool for Web-based visualization and measurement.

Application Driven Design for a Large-Scale, Multi-Purpose Grid Infrastructure


SURA (Southeastern Universities Research Association)

Mary Fran Yafchak, SURA
Howard Lander, (RENCI) Renaissance Computing Institute
Sarat Sreepathi, North Carolina State University
Mahantesh M Halappanavar, Old Dominion University
Victor Bolet, Georgia State University
Art Vandenberg, Georgia State University
John-Paul Robinson, University of Alabama at Birmingham
Enis Afgan, University of Alabama at Birmingham

Kate Barzee

Funding Organization:
National Science Foundation (OCI-054555)

SURAgrid is evolving into a generalized, regional education and research infrastructure that is positioned to become an essential tool for scientific progress. With support from the National Science Foundation (OCI-054555), SURAgrid is also establishing a representative application set to utilize and demonstrate SURAgrid’s current functionality and to help direct and catalyze future design and development. SURA will provide demonstrations of applications that illustrate the diversity of this set, which currently comprises GSU Multiple Genome Alignment, UAB BLAST, UNC/SCOOP Coastal Modeling with ADCIRC, NCState EPANET Optimization/Simulation and ODU Bio-Electric Simulator. A summary of these applications is included directly below and additional details are available at

UNC/SCOOP: Storm-Surge Modeling with ADCIRC – ADCIRC is a finite element shallow-water model for computing tidal and storm-surge water levels and the depth-averaged currents associated with these phenomena. Grid environments such as SURAgrid are ideal for the ensembles in applications like ADCIRC, which is one of a set of models included under the SCOOP (SURA Coastal Ocean Observing and Prediction) program, focused on improving predictions of coastal phenomena.
Project Partners: University of North Carolina Marine Science, Renaissance Computing Institute (RENCI), MCNC, SAIC

NCSU: EPANET Simulation-Optimization for Threat Management in Urban Water Systems – This application incorporates dynamic demand data, in real-time, into a simulation-optimization process for contamination threat management in drinking water distribution systems. The nature of this work is highly compute-intensive and requires multi-level parallel processing via computer clusters and high-performance computing architectures such as SURAgrid. Simulation-Optimization with EPANET is part of a multi-disciplinary, three-year NSF-funded DDDAS (Dynamic Data-Driven Application Systems) research project to develop a cyberinfrastructure system that will both adapt to and control changing needs in data, models, computer resources and management choices facilitated by a dynamic workflow design.
Project Partners: North Carolina State University; University of Chicago; University of Cincinnati; University of South Carolina

ODU: Bio-electric Simulator for Whole Body Tissues – This application is designed to simulate the response of a “whole body tissue” model to potential/current stimulus through direct electrode contact. The Electrical and Computer Engineering and Office of Computing and Communications Services departments at Old Dominion University are using SURAgrid to grid-enable this application to utilize concepts such as workflow and virtual data methods. The Bio-electric Simulator, which is both computation- and data-intensive, has been demonstrated to scale with the number of processors and can thus benefit from access to the additional computational resources of SURAgrid.

GSU: Multiple Genome Sequencing & Alignment – This multiple-sequence-alignment application takes a number of genome sequences as input and produces an aligned sequence based on their structure, using a pairwise alignment algorithm. Carefully designed, grid-enabled algorithms like this one, which implement a memory-efficient method of computation and are parallelized so that the workload is well distributed, afford bioinformatics users performance comparable to cluster environments when run on grids like SURAgrid, while giving them added flexibility and scalability.

UAB: Grid-Enabled Distributed BLAST – BLAST is a database search application for matching protein and nucleotide sequences. Maximizing the throughput of searches is key to improving research results. This distributed implementation of BLAST uses the DynamicBLAST Meta-scheduler to select appropriate grid resources for select query strings. Globus is used for job staging, submission and retrieval. ncbiBLAST performs the computations. Jobs are submitted using a web-based interface that leverages campus identity credentials via Pubcookie and manages grid authentication on behalf of the user via MyProxy, providing a simplified user authentication experience.
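The throughput-maximizing idea behind a meta-scheduler like the one described can be illustrated with a toy assignment function. Everything below is a hypothetical sketch: the function, site names and capacity weights are invented for illustration and are not DynamicBLAST code; it only shows the general strategy of giving faster grid resources proportionally more query sequences.

```python
# Hypothetical sketch of meta-scheduling: distribute query sequences
# across grid sites in proportion to a rough capacity weight.
# Names and weights are illustrative, not from DynamicBLAST.
def assign_queries(queries: list[str],
                   resources: dict[str, int]) -> dict[str, list[str]]:
    """resources maps a site name to a capacity weight (e.g. core count)."""
    assignment: dict[str, list[str]] = {site: [] for site in resources}
    # Weighted round-robin: expand each site by its weight, then deal
    # queries out in turn, so faster sites receive proportionally more.
    expanded = [site for site in resources for _ in range(resources[site])]
    for i, q in enumerate(queries):
        assignment[expanded[i % len(expanded)]].append(q)
    return assignment

batch = [f"seq{n}" for n in range(10)]
plan = assign_queries(batch, {"siteA": 4, "siteB": 1})
print({site: len(qs) for site, qs in plan.items()})  # siteA gets 8, siteB gets 2
```

A production meta-scheduler would also consider queue depth, data locality and authentication (as the description notes, via Globus and MyProxy), but the proportional split is the core of maximizing search throughput.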

Role of Internet2:
The applications included in this representative sample are resource- and compute-intensive. The SURAgrid infrastructure is a grid network of Internet2-connected SURAgrid participants. To perform well in this type of grid environment, the applications in this demo require the bandwidth of Internet2.