Our World, Welcome to IT

We know you don't get Network Computing for the pictures, but we couldn't publish an entire issue about product testing without sharing diagrams of our own test labs.

September 22, 2003


Syracuse

Our Syracuse labs, on campus at Syracuse University, started as a single room that housed a lone technology writer (Bruce Boardman, now our executive editor) in Machinery Hall (MH) in 1993. This facility was the subject of our first Centerfold, in February 1994 (see "A Blast From the Past"). Since then, we've expanded significantly, moving to machine-room space on the ground floor of the same building and adding a second general-purpose lab and a Wi-Fi lab, both in the Center for Science and Technology (CST) building. And we continue to grow--we plan to combine these three labs into a single new campus facility in the next year or two (we'll keep you posted).

The two general-purpose labs were designed to test a broad range of products, services and technologies--from anti-spam technology to zero-configuration networking and everything in between. Our CST and MH labs now play host to five full-time technology editors, two contributing editors and a score of freelancers who use our flexible infrastructure to test network- and desktop-management suites, security products, digital-convergence devices, infrastructure equipment, messaging and collaboration software, and other IT goodies.

A Look at Our Labs


The CST and MH labs are collocated on the university's 10,000-node network, and we frequently use that extended resource to test particularly demanding products. Each lab has its own Gigabit backbone, and there's a dedicated Gigabit fiber link between the two facilities. All the Cisco Systems, Extreme Networks, Nortel Alteon and Hewlett-Packard switches uplink to the backbone via 802.1Q trunks on Gigabit interfaces, enabling us to put any device on any subnet simply by changing the VLAN assignment on any port.
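For readers curious what that VLAN change amounts to in practice, here's a minimal Python sketch that simply prints the IOS-style commands a port reassignment involves. The interface name and VLAN number are hypothetical, and in our labs the change is made on the switch itself rather than through a script like this.

```python
# Sketch: generate the switch commands that move a lab device onto a new
# subnet by changing the VLAN assignment on its access port.
# The interface name and VLAN ID below are illustrative only.

def vlan_change_commands(interface, new_vlan):
    """Return IOS-style commands that reassign an access port to new_vlan."""
    return [
        "configure terminal",
        f"interface {interface}",
        "switchport mode access",
        f"switchport access vlan {new_vlan}",
        "end",
        "write memory",
    ]

if __name__ == "__main__":
    # Move the device on FastEthernet0/12 to the subnet carried by VLAN 210.
    for line in vlan_change_commands("FastEthernet0/12", 210):
        print(line)
```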

These labs house hundreds of general-purpose 2U, two-processor Intel- and SPARC-based servers and hundreds of 1U client machines, as well as specialized test equipment--traffic-generation and WAN-simulation tools, for instance--from Spirent Communications, Shunra Software and Ixia, large storage arrays from Snap Appliance, and several dedicated systems for use in tests that involve Microsoft Exchange, Active Directory, Novell eDirectory, LDAP servers and standards-based e-mail servers. We typically run six to 10 different tests simultaneously, and though we've developed and deployed a number of automated procedures to configure our test equipment--OS installation via Ghost and Microsoft Corp.'s Remote Installation Services, for instance--we're always on the lookout for ways to streamline these tasks even further.

To test mobile and wireless technologies, we head to our Wi-Fi lab, where we first conduct systematic performance tests using industry-standard tools like NetIQ Corp.'s Chariot. Our greatest challenge here is to ensure that RF interference doesn't contaminate our test results. To that end, we scan each RF channel using AirMagnet to verify that it's carrying no other wireless LAN traffic. If we detect significant non-WLAN noise, we use an Avcom-Ramsey spectrum analyzer to identify the source. Our systematic approach to placing access points and clients in specific lab locations, coupled with the use of identical client and server devices for all tests, helps ensure comparable results from product to product.

We pay close attention to other details that can impact radio transmission, too. We make sure metal doors are open or closed consistently across all product tests, for example, and we try to keep the same number of people in the room for each test. Such seemingly minor items can make a major difference in the reproducibility of test results. We also use WildPackets' AiroPeek and other wireless protocol analyzers extensively to make sense of anomalous findings.

Our WLAN range testing takes place in the CST building, which was constructed in 1983 and is representative of a typical office building, with concrete-reinforced floors and Sheetrock-over-metal-stud walls. By opening or closing the metal doors that separate major corridors, we can systematically alter the RF environment and assess its propagation and multipath characteristics. We always position access points in the same location (on a small shelf just below suspended ceiling tiles)--one that's typical of a real-world installation. We take RF measurements in several spots and maintain the same physical orientation of client devices, again for consistency. We measure raw RF signal levels, conduct performance tests using Chariot or do ping tests to verify IP connectivity, depending on the nature of the project. When ping testing, we also differentiate between each product's maximum range at its minimum data rate and at a specific performance threshold. When testing 802.11b products, for example, we might lock the device in at 5.5 Mbps and measure the maximum range at that data rate. For 802.11a and 802.11g products, we might choose 12 Mbps as the minimum performance threshold.
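The ping portion of that methodology boils down to something like the Python sketch below, which runs a standard Linux ping against a client at a given measurement spot and flags whether packet loss stays under a pass/fail cutoff. The client address and the 10 percent threshold are illustrative stand-ins, not our actual test parameters.

```python
# Sketch: a ping-based connectivity check of the kind run at each measurement
# spot during WLAN range testing. The client IP address and the pass/fail
# threshold below are illustrative, not our actual values.
import re
import subprocess

def ping_loss_percent(host, count=20):
    """Ping a host (Linux ping syntax) and return the reported packet loss."""
    out = subprocess.run(
        ["ping", "-c", str(count), "-W", "2", host],
        capture_output=True, text=True
    ).stdout
    match = re.search(r"(\d+(?:\.\d+)?)% packet loss", out)
    return float(match.group(1)) if match else 100.0

if __name__ == "__main__":
    loss = ping_loss_percent("192.168.10.50")   # hypothetical wireless client
    # Treat anything under 10 percent loss as "still in range" at this spot.
    print("in range" if loss < 10.0 else "out of range", f"({loss:.0f}% loss)")
```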

For fixed-wireless product testing, we use calibrated attenuators to measure range and verify vendor claims about RF parameters including power output and receiver sensitivity. We also conduct field tests to assess the relative ease of system deployment, though we've found that it's nearly impossible to ensure a consistent outdoor test environment--we get more accurate product-performance comparisons in the lab.
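The arithmetic behind attenuator-based range testing is a simple link budget, sketched below in Python. The transmit power, antenna gains and receiver-sensitivity figures are made-up examples, not measurements from any product we've tested.

```python
# Sketch: the link-budget arithmetic behind attenuator-based range testing.
# All figures (transmit power, antenna gains, claimed receiver sensitivity)
# are illustrative examples.

def max_attenuation_db(tx_power_dbm, tx_gain_dbi, rx_gain_dbi, rx_sensitivity_dbm):
    """Attenuation at which the received signal just meets the claimed sensitivity."""
    return tx_power_dbm + tx_gain_dbi + rx_gain_dbi - rx_sensitivity_dbm

if __name__ == "__main__":
    # e.g. a +15 dBm transmitter, 2 dBi antennas, vendor-claimed -85 dBm sensitivity
    headroom = max_attenuation_db(15.0, 2.0, 2.0, -85.0)
    print(f"Link should survive roughly {headroom:.0f} dB of path attenuation")
```

If the link dies well before the calibrated attenuators reach that figure, the vendor's power-output or sensitivity claims deserve a closer look.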

Green Bay

Our original Green Bay lab mirrors a corporate campus/remote-site setup: Half the network represents the company headquarters; the other half, a branch office or the Internet at large. We use this facility primarily to test Layer 4 to Layer 7 traffic and network edge devices. Spirent WebReflector and WebAvalanche devices can fill our Gigabit link and push out 27,000 HTTP transactions per second. Our Dell OptiPlex workstations are dual-boot, with Windows 2000 and Red Hat Linux, and serve as Chariot endpoints, IOMeter clients and RadView Software WebLoad agents for SSL and multiprotocol traffic generation.
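To give a sense of what that load generation involves--at a tiny fraction of the rates the Spirent gear sustains--here's a minimal multithreaded HTTP transaction generator in Python. The target URL, thread count and duration are illustrative; this is a rough sketch, not a stand-in for our commercial tools.

```python
# Sketch: a minimal multithreaded HTTP transaction generator, to illustrate
# the kind of load the dedicated appliances produce at far higher rates.
# The target URL, thread count and duration are illustrative.
import threading
import time
import urllib.request

TARGET = "http://192.168.20.10/index.html"   # hypothetical lab web server
DURATION_S = 10
THREADS = 8
counts = []

def worker():
    done = 0
    end = time.time() + DURATION_S
    while time.time() < end:
        try:
            urllib.request.urlopen(TARGET, timeout=2).read()
            done += 1
        except OSError:
            pass   # count only completed transactions
    counts.append(done)

threads = [threading.Thread(target=worker) for _ in range(THREADS)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(f"{sum(counts) / DURATION_S:.0f} HTTP transactions/sec across {THREADS} threads")
```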

Patch panels on each side of the lab provide access to 128 runs of Cat 5e/6 cable we have hidden in the ceiling, so we can easily configure devices on either side of the network without relocating them. This also provides a mechanism for dual-homing our white-box test client machines and directing traffic at a device from either side of the network. And though our Gigabit fiber link is almost always in use, the T1 provided by our pair of Cisco 7200 VXRs lets us test acceleration, caching and other devices intended to enhance low-bandwidth links. We often use the T1 for bandwidth-management testing as well.

A 100-Mbps link with a Shunra Storm inserted is an excellent mechanism for testing hardware and software reactions to packet loss, latency and congestion.
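A roughly comparable effect can be approximated in software: the Python sketch below wraps the Linux tc/netem facility to add delay and random loss on an interface sitting in the test path. It's offered for illustration only, with a hypothetical interface name and impairment figures, and isn't a description of how the Shunra Storm itself works.

```python
# Sketch: approximating WAN impairment (latency, loss) on a Linux box in the
# test path with tc/netem -- a rough software stand-in for a hardware
# impairment appliance. Interface name and figures are illustrative.
import subprocess

def apply_impairment(iface, delay_ms, loss_pct):
    """Attach a netem qdisc that adds fixed delay and random packet loss."""
    subprocess.run(
        ["tc", "qdisc", "replace", "dev", iface, "root", "netem",
         "delay", f"{delay_ms}ms", "loss", f"{loss_pct}%"],
        check=True,
    )

def clear_impairment(iface):
    """Remove the netem qdisc and restore the interface to normal."""
    subprocess.run(["tc", "qdisc", "del", "dev", iface, "root"], check=True)

if __name__ == "__main__":
    apply_impairment("eth1", delay_ms=80, loss_pct=0.5)   # simulate a lossy WAN hop
    # ... run the test traffic through this box, then:
    clear_impairment("eth1")
```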

We also do extensive storage testing in the Green Bay lab, where we have a file-share network that uses network-attached storage devices as well as standard servers. We have a small switched Fibre Channel SAN with 1 TB of RAID 5 storage and SCSI tape backup here, too, to facilitate testing of SAN hardware and software. Last year, we spent a considerable amount of time and money creating a lab within a lab in Green Bay. We use this new facility primarily to test business applications in a 24/7 environment for a fictional widgets manufacturer we call NWC Inc.--so it's the only one of our labs that isn't subjected to continual repurposing.
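For a feel for the kind of measurement our storage tests automate with tools like IOMeter, here's a crude sequential-write throughput check in Python. The mount point, block size and file size are illustrative, and real storage benchmarking controls far more variables than this sketch does.

```python
# Sketch: a crude sequential-write throughput check, far simpler than the
# IOMeter-style workloads used in real storage tests. The file path, block
# size and total size are illustrative; point it at the array under test.
import os
import time

def sequential_write_mbps(path, block_kb=64, total_mb=256):
    block = b"\0" * (block_kb * 1024)
    blocks = (total_mb * 1024) // block_kb
    start = time.time()
    with open(path, "wb") as f:
        for _ in range(blocks):
            f.write(block)
        f.flush()
        os.fsync(f.fileno())        # make sure the data actually hit the device
    elapsed = time.time() - start
    os.remove(path)
    return total_mb / elapsed

if __name__ == "__main__":
    print(f"{sequential_write_mbps('/mnt/testarray/io_probe.bin'):.1f} MB/s sequential write")
```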

Active Directory serves as NWC Inc.'s corporate directory; Microsoft Exchange 2000 provides mail and calendaring. Our critical database is IBM DB2 7.2 running on Windows 2000, with Oracle9i and SQL Server 2000 supplying ancillary functionality. These core applications run on Dell 2650 servers on both Windows and Red Hat Linux platforms. They communicate with each other over a copper Gigabit backbone and with the rest of the world through a Cisco Catalyst 4500 switch and a Cisco 7401 ASR router.

NWC Inc.'s customer-facing Web application is implemented in PHP and is served by an Apache Web server running on Red Hat 7.3 on a Dell 1650 server. Our IBM WebSphere 4.01 Application Server runs on Windows 2000 and provides core transaction functionality. Web services are provided by Cape Clear Software's CapeConnect. Our corporate CRM (customer relationship management) application is Accpac 5.5 CRM running on IIS 5.0 and Windows 2000 Server.

Chicago

Back in the 1990s, our lab at Neohapsis headquarters housed just enough gear to run periodic tests, primarily of security products--intrusion-detection systems, firewalls and vulnerability-assessment suites, for instance. We kept a minimal set of devices up and running full time, but reconfigured most of our equipment for each project to avoid logistical nightmares. This approach was simple and relatively easy to manage, but it didn't scale--the more complex the technology got, the clearer it became that our existing setup just didn't do the trick. Each test took an inordinate amount of time, and concurrent tests often collided with each other. We had to make some major changes.

First, we committed to keeping more static gear running continually. Second, we added the infrastructure to support more self-contained multiple-project networks, each of which provides basic Internet connectivity, name-service support and a firewall for two or more projects while keeping each test network on its own segment. Third, we started using drive-imaging software for rapid system deployment (mostly Windows NT and Linux). Finally, we built a distribution rack that let us attach (literally) any two devices in the lab to each other with a single patch panel.
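The segmentation behind those self-contained project networks can be sketched in a few lines of Python. The parent address block and project names below are hypothetical, offered only to illustrate handing each test its own subnet behind the shared firewall.

```python
# Sketch: carving a lab address block into per-project segments so each test
# network stays isolated. The parent block and project names are illustrative.
import ipaddress

LAB_BLOCK = ipaddress.ip_network("10.20.0.0/16")
projects = ["ids-review", "firewall-roundup", "va-suite", "spare"]

# Hand each project its own /24; the shared firewall routes each segment
# to the Internet while keeping the test networks apart.
for name, subnet in zip(projects, LAB_BLOCK.subnets(new_prefix=24)):
    gateway = next(subnet.hosts())      # first usable address for the segment firewall
    print(f"{name:18} {subnet}  gateway {gateway}")
```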

By striking a balance between static gear--Check Point Software Technologies and Cisco firewalls; Cisco routers; Cabletron, Cisco and Lucent Technologies switches; and Net Optics taps and other hardware, plus Layer 7 benchmarking tools like Spirent's products--and dynamic device pools--groups of switches, workstations and servers that can be repurposed--we're able to reduce setup time and increase efficiency. Today, we have a full-scale testing environment capable of supporting at least four tests on a variety of products simultaneously.
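The bookkeeping that keeps concurrent tests from grabbing the same box is simple enough to sketch in Python. The device names are made up, and this is an illustration of the idea rather than a tool we actually run.

```python
# Sketch: a minimal pool tracker for the dynamic gear -- switches, servers,
# workstations -- that gets repurposed between tests, so concurrent projects
# don't claim the same box. Device names are illustrative.
class DevicePool:
    def __init__(self, devices):
        self.free = set(devices)
        self.in_use = {}                      # device -> project name

    def check_out(self, device, project):
        if device not in self.free:
            raise ValueError(f"{device} already assigned to {self.in_use.get(device)}")
        self.free.remove(device)
        self.in_use[device] = project

    def check_in(self, device):
        self.free.add(device)
        self.in_use.pop(device, None)

pool = DevicePool(["cat2950-1", "cat2950-2", "server-07", "server-08"])
pool.check_out("server-07", "firewall-roundup")
print(sorted(pool.free))
```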

Our goal for the coming year: continue to diversify our production gear and decrease our provisioning times.

Ron Anderson is Network Computing's lab director. Before joining the staff, he managed IT in various capacities at Syracuse University and the Veterans Administration. Write to him at [email protected].

A Blast From the Past

Nearly 10 years ago, we ran an article on Syracuse University's transition from a mainframe network to a client/server network. Little did we know how fortuitous that connection would be. Thanks to the relationship that developed between the editors of Network Computing and the IT professionals at the university, we now have three product-testing labs on the Syracuse campus, with additional expansions and upgrades in store.

Here's a flashback, to put things in perspective. Technology sure has come a long way, baby.

The Syracuse University Network: Moving To Client/Server
By Linda Nicastro
Feb. 1, 1994

The computer network at Syracuse University, a private institution in Syracuse, N.Y., is designed to serve the needs of a typical academic community. Home to more than 15,000 students, 1,000 faculty and 3,000 nonteaching staff, the network supports instruction, research and administration and is undergoing a transition from mainframe to client/server computing.

The Client/Server Challenge

The plan to replace two campus mainframes with client/server systems and re-engineer legacy administrative applications by 1997 or 1998 challenges Syracuse today. Other changes include a pilot project that delivers Ethernet services to students' residence hall rooms.

Beyond the dorms, the university is addressing the remote-access needs of faculty, staff and commuter students with more sophisticated and secure services, including multiprotocol, remote-node dial-up systems. In time, new applications requiring higher bandwidth will challenge the university's basic network infrastructure, but Syracuse is ready to move with the market. One recently constructed campus building, for example, is equipped with fiber to the desktop to support FDDI. Syracuse also is partnering with Nynex Corp. to deploy NYNET, a wide-area ATM network.

The Syracuse Network Today

Much of the network is centralized in one building, Machinery Hall, and managed by the university's network systems group. The network is routed through a collapsed Ethernet backbone and fiber hubs, a system designed for manageability and cost-efficiency. More than 100 miles of fiber cabling provide connectivity among 60 buildings, while a uniform wiring plan brings Level 3 or Level 5 unshielded twisted-pair outlets to most campus offices.

In addition to the mainframes, network systems include an IDX 3000 Data PBX, an 800-node Systems Network Architecture (SNA) network, Unix time-sharing servers, SQL database servers, Network File System (NFS) servers running over TCP/IP and Novell servers supporting TCP/IP, IPX and AppleTalk protocols. Scattered across the campus are 14 networked microcomputer laboratory clusters housing Macintoshes, PCs and Unix workstations. Dial-in access is provided through a bank of 80 V.32bis dial-up modems. Academic computing resources are varied and include a new campuswide information system called SyraCWIS.
