By Susan Fogarty

Gas prices are going through the roof. Most of us cringe at those words, but they cause Spectrum ASA to spring into action. When the market is high, the seismic imaging company must act quickly to scout and analyze new locations where oil and natural gas could occur below the earth’s surface.

For Spectrum, action means processing huge volumes of data. “Our job as a company is to do [oil] surveys in frontier areas,” says Andrew Cuttell, executive vice president of data processing at Spectrum. “We’ll do a survey in a particular area, process the data and sell the results to the oil and gas companies, who are deciding if they want to bid on licenses to explore these areas. Right now the oil price is high, so oil companies have money to spend, and they are quite encouraged to look at frontier areas to find the next big oil fields.”

Spectrum’s geophysicists are known for their success in handling challenging datasets from all over the world. Based in Oslo, Norway, Spectrum has eight offices on four continents and is growing at a rapid pace. The company performs geophysical work including land and marine processing, pre-stack depth and time migration, AVO and AVAZ analysis, and inversion studies.

Intense data demands

All these tests produce reams of data, requiring a high-powered IT infrastructure that is centralized in the company’s data center in Houston, Texas. “We start out with very, very large volumes of data,” explains John Lyons, vice president of information technology at Spectrum. That data originates in the field, and most is relayed to Houston’s computing cluster for processing. “Every time we process a job, we’re creating thousands and thousands of data points that are put through several algorithms in order to produce an image of what the subsurface of the earth looks like. At least a few terabytes of data are passed around the cluster, have calculations done to them, and then have a new dataset generated of equivalent size,” Lyons says.

Spectrum needs to process its data quickly and accurately to satisfy clients like Exxon, Shell, and Chevron, says Cuttell. When the petroleum market surged in 2011, however, the company found that part of its data center was creating quite a bottleneck.

Although the cluster itself could handle the load and had been going through a rolling upgrade to keep it current, the network infrastructure carrying data to the cluster and between nodes could not keep pace with processing performance.

“The cluster CPU [central processing unit] was being starved of data and we weren’t getting the maximum performance out of the systems,” says Lyons. “We are expanding very rapidly and need to expand the clusters to meet that.”

High-performance network needed

Part of Spectrum’s expansion has included installing new ultradense servers, says Lyons. The newer racks in the clusters are built with Dell™ PowerEdge™ C6100 four-node chassis servers. Each node in a chassis has two Intel® Xeon® X5675 processors, 96GB of memory, three 600GB SAS hard drives, and two 10Gb Ethernet ports. “A fully populated rack provides up to 1,152 CPU cores,” Lyons calculates. “We currently have 12 racks in the cluster room.”
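
As a rough check on those figures, here is a minimal back-of-the-envelope sketch in Python. The Xeon X5675 is a six-core processor; the 24-chassis-per-rack count is inferred from the stated 1,152-core total rather than quoted in the article.

```python
# Back-of-the-envelope core count for Spectrum's cluster racks,
# based on the figures quoted in the article.
CORES_PER_CPU = 6          # Intel Xeon X5675 is a six-core processor
CPUS_PER_NODE = 2
NODES_PER_CHASSIS = 4      # Dell PowerEdge C6100 holds four nodes
CHASSIS_PER_RACK = 24      # inferred: 1,152 cores / 48 cores per chassis
RACKS = 12

cores_per_node = CORES_PER_CPU * CPUS_PER_NODE            # 12
cores_per_chassis = cores_per_node * NODES_PER_CHASSIS    # 48
cores_per_rack = cores_per_chassis * CHASSIS_PER_RACK     # 1,152
total_cores = cores_per_rack * RACKS                      # 13,824

print(f"{cores_per_rack} cores per rack, {total_cores} cores across {RACKS} racks")
```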

Spectrum needed a network that could support all of that computing power, and that could grow with the company’s needs. “What I was looking for was an environment that would significantly increase the throughput of the existing cluster but offer us the potential to scale up going forward,” says Lyons. Spectrum also uses Dell workstations and clients to help render data on the front end.

Lyons understood the problem and set clear goals for the new network. The former system was based on a single core network switch with limited capacity, creating a bottleneck for Spectrum’s data.

Network requirements defined

Lyons defined Spectrum’s new network backbone as one that would provide greater capacity, resiliency, and scalability. He found the best fit in a non-blocking architecture based on Dell™ Force10™ Z9000 core switches. Two Z9000 switches are connected in a distributed design along with four new Dell Force10 S4810 top-of-rack switches. Most of the legacy rack switches will be replaced gradually over the next few years, says Lyons. The plan also includes supplementing existing hardware as needed to meet demand.
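
To make the layout concrete, here is a simplified sketch of the kind of two-spine, four-leaf fabric described above. The switch names and the single 40Gb uplink per leaf-spine pair are illustrative assumptions, not Spectrum’s actual cabling plan.

```python
# Simplified model of the distributed core described in the article:
# two Z9000 core (spine) switches, four S4810 top-of-rack (leaf) switches,
# every leaf uplinked to every spine so traffic always has multiple paths.
spines = ["z9000-1", "z9000-2"]
leaves = ["s4810-1", "s4810-2", "s4810-3", "s4810-4"]

# Each (leaf, spine) pair gets a 40GbE uplink; losing one spine reduces
# uplink capacity but leaves every leaf reachable over the other spine.
fabric_links = [(leaf, spine, "40GbE") for leaf in leaves for spine in spines]

for leaf, spine, speed in fabric_links:
    print(f"{leaf} --{speed}--> {spine}")
```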

Meeting demands for energy exploration

Increased capacity

The refreshed backbone delivers capacity through 40Gb connectivity between the core switches and the top-of-rack switches, and between the core switches and a pair of switches that connect to storage, explains Lyons. The S4810 switches connect to the servers at 10Gb, eliminating the throughput issues. “We were already buying cluster nodes with 10Gb connectivity. Having the performance there was very key to us,” Lyons affirms.
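
A quick illustration of what those link speeds mean at a single top-of-rack switch follows. The assumption that all four of an S4810’s 40GbE uplinks are cabled toward the core is ours, not a detail from the article.

```python
# Rough per-switch throughput illustration for the 40Gb/10Gb design.
uplinks_per_tor = 4        # assumed: all four S4810 40GbE uplinks in use
uplink_speed_gb = 40
server_link_gb = 10        # each cluster node connects at 10Gb

tor_uplink_capacity = uplinks_per_tor * uplink_speed_gb   # 160 Gb/s to the core
nodes_at_line_rate = tor_uplink_capacity // server_link_gb

print(f"Each ToR switch can carry {tor_uplink_capacity} Gb/s toward the core,")
print(f"enough for {nodes_at_line_rate} nodes sending at a full 10 Gb/s at once.")
```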

Improved resiliency

The redesigned network uses a distributed architecture that spreads out traffic loads and also provides redundancy for the system. According to Lyons, a failure in either of the Z9000 core switches will cause a slight performance drop, but will not affect routine business. In addition, the S4810 switches have cross-connections that support any-to-any connectivity between server nodes at line-rate and are designed to fail over if a fault occurs. The distributed core architecture also allows one node to be brought down or replaced without having any impact on the overall switch fabric.


Scalability and open design

Lyons explains that the pair of Dell Force10 Z9000 core switches are designed for scalability and easy growth, and that was a critical factor in selecting the technology. “For every rack that we’re putting in, we have 320-gigabit capacity down to each rack. We can support eight racks from one of those Z9000s. In the future, by simply adding more Z9000s, we can scale the whole thing up, with no additional bottlenecks being introduced; all we’re doing is adding more paths,” he says. The distributed design approach is interoperable with all existing IP and Ethernet technologies and also allows the use of any standards-based Layer 3 protocol, such as OSPF or BGP.
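
The arithmetic behind those numbers can be sketched as follows. The split of each rack’s uplinks across the two core switches (four 40GbE links to each) and the Z9000’s nominal 32-port count are inferred assumptions that happen to reproduce the figures Lyons quotes.

```python
# Scaling arithmetic behind the capacity figures quoted above.
UPLINK_GB = 40
UPLINKS_PER_RACK_PER_SPINE = 4        # inferred, not stated in the article
SPINES = 2                            # Z9000 core switches today
Z9000_40G_PORTS = 32                  # nominal 40GbE port count per Z9000

capacity_per_rack = UPLINK_GB * UPLINKS_PER_RACK_PER_SPINE * SPINES
racks_per_spine = Z9000_40G_PORTS // UPLINKS_PER_RACK_PER_SPINE

print(f"{capacity_per_rack} Gb of capacity down to each rack")   # 320
print(f"{racks_per_spine} racks supported per Z9000")            # 8

# Adding more Z9000 spines adds more equal-cost paths rather than a
# bigger central chokepoint, which is the scaling property Lyons describes.
```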


Overall, Spectrum’s IT environment is running at peak performance and meeting the company’s technical and business goals. Lyons reports that data processing time has been cut in half because his servers are no longer waiting for access to data.


Cuttell agrees, noting, “Reliable equipment that works well means that we don’t have so many errors, we don’t have delays, and we can then get the data out as quickly as possible.”

Susan Fogarty works for Dell as the editor of Catalyst magazine.


Download the white paper on distributed core architecture design:
http://bit.ly/vq7mXK