The shift to 10G networks is under way. According to the Network Observations blog, over half of enterprises (2,500+ users) will have made the shift to 10G networks by the end of 2008. The trend is not limited to the United States: close to 25% of global businesses are also reported to be joining the race to 10G this calendar year.

While these numbers are relevant to larger businesses and corporations, smaller companies will also soon require such extensive bandwidth to manage daily IT and network operations. In preparation, vendors have begun to drive demand through the use of aggressive marketing and price reductions.

With reduced prices on 10G equipment, many organizations are choosing 10G immediately when making new technology purchases. After all, why buy older, slower technology at comparable prices when your organization can begin preparing for the future now?

THE CHALLENGE: MONITORING 10G

Given the current state of the economy, network operations teams are being challenged to do “more with less,” a phrase that has become pervasive enough to serve as an industry theme of late. This trend is showing up in 2009 budget estimates, which are expected to fall by an average of 2.5% from 2008 levels, according to Gartner Research. In response, decision makers are forced to evaluate all capital purchases more thoroughly and make hard decisions about canceling or delaying some of them.

10G projects are not immune to the budget crunch. Although the cost of 10G equipment has come down recently, it still sells at a premium over 1G tools. At the same time, enterprises face the daunting task of monitoring 10G networks to ensure that their business-critical applications are secure and running at acceptable performance levels.

With the move to 10G, many IT strategists are concerned about whether they will need to upgrade the many different types of network and application monitoring tools that they have already purchased. These business-critical tools include application monitors, intrusion detection systems, compliance tools, data recorders, VOIP monitors, and protocol analyzers. Few organizations have the budget to upgrade some, let alone all, of these tools.

THE SOLUTION: TOOL AGGREGATION

Imagine a world where you can use your 1G tools to monitor a 10G network. It can be done, thanks to two important enablers:

1. Most tools only need to see a small fraction of the network traffic to do their jobs. In fact, sending more data than is required actually degrades efficiency, because tools cannot keep up.
2. Tool Aggregation, a new industry trend, enables traffic to be filtered and dynamically directed to the correct tools. With this technique, you can increase monitoring coverage and save money.

Tool Aggregation enables traffic to be received at 10G bandwidths and filtered on Layer 2/3/4 criteria. In most cases, traffic from a 10G link can be reduced to 1G or less by filtering out data that a tool does not need to see, so your existing 1G tools can still be used. If the filtered traffic still exceeds 1G, operators can keep using their 1G tools by load balancing the traffic across two of them. With proper filtering, multiple 10G links can often be monitored with a single 1G tool.
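The load-balancing step described above only works for session-aware tools if all packets of a given flow land on the same 1G port. Below is a minimal sketch of that idea, assuming packets are represented as dicts; the port names and field names are hypothetical, and a real aggregation device would do this in hardware:

```python
# Sketch: splitting a filtered 10G stream across two 1G tool ports.
# A stable hash of the flow 5-tuple keeps each flow on one tool.
import hashlib

TOOL_PORTS = ["tool-1g-A", "tool-1g-B"]  # two 1G analyzer ports (assumed names)

def tool_port_for(packet):
    """Pick a 1G output port by hashing the flow 5-tuple, so every packet
    of a flow reaches the same tool (needed for session-aware analysis)."""
    flow = (packet["src_ip"], packet["dst_ip"],
            packet["src_port"], packet["dst_port"], packet["proto"])
    digest = hashlib.md5(repr(flow).encode()).digest()
    return TOOL_PORTS[digest[0] % len(TOOL_PORTS)]

pkt = {"src_ip": "10.0.0.1", "dst_ip": "10.0.0.2",
       "src_port": 49152, "dst_port": 443, "proto": "tcp"}
assert tool_port_for(pkt) == tool_port_for(pkt)  # deterministic per flow
```

Hashing on the 5-tuple is one common design choice; balancing on fewer fields (e.g. source IP only) trades evenness of the split for coarser affinity.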

So exactly how should traffic be filtered? It depends on the tools you are using, the applications you are monitoring, and your business objectives. For example, a typical application performance monitoring tool only needs to see TCP traffic from the specific application ports that it is monitoring. Likewise, most VOIP monitors only need to see certain protocols such as SIP, SCCP, and MGCP. Tools work most efficiently when they are sent only the specific traffic that each tool needs. Only then can 1G tools be used to monitor 10G links.
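As a rough illustration of per-tool criteria like those above, the sketch below maps each tool to a Layer 3/4 match rule. The tool names are hypothetical and the port numbers are the well-known defaults for these protocols (SIP 5060, SCCP 2000, MGCP 2427), not a recommended configuration:

```python
# Sketch: per-tool filter criteria over packets represented as dicts.
FILTERS = {
    # App monitor: TCP traffic on the monitored application ports only.
    "app_monitor": lambda p: p["proto"] == "tcp"
                             and p["dst_port"] in {80, 443, 8443},
    # VOIP monitor: signaling protocols only, matched by well-known port.
    "voip_monitor": lambda p: p["dst_port"] in {5060,   # SIP
                                                2000,   # SCCP
                                                2427},  # MGCP
}

def tools_for(packet):
    """Return every tool whose filter this packet matches."""
    return [tool for tool, match in FILTERS.items() if match(packet)]
```

In practice, these criteria would be expressed in the aggregation device's own rule syntax rather than in code, but the mapping from tool to "only the traffic it needs" is the same.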

FILTERING: THE KEY INGREDIENT

Filtering may seem like a straightforward concept, but in reality there is more to it. Incomplete or incorrect filtering can compromise monitoring coverage.

Tool Aggregation products differ in three key areas: ease of use, accuracy, and self-maintenance.

Ease of use: Does the system offer an intuitive interface / GUI?

Some available systems require the user to enter many lines of complex and cryptic filtering rules via a command line interface (CLI). Other systems offer drag-and-drop GUIs that cut the required management time from hours to minutes. Your network operations team is already being stretched to do “more with less,” so your chosen solution should be as easy to use as possible.

Accuracy: Does the system automatically handle overlapping packets?

Overlapping packets meet the filter criteria of more than one tool and therefore need to be sent to multiple tools so each tool can do its job. This case is easily overlooked, but overlapping packets are common in most data centers. If they are handled incorrectly, your tools will not see all the right packets, and your monitoring coverage will be severely compromised. Why invest in purchasing and deploying powerful and expensive tools if you do not send them all the packets they need?

Typical filters run in sequence. Sequential filtering applies the first tool’s filter, then passes only the remaining traffic along to subsequent tools. The problem with this approach is that downstream tools fail to get the full set of data they need to monitor. For systems that use a CLI to manage filters, correcting this problem is excessively difficult and taxing on the operator: it is not uncommon for overlapping packet filters to require coding over one hundred lines of complex rules. In a down economy, who has the budget to add headcount so you can keep an expert in the filter-coding language on staff?
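The coverage gap that sequential filtering creates can be shown in a few lines. In this sketch (tool names and packet fields are illustrative, not from the source), an IDS filter that matches all TCP traffic sits ahead of a web monitor that wants HTTPS, so first-match-wins delivery starves the web monitor of every packet the IDS consumed:

```python
# Sketch comparing sequential filtering (first match consumes the packet)
# with overlap-aware filtering (every matching tool gets a copy).
filters = {
    "ids":         lambda p: p["proto"] == "tcp",        # IDS wants all TCP
    "web_monitor": lambda p: p["dst_port"] == 443,       # overlaps with IDS
}

packets = [{"proto": "tcp", "dst_port": 443},
           {"proto": "tcp", "dst_port": 25},
           {"proto": "udp", "dst_port": 53}]

def sequential(pkts):
    seen = {tool: [] for tool in filters}
    for p in pkts:
        for tool, match in filters.items():
            if match(p):
                seen[tool].append(p)
                break          # packet consumed; downstream tools starve
    return seen

def overlap_aware(pkts):
    seen = {tool: [] for tool in filters}
    for p in pkts:
        for tool, match in filters.items():
            if match(p):
                seen[tool].append(p)   # copy to every matching tool
    return seen

# Sequentially, the web monitor never sees the HTTPS packet the IDS consumed.
assert len(sequential(packets)["web_monitor"]) == 0
assert len(overlap_aware(packets)["web_monitor"]) == 1
```

The overlap-aware version is what "sending each tool all the packets it needs" means in practice: delivery is a copy per matching tool, not a handoff down a chain.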

Insist on solutions that automatically and accurately handle the filtering of overlapping packets. You simply specify the data each tool should receive, and the system takes care of the complexity.

Self-Maintenance: Does the system automatically adjust your filters when changes occur in your network configuration?

Overlapping packet filter rules are not just difficult to set up initially with a sequential, CLI-based filtering system. They also have to be maintained each time a change is made to the network, the tools, or the filter settings. And let’s face it: your network is continuously changing. Failure to keep up with manual filter maintenance via a CLI results in significant compromises in coverage, as tools stop getting the data they require to do their jobs. Yet IT departments do not have the resources to keep a dedicated filtering expert on staff. If you seek to maximize monitoring coverage as well as operational practicality, do yourself a favor and look for a solution that automatically maintains filters as your network changes.
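One way to picture the self-maintenance property described above: the operator keeps a declarative per-tool specification, and the system regenerates every concrete rule whenever anything changes. The sketch below assumes that model (class and method names are hypothetical); a real device would push the recompiled rules to hardware:

```python
# Sketch: declarative per-tool specs from which concrete filter rules
# are recompiled on every change, instead of hand-edited CLI rule lists.
class FilterManager:
    def __init__(self):
        self.specs = {}   # tool name -> set of destination ports it wants
        self.rules = {}   # tool name -> compiled match function

    def set_spec(self, tool, ports):
        """Record what the tool wants; regenerate all rules automatically."""
        self.specs[tool] = set(ports)
        self._recompile()

    def _recompile(self):
        # Rebuild every rule from the specs, so no stale hand-edits survive.
        self.rules = {tool: (lambda p, ports=ports: p["dst_port"] in ports)
                      for tool, ports in self.specs.items()}

mgr = FilterManager()
mgr.set_spec("app_monitor", [443])
mgr.set_spec("app_monitor", [443, 8443])   # network change: app adds a port
assert mgr.rules["app_monitor"]({"dst_port": 8443})
```

The point of the design is that the operator only ever edits the spec; the error-prone step (keeping the expanded rule set consistent) is automated away.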


KEY BENEFITS

– Use 1G tools to monitor 10G links

– Use 100MB tools to monitor 1G links

– Filter traffic so each tool gets only the data it needs, enabling it to operate at full efficiency, even in mixed 10G / 1G environments

– Reduce costs for deploying, managing, and operating monitoring tools


By yanam49
