StarBase Blog

Ferraris on the Autobahn

Author: Alan Moulsdale, Technical Director, StarBase

Network virtualisation is an enabling technology that makes performance tests more realistic and their results a sounder basis for business decisions. I am going to make the case that performance testing should always include network virtualisation.

Network performance is all about bandwidth! Isn’t it?

A crude analogy: bandwidth is like a road, and the cars on it are the data being transferred.

A 4-lane motorway will allow more cars to travel on it than a country road, but each car cannot exceed the speed limit for the road (in my analogy at least), so there will always be a minimum journey time for each car travelling from A to B.

In network performance terms the journey time is the latency, and this is dictated by the distance from A to B and the speed limits on each road along the way. The variation in journey times from one trip to the next is referred to as jitter.

If more cars join the road, the journey times will remain the same until there are too many cars trying to use the road. When this congestion happens the journey time for each car will increase. In network performance terms, if more data is being transferred than the bandwidth of the link allows, then latency and jitter will increase.

Your application will send data to and receive data from its users and other applications. The geographical distance between these applications and the speed of the network between them will govern the minimum time this takes and therefore will contribute to the performance of the application. If your application sends a lot of messages, such as a high volume trading application, or large amounts of data, such as a document management system, then distance and speed will have the greatest impact.

So, latency and jitter, not just bandwidth, will govern the performance of your application.
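To see why, here is a toy Python model of the time to deliver one message: one-way latency plus serialisation time. All figures are illustrative assumptions, not measurements, but they show that for small messages a tenfold bandwidth upgrade barely moves the needle.

```python
# Toy model: delivery time = one-way latency + serialisation time.
# The link figures below are illustrative assumptions, not measurements.

def transfer_time(size_bytes: float, bandwidth_bps: float, latency_s: float) -> float:
    """Seconds to deliver one message of size_bytes over a single link."""
    return latency_s + (size_bytes * 8) / bandwidth_bps

# A 1 KB request over a 100 Mbps link with 80 ms of latency...
slow = transfer_time(1024, 100e6, 0.080)
# ...versus the same link upgraded to 1 Gbps: latency still dominates.
fast = transfer_time(1024, 1e9, 0.080)

print(f"100 Mbps: {slow * 1000:.2f} ms, 1 Gbps: {fast * 1000:.2f} ms")
```

For a chatty application sending thousands of small messages, it is the 80 ms of latency per message, not the bandwidth, that sets the floor on response time.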

Add to this the fact that not all the data will complete its journey. Data/packet loss is common, especially on saturated networks and mobile cellular networks, and data can get damaged or malformed along the way. This means the data needs to be resent, and the latency increases further.
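The cost of loss can be sketched with a toy retransmission model. It assumes each send is lost independently with the same probability and that every lost attempt costs one fixed retransmission timeout; both are simplifications, and the loss rates and timeout below are invented for illustration.

```python
# Toy retransmission model: with independent loss probability p per attempt,
# the expected number of sends is 1 / (1 - p). Illustrative figures only.

def expected_attempts(loss_rate: float) -> float:
    """Mean number of sends needed to deliver one packet."""
    return 1.0 / (1.0 - loss_rate)

def effective_latency(latency_s: float, loss_rate: float, rto_s: float) -> float:
    """Mean delivery time: each lost attempt costs one retransmission timeout."""
    retries = expected_attempts(loss_rate) - 1.0
    return latency_s + retries * rto_s

# 2% loss with a 200 ms retransmission timeout adds ~4 ms on average;
# a saturated link at 20% loss adds ~50 ms on top of the base latency.
print(effective_latency(0.040, 0.02, 0.200))
print(effective_latency(0.040, 0.20, 0.200))
```

Even modest loss rates therefore show up as extra, and highly variable, latency rather than as an obvious "lost data" symptom.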

How does this relate to performance testing?

Typically a performance test lab will be built on the same network as the application that is being tested, sometimes even in the same rack of servers. For network performance this is the equivalent of using Ferraris on their own private autobahn!

And this is good! Performance tests will be able to highlight the performance characteristics of the application and identify performance defects at the highest possible usage without the network constraining the tests.

However, what will the true performance of the application be when it is on its real network?

A common practice amongst performance testers is to quote response times ‘at the edge of the application’ and state that network times will vary.

Another common practice is to generate the load from different points in the network, and from other remote locations. This can be somewhat effective, as the data will be travelling across the real network, and from a ‘political’ point of view you can tell troublesome users you tested it from their office. However, it has drawbacks:

  • Does testing have to be out of hours?
  • Does the load need to be restricted?
  • Does testing suffer from unpredictable network traffic?
  • How can you test against network peaks and events?
  • How can you benchmark performance against changes in bandwidth and connection type?

This is where Network Virtualisation comes in.

The modern network virtualisation tools allow you to model your real network in the performance test lab. Complex network hierarchies can be introduced, each with their own bandwidth, latency and jitter characteristics.
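As a sketch of what such a model contains, here is a minimal Python representation of a network hierarchy. The hop names and figures are invented for illustration; in practice a tool such as Linux's tc/netem applies parameters like these to real traffic in the lab.

```python
# Sketch of modelling a network path: each hop has its own bandwidth,
# latency, jitter and loss; end to end, the path is limited by the
# slowest hop and accumulates the other characteristics.
# Hop names and figures are invented for illustration.
from dataclasses import dataclass

@dataclass
class Hop:
    name: str
    bandwidth_mbps: float
    latency_ms: float
    jitter_ms: float
    loss_pct: float

def end_to_end(hops: list) -> dict:
    return {
        "bandwidth_mbps": min(h.bandwidth_mbps for h in hops),
        "latency_ms": sum(h.latency_ms for h in hops),
        "jitter_ms": sum(h.jitter_ms for h in hops),
        "loss_pct": sum(h.loss_pct for h in hops),  # small-loss approximation
    }

path = [
    Hop("office LAN", 1000, 0.5, 0.1, 0.0),
    Hop("WAN link", 10, 25.0, 5.0, 0.1),
    Hop("data centre LAN", 1000, 0.5, 0.1, 0.0),
]
print(end_to_end(path))
```

Note the small-loss approximation: summing per-hop loss is only reasonable for low loss rates; strictly, the end-to-end delivery probability is the product of the per-hop delivery probabilities.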

When you performance test with network virtualisation, previously undiscovered performance characteristics of your application come to light. For example, when data takes longer to transfer, connections are held open for longer, consuming more resources on the application and potentially leading to performance issues.

Without network virtualisation these performance characteristics would only become apparent when the application is put live.

Network virtualisation also allows you to answer important what-if questions such as:

  • Would increased bandwidth improve the application’s performance?
  • What benefit would we see if we changed the WAN to MPLS?
  • What impact would the application’s performance suffer if the network backup ran during the online day?
  • Does the application perform acceptably on 3G and 4G?
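Questions like the last one can even be explored numerically before a test is run. A rough sketch follows; the 3G/4G bandwidth and latency figures, the page size and the round-trip count are all ballpark assumptions, not measurements from any carrier or application.

```python
# What-if sketch: estimated time for a user to fetch a 500 KB page.
# All network figures below are ballpark assumptions for illustration.

def page_load_s(size_bytes: float, bandwidth_bps: float, latency_s: float,
                round_trips: int = 4) -> float:
    """A page fetch typically costs several round trips (DNS, TCP, TLS, HTTP)
    before the payload itself is serialised onto the link."""
    return round_trips * latency_s + (size_bytes * 8) / bandwidth_bps

three_g = page_load_s(500_000, 2e6, 0.100)   # assume ~2 Mbps, 100 ms latency
four_g = page_load_s(500_000, 20e6, 0.050)   # assume ~20 Mbps, 50 ms latency

print(f"3G: {three_g:.2f} s, 4G: {four_g:.2f} s")
```

A virtualised network lets you replace these paper estimates with measured response times under each profile.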

A smart approach to test would be:

  1. Test the application without network virtualisation – this will identify performance defects, provide performance characteristics for the application and allow it to be tuned for maximum performance
  2. Introduce network virtualisation and re-test the application – this will identify further, network-related, performance defects and allow further tuning of the application for specific network uses
  3. Optionally run single users in remote locations against the network-enabled performance tests. This will allow you to see whether those remote locations match the performance characteristics of the virtualised networks, and will give a level of reassurance to the previously mentioned ‘troublesome’ users

So network performance is governed by more than just bandwidth: you have to consider latency, jitter, data and packet loss, and malformed or damaged data to get a true picture of how your network performs. The network can have a considerable impact on the performance of your applications, especially highly transactional or content-rich applications, which, let’s face it, most applications are becoming. Traditional performance testing will not highlight application performance issues caused by the network; to make the most of your investment in performance testing, you should include network virtualisation.

Find out more about the author Alan Moulsdale, Technical Director

Posted by: Alan Moulsdale
Technical Director, StarBase
Alan has worked for StarBase since 1997. He now has responsibility for identifying and formulating new products, services and solutions; and for evangelising StarBase’s solutions to existing and prospective clients. Alan architected StarBase’s Performance Testing Methodology. In addition to this being the foundation of many successful StarBase client projects, elements have also been adopted by blue chip organisations including companies in the financial sector and leading system integration specialists.

