It’s execution time!
We’d not really thought about think times whilst we were creating the scripts, and in VuGen we were running them with think time turned off. We put the scripts in the Controller and one script took over 3 hours to run a single iteration! Our first test was to run each script in turn for 3 iterations – that was going to take over 18 hours for 6 scripts!
We discussed this with the Subject Matter Expert and cut the think times down to 120 seconds to read a report and 180 seconds to do the input. This increases the load, but in the “safe” direction, so we just cut the number of users a bit. In fact we trimmed most of the scripts so that each ran in under 30 minutes per iteration.
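The think-time/user-count tradeoff comes straight from Little’s Law: users ≈ throughput × (response time + think time), so shortening the think time means fewer virtual users generate the same transaction rate. A rough sketch – all the numbers here are illustrative assumptions, not figures from our test:

```python
# Little's Law sketch: users = throughput * (response_time + think_time),
# so for a fixed target throughput, shorter think times need fewer users.
# All numbers below are illustrative assumptions, not measurements.

def users_needed(target_tps: float, response_time_s: float, think_time_s: float) -> float:
    """Virtual users required to sustain target_tps transactions per second."""
    return target_tps * (response_time_s + think_time_s)

# Hypothetical original pacing: ~600 s of think time per report cycle.
original = users_needed(target_tps=0.5, response_time_s=30, think_time_s=600)
# Trimmed pacing: 120 s to read a report.
trimmed = users_needed(target_tps=0.5, response_time_s=30, think_time_s=120)

print(round(original), round(trimmed))  # 315 75
```

Same offered load, roughly a quarter of the users – which is why cutting think times while trimming the user count a little still errs on the “safe” (heavier) side.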
This raised another issue, nothing to do with BPC or LoadRunner: our RDP-based injectors close their sessions (close rather than disconnect) after an hour of inactivity – and “inactivity” seems to mean no keyboard input – so running unmonitored tests was not practical. We’re getting this changed for the next testing cycle, but this cycle will be completed before the CR gets actioned (update: the change was made in 2 days rather than the 15 of the SLA, so we no longer have to babysit the injectors manually).
The main aim of our testing is to measure the report refresh times. The Subject Matter Expert’s big fear is that too many requests will cause timeouts. By default a timeout occurs after 5 minutes – 5 minutes from submission – so multiple users requesting the same report may create queues, and some requests may not even get started before they time out. We actually found one process that timed out during our recording, so that was the first issue identified before we even got started on the testing phase: always nice to prove your worth early. A configuration change fixed it, but it turned out that all testing up until then had been on sub-100-row data requests; we were at full power with several hundred thousand rows (an interesting question to muse upon: is 400,000 a few hundred thousand or several hundred thousand?)!
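Because the timeout clock starts at submission, queueing time eats into the 5-minute budget before a request does any work. A toy single-server model makes the SME’s fear concrete – the service time and user count here are invented numbers, not measurements:

```python
# Toy model: N users submit the same report at once, served one at a time.
# The 5-minute (300 s) timeout is counted from submission, so the request
# at queue position k waits k * service_time before it even starts.
# SERVICE_TIME_S is an assumed figure for illustration only.

TIMEOUT_S = 300          # default timeout, measured from submission
SERVICE_TIME_S = 90      # assumed time to refresh one report

def timed_out(position: int) -> bool:
    """True if the request at this queue position exceeds the timeout."""
    wait = position * SERVICE_TIME_S          # time spent queuing
    return wait + SERVICE_TIME_S > TIMEOUT_S  # total elapsed at completion

results = [timed_out(k) for k in range(6)]
print(results)  # [False, False, False, True, True, True]
```

With these made-up numbers only the first three requests complete; everyone behind them times out without the server ever touching their request – exactly the queueing failure mode we were asked to look for.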
More Disks Please!
The data loading also resulted in us running out of disk space, so more disks were ordered. This would have manifested itself during the production go-live cycle, so there’s another good reason to performance test with production volumes. The Subject Matter Expert was rather pleased to have found that one before go-live; the three-week lead time on new disk requests would have been very embarrassing and would probably have delayed go-live.
The infrastructure is hosted and we’re not allowed access to the server performance metrics other than by requesting them on a daily basis, and even then they’re only reported at 15-minute intervals. This has limited our monitoring and reporting: we’re strictly limited to reporting response times, and the change in response times under load. This sort of makes the analysis easy (there’s not much to analyse) but makes diagnosing issues difficult. We’ve got a good Subject Matter Expert though, and he’s on top of all that; BPC itself seems to give him everything he needs. We’re also running a session with SAP’s VTO team, and they should be able to give us a few pointers to performance tweaks.
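With only response times to work with, the analysis boils down to comparing timing distributions between a baseline run and a loaded run. A minimal sketch of that comparison – the sample timings are invented, not our actual results:

```python
# Compare response-time percentiles between a baseline and a loaded run.
# The timings are invented sample data, not results from our tests.
from statistics import median, quantiles

baseline = [12.1, 13.4, 12.8, 14.0, 13.1, 15.2, 12.5, 13.9]      # seconds
under_load = [14.3, 16.8, 15.1, 18.0, 15.7, 21.4, 14.9, 17.2]    # seconds

def p90(samples):
    """90th percentile (inclusive method interpolates within the sample)."""
    return quantiles(samples, n=10, method="inclusive")[-1]

print(f"median: {median(baseline):.1f}s -> {median(under_load):.1f}s")
print(f"p90:    {p90(baseline):.1f}s -> {p90(under_load):.1f}s")
```

Comparing the median and a high percentile (rather than the mean alone) shows whether load is slowing everyone down or just producing a long tail of slow refreshes – about as much diagnosis as response times alone allow.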
Part 5 will discuss our testing, results, conclusions and what we’ve learnt about BPC testing. We need to finish the testing first though!