It isn't easy, but it is easier than without measuring (#7)
by tmoertel on Sat Sep 22, 2001 at 07:23:50 PM EST
How do you test for concurrent performance penalties
without modifying the timing of those concurrencies?
You drive the loads from remote hosts and take the measurements from
those points. Typically you measure HTTP response time: You send an
HTTP request and start the stopwatch. When you get the HTTP response,
record the elapsed time. Distribute thousands of such
measurements over hundreds of virtual users, each an independent thread issuing requests
according to a usage profile you have developed from analyzing the logs
(i.e., generate a real-world load that you can "dial up"). Repeat any
number of load tests (typically 30 minutes to several hours each), increasing the load in between, and statistically
analyze the resulting measurements.
Resource-consumption measurements (CPU, page faults, ...) and
implementation-specific timings must be taken on the hosts under test,
but the perturbation they introduce is usually negligible. Remember, you're not looking for
rare race conditions as much as hideously mistuned or missing database
indexes, disk contention, thrashing, resource bottlenecks, and so
forth -- all fairly easy to spot under heavy load (if you're taking measurements).
 Analyzing the logs, writing programs to drive the virtual users, and compiling volume data to feed the virtual users with the logins, IDs, and other inputs they need to appear as independent users crawling through your application -- this is nasty, hard work. But you must do it if you want realistic load tests, and that's why I put load testing last on the list. Hope you don't need to go that far to tune your app.
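For the log-analysis step, here's a hedged sketch of turning raw access-log lines into a usage profile that virtual users can replay. The common-log-format regex and the two helper names are my assumptions, not part of any particular tool:

```python
import collections
import random
import re

# Matches the request path in a common-log-format line (an assumption
# about your server's log format).
LOG_LINE = re.compile(r'"(?:GET|POST) (\S+) HTTP/[\d.]+"')

def build_profile(log_lines):
    """Map each requested path to its share of total traffic."""
    counts = collections.Counter()
    for line in log_lines:
        m = LOG_LINE.search(line)
        if m:
            counts[m.group(1)] += 1
    total = sum(counts.values())
    return {path: n / total for path, n in counts.items()}

def sample_requests(profile, k, rng=random):
    """Draw k paths weighted by the profile, so a virtual user's
    request mix resembles real traffic."""
    paths = list(profile)
    weights = [profile[p] for p in paths]
    return rng.choices(paths, weights=weights, k=k)
```

Feeding each virtual user a distinct stream drawn this way -- plus its own login and IDs from your volume data -- is what makes the simulated users look independent rather than identical.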
My blog | LectroTest