Functional testing is making sure the application functions properly and according to specifications. Performance testing is making sure the site works just as well with 1,000 users as it does with one. To that end, our team will create software that simulates concurrent user traffic in a variety of randomized use cases to mimic expected load in production and beyond.
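As an illustration of the idea, concurrent traffic can be simulated by running many virtual users in parallel, each replaying a randomized mix of use cases. The sketch below is a minimal, hypothetical example using only the Python standard library; the use-case paths are placeholders, not a description of VPS's actual tooling.

```python
import random
import threading
import time
from urllib.request import urlopen

# Hypothetical use-case paths; in practice these come from the
# application's real workflows.
USE_CASES = ["/", "/login", "/search", "/checkout"]

def simulated_user(base_url, results, n_actions=5, seed=None):
    """One virtual user performing a randomized sequence of use cases."""
    rng = random.Random(seed)
    for _ in range(n_actions):
        path = rng.choice(USE_CASES)
        start = time.perf_counter()
        try:
            urlopen(base_url + path).read()
            ok = True
        except Exception:
            ok = False
        # record (use case, response time, success) for later analysis
        results.append((path, time.perf_counter() - start, ok))

def run_load(base_url, n_users=10):
    """Launch n_users concurrent virtual users and collect timings."""
    results = []
    threads = [threading.Thread(target=simulated_user,
                                args=(base_url, results, 5, i))
               for i in range(n_users)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results
```

Real load generators add pacing, ramp-up, and distributed execution on top of this basic pattern.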
Performance testing is the most reliable way to be sure your application can meet the concurrent demand of your users while still delivering a pleasant user experience. When integrated into the development cycle, performance testing identifies performance bottlenecks before they affect your users and, with appropriate sprint planning, allows time to correct these issues before go-live.
Ensures that productivity does not decrease due to poor application performance under load.
Ensures that hardware is sized correctly; this is especially important in cloud environments, as overestimation of hardware requirements can lead to unnecessary expenditures.
Decreases the number of issues the help desk must field.
If functional testing verifies that the system is “fit for use” according to requirements, performance testing assures that the system is fit for use by many users simultaneously.
There are several kinds of tests that may be called performance tests, but they all work toward the same goal: ensuring a system performs as designed at a predicted level of usage.
Performance tests simulate multiple users completing actions in a production-level environment and can take on many forms.
The types of tests executed depend on the application under test and the needs of the business. Experienced performance test engineers work with business and product owners to design a suite of performance test types that ensures successful operation after deployment to production.
Testing is part of the accepted software development cycle, and performance testing is a specialized and valuable type of software testing to validate that applications not only work properly, but work properly under a heavy user load. Our performance test engineers are ready to supplement your team or develop and execute a test plan from the ground up.
Software testing, including performance testing, never takes place in isolation, and in addition to testers, VPS can also field teams to assist at all points in the SDLC, from requirements to release.
Please contact us to learn more about how you can leverage VPS’ industry experience and knowledge to drive a successful software delivery.
To be successful and efficient at scripting performance tests, planning and preparation must be the cornerstone of your methodology. It is essential for the performance engineer to have a solid high-level understanding of the System Under Test (SUT).
What is the system’s intended purpose and what are the main use cases?
Having achieved an understanding of what the system does, it is next critical to understand the system’s user ecosystem, including the different user roles and their assigned capabilities, common use cases of the end users (both internal and external), the projected total number of users, projected number of users in each role, etc.
Performance testing every piece of functionality within a system of any size is impossible, so once a holistic picture of the application is obtained, it becomes necessary to target specific functionality.
Typically, in this scenario we will apply the Pareto principle, which is an accepted rule of thumb that 80 percent of the effects witnessed in a system are caused by 20 percent of the functionality available within the system. In other words, we focus on the functionality that will be used by the largest number of users. Additionally, input is obtained from the developers and product owners who would have insight into which functionality should be targeted for testing.
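As a rough sketch of how that targeting might work in practice, the snippet below picks the smallest set of transactions that together account for 80 percent of observed traffic. The usage counts are hypothetical placeholders; real numbers would come from access logs, analytics, or stakeholder input.

```python
# Hypothetical usage counts per transaction (e.g. from access logs).
usage = {
    "search": 5200,
    "view_item": 3100,
    "login": 900,
    "checkout": 450,
    "edit_profile": 200,
    "admin_report": 150,
}

def pareto_targets(usage_counts, coverage=0.80):
    """Return the smallest set of transactions that together account
    for at least `coverage` of total observed traffic, busiest first."""
    total = sum(usage_counts.values())
    targets, covered = [], 0
    for name, count in sorted(usage_counts.items(),
                              key=lambda kv: kv[1], reverse=True):
        targets.append(name)
        covered += count
        if covered / total >= coverage:
            break
    return targets
```

With the counts above, "search" and "view_item" alone cover 83 percent of traffic, so they would be the primary scripting targets.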
With the tools available today to assist in creating performance test scripts, producing a basic script is as simple as manually following the steps of the use case in a browser and recording the requests and responses generated.
However, creating a realistic and fully automated test script that models the complex behaviors of typical users is much more challenging. Each additional configuration parameter, such as user throughput, think timers, or conditional execution based on business rules, increases the script's complexity.
Identify use cases and business assumptions for data load
Manually click through the use case in a browser to understand it and create the basic script
Transform the recording into a dynamic script
Turn the script into a valid test scenario: add timers, set the target user load, and set thresholds
Introduce the script into the master test plan
Review results and perform analysis
Present results to the business and make recommendations
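The scenario-building step above (timers, target load, thresholds) can be sketched as follows. This is a minimal illustration, assuming a scenario defined as a list of steps; the step labels, think times, and the `pacing` parameter are all hypothetical, not part of any particular tool.

```python
import random
import time

# Hypothetical scenario: each step replays a recorded request, waits a
# randomized think time, and has a response-time threshold in seconds.
SCENARIO = [
    {"label": "open_home",  "think": 1.0, "threshold": 2.0},
    {"label": "run_search", "think": 2.0, "threshold": 3.0},
    {"label": "view_item",  "think": 1.5, "threshold": 2.5},
]

def run_scenario(send_request, rng=None, pacing=1.0):
    """Replay each step, flagging any that breaches its threshold.

    `send_request` replays the recorded request for a given label;
    `pacing` scales think times (0 disables them, e.g. for dry runs).
    """
    rng = rng or random.Random(0)
    breaches = []
    for step in SCENARIO:
        start = time.perf_counter()
        send_request(step["label"])
        elapsed = time.perf_counter() - start
        if elapsed > step["threshold"]:
            breaches.append(step["label"])
        # randomized think time models a human pausing between actions
        time.sleep(rng.uniform(0.5, 1.5) * step["think"] * pacing)
    return breaches
```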
VPS executes performance tests using servers on our global cloud infrastructure
Allows tests to scale to an effectively unlimited number of concurrent users
Servers can be configured in any region to simulate global traffic
Guaranteed network throughput, ensuring that network latency does not create false positives during testing
Steps in Script Execution
Configuring Test Boxes (local and cloud)
Ensuring Sufficient System Resources
Memory (RAM), bandwidth, CPU
Configuring Each Test Case
Combining Test Cases into a Test Plan
Executing Test Cases across Multiple Test Boxes
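As a small illustration of the last step, the target concurrent-user load has to be divided across the available test boxes. A minimal sketch, with hypothetical box names:

```python
def split_users(total_users, boxes):
    """Distribute a target concurrent-user count evenly across test
    boxes, giving any remainder to the first boxes in the list."""
    base, rem = divmod(total_users, len(boxes))
    return {box: base + (1 if i < rem else 0)
            for i, box in enumerate(boxes)}

# e.g. 1,000 users spread across three cloud test boxes
plan = split_users(1000, ["box-a", "box-b", "box-c"])
```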
The Performance Test Portal (PTP) is an application conceptualized and built by Vision Point Systems to expedite the analysis of performance test results and to deliver them in a form easily interpreted by business stakeholders, while still providing valuable data to technical personnel.
This application provides a central, searchable repository for performance test data for the life of the project and beyond. This includes test properties such as the number of users, ramp time, sample time, start time, and end time. Additionally, two performance tests can be selected for comparison, and the application will return the deltas between the tests.
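PTP's internal implementation is not described here, but the delta comparison between two tests can be sketched as below, assuming each test reduces to per-request average response times; the request labels and numbers are hypothetical.

```python
# Hypothetical per-request average response times (seconds) from two
# uploaded test runs.
run_a = {"login": 0.42, "search": 1.10, "checkout": 2.30}
run_b = {"login": 0.45, "search": 0.95, "checkout": 2.80}

def compare_tests(baseline, candidate):
    """Return per-request deltas (candidate minus baseline) for
    requests present in both tests, rounded to milliseconds."""
    return {label: round(candidate[label] - baseline[label], 3)
            for label in baseline.keys() & candidate.keys()}
```

A positive delta flags a request that got slower between the two runs, which is exactly the kind of regression a comparison view is meant to surface.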
Using PTP to process analysis of performance tests is simple:
Execute the test.
Download the raw results.
Log in to PTP.
Create a new test.
Upload the results.
From the raw results, the application creates a report that breaks out any request that fails to meet defined SLAs, displays a graphical representation of the requests generated per second throughout the life of the test, and includes a table that contains all the results captured by the test.
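The two computations behind that report, the SLA breakout and the requests-per-second view, can be sketched as follows. The sample tuples and SLA values are hypothetical stand-ins for whatever schema the raw results use.

```python
from collections import Counter

# Hypothetical raw samples: (request label, timestamp sec, elapsed sec).
SAMPLES = [
    ("login",  0.2, 0.4), ("search", 0.8, 1.2),
    ("login",  1.1, 0.5), ("search", 1.6, 3.4),
    ("search", 2.3, 1.1), ("login",  2.9, 0.3),
]
SLAS = {"login": 1.0, "search": 3.0}  # max acceptable response time (s)

def sla_breaches(samples, slas):
    """List every sample whose elapsed time exceeds its label's SLA."""
    return [(label, elapsed) for label, _, elapsed in samples
            if elapsed > slas[label]]

def requests_per_second(samples):
    """Bucket samples by whole second to approximate throughput."""
    return dict(Counter(int(ts) for _, ts, _ in samples))
```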
This screen can be exported to PDF for reporting purposes. Furthermore, a user can drill down by clicking the label of any request listed. At this point the application presents various graphs of the data compiled for that specific request/action. These views can also be exported to PDF.
In the field, PTP has proven valuable by identifying issues in applications and environments that were not initially visible to users. For example, one application test showed that during the sample time there was a 4-minute period in which requests-per-second throughput dropped to 25% of the target. Upon investigation, PTP revealed that for each request targeted in the test, average response times rose by more than 400% during the same 4-minute period. It was immediately apparent that the issue lay in the environment and was not an anomaly in the test itself, which gave the team a starting point for investigating and remediating the issue.
PTP is included with any VPS performance test engagement. If you would like your current performance test team to utilize PTP, licensing is available.
Drastically reduces the effort to process raw performance test results
Deep dive into results data
Provides views into data not previously available to testers
Visual representation of results allows testers to quickly identify patterns and anomalies.
Performance tests can produce a lot of data, but determining what the data says is a different task. Once a performance test is complete, the next phase of performance testing begins. This phase consists of analysis, investigation, and remediation. Analysis of the test results leads to an investigation of the root cause. Once the root cause has been identified remediation efforts can begin.
Process the raw results in order to create usable data for investigation and remediation.
Identify performance bottlenecks in the use case that was tested.
Results of analysis drive investigation and remediation.
Analysis identifies HTTP requests with poor response times
Investigation identifies the particular bottleneck behind a specific request, for example a long-running SQL query or inefficient code
Collaboration with Developers and Product Owners facilitates successful investigation
Investigation looks at Application/Server logs in the System Under Test during the test period
Results of investigation lead to remediation
During remediation the performance test team will offer options to optimize code or work with the dev team to assist in optimizing the application for performance
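To illustrate how analysis results can drive investigation, the sketch below ranks requests by average response time so the team can start with the worst offenders; the labels and averages are hypothetical.

```python
def slowest_requests(averages, top_n=3):
    """Rank requests by average response time, worst first, so
    investigation starts with the biggest offenders."""
    return sorted(averages.items(),
                  key=lambda kv: kv[1], reverse=True)[:top_n]

# Hypothetical per-request averages (seconds) from an analysis run.
averages = {"login": 0.4, "search": 2.9, "export_report": 7.5, "home": 0.2}
```

In practice a triage list like this would be cross-referenced with application and server logs from the same test window to locate the underlying bottleneck.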
This phase of analysis, investigation, and remediation is where an experienced performance test team is critical. During analysis, a seasoned performance engineer can discern whether the test is valid based on the number of samples and the throughput. During investigation, an experienced team will have a good idea of where to begin based on the analysis results. Finally, the most effective performance engineers have a software engineering background, which is vital during remediation: a performance engineer with a foundation in software engineering can offer insight and recommendations for remediating an issue once it has been identified.
In an iterative software development process, performance testing accelerates the team’s progress towards the goal of high quality software.