The Appium test framework gives us the flexibility to choose our own test development ecosystem; we opted for Java. Our web automation framework for playback was already using TestNG and Maven, and it was working quite well for our needs.
TestNG (http://testng.org/doc/index.html) provides the testing framework to manage test cases, test scripts, and test suites effectively (goal 3 from the previous blog). Maven (https://maven.apache.org/) is the build automation tool that helps manage the test framework's build and reporting.
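As a sketch of how TestNG lets us organize test cases into suites, a suite file might group test classes like the following. The suite, test, and class names here are illustrative, not our actual ones:

```xml
<!-- testng.xml: illustrative suite layout, not our actual file -->
<suite name="PlaybackAutomation" verbose="1">
  <test name="QuickLook">
    <classes>
      <class name="com.example.playback.BasicPlaybackTest"/>
    </classes>
  </test>
  <test name="DeepDive">
    <classes>
      <class name="com.example.playback.BackgroundingTest"/>
    </classes>
  </test>
</suite>
```

Maven's Surefire plugin can then be pointed at this file so that `mvn test` runs the suites as part of the build.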
Adding Debugging Capabilities to the Test Framework
The following two debugging capabilities were added to the test framework to catch bugs efficiently:
a) Ooyala SDK event logs generated when running the test automation scripts.
b) A snapshot feature to visually inspect the device screen when a test fails.
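As a rough sketch of the snapshot feature: on failure, the raw PNG bytes of the device screen (which in an Appium framework would come from a call such as `((TakesScreenshot) driver).getScreenshotAs(OutputType.BYTES)`) are written to a per-test file. The class, method, and directory names here are illustrative, not our actual implementation:

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

// Sketch of a failure-snapshot helper. In the real framework the PNG bytes
// would come from Appium's screenshot API; here they arrive as a parameter.
public class FailureSnapshot {

    // Save the screenshot bytes under a per-test path and return that path.
    public static Path save(byte[] pngBytes, String testName, Path baseDir) {
        try {
            Files.createDirectories(baseDir);
            return Files.write(baseDir.resolve(testName + ".png"), pngBytes);
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }
}
```

Keying the file name on the test name makes it straightforward to attach the right image to the right test case in the report later.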
Report generation after Test Execution
Needless to say, there are plenty of open source reporting tools available. We chose the Allure reporting tool (http://allure.qatools.ru/) and integrated it into our test framework.
We hacked into the Allure tool to show both debugging features (the snapshot and the SDK event log) for each test case.
This helped in having a single go-to place for debugging and viewing overall test execution reports.
The snapshots below show the Allure report with the device snapshot and Ooyala SDK log for each test case executed.
The text link ‘appium-android-test-2-dev #1298’ next to the Allure text/symbol in the snapshots above links to the corresponding Jenkins job.
The snapshot below shows the Allure summary report for a particular Jenkins job number.
Developing Quick Looks and Deep Dives:
We divided the automation scripts to be developed into two suites for each sample app (Android and iOS).
1) Quick Look
Quick Look checks whether the most basic functionalities in the sample app are working correctly. For example: when the play button is clicked, the "playStarted" event is triggered; when the video is paused, the pause event is triggered; and so on.
2) Deep Dive
Deep Dive checks how the sample app features behave when interacting with the device ecosystem as a whole.
Example: If a user goes back to the home screen while a sample app video is playing, then returns to the sample app video screen, verify that the video continues to be in the "play" state.
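A quick-look check like the play-button example above essentially reduces to scanning the captured SDK event log for the expected event. A simplified stand-in for that assertion logic (the event names and log format here are illustrative, not the actual Ooyala SDK output):

```java
import java.util.List;

// Simplified stand-in for a quick-look assertion: scan the captured SDK
// event log for an expected event name. Event names are illustrative.
public class EventLogCheck {

    // Return true if any captured log line mentions the expected event.
    public static boolean eventFired(List<String> logLines, String eventName) {
        for (String line : logLines) {
            if (line.contains(eventName)) {
                return true;
            }
        }
        return false;
    }
}
```

A test would click play through Appium, collect the SDK log lines, and then assert `eventFired(lines, "playStarted")`.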
Scaling up Test Execution
Given how diverse the device ecosystem is, as explained earlier, the multiplier benefit of test automation comes from executing the automated tests on many different device and OS version combinations.
Scaling up test execution in the Android ecosystem
The Appium tool provides the ability to run test scripts in parallel on multiple devices attached to a single test machine, with each device served on a different port. The number of USB ports available per test machine is a limiting factor.
To overcome this, we use programmable and non-programmable USB hubs, which let us increase and control the exact number of USB ports available per test machine at any given time. This helps us get the maximum benefit from a single test bench.
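For parallel execution, each connected device needs its own Appium server port. A minimal sketch of that port assignment follows; 4723 is Appium's default port, and the +2 spacing (leaving room for an adjacent bootstrap port per server) is an assumption for illustration, not our exact scheme:

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Sketch: assign each connected device (by serial/udid) a unique Appium
// server port so tests can run in parallel on one machine.
// 4723 is Appium's default port; the +2 spacing is an illustrative choice.
public class PortAllocator {

    public static Map<String, Integer> assign(List<String> deviceIds, int basePort) {
        Map<String, Integer> ports = new LinkedHashMap<>();
        int port = basePort;
        for (String id : deviceIds) {
            ports.put(id, port);
            port += 2;
        }
        return ports;
    }
}
```

Each test process is then pointed at its own (device, port) pair when it creates its Appium driver session.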
Scaling up test execution in the iOS ecosystem
In the iOS ecosystem, the limitation is that test scripts can be run on only one device per test machine.
This limitation is again overcome with a programmable USB hub. Using the hub, a single machine can physically have multiple iOS devices attached to it, but only one device is made visible to the test machine at a time to run automation.
Putting it all together through Jenkins Jobs
Building sample apps
Both the Android and iOS Ooyala SDKs come with sample apps for customer use. We created Jenkins jobs to make sure that all the sample apps build successfully. If a regression is introduced and any one of the apps fails to build, we flag it to the developers.
Running Quick Look and Deep Dive
Another Jenkins job runs the Quick Look suite as a basic sanity check on the sample apps that build successfully.
For thorough testing, we run the Deep Dive Jenkins jobs.
Building an Effective Dashboard
Designing and building an effective dashboard is an essential part of any test automation project: it gives us data-based feedback about test executions.
We initially did a POC for the dashboard using the Django web framework with MongoDB as the database. Even though the POC worked well, we realized that it would take significant effort to build and maintain.
We were looking for a more cost-effective solution, and that is when we decided to use Google Sheets. After each Jenkins execution, a JSON payload is constructed with the relevant test data and posted to a tab of a Google spreadsheet. The nice part of Sheets is that it lets us generate relevant charts based on the posted test data.
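The per-build payload can be a simple flat JSON object built with plain Java; a minimal sketch is below. The field names are illustrative (not our actual schema), and in the real pipeline this string would then be POSTed to the spreadsheet, e.g. via a Google Apps Script web endpoint:

```java
// Sketch of building the per-Jenkins-run JSON payload with plain Java.
// Field names are illustrative; the real payload is posted to a tab of a
// Google spreadsheet after each Jenkins execution.
public class DashboardPayload {

    public static String buildJson(String suite, int jobNumber, int passed, int failed) {
        return "{"
                + "\"suite\":\"" + suite + "\","
                + "\"jenkinsJob\":" + jobNumber + ","
                + "\"passed\":" + passed + ","
                + "\"failed\":" + failed
                + "}";
    }
}
```

Keeping the payload flat makes it trivial for the spreadsheet side to map each JSON field to a column.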
The snapshot below shows the data populated in the spreadsheet from the Jenkins job:
The snapshot below shows charts populated from the posted test data:
It was a great experience to architect and build the automation framework for the playback Android and iOS SDKs from scratch. As part of the project, we were empowered to try different ways of solving problems and to choose the solutions that best enhanced the user experience.