My thoughts on hiring web application testers

in #testing · 7 years ago

This is an excerpt from my new book on building software teams. I'd appreciate any feedback!

There are only so many tests that are relatively easy to automate. For the ones that can't really be automated, an extra layer of testers will give your team a real advantage. Let me briefly cover which tests can be automated and what testers should cover, based on my own experience in front-end development. For back-end and other tech areas, your mileage may vary. A word of warning: there is sometimes a bit of overlap between these three types of testing, as the definitions are not set in stone. For the following three types of automated testing, I recommend delegating as much as possible to your developers.


Unit tests

These are small parts of your application that can be tested in isolation from the rest of it. Normally these would be JavaScript files that can be imported into your tests without many external dependencies and then tested with test frameworks such as Jasmine or Mocha. In React applications this would include things like reducers and shallow-rendered components. In Ember, this would be things like controllers, models and routes running in isolation in the Ember QUnit test runner. In vanilla JS this would be importing your JavaScript files and testing their public APIs. You could also class rendering vanilla JS components to the page as unit testing, but some would call that integration testing. Can you see how it's kind of hard to pinpoint the difference? Get used to this grey area!
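To make that concrete, here's a minimal sketch of what such a unit test might look like, using Mocha's describe/it style with Node's built-in assert module. The counterReducer function is a made-up example, not code from a real project:

```js
// A minimal unit test sketch, assuming it runs under Mocha.
const assert = require('assert');

// The code under test: a pure function with no external dependencies.
function counterReducer(state = { count: 0 }, action) {
  switch (action.type) {
    case 'INCREMENT':
      return { count: state.count + 1 };
    default:
      return state;
  }
}

describe('counterReducer', () => {
  it('returns the initial state for unknown actions', () => {
    assert.deepStrictEqual(counterReducer(undefined, { type: 'NOOP' }), { count: 0 });
  });

  it('increments the count on INCREMENT', () => {
    assert.deepStrictEqual(counterReducer({ count: 1 }, { type: 'INCREMENT' }), { count: 2 });
  });
});
```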

Integration tests

These are tests where parts of your application are exercised together with other parts, typically with some aspects mocked out. For example, in React you would mock the store and API calls and do deeper mounting using a library like Enzyme. Ember also has its own equivalent integration test helpers. The application logic you test here can overlap with acceptance tests, so you have to decide whether you want to duplicate that test logic or split it up.
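Here is a rough sketch of what that might look like in React, assuming Mocha plus Enzyme with an adapter already configured in your test setup. The UserList component and its api prop are purely illustrative:

```js
// Integration test sketch: a component that calls an injected API is
// mounted deeply with Enzyme, and the API layer is mocked out.
const assert = require('assert');
const React = require('react');
const { mount } = require('enzyme');

// A tiny component under test: fetches users via a prop-injected api object.
class UserList extends React.Component {
  constructor(props) {
    super(props);
    this.state = { users: [] };
  }
  componentDidMount() {
    this.props.api.fetchUsers().then(users => this.setState({ users }));
  }
  render() {
    return React.createElement(
      'ul',
      null,
      this.state.users.map(u =>
        React.createElement('li', { key: u.id, className: 'user-row' }, u.name)
      )
    );
  }
}

describe('<UserList />', () => {
  it('renders a row for each user returned by the mocked API', async () => {
    // Mock the API so no real network request is made.
    const api = {
      fetchUsers: () => Promise.resolve([{ id: 1, name: 'Ada' }, { id: 2, name: 'Grace' }]),
    };

    const wrapper = mount(React.createElement(UserList, { api }));

    // Let the pending promise resolve, then refresh the Enzyme wrapper.
    await Promise.resolve();
    wrapper.update();

    assert.strictEqual(wrapper.find('li.user-row').length, 2);
  });
});
```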

End-to-end testing or acceptance testing

This is where you test the application running in its native environment with full features: for example, opening the website in the browser and interacting with it using button clicks and other user events. There are two schools of thought regarding who should write these tests. Some companies prefer to delegate the task of writing acceptance tests to dedicated QA people, and some prefer the developers to write them. I am personally of the opinion that developers should write them, as I don't see the point of the separation. It was probably better for QA people to write these tests when Selenium was the industry standard, but now that we can write excellent tests in JavaScript with the likes of Nightwatch (which wraps Selenium and WebDriver) and especially TestCafe, which doesn't need Java at all, I feel this is the way to go.
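As a taste of how readable these can be, here's a minimal TestCafe sketch. The page URL, selectors and expected text are made-up placeholders, not from a real app:

```js
// A minimal TestCafe sketch: drives a real browser against a running app.
import { Selector } from 'testcafe';

fixture('Login flow')
  .page('https://example.com/login'); // placeholder URL

test('shows a welcome banner after a successful login', async t => {
  await t
    .typeText('#email', 'user@example.com')
    .typeText('#password', 'correct-horse-battery-staple')
    .click('#login-button')
    .expect(Selector('.welcome-banner').innerText)
    .contains('Welcome');
});
```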

I actually wrote a Chrome extension that assists developers in writing acceptance tests, in order to encourage more developers to write them. It supports various acceptance test runners, including Nightwatch and Ember-CLI. It works by listening to user interactions in the app while it is in a functioning, correct state (the happy path) and generating the test code to play back those interactions. It also generates some assertions based on mutations. You can check out the code here: https://github.com/QuantumInformation/test-recorder.
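To give a rough idea of the kind of code such a recorder produces, a generated Nightwatch test might look something like the following. The URL, selectors and assertion here are illustrative, not actual output from the extension:

```js
// Hypothetical example of a recorded "happy path" played back as a Nightwatch test.
module.exports = {
  'user can add an item to the basket': function (browser) {
    browser
      .url('http://localhost:3000/products') // placeholder URL
      .waitForElementVisible('body', 1000)
      .click('.product-card:first-child .add-to-basket')
      .assert.containsText('.basket-count', '1')
      .end();
  }
};
```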

Feel free to try it out and let me know how it goes on the issue tracker or send over some pull requests.

Manual testing

This is the domain where humans test the application using a checklist of criteria. The more automated tests your company has, the less work this will be for your testers and the fewer of them you will have to employ. However, the greater the number of devices you need to test on, and especially older versions of IE, the more testers you need; otherwise, your Jira issues will take longer to get through your QA process. I've not come across companies where I have worked that use pixel-to-pixel testing, where you compare the rendered form of the application to a stored image. I just wouldn't want to have to set this up, but it is done out there in the wild by some companies. Testing the rendered forms of web pages across different devices and browser widths is just easier when you do it by hand. Can you imagine trying to write tests that evaluate the position of all your elements at different screen widths? Neither can I.

The human eye can quickly scan up and down your page to check whether your media queries and other CSS rules are producing your design at different widths. If you don't have a design team that gives you designs for different browser widths, then either train them or hire designers who can. This tool is pretty useful in that regard: https://pinegrow.com/

Hiring tenacious testers

QA testing is hard work, so you should pay your testers generously. They will likely get a lot of stick from developers for getting in the way of merging features. However, don't hire testers who will respond to developer annoyance in kind; that will just annoy the developers even more and create friction in your team. Testers are there to stop bugs from getting into production, not to add to the stress developers feel when under deadlines. Testers should have the patience of an oak tree to deal with the inevitable clashes they will have with developers. If they can keep their cool, though, they will eventually earn the respect they deserve from the developers.

They also need to accept that their job will be extremely repetitive; there's no way I could do it as a developer. They have to be willing to test the same app on the same devices for every story they are assigned. You should make it clear to your developers just how demanding this is.

Tenacious testers also need to realise that when bugs they didn't catch make it into production, they are often the first to get the blame, just after the product manager.

Conclusion

So, in general, your testers are to be considered a premium resource and paid accordingly. This is a high-stress position that demands great skill. Not many people can do what they do, and they are your last line of defence against bugs getting into production.


Hi @quantuminfo, thank you for sharing this piece. I like the excerpt.
I am a tester myself and would like to mention a process flow which I find more engaging and more fruitful.
As a QA I get involved at the very beginning of the process (requirements gathering). Tests get developed right after the requirements are frozen and are sent to the developers to serve as a blueprint of the feature. Unit tests are written and performed by the developers. You have mentioned that E2E tests should be handled by the developers, but I think end-to-end flow tests are better in the testers' hands, as developers work only on a specific portion of the requirement, whereas QA is involved in the entire software life cycle.

Would love to hear your views. Thanks.

Thanks for the advice, mate, happy to hear more about your process.

I have been on Steemit for a few days now. Finally, a post about software testing. You made my day today, buddy... :) Thanks again.

So when you write your tests before the developers start, how do you write them? Selenium?

First, wireframes and UX get developed, and then manual tests get written based on them. These manual tests are sent to the developers, and in parallel automation test cases are written in BDD style using Cucumber (based on the manual tests).

As you can see, manual tests are of paramount importance, and each and every feature gets tracked to BDD tests. These tests are in sync with the manual tests, which in turn are linked to the requirements.

Hope this helps. Sorry for the delay. :-)

I haven't used anything for UI comparison. The UI/UX team is generally involved in UI testing. They test the HTML UI against the actual design, which is prepared in Photoshop (or any other tool). Pixel-to-pixel comparison is too much of a hassle. It's better if the design team tests it.