
When I taught a Software Testing course

This Spring (2016), I taught an upper-level undergraduate Software Testing course (CIS640) at Kansas State University. Here are my observations.

No one textbook covers 'em all: While designing the course, I identified the following topics to cover: basic concepts in testing (including specifications/requirements), concepts in xUnit testing frameworks, unit testing, property-based testing, coverage criteria, test quality, TDD, ATDD, and BDD. (Nope, we did not cover all of these topics.)

While there were books dedicated to each topic (e.g., TDD: By Example, xUnit Test Patterns, ATDD by Example, BDD in Action), there was no book that covered all of the identified topics. Also, there was no book that covered topics such as property-based testing. Further, general software testing textbooks such as Software Testing: Concepts and Operations and Software Testing: A Craftsman’s Approach seemed either too deep for upper-level undergrads or a bit dated in their content.

So, I prepared slides containing the content distilled from various sources. For topics such as property-based testing, I used blogs and videos, e.g., an introduction to property-based testing, choosing properties for property-based testing, and race conditions, distribution, interactions — testing the hard stuff and staying sane.

While the slides can certainly be improved, they seemed to work well with auxiliary material, live coding sessions, and the Socratic method of teaching.

Python is a good language to teach testing: While Java and C# have good support for unit testing, they both lack easy and accessible tool support for property-based testing. In comparison, Python has good tool support in the form of the nose (unit testing), pytest (unit testing), hypothesis (property-based testing), pytest-cov (coverage), lettuce (BDD), and mutpy (mutation testing) libraries. In most cases, these tools were easy to use and well documented. So, we used Python along with these libraries (with the exception of lettuce) in the course.
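
To make the tooling concrete, here is a minimal sketch (a hypothetical example, not one of the actual course exercises) of a pytest test module that uses hypothesis for property-based testing.

```python
# test_sorting_properties.py -- hypothetical example, not from the course.
from collections import Counter

from hypothesis import given
from hypothesis import strategies as st


@given(st.lists(st.integers()))
def test_sorting_is_idempotent(xs):
    # Property: sorting an already-sorted list changes nothing.
    assert sorted(sorted(xs)) == sorted(xs)


@given(st.lists(st.integers()))
def test_sorting_preserves_elements(xs):
    # Property: sorting only reorders elements; none are added or dropped.
    assert Counter(sorted(xs)) == Counter(xs)
```

Running pytest discovers and runs these tests while hypothesis generates (and shrinks) the example lists; with pytest-cov installed, adding the --cov option also reports coverage for the code under test.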

The decision to use Python worked well for the most part: some students were already familiar with Python, some enjoyed learning it, most did not have issues with the language or the tools, and we could automate most of the evaluation of assignments. One drawback of this decision was that students had to write test cases to compensate for the lack of static typing in Python. In other words, a small fraction of the test suite in every assignment was dedicated to testing type correctness; with a statically typed language like Java or C#, we could have avoided these tests by relying on the compiler.
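
Here is a sketch of the kind of type-correctness test Python forced upon us (the top function and its tests are illustrative, not from an actual assignment); with a statically typed language, the compiler would reject the bad call outright.

```python
import pytest


def top(stack):
    """Return the top element of a list used as a stack."""
    if not isinstance(stack, list):
        raise TypeError("stack must be a list")
    return stack[-1]


def test_top_returns_last_pushed_element():
    assert top([5, 3]) == 3


def test_top_rejects_non_list_input():
    # This test exists only because Python does not catch the type error
    # before run time; in Java or C# it would be unnecessary.
    with pytest.raises(TypeError):
        top("not a stack")
```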

Live coding sessions can enhance classroom experience: In most of the classes, I started with a code snippet for which we collectively wrote tests — students identified the aspects of the code to be tested and suggested the test logic while I wrote the code and ran it. When discussing topics such as coverage metrics, we did the same while I wrote on the whiteboard (instead of tapping on my laptop).

Since everyone was trying to “fix” a common code snippet while observing each other’s fixes, this facilitated discussion about alternative fixes and collective “debugging” of issues. On the downside, it entailed some coordination and coaxing overhead, e.g., handling five simultaneous fix suggestions at one moment and coaxing out a fix when none was offered at another.

Socratic method works well: In almost all classes, I would put up new terms/concepts or the intent of a code snippet and ask students to define or describe them based on their current knowledge and past experience. Then, we would collectively group the answers and explore and evaluate each answer group for correctness and precision. This involved me (or even students) posing questions about the responses: Is it correct? Why is it correct? What is missing from it? How can we fix/improve it? How does this answer compare to that answer? Should we combine answers A and B? And so on.

Initially, most of the students were reluctant to participate in these discussions. It took some time to overcome the all-answers-have-to-be-correct barrier and get the students to voice answers independent of their correctness, discuss them, and understand why certain answers were more “correct” than others. Towards the end, most students warmed up to this method and were comfortable in voicing their answers for discussion and participating in the discussions with questions.

While this method does require quite a bit of participatory effort, I think it helps students cultivate an exploratory attitude towards learning, which is a pre-requisite both for dealing with topics involving uncertainty and for working in the real world after graduation.

Requirements need more focus: The first homework in this course was a simple programming problem. Almost all students made assumptions about the problem, but very few bothered to validate those assumptions with the client (me). When we discussed the problem in class (with the Socratic method), many were shocked to learn that their assumptions were not valid according to the client and, often, did not even match their classmates' assumptions :-O

The second homework was a simple example-based (black-box) unit-testing problem with explicit instructions to validate any assumptions with the client (i.e., to elicit missing requirements from the client). While more students contacted the client on this homework, many of them had a hard time eliciting requirements (and systematically exploring possible assumptions).

When we moved on to property-based (black-box) unit testing, many students had a hard time uncovering properties of the stack data structure. While the students clearly knew the operations supported by a stack and their properties, many of them had a hard time constructing properties involving multiple operations of a stack. Further, they had a hard time generalizing examples (e.g., if we push 5 followed by 3 onto a stack, then we will get 3 followed by 5 when we pop the stack twice) into properties (e.g., if we push c1, c2, .. cn onto a stack, then we will get cn,…,c2,c1 when we pop the stack n times).
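
As an illustration of that generalization step, here is a sketch in hypothesis (assuming a simple list-backed Stack class, not the course's actual code) that pairs the concrete example with the property it generalizes to.

```python
from hypothesis import given
from hypothesis import strategies as st


class Stack:
    """A minimal list-backed stack, assumed here for illustration."""

    def __init__(self):
        self._items = []

    def push(self, item):
        self._items.append(item)

    def pop(self):
        return self._items.pop()


def test_push_5_then_3_pops_3_then_5():
    # The concrete example: push 5 then 3, pop yields 3 then 5.
    s = Stack()
    s.push(5)
    s.push(3)
    assert s.pop() == 3
    assert s.pop() == 5


@given(st.lists(st.integers(), min_size=1))
def test_items_pop_in_reverse_push_order(items):
    # The generalized property: push c1..cn, then n pops yield cn,...,c1.
    s = Stack()
    for item in items:
        s.push(item)
    assert [s.pop() for _ in range(len(items))] == list(reversed(items))
```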

[Over subsequent exercises, the students got better at eliciting and validating requirements (and assumptions) and abstracting properties from examples.]

While this seems like an observation about the students, it is really an observation about our courses and curriculum. Clearly, the students lacked training in eliciting, stating, and validating requirements. Further, it seemed like a software testing course was the one introducing them to these skills. Both are big no-nos. Instead, we should have at least one course that focuses (not just pays lip service) on eliciting, stating, and validating requirements.

Looking back, here’s a wishlist for future editions of the course.

  • Use a decent property-based testing tool for Java and/or C#. junit-quickcheck could be a possible candidate.
  • Add a course about eliciting, stating, and validating requirements as a pre-requisite for this course.
  • Find a book that covers current software testing techniques (e.g., property-based testing, random test input generation) and tools, and recommend it as the course textbook.

P.S.: If you have recommendations for textbooks or tools, then please leave a comment with your recommendations.

