- Using a term without introducing it.
As an Android non-expert, when I read “We will focus on activities in Android apps.”, I quickly search the document for the words “activity” and “activities” to find an introduction to the term “activity”.
Fix: Introduce the term in place or earlier, e.g., “We will focus on activities in Android apps — activities are the UI components of Android apps.”
- Using an acronym without providing its expansion.
When I read “CTR>TDD” in a figure, I wonder whether CTR means Click-Thru Rate, Control, or some other metric. Since the article is about Test-Driven Development (TDD), I can likely rule out Click-Thru Rate. However, I am not sure if it means “control set” or some other metric.
Fix: Introduce the acronym with its expansion in the body of the article before the figure, or in the caption of the figure. A reader can then find the expansion easily by either visual or electronic search.
- Framing a two-sided research question and providing a one-sided answer.
When I read the research question “Does the adoption of TDD affect the number of issues reported for the project?” and the answer “TDD repositories seem to have no more issues filed against them than the general repositories represented in the control set.”, I wonder “do TDD repositories end up with fewer filed issues than general repositories? If so, isn’t TDD better?”
Fix: Either a) pose a precise research question or b) provide a precise answer that covers all facets of the posed research question. Ideally, do both. In the above example, “Does the adoption of TDD increase the number of issues reported for the project?” would have been a more precise research question that aligns well with the answer. As an alternative, if supported by the data, “The number of issues filed against TDD repositories is comparable to the number of issues filed against general repositories represented in the control set.” would have been a more precise answer to the original question.
- Stating a data set is limiting and using it for analysis.
When I read “We found that Java TDD projects are relatively rare.”, I think “So, the authors will have used non-Java projects to evaluate TDD”. Upon reading further, I realize the authors have used Java projects to evaluate TDD. Now, I am doubtful of the validity of the findings.
Fix: Use an appropriate data set. As an alternative, read the next peeve.
- Stating a data set is limiting and not mentioning it as a threat to the validity of the results.
When I read “We found that Java TDD projects are relatively rare.” and later find that the authors used Java projects to evaluate TDD, I look for some explanation of how the earlier statement about rarity affected the findings.
Fix: Describe if and how identified limitations/observations are (or are not) threats to the validity of the results. If a limitation poses a threat to the validity of the results, then mention possible ways to address the limitation.
Note: I plan to update this post as I stumble upon other peeves.