SmartQA Community


SmartQA Digest

Great code arises from a confluence of a clean-code mindset, good heuristics/tips, and healthy software engineering habits. This week’s beEnriched section article “10 mindsets, 10 tips & 10 habits to clean code” connects these using articles that were published earlier.
It is neither a great working solution nor brilliant technology that makes the deployment of an enterprise solution successful; it is ‘operationalising’ that is key to success, says Zulfikar in the smartbits video “Operationalising is key”.
Also, this week’s SmartBites brings “10 Thoughts” on Agile mindset, metrics, AI, good vs bad code, “what is technical, after all?” and more, from ten wonderful practitioners.
Hope you have checked out the new SmartQA web site. Guess you have noticed that all prior digests are also available there!



Black box thinking

Learning from failures. The inside story of how success really happens, and how we cannot grow unless we learn from our mistakes.

Read More »





TWO tools to aid smart understanding

T Ashok @ash_thiru


Doing SmartQA is about great mental clarity: visualising what is intended, what is present, and what may be missing that could be added to enhance the experience. The objective of this article is to show how the intent to seek this clarity results in good questions that help us understand better and therefore test well. Two tools for the mind, “Landscaper” and “Deep diver”, that can help are outlined here.

Prevention occurs due to good understanding. So does detection. Understanding of what is needed, what is stated, and what is implemented.

Doing SmartQA is about great mental clarity of visualising what is intended, what is present, and what may be missing that could be added to enhance the experience. Seeking this clarity is what drives one to question well, build better, and prevent and detect issues.

The act of testing is really discovering what should be there but is not there, what is there but is not correct, and what is there but should not be there. Finally, it is about understanding the impact of something that has been changed, be it inside or outside the system.

The key to this is understanding the product/application/solution. Understanding from different points of view: end users, construction, technology, environment, development & deployment.

Smart understanding is about scouring the ‘landscape’ to understand the overall context and the static structure of how the system is built, and then ‘deep-diving’ to understand the intended dynamic behaviour. Landscaper and Deep diver are great tools for the mind to explore the system rapidly to do SmartQA. The associated picture illustrates these two thinking tools well.

Tool#1 Landscaper

Given an entity to test, be it a small component, a big business flow, or the entire system, the first thing to do is to examine it well by performing a ‘landscape’.

  1. Start with understanding who the end users are; then identify the entities (the system offerings), connect them to who uses them and how much, and then go on to figure out what these users may expect from them.
  2. Now switch to a deployment view to understand how the system is deployed and which other systems it is linked with.
  3. Now, from the construction/development point of view, understand what is built new (via fresh code, or glue code that integrates components) and what is being modified.
  4. Continuing with the structural view, understand how these entities may be coupled, to appreciate the interactions and what may be affected by the new/modified entities.
  5. Go a little deeper to understand how the system is architected and the technology stack(s) used to implement it.
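One way to make the landscape tour concrete is to keep it as a checklist of views and prompt questions to walk through for each entity under test. The sketch below is purely illustrative; the view names and questions are my own paraphrase of the five steps above, not an official artefact.

```python
# Hypothetical checklist capturing the "Landscaper" tour: for each point of
# view, a couple of prompt questions to ask about the entity under test.
LANDSCAPE_VIEWS = {
    "end users":    ["Who uses this, and how much?", "What do they expect?"],
    "deployment":   ["How is it deployed?", "Which systems is it linked with?"],
    "construction": ["What is built new (fresh/glue code)?", "What is modified?"],
    "structure":    ["How are entities coupled?", "What is affected by changes?"],
    "architecture": ["How is the system architected?", "Which tech stacks are used?"],
}

def landscape(entity):
    # Yield (view, question) pairs to prompt during the tour of `entity`.
    for view, questions in LANDSCAPE_VIEWS.items():
        for question in questions:
            yield view, f"{entity}: {question}"
```

Walking an entity such as a checkout flow through `landscape("checkout flow")` yields ten prompts, one view at a time, which mirrors the tour described above.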

What we are doing is touring the system from different points of view and attempting to understand the ‘whole’. During this process many questions arise, which enables better clarity of the problem.

Tool#2 Deep diver
Having a good holistic picture of the system under test, it is only natural to dive deeper to understand an entity in detail. Deep diver is the second tool that helps you do this.

  1. First, understand the various inputs to this entity. What are they, where do they come from, and what are their formats, specs, rates and volumes? Are there any interesting values?
  2. Next, understand what the various outputs are and may be. What are the normal outputs, and what are those in situations of error? Do check whether ‘all’ possible outputs have indeed been considered.
  3. Finally, it is time to understand the intended behaviour by discovering the conditions that transform the inputs into the intended outputs. Note that some of the behaviour conditions could be based on system state and not on inputs alone.
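The three deep-dive questions can be recorded as a small structured note per entity: its inputs, its outputs (normal and error), and the behaviour conditions, including state-based ones. The sketch below is illustrative only; the `DeepDive` class and the discount-calculator example are hypothetical, not from the article.

```python
# A lightweight record of a "Deep diver" analysis for one entity under test.
from dataclasses import dataclass, field

@dataclass
class DeepDive:
    entity: str
    inputs: dict = field(default_factory=dict)      # input -> spec / interesting values
    outputs: dict = field(default_factory=dict)     # output -> normal and error cases
    conditions: list = field(default_factory=list)  # behaviour rules, incl. state-based

# Hypothetical example: deep-diving a discount calculator.
dive = DeepDive(
    entity="discount calculator",
    inputs={"cart_total": "0..max; boundary 0; negative is invalid"},
    outputs={"discount": "0-50% of total", "error": "rejects invalid total"},
    conditions=["total > 100 -> 10% off", "state: promo active -> extra 5% off"],
)
```

Writing the analysis down like this makes gaps visible, for instance an error output with no condition that produces it.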

Smart understanding is key to doing less, doing well and accomplishing more. This is a serious mental activity, and doing it well has great paybacks! The two mental tools, “Landscaper” and “Deep diver”, enable a logical approach to decomposing the system (i.e. the problem) well, so that you have great clarity on what to do, how to do it, and how much to do.


Design for testability

In this smartbits video, Girish Elchuri outlines design for testability.

Another practice that I follow is that, during testing, the code is made to behave differently, though I am not altering the functionality. For example, in a particular workflow, I add a mobile number or an email id and validate them later. There may be certain functionalities that are possible only after you verify the mobile number and email id. These are the mundane things for which you cannot do test automation.

So what we do internally is: when we are in test mode, as soon as the mobile number or the email is added, we treat them as verified, we set that flag, so that we can do further testing in a much more efficient manner. That is just an example of how you can make the product behave differently for testability, which is a much more efficient way of doing it; this is the first aspect.
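The auto-verify idea could look like the sketch below. This is a minimal illustration, not the speaker’s actual code: the `User` class, the `add_contact` helper and the `APP_TEST_MODE` environment variable are all assumptions made for the example.

```python
import os

# One switch, e.g. an environment variable, decides whether we are in test mode.
TEST_MODE = os.getenv("APP_TEST_MODE") == "1"

class User:
    def __init__(self, name):
        self.name = name
        self.mobile = None
        self.email = None
        self.mobile_verified = False
        self.email_verified = False

def add_contact(user, mobile=None, email=None, test_mode=TEST_MODE):
    """Record contact details; in test mode, skip the manual OTP / email-link
    verification step and mark the details as verified immediately."""
    if mobile:
        user.mobile = mobile
        user.mobile_verified = test_mode   # normally set only after an OTP check
    if email:
        user.email = email
        user.email_verified = test_mode    # normally set only after a link click
    return user
```

With the flag on, workflows gated on verified contact details can proceed immediately, which is exactly what makes GUI-less automation of the later steps practical.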

Secondly, in a workflow-based product, you do certain things, and for the third step of the workflow you might need some information that is generated in the first and second steps. Normally, this will be visible to the user on the GUI, but in GUI-less test automation it becomes difficult. So what we do is, in testing mode, we actually generate this data as a file and then, using the data generated in the file, we do the next steps in the workflow. This is another way that we make the code behave differently in a testing mode.
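That file hand-off between workflow steps might be sketched as below. The step functions, the JSON format and the file naming are all hypothetical, chosen only to illustrate the idea of dumping GUI-only data to a file in test mode.

```python
import json
import os
import tempfile

def run_first_step(order_id, test_mode=False, dump_dir="."):
    # Data that would normally be shown to the user only on the GUI.
    data = {"order_id": order_id, "token": f"TOK-{order_id}"}
    if test_mode:
        # In test mode, also dump the data to a file for later steps to pick up.
        path = os.path.join(dump_dir, f"step1_{order_id}.json")
        with open(path, "w") as f:
            json.dump(data, f)
        return data, path
    return data, None

def run_third_step(dump_path):
    # GUI-less automation reads the data generated by the earlier step.
    with open(dump_path) as f:
        return json.load(f)
```

In production mode the file is never written, so the only difference under test is the extra dump, not the functionality itself.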

Thirdly, in a small percentage of cases, when the test is being run for subsequent steps, the code itself generates the test automation scripts. So what I do is, instead of just outputting the data, I output a callable script call into the file and, at the end, I just execute that file.
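Generating a runnable script instead of plain data might look like this sketch. The shell-script format, the `record_step` helper and the step names are assumptions for illustration; the point is only that each step appends a callable line, and the accumulated file is executed once at the end.

```python
import os
import subprocess
import tempfile

script_lines = ["#!/bin/sh"]

def record_step(step_name, value):
    # Instead of just outputting the data, emit a callable check into the script.
    script_lines.append(f"echo 'verify {step_name}={value}'")

# Steps of the run append their checks as they execute.
record_step("create_order", 42)
record_step("approve_order", 42)

# At the end of the run, write the accumulated script and execute it.
script_path = os.path.join(tempfile.mkdtemp(), "generated_checks.sh")
with open(script_path, "w") as f:
    f.write("\n".join(script_lines) + "\n")
result = subprocess.run(["sh", script_path], capture_output=True, text=True)
```

In a real product the emitted lines would call actual verification commands rather than `echo`, but the shape is the same: the code under test writes its own checks.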

Since all these behaviours occur during testing, we can have a flag there saying “I am testing”, and then what we can do additionally is make sure that these things are never run in production mode. These are some additional checks that we do. Again, that is where I see a developer helping the testing folks by putting in this additional code to facilitate testing.
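The production guard can be a single predicate that every test-only shortcut consults. The environment variable names below are hypothetical; the essential point, taken from the description above, is that the shortcuts require the test flag to be set and refuse to activate in production.

```python
import os

def test_shortcuts_enabled():
    # Test-only behaviour requires the explicit "I am testing" flag AND
    # must never activate when the environment is production.
    return (os.getenv("APP_TEST_MODE") == "1"
            and os.getenv("APP_ENV") != "production")
```

Routing every test-mode branch through one function like this makes the safety rule auditable in a single place instead of scattering flag checks across the codebase.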