SmartQA Community

#34 – A special on “Automation in Agile Dev”

SmartQA Digest

In today’s world of continuous testing, test automation plays a vital part in ensuring the health of released code. It is no longer about testing alone; it is about automating the whole build-test-deploy lifecycle in the world of Agile/DevOps.
Shivaji Raju outlines the shifts in automation in Agile development and what it takes to accomplish them in this week’s SmartBites Video “Automation in Agile Dev”.
 
The article 15 categories of tooling for digital test automation in the ‘beEnriched’ section presents an interesting visual on automation and tooling, and lists the FIFTEEN broad categories that make up the modern digital testing landscape.
 
Have you used Sketchnotes to take notes in a way that is more effective and fun? Check out the nanoLearning section, where Anuj Magazine talks about visual thinking and Sketchnotes. And oh, the “stayInspired” section features THE book on Sketchnotes by Mike Rohde, an amazing book written by hand using Sketchnotes!

beEnriched


15 categories of tooling for digital test automation

In this article I have tried to picture the landscape of the plethora of tools for testing software, which has moved beyond just testing to continuous build-test-deploy. Alongside the visual, I have listed the FIFTEEN broad categories of tools that make up the modern digital testing landscape.


expandMind


Sketchnote

Sketchnotes are purposeful doodling while listening to something interesting. Sketchnotes don’t require high drawing skills, but do require a skill to visually synthesize and summarize via shapes, connectors, and text. Sketchnotes are as much a method of note taking as they are a form of creative expression.


SmartBites

||VIEWS FROM INDUSTRY LEADERS||

smartbits

||NUGGETS OF LEARNING||

#33 – A special on “Build with Quality”

SmartQA Digest

In this edition of SmartBites Video, Girish Elchuri, Founder & CEO of Smuuth Innovative Solutions, shares what it takes to “Build with Quality” rather than ‘test out’ quality. He shares his perspectives on design, code, process, organisational attitude and the need to execute every line of code as part of early testing.
 
In the ‘beEnriched’ section, the article What does it take to Build In Quality? outlines a set of brilliant ideas curated from four articles: the first suggests ten ways to build high quality into software, the second, from the Scaled Agile Framework, outlines a clear definition of Done, the third highlights how lean thinking and management help, and the last outlines how Poka-Yoke can help with mistake-proofing.
 

Enjoy the poster: “It is not about finding bugs, it is being sensitive to how they can creep in that matters.”
 
In ‘nanoLearning’, Raja Nagendra Kumar outlines four key “Habits to Clean Coding”: (1) ‘Understand that DONE MOVES’, (2) ‘There will be bugs, and for heaven’s sake, LEARN and ADJUST’, (3) ‘It is not just about functionality, constantly focus on NFRs’, and (4) ‘Continually refactor right so that you don’t get into “fire”’.

beEnriched

expandMind

SmartBites

||VIEWS FROM INDUSTRY LEADERS||

smartbits

||NUGGETS OF LEARNING||

Four key habits to clean coding (Raja Nagendra Kumar)

Raja Nagendra Kumar outlines FOUR habits to clean coding:

Firstly, whenever we say ‘DONE’, it is not actually done from the business angle. Expect a lot of problems to come in and handle them; that is what we should keep as the scope of ‘done’, rather than the project manager declaring it done with whatever information I give.

Most of the time what actually happens is this: once something is ‘done’ by the definition of the process, there are a lot of bugs, regressions and fires. All of these are actually telling you where the gap in the ‘DONE’ is. The philosophy of a clean coder should be “Don’t enjoy fire and never be in fire”.

The best way to clean coding is to see issues as opportunities, treating bugs as inputs to make the code better. At some point you will achieve ‘Nirvana’, where you see that the code you’ve written is actually scaling well, performing well and able to adapt to new changes very well. That is my definition of clean code, rather than trying to measure it as a metric.

So, as long as you are able to control the fire through a certain structure of the code, you are achieving clean code, and there is a great benefit: the product can evolve faster.

Summarising the habits:
Habit #1 – ‘Understand that DONE MOVES’
Habit #2 – ‘There will be bugs, and for heaven’s sake, LEARN and ADJUST’
Habit #3 – ‘It is not just about functionality, constantly focus on NFRs’
Habit #4 – ‘Continually refactor right so that you don’t get into “fire”’
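To make Habit #2 concrete, here is a minimal, hypothetical sketch (not from the talk) of treating a bug as an input: the failure is first frozen as a regression test before the fix goes in. It assumes JUnit 5 is on the classpath, and the PriceFormatter class and its behaviour are invented purely for illustration.

```java
// Hypothetical illustration of Habit #2: a reported bug is captured as a permanent
// regression test before the fix, so the same "fire" cannot silently restart.
// Assumes JUnit 5; PriceFormatter is an invented example class.

import java.util.Locale;

import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

class PriceFormatter {
    // Reported bug: negative prices were rendered as "$-5.00" instead of "-$5.00".
    String format(double amount) {
        String digits = String.format(Locale.US, "%.2f", Math.abs(amount));
        return amount < 0 ? "-$" + digits : "$" + digits;
    }
}

class PriceFormatterRegressionTest {
    private final PriceFormatter formatter = new PriceFormatter();

    @Test
    void negativeAmountsKeepSignBeforeCurrency() {  // the bug, frozen as a test
        assertEquals("-$5.00", formatter.format(-5.0));
    }

    @Test
    void positiveAmountsStayUnchanged() {           // guard the original behaviour too
        assertEquals("$5.00", formatter.format(5.0));
    }
}
```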


Automation in isolation is more of a problem!

by Vijay Kumar Gambhiraopet

Businesses view software testing as a physical activity of executing tests. Test execution, being the interfacing activity between software development and business, is most visible and often perceived as the primary reason for delivery delays. Hence, any solution to expedite this activity is readily accepted. 

Run, run, run!

Automating test execution is seen as a silver bullet for this issue. Tests can be generated by script-less tools, which require no scripting, and those with AI embedded promise to generate numerous tests in no time. Once the tests are generated and their execution is automated, they can be run on demand with no dependency on humans and in a fraction of the time humans would take, eventually producing a defect-free product.

Going open source and script-less

Leaders are demanding solutions based on open source due to the ease of use, flexibility to integrate with enterprise solutions and availability of skills. Open source tools are often coupled with robotic solutions to offer a comprehensive solution for enterprise systems based on technologies accumulated over the years.

The race to automate in order to align with Agile practices has triggered demand for a plethora of script-less test automation tools. The percentage of tests automated is a high-focus metric in governance reports. However, the automated scripts generated by these script-less tools are not portable – to protect the business of the solution providers!

The catch-up game

Even as Agile is embraced by customers, automated execution of scripts is often limited to the regression test suite. The functionality developed in a sprint is tested manually, and upon successful testing a representative set is identified for inclusion in the regression suite; these become the candidates for automation. Hence automation is usually playing catch-up, spanning sprints.

Complete automation of a regression suite is targeted for the release schedule. While the velocity of a team is expected to improve over time, the effort to identify test scenarios, source test data, set up test environments and work through defect cycles is ignored. The effort for these tasks accumulates over sprints, leading to team burn-out, and the eventual casualty is automation.

Testability is key to being Agile

An ideal automation strategy, then, is to automate progressively so that a program can be truly agile. The strategy should cover how to identify test cases for automation, make testability a mandatory requisite for developing any functionality, and define procedures that ensure a dedicated, on-demand test environment and test data, with commitment secured from the stakeholders.

The last best experience that anyone has anywhere, becomes the minimum expectation for the experience they want everywhere

Bridget van Kralingen – IBM Leader

“NEW” expectations from QA

To meet this insatiable demand for quality, the responsibility on the tester community is ever increasing. Testers must look beyond the confines of their team, participate in product meetings and agile ceremonies, present a user perspective on the requirements, and convey user priorities so that quality becomes intrinsic to the requirements. Apart from looking deeper into functionality, a tester should start looking higher, into the business objectives. Testers should make quality everyone’s responsibility.

Summary

Automating test execution in isolation ends up being more of a problem than a solution. Any automation solution, whether to enhance quality or to improve test cycles, should encompass tasks across the whole test discipline. Automation should be considered a lever to meet the business objectives, not an objective in itself.


About the author

Vijay works at IBM as Test Automation leader for North America. He has been engaged with multiple clients across geographies and domains.  His professional profile is at  https://www.linkedin.com/in/gambhiraopet/

#32 – A special on “Digital Testing Automation”

SmartQA Digest

In this edition of SmartBites Video, Shivaji Raju, Expert Architect at Allstate Solutions, helps us understand what test automation is in the new digital world and highlights the key shifts.

“Automating test execution in isolation ends up being more of a problem than a solution. Any automation solution, whether to enhance quality or to improve test cycles, should encompass tasks across the whole test discipline. Automation should be considered a lever to meet the business objectives, not an objective in itself.” More in the interesting article Automation in isolation is more of a problem! in the ‘beEnriched’ section, by Vijay Kumar Gambhiraopet, Test Automation Leader for North America at IBM.

“Approximate thinking” is a very necessary skill that allows one to rapidly work out facts and understand them quickly. When I ask workshop participants the simple question “How many hairs do you have on your head?”, the answers have varied from 5K to 5M! Crazy variation, right? “The Art of Profitability” is a brilliant business book that inspired me to delve deeper into this. Read how the book inspired me in the ‘expandMind’ section.

In ‘nanoLearning’, Raja Nagendra Kumar outlines four key “Habits to Clean Coding”: (1) ‘Understand that DONE MOVES’, (2) ‘There will be bugs, and for heaven’s sake, LEARN and ADJUST’, (3) ‘It is not just about functionality, constantly focus on NFRs’, and (4) ‘Continually refactor right so that you don’t get into “fire”’.

beEnriched


Automation in isolation is more of a problem!

Automating test execution in isolation ends up being more of a problem than a solution. Any automation solution, whether to enhance quality or to improve test cycles, should encompass tasks across the whole test discipline. Automation should be considered a lever to meet the business objectives, not an objective in itself.


expandMind

SmartBites

||VIEWS FROM INDUSTRY LEADERS||

smartbits

||NUGGETS OF LEARNING||

Design for Testability – An overview

T Ashok @ash_thiru

Summary

This outlines what testability is, the background of testability from hardware, the economic value of DFT, why testability is important, design principles that enable testability, and guidelines to ease testability of a codebase. It draws upon five interesting articles on DFT and presents a quick overview of DFT.


Introduction

Software testability is the degree to which a software artefact (i.e. a software system, software module, requirements or design document) supports testing in a given test context. If the testability of the software artefact is high, then finding faults in the system (if it has any) by means of testing is easier.

The correlation of ‘testability’ to good design can be observed by noting that code with weak cohesion, tight coupling, redundancy and a lack of encapsulation is difficult to test. A lower degree of testability results in increased test effort. In extreme cases a lack of testability may prevent parts of the software, or of the software requirements, from being tested at all.
(From [1] “Software testability”)

Testability is a product of effective communication between development, product, and testing teams. The more the ability to test is considered when creating the feature and the more other team members ask for the input of testers in this phase, the more effective testing will be.

(From [2] “Knowledge is Power When It Comes to Software Testability” )

Background 

Design for Testability (DFT) is not a new concept. It has been used in electronic hardware design for over 50 years. If you want to be able to test an integrated circuit both during the design stage and later in production, you have to design it so that it can be tested. You have to put the “hooks” in when you design it. You can’t simply add testability later: once the circuit is already in silicon, you can’t change it.

DFT is a critical non-functional requirement that affects almost every aspect of electronic hardware design. Similarly, complex agile software systems require testing both during design and in production, and the same principles apply. You have to design your software for testability, or else you won’t be able to test it when it’s done.
(From [3] “Design for Testability: A Vital Aspect of the System Architect Role in SAFe” )

The Economic Value of DFT 

Agile testing covers two specific business perspectives: (1) enabling critique of the product, minimising the impact of defects being delivered to the user, and (2) supporting iterative development by providing quick feedback within a continuous integration process.

These are hard to achieve if the system does not allow for simple system-, component- and unit-level testing. This implies that Agile programs that sustain testability through every design decision will enable the enterprise to achieve a shorter runway for business and architectural epics. DFT helps reduce the impact of large system scope and affords agile teams the luxury of working with something more manageable, reducing the cost of delay in development by assuring that the assets developed are of high quality and need not be revisited.
(From [3] “Design for Testability: A Vital Aspect of the System Architect Role in SAFe“)

Why is testability important?

Testability impacts deliverability. When it is easier for testers to locate issues, the code gets debugged more quickly, and the application gets to the user faster and without hidden glitches. With higher testability, product and dev teams benefit from faster feedback, enabling frequent fixes and iterations.

Shift-Left – Rather than waiting until test, a whole-team approach to testability means giving your application thoughtful consideration during planning, design and development as well. This includes emphasising multiple facets such as documentation, logging and requirements. The more knowledge a tester has of the product or feature, its purpose, and its expected behaviour, the more valuable their testing and test results will be.
(From [2] “Knowledge is Power When It Comes to Software Testability” )

Exhaustive Testing

Exhaustive testing is practically better, and more easily achievable, if applied in isolation to every component on all possible measures; this adds to quality, instead of trying to test the finished product with use cases that attempt to address all components. This raises another question: “Are all components testable?” The answer is to build components to be as highly testable as possible.

However, in addition to all these isolated tests, an optimal set of system-level tests should also be carried out to ensure end-to-end completeness.

Exhaustive testing means placing the right set of tests at the right levels, i.e. more isolated tests and an optimal number of system tests.

VGP

(From [4] “Designing the Software Testability”)
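To make the “right tests at the right levels” idea concrete, here is a minimal, hypothetical sketch (not from the referenced article) showing many cheap isolated tests on a component plus one end-to-end style check. It assumes JUnit 5 is on the classpath, and the DiscountCalculator example is invented for illustration.

```java
// Hypothetical illustration of "more isolated tests and optimal system tests".
// Assumes JUnit 5; DiscountCalculator and the checkout flow are made-up examples.

import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertThrows;

class DiscountCalculator {
    // An isolated, exhaustively testable unit.
    double apply(double price, int percent) {
        if (percent < 0 || percent > 100) throw new IllegalArgumentException("bad percent");
        return price * (100 - percent) / 100.0;
    }
}

class DiscountCalculatorTest {
    private final DiscountCalculator calc = new DiscountCalculator();

    @Test
    void coversBoundaries() {            // many cheap component-level tests
        assertEquals(100.0, calc.apply(100.0, 0));
        assertEquals(0.0, calc.apply(100.0, 100));
        assertEquals(75.0, calc.apply(100.0, 25));
        assertThrows(IllegalArgumentException.class, () -> calc.apply(100.0, 101));
    }
}

class CheckoutFlowTest {
    @Test
    void endToEndHappyPath() {           // one optimal system-level confirmation
        // In a real suite this would drive the assembled application;
        // here it only sketches the single end-to-end check.
        double total = new DiscountCalculator().apply(200.0, 10);
        assertEquals(180.0, total);
    }
}
```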

“SOLID” design principles

Here are some principles and guidelines that can help you write easily testable code, which is not only easier to test but also more flexible and maintainable, due to its better modularity. A small sketch after the list shows the Dependency Inversion Principle in action.

(1) Single Responsibility Principle (SRP) – Each software module should only have one reason to change.

(2) Open/Closed Principle (OCP) –  Classes should be open for extension but closed to modifications.

(3) Liskov Substitution Principle (LSP) – Objects of a superclass shall be replaceable with objects of its subclasses without breaking the application.

(4) Interface Segregation Principle (ISP) – No client should be forced to depend on methods it does not use.

(5) Dependency Inversion Principle (DIP) – High-level modules should not depend on low-level modules; both should depend on abstractions. Abstractions should not depend on details. Details should depend upon abstractions.

[SOLID = SRP+OCP+LSP+ISP+DIP]
(From [5] “Writing Testable Code” )
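As a minimal sketch of how these principles pay off in testability (my own illustration, not from the referenced article), the following Java snippet applies the Dependency Inversion Principle: the high-level ReportService depends on a Clock abstraction, so a test can inject a fixed clock. All class names here are hypothetical.

```java
// Hypothetical sketch: the Dependency Inversion Principle applied for testability.
// ReportService (high-level) depends on the Clock abstraction rather than on the
// system clock directly, so a test can substitute a deterministic clock.

interface Clock {                        // the abstraction both levels depend on
    long nowMillis();
}

class SystemClock implements Clock {     // low-level detail
    public long nowMillis() { return System.currentTimeMillis(); }
}

class ReportService {                    // high-level module, business logic only
    private final Clock clock;
    ReportService(Clock clock) { this.clock = clock; }   // dependency injected

    String stamp(String title) {
        return title + " @ " + clock.nowMillis();
    }
}

public class DipDemo {
    public static void main(String[] args) {
        // Production wiring uses the real clock.
        System.out.println(new ReportService(new SystemClock()).stamp("Daily build"));

        // Test wiring: a fixed clock makes the output deterministic and checkable.
        Clock fixed = () -> 1_000L;
        String out = new ReportService(fixed).stamp("Daily build");
        if (!out.equals("Daily build @ 1000")) {
            throw new AssertionError("unexpected: " + out);
        }
        System.out.println("deterministic output verified: " + out);
    }
}
```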

Law of Demeter (LoD)

Another “law” which is useful for keeping the code decoupled and testable is the Law of Demeter. This principle states the following: Each unit should have only limited knowledge about other units: only units “closely” related to the current unit. Each unit should only talk to its friends; don’t talk to strangers. Only talk to your immediate friends.
(From [5] “Writing Testable Code” )
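Here is a small, hypothetical Java sketch (my own, not from the referenced article) of what following the Law of Demeter looks like: Order talks only to its immediate collaborator Customer instead of reaching through it into Wallet. All names are invented for the example.

```java
// Hypothetical sketch of the Law of Demeter. Order, Customer and Wallet are
// illustrative names only, not from the referenced article.

class Wallet {
    private double balance;
    Wallet(double balance) { this.balance = balance; }
    boolean canPay(double amount) { return balance >= amount; }
    void deduct(double amount) { balance -= amount; }
}

class Customer {
    private final Wallet wallet;
    Customer(Wallet wallet) { this.wallet = wallet; }

    // A LoD-violating style would expose the wallet: order.getCustomer().getWallet().deduct(...).
    // The LoD-friendly style offers the behaviour itself ("talk to your friends").
    boolean pay(double amount) {
        if (!wallet.canPay(amount)) return false;
        wallet.deduct(amount);
        return true;
    }
}

class Order {
    private final Customer customer;
    private final double total;
    Order(Customer customer, double total) { this.customer = customer; this.total = total; }

    // Order talks only to its immediate collaborator, Customer, which keeps
    // Customer's internals swappable and easy to fake in tests.
    boolean checkout() { return customer.pay(total); }
}

public class DemeterDemo {
    public static void main(String[] args) {
        Order order = new Order(new Customer(new Wallet(50.0)), 20.0);
        System.out.println("paid: " + order.checkout());   // paid: true
    }
}
```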

Guidelines to ease testability of codebase

(1) Make sure your code has seams – A seam is a place where you can alter behaviour in your program without editing that place.

(2) Don’t mix object creation with application logic – Have two types of classes: application classes and factories. Application classes are those that do the real work and have all the business logic, while factories are used to create objects and their respective dependencies.

(3) Use dependency injection
A class should not be responsible for fetching its dependencies, whether by creating them, using global state (e.g. Singletons) or getting dependencies through other dependencies (breaking the Law of Demeter). Preferably, dependencies should be provided to the class through its constructor (see the sketch after this list).

(4) Don’t use global state
Global state makes code more difficult to understand, as the user of those classes might not be aware of which variables need to be instantiated. It also makes tests more difficult to write due to the same reason and due to tests being able to influence each other, which is a potential source of flakiness.

(5) Avoid static methods
Static methods are procedural code and should be avoided in an object-oriented paradigm, as they don’t provide the seams required for unit testing.

(6) Favour composition over inheritance
Composition allows your code to better follow the Single Responsibility Principle, making code easy to test while avoiding an explosion in the number of classes. Composition provides more flexibility, as the behaviour of the system is modelled by different interfaces that collaborate, instead of a class hierarchy that distributes behaviour among business-domain classes via inheritance.
(From [5] “Writing Testable Code” )
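To tie guidelines (1) to (3) together, here is a minimal, hypothetical sketch (my own illustration, not from the referenced article): the constructor is the seam, a factory keeps object creation out of the business logic, and a fake Gateway can be injected for testing. PaymentProcessor, Gateway and the related names are invented.

```java
// Hypothetical sketch combining guidelines (1)-(3): the constructor is the seam,
// a factory keeps object creation out of the business logic, and a fake can be
// injected in tests. All class names are illustrative only.

interface Gateway {
    boolean charge(String account, double amount);
}

class HttpGateway implements Gateway {          // real implementation (details elided)
    public boolean charge(String account, double amount) {
        // ... would call the payment provider over HTTP ...
        return true;
    }
}

class PaymentProcessor {                        // application class: business logic only
    private final Gateway gateway;
    PaymentProcessor(Gateway gateway) { this.gateway = gateway; }   // the seam

    boolean pay(String account, double amount) {
        if (amount <= 0) return false;          // logic that is now trivially unit-testable
        return gateway.charge(account, amount);
    }
}

class PaymentFactory {                          // factory: creation lives here, not in the logic
    static PaymentProcessor production() { return new PaymentProcessor(new HttpGateway()); }
}

public class SeamDemo {
    public static void main(String[] args) {
        // In a test, substitute behaviour at the seam without touching PaymentProcessor:
        Gateway alwaysFails = (account, amount) -> false;
        PaymentProcessor underTest = new PaymentProcessor(alwaysFails);
        System.out.println(underTest.pay("acct-1", 10.0));   // false: failure path exercised
        System.out.println(underTest.pay("acct-1", -5.0));   // false: validation exercised

        // Production code asks the factory for a fully wired instance:
        System.out.println(PaymentFactory.production().pay("acct-1", 10.0));   // true
    }
}
```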

References

[1] “Software testability” at https://en.wikipedia.org/wiki/Software_testability

[2] “Knowledge is Power When It Comes to Software Testability” at https://smartbear.com/blog/test-and-monitor/knowledge-is-power-when-it-comes-to-software-testa/

[3] “Design for Testability: A Vital Aspect of the System Architect Role in SAFe” at https://www.scaledagileframework.com/design-for-testability-a-vital-aspect-of-the-system-architect-role-in-safe © Scaled Agile, Inc.

[4] “Designing the Software Testability” at https://medium.com/testengineering/designing-the-software-testability-2ef03c983955

[5] “Writing Testable Code” at https://medium.com/feedzaitech/writing-testable-code-b3201d4538eb


Approximate thinking

by T Ashok @ash_thiru

Many years ago I read “The Art of Profitability”, a brilliant business book that beautifully outlines TWENTY THREE profit models found in any business. I was blown away then by the style in which the content was conveyed.

It is written in the style of a provocative dialogue between an extraordinary teacher, David Zhao, and his protégé. Each of the twenty-three chapters presents a different business model.

So what inspired me and connected this with QA/testing? In the chapter on “Entrepreneurial Profit” the protégé is amazed at how fast David calculates and spins out numbers. He asks how David is able to calculate blindingly fast without any calculator, to which David says “I cheat”.

David poses the question “How many trucks will it take to empty Mt Fuji if it is broken down?” and illustrates how he could calculate the answer quickly.

“Imagine Fuji is a mile high. That is wrong, but that does not matter; we will fix that later. Now imagine it’s a cone inside a box one mile on each side. To figure out the volume of the box, instead of 5,280 feet on each side use 5,000. So the volume is 5,000 cubed, about 125 billion cubic feet. If Mt Fuji fills about half the cube, then it is roughly 60 billion cubic feet. If each truck can transport 2,000 cubic feet, then it will require about 30 million trucks! Now that you know how to do this, refine the figures. Fuji is more like two miles. Redo the arithmetic.” The protégé is blown away.
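The same back-of-the-envelope arithmetic, written out as a tiny (purely illustrative) Java sketch; all the numbers and the deliberate rounding come from the passage above.

```java
// A tiny sketch of David Zhao's truck estimate; the inputs and deliberate rounding
// follow the quoted passage, in the spirit of approximate thinking.
public class FujiEstimate {
    public static void main(String[] args) {
        double side = 5_000;                 // ft per side, rounded down from 5,280 (one mile)
        double box = side * side * side;     // ~125 billion cubic feet
        double mountain = box / 2;           // assume the cone fills about half the box
        double perTruck = 2_000;             // cubic feet one truck can carry
        double trucks = mountain / perTruck; // ~30 million trucks

        System.out.printf("box = %.0f cu ft, mountain = %.0f cu ft, trucks = %.0f%n",
                box, mountain, trucks);
        // Refinement: if Fuji is closer to two miles high, scale the inputs and redo the arithmetic.
    }
}
```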

That is when it hit me that he was teaching “approximate thinking”: how to rapidly approximate and get facts to analyse further. I have used it many, many times. In the context of QA, estimating load and estimating data volumes are best done by approximate thinking and refinement; just guessing does not cut it.

I wrote the article “How many hairs do you have on your head” to illustrate this. You will enjoy the read!

I love reading different kinds of books; each one gives an interesting insight, and I connect those ideas to what I do, i.e. scientific testing.

Read this book; it will certainly change how you think, and it will also teach you to quickly understand value and profitability.

Cheers.


#31 – A special on “Design for Testability”

SmartQA Digest

In the beEnriched section is an interesting article, “Design for Testability – An Overview”, that outlines what testability is, the background of testability from hardware, the economic value of DFT, why testability is important, design principles that enable testability, and guidelines to ease testability of a codebase, drawing upon five interesting articles on DFT.
 
In this edition of SmartBites Video, Girish Elchuri illuminates us on how Design for Testability is useful in building with quality.
 
“The Art of Profitability” is a brilliant business book. From it, I learnt “approximate thinking”: how to rapidly approximate and get facts to analyse further. Read how the book inspired me in “expandMind”.
 
In nanoLearning, Dr. Arun Krishnan explains why giving up on human intellect would be a mistake in any field. While he is all for AI helping testing, he believes there is still a role for human intellect.

beEnriched

expandMind

SmartBites

||VIEWS FROM INDUSTRY LEADERS||

smartbits

||NUGGETS OF LEARNING||

Role of human intellect in QA (Arun’s view)

Question: In this age of Automation and AI what do you believe is the role of human intellect for QA?

Arun – I always maintain that analytics is a platform; AI or ML is a platform that is going to enable humans to make decisions. For example, there are already models that can predict, based on looking at X-rays, the propensity of somebody having cancer, for instance. But would we completely stop using human intellect? I think that would be a mistake, in any field. A recent case in point is the air crash that took place in Ethiopia, where the plane was completely controlled by an algorithm. If only the humans had disengaged it, the crash might have been averted. A recent Twitter spat between Elon Musk and Mark Zuckerberg was about whether AI will be beneficial or pose an ethical issue. Well, I am on the side of Elon Musk, while Zuckerberg has a very rosy vision, which I don’t think it is at all.

I grew up reading Asimov; the robot series and the three laws of robotics got into me when I was a kid. In the book, those laws of robotics were circumvented in very unique ways in certain circumstances. I read that Google is starting to think about the ethics of AI, which means you not only build in the ethics programmatically but also have a human override. While I am all for AI helping testing, I think there still is a role for the human intellect. It might sound a little wishy-washy, but I think you still have to ensure that human intellect has veto power, so that you can shut off the AI switch if you think what it is telling you could be catastrophic.

I think that fear is real. I don’t think a lot of people realise how soon we’re going to lose many jobs; people relate it to the industrial revolution. When automobiles came, the guys who were shovelling horse manure moved onto the production line, but that is very different, because the training cost for that was very minimal. To train somebody to be an AI expert is not easy; it’s not going to happen. So what do we do if we move away from testing?

That fear is real. All I’m saying is, if you think about whether it can be completely divorced from human intellect and from the ability of humans to influence what the final outcome should be, we are a little far from that. I am not saying it won’t happen, but we are a little far, I think.


Dissecting the human/machine test conundrum

T Ashok @ash_thiru

Summary

It is common to see testing discussions veer into a dichotomy of “manual vs automated testing” and claims that the latter is indeed the order of the day. Sadly, I find this discussion seriously flawed. In this article I dissect the way we test as being human-powered and machine-assisted, and outline how combining the power of humans with machine assistance is paramount to testing smartly, rapidly and super efficiently.

Introduction

The way we test has been trivialised by the general milieu into two buckets: manual and automated testing. Firstly, the phrase “manual testing” seems to connote a menial, intensely labour-oriented job, which it is not; the phrase is therefore highly incorrect. Secondly, the notion of automated testing seems to connote writing scripts and running them frequently to detect issues. What is forgotten is that once a test script uncovers an issue, and that issue is fixed, the script becomes a health ascertainer rather than a ‘bug finder’, and that test cases and strategy need to be constantly updated to target newer, more ubiquitous issues.

Human-Machine testing NOT Manual-Automated testing

The more appropriate term would be HUMAN testing, as it connotes a combination of BODY-ily activity with INTELLECTual thinking powered by the MIND. MACHINE, as a term, is probably more appropriate in signifying an aid to testing in a holistic manner than AUTOMATED testing, which seems to connote only build and execute.

HUMAN Testing = intellect+body+mind

Philosophically, a human is seen as a composition of Body, Mind and Intellect. Using the same idea in the context of testing, I see the act of physically observing, hearing, doing and feeling as BODY-ily activity, while INTELLECTual activity powers some of the key activities of testing and the MIND enables appropriate thinking.

Human Based Testing = INTELLECT + BODY + MIND

The backdrop to dissection

To dissect the various activities and understand what needs to be done by a HUMAN and what can be done by a MACHINE, I am going to use the Test Activity list that outlines the key activities in the lifecycle of testing:

  • Understanding the System under test
  • Strategising and planning the test
  • Designing test scenarios, cases, data sets
  • Executing tests including automation
  • Reporting – issues, progress
  • Analyse – issues, test progress, learnings

Now we will analyse the various HUMAN-powered and MACHINE-assisted activities for each key activity.

HUMAN-MACHINE Test Map

The complete HUMAN-MACHINE conundrum is dissected in the accompanying map.

Note that this is not intended to be comprehensive, filled with all tool aids or all the human activities, as the map would then lose its utility! Use it as an aid to understand the HUMAN-MACHINE test conundrum, and for heaven’s sake STOP USING the phrases “MANUAL testing” and “AUTOMATED testing”.

“It is time we recognised that it takes smart HUMANs assisted by MACHINES (really tools/tech) to test less, test rapidly and accomplish more”