SmartQA Community

Errors are useful, innovate using them.

T Ashok @ash_thiru  on Twitter

As software test practitioners we revel in finding bugs, and as managers and engineers we are focused on fixing them. Steven Johnson, in his book “Where Good Ideas Come From – The Seven Patterns of Innovation”, devotes a chapter to errors as a source of innovation. This article summarises key ideas from that chapter, outlining how an erroneous hunch changed history, how contamination is useful, how being wrong forces you to explore, how paradigm shifts begin with anomalies, and how errors transform into insight.

Brilliant software testing is interesting, kinda like Star Trek: going into the unknown. It requires discipline, creativity, logic, exploration skills, note taking, observation, association, hypothesising, proving, and frequently juggling multiple hats. It is not a mundane act of finding bugs and getting them fixed, nor just scripting tests and running them to death. It is about exploring the unknown, driving along buggy paths, discovering new ideas, suggesting improvements, enhancing experience, embedding testability, and having the joy of creating beautiful software by looking at the interesting messiness.

When I read the book “Where Good Ideas Come From – The Seven Patterns of Innovation” by Steven Johnson, I was delighted to find that one of the patterns was “ERRORS”. An entire chapter on this was a treat to read. As a test practitioner I have always meandered along the paths of anomalies, understood implementations and intentions deeply, and come up with interesting suggestions to improve and enhance experience. Errors have been my best partner in perfecting what I do and coming up with interesting ideas.

In this article I have summarised key facets from the chapter “ERRORS”, outlining how an erroneous hunch changed history, how contamination is useful, how being wrong forces you to explore, how paradigm shifts begin with anomalies in the data, and how errors transform into insight.

An erroneous hunch changed history
De Forest, watching a gas flame shift from red to white when he triggered a surge of voltage through a spark gap, formed a hunch that gas could be employed as a wireless detector more sensitive than anything Marconi or Tesla had created to date. The strange correlation between the spark gap transmitter and the gas flame burner turned out to have nothing to do with the electromagnetic spectrum; the flame was responding to the ordinary sound waves emitted by the spark gap transmitter. But because De Forest had begun with the erroneous notion that the gas flame was detecting electrical signals, all his iterations of the Audion involved some low-pressure gas inside the device, which severely limited its reliability. It took a decade for researchers at GE to realise that the triode performed best in a vacuum, hence the name vacuum tube. The vacuum tube, the precursor to the stunning electronics that would sweep the world in the next few decades, was born from an error. An erroneous hunch changed history.

Contamination is useful
Alexander Fleming discovered the medicinal virtues of penicillin when mould accidentally infiltrated a culture of Staphylococcus left by an open window in his lab. The antibiotic was born. A bunch of iodised silver plates left by Louis Daguerre in a cabinet packed with chemicals formed a perfect image when fumes spilled from a jar of mercury. Photography was born. Wilson Greatbatch grabbed the wrong resistor from the box while building an oscillator and found it pulsing in a familiar rhythm. The pacemaker was born. Contamination is useful.

Being right keeps you in place. Being wrong forces you to explore.
“The errors of the great mind exceed in number those of the less vigorous one.” Error often creates a path that leads you out of comfortable assumptions. De Forest was wrong about the utility of gas as a detector, but he kept probing at the edges of the error until he hit upon something that was genuinely useful. Being right keeps you in place. Being wrong forces you to explore.

Paradigm shifts begin with anomalies.
Paradigm shifts begin with anomalies in the data, when scientists find that their predictions keep turning out wrong, says Thomas Kuhn in “The Structure of Scientific Revolutions”. Joseph Priestley thought a plant kept in a sealed jar, deprived of air, would die; he turned out to be wrong, discovering that plants expel oxygen as part of photosynthesis. Being wrong on its own doesn’t unlock new doors in the adjacent possible, but it does force us to look for them. Paradigm shifts begin with anomalies.

Transforming error into insight
Arno Penzias and Robert Wilson thought the noise in the cosmic radiation they measured was due to faulty equipment, until a chance conversation with a nuclear physicist planted the idea that it might not be the result of faulty equipment, but rather the still-lingering reverberation of the Big Bang. It overturned the opinion that the telescope was the problem. Coming at a problem from a different perspective, with few preconceived ideas about what the correct result should be, can enable one to conceptualise scenarios where the mistake might actually be useful. Transforming error into insight.

Noise-free environments end up being more sterile
A few decades ago Prof. Charlan Nemeth began investigating the relationship between noise, dissent and creativity in group environments. When her subjects were exposed to inaccurate descriptions of the slides they were shown, they became more creative. Deliberately introducing noise forced the subjects to explore more of the adjacent possible, enabling good ideas to emerge. Noise-free environments end up being more sterile and predictable in their output. The best innovation labs are always a little contaminated.

Error is what made humans possible in the first place.  
Without noise, evolution would stagnate, an endless series of perfect copies incapable of change. But because DNA is susceptible to change – whether through mutations in the code itself or transcription mistakes during replication – natural selection has a constant source of new possibilities to test. Most of the time these errors lead to disastrous outcomes, or have no effect whatsoever. But every now and then a mutation opens up a new wing of the adjacent possible. From an evolutionary perspective, it is not enough to say “to err is human”. Error is what made humans possible in the first place.

Mistakes are not the goal, they are an inevitable step in the path of innovation.
When the going gets tough, life tends to gravitate towards more innovative reproductive strategies, sometimes by introducing more noise into the signal of the genetic code, and sometimes by allowing genes to circulate more quickly through the population. Innovative experiments thrive on useful mistakes, and suffer when the demands of quality control overwhelm them. Mistakes are not the goal; they are an inevitable step on the path of true innovation.

Truth is uniform and narrow, error is endlessly diversified.
“Perhaps the history of the errors of mankind, all things considered, is more valuable and interesting than that of their discoveries. Truth is uniform and narrow; it constantly exists, and does not seem to require so much an active energy, as a passive aptitude of soul in order to encounter it. But error is endlessly diversified.”

Benjamin Franklin

Necessary but not Sufficient

I have been a great fan of Dr Goldratt, having read all his books, my favourite being his first, “The Goal”. “Necessary but not Sufficient” is written as a “business novel” and shows a fictional application of the Theory of Constraints to Enterprise Resource Planning (ERP) and operations software, and to the organizations using that software.

Here is an interesting comment by Alistair MacDonald (from Goodreads): “The stance of the book on the value of software is that ‘software is necessary but not sufficient’, i.e. software is a necessary evil. I think this is an accurate view of software: it’s valueless without the ability to reprogram humans to use it correctly. The book applies this concept to change in general; i.e. providing a systems approach to fixing a human problem is only half of the solution; you also have to change the mindset of the users so they are able to buy in to the paradigm shift that the system enforces. There is a hidden world of beauty among all of this, which is that the original meaning of software was ‘people to run the hardware’ (prior to hardware having the ability to operate on procedural instructions from memory). So, ‘we need the software’, but ‘we can’t expect results without changing the users’.”

An excerpt from Jack Vinson’s blog on this: “With regard to the story in the book, I enjoyed it for what it was. It follows the usual path. The vendor and implementer see they have a problem meeting their forecasts. They come upon the idea of selling bottom-line value, rather than the usual justifications that their industry offers. And they discover just how hard it is to turn ‘visibility’ into a number that means anything to the bottom line. Eventually, they hit upon a way to think about their software in a new way – a way that is inspired by the Theory of Constraints.”

Black box thinking

Marginal Gains and the Secrets of High Performance

“Unilever had a problem. They were manufacturing washing powder at their factory near Liverpool in the north-west of England in the usual way – indeed, the way washing powder is still made today. Boiling hot chemicals are forced through a nozzle at super-high levels of pressure and speed out of the other side; as the pressure drops they disperse into vapour and powder. The problem for Unilever was that the nozzles didn’t work smoothly; they kept clogging up.

Unilever gave the problem to its crack team of mathematicians, who delved deep into problems of phase transition, derived complex equations, and after a long time came up with a new design. But it was inefficient. Almost in desperation, Unilever turned to its biologists, who had no clue about phase transition or fluid dynamics! Well, they solved it!

The biologists took ten copies of the nozzle, applied small changes to each, and subjected them to failure by testing them. After 449 failures, they succeeded.”

From “Black Box Thinking: Marginal Gains and the Secrets of High Performance”

Progress had been delivered not through a beautifully constructed master plan (there was no plan!) but by rapid interaction with the world. A single outstanding nozzle was discovered as a consequence of testing and discarding 449 failures.

It is not coincidental that the biologists chose this strategy – evolution is a process that relies on a ‘failure test’ called natural selection.

The strategy is a mix of top-down reasoning and trial and error: fusing the knowledge they already had with the knowledge that could be gained by revealing the inevitable flaws.
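The biologists' approach can be sketched as a simple evolutionary loop. This is a hypothetical illustration, not Unilever's actual process: the three-number `design` vector is an invented abstraction of a nozzle's shape, and `performance` is a stand-in for "test to failure".

```python
import random

def performance(design):
    # Stand-in fitness: closeness to an ideal shape that the
    # algorithm itself never sees an equation for.
    ideal = [0.7, 0.2, 0.9]
    return -sum((d - i) ** 2 for d, i in zip(design, ideal))

def evolve(design, generations=50, copies=10, tweak=0.05):
    """Each generation: make ten slightly varied copies, test all, keep the best."""
    for _ in range(generations):
        variants = [[p + random.uniform(-tweak, tweak) for p in design]
                    for _ in range(copies)]
        # The current design competes with its variants; failures are discarded.
        design = max(variants + [design], key=performance)
    return design

random.seed(42)  # reproducible run
start = [0.5, 0.5, 0.5]
best = evolve(start)
# The evolved design scores better than the starting one, without anyone
# ever deriving an equation for the "right" nozzle.
```

The design choice mirrors the story: variation plus a failure test, iterated, beats a single top-down derivation when the underlying physics is too messy to model.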


A brilliant chapter titled “The Nozzle Paradox” from the book “Black Box Thinking: Marginal Gains and the Secrets of High Performance” by Matthew Syed.

This book is a compelling read on innovation and high performance across many industries: sports, healthcare and aviation amongst others, all approached from an unusual starting point – failure.

“Learning from failure has the status of a cliché. But it turns out that, for reasons both prosaic and profound, a failure to learn from mistakes has been one of the single greatest obstacles to human progress. Healthcare is just one strand in a long, rich story of evasion. Confronting this could not only transform healthcare, but business, sports, politics and much else besides. A progressive attitude to failure turns out to be a cornerstone of success for any institution.”

From the extremely moving first chapter to the very end, Matthew Syed tells the inside story of how success really happens and how we cannot grow unless we learn from our mistakes.

The Laws of Medicine: Field Notes from an Uncertain Science

T Ashok @ash_thiru

“It is easy to make perfect decisions with perfect information. Medicine asks you to make perfect decisions with imperfect information.”

Siddhartha Mukherjee

In this wonderfully thin book, “The Laws of Medicine: Field Notes from an Uncertain Science”, Siddhartha Mukherjee investigates the most perplexing cases of his career, ultimately identifying three principles that govern modern medicine.

Book cover of “The Laws of Medicine” by Siddhartha Mukherjee

Law One: A strong intuition is much more powerful than a weak test.
“A test can only be interpreted sanely in the context of prior probabilities”
The mystery of a dignified fifty-six-year-old man from a tiny Boston neighbourhood, suffering from weight loss and fatigue, was solved by simply getting to know the patient better!
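The first law is, at heart, Bayes' rule: what a positive test means depends on the prior probability of disease. A small illustrative calculation (the numbers here are mine, not the book's) makes the point that the same test result carries very different weight at different priors:

```python
def posterior(prior, sensitivity, specificity):
    """P(disease | positive test), computed via Bayes' rule."""
    true_pos = prior * sensitivity              # sick and correctly flagged
    false_pos = (1 - prior) * (1 - specificity) # healthy but wrongly flagged
    return true_pos / (true_pos + false_pos)

# The same 90%-accurate test, applied at two different prior probabilities:
low_prior = posterior(prior=0.01, sensitivity=0.9, specificity=0.9)
high_prior = posterior(prior=0.50, sensitivity=0.9, specificity=0.9)
print(f"{low_prior:.0%}")   # prints "8"%: a positive result is still probably a false alarm
print(f"{high_prior:.0%}")  # prints "90%": strong intuition makes the same result convincing
```

With a 1% prior, a positive result leaves only about an 8% chance of disease; with a 50% prior (a strong clinical intuition), the identical result means 90%. Hence: a strong intuition is much more powerful than a weak test.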

Law Two: “Normals” teach us rules; “outliers” teach us laws.
“Rather than figure out why a drug failed, he would try to understand why it occasionally succeeded.”
The strange case of “patient 45”, who miraculously responded to an experimental drug for bladder cancer (patients 1 through 44 weren’t as lucky), was studied deeply by Solit, resulting in the discovery of an unusual genetic marker that could identify which future patients could also be helped.

Law Three: For every perfect medical experiment, there is a perfect human bias.
“The greatest clinicians have a sixth sense for bias. What doctors really hunt is bias”
Countless biases pervade the medical literature, even when studies have been randomized and controlled to eliminate prejudices.

Read this brilliant book to understand counterintuitive thinking!

Approximate thinking

by T Ashok @ash_thiru

Many years ago I read “The Art of Profitability” by Adrian Slywotzky, a brilliant business book that beautifully outlines TWENTY-THREE profit models found in business. I was blown away by the style in which the ideas were conveyed.

It is written in the style of a provocative dialogue between an extraordinary teacher, David Zhao, and his protégé. Each of the twenty-three chapters presents a different profit model.

So what inspired me and connected this to QA/testing? In the chapter on “Entrepreneurial Profit”, the protégé is amazed at how fast David calculates and spins out numbers. He asks how David is able to calculate blindingly fast without any calculator, to which David says, “I cheat”.

David poses the question “How many trucks will it take to move Mt Fuji if it is broken down?” and illustrates how he can calculate the answer quickly.

“Imagine Fuji is a mile high. That is wrong, but that does not matter; we will fix it later. Now imagine it is a cone inside a box one mile on each side. To figure out the volume of the box, instead of 5,280 feet on each side use 5,000. So the volume is 5,000 cubed = 125 billion cubic feet. If Mt Fuji fills about half the box, then it is ~60 billion cu ft. If each truck can transport 2,000 cu ft, then it will require 30 million trucks! Now that you know how to do this, refine the figures. Fuji is more like two miles high. Redo the arithmetic.” The protégé is blown away.
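David's mental arithmetic can be mirrored in a few lines of Python; the figures are the rounded numbers from the dialogue above:

```python
# Round aggressively first; refine later if the answer matters.
side_ft = 5_000              # one mile is 5,280 ft; use 5,000 for easy arithmetic
box_volume = side_ft ** 3    # cube enclosing the mountain: 125 billion cu ft
mountain = box_volume // 2   # a cone fills roughly half the box: ~62.5 billion cu ft
truck_capacity = 2_000       # cu ft per truck
trucks = mountain // truck_capacity
print(trucks)                # 31,250,000 trucks (the dialogue rounds to ~30 million)
```

The refinement step works the same way: scaling the box to two miles on each side multiplies the volume, and hence the truck count, by eight. One more line of arithmetic, not a new method.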

That is when it hit me that he was teaching “approximate thinking”: how to rapidly approximate and get facts to analyse further. I have used it many, many times. In the context of QA, estimating load or data volumes is best done by approximate thinking and refinement. Just guessing does not cut it.

I wrote the article “How many hairs do you have on your head” to illustrate this. You will enjoy the read!

I love reading different kinds of books; each one gives an interesting insight, and I connect those ideas to what I do, i.e. Scientific Testing.

Read this book; it will certainly change how you think, and it will also teach you to quickly understand value and profitability.



What are Sketchnotes?

Sketchnotes are purposeful doodling while listening to something interesting. Sketchnotes don’t require high drawing skills, but do require a skill to visually synthesize and summarize via shapes, connectors, and text. Sketchnotes are as much a method of note taking as they are a form of creative expression.

Craighton Berman at Core77 does a nice job of describing sketchnotes as:

Through the use of images, text, and diagrams, these notes take advantage of the “visual thinker” mind’s penchant for making sense of—and understanding—information with pictures.

Where are they used?

Friends in the sketchnoting community constantly share how they use sketchnotes to document processes, plan projects, and capture ideas in books, movies, TV shows, and sporting events. From Mike Rohde’s The Sketchnote Workbook.


Here is an example of a sketchnote from the talk “Continuous performance testing through the user’s eyes” by Katja at the Agile Testing Days 2018 conference, taken from the article here.


Want to explore and expand your creative side? Then, here is a lovely book that will make you fall in love with SketchNotes!


The power of the checklist

Recently I read the book “The Checklist Manifesto” by Atul Gawande. 

“An essential primer on complexity in medicine” is what the New York Times says about this book, whilst The Hindu describes it as “an unusual exploration of the power of the to-do list”.

As an individual committed to perfection, in constant search of scientific and smart ways to test and prevent, and as the architect of Hypothesis Based Testing, I was spellbound reading this brilliantly written book, which makes the lowly checklist the kingpin for tackling complexity and establishing a standard for higher baseline performance.

The problem of extreme complexity

The field of medicine has become the art of managing extreme complexity. It is a test of whether such complexity can be humanly mastered: 13,000+ diseases, syndromes and types of injury (13,000 ways a body can fail), 6,000 drugs, and 4,000 medical and surgical procedures, each with different requirements, risks and considerations. Phew, a lot to get right.

So what has been done to handle this? Knowledge has been split into various specializations; in fact, we have super-specialization today. But it is not just the breadth and quantity of knowledge that has made medicine complicated, it is also the execution. In an ICU, an average patient requires 178 individual actions per day!

So to save a desperately sick patient it is necessary to: (1) get the knowledge right, and (2) do the 178 daily tasks right.

Let us look at some facts: 50M operations/year and 150K deaths following surgery/year (3x the number of road fatalities), at least half of them avoidable. The knowledge exists in supremely specialized doctors, yet mistakes occur.

So what do you do when specialists fail? Well, the answer comes from an unexpected source, one having nothing to do with medicine.

The answer is: THE CHECKLIST

On Oct 30, 1935, a massive plane that could carry five times more bombs roared down the runway at an airfield in Dayton, Ohio, lifted off and then crashed. The reason cited was “pilot error”. A newspaper reported that this was “too much airplane for one man to fly”. Boeing, the maker of the plane, nearly went bankrupt.

So, how did they fix this issue? By creating a pilot’s checklist, as flying this new plane was too complicated to be left to the memory of any one person, however expert. The result: 1.8 million miles without one accident!

In a complex environment, experts are up against two main difficulties: (1) the fallibility of human memory, especially for mundane, routine matters that are easily overlooked when you are strained by other pressing matters at hand, and (2) skipping steps even when you remember them, because we convince ourselves that certain steps in a complex process don’t matter.

Checklists seem to provide protection against such failures and instill a kind of discipline of higher performance.
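The mechanism can be made concrete with a tiny sketch. The five steps below are the central-line checklist described later in this article; the `run_checklist` runner itself is a hypothetical illustration of the one rule that matters: never proceed past an unconfirmed step.

```python
CENTRAL_LINE_CHECKLIST = [
    "Wash hands with soap",
    "Clean the patient's skin with antiseptic",
    "Put sterile drapes over the patient",
    "Wear a sterile mask, hat, gown and gloves",
    "Put a sterile dressing over the insertion site",
]

def run_checklist(steps, confirm):
    """Return the first unconfirmed step, or None if every step was done."""
    for step in steps:
        if not confirm(step):
            return step  # halt: a mandatory step was skipped
    return None

# Example run: the team remembered the first two steps but skipped the drapes.
done = {"Wash hands with soap", "Clean the patient's skin with antiseptic"}
missed = run_checklist(CENTRAL_LINE_CHECKLIST, lambda s: s in done)
print(missed)  # -> Put sterile drapes over the patient
```

The value is not in the code, of course, but in the discipline it encodes: memory recall of the minimum necessary steps, checked explicitly rather than trusted to an expert under pressure.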

Peter Pronovost in 2001 decided to give a doctor’s checklist a try to tackle central-line infections in the ICU. So what was the result after one year of usage? The checklist prevented 43 infections and 8 deaths, and saved USD 2M! In another experiment, it was noticed that the proportion of patients not receiving the recommended care dipped from 70% to 4%, pneumonia fell by a quarter, and 21 fewer patients died.

In a bigger implementation titled the “Keystone Initiative” (2006), involving many more hospitals over an 18-month duration, the results were stunning: USD 175M and 1,500+ lives saved!


So where am I heading? As a test practitioner, I am always amazed at how we behave like cowboys and miss simple issues, causing great consternation to customers and users. Here again, it is not about lack of knowledge; it is more often about carelessness. Some of the issues are so silly that they could be prevented by the developer while coding, and certainly do not demand testing by a professional. This is where a checklist turns out to be very useful.

In an engagement with a product company, I noticed that one of the products had a backlog of ~1,000 issues found both internally and by customers. Doing an HBT level-wise analysis, we found that ~50% of the issues could have been caught or prevented by the developer, avoiding the vicious cycle of fix and re-test. A simple checklist used in a disciplined manner can fix this.

So how did checklists help in the fields of medicine and aviation? They aided the recall of clearly set-out minimum necessary steps of the process. They established a standard for higher baseline performance.


So how can test practitioners become smarter and deliver more with less? One way is to instill discipline and deliver baseline performance. I am sure we all use some checklist or other, but still find the results a little short.

So how can I make an effective checklist and see higher performance, especially in this rapid Agile software world?

That will be the focus of the second part of this article, to follow. Checklists can be used in many areas of software testing; in the next article I will focus on how to prevent the simple issues that plague developers, making the tester a sacrificial goat for customer ire, by using a simple, shall we say, “unit testing checklist”.