SmartQA Community

Approximate thinking

by T Ashok @ash_thiru

Many years ago I read “The Art of Profitability”, a brilliant business book that beautifully outlines TWENTY-THREE profit models found in business. I was blown away then by the style in which this was conveyed.

It is written in the style of a provocative dialogue between an extraordinary teacher, David Zhao, and his protégé. Each of the twenty-three chapters presents a different profit model.

So what inspired me and connected this with QA/Testing? In the chapter on “Entrepreneurial Profit”, the protégé is amazed at how fast David calculates and spins out numbers. He asks how David is able to calculate blindingly fast without any calculator, to which David says, “I cheat”.

David poses the question “How many trucks will it take to haul away Mt Fuji if it is broken down?” and illustrates how he can calculate the answer quickly.

“Imagine Fuji is a mile high. That is wrong, but that does not matter; we will fix it later. Now imagine it is a cone inside a box one mile on each side. To figure out the volume of the box, instead of 5280 feet on each side use 5000. So the volume is 5000 cubed = 125 billion cubic feet. If Mt Fuji fills about half the cube, then it is ~60 billion cu ft. If each truck can transport 2000 cu ft, then it will require 30 million trucks! Now that you know how to do this, refine the figures. Fuji is more like two miles. Redo the arithmetic.” The protégé is blown away.
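David’s back-of-the-envelope arithmetic can be sketched in a few lines. This is a minimal illustration using the book’s own rough figures (the function name and structure are mine, not the book’s); the point is that the model is crude on purpose, and refinement comes later.

```python
def trucks_to_move_fuji(height_ft, truck_capacity_cuft=2000):
    """Approximate the trucks needed to haul away a mountain,
    modelled as filling about half a cube of side height_ft."""
    box_volume = height_ft ** 3        # volume of the bounding cube, cu ft
    mountain_volume = box_volume / 2   # mountain fills roughly half the cube
    return mountain_volume / truck_capacity_cuft

# First pass: one mile, rounded down from 5280 ft to 5000 ft for easy cubing
print(trucks_to_move_fuji(5000))    # 31250000.0 -- the book rounds to ~30 million

# Refinement: Fuji is more like two miles high
print(trucks_to_move_fuji(10000))   # 250000000.0
```

Notice that the first answer is deliberately wrong in the inputs but right in the method: once the skeleton of the calculation exists, swapping in better numbers is trivial.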

That is when it hit me that he was teaching “approximate thinking”: how to rapidly approximate and get facts to analyse further. I have used it many, many times. In the context of QA, estimating load or estimating data volumes is best done by approximate thinking and refinement. Just guessing does not cut it.

I wrote the article “How many hairs do you have on your head?” to illustrate this. You will enjoy the read!

I love reading different kinds of books; each one gives an interesting insight, and I connect those ideas to what I do, i.e. Scientific Testing.

Read this book. It will certainly change how you think, and it will also teach you to quickly understand value and profitability.

Cheers.


#31: A special on “Design for Testability”

SmartQA Digest

In the beEnriched section is an interesting article, “Design for Testability: An Overview”, that outlines what testability is, its background in hardware, the economic value of DFT, why testability is important, design principles to enable testability, and guidelines to ease testability of a codebase, drawing upon five interesting articles on DFT.
 
In this edition of SmartBites Video, Girish Elchuri illuminates how Design for Testability is useful in building with quality.
 
“The Art of Profitability” is a brilliant business book. From it I learnt “approximate thinking”: how to rapidly approximate and get facts to analyse further. Read how the book inspired me in “expandMind”.
 
In nanoLearning, Dr. Arun Krishnan explains why it would be a mistake, in any field, to stop using human intellect. While he is all for AI helping testing, he believes there is still a role for human intellect.

beEnriched

expandMind

SmartBites

||VIEWS FROM INDUSTRY LEADERS||

smartbits

||NUGGETS OF LEARNING||

Role of human intellect in QA (Arun’s view)

Question: In this age of Automation and AI what do you believe is the role of human intellect for QA?

Arun – I always maintain that analytics is a platform; AI or ML is a platform that is going to enable humans to make decisions. For example, there are already models that can predict, by looking at X-rays, the propensity of somebody having cancer, for instance. But would we completely stop using human intellect? I think that would be a mistake, in any field. A recent case in point is the air crash that took place in Ethiopia, where the plane was completely controlled by an algorithm. If only the humans had disengaged it, the crash may have been averted. A recent Twitter spat between Elon Musk and Mark Zuckerberg was about whether AI will be beneficial or pose an ethical issue. Well, I am on the side of Elon Musk, while Zuckerberg has a very rosy vision, which I don’t think it is at all.

I grew up reading Asimov; the robot series and the three laws of robotics got into me when I was a kid. In the books, those laws of robotics were circumvented in very unique ways in certain circumstances. I read that Google is starting to think about the ethics of AI, which means you do not only build in the ethics programmatically but also have a human override. While I am all for AI helping testing, I think there still is a role for the human intellect. It might sound a little wishy-washy, but I think you still have to ensure that human intellect has veto power, so that you can shut off the AI switch if you think it isn’t right, if it can be catastrophic.

I think that fear is real. I don’t think a lot of people realise how soon we’re going to lose many jobs; people relate it to the industrial revolution. When automobiles came, the guys who were shoveling horse manure moved into the production line, but that’s very different, because the training cost for that was very minimal. To train somebody to be an AI expert is not easy. It’s not going to happen. So what do we do if we move away from testing?

I think that fear is real. All I’m saying is, if you think about whether it can be completely divorced from human intellect, and from the ability of humans to influence what the final outcome should be, we are a little far from that. Not saying it won’t happen, but we are a little far, I think.

click to video