What does it take to do SmartQA? A thoughtful pause, multidimensional thinking, sensitivity and awareness, and designing for robustness and testability. Doing SmartQA is about visualising the act in one's mind and taking steps to be robust and to enable rapid, easy validation. Continuing from the prior article, the short crisp piece in the expandMind section outlines FIVE *more* thoughts on 'Doing SmartQA'.
In this edition of SmartBites, listen to two great pieces of advice from Vivek and Shivaji on 'reinvent yourself' and 'staying in sync' in today's rapid development, presented as "Smart Advice #2". In the nanoLearning section, Jawahar Sabapathy helps us understand containerisation & microservices and their role in today's architecture.
What does it take to do SmartQA? A thoughtful pause, multidimensional thinking, sensitivity and awareness, and designing for robustness & testability. A short crisp article, continuing from the prior article, outlining FIVE MORE thoughts on what it takes to do SmartQA.
Shivaji Raju, Expert Architect, Allstate Solutions, in a conversation with T Ashok
Summary
Shivaji Raju outlines four key aspects of "What is Digital Testing": testing on multiple browsers and mobile devices, testing services, testing connected IoT devices against their endpoints, and validating the non-functional attributes of cloud-deployed applications.
#1 One of the aspects of digital testing is obviously testing on browsers and mobile devices. It is not just about testing on one browser type or maybe a few devices. The need has changed significantly now, as products need to work on a plethora of devices. So you need to ensure that the product is validated against all browser types and all device types.
#2 The second aspect is testing services, given the rise of microservices and service-based architectures. Services therefore need to be validated in their own right, so you need an approach to test these services (a small illustrative sketch follows this list).
#3 The third aspect is the digital world's connected devices, which may be in vehicles, homes or security systems. These need to be tested against their endpoints, which could be services or backend components.
#4 The other aspect is the cloud, with applications that are deployed on it. Functional testing might not vary significantly between traditional on-premise and cloud deployments, but non-functional attributes have to be validated. Attributes like load, performance, security and perhaps compatibility need special consideration for cloud-deployed apps.
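To make point #2 concrete, here is a minimal, hedged sketch of a contract-style check on a single REST service, written in pytest style with the requests library; the URL, payload and field names are hypothetical placeholders, not taken from any product discussed in this conversation.

```python
# Minimal service-level check (illustrative only; the endpoint and payload
# are hypothetical, not from any specific product discussed here).
import requests

BASE_URL = "https://api.example.com"  # hypothetical service under test

def test_create_order_returns_201_and_echoes_id():
    payload = {"item": "book", "quantity": 1}
    response = requests.post(f"{BASE_URL}/orders", json=payload, timeout=5)
    assert response.status_code == 201
    body = response.json()
    assert "orderId" in body          # contract check: an id is returned
    assert body["quantity"] == 1      # contract check: payload echoed back
```

The same pattern scales by parameterising it across many endpoints, which is where tooling carries the heavy lifting of point #2.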
What does it take to do SmartQA? How can I do less and accomplish more? What parts of this are human-powered & machine-assisted? A short crisp article outlining some thoughts, five for now, on what it takes to do SmartQA.
It takes a brilliant mindset, intelligent exploration, diligent evaluation, keen observational skills, tech savviness and continual adjustment. It is about being logical yet creative, disciplined yet random, exploring both breadth and depth, understanding deeply while also finding blind spots, and being bounded by time yet unbounded in the possibilities. Doing SmartQA is about doing mindfully, in a state of brilliant balance.
#1 What does it take to do SmartQA?
The deductive ability of a mathematician, the creativity of an artist, the mind of an engineer, the value perception of a businessman, technical savviness, empathy, doggedness and nimbleness, all finely honed to do less and accomplish more.
#2 Humans & Machines: Doctors & diagnosis
In today's medicine, we know that machines play a huge part in diagnostics and treatment. They help us see internals more clearly, enable us to get to the hard-to-reach parts, perform rapid tests to analyse problems, and monitor tirelessly to help us correct our actions. So is the doctor's role redundant? Ouch, no! The skill of the doctor in diagnosis and treatment, be it via medicine or surgery, is needed far more now in the complex world of disease, business and law. To assist with this ever-increasing complexity, machines are becoming integral to the job.
Software testing is much like this: the act of diagnosing software for issues. Tools and automation are integral to testing, and a skilled test/QA/software engineer uses them well. Doing SmartQA is a brilliant combination of "human-powered and machine-assisted": the WHAT-to-do is human, while the HOW-to-do is where the machine helps.
Doing SmartQA is about intelligent/smart WHAT-to-test, WHAT-to-test-for and WHERE-to-test-on, with smart enablement of HOW-to-test using machines/tools. It is not about automated testing versus manual testing, about getting rid of the latter, or about machines finding issues on their own.
#3 On Minimalism
I have always practiced minimalism: doing the least work with superior outcomes. In the context of software, I have never been a fan of doing more tests and therefore needing tools to accomplish them. Of course I exploit tools to do brilliant work. Let us apply this to 'doing SmartQA'.
We talk about shifting left, of TDD, of wanting to find issues earlier. We emphasise unit tests and automating them. Are we shifting the objective to doing more unit tests? The purpose was to produce 'cleaner units': heightened sensitivity to issues via TDD and writing good code in the first place, and adopting cheaper static means to uncover issues that may be missed, before resorting to costlier automated unit tests.
Doing SmartQA is really about 'not doing', or rather doing minimally. It is not about doing more and therefore needing to adopt tools. So heighten your sensitivity, build good code habits, use mental aids like smart checklists and, of course, exploit software tools to do the heavy lifting of tests. After all, don't we all want to adopt wellness rather than expend effort diagnosing potential illness? And I bet you want a doctor to check you out before they 'outsource' the finding to machines.
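As one hedged illustration of 'cheaper checks first, costlier automated unit tests later', the sketch below pairs a small, cleanly written function carrying its own lightweight guards (type hints and an explicit precondition) with a single focused pytest-style unit test; the function and numbers are invented for illustration, not taken from the article.

```python
# Illustrative only: a small function written "clean first", guarded by
# cheap checks (type hints, an explicit precondition) before any heavier
# automated test is layered on top.
def apply_discount(price: float, percent: float) -> float:
    """Return price reduced by percent; percent must be within 0..100."""
    assert 0 <= percent <= 100, "percent out of range"   # cheap, always-on guard
    return round(price * (1 - percent / 100), 2)

# One focused unit test (pytest style), added only after the cheaper checks.
def test_apply_discount_half_price():
    assert apply_discount(200.0, 50) == 100.0
```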
#4 Brilliant engineering
Is testing a mere act of uncovering bugs? I think not. It is really a mindset to clarify a thought. When we develop something, in this case code, a smart testing mindset enables us to step outside of being the producer and into the shoes of end users: to empathise, see their point of view, appreciate what their environment looks like and understand what all can go wrong, so that we produce clean code. At the worst, put hooks inside the code to give us more information on how the code is being buffeted, so that we may examine it later and refine the code.
Doing SmartQA is not just about finding bugs but about getting into the mindset of 'Brilliant engineering'.
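The 'hooks inside code' mentioned above can be as simple as structured log points that record how a function is actually being exercised. The sketch below is a hypothetical illustration using Python's standard logging module; the function and field names are invented, not prescribed by the article.

```python
# Illustrative hook: log how the code is being "buffeted" so the data can
# be examined later and the code refined. Names here are hypothetical.
import logging

logger = logging.getLogger("payments")

def settle_invoice(amount: float, currency: str) -> bool:
    # Hook: record the inputs actually seen in the field.
    logger.info("settle_invoice called", extra={"amount": amount, "currency": currency})
    if amount <= 0:
        # Hook: record the unexpected path so it can be studied later.
        logger.warning("non-positive amount rejected", extra={"amount": amount})
        return False
    return True
```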
#5 See better, cover more, test less
A good ‘world view’ enables us to ensure great coverage in testing. Is coverage limited to execution only or does it allow us to see better?
Coverage is about enabling us to see better from all angles. It is not merely about ascertaining whether tests could be, or are, effective. Enabling viewpoints from USERS, ATTRIBUTES, ENVIRONMENT, CODE and ENTITIES allows us to see from multiple angles, sensitising us to deliver brilliant code with less testing/validation. Of course, it also helps us significantly in judging the quality of test cases and of testing.
Doing SmartQA is more than just doing, it is about significant enablement to see better, heighten sensitivity and accomplish more.
You may find this article “50 Tips to SmartQA” interesting. Check it out!
What does it take to do SmartQA? How can I do less and accomplish more? What parts of this are human-powered & machine-assisted? It takes a brilliant mindset, intelligent exploration, diligent evaluation, keen observational skills, tech savviness and continual adjustment. It is about being logical yet creative, disciplined yet random, exploring both breadth and depth, understanding deeply while also finding blind spots, and being bounded by time yet unbounded in the possibilities. Doing SmartQA is about doing mindfully, in a state of brilliant balance. The crisp article in the expandMind section outlines five thoughts on 'Doing SmartQA'.
"A typical accident takes seven consecutive errors," states Malcolm Gladwell; this notion is reflected in Mark Buchanan's book "Ubiquity" too. The article in the beEnriched section, "Seven consecutive errors = A Catastrophe", dwells upon 'How do you ensure that potential critical failures lurking in systems that have matured can still be uncovered?'
What does it take to do SmartQA? How can I do less and accomplish more? What parts of this are human-powered & machine-assisted? A short crisp article outlining some thoughts, SIX for now, on what it takes to do SmartQA.
Marginal gains and the Secrets of high performance
"Unilever had a problem. They were manufacturing washing powder at their factory near Liverpool in the north-west of England in the usual way – indeed, the way washing powder is still made today. Boiling hot chemicals are forced through a nozzle at super high levels of pressure and speed out of the other side; as the pressure drops they disperse into vapour and powder. The problem for Unilever was that the nozzles didn't work smoothly; they kept clogging up.
Unilever gave the problem to its crack team of mathematicians. They delved deep into problems of phase transition, derived complex equations and, after a long time, came up with a new design. But it was inefficient. Almost in desperation, Unilever turned to its biologists, who had no clue about phase transition or fluid dynamics. Well, they solved it!
The biologists took ten copies of the nozzle, applied small changes to each and subjected them to failure by testing them. After 449 failures they succeeded."
From "Black Box Thinking: Marginal Gains and the Secrets of High Performance"
Progress had been delivered not through a beautifully constructed masterplan (there was no plan!) but by rapid interaction with the world. A single outstanding nozzle was discovered as a consequence of testing and discarding 449 failures.
It is not coincidental that the biologists chose this strategy: evolution is a process that relies on a 'failure test' called natural selection.
The strategy is a mix of top-down reasoning and the fusing of knowledge they already had with the knowledge that could be gained by revealing the inevitable flaws.
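Read as an algorithm, the biologists' approach is simply: copy the best design, apply small changes, test, keep whatever survives best, repeat. The toy sketch below illustrates that loop with a made-up scoring function standing in for the physical nozzle test; nothing here comes from the book itself.

```python
# Toy sketch of the nozzle strategy: copy the current best design, apply
# small random changes, test each copy, keep the one that performs best.
# The "design" and scoring function are invented stand-ins for illustration.
import random

def score(design):               # stand-in for the physical test rig
    return -abs(design - 42.0)   # pretend 42.0 is the unknown ideal shape

def evolve(start, generations=449, copies=10, step=1.0):
    best = start
    for _ in range(generations):
        candidates = [best + random.uniform(-step, step) for _ in range(copies)]
        best = max(candidates + [best], key=score)   # discard the failures
    return best

print(evolve(start=0.0))   # converges toward the "ideal" design
```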
—
A brilliant chapter titled "The nozzle paradox" from the book "Black Box Thinking: Marginal Gains and the Secrets of High Performance" by Matthew Syed.
This book is a compelling read on innovation and high performance across many industries: sports, healthcare and aviation amongst others, all approached from an unusual starting point – failure.
“Learning from failure has the status of a cliché. But it turns out that, for reasons both prosaic and profound, a failure to learn from mistakes has been one of the single greatest obstacles to human progress. Healthcare is just one strand in a long, rich story of evasion. Confronting this could not only transform healthcare, but business, sports, politics and much else besides. A progressive attitude to failure turns out to be a cornerstone of success for any institution.”
"A typical accident takes seven consecutive errors," states Malcolm Gladwell; this notion is reflected in Mark Buchanan's book "Ubiquity" too. This article dwells upon 'How do you ensure that potential critical failures lurking in systems that have matured can still be uncovered?'
—
"A typical accident takes seven consecutive errors," wrote Malcolm Gladwell in his book "Outliers". As always, Malcolm's books are a fascinating read. In the chapter on the theory of plane crashes, he analyses airplane disasters and states that it is a series of small errors that results in a catastrophe. "Plane crashes are much more likely to be a result of an accumulation of minor difficulties and seemingly trivial malfunctions," says Gladwell. The other example he quotes is the famous accident at Three Mile Island (the nuclear station disaster in 1979).
It came near meltdown, the result of seven consecutive errors: (1) a blockage in a giant water filter causes (2) moisture to leak into the plant's air system, which (3) inadvertently trips two valves and (4) shuts down the flow of cold water into the generator; (5) the backup system's cooling valves are closed, a human mistake; (6) the indicator in the control room showing that they are closed is blocked by a repair tag; (7) another backup system, a relief valve, is not working.
This notion is reflected in the book "Ubiquity" by Mark Buchanan too. He states that systems have a natural tendency to organise themselves into what is called the "critical state", what Buchanan describes as the "knife-edge of stability". When the system reaches the critical state, all it takes is a small nudge to create a catastrophe.
Now as a person interested in breaking software and uncovering defects, I am curious to understand how I can test better. How do you ensure that potential critical failures lurking in systems that have matured can still be uncovered?
Let us look at what we do: we stimulate the system with inputs (correct and erroneous) so that we irritate latent faults, which may then propagate and result in failure. When the system is "young", the tests and test cases we come up with are focused on uncovering specific (singular) faults, i.e. a set of inputs that can irritate singular faults and yield possibly critical failures. This is possible because the "young" system is not yet resilient, and therefore even a singular fault trips it up! We then think that our test cases (i.e. combinations of inputs) are powerful and effective. But these test cases do not yield defects later, as the system becomes resilient to singular faults.
As the system matures, we need to sharpen the test cases to irritate a set of potential faults that can create a domino effect and yield critical failures. Creating test cases to uncover singular faults in a mature system may not be useful. It is necessary that test cases be at a higher level of system validation (i.e. have long flows) and have the power to irritate a set of faults.
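One hedged way to picture such a 'long flow' test is a scenario that stacks several minor, individually survivable faults and only then checks whether the system tips over. The sketch below uses an invented FakeSystem as a stand-in; in practice the fault injection would target the real system under test.

```python
# Illustrative only: a long-flow test that chains several minor faults,
# each survivable on its own, and checks whether the sequence cascades.
# FakeSystem is a hypothetical stand-in, not a real system under test.
class FakeSystem:
    RESILIENCE = 2                       # absorbs up to two minor faults

    def __init__(self):
        self.faults = []

    def inject_fault(self, name):
        self.faults.append(name)

    def in_critical_state(self):
        return len(self.faults) > self.RESILIENCE

def test_singular_fault_is_absorbed():
    s = FakeSystem()
    s.inject_fault("slow_disk")
    assert not s.in_critical_state()     # a lone minor fault yields nothing

def test_chained_minor_faults_cascade():
    s = FakeSystem()
    for fault in ("slow_disk", "dropped_heartbeat", "stale_cache"):
        s.inject_fault(fault)
    assert s.in_critical_state()         # the domino sequence reaches the edge
```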
Should we resort to uncovering critical failures only via testing, by creating test cases at higher levels that have the power to uncover multiple types of faults? Not necessarily. We can apply this thought process at the earlier stages of design and code too, using the notion of a sequence of potential errors and understanding what can happen.
If you drive in India you know what I mean… the potential accident due to a dog chasing a cow, which is charging into the guy on the motorbike, who is talking on his cell phone while driving on the wrong side of the road, hits a speed bump, and screech *@^%… You avoid him if you are a defensive driver. Alas, we do not apply the same defensive logic often enough to other disciplines like software engineering…
Raja Nagendra Kumar outlines the role of refactoring and unit testing in producing clean code. He puts this very interestingly as "Technical debt is fat, clean code is liposuction" and crisply explains the act of producing clean code.
The video of this "nano learning" smartbits session is available here.
Question: In our conversations you said "Technical Debt is Like Fat, Clean Code is Liposuction". It is a very interesting phrase; could you please elaborate?
This phrase came out of the intention of producing world-class products from India, where every developer, as he gets married to the profession, has a duty to produce world-class products.
In that context, every day, whatever code he writes, he is trying to produce a product baby, and more and more code gets added, the way the baby learns each day. The growing code will also start accumulating a lot of fat. If the future of the baby is to be better, you should know how to grow the baby, or constrain the baby in a way, and that is where the clean code practices start coming in.
As more and more code comes into the product, on one side product growth happens, and on the other side the clean code practices should identify what is relevant now and start cutting the fat. The beauty of this approach is that it is not like a doctor operating on somebody; here the professional himself works on yesterday's code, whatever he has written, and is able to remove the code that is no longer relevant or modify it so that it scales better.
So, when anything is growing there is also fat, and that fat has to be transformed in a way that takes advantage of it and leaps forward much faster. Otherwise, the two options are: living with the fat and not being able to run, or becoming a better professional by identifying the fat in time and trying to run faster.
What do you mean by clean code?
When we are trying to achieve something, there are a lot of abstractions coming in the way, which no code is actually communicating to you. For example, you want to write a feature X and you have done it one way, and then feature Y comes in; now, to position feature Y, you need creative thinking rather than just fitting it in alongside X.
Now the product will have both X and Y. Instead of just fitting them together, you need to understand what affects Y being there in concurrence with X; the engineer must listen to these clashes and see how to refactor so that Y can go in more smoothly. Unless we try to be a listener to our own challenges, to what the code is speaking, clean code will not come.
Most of the time people try to fit X, Y, Z as silos; they may work independently, but not coherently. My meaning of clean code is "listen to the last code while adding new code", and that is where people will start talking about what the need or purpose of refactoring is, and what the purpose of unit testing is. They are not compliance. They are the ways for you to bring out a world-class product from your own daily experience.
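As a hedged illustration of "listen to the last code while adding new code", the sketch below shows a feature Y (JSON export) arriving next to an existing feature X (CSV export): rather than bolting on a second silo, the shared shape is refactored out. All names are hypothetical and invented for this example, not drawn from the conversation.

```python
# Hypothetical illustration: feature X shipped as export_csv(); when feature Y
# (JSON export) arrives, the clean-code move is to surface the shared shape
# rather than adding a second, unrelated silo next to the first.
import csv, io, json

def _rows(records):
    # Shared abstraction surfaced by "listening" to the existing code.
    return [{"id": r["id"], "name": r["name"]} for r in records]

def export_csv(records):           # feature X, now expressed via the shared core
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=["id", "name"])
    writer.writeheader()
    writer.writerows(_rows(records))
    return buf.getvalue()

def export_json(records):          # feature Y fits alongside X, not on top of it
    return json.dumps(_rows(records))
```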
Tathagat Varma beautifully expounds on what Agility is. He says agility is, in some sense taking a biological definition, the ability of an organization, a body or a unit, to respond to external stimuli and ensure that its own survival is assured.
The video of this "nano learning" smartbits session is available here.
Question: During my discussions with companies who say they practice Agile development, they quickly add that what they practice is 'my Agile'. Now I am lost. Are we abusing this term and using Agile as a fashion statement?
Agility, to me, is an organization's innate capability to survive and thrive in the long run. If we take a very biological definition of that, every organism is at a different stage of its evolution. As a human being, if I am a toddler, I am in a crawl-walk-run stage. My agility when I am crawling on the floor will be very different from my agility when I am walking or running, or even flying for that matter. So, I think we really have to take a 'horses for courses' approach.
Some people might abuse it, and that is why they say this is 'my' agile. If I take a very holistic perspective on that: if there are five people or five teams with different capabilities, but they all follow the same process and say this is 'our' agile process, I would say something is wrong with that. What being agile means to one may not be similar or comparable to what it means to another.
So to me, in a very simple sense, agility is the ability of an organization, in some sense taking a biological definition, of a body or unit, to respond to external stimuli and ensure that its own survival is assured. Agility is the ability of a body to respond commensurately to external stimuli and make sure it remains alive, is able to deal with the issues, and is able to grow.
Now, in the context of a company, we have heard so many stories; for example, the Kodak story is very popular. At one point it was the world leader and, I believe, had 1,500 patents and so on. Then the external changes started happening. Though they were actually the ones who invented digital photography, they were not able to leverage it. They had the capability to lead the technology in the next wave, but they did not have the inner capacity to take decisions and deal with the changes inside the organization. The same thing happened to Blockbuster.
Organizations face the same kind of challenges, and 'agility' is simply their ability to understand and make a meaningful interpretation of those external stimuli, and decide how they are going to respond. For a large part, that kind of works for me. In some cases there is a visionary kind of company which is not responding to change but initiating it. They are the ones saying, "we will set the pace here". For example, I would say Tesla. Nobody was asking for a Tesla. When Tesla started making cars, no government legislation was mandating it, no customers were asking for it, but they set the pace for it. The iPhone changed the whole pinch-and-zoom and other kinds of features; it changed the definition of what a smartphone is all about. They were not following the trend. They were not responding to external changes. They were setting the change.
To me, the highest form of agility would actually be the companies that have such a good understanding of the market and such a strong grip on the technology that they are setting the pace for the rest of the herd to follow. So agility, I would say, is that kind of thing.
Now, some companies might say, "Hey, our definition of agility is so and so". The way I look at it is: if you are improving quarter on quarter, or year on year, you are agile by definition. One doesn't have to be apologetic about not using the standard vocabulary.
To that extent, I would agree that it is not a fashion statement but ‘horses for courses’. Yes, there is a lot of abuse we see in the industry where people don’t have a very systematic or intentional approach, and in order to not be very forthright about it, they just say ‘this is our blend of agile’, which doesn’t really mean anything because they, in all honesty, are not doing service to themselves.
Unilever had a problem. They were manufacturing washing powder at their factory near Liverpool in the north-west of England in the usual way – forcing boiling hot chemicals through a nozzle at super high levels of pressure. The problem was that the nozzles didn't work smoothly; they kept clogging up. A crack team of mathematicians dug deep into problems of phase transition, derived complex equations and, after a long time, came up with a new design. But it was inefficient. Then the company turned to its biologists, who had no clue about phase transition or fluid dynamics, but they solved it!
The biologists took ten copies of the nozzle, applied small changes to each and subjected them to failure by testing them. After 449 failures they succeeded. Progress had been delivered not through a beautifully constructed master plan but by rapid interaction with the world. A single outstanding nozzle was discovered as a consequence of testing and discarding 449 failures. Check out the book "Black Box Thinking" in the expandMind section.
"A typical accident takes seven consecutive errors," states Malcolm Gladwell; this notion is reflected in Mark Buchanan's book "Ubiquity" too. The article in the beEnriched section, "Seven consecutive errors = A Catastrophe", dwells upon 'How do you ensure that potential critical failures lurking in systems that have matured can still be uncovered?'
In this edition of SmartBites, listen to "A mosaic of testing" from NINE practitioners around the world on failures, tools, unit tests, clean code, Agile, TDD, feelings & relationships.
In the nanoLearning section, Raja Nagendra Kumar outlines the role of refactoring and unit testing in producing clean code. He puts this very interestingly as "Technical debt is fat, clean code is liposuction" and crisply explains the act of producing clean code.
"A typical accident takes seven consecutive errors," states Malcolm Gladwell; this notion is reflected in Mark Buchanan's book "Ubiquity" too. This article dwells upon 'How do you ensure that potential critical failures lurking in systems that have matured can still be uncovered?'
TEN tips for a developer to enable delivery of brilliant code and TWELVE tips to become a modern smart tester are what this article is about, curated from two earlier articles that I wrote.
What are my TEN tips for a developer to deliver brilliant code? Here they are, visualised as a mind map!