SmartQA Community

How to reduce waste (bugs)?

(In this SmartBits, Tathagat Varma outlines "How to reduce waste (bugs)?". The video is at the end of this blog.)

Defect prevention has been the holy grail of software development since the CMM days. We fail to recognise that software in itself is not isolated and independent. The context in which it operates is changing all the time, be it evolving systems, hardware, software, networks or evolving user behaviour.

Ten years back, when the iPhone was launched, people found the pinch-and-zoom feature very novel. Today it is a standard. The context in which people operate is changing all the time. There is a good old saying, "a stitch in time saves nine", and we saw during the waterfall days how the cost of a defect amplifies if it is not prevented early. Today, even if we say we are agile, fixing a defect in production still costs a lot. Prevention will definitely have its own role to play.

We are doing a lot of software reuse, whether third party or open source. We don't always have control over many of the components we use. We get a component from somewhere and just use it. Do we really have insight into what went into its design? Do we have control over what defect prevention steps were taken? This may or may not be known. We will still need some way of qualifying the components and putting them together, and the speed at which we aim to push our systems into production will demand a lot of reuse.

In some sense, software construction is being reduced to plumbing, like the job of an architect constructing a hundred-storey building. We don't make bricks or steel pipes anymore; these get sourced and put together, and a system gets created in which we apply incoming check criteria: is this of acceptable quality to us? Here some level of component testing is done to see whether the components meet our requirements.

Secondly, we put them together and see whether they really don't burn each other out. If we put this into the context of having to release into production a hundred times a day, it is inevitable that we will end up doing a lot of automated testing. At some point we have to understand that blindly relying on automated testing to uncover every problem would be a dangerous position. We should use it to make sure that whatever the behaviour was in the last build remains unchanged and unbroken; that becomes a very good verification. If new features or behaviour have been introduced, delegating them to automated testing at the first instance involves a lot of subjectivity, and we need to be a little careful. There is no definite yes or no answer, but "a balance" is probably the best answer.
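The idea of using automation to confirm that last build's behaviour remains unchanged can be sketched as a simple characterization ("golden output") test. The function and baseline below are hypothetical names chosen for illustration, not from any particular product.

```python
# A characterization test: the baseline is captured from the last known-good
# build, and the automated check verifies behaviour is unchanged and unbroken.

def format_invoice(items):
    """Hypothetical function under test: renders items sorted by name."""
    return "\n".join(f"{name}: {qty}" for name, qty in sorted(items.items()))

# Output recorded from the previous build.
BASELINE = "apples: 3\npears: 2"

# The regression check: current behaviour must match the recorded baseline.
assert format_invoice({"pears": 2, "apples": 3}) == BASELINE
```

A new feature would have no baseline yet, which is exactly why its first assessment still needs human judgment.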

Role of human intellect in QA

(In this SmartBits, Shivaji Raju outlines "Role of human intellect in QA". The video is at the end of this blog.)

Tools and technology certainly bring in a lot of efficiencies and improve the user experience, but I believe human intellect is definitely required for SmartQA. Take the example of test coverage: we definitely require intellect to validate coverage and ensure we have an optimal set of test cases. If we relate this to automation, we need intellect to ensure an optimal distribution of test cases across different layers, as tools would not be able to do it. A human engineer would validate what is the right fit, the right distribution between UI and services testing.

The second example is that we need human intelligence to devise strategies to build frameworks; when we build frameworks, using design patterns or best practices, human intellect is required. The third example is exploratory testing, where we uncover some really interesting defects in addition to running the scripts.

Management expectations of CIO & IT team

(In this SmartBits, Zulfikar Deen outlines "Management expectations of CIO & IT team". The video is at the end of this blog.)

Whether the end-user organization is small or large, the challenges remain much the same. For a large multinational, multi-billion corporation or a smaller organization, the challenges are very similar: security, adoption, consumer understanding, delivery, timeline or quality.

The difficulty for a smaller organization is that the IT team is much smaller, not an army of people, and there is no huge budget to ensure the same challenges are tackled better. With small, shoestring budgets it is difficult to bring newer technology and solutions into operations. Not having an appropriate budget is an important challenge, but that doesn't mean they will be left far behind. They still have to adapt, invent and move at the same speed.

The next challenge is that business leaders come across sound bites (for example, blockchain) in meetings they attend and are keen to implement them. Our role is to ensure they distill these correctly, make sure they are used appropriately in the context of the business and the readiness of the systems, and ensure they don't fall too far behind.

Another aspect to look at is the board's and top management's (the CXO organization's) ability to look at technology. Often, CXOs would neither be risk-taking forerunners nor want to lag behind. We need to understand what level of comfort the management has and then play along with that. One needs to watch a technology shaping up and, as soon as it is ready for use in the system, start putting it in place.

DFT & Automation

In this SmartBits video "Design for Testability & Automation", Girish Elchuri outlines how design for testability aids in test automation. The transcript of this video is outlined below.

There are three aspects to look at when we talk about test automation: first, running the test cases; second, invoking the functionality that needs to be tested; and third, asserting the outcome of the tests as success or failure. We can talk about test automation only if we can automate all three functions.
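The three aspects can be sketched in a few lines of code (all names here are illustrative): a tiny runner that (1) runs each test case, while each test case (2) invokes the functionality under test and (3) asserts the outcome as success or failure.

```python
def add(a, b):
    """The functionality that needs to be tested."""
    return a + b

def test_add_positive():
    # Aspects 2 and 3: invoke the functionality, assert the outcome.
    assert add(2, 3) == 5

def test_add_negative():
    assert add(-1, 1) == 0

def run(test_cases):
    """Aspect 1: run the test cases and record success or failure."""
    results = {}
    for case in test_cases:
        try:
            case()
            results[case.__name__] = "pass"
        except AssertionError:
            results[case.__name__] = "fail"
    return results

results = run([test_add_positive, test_add_negative])
assert results == {"test_add_positive": "pass", "test_add_negative": "pass"}
```

Real tools such as pytest or JUnit automate aspect 1; the other two still depend on how the product is designed.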

Test execution
Most of the time, running the test cases is perceived as automation, but ideally it has to cover the other two aspects as well. For running the test cases there are enough tools that can be used and invoked; but for invoking the functionality, a developer can make a big difference.

Backdoor invocation
Normally, when a product is being developed, its functionality is accessible only through the GUI. Developers should also provide a backdoor to reach the functionality, so that one can test the entire product functionality much more efficiently without having to invoke the GUI. This is how developers can help with test automation.
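One common way to provide such a backdoor, sketched below with hypothetical names: keep the core logic in a plain class that the GUI merely wraps, so tests can drive the functionality directly.

```python
class CartService:
    """Core product functionality, independent of any GUI layer."""

    def __init__(self):
        self._items = {}

    def add_item(self, sku, qty=1):
        if qty <= 0:
            raise ValueError("quantity must be positive")
        self._items[sku] = self._items.get(sku, 0) + qty

    def total_items(self):
        return sum(self._items.values())

# The GUI would call CartService too, but a test can use this backdoor to
# exercise the functionality directly, without driving the GUI at all.
cart = CartService()
cart.add_item("SKU-1", 2)
cart.add_item("SKU-2")
assert cart.total_items() == 3
```

The same backdoor could equally be a CLI or an internal API; the point is that the functionality is reachable without the GUI.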

Test outcome assessment
For the third aspect, asserting the outcome as success or failure, it is sometimes not clear whether a test has succeeded or failed because of small state changes that we do not know how to check. A suggested way is to have extensive logs, also called structured logs. While logging, we emit debug messages, information messages and error messages; there is another category that needs to be added, called test messages. With test messages in a structured log, it becomes easy to check the log and ascertain whether a particular test case has passed or failed.
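A sketch of this idea using Python's standard logging module; the dedicated TEST level and the message format are assumptions for illustration, not an established convention. The product code logs a test message recording the state change, and the automation scans the structured log to assert pass or fail.

```python
import io
import logging

# Register a custom "TEST" level between INFO (20) and WARNING (30).
TEST = 25
logging.addLevelName(TEST, "TEST")

# Structured log: fixed fields separated by "|", captured here in a buffer.
buffer = io.StringIO()
handler = logging.StreamHandler(buffer)
handler.setFormatter(logging.Formatter("%(levelname)s|%(name)s|%(message)s"))
logger = logging.getLogger("product")
logger.addHandler(handler)
logger.setLevel(TEST)

def transfer(amount):
    # ... product functionality runs here ...
    logger.log(TEST, "transfer state=committed amount=%d", amount)

transfer(100)

# Automation checks the structured log to ascertain the outcome.
assert "TEST|product|transfer state=committed amount=100" in buffer.getvalue()
```

In a real product the log would go to a file, and the test harness would grep it for the expected test messages after each run.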

These are ways a developer can aid testability in test automation: by facilitating invocation and by assisting in the assertion of test outcomes.

Digital test automation

(In this SmartBits, Shivaji Raju outlines "Digital test automation". The video is at the end of this blog.)

There is greater focus on services testing, with a lot of applications being built on service-based architectures; that is one thing that is changing significantly. From a framework standpoint, there are approaches like BDD and custom Java-based solutions that people are trending towards, compared to the traditional keyword-driven approaches we used in the past. The scale at which we test on different types of devices has also increased compared to when we used to test on only one browser, like IE. Now the expectation is that we test the product on the web, on native apps and on different types of devices, so the number of combinations increases.

The other change is in the use of testing platforms. We don't want to host mobile devices on premise; we test on Sauce Labs, Perfecto or LambdaTest, some of the solutions available that can be used to scale up testing. A lot of projects have also moved from waterfall to Agile-based implementations.

The other big thing is DevOps. In the context of testing, continuous testing is something that needs to be automated. Beyond automating the execution aspects, where we automate smoke, regression or other types of tests, whether UI or services, we also need to ensure that the infrastructure and the test data elements are automated. When we say infrastructure: can we automate the deployment of builds onto test servers, or provision instances on the fly instead of having manual dependencies, to ensure we get an end-to-end continuous pipeline?

On Coverage

(In this SmartBits, Girish Elchuri outlines "On Coverage". The video is at the end of this blog.)

During development, 100% unit testing is needed; anything less is useless. Look at it this way: when you are crossing a chasm, even if you jump 99% of the way, you still die. So it has to be 100%. The way I approach this is that every single line of code I write, I execute, either as part of the code itself or separately, to make sure the function behaves the way it is expected to for the parameters being passed. Most of the time, the help text provided by the man pages for functions may not be 100% complete. From that perspective, firstly, you have to test the code yourself. Secondly, certain functions, even after being tested, have to be tested again as part of the product, and for that I write a lot of scaffolding code. It is a must. This is not something I only preach; I practise it as well. At this stage, a lot of the development of my product is done by myself, and I do practise 100% unit testing. In my opinion, there is no shortcut.
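The discipline described above can be illustrated with a small sketch (the function is hypothetical): scaffolding that executes every line and branch of the function, one check per path, rather than trusting the documented behaviour.

```python
def clamp(value, low, high):
    """Restrict value to the closed range [low, high]."""
    if low > high:
        raise ValueError("low must not exceed high")
    if value < low:
        return low
    if value > high:
        return high
    return value

# Scaffolding: each assertion drives a distinct path, so together they
# execute 100% of the lines and branches of clamp.
assert clamp(5, 0, 10) == 5        # in-range path
assert clamp(-3, 0, 10) == 0       # below-range path
assert clamp(42, 0, 10) == 10      # above-range path
try:
    clamp(1, 10, 0)                # invalid-arguments path
    raise AssertionError("expected ValueError")
except ValueError:
    pass
```

Tools such as coverage.py can confirm that the scaffolding really does reach every line, but the checks themselves still have to come from the developer.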

Key trends in automation

(In this SmartBits, Shivaji Raju outlines "Key trends in automation". The video is at the end of this blog.)

The key trends in automation are a significant focus on services testing, with the architecture changes coming in. With a service-first approach and microservices, there is a lot of emphasis on testing the services. The other aspect is that traditionally we had a lot of test cases or automation scripts at the UI level. That seems to be changing: we are trying to bring in a balance between different layers of testing rather than just focusing on the UI.

The other trend is on the tools. We were predominantly using tools like HP UFT, Rational tools, or Selenium to some extent. The trend now is shifting towards script-less solutions like Tosca, Functionize or Mabl, which have the ability to build scripts faster. Some trends have been noticed on the framework front too. We traditionally had QA-driven approaches which were quite heavy; now the shift is to lightweight approaches or frameworks, especially in the Agile context, and frameworks that integrate with the toolsets of the DevOps pipeline itself. The need is to ensure that the framework integrates with different build tools like Maven, JIRA or Artifactory. Those are some of the expectations when building a framework solution.

Again, DevOps is a significant trend now, so the expectation is to see how we could automate the continuous testing pipeline, considering that it is not just about automating the execution piece; we should automate the test data, and probably the infrastructure piece as well. There are tools for provisioning infrastructure from the cloud, and there are trends around virtualizing services, and even databases, to ensure some automation is brought into the data and services layers. That is how we would achieve end-to-end CI or CT automation.

The challenges of legacy code

(In this SmartBits, Zulfikar Deen outlines "The challenges of legacy code". The video is at the end of this blog.)

We move quite slowly compared to how the industry moves. It is like different cogs in a gearbox, where one moves very slowly and the other moves really fast, but there needs to be a proper connection between them. If you are looking at the quality of the whole system, you need to look at it holistically. When integrating with the slow-moving system, how do you ensure the data moves at the right speed, the right velocity, and that it is tested correctly?

The biggest challenge is that, in the existing system, the data may not be good. There could be a lot of garbage in the data because the system has been used for several years. When you are integrating with newer technology, a wonderful new system, and that system expects good, clean data once plugged in, it is not going to happen. It is going to be very challenging in terms of fidelity, velocity and data cleanup.

A partner talks about jQuery, or about possessing technologies like an ML engine; my internal team may not be able to connect with them. There are so many technologies out there, and my internal team is focused on making sure my systems keep running operations, so that becomes the primary focus.

The technology may be very interesting, but the team may not be able to connect to what you are speaking about. As a partner, we need to make sure the system is holistically tested. It is very critical to hand-hold the internal team during the integration pieces, at testing time, operationalizing time and support time.

The evolution of dev

(In this SmartBits, Srinivasan Desikan outlines "The evolution of dev". The video is at the end of this blog.)

If you look at the industry's evolution, you can classify it into four dimensions. The first evolution happened on the infrastructure side, where individual people used to have their own PCs and wanted to share them. That is when the test lab was created. If you really look at it, that dimension was initiated by test engineers: they did the consolidation of all the machines, and that was called a test lab. Over a period of time, compliance requirements were added and it became the data centre; then they wanted more operating systems than they had, so they created virtualisation.

The testing folks said, let us have quick control of all these resources, whether virtual or real, and that became the data centre. Then, looking at the reliability of these test-lab data centres, companies decided to host their services on them, so that customers could use them through the very processes created by the test engineers. That became the industrialized data centre, and the cloud evolved from it.

The cloud is nothing but something elastic: you put a small data centre somewhere and keep growing it based on needs and utilization. Elasticity does not mean only expanding, but also scaling down whenever there is no load; that is what the cloud is. After the cloud, people are now talking about virtual images within virtual images, which are Docker containers and Kubernetes. That is dimension one, which was initiated by the test engineers.

The second dimension obviously was not created by the testing teams, but it has evolved over a period of time. In those days we used to have desktops; then came laptops, smaller in size; then palmtops, even smaller. Then came mobile, and today we are talking about IoT devices which sit in every appliance at home. That is the second dimension of the evolution.

The third dimension is the process delivery model: the waterfall model, where coding is strictly completed and only then testing is done; then people wanted to parallelize testing and development, and it became the V model; the V model became the spiral model; the spiral model became the agile model. Now the agile model is becoming a super-lean model, called lean development, with its processes and related practices. That is the process delivery model, the third dimension.

The fourth dimension of the evolution I have seen in the last 30 years is the customer payment model. Initially, when desktops and other equipment were available in the office, they were all purchased by paying full cash: a hundred percent capex and zero percent opex. The electricity was paid by the same company; the maintenance was paid by the same company; there were no operational expenses. Everything was capital expenditure. Slowly companies moved to a mix of capex and opex: they wanted to outsource some of the maintenance of those machines, and thereby brought in an element of operational expense in addition to capital expenditure. Then came a model with less capex and more opex, where they started renting machines from outside. After that, the consumption-based volumetric model evolved: you buy a product from us, and depending on the number of users you have or the amount of utilization, we charge you. That became the consumption-based volumetric model.

The last one, which is becoming predominant now, is pay-per-use: you do not invest anything, so it is zero capex and everything is opex. It is like using Uber or Ola: the insurance is paid and the car is bought by the company; you just go and use it, and you pay for the amount of time or distance you have used. Whenever you want, you can rent a Mercedes today and use it as if it were your own vehicle, but you still need not own it.

These are the four dimensions in which the industry has evolved, in my opinion. The intersection of all four is the sweet spot all test engineers should work on. Dimension one and the process delivery model, the infrastructure and the process dimensions, were both driven by test engineers: moving from the waterfall model to the V model was actually done by test engineers, and moving from individually kept machines to a test lab was initiated by the testing teams. For much of the evolution you see in the industry, the starting point is testing. The sweet spot for test engineers is to look at all four dimensions and how they are moving. What I told you is only the 30-year story; it will continue for another 30 or 60 years. The dimensions will stay, but the evolutions will continue to happen. The sweet spot tells you how fast we should evolve, so speed is very important.

The second thing: no investment. Everything is becoming virtual; companies will become virtual, offices will become virtual, people can work from home and take up workloads from the internet. You may not have any manager; you may not have any infrastructure; your capital expenditure may be zero while sitting at home, and you will still be delivering products. Most of what you do sitting at home would be testing. That is where I think the industry is evolving.

Lean thinking and agility

In this SmartBits video "Lean Thinking & Agility", Tathagat Varma outlines how Agile, inspired by lean thinking, is accomplishing agility in software development.

The transcript of the complete video is outlined below.

I would agree that a lot of the foundations of Agile have come from the lean world, starting from the very famous 1986 Harvard Business Review article "The New Product Development Game", where we learnt a lot about how companies were applying a very different way of doing things; that basically started the whole thinking. But let me also make a very subtle distinction. If we take a manufacturing metaphor, by definition it means I have a design in mind. I have a car, and the design has already been done. Now what am I doing? What is the next step? It is a production process: I am going to make 10,000 cars or a million cars.

If we go back into the history of lean and how Toyota started the Toyota Production System: Toyota realized that Japan is a small country. Unlike in the US, where people had the buying habit of going to a dealership with 500 cars available, picking whatever they wanted and driving out with it, they realized that in Japan this was not going to be possible, because people did not have that kind of money. Nobody had that kind of real estate to put the cars on, and as a young economy post-Second World War, they did not have enough resources to make so many cars and put them up in dealerships.

Toyota then came up with the idea that they would show a catalog to people; people would pay the money, cash down, and only then would Toyota start making the car. The reason was the constraints that Toyota and Japan together inherited in the post-Second World War economy. They were trying to make the process very inventory-efficient, very real-estate-efficient and very time-efficient, so that their production process could take care of it. And when you are running the same production process ten thousand times or a million times, every microsecond saved adds up enormously in the long run. At a very high level, the lean philosophy is all about reducing the waste from a process which you are repeating n number of times.

Now let us come back to software. It has two components: the design component and the production component. What do I mean by that? When I talk about design, I am taking a problem space and asking: how do I really solve it? What is the right way of doing it? How will the user respond to it? There is a lot of experimentation back and forth.

Software development is a discovery process whose outcome is really better insight into how humans are going to solve a given problem. Once I have found the right way and am actually creating a software program to do it, let us say I am deploying it over the net; today we deploy a lot of our applications on the web or on the Internet. You are basically scaling it across thousands of servers, and you have to have a very systematic, repeatable way of pushing the code into production and making sure it behaves the same way, whether in latency, errors or usability.

Obviously, a lot of the principles we learn about reducing waste and variability in performance are a very natural fit when it comes to production. I do not want my latency to be even half a second; people may not accept that in today's world. People are getting used to literally 50 milliseconds, or maybe microseconds. Obviously you need to remove the variability in the process.

If I am doing a push into production and every time the build breaks, something is going wrong there; I should be able to remove all those vagaries from my process. Lean has a very strong fit in that part of the whole thing, where it allows me to remove waste from the system. The waste could be failed builds, or the bugs I am finding. If I can create a better sandbox where I can test before production, I am reducing waste there. I am also reducing waste in terms of features that nobody is going to use.

We have seen in the software industry time and again that probably one third of the features get used the most, and two thirds of the features rarely or never get used; there are enough studies that have shown that. One of the lean principles is also reducing the number of features that are not being used, because that is overproduction in some sense, and it is not just writing code: it is writing the code, then testing it, then making resources available. If I bring in lean principles, I can reduce this and really focus on what is the right value for the customer.

That, to me, is the production part, where lean thinking fits extremely well. When it comes to the design part, where I am starting with a very fuzzy idea and asking how I am even going to solve the problem, let alone deploy it in the field, I do not even know what shape or form it is going to take. A lift-and-shift of lean thinking does not make complete sense there, because I am not repeating that operation n number of times. It is very easy for me at the end of the process to say we should have removed this waste; but when I am starting the process from left to right, I do not know whether a step is going to lead me to waste or not.

In Six Sigma parlance, if I walk from right to left, I can always knock off the unnecessary processes; but if I am in a discovery process, a creative process, I cannot say "let us not do it", because there is no guarantee: we are solving it for the first time. The lean principles per se are not a direct lift and shift. However, a lean mindset is an important part of the whole thing. What does it mean to me?

What it means to me is this: when I am solving a problem, as we saw during the dot-com days, we have heard horror stories of how people would build a startup, stay in stealth mode for one or two years, then announce the next big shiny thing, and only then realize there was a big mismatch with what people were actually waiting for.

This is the whole thinking of Eric Ries, where he has married the ideas of lean thinking into a start-up context. That is where we bring in the lean mindset: we say we do not know what is going to work for us, so why don't we create a closed-loop management system where we take a tiny bit of the problem? And what is that problem? It is a very risky hypothesis: if we go down that path, maybe there is a 90% chance it will fail, but we do not know which 90% will fail. Now, what is the best you can do in that kind of scenario? You take baby steps. You reduce the problem to a very small thing, you test it as soon as you can against the hypothesis, and once you have confirmatory tests available, you tweak on that: either you pivot, or you build on top of it.

That thinking, the whole lean mindset, is very important in software design and construction, which is essentially a design and discovery problem. When it comes to production, a lot of these principles can help remove wastage from the system and standardize it, like coding guidelines. We have been using coding guidelines for the last 40 or 50 years probably, which is a very standard thing in a 5S kind of context if I look at it through lean, because we are also trying to make a standard way of doing things. We should be able to separate these out and say: this is lean thinking or lean mindset, and these are the lean practices which really help us.