SmartQA Community

The evolution of test

(In this SmartBits, Srinivasan Desikan outlines “The evolution of test”. The video is at the end of this blog)

In the earlier days we used to have three different teams – a configuration management team to build the product, a development team that develops the software, and a test team that tests the system. The configuration management team would make the build, do whatever checks were necessary, and then give only the binary version to the testing team. This was called the build ball: the testing team would roll with it, test it, and report problems.

In those days, about 30% of the problems were caused by configuration management itself. It was not the developers who were introducing the defects; it was the configuration management team introducing them through wrong builds. The build was not being validated properly, and build validation evolved from the testing team. The configuration management team asked the test team how to check whether a build was correct or wrong. The testing team gave them a suite called a sanity suite, or build verification test, and the build team used it to validate each build before giving it to the testing team, which resulted in a quality improvement.
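A build verification test of the kind described above can be as simple as a script the build team runs before handing over the binary ball. The sketch below is a minimal, hypothetical illustration: the artifact names (`app.bin`, `app.cfg`, `VERSION`) and the version check are assumptions, not anyone's actual BVT.

```python
import os

# Hypothetical list of artifacts every valid build must contain.
REQUIRED_ARTIFACTS = ["app.bin", "app.cfg", "VERSION"]

def verify_build(build_dir, expected_version):
    """Return a list of problems; an empty list means the build passes the BVT."""
    problems = []
    for name in REQUIRED_ARTIFACTS:
        if not os.path.exists(os.path.join(build_dir, name)):
            problems.append(f"missing artifact: {name}")
    version_file = os.path.join(build_dir, "VERSION")
    if os.path.exists(version_file):
        with open(version_file) as f:
            actual = f.read().strip()
        if actual != expected_version:
            problems.append(
                f"version mismatch: expected {expected_version}, got {actual}")
    return problems
```

Run as a gate in the build pipeline, a non-empty result blocks the hand-off to the testing team – which is exactly the quality improvement the sanity suite brought.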

Wherever there are boundaries, more problems lurk around them. The problems always occur at the intersections: between development and configuration management, between configuration management and testing, and at the boundary between development and testing.

These problems brought about a new evolution called DevOps, which means that from development all the way to operations, everything should be automated so that no boundaries are left. If one has to automate something, it has to start with test automation, so again the evolution started with the test engineers. The build got automated with scripts, the binary ball got automated, the testing that ensures the build is done properly was automated, and functional testing as well as the developers' builds also got automated. A good amount of code written nowadays is auto-created code, to which developers add return-value checks and exception handling. Eventually that also got automated.

We are entering the DevOps model, where a good amount of this is automated and people are really needed only when something breaks or when more value needs to be added to the automation. We no longer deliver merely code, builds, or testing; we deliver to the purpose of the whole thing, which is the end result. That is where DevOps is taking us.

Challenges in testing big-data applications

(In this SmartBits, Arun Krishnan outlines “Challenges in testing big-data applications”. The video is at the end of this blog)

“I understand that big data is characterized by three V’s – volume, velocity and variety – with data formats classified into structured, semi-structured and unstructured, acquired from a variety of sources. What are some of the challenges or issues that these pose to validation?” Dr Krishnan’s answer to this question is below.

The three V’s – Volume, Velocity and Variety – actually depend on who you are. There are 4-V and 5-V definitions as well. Some definitions of Big Data revolve around volume, variety and velocity, but in a true sense all of these are relative.

There are people who say that 1 TB of data and above is Big Data. The best definition as of today is: one bit more data than your system can handle. If a system has 8 GB of memory and there are 8 GB plus 1 bit of data, it cannot be loaded into memory; that is Big Data, and it then needs to be broken into chunks.
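Breaking data into chunks, as described above, just means processing a file piece by piece so that only one chunk is ever in memory. A minimal sketch, with an assumed format of one number per line and a made-up chunk size:

```python
def sum_values(path, chunk_lines=10_000):
    """Aggregate a file too large for memory by processing it in chunks."""
    total = 0.0
    with open(path) as f:
        chunk = []
        for line in f:
            chunk.append(float(line))
            if len(chunk) >= chunk_lines:
                total += sum(chunk)  # process one chunk, then discard it
                chunk = []
        total += sum(chunk)  # remaining partial chunk
    return total
```

The same pattern – read, process, discard, repeat – is what frameworks built for Big Data apply at scale across many machines.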

When we talk about Big Data, we need to understand that it is relative. What Big Data is to a retail chain, where point-of-sale data comes in every second or every minute, need not be the same for a company that tests software, where the focus is on test results coming in every few minutes or every hour.

HR data from a retail chain's perspective is not big data, but from an HR perspective it is: they have a variety of data sets coming in and they have to pull it all together. The trick is in bringing all the data together and then digging deeper into it. It's not about the data quantity.

Data come in different forms – structured, semi-structured or unstructured. How we tie it all together and how we gain insights from it is analytics. As an example, one of my students did an internship at an Indian public sector unit, where he was asked which are the best colleges to hire from. A huge amount of data could be gathered for this, but the student did something really simple and straightforward. He took the average performance score for every college and the average time that hires from each college had spent in the organization, then plotted the data with performance on the Y-axis and time spent on the X-axis. He then arbitrarily drew two lines, one parallel to the X-axis and one parallel to the Y-axis. That suddenly gives four quadrants, and interestingly enough all the IITs came in the bottom quadrant – low retention and low performance. Very simple things can be done in analytics, but the idea here is tying these two pieces of data together.
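The quadrant analysis in the story above can be sketched in a few lines: classify each college by whether its averages fall above or below two chosen cut-offs. The college names, scores and cut-offs below are invented purely for illustration.

```python
def quadrant(performance, retention, perf_cut, ret_cut):
    """Label a point by which of the four quadrants it falls into."""
    vert = "high" if performance >= perf_cut else "low"
    horiz = "high" if retention >= ret_cut else "low"
    return f"{vert} performance / {horiz} retention"

# Hypothetical data: (average performance score, average years retained).
colleges = {
    "College A": (7.8, 4.2),
    "College B": (6.1, 1.5),
    "College C": (8.5, 0.9),
}
labels = {name: quadrant(p, r, perf_cut=7.0, ret_cut=2.0)
          for name, (p, r) in colleges.items()}
```

The analytical value lies not in the classification itself but in joining two previously separate data sets – performance and retention – into one picture.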

Even for testing, it is important to figure out how we can tie together the real-time data coming in from devices, what we are getting from server logs, and what developers might be putting in as comments, and then use all of that to infer what the issue would be and build the test cases.

What is big data/analytics?

(In this SmartBits, Arun Krishnan outlines “What is big data/analytics?”. The video is at the end of this blog)

Analytics is a buzzword these days, but quite honestly we have been doing analytics for a long time. If you think about it, an animal's fight-or-flight response is analytics: it uses its past experience – if the grass in the savannah is moving, is it because of a lion or the wind?

The reason it has become so buzzwordy is primarily digitisation, the explosion in compute power we now have at our fingertips, and the ease with which data gets processed. Twenty years ago the concept of grid computing was quite prevalent; it slowly morphed into Big Data, and now the buzzword is AI, a term everybody wants to use. Essentially, analytics is a way to look at patterns within data and take meaningful decisions; that is how it can be looked at in the current scenario.

We all assume that businesses have the best systems, but that's not the truth. Even the biggest organizations are still very spreadsheet-oriented, though there may be specific departments like Sales or Finance that make greater use of analytics. Likewise, specific industries like banking and finance have long used mathematical models and tools to forecast.

There is more talk than there is action on the ground. Personally I am a big believer that the more we crunch data, the more patterns we can identify and the better the outcomes we get, and some companies do it really well – classic examples would be Amazon, Google or Facebook. But there are industry segments where there is a lot of scope to use analytics for the better.

QA skills for digital world

(In this SmartBits, Sriramadesikan Santhanam outlines “QA skills for digital world”. The video is at the end of this blog)

Skills and competencies are most important. One needs to understand that it is not just the business and not just the technology – it is both. One needs to acquire multiple skills: understanding what the customer is going to use the product for, how it is getting implemented, the expectations of the various end users, and their perspectives.

Acquire the skills and competency to understand these. That's what I advise – try to understand what customers are looking for, what their customers are seeking, and finally what the end users are looking for; only then will one be able to test from their shoes.

We as QA people also need to understand the customer's program objectives; that is most important. Only then will we be able to align to their expectations, understand why they are making these changes, and appreciate the impact and uplift they are looking for from modern technologies.

The change in skill requirements is this: in addition to knowing the technology, one must be able to test to meet the customer's program objectives. Many times in the product world, people have never seen how customers really use the product – are we aligned to them? Testers now have to have good tech/dev skills, and business analysis skills too.

KPIs for enterprise solution QA

(In this SmartBits, Sriramadesikan Santhanam outlines “Enterprise Customers & Quality”. The video is at the end of this blog)

There is a change in the perspective of quality and in the measurements/KPIs for QA teams that deliver productised solutions to large enterprise customers. When we develop a product, all the KPIs are based on the features committed – code coverage, test coverage, defect density and so on.

In a typical product life cycle those are the KPIs to look at, but when we implement the product as a solution, it is customized to the customer's specific business requirements, so the KPIs need to be aligned with customer-specific expectations. Then, when we roll it out to their end customers, it becomes a program for them.

How the whole integration works, the end-to-end business flows, and how customers see it in terms of user experience, operational efficiency and ease of use should now be the basis of the KPIs. So the KPIs definitely change from product to project to program: when we start with the product and move on to the program, the KPIs should change to stay in line with the customer's expectations.

Should I know the architecture to test?

(In this SmartBits, Jawahar Sabapathy outlines “Should I know the architecture to test?”. The video is at the end of this blog)

I think it is a given. You cannot validate something if you do not know how it is constructed and what it is doing. One has to validate the normal functionality – in the case of a vehicle, both when the road is good and when it is really bumpy and bad. Hence I need to know under what operating conditions the vehicle can perform correctly and how it is built. For example, if the vehicle has tubeless tyres, we may have to test it differently.

The same is applicable to software construction: when we deliver on high availability or scaling, the mechanism used must be known so that we can simulate those minimum, maximum and outlier conditions and see whether it works in them. At the very least we should know and document it. So a deeper understanding of architecture and deployment is probably more necessary for today's validation teams.

Business mindset

(In this SmartBits, Zulfikar Deen outlines “Business mindset”. The video is at the end of this blog)

As a partner/solution provider, the first and foremost need is a partnership mindset with the IT team of the organization. It is about understanding the difficulties that have been articulated, spending a few days with the team in meetings, understanding the process and the difficulties, and empathising with them. The solution needs to be planned right from the technology to the roll-out to support, along with the organization. Hence taking a partnership approach is of utmost priority.

Secondly, the solution needs to be cognizant of the whole life cycle of the system – be it a patch upgrade, support or training. Everything has to be taken into consideration for the solution to be successfully used by the user; the whole chasm needs to be crossed. As a solution provider one must keep building these layers and parts of the system with a view of the whole thing in the solution, as it then becomes much easier to roll it out.

Thirdly, never attempt a production roll-out of a half-baked solution; it is going to be very, very challenging. For instance, if there are twelve thousand users, one cannot control the users' perception. Once users get the perception that the system is not good, it is very difficult to recover from there.

If we have to deliver a minimum viable product of three features, we need to do those thoroughly well – make sure the product is well integrated and works well before we put it into the system. One should never take the view of handing the system over to users, getting feedback and then figuring out what to do with it. This is a rather different counter-view to the DevOps process.

Finally, always build in a baked-in adoption metric; people don't normally do that. If a system is rolled out, the management, the CIO, or even we as the solution provider should be able to look at adoption as a business metric, where actual adoption, usage and everything else is already baked in. These are views beyond technology, database or architecture, and they need to be part of one's thought process when building a solution.

When we work with solution providers that are small rather than large, there is always a thought about the risk of the startup going down. Then what happens? How do we protect our business is always on our mind, and we should never underestimate that thought process. The best way forward, as much as possible, is to build on open standards and open platforms. When we use open source, it gives a sense of comfort: if something goes wrong, we will be able to find the skills to manage it. Otherwise, we are already challenged with a team that cannot scale up with rapidly moving technology; we can't take on one more solution and then have to figure out what happens with the system. So it's an easier sell for a solution provider who builds on open standards and takes that to the market.

Open standards could be domain specific – for example, in healthcare it could be HL7 – or technology specific. Whether domain, technology or both, building on open standards makes it easier for us to make the right decision.

How to reduce waste(bugs)?

(In this SmartBits, Tathagat Varma outlines “How to reduce waste(bugs)?”. The video is at the end of this blog)

It is the holy grail of software development; we have seen since the CMM days that defect prevention has been the holy grail. We fail to recognise that software in itself is not isolated and independent: the context in which it operates is changing all the time, be it evolving systems, hardware, software, networks or user behavior.

Ten years back, when the iPhone was launched, people found the pinch-and-zoom feature very novel; today it is a standard. The context in which people operate is changing all the time. There is a good old saying, "a stitch in time saves nine", and we saw during the waterfall days how the cost of a defect amplifies if it is not prevented. Today, even if we say we are agile, fixing a defect in production still costs a lot. So prevention will definitely have a role to play.

We are doing a lot of software reuse, whether third party or open source, and we don't always have control over the components we use. We get a component from somewhere and just use it. Do we really have insight into how it was designed? Do we have control over what defect prevention steps went into it? This may or may not be known. We will still need some way of qualifying the components and putting them together, and the speed at which we aim to push our systems into production will require a lot of reuse.

In some sense, software construction is being reduced to plumbing. It is like the job of an architect constructing a hundred-storey building: we don't make bricks or steel pipes anymore. These get sourced and put together, and a system is created in which we can apply incoming check criteria – is this of acceptable quality to us? Here some level of component testing is done to see whether the components meet our requirements.

Secondly, we put them together and see whether they burn each other out. If we put this in the context of having to release into production a hundred times a day, it is inevitable that we will end up doing a lot of automated testing. At the same time, blindly relying on automated testing to uncover every problem would be a dangerous position. We should use it to make sure that the behaviour of the last build remains unchanged and unbroken – that makes for very good verification. If new features or behaviour have been introduced, delegating them to automated testing at the first instance involves a lot of subjectivity, so we need to be a little careful. There is no definite yes-or-no answer; "a balance" is probably the best answer.
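The verification role described above – checking that last build's behaviour remains unchanged – can be sketched as a baseline comparison. Everything below is hypothetical: `behaviour` stands in for whatever feature is under test, and `BASELINE` for outputs recorded from the last known-good build.

```python
def behaviour(x):
    """Stand-in for the feature under test in the current build."""
    return x * 2 + 1

# Outputs recorded from the last known-good build (input -> expected output).
BASELINE = {0: 1, 1: 3, 5: 11}

def regression_check(fn, baseline):
    """Return the inputs whose current output differs from the baseline."""
    return [x for x, expected in baseline.items() if fn(x) != expected]
```

An empty result means the old behaviour is intact; a non-empty one flags exactly where the new build diverges. New behaviour, by contrast, has no baseline yet – which is why it still needs human judgment first.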

Management expectations of CIO & IT team

(In this SmartBits, Zulfikar Deen outlines “Management expectations of CIO & IT team”. The video is at the end of this blog)

Whether the end-user organization is small or large, the challenges remain the same. For a large multinational, multi-billion-dollar corporation or a smaller organization, the challenges are very similar: security, adoption, consumer understanding, delivery, timelines or quality.

The difficulty for a smaller organization is that the IT team is much smaller – not an army of people – and they do not have a huge budget to tackle the same challenges. With small, shoestring budgets it is difficult to bring newer technology and solutions into operations. Not having an appropriate budget is an important challenge, but that doesn't mean they will be left far behind: they still have to adapt, invent and move at the same speed.

The next challenge is that business leaders pick up sound bites (for example, Blockchain) from meetings they attend and are keen to implement them. Our role is to distill these correctly and make sure they are used appropriately in the context of the business and the readiness of the systems, while ensuring we don't fall too far behind.

Another aspect to look at is the board's and top management's – the CXO organization's – attitude to technology. Often CXOs would neither be risk-taking forerunners nor want to lag behind. We need to understand what level of comfort the management has and then play along with that: watch a technology shaping up and, as soon as it is ready for use in the system, start putting it in place.

Technical debt is fat, clean code is liposuction

Raja Nagendra Kumar in discussion with T Ashok (@ash_thiru on Twitter)


Raja Nagendra Kumar outlines the role of refactoring, unit testing in producing clean code. He states this very interestingly as “Technical debt is fat, clean code is liposuction” and crisply explains the act of producing clean code.

The video of this “nano learning” smartbit is available here.

Question: In our conversations you said “Technical Debt is Like Fat, Clean Code is Liposuction”. It is a very interesting phrase; could you please elaborate?

This phrase came out of the intention of producing world-class products from India, where every developer, as he gets married to the profession, has a duty to produce world-class products.

In that context, every day, whatever code a developer writes, he is trying to produce a product baby, and as more and more code gets added – like the way a baby learns each day – the growing code base also starts accumulating a lot of fat. If the future of the baby is to be better, you should know how to grow the baby, or constrain it in the right way, and that is where clean code practices start coming in.

As more and more code comes into the product, on one side product growth happens, and on the other side clean code practices should identify what is relevant now and start cutting the fat. The beauty of this approach is that it is not like a doctor operating on somebody else: here the professional himself operates on yesterday's code – whatever he has written – removing the code that is no longer relevant or modifying it to scale better.

So, when anything grows there is also fat, and that fat has to be transformed in a way that takes advantage of it and leaps forward much faster. Otherwise, the two options are living with the fat and not being able to run, or becoming a better professional by identifying the fat on time and running faster.

What do you mean by clean code?

When we are trying to achieve something, there are a lot of abstractions getting in the way, which no code communicates to you. For example, you want to write a feature X and you have done it in a certain way, and now feature Y comes in. To position feature Y, you need creative thinking rather than just fitting it in alongside X.

Now the product will have both X and Y. Instead of merely fitting them together, you need to understand what affects Y's coexistence with X; the engineer must listen to these clashes and see how to refactor so that Y can go in more smoothly. Unless we listen to our own challenges – to what the code is speaking – clean code will not come.

Most of the time, people try to fit X, Y and Z as silos; they may work independently, but not coherently. My meaning of clean code is "listen to the last code while adding new code", and that is where people start talking about the need and purpose of refactoring and unit testing. They are not compliance; they are ways for you to bring out a world-class product from your own daily experience.
