In this article we take a look at the actions being taken to contain the COVID-19 pandemic and relate them to what it takes to deliver clean code.
Coronavirus, aka COVID-19, is a global talking point now. Now a pandemic, it has shaken the entire population of the world and battered businesses and the world economy. The major stock markets are taking a massive beating, driven by the sentiment that there is no medicine for it as of now.
Interestingly, the suggestions posed by experts are pretty basic things related to good hygiene, and this seems to be the only way to go given that there is no medicine as of now. What can we learn from COVID-19 to deliver clean code?
So what is being done to contain COVID-19? Well, there are THREE actions, in terms of Prevention, Detection and Containment, being aggressively pursued. Three groups, namely young children, old people and the sick, are the most vulnerable as of now.
What is being done in terms of PREVENTION that we can relate to code?
Cover when you sneeze/cough => USE ASSERTS, HANDLE EXCEPTIONS
Wash hands frequently => SIMPLIFY, REFACTOR
Don’t touch surfaces => STRIVE FOR LOW COUPLING
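To make these mappings concrete, here is a minimal Python sketch (our illustration, not from the article). The names PaymentGateway and process_order are hypothetical; the point is asserting preconditions, handling exceptions at the boundary, and depending on a narrow interface to keep coupling low.

```python
# A minimal sketch of "prevention" in code: assert preconditions,
# handle exceptions, and keep coupling low by depending on a narrow
# interface rather than a concrete implementation.

class PaymentGateway:
    """Narrow interface: callers depend only on charge(), keeping coupling low."""
    def charge(self, amount_cents: int) -> bool:
        raise NotImplementedError


def process_order(gateway: PaymentGateway, amount_cents: int) -> bool:
    # "Cover when you sneeze": assert the precondition instead of
    # letting a bad value propagate silently.
    assert amount_cents > 0, "amount must be positive"
    try:
        return gateway.charge(amount_cents)
    except ConnectionError:
        # Handle the failure at the boundary rather than leaking it upward.
        return False
```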
What is being done in terms of DETECTION that we can relate to code?
Check for fever => DO BASIC TESTS
Check for other symptoms => USE SMART CHECKLISTS, DO DETAILED TESTS
Check if contact with affected => DO STATIC CODE ANALYSIS
Check origin on arrival (for travellers) => WRITE DEFENSIVE CODE
Check travel history => ANALYSE CODE COVERAGE
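As a hedged illustration of two of these checks, the sketch below treats "write defensive code" as validating input on arrival, and "do basic tests" as quick pytest checks. The function parse_age and its rules are hypothetical.

```python
# Defensive code plus basic tests: validate at the point of entry,
# just as travellers are screened on arrival, then fever-check it.

import pytest

def parse_age(raw: str) -> int:
    # Reject garbage before it travels any further into the system.
    if not raw.strip().isdigit():
        raise ValueError(f"invalid age: {raw!r}")
    age = int(raw.strip())
    if not 0 <= age <= 150:
        raise ValueError(f"age out of range: {age}")
    return age

def test_parse_age_basic():
    # The quick "fever check" of the code.
    assert parse_age("42") == 42

def test_parse_age_rejects_garbage():
    with pytest.raises(ValueError):
        parse_age("forty-two")
```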
What is being done in terms of CONTAINMENT that we can relate to code?
(In this SmartBits, Tathagat Varma outlines “How to reduce waste (bugs)?”. The video is at the end of this blog)
It is the holy grail of software development. We have seen since the CMM days that defect prevention has been the holy grail. We fail to recognise that software in itself is not isolated and independent. The context in which it operates is changing all the time, be it evolving systems, hardware, software, networks or evolving user behaviour.

Ten years back, when the iPhone was launched, people found the pinch-and-zoom feature very novel to use. Today it is a standard. The context in which people use software is changing all the time. There is the good old saying that a stitch in time saves nine, and we saw during the waterfall days how the cost of a defect amplifies if it is not prevented early. Today, even if we say we are agile, fixing a defect in production still costs a lot. Prevention will definitely have its own role to play.
We are doing a lot of software reuse, whether third party or open source. We don't always have control over many of the components we use. We get a component from somewhere and just use it. Do we really have insight into what went into it when it was being designed? Do we have control over what kind of defect prevention steps were taken? This may or may not be known. We will still need some way of qualifying the components and putting them together, and the speed at which we are aiming to push our systems into production will require a lot of us to reuse.
In some sense, software construction is being reduced to plumbing the whole thing together. It is like the job of an architect constructing a hundred-storey building: we don't make bricks or steel pipes anymore. These just get sourced and put together, and then a system gets created in which we are able to apply incoming check criteria to decide whether a component is of acceptable quality to us. Here some level of component testing is done to see whether it meets our requirements.
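A minimal sketch of such an incoming check, using Python's standard json module as a stand-in for a sourced component; in practice the same smoke tests would target the real third-party dependency before it is plumbed in.

```python
# Incoming qualification check for a reused component: smoke-test the
# behaviour we actually rely on. The stdlib `json` module stands in for
# a sourced third-party component here.

import json
import pytest

def test_round_trip_is_lossless():
    # The property we depend on: serialise then parse gives back the data.
    payload = {"id": 7, "tags": ["a", "b"], "ok": True}
    assert json.loads(json.dumps(payload)) == payload

def test_rejects_malformed_input():
    # The component should fail loudly, not silently, on bad input.
    with pytest.raises(json.JSONDecodeError):
        json.loads("{not valid json")
```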
Secondly, we put them together and see whether they really don't burn each other out. If we put this into the context of having to release into production a hundred times a day, it is inevitable that we will end up doing a lot of automated testing. At some point we have to understand that blindly relying on automated testing to solve or uncover every problem is a dangerous stance. We should really use it to make sure that whatever the behaviour was in the last build remains unchanged and unbroken, and that becomes a very good verification. If new features or behaviour have been introduced, delegating them to automated testing in the first instance might need a lot of subjectivity; we need to be a little careful. There is no definite yes or no answer, but "a balance" is probably the best answer.
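One way to read this in code: use automation to pin down the behaviour of the last build and flag any drift for a human to judge. A hedged pytest sketch, where slugify and its golden values are hypothetical:

```python
# Regression-style test: pin the behaviour of the previous build so a
# new build cannot silently change it. `slugify` is a stand-in function.

import re

def slugify(title: str) -> str:
    return re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")

def test_slug_behaviour_unchanged():
    # Golden values captured from the last build; any drift fails the
    # build and is escalated to a human rather than silently shipped.
    golden = {
        "Hello, World!": "hello-world",
        "Clean Code 101": "clean-code-101",
    }
    for title, expected in golden.items():
        assert slugify(title) == expected
```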
(In this SmartBits, Shivaji Raju outlines “Digital test automation”. The video is at the end of this blog)
There is greater focus on services testing, with a lot of applications being built on service-based architectures; that is one of the things changing significantly. From a framework standpoint, there are approaches like BDD and some custom Java-based solutions that people tend to use, compared to the traditional keyword-driven approaches we used in the past. The scale at which we test on different types of devices has also increased vis-a-vis when we used to test only on one browser, like IE. Now the expectation is that we test the product on the web, on native apps and on different types of devices, so the combinations increase.
The other change is in terms of testing platforms. We don't want to host mobile devices on premise. We test on Sauce Labs, Perfecto or LambdaTest; these are some of the solutions available that can be used to scale up testing. A lot of projects have also moved from waterfall to Agile-based implementations.
The other big thing is DevOps. In the context of testing, continuous testing is something that needs to be automated. So, beyond automating the execution aspects, where we automate smoke, regression or other types of tests, whether UI or services, we also need to ensure that the infrastructure and the test-data elements are automated. When we say infrastructure: can we automate the deployment of builds onto test servers, or provision instances on the fly instead of having manual dependencies, so that we get an end-to-end continuous pipeline?
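As a small sketch of automating the test-data element, the pytest fixture below provisions a throwaway in-memory SQLite database before each test and tears it down afterwards. The table and rows are hypothetical; in a real pipeline the same idea extends to instances provisioned on the fly.

```python
# Automating test data in a continuous-testing pipeline: each test gets
# freshly provisioned data with no manual dependency.

import sqlite3
import pytest

@pytest.fixture
def seeded_db():
    # Provision "infrastructure" and data on the fly.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
    conn.executemany("INSERT INTO users (name) VALUES (?)",
                     [("asha",), ("ravi",)])
    conn.commit()
    yield conn
    conn.close()  # Tear down so every run starts clean.

def test_smoke_user_count(seeded_db):
    count = seeded_db.execute("SELECT COUNT(*) FROM users").fetchone()[0]
    assert count == 2
```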
Healthy code is not just about working correctly. It is about future-proofing, maintainability, adaptability, reusability and so on. Just as in real life your face shines when you are in the pink of health, beautiful code also shines!
“Healthy code is not the outcome of review or testing, it is from doing simple things diligently.”
What comes to your mind when you hear the word ‘HEALTHY’?
It really is a combination of a healthy mind and a healthy body. So when I say “HEALTHY HABITS”, what do you think of?
Now, let us relate to code. What is healthy code?
So what are the habits to stay healthy? Let us correlate these habits to software.
Let us reorder these seven habits …
(In this SmartBits, Shivaji Raju outlines “Key trends in automation”. The video is at the end of this blog)
Key trends in automation include a significant focus on services testing, with the architecture changes coming in. With the service-first approach and microservices, there is a lot of emphasis on testing the services. The other aspect is that traditionally we had a lot of test cases and automation scripts at the UI level. That seems to be changing: we are trying to strike a balance between different layers of testing rather than just focusing on the UI.
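A hedged sketch of what a service-layer test looks like below the UI; the endpoint https://api.example.com/v1/orders/42 and its fields are hypothetical, and the point is asserting the service contract rather than driving a browser.

```python
# Service-layer test: hit the API directly and assert the contract.
# The URL and response fields are illustrative assumptions.

import requests

def test_get_order_contract():
    resp = requests.get("https://api.example.com/v1/orders/42", timeout=5)
    assert resp.status_code == 200
    body = resp.json()
    # Assert only the fields consumers depend on, not the whole payload.
    assert body["id"] == 42
    assert "status" in body
```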
The other trend is in tools. We were predominantly using tools like HP UFT, the Rational tools, or Selenium to some extent. The trend now is shifting towards script-less solutions like Tosca, Functionize and Mabl, which have the ability to build scripts faster. Some trends can be seen on the framework front too. We traditionally had QA-driven approaches which were quite heavy. Now the shift is towards lightweight approaches and frameworks, especially in the Agile context, and frameworks that integrate with the toolsets of the DevOps pipeline itself. The need is to ensure that the framework integrates with different build tools like Maven, JIRA or Artifactory. Those are some of the expectations when building a framework solution.
Again, DevOps is a significant trend now, so the expectation is to see how we could automate the continuous testing pipeline, considering that it's not just about automating the execution piece: automate the test data, and probably the infrastructure piece as well. For provisioning infrastructure from the cloud there are certain tools, and then there are trends around virtualizing services, and even virtualizing databases, to ensure that some automation is brought into the data and services layers. That is how we would achieve end-to-end CI or CT automation.
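As a minimal sketch of service virtualization under these assumptions, the stub below stands in for a JSON-over-HTTP dependency using only Python's standard library; tests can point at http://127.0.0.1:8099/ instead of the real backend.

```python
# Service virtualization sketch: a canned stub replaces the real
# downstream service so tests can run without the actual backend.

import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

class StubService(BaseHTTPRequestHandler):
    def do_GET(self):
        # Canned response standing in for the real downstream service.
        body = json.dumps({"status": "ok", "source": "virtualized"}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

def start_stub(port: int = 8099) -> HTTPServer:
    server = HTTPServer(("127.0.0.1", port), StubService)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server
```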
(In this SmartBits, Zulfikar Deen outlines “The challenges of legacy code”. The video is at the end of this blog)
We move quite slowly compared to how the industry moves. It is like different cogs in a gearbox, where one moves very slowly and the other is moving really fast, but there needs to be a proper connection in place. If you are looking at the quality of the whole system, you need to look at it holistically. When integrating with the slow-moving system, how do you ensure the data moves at the right speed, with the right velocity, and is tested correctly?
The biggest challenge would be that, in the existing system, the data may not be good. There could be a lot of garbage in the data because the system has been used for several years. When you are integrating with newer technology, a wonderful new system, and after plugging it in it expects good clean data, that is not going to happen. It is going to be very challenging in terms of fidelity, velocity and data cleanup.
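A small sketch of guarding an integration against such garbage data: validate each incoming record, quarantine the bad ones for cleanup, and pass on the rest. The record shape (id, email) is a hypothetical example.

```python
# Guarding an integration against dirty legacy data: validate incoming
# records, quarantine the garbage, never silently drop anything.

def clean_records(rows):
    good, quarantined = [], []
    for row in rows:
        if str(row.get("id", "")).isdigit() and "@" in row.get("email", ""):
            good.append(row)
        else:
            quarantined.append(row)  # Keep for later cleanup and audit.
    return good, quarantined

good, bad = clean_records([
    {"id": "101", "email": "a@example.com"},
    {"id": "", "email": "not-an-email"},   # typical legacy garbage
])
assert len(good) == 1 and len(bad) == 1
```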
A partner may talk about jQuery, or about possessing technologies like an ML engine. My internal team may not be able to connect with them. There are so many technologies out there, and my internal team is focused on making sure my system keeps running operations, so that becomes the primary focus.
The technology may be very interesting, but the internal team may not be able to connect with what you are saying. As a partner, we need to make sure that the system is holistically tested. It is very critical to hand-hold the internal team when doing the integration pieces, during testing, operationalizing and support.
(In this SmartBits, Srinivasan Desikan outlines “The evolution of dev”. The video is at the end of this blog)
If you look at the industry's evolution, you can classify it into four dimensions. The first evolution happened on the infrastructure side, where individual people used to have their own PCs and wanted to share them. That is when the test lab was created. If you really look at it, that dimension was initiated by test engineers: they consolidated all the machines, and that was called a test lab. Over a period of time, compliance requirements were added and it became a data centre. Once they had the data centre, they wanted more operating systems than they had, so they created virtualisation.
The testing folks said, let us have quick control of all these resources, whether virtual or real. Then, looking at the reliability of these test-lab data centres, companies decided to host their services on them so that their customers could use them, with the very processes created by the test engineers. That became the industrialized data centre, and then the cloud evolved from it.
The cloud is nothing but something elastic: you put a small data centre somewhere and keep growing it based on the needs and the utilization. Elasticity does not mean only expanding; whenever there is no load, you bring it down. That is what the cloud is. After the cloud, people are now talking about virtual images within virtual images, which are called Docker containers and Kubernetes. That is one side of it: dimension one, which was initiated by the test engineers.
The second dimension was obviously not created by the testing teams, but it has evolved over a period of time. In those days we used to have desktops; then came laptops, smaller in size; then palmtops, even smaller. Then it became mobile, and today we are talking about IoT devices which go and sit in each and every appliance at home. That is the second dimension of the evolution.
The third dimension is the process delivery model: the waterfall model, where coding is strictly completed and only then testing is done. Then people wanted to parallelize testing and development, and it became the V model; the V model became the spiral model; the spiral model became the agile model. The agile model is now becoming a super-lean model, called lean development, with its processes and related practices. That is the process delivery model, the third dimension.
The fourth dimension of the evolution I have seen in the last 30 years is the customer payment model. Initially, when all the desktops and other machines were available in the office, they were all purchased by paying full cash: one hundred percent capex and zero percent opex. The electricity was paid by the same company; the maintenance was paid by the same company. There were no operational expenses; everything was a capital expenditure. Slowly they moved to a mix of capex and opex: they wanted to outsource some of the maintenance of those machines, and therein they brought in an element of operational expense beyond the capital expenditure. Then they went to another model with less capex but more opex, and started renting the machines from outside. After that, the consumption-based volumetric model evolved: you buy a product from us, and depending on the number of users you have, or the amount of utilization those users generate, we charge you. That became the consumption-based volumetric model.
The last one, which is becoming predominant now, is pay-per-use: you do not invest anything, so it is zero capex and everything is opex. It is like going in an Uber or an Ola: the insurance is paid and the car is bought by the company, and you just go and use it, paying for the amount of time or the distance you have used. Whenever you want, you can rent a Mercedes today and use it as if it were your own vehicle, but you still need not own it.
These are the four different dimensions in which the industry has evolved, in my opinion. The intersection of all four is the sweet spot all test engineers should work on. Dimension one and dimension three, the infrastructure and the process delivery model, were both driven by test engineers. Moving from the waterfall model to the V model was actually done by test engineers; moving from individually kept machines to a test lab was initiated by the testing teams. In the majority of the evolution you see in the industry, the starting point is testing, and the sweet spot for test engineers is to look at all four dimensions and how they are evolving. What I have told you is only the 30-year story; it will continue for another 30 or 60 years. The dimensions will stay, but the evolutions will continue to happen. The sweet spot tells you how fast we should evolve, so speed is very important.
The second thing is no investment. Everything is becoming virtual: companies will become virtual, offices will become virtual, people can work from home and take workloads from the internet, and you may not have any manager. You may not have any infrastructure; your capital expenditure may be zero sitting at home, and you will still be delivering products. Most of what you do sitting at home would be testing. That is where I think the industry is evolving.