SmartQA Community

What does it take to Build In Quality?

T Ashok @ash_thiru

Summary

This article is a set of brilliant ideas curated from four articles: the first suggests ten ways to build high quality into software, the second, from the Scaled Agile Framework, outlines a clear Definition of Done, the third highlights how lean thinking and management help, and the last explains how Poka-Yoke can help with mistake-proofing.


Building Quality In

“When looking at quality from a testing perspective, I would agree that it is not possible to build software quality in. To build quality in you need to look at the bigger picture. There are many ways to improve quality. It all depends on the problem. Maybe, you can automate something that previously had to be done by a human being. Maybe, you need training to better use the tools you have. Maybe you need to find a better tool to do the job. Or maybe, you need a checklist to remind you of what you need to look at. The possibilities are endless.

That’s not what I’m talking about when I talk about building quality in. Building in quality requires a more general, big-picture approach,” says Karin Dames in the insightful article 10 Ways to Build Quality Into Software – Exploring the possibilities of high-quality software, where she outlines TEN guidelines to consistently build quality into software:

1. Slow down to speed up
You either do it fast, or thoroughly.

2. Keep the user in mind at all times
The story isn’t done until the right user can use it.

3. Focus on the integration points
Integration is probably the biggest cause of coding errors, understandably.

4. Make it visible
Spend time adding valuable logging, with switches to turn logging on and off on demand (a minimal sketch follows this list).

5. Error handling for humans
What would the next person need to understand this without having to bug me?

6. Stop and fix errors when they’re found
Done means done. End of story. Don’t accept commonly accepted levels of errors.

7. Prevent it from occurring again
Do a root cause analysis (RCA) to uncover what caused the problem in the first place and put a measure in place to prevent it from happening again.

8. Reduce the noise
Good design is simple. Good design is also good quality.

9. Reduce. Re-use. Recycle.
Focus on maintainability. A code base is organic. Factor in time for rewriting code and cleaning up code, just like you would spring clean your house regularly or clean up your desk.

10. Don’t rely on someone else to discover errors
Just because it’s not your job, doesn’t mean you shouldn’t be responsible. If you see something wrong, do something about it. If you can fix it, do it. Immediately.
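As a companion to guideline 4, here is a minimal, hypothetical sketch of switchable logging using java.util.logging; the PaymentService class and log messages are illustrative and not from the original article:

```java
import java.util.logging.ConsoleHandler;
import java.util.logging.Level;
import java.util.logging.Logger;

public class PaymentService {
    private static final Logger LOG = Logger.getLogger(PaymentService.class.getName());

    public void process(String orderId) {
        LOG.fine(() -> "processing order " + orderId);     // detailed diagnostics, off by default
        // ... business logic ...
        LOG.info(() -> "order " + orderId + " processed");  // routine, always-on message
    }

    public static void main(String[] args) {
        // The "switch": a handler and a level that can be flipped on demand
        // (in real systems this usually comes from a config file or flag).
        ConsoleHandler handler = new ConsoleHandler();
        handler.setLevel(Level.ALL);
        LOG.addHandler(handler);
        LOG.setUseParentHandlers(false);

        LOG.setLevel(Level.FINE);   // verbose while investigating an issue
        new PaymentService().process("A-1001");

        LOG.setLevel(Level.INFO);   // back to quiet, routine logging
        new PaymentService().process("A-1002");
    }
}
```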

Read the full article at 10 Ways to Build Quality Into Software – Exploring the possibilities of high-quality software

Reactive vs Proactive Quality Management

“To understand how to build quality into our products from the very beginning, we need to understand why this is not happening naturally. The most common way of preventing defects from reaching customers comes down to introducing a great number of inspections and countless KPIs or metrics into the process. The problem with this approach is that it is reactive. And wasteful. If we think in the context of value streams, neither inspections nor metrics add any value to the customer. At best, they help you discover and react to already produced defects. At worst, they encourage playing the system – you get what you measure,” says the insightful article “The Built-In Quality Management of Continuous Improvement”.

The article goes on to outline how lean management views the issue of quality and defects through the lens of value and continuous improvement.

Shifting into proactive quality management

Lean management views the issue of quality and defects through the lens of value and continuous improvement:

  • Value-centered mindset means everything you do needs to be bringing value to your client. Your client is anyone who receives the deliverable of your work.
  • Waste-conscious thinking helps remove whatever is not adding or supporting value. This results in fewer redundant metrics or steps in a process.
  • Continuous flow of work encourages working in smaller batches. This reduces the risk of larger defects, makes fixes easier and establishes a smooth delivery flow.
  • Bottlenecks are removed or guarded for the sake of the flow. If a work stage adds a lot of value but takes too much time, the cost of delay for the rest of the process might outweigh this value.
  • Pull-powered flow means efforts and resources should not get invested into the things irrelevant to your stakeholders.
  • Upstream leadership empowers the person doing the work to escalate issues, letting you cut the issues off at the root.
  • Analysis and continuous improvement. Applying the Lean principles once won’t do the trick. Continuously analyze your work, outcomes, mistakes and build on that.

Want to know more? Read the full article The Built-In Quality Management of Continuous Improvement.

Scalable Definition of Done 

The interesting article Built-In Quality states “Definition of Done is an important way of ensuring that an increment of value can be considered complete. The continuous development of incremental system functionality requires a scaled definition of done to ensure the right work is done at the right time, some early and some only for release. An example is shown in the picture below, but each team, train, and enterprise should build their own definition.”

Copyright Scaled Agile Inc. Read the FAQs on how to use SAFe content and trademarks here: https://www.scaledagile.com/about/about-us/permissions-faq/. Explore Training at: https://www.scaledagile.com/training/calendar/ 

Read the full article Built-In Quality.

On a closing note, “Have you heard of Poka Yoke?” Poka Yoke means ‘mistake-proofing’ or, more literally, avoiding (yokeru) inadvertent errors (poka). Its idea of preventing errors and defects from appearing in the first place is universally applicable and has proven to be a true efficiency booster.

Poka Yokes ensure that the right conditions exist before a process step is executed, thus preventing defects from occurring in the first place. Where this is not possible, Poka Yokes perform a detective function, eliminating defects in the process as early as possible.

Poka Yoke is any mechanism in a Lean manufacturing process that helps to avoid mistakes. Its purpose is to eliminate product defects by preventing, correcting, or drawing attention to human errors as they occur.

One of the most common examples is that the driver of a car with a manual gearbox must press the clutch pedal (a process step, the Poka Yoke) before starting the engine. The interlock prevents unintended movement of the car.
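The same interlock idea carries over to software: design the API so that the error-prone step simply cannot happen out of order. Below is a minimal, hypothetical sketch; the Engine and ClutchPressed names are illustrative and not from the referenced article.

```java
// A software Poka Yoke: the engine can only be started with proof
// that the clutch is pressed, so the mistake cannot be made at all.
final class ClutchPressed {
    private ClutchPressed() { }

    // The only way to obtain the token is to actually press the clutch.
    static ClutchPressed press() {
        System.out.println("Clutch pressed");
        return new ClutchPressed();
    }
}

final class Engine {
    // Without a ClutchPressed token the call does not compile:
    // prevention rather than detection of the defect.
    void start(ClutchPressed interlock) {
        System.out.println("Engine started safely");
    }
}

public class PokaYokeDemo {
    public static void main(String[] args) {
        Engine engine = new Engine();
        engine.start(ClutchPressed.press()); // ok
        // engine.start();                   // would not compile: no interlock
    }
}
```

Here the mistake is prevented at compile time rather than detected later, which is exactly the preventive function of a Poka Yoke.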

When and how to use it?

The Poka Yoke technique can be used whenever a mistake could occur or something could be done wrong, to prevent all kinds of errors:

  • Processing error: Process operation missed or not performed per the standard operating procedure.
  • Setup error: Using the wrong tooling or setting machine adjustments incorrectly.
  • Missing part: Not all parts included in the assembly, welding, or other processes.
  • Improper part/item: Wrong part used in the process.
  • Operations error: Carrying out an operation incorrectly; having the incorrect version of the specification.
  • Measurement error: Errors in machine adjustment, test measurement or dimensions of a part coming in from a supplier.

If you are keen to know more, read the full article What is the Poka Yoke Technique?

References

1. Karin Dames, “10 Ways to Build Quality Into Software – Exploring the possibilities of high-quality software”.

2. Scaled Agile, Inc., “Built-In Quality”.

3. Kanbanize.com, “The Built-In Quality Management of Continuous Improvement”.

4. Kanbanize.com, “What is the Poka Yoke Technique?”.

Automation in isolation is more of a problem!

by Vijay Kumar Gambhiraopet

Businesses view software testing as a physical activity of executing tests. Test execution, being the interfacing activity between software development and business, is most visible and often perceived as the primary reason for delivery delays. Hence, any solution to expedite this activity is readily accepted. 

Run, run, run!

Automating test execution is seen as a silver bullet to address this issue. Tests can be generated by script-less tools, which do not require scripting, and those with AI embedded promise to generate numerous tests in no time. Once the tests are generated and their execution automated, they can be run on demand, with no dependency on humans and in a fraction of the time taken by them, eventually producing a defect-free product.

Going open source & script-less

Leaders are demanding solutions based on open source due to the ease of use, flexibility to integrate with enterprise solutions and availability of skills. Open source tools are often coupled with robotic solutions to offer a comprehensive solution for enterprise systems based on technologies accumulated over the years.

The race to automate in order to align with Agile practices has triggered demand for a plethora of script-less test automation tools. The percentage of tests automated is a high-focus metric in governance reports. However, the automated scripts generated by these script-less tools are not portable – to protect the business of the solution providers!

The catch-up game

Even as Agile is being embraced by customers, the automated execution of scripts is often limited to the regression test suite. The functionality developed in a sprint is tested manually, and upon successful testing a representative set is identified for inclusion in the regression suite; these are the candidates for automation. Hence, automation is usually playing catch-up, spanning sprints.

Complete automation of a regression suite is targeted for the release schedule. While the velocity of a team is expected to improve over time, the effort to identify test scenarios, source test data, set up test environments and work through defect cycles is ignored. The effort for these tasks accumulates over sprints, leading to team burn-out, and the eventual casualty is automation.

Testability is key to being Agile

An ideal automation strategy, thus, is to automate progressively, for a program to be truly agile. The strategy should cover how to identify test cases for automation, make testability a mandatory requisite for developing a functionality, and establish procedures to ensure a dedicated, on-demand test environment and test data, with commitment secured from the stakeholders.

The last best experience that anyone has anywhere, becomes the minimum expectation for the experience they want everywhere


Bridget van Kralingen – IBM Leader

“NEW” expectations from QA


To meet this insatiable demand for quality, the responsibility on the tester community is ever increasing. Testers must look beyond the confines of their team, participate in product meetings and agile ceremonies, present a user perspective on the requirements, and convey user priorities to make quality intrinsic to the requirements. Apart from looking deeper into functionality, a tester should start looking higher, into the business objectives. Testers should make quality everyone’s responsibility.

Summary

Automating test execution in isolation ends up being more of a problem than a solution. Any automation solution, whether to enhance quality or to improve test cycles, should encompass tasks across the whole test discipline. Automation should be considered a lever to meet business objectives and not an objective in itself.


About the author

Vijay works at IBM as Test Automation leader for North America. He has been engaged with multiple clients across geographies and domains.  His professional profile is at  https://www.linkedin.com/in/gambhiraopet/

Design for Testability – An overview

T Ashok @ash_thiru

Summary

This article outlines what testability is, the background of testability from hardware, the economic value of DFT, why testability is important, design principles to enable testability, and guidelines to ease the testability of a codebase. It draws upon five interesting articles on DFT and presents a quick overview.


Introduction

Software testability is the degree to which a software artefact (i.e. a software system, software module, requirements or design document) supports testing in a given test context. If the testability of the software artefact is high, then finding faults in the system (if it has any) by means of testing is easier.

The correlation of ‘testability’ to good design can be observed by seeing that code that has weak cohesion, tight coupling, redundancy and lack of encapsulation is difficult to test. A lower degree of testability results in increased test effort. In extreme cases a lack of testability may hinder testing parts of the software or software requirements at all.
(From [1] “Software testability”  )

Testability is a product of effective communication between development, product, and testing teams. The more the ability to test is considered when creating the feature and the more other team members ask for the input of testers in this phase, the more effective testing will be.

(From [2] “Knowledge is Power When It Comes to Software Testability” )

Background 

Design for Testability (DFT) is not a new concept. It has been used in electronic hardware design for over 50 years. If you want to be able to test an integrated circuit both during the design stage and later in production, you have to design it so that it can be tested. You have to put the “hooks” in when you design it. You can’t simply add testability later: once the circuit is already in silicon, you can’t change it.

DFT is a critical non-functional requirement that affects almost every aspect of electronic hardware design. Similarly, complex agile software systems require testing both during design and production, and the same principles apply. You have to design your software for testability, else you won’t be able to test it when it’s done.
(From [3] “Design for Testability: A Vital Aspect of the System Architect Role in SAFe” )

The Economic Value of DFT 

Agile testing covers two specific business perspectives: (1) enabling critiquing of the product, minimising the impact of defects being delivered to the user, and (2) supporting iterative development by providing quick feedback within a continuous integration process.

These are hard to achieve if the system does not allow for simple system-, component- and unit-level testing. This implies that Agile programs that sustain testability through every design decision will enable the enterprise to achieve a shorter runway for business and architectural epics. DFT helps reduce the impact of large system scope and affords agile teams the luxury of working with something more manageable, reducing the cost of delay in development by assuring that the assets developed are of high quality and needn’t be revisited.
(From [3] “Design for Testability: A Vital Aspect of the System Architect Role in SAFe“)

Why is testability important?

Testability impacts deliverability. When it is easier for testers to locate issues, the software gets debugged more quickly, and the application gets to the user faster and without hidden glitches. With higher testability, product and development teams benefit from faster feedback, enabling frequent fixes and iterations.

Shift-Left – Rather than waiting until test, a whole-team approach to testability means giving your application thoughtful consideration during planning, design, and development as well. This includes emphasising multiple facets such as documentation, logging, and requirements. The more knowledge a tester has of the product or feature, its purpose, and its expected behavior, the more valuable their testing and test results will be.
(From [2] “Knowledge is Power When It Comes to Software Testability” )

Exhaustive Testing

Exhaustive testing is practically feasible and more easily achievable when applied in isolation to every component, on all possible measures; this adds to quality, instead of trying to test the finished product with use cases that attempt to address all components. This raises another question: “Are all components testable?” The answer is to build components to be as highly testable as possible.

However, in addition to all these isolated tests, an optimal system-level test should also be carried out to ensure end-to-end completeness.

Exhaustive testing is about placing the right set of tests at the right levels, i.e., more isolated tests and optimal system tests.

VGP

(From [4] “Designing the Software Testability”)

“SOLID” design principles

Here are some principles and guidelines that can help you write easily testable code, which is not only easier to test but also more flexible and maintainable due to its better modularity; a short illustrative sketch follows the list.

(1) Single Responsibility Principle (SRP) – Each software module should only have one reason to change.

(2) Open/Closed Principle (OCP) –  Classes should be open for extension but closed to modifications.

(3) Liskov Substitution Principle (LSP) – Objects of a superclass shall be replaceable with objects of its subclasses without breaking the application.

(4) Interface Segregation Principle (ISP) – No client should be forced to depend on methods it does not use.

(5) Dependency Inversion Principle (DIP) – High-level modules should not depend on low-level modules; both should depend on abstractions. Abstractions should not depend on details. Details should depend upon abstractions.

[SOLID = SRP+OCP+LSP+ISP+DIP]
(From [5] “Writing Testable Code” )
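As a quick, hypothetical illustration of how these principles support testability, the sketch below has the report generator depend on an abstraction (DIP), lets new formats be added without modifying existing code (OCP), and gives each class a single reason to change (SRP). The class names are illustrative, not taken from the referenced article.

```java
// Abstraction that high-level code depends on (DIP).
interface ReportFormatter {
    String format(String body);
}

// New formats extend behaviour without modifying existing code (OCP).
class PlainTextFormatter implements ReportFormatter {
    public String format(String body) { return body; }
}

class HtmlFormatter implements ReportFormatter {
    public String format(String body) { return "<html><body>" + body + "</body></html>"; }
}

// Single responsibility: assembling a report, nothing else (SRP).
class ReportGenerator {
    private final ReportFormatter formatter;

    ReportGenerator(ReportFormatter formatter) {
        this.formatter = formatter;
    }

    String generate(String body) {
        return formatter.format(body);
    }
}

public class SolidDemo {
    public static void main(String[] args) {
        // In a unit test, a stub ReportFormatter can be injected just as easily,
        // which is what makes the design testable.
        System.out.println(new ReportGenerator(new HtmlFormatter()).generate("Q3 results"));
    }
}
```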

Law of Demeter (LoD)

Another “law” which is useful for keeping the code decoupled and testable is the Law of Demeter. This principle states the following: Each unit should have only limited knowledge about other units: only units “closely” related to the current unit. Each unit should only talk to its friends; don’t talk to strangers. Only talk to your immediate friends.
(From [5] “Writing Testable Code” )
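A minimal, hypothetical sketch of what this looks like in code: the “train wreck” call chain couples the caller to three classes, while the Demeter-friendly version talks only to its immediate collaborator, which is far easier to stub in a test. The Order, Customer and Address names are illustrative.

```java
class Address {
    private final String city;
    Address(String city) { this.city = city; }
    String getCity() { return city; }
}

class Customer {
    private final Address address;
    Customer(Address address) { this.address = address; }
    Address getAddress() { return address; }
    String getCity() { return address.getCity(); }        // expose what callers actually need
}

class Order {
    private final Customer customer;
    Order(Customer customer) { this.customer = customer; }
    Customer getCustomer() { return customer; }
    String shippingCity() { return customer.getCity(); }  // talk only to an immediate friend
}

public class ShippingLabel {
    public static void main(String[] args) {
        Order order = new Order(new Customer(new Address("Chennai")));

        // Violates the Law of Demeter: this code is now coupled to Order,
        // Customer and Address, and a test must build the whole chain.
        String viaTrainWreck = order.getCustomer().getAddress().getCity();

        // Respects the Law of Demeter: only the immediate collaborator is used.
        String city = order.shippingCity();

        System.out.println(viaTrainWreck + " / " + city);
    }
}
```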

Guidelines to ease testability of codebase

(1) Make sure your code has seams – A seam is a place where you can alter behaviour in your program without editing that place.

(2) Don’t mix object creation with application logic
Have two types of classes: application classes and factories. Application classes are those that do real work and hold the business logic, while factories are used to create objects and their respective dependencies.

(3) Use dependency injection
A class should not be responsible for fetching its dependencies, either by creating them, using global state (e.g. Singletons) or getting dependencies through other dependencies (breaking the Law of Demeter). Preferably, dependencies should be provided to the class through its constructor; a minimal sketch after this list illustrates the seam this creates.

(4) Don’t use global state
Global state makes code more difficult to understand, as the user of those classes might not be aware of which variables need to be instantiated. It also makes tests more difficult to write due to the same reason and due to tests being able to influence each other, which is a potential source of flakiness.

(5) Avoid static methods
Static methods are procedural code and should be avoided in an object-oriented paradigm, as they don’t provide the seams required for unit testing.

(6) Favour composition over inheritance
Composition allows your code to better follow the Single Responsibility Principle, making code easier to test while avoiding an explosion in the number of classes. Composition provides more flexibility, as the behaviour of the system is modelled by different interfaces that collaborate, instead of creating a class hierarchy that distributes behaviour among business-domain classes via inheritance.
(From [5] “Writing Testable Code” )
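Tying guidelines (1), (3) and (6) together, here is a minimal, hypothetical sketch: the time source is composed into the service and injected through the constructor, which creates a seam where a test can substitute a fixed time without editing the class. The TimeSource and GreetingService names are illustrative, not from the referenced article.

```java
import java.time.Instant;

// The abstraction is the seam: behaviour can be altered here
// without editing GreetingService itself (guideline 1).
interface TimeSource {
    Instant now();
}

// Production implementation, composed into the service rather than inherited (guideline 6).
class SystemTimeSource implements TimeSource {
    public Instant now() { return Instant.now(); }
}

class GreetingService {
    private final TimeSource timeSource;

    // Constructor injection: the class does not fetch its own dependencies (guideline 3).
    GreetingService(TimeSource timeSource) { this.timeSource = timeSource; }

    String greet(String name) {
        return "Hello " + name + ", it is now " + timeSource.now();
    }
}

public class DiDemo {
    public static void main(String[] args) {
        // Production wiring happens at the edge (a factory or main),
        // not inside the application logic (guideline 2).
        System.out.println(new GreetingService(new SystemTimeSource()).greet("Asha"));

        // In a test, a fixed time slides into the same seam:
        TimeSource fixed = () -> Instant.parse("2020-01-01T00:00:00Z");
        System.out.println(new GreetingService(fixed).greet("Asha"));
    }
}
```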

References

[1] Software testability at  https://en.wikipedia.org/wiki/Software_testability

[2] “Knowledge is Power When It Comes to Software Testability” https://smartbear.com/blog/test-and-monitor/knowledge-is-power-when-it-comes-to-software-testa/

[3] “Design for Testability: A Vital Aspect of the System Architect Role in SAFe” at https://www.scaledagileframework.com/design-for-testability-a-vital-aspect-of-the-system-architect-role-in-safe © Scaled Agile, Inc.

[4] “Designing the Software Testability” at https://medium.com/testengineering/designing-the-software-testability-2ef03c983955

[5] “Writing Testable Code” at https://medium.com/feedzaitech/writing-testable-code-b3201d4538eb


Dissecting the human/machine test conundrum

T Ashok @ash_thiru

Summary

It is common to see testing discussions veer into a dichotomy of “manual vs automated testing” and how the latter is the order of the day. Sadly, I find this discussion seriously flawed. In this article I dissect the way we test as being human-powered and machine-assisted, and outline how combining human power with machine assistance is paramount to testing smartly, rapidly and super efficiently.

Introduction

The way we test has been trivialised by the general milieu into two buckets of manual and automated testing. Firstly, the phrase “manual testing” seems to connote a menial, intensely labour-oriented job, which it is not, and the phrase is therefore highly incorrect. Secondly, the notion of automated testing seems to connote writing scripts and running them frequently to detect issues. What is forgotten is that once a test script uncovers an issue and the issue is fixed, that script becomes a health ascertainer rather than a ‘bug finder’, and that test cases and strategy need to be constantly updated to target newer issues.

Human-Machine testing NOT Manual-Automated testing

The more appropriate term would be HUMAN testing, as it connotes a combination of BODY-ily activity with INTELLECTual thinking powered by the MIND. MACHINE as a term is probably more appropriate in signifying an aid to test in a holistic manner than AUTOMATED testing, which seems to connote only build and execute.

HUMAN Testing = intellect+body+mind

Philosophically, a human is seen as a composition of Body, Mind and Intellect. Using the same idea in the context of testing, I see the acts of physically observing, hearing, doing and feeling as BODY-ily activity, while INTELLECTual activity powers some of the key activities of testing and the MIND enables appropriate thinking.

Human Based Testing = INTELLECT + BODY + MIND

The backdrop to dissection

To dissect the various activities and understand what needs to be done by a HUMAN and what can be done by a MACHINE, I am going to use the Test Activity list that outlines key activities in the lifecycle of testing:

  • Understanding the System under test
  • Strategising and planning the test
  • Designing test scenarios, cases, data sets
  • Executing tests including automation
  • Reporting – issues, progress
  • Analyse – issues, test progress, learnings

Now we will analyse the various HUMAN-powered and MACHINE-assisted activities for each key activity.

HUMAN-MACHINE Test Map

The complete HUMAN-MACHINE conundrum is dissected here:

Note that this is not intended to be comprehensive, filled with all tool aids or all the human activities, as the map would then lose its utility! Use this as an aid to understand the HUMAN-MACHINE test conundrum and, for heaven’s sake, STOP USING the phrases “MANUAL and AUTOMATED testing”.

“It is time we recognised that it takes smart HUMANs assisted by MACHINES (really tools/tech) to test less, test rapidly and accomplish more”




The Power of Geometry

Good running form and great cycling geometry are essential to delivering higher performance with no increase in power output, in running and cycling respectively. Applying this to the context of QA, it is not just the content of test artifacts like scenarios, cases and plans that matters; how they are structured and organized is key to accomplishing more with the same or less.

This article outlines how the structure (or organization) of elements plays a key role in doing more with less. In the subsequent two articles on this theme, we examine in detail the arrangement of product elements and test artifacts, and how these aid clear thinking to deliver high performance.

Click here to read the full article published in Medium.


Do brilliantly ‘right’ after taking a ‘left’!

T Ashok @ash_thiru

Summary

Logical ‘left-brained’ thinking complemented with creative ‘right-brained’ thinking results in brilliant testing. This is an amalgamation of forward, backward, approximate, visual, contextual and social thinking styles, aided by techniques and principles drawing on process, experience and great habits.


Testing is about perturbing a system intelligently and creatively to shake out issues that may be present. Knowing whether all the issues have been shaken out is indeed a challenge. A logical thinking approach to identifying good and erroneous situations is seen as necessary to justify the completeness of validation. It is also necessary to be creative and use the context to perturb the system. Finally, injecting a dose of randomness into the perturbation is the finishing touch to being complete.

Picture stating “Start with logical thinking, get creative and finish off with ad-hoc thinking.”

Testing is a funny business where one has to be clairvoyant to see the unknown, to perceive what is missing and also assess comprehensively what is present ‘guaranteeing’ that nothing is amiss.

Left brained thinking

‘Left-brained thinking’ can be seen as a collection of forward, backward and approximate thinking styles, using methods that can be well-formed techniques or higher-order principles, based on an approach of disciplined process, good habits and learning from experience. Read in detail at Left brain thinking to building great code.

Picture of left brained thinking

Right brained thinking

Logical ‘left brain’ thinking is essential to good testing. Right-brained creative thinking comes in handy to go beyond the left, enabling us to vary paths, discover new paths and improve outcomes. Thinking creatively is about thinking visually, thinking contextually and thinking socially: using pictures to think spatially, using the application context to react, experiment and question, and then morphing into an end user, respectively. Read in detail at “It takes right brain thinking to go beyond the left”.

Picture of right brained thinking

This is not to be misconstrued as random or ad-hoc, though randomness does help. It is great to start with logical, organised thinking, add a dose of creative thinking and finish off with random meanderings.


Thinking Visually

nanoLearning on “Thinking Visually” using Sketchnotes from Anuj Magazine, Citrix.
The video of this smartbits is available here.


Ashok : As an avid visual thinker who uses Sketchnotes to communicate, please tell us the importance of visual thinking and how it can help us understand/think better and influence people?

Anuj : I think, one of the ways Visual thinking has helped me is to find a way to better stay in the moment and what I mean by that is when we are in the moment, we will be able to appreciate life even more than what it is. So, staying in the moment is one of the big benefits. While I don’t claim to be a big one, every artist seeks inspiration from life happening around them and the quest of seeking that inspiration itself is one that lets you live that moment better than in a condition without that, so that is one.

The second way visual thinking I believe has helped me is – One of my online friends is Tanmay Vora, a very good Sketchnote artist. One of the blogs that he wrote and that stayed with me, talks about one of the principles he follows  – to consume less and create more. In essence, what he means by that is, people with all the revolution which has happened around smart phones are always consuming content. We can blame apps for it – in a way they have been designed to create that stickiness, but we are always in the consumption mode. What happens if we start eating lot of food? It shows up on our body. Ironically, consuming lot of content does not show up as visibly in our minds. We can feel our minds getting bloated up, getting overwhelmed with lot of stuff, but you got to catch those signals. So, in order to balance it out, one of the principles of consuming less and creating more comes into picture. How it helps the visual thinker in me is that if I read stuff, I try to restrict myself to reading good stuff, and whatever I read I have a kind of pact with myself that I will create something out of that, be it a blog, a sketch or some other consumable form. That really creates balance because you are not holding up information for too long and getting it to stale in your mind without it being put to the right kind of use.

Third way visual thinking has helped me is – I will tell you an instance where I had organised one of the sessions on quantum computing. As complex a subject as that is in today’s times, it was equally important for people to figure out how to explain it simple. So, one of the things that I had set myself to do in that session was to Sketchnote the session live. Eventually, it turned out to be a good summary and in doing so, I realised that Sketchnoting is helping me actively listen to the speaker. What I mean by active listening is that again I am not consuming the content for the sake of consuming the content. I am creating something out of it and also actively removing the noise out of the whole experience of listening. You can’t write each and every word in a Sketchnote, but you can write the key points and summarise it. I did present it to the speaker after the event. So, bringing in the ‘intention’ in the listening is one of the key traits that I learnt. 

Overall, the main areas where visual thinking has helped me, is to be more aware, be more present in situations and listen intently and balancing that continuum of creation and consumption which is important.

Ashok: So what you are saying is it does help you certainly be more mindful, absorb better and obviously assimilate it and keep up with the most important things so to speak and do it very continually along with the person who is doing his job.

Anuj: I would like to add how it will help QA professionals. More often than not, QA professionals find themselves in a situation where bug reports, unfortunately, are still considered the key output. In the absence of any innovation in creating new bug reports, they are again thought of as one of the predictable outputs of the profession. What if you create a Sketchnote out of a bug report? I think that might help people look at your bug with more interest and get more motivated to fix it.


CIO views on Quality

Summary
This article is about views on quality from CIOs, curated from a list of interesting articles. Solution quality is felt to be one of the top-3 challenges during DevOps adoption, with reducing technical debt a key focus area for 2019. Some of the interesting views from CIOs are: “there’s no way you can satisfy the demands of digital transformation without DevOps, Continuous Testing”, “address testing and ensure it advances your digital transformation initiatives rather than holds them back”, and “can’t risk disrupting frequent deployment, this is where Continuous Testing comes in”.


Quality of solutions is a challenge during DevOps adoption
Based on Gartner’s 2019 DevOps Survey, ensuring the quality of solutions is among the top 3 challenges encountered during the adoption of DevOps. According to them, application leaders guiding a digital transformation initiative must make continuous quality the technical, organizational and cultural foundation of their strategy.

Many organizations are on a journey with DevOps, practicing continuous development and continuous deployment, yet a continuous approach to quality is often missing. Basic functional quality goals are not sufficient to satisfy the quality expectations of the users, the business or the market. The growing pervasiveness of mobile, web, cloud and social computing scenarios has raised end users’ expectations for application quality. The notion of what constitutes superior quality has become much broader and includes overall user experience, quality of service (QoS), availability and performance, as well as security and privacy. It is no longer sufficient that the application just works. It must provide an optimised experience that leaves the user wanting to engage more and interact again.

(Ref: https://www.cio.com/article/3411568/transforming-software-testing-for-digital-transformation-it-leaders-can-t-afford-to-wait.html)

Reducing technical debt needs increasing focus
When CIO.com asked CIOs “What are your top priorities for 2019?”, reducing tech debt was the 2nd most popular response. CIOs say reducing technical debt needs increasing focus. It isn’t wasted money: it’s about replacing brittle, monolithic systems with more secure, fluid, customizable systems. CIOs stress there is ROI in less maintenance labor, fewer incursions, and easier change. (Ref: https://www.cio.com/article/3329741/top-priorities-for-cios-in-2019.html)

CIO’s views on Digital Transformation
Here are some views from CIOs on digital transformation, consumer expectations, and hence the changing expectations from testing.

Rajeev Ravindran, SVP & CIO, Ryder System, Inc.
“Oftentimes, when people talk about digital transformation, they are really talking about technology. For me, taking the company “digital” is both about technology and a mindset shift. As a part of this mindset shift, we are moving from an applications-focused environment to a product-focused environment. In our new model, we look at every application as a product that has a life cycle determined by a product owner, who is typically in a business function other than IT.”

“In IT, we are moving from a linear thinking perspective to design thinking, and we are moving from waterfall to iterative. The goal of these changes is to create a customer-centric culture, whether those customers are internal or external to Ryder. The customer centric culture along with a product mindset will help with operational efficiency and revenue growth.” (Ref: https://www.idgconnect.com/interviews/1502117/cio-spotlight-rajeev-ravindran-ryder)

Andy Walter (Procter and Gamble)
“I think Continuous Testing is going to be core to companies being able to dynamically evolve their structures, their M and A, joint ventures, all these types of areas. While we were doing the Coty divestiture, we started a covert project of “how are companies going to be structured in the future?” And there’s no way you can satisfy the demands of digital transformation without DevOps, Continuous Testing, and the speed and agility they enable.”
(Ref: https://www.itproportal.com/features/cios-share-why-software-testing-matters/)

Jennifer Sepull (USAA, Kimberly Clark, American Honda)
“I think the beauty of creating a DevOps model is that you have a powerful team that is empowered to really connect with the consumer. When those teams come together in that powerful way, and they own the entire end-to-end process, there’s opportunities for innovation. Application development and testing are absolutely critical to making sure that those innovations, or that connection with the consumer, can happen.” This means that you have to address testing and ensure it advances your digital transformation initiatives rather than holds them back.
(Ref: https://www.itproportal.com/features/cios-share-why-software-testing-matters/)

Robert Webb (Etihad)
“.. I know that software testers can make the CIO’s survival rate higher—but they can make the company more profitable, make it safer, and help it grow faster. If they make your testing faster and get your new apps out there, you can be more competitive. And if they can do that while lowering costs, that’s remarkable..” “..Transforming testing is pivotal for accelerating how software is digitising the business.” 
(Ref: https://www.itproportal.com/features/cios-share-why-software-testing-matters/)

Vittorio Cretella (Mars)
“.. We have to really understand how the user is reacting and how to achieve this optimal customer experience..” “.. To accomplish this, we need constant deployment. But we also have to ensure that deploying functionality daily or hourly always improves the user experience. We can’t risk disrupting it—so this is where Continuous Testing comes in.”
(Ref: https://www.itproportal.com/features/cios-share-why-software-testing-matters/)

Andreas Kranabitl (SPAR ICS)
“..I believe that the most important element in digital transformation is people. We cannot have people spending their time on software testing tasks that can and should be automated. There is much higher-level work to do. We need future-oriented staff, and we can’t afford to make them suffer by asking them to do needless manual testing.” 
(Ref: https://www.itproportal.com/features/cios-share-why-software-testing-matters/)

Robert Webb ( Etihad Aviation Group)
“..Can you make my testing faster and get my new apps out there so I can be more competitive? Can you do that in a way that makes testing more automated and safer, and can you do that while you’re lowering costs? This is something that is very, very unique, and I think we all have a wonderful opportunity to be part of this revolution..” (Ref: https://www.tricentis.com/blog/digital-imperative-software-transformation-cio/)


It takes right brain thinking to go beyond the left

Right-brained creative thinking comes in handy to go beyond the left, enabling us to vary paths, discover new paths and improve outcomes. Thinking creatively is about thinking visually, thinking contextually and thinking socially: using pictures to think spatially, using the application context to react, experiment and question, and then morphing into an end-user, respectively.

Click here to read the full article published in Medium.


Left brain thinking to building great code

Logical ‘left brain’ thinking is essential to good testing. Testing is not just an act, but an intellectual examination of what may be incorrect and how to perturb it effectively and efficiently. This can be seen as a collection of forward, backward and approximate thinking styles, using methods that can be well-formed techniques or higher-order principles, based on an approach of disciplined process, good habits and learning from experience.

Click here to read the full article published in Medium.