Summary SmartQA is a beautiful combination of thinking styles, a mindset of brilliance and minimalism, tech prowess, heightened clarity, great design and meaningful pauses, outlined as ten suggestions in this article.
#1 Embrace multiple thinking styles Inculcate the deductive ability of a mathematician, the creativity of an artist, the mind of an engineer and the value perception of a businessman, along with technical savviness, empathy, doggedness and nimbleness.
#2 Have a mindset of brilliant engineering Step into the end user’s shoes, architect/design robustly, inject code to aid testability, strive to test minimally, and keep test-related tasks lightweight.
#3 Analyse well, exploit tools for doing Much like a doctor who diagnoses while exploiting machines in the process, “doing SmartQA” is a brilliant combination of “human powered and machine assisted”. The WHAT-to-do is human, while the HOW-to-do is powered by machines/tools.
#4 Do minimally Strive to prevent issues, embed testability, review code carefully, use smart checklists, write minimally, regress intelligently.
#5 See better, cover more, test less Continuously see and assess product from multiple views – USERS, ATTRIBUTES, ENVIRONMENT, CODE, ENTITIES
#6 Pause to speed up Periodically pause and analyse to be sure that you are staying on the right track, reflect on outcomes to ensure you are doing it right and efficiently.
#7 View system from multiple angles View the system from internal, external, regulatory/compliance, operations, maintenance and release angles – considering code structure, architecture, technology, behaviour, end users, environment, usage and standards.
#8 Be sensitive and aware Be sensitive and aware of the issues you encounter and their potential causes; after all, issues creep in due to untold expectations, accidental omissions, quiet assumptions, incorrect implementation, inappropriate modifications, interesting side effects, deliberate abuse and innovative usage. Sharpen your senses to smell better!
#9 Design for robustness Don’t just test, design the system robustly. Code in firewalls to be unaffected by inputs, configuration/settings, resources or dependent code.
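As a rough illustration, such a “firewall” might look like the following minimal Python sketch – the function and field names are made up for illustration, not taken from any specific product:

```python
# A minimal sketch of a "firewall" guarding a function against bad
# inputs, missing configuration and unsafe defaults.
# All names here are illustrative.

def compute_discount(price, rate, config=None):
    """Return the discounted price and currency, refusing bad inputs up front."""
    # Firewall the inputs: reject rather than propagate garbage.
    if not isinstance(price, (int, float)) or price < 0:
        raise ValueError(f"price must be a non-negative number, got {price!r}")
    if not 0.0 <= rate <= 1.0:
        raise ValueError(f"rate must be in [0, 1], got {rate!r}")
    # Firewall the configuration: fall back to a safe default
    # instead of failing on a missing setting.
    currency = (config or {}).get("currency", "USD")
    return round(price * (1.0 - rate), 2), currency
```

The point is that the function is disaffected by whatever its callers or configuration throw at it: bad values are rejected at the boundary, missing settings degrade to safe defaults.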
#10 Design for testability Hook in code to be able to inject inputs to stimulate, check status, and create traces/logs to debug and check for correctness; even embed ‘self-test code’.
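The three kinds of hooks – input injection, status checks and embedded self-test code – can be sketched as follows (a hypothetical Python example; the class and method names are our own):

```python
# Illustrative sketch of three testability hooks baked into a class.

class Counter:
    def __init__(self, source=None):
        # Hook 1: inject inputs - a test passes a canned source
        # instead of the real device/stream the product would read.
        self._source = source if source is not None else []
        self._total = 0

    def run(self):
        for value in self._source:
            self._total += value

    def status(self):
        # Hook 2: expose internal state so a test can check correctness
        # without poking at private fields.
        return {"total": self._total, "consumed": len(self._source)}

    def self_test(self):
        # Hook 3: embedded self-test exercising the core logic
        # with known inputs and a known expected outcome.
        probe = Counter(source=[1, 2, 3])
        probe.run()
        return probe.status()["total"] == 6
```

A test can then stimulate the object with chosen inputs, assert on `status()`, or simply call `self_test()` in the field as a sanity check.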
Summary Over the years there are a few things that I do consistently to solve problems, technical or business. The process is magical: seeing the larva of an idea become a beautiful butterfly, the solution. This article outlines these as eight things I do, presented as a collection of posters with crisp text.
Explore the problem by puttering around, to understand it better and to try out mini experiments of potential solutions to germinate ideas. The path to solving a problem is never a straight line. Focus, meander, move continuously, observe to form a good mental picture. After all, great clarity is key to brilliant problem solving.
As you explore, picturise the various facets of the problem and the various solution bits as doodles, mind maps or diagrams. Be non-linear; use colours to enhance the stimulation of thoughts.
Experiment with the problem, see if you can spot similarities to prior problems, and connect them to get a better handle on it and vigorously pursue possibilities.
As you explore and experiment, jot down interesting observations, ideas and solution possibilities. Be terse, so that you are not distracted from what you are doing. After all, quick notes help in assimilating better and capturing ideas as they come by.
Describing the problem and explaining ideas and solutions to a willing listener is something I do every time. It clarifies my thoughts and enables me to identify solutions to problems that have been dogging me. And this is just from my explaining, not counting the suggestions and further ideas I get from the listener.
Sometimes, when my mind is filled with information, struggling to process it and clueless about coming up with ideas or solutions, I stop whatever I am doing and empty my mind; in the utter calm, ideas and solutions fly in gently.
At times, when it has been frustrating to see nothing emerging to solve the problem, I have found it wise to discard the pursuit of a solution and do something else. Deflect the mind and engage elsewhere. After some time, ideas suddenly flash.
Once an idea for a solution presents itself, I work feverishly on it: implementing, refining, continually polishing. Totally immersed, all senses absolutely tuned to the act of implementation. Magical it is to see the larva of an idea become the beautiful butterfly of a solution.
(In this SmartBits, Zulfikar Deen outlines “Management expectations of CIO & IT team”. The video is at the end of this blog.)
Whether the end-user organization is small or large, the challenges remain the same for both. Be it a large multinational multi-billion corporation or a smaller organization, the challenges are very similar: they could be related to security, adoption, consumer understanding, delivery, timeline or quality.
The difficulty for a smaller organization is that the IT team is much smaller, not an army of people. They do not have a huge budget to ensure the same challenges are tackled better. With small, shoestring budgets, it is difficult to bring newer technology and solutions into operations. Not having an appropriate budget is an important challenge, but that doesn’t mean they will be left far behind. They still have to adapt, invent and move at the same speed.
The next challenge is that business leaders come across sound bites (for example, Blockchain) in meetings they attend and are keen to implement them. Our role is to ensure they distill these correctly, make sure they are applied appropriately in the context of the business and the readiness of the systems, and ensure they don’t fall too far behind.
Another aspect to be looked at is the ability of the board and the top management (the CXOs) to look at technology. Often, CXOs would neither be risk-taking forerunners nor want to lag behind. We need to understand what level of comfort the management has and then play along with that. One needs to watch for a technology shaping up, and as soon as it is ready for use in the system, start putting it in place.
In this SmartBits video “Design for Testability & Automation”, Girish Elchuri outlines how design for testability aids test automation. The transcript of this video is outlined below.
There are three aspects to be looked at when we talk about test automation: the first is running the test cases, the second is invoking the functionality that needs to be tested, and the third is asserting the outcome of the tests as success or failure. We can talk about test automation only if we can automate all three functions.
Test execution Most of the time, running the test cases is perceived as automation, but ideally it has to cover the other two aspects as well. For running the test cases there are enough tools that can be used and invoked, but in invoking the functionality, a developer can make a big difference.
Backdoor invocation Normally when a product is being developed, the product functionality is accessible only through GUI. Developers should also provide a backdoor to reach the functionality so that one can actually test the entire product functionality in a much more efficient way without having to invoke the GUI. This is how developers can help in terms of test automation.
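One way such a backdoor can be structured is to keep the functionality in a plain service class and make the GUI a thin layer over it, so a test can drive the service directly without the GUI. The sketch below is illustrative; the class and method names are made up:

```python
# Sketch: business logic behind a plain API, GUI as a thin layer on top.
# Tests use the "backdoor" (the service) directly, bypassing the GUI.

class CartService:
    """Core functionality, reachable without the GUI."""
    def __init__(self):
        self._items = {}

    def add_item(self, sku, qty):
        self._items[sku] = self._items.get(sku, 0) + qty

    def total_items(self):
        return sum(self._items.values())


class CartGui:
    """Thin GUI layer; all real work is delegated to CartService."""
    def __init__(self, service):
        self._service = service

    def on_add_clicked(self, sku, qty):
        # The button handler does nothing but delegate.
        self._service.add_item(sku, qty)
```

A test then instantiates `CartService` and exercises the entire product functionality without driving buttons and screens, which is far more efficient and stable than GUI-level automation.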
Test outcome assessment The third aspect is asserting the outcome as success or failure. Sometimes it is not clear whether a test has succeeded or failed, because of small state changes that we do not know how to check. A suggested way is to have extensive logs, also called structured logs. While logging, we put in debug messages, information messages and error messages. There is another category that needs to be added, called test messages. In a structured log with test messages, it becomes easy to go and check the log and ascertain whether a particular test case has passed or failed.
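As a sketch of what a “test message” category could look like on top of ordinary logging, here is one possible shape using Python’s standard logging module. The TEST level name and value, and the message format, are our own illustrative choices, not a standard:

```python
import logging

# Register a custom "TEST" level between INFO (20) and WARNING (30).
TEST = 25
logging.addLevelName(TEST, "TEST")

logger = logging.getLogger("product")

def log_test(check_id, passed, detail=""):
    # Structured test message in a fixed, machine-parseable format,
    # so a script can scan the log and ascertain pass/fail per check.
    logger.log(TEST, "check=%s result=%s %s",
               check_id, "PASS" if passed else "FAIL", detail)
```

With messages like `check=TC-42 result=PASS total=5` in the log, asserting the outcome of a test case becomes a simple matter of searching the structured log for its check id.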
These are the ways a developer can help testability and test automation – by facilitating invocation and assisting in the assertion of test outcomes.
Summary Large system deployment failures are not merely due to poor testing of software; they are about poor operationalisation of software. This article outlines three major failures – poor transition of software to end users, messed-up business procedures and data issues – all the result of poor operationalisation. These have been curated from two articles.
FAIL #1 Avon : Poor transition of software to end users
In 2013, Avon’s $125 million SAP enterprise resource planning project failed after four years of work, development and employee testing.
ERP software can brag all it wants about functionality and all of the magical modules and apps you can use to make your business processes easier, but that won’t mean anything if your software isn’t actually usable. It’s all about aligning your software to your business processes, and if you can’t get staff to use your ERP, they won’t be carrying out the processes necessary to keep your business running. Make sure your employees are properly trained and transitioned into the new software, and that they want to use that system in the first place.
The Australian outpost of the venerable department store chain, affectionately known as “Woolies,” also ran into data-related problems as it transitioned from a system built 30 years ago in-house to SAP.
The day-to-day business procedures weren’t properly documented, and as senior staff left the company over the too-long six-year transition process, all that institutional knowledge was lost — and wasn’t able to be baked into the new rollout.
Many companies rolling out ERP systems hit snags when it comes to importing data from legacy systems into their shiny new infrastructure. The company’s supply chain collapsed, and investigators quickly tracked the fault down to this supposedly fresh data, which was riddled with errors – items were tagged with incorrect dimensions, prices, manufacturers, you name it.
Thousands of entries were put into the system by hand by entry-level employees with no experience to help them recognise when they had been given incorrect information from manufacturers, working on crushingly tight deadlines. It was later found that only about 30 percent of the data in the system was actually correct.
About SmartQA The theme of SmartQA is to explore various dimensions of smartness to leapfrog into the new age of software development, to accomplish more with less by exploiting our intellect along with technology. Towards this, we will strive to showcase interesting thoughts, expert industry views through high-quality content as articles, posters, videos, surveys outlined as a SmartQA Digest weekly emailer. SmartBites is “soundbites from smart people”. Ideas, thoughts and views to inspire you to think differently.
Summary The act of testing is a scientific exploration of a system done in three phases – RECONNAISSANCE to understand and plan, SEARCH to look for issues, REST&RECOVER to analyse and course correct. To enable the various activities in each phase to be done quickly and effectively, is where the SEVEN Thinking Tools outlined in this article help. How to apply these tools in a session-based approach is also briefly outlined.
When I hear people talking about testing as Manual or Automated with the latter being the need of the hour, I am flabbergasted. All the word ‘manual’ conjures in my brain is that of me doing a menial job of painful scrubbing!
It is time we used “Intellectual & Tool-supported”. “Think well. Exploit tools to do.” Enough of the rant!
In current times, speed is everything, right? What can we do to test quickly ? Use tools. Automate. Right? Wait a minute – This is about execution, right? What about prior activities?
To answer, let us ask the basic question what is testing after all? Testing is exploration. Let me correct it. Testing is scientific exploration. And exploration is a human activity that is aided by tools & technology. How can we do scientific exploration rapidly? By using tools that help us think better and do faster.
Let us say you want to explore the nearby mountain range by foot. Would you just pick up your backpack and go? I bet not, unless it is a really short trip. Otherwise I think you will study the geography/terrain, read others’ experiences, do a reconnaissance, and create various maps of the terrain, pit stops, food joints and so on before you chalk out the full route. Once the route is set up, you will pack your bags and go. As you explore, you will discover that “the map is not the terrain” and be taunted, surprised and challenged, and you will learn, adjust, improvise and revise the maps and routes as needed. Tired, you will rest, analyse, replan and recover to continue on your journey. This is not ad hoc, nor driven by sheer bravado. It requires logical (scientific) thinking, planning, the ability to observe and adjust continuously, and also some bravado and good luck!
This is what we can apply in testing our software/systems too. This article distils this and provides you with SEVEN THINKING TOOLS to enable you to do these things easily and scientifically.
Applying the above analogy, we look at the act of testing as being done in THREE phases: RECONNAISSANCE, SEARCH, REST&RECOVER.
RECONNAISSANCE : Do survey and create maps Survey : Get to understand the system under test by reading documents, playing with the software/system and discussing with people, to clearly understand who the end users are, what the entities (e.g. features, requirements..) to test are, what the various attributes the end users expect are, and the environment in which it will be deployed. In a nutshell, we want to know the Who, What-to-test, What-to-test-for and Where. This is done with Tool #1, the “Landscaper”.
Create Maps: Now that you know the key information, connect it into four useful maps: Persona map, Scope map, Interaction map and Environment map.
(Tool #2) Persona map : A list that clearly connects the “Who” to the “What”. This helps us understand who uses what, and therefore helps us prioritise testing and certainly enables a user-centric view of validation.
(Tool #3) Scope map: A list that connects the ‘What’ to the ‘What-for’. This helps us understand what the expectations of the various entities are, i.e. that even for feature F1, we have an expectation of performance. What does this help us do? It helps us identify the various types of tests to be done.
(Tool #4) Interaction map: No entity is an island, i.e. each entity may affect one or more other entities: a feature F1 may affect another feature F2, and therefore a modification of F1 may require retesting of F2. How does this map help us? Well, it helps us plan our regression strategy intelligently.
(Tool #5) Environment map : This lists out the various environments on which the final system may be run so that the functionality and attributes may be evaluated on various deployment environments.
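As an illustration of how an interaction map can drive intelligent regression, the map can be kept as a simple adjacency structure and used to compute the set of features to retest after a change. The feature names below are made up; only the idea of "F1 affects F2, so a change to F1 retests F2" comes from the article:

```python
# Sketch: an interaction map as feature -> features it affects,
# used to pick the regression set for a change. Names are illustrative.

INTERACTION_MAP = {
    "login": ["profile", "checkout"],
    "profile": ["checkout"],
    "checkout": [],
}

def regression_set(changed):
    """Return every feature affected by the changed one, including itself."""
    to_test, stack = set(), [changed]
    while stack:
        feature = stack.pop()
        if feature not in to_test:
            to_test.add(feature)
            # Follow the interaction edges transitively.
            stack.extend(INTERACTION_MAP.get(feature, []))
    return to_test
```

So a change to "login" would pull in "profile" and "checkout" for retesting, while a change to "checkout" alone would retest only itself.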
Now that we have done the reconnaissance, we should have a good idea of the system under test and therefore be ready to explore.
SEARCH : Now that we have the maps, the next step is to chalk out the routes, and then we are ready to commence our search for issues. This is done using the “Scenario creator” tool. Once this is done, we commence our search for issues. When doing this we will encounter things we don’t know, things we did not anticipate, and issues, and will therefore need to course correct – revise the landscape, maps and routes. This is accomplished via the Dashboard tool in the Rest&Recover phase.
(Tool #6) Scenario creator: This tool helps to design the various test scenarios that serve as the starting point. Note that these will be continuously revised as we explore and gain a deeper understanding of the system and its context and usage. What is important is to segregate the scenarios into levels so that the test scenarios are focused and clear in their objective. The Robust Test Design approach of HBT helps you design scenarios using a mix of formal techniques, past experience, domain knowledge and context, but clearly segregated into the various HBT Quality Levels.
REST&RECOVER: In this phase, the objective is to analyse the results of the search phase, improve what we can do, track progress and judge the quality of the system under test. This is done with the ‘Dashboard’ tool.
(Tool #7) Dashboard: This tool helps you do three things: (a) judge adequacy by looking at the map and route information and improving them, (b) track progress by checking what has been done vs. what was planned as far as routes are concerned, and (c) judge quality by looking at the execution outputs of the scenarios, level-wise.
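A dashboard of this kind can be reduced to a few computed measures over session records. The sketch below assumes a made-up record format with done/outcome/level fields; it is one possible shape, not a prescribed one:

```python
# Sketch: progress and quality measures over scenario records.
# Each record: {"done": bool, "outcome": "pass"/"fail"/None, "level": int}.
# The record format is illustrative.

def dashboard(records):
    done = [r for r in records if r["done"]]
    passed = [r for r in done if r["outcome"] == "pass"]
    return {
        "progress": len(done) / len(records),            # done vs planned
        "pass_rate": len(passed) / len(done) if done else None,
        "levels_touched": sorted({r["level"] for r in done}),
    }
```

Level-wise pass information then lets you judge quality per HBT level, while the progress figure shows done-vs-planned at a glance.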
So how do we apply these tools? We saw that these SEVEN tools can be used across the THREE phases of RECONNAISSANCE, SEARCH and REST&RECOVER via a session-based approach, “Immersive Session Testing”. Each session is suggested to be short and focused, say 60-90 minutes, with a session objective tied to one or a mix of the phases.
Note that a session could be exclusively RECONNAISSANCE, SEARCH or REST&RECOVER, or a combination of these. Why is the session time suggested to be 60-90 minutes? Well, this is to ensure razor-sharp focus on each activity. Short, focussed sessions also allow one to get into a state of flow, enabling higher productivity and enjoyment of the activity!