SmartQA Community

A quick primer on AI

Curated by T Ashok @ash_thiru

Summary

This article is curated from SIX articles as a quick primer on AI. Starting with a glossary of AI terminology, it delves into tacit knowledge as codified via ML and the difference between ML & AI, takes a quick peek into deep learning and the challenge of explaining how patterns are connected, and ends with an interesting piece on AI written by an AI program.


Glossary of terminology in AI

Here is an easy article, “A Simple, yet Technical Glossary of Terminology in AI”, that provides a simple glossary of terms, abbreviations, and concepts related to the field and sub-fields of artificial intelligence, based on technical definitions by pioneers and leaders in AI.

ML & Polanyi’s paradox

“Explicit knowledge is formal, codified, and can be readily explained to people and captured in a computer program. But tacit knowledge, a concept first introduced in the 1950s by scientist and philosopher Michael Polanyi, is the kind of knowledge we’re often not aware we have, and is therefore difficult to transfer to another person, let alone capture in a computer program. Machine learning has enabled AI to get around one of its biggest obstacles, the so-called Polanyi’s paradox.” Read more in the article “What Machine Learning Can and Cannot Do”.

ML & AI – The difference (1)

“There’s much confusion surrounding AI and ML. Some people refer to AI and ML as synonyms and use them interchangeably, while others use them as separate, parallel technologies. In many cases, the people speaking and writing about the technology don’t know the difference between AI and ML. In others, they intentionally ignore those differences to create hype and excitement for marketing and sales purposes.” The article “Why the difference between AI and machine learning matters” attempts to disambiguate the jargon and myths surrounding AI.

ML & AI – The difference (2)

“Unfortunately, some tech organizations are deceiving customers by proclaiming using AI on their technologies while not being clear about their products’ limits. There’s still a lot of confusion within the public and the media regarding what truly is artificial intelligence, and what truly is machine learning. Often the terms are being used as synonyms, in other cases, these are being used as discrete, parallel advancements, while others are taking advantage of the trend to create hype and excitement, as to increase sales and revenue.” says Roberto Iriondo in his article “Machine Learning vs. AI, Important Differences Between Them”.

Explaining how patterns are connected

Deep learning is good at finding patterns in reams of data, but can’t explain how they’re connected. Turing Award winner Yoshua Bengio wants to change that; read about it in the article “An AI Pioneer Wants His Algorithms to Understand the ‘Why’”.

Chapter on AI written by an AI program

Here is an interesting excerpt from an ‘autobiographical’ chapter written by an AI program, “This chapter on the future of Artificial Intelligence was written by Artificial Intelligence”, excerpted from the book “The Tech Whisperer”.


15 categories of tooling for digital test automation

T Ashok @ash_thiru

Summary

In this article I have tried to picture the landscape of the plethora of tools for testing software, which has moved from just testing to build-test-deploy in a continuous manner. Alongside the visual, I have listed the FIFTEEN broad categories of tools that make up the modern digital testing landscape.


The digital applications of today have a plethora of front ends, from PCs to tablets and mobile phones with varied form factors, OSs and browsers, and varying, sometimes uncertain, connection speeds. This, along with integration with many external systems via services, demands on non-functional attributes, and the frequent nature of releases, has made the challenge of automating and keeping tests in sync harder.

In this article, I have attempted to picture the landscape of test tooling with the entity under test (EUT) at the centre (note that an EUT may be a small component, a subsystem or the complete system), with multiple ways to access it via API, Message/Service or UI on the left, evaluation by various test types at the top, and the EUT enclosed in a deployment environment that may be a container/system with various ‘platform choices’. Keeping this as the base, I have enumerated the various activities related to evaluation: inject/stimulate, observe/measure, validate/oracle, log/record, and generate mocks as necessary, to test for functionality or other non-functional attributes in the larger context of test design, automation, build and deployment on a variety of platforms as necessary.

Test Tools Landscape – 15 Categories of Tools

Using the above picture, the FIFTEEN categories of tools, with some example tools, are tabulated below.

| # | Category | Details | Example tools |
|---|----------|---------|---------------|
| 1 | inject/stimulate | Enable accessing the EUT via API, Service/Message or UI | xUnit, SoapUI, Postman, Selenium |
| 2 | observe/measure | Enable observing run-time aspects of the EUT | Coverage tools, resource-leak detectors |
| 3 | validate/oracle | Enable assessment of pass/fail | File comparators, asserts |
| 4 | log/record | Enable logging of data and test information | Log utilities, test execution recorders |
| 5 | mocks | Provide stubs for yet-to-be-developed code | Mockito, WireMock |
| 6 | non-functional test tools | Enable assessment of non-functional attributes | JMeter, SonarQube |
| 7 | platforms | Enable testing on different mobile devices with different browsers | pCloudy |
| 8 | virtualisation/deployment | Enable virtualisation and deployment of code | Jenkins, Tricentis TOSCA |
| 9 | mocks/simulators | Enable simulating or mocking large systems | Payment gateway simulators |
| 10 | test design | Enable design of test cases via specification-based testing, BDD | Cucumber, SpecFlow |
| 11 | test data generation | Enable large-scale test data generation | Mockaroo, Worksoft |
| 12 | build | Enable building of code | Ant, Maven |
| 13 | test management | Enable the management of tests | Jira, TestRail, PractiTest |
| 14 | unit test | Enable unit testing | xUnit |
| 15 | system test | Enable testing of the full system via UI | Selenium, TestComplete |
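
To make a few of these categories concrete, here is a minimal, hypothetical sketch (PriceService and TaxProvider are illustrative names, not from any real product) that exercises category 1 (inject/stimulate via an API), category 3 (validate/oracle via asserts) and category 5 (mocks), using JUnit 5 and Mockito:

```java
// Hypothetical sketch of categories 1, 3 and 5 from the table above.
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

import org.junit.jupiter.api.Test;

interface TaxProvider {                       // dependency to be mocked
    double taxRateFor(String region);
}

class PriceService {                          // the entity under test (EUT)
    private final TaxProvider taxProvider;
    PriceService(TaxProvider taxProvider) { this.taxProvider = taxProvider; }
    double grossPrice(double net, String region) {
        return net * (1 + taxProvider.taxRateFor(region));
    }
}

class PriceServiceTest {
    @Test
    void addsRegionalTax() {
        TaxProvider stub = mock(TaxProvider.class);     // category 5: mocks
        when(stub.taxRateFor("EU")).thenReturn(0.20);
        PriceService eut = new PriceService(stub);
        double gross = eut.grossPrice(100.0, "EU");     // category 1: inject/stimulate via API
        assertEquals(120.0, gross, 0.001);              // category 3: validate/oracle
    }
}
```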

What does it take to Build In Quality?

T Ashok @ash_thiru

Summary

This article is a set of brilliant ideas curated from four articles: the first suggests ten ways to build high quality into software, the second, from the Scaled Agile Framework, outlines a clear definition of Done, the third highlights how lean thinking and management help, and the last outlines how Poka-Yoke can help in mistake-proofing.


Building Quality In

“When looking at quality from a testing perspective, I would agree that it is not possible to build software quality in. To build quality in you need to look at the bigger picture. There are many ways to improve quality. It all depends on the problem. Maybe, you can automate something that previously had to be done by a human being. Maybe, you need training to better use the tools you have. Maybe you need to find a better tool to do the job. Or maybe, you need a checklist to remind you of what you need to look at. The possibilities are endless.

That’s not what I’m talking about when I talk about building quality in. Building in quality requires a more general, big-picture approach,” says Karin Dames in the insightful article “10 Ways to Build Quality Into Software – Exploring the possibilities of high-quality software”, where she outlines TEN guidelines to consistently build quality into software:

1. Slow down to speed up
You either do it fast, or thoroughly.

2. Keep the user in mind at all times
The story isn’t done until the right user can use it.

3. Focus on the integration points
Integration is probably the biggest cause for coding errors, understandably.

4. Make it visible
Spend time adding valuable logging, with switches to turn logging on and off on demand (see the sketch after this list).

5. Error handling for humans
What would the next person need to understand this without having to bug me?

6. Stop and fix errors when they’re found
Done means done. End of story. Don’t accept commonly accepted levels of errors.

7. Prevent it from occurring again
Do RCA to uncover what caused the problem in the first place and put a measure in place to prevent it from happening again.

8. Reduce the noise
Good design is simple. Good design is also good quality.

9. Reduce. Re-use. Recycle.
Focus on maintainability. A code base is organic. Factor in time for rewriting code and cleaning up code, just like you would spring clean your house regularly or clean up your desk.

10. Don’t rely on someone else to discover errors
Just because it’s not your job, doesn’t mean you shouldn’t be responsible. If you see something wrong, do something about it. If you can fix it, do it. Immediately.
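
As a small illustration of guideline 4, here is a minimal sketch, assuming a hypothetical CheckoutFlow class, of valuable logging with an on-demand switch, using only java.util.logging from the JDK:

```java
// Hypothetical sketch of guideline 4: make it visible, with a logging switch.
import java.util.logging.Level;
import java.util.logging.Logger;

public class CheckoutFlow {
    private static final Logger LOG = Logger.getLogger("app.checkout");

    public void applyDiscount(double pct) {
        // Valuable, contextual logging rather than noise.
        LOG.log(Level.FINE, "Applying discount of {0}%", pct);
        // ... business logic ...
    }

    public static void main(String[] args) {
        // The "switch": turn detailed logging on or off on demand, here
        // driven by a system property (handler levels may also need adjusting).
        boolean verbose = Boolean.getBoolean("app.verbose");
        Logger.getLogger("app.checkout").setLevel(verbose ? Level.FINE : Level.INFO);
        new CheckoutFlow().applyDiscount(10.0);
    }
}
```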

Read the full article at 10 Ways to Build Quality Into Software – Exploring the possibilities of high-quality software

Reactive vs Proactive Quality Management

“To understand how to build quality into our products from the very beginning, we need to understand why this is not happening naturally. The most common way of preventing defects from reaching customers comes down to introducing a great number of inspections and countless KPI or metrics into the process. The problem with this approach is that it is reactive. And wasteful. If we think in the context of the value streams, neither inspections nor metrics add any value to the customer. At best, they help you discover and react to already produced defects. At worst, they encourage playing the system – you get what you measure.”  is what the insightful article “The Built-In Quality Management of Continuous Improvement” says.

The article goes on to outline how lean management views the issue of quality and defects through the lens of value and continuous improvement.

Shifting into proactive quality management

Lean management views the issue of quality and defects through the lens of value and continuous improvement:

  • Value-centered mindset means everything you do needs to be bringing value to your client. Your client is anyone who receives the deliverable of your work.
  • Waste-conscious thinking helps remove whatever is not adding or supporting value. This results in fewer redundant metrics or steps in a process.
  • Continuous flow of work encourages working in smaller batches. This reduces the risk of larger defects, makes fixes easier and establishes a smooth delivery flow.
  • Bottlenecks are removed or guarded for the sake of the flow. If a work stage adds a lot of value but takes too much time, the cost of delay to the rest of the process might outweigh that value.
  • Pull-powered flow means efforts and resources should not get invested into the things irrelevant to your stakeholders.
  • Upstream leadership empowers the person who is doing the work to elevate issues letting you cut the issues at the root.
  • Analysis and continuous improvement. Applying the Lean principles once won’t do the trick. Continuously analyze your work, outcomes, mistakes and build on that.

Want to know more, read the full article The Built-In Quality Management of Continuous Improvement.

Scalable Definition of Done 

The interesting article Built-In Quality states: “Definition of Done is an important way of ensuring an increment of value can be considered complete. The continuous development of incremental system functionality requires a scaled definition of done to ensure the right work is done at the right time, some early and some only for release. An example is shown in the picture below, but each team, train, and enterprise should build their own definition.”

Copyright © Scaled Agile, Inc. Read the FAQs on how to use SAFe content and trademarks at https://www.scaledagile.com/about/about-us/permissions-faq/.

Read the full article Built-In Quality.

On a closing note, “Have you heard of Poka Yoke?” Poka Yoke means ‘mistake-proofing’ or, more literally, avoiding (yokeru) inadvertent errors (poka). Its idea, preventing errors and defects from appearing in the first place, is universally applicable and has proven to be a true efficiency booster.

Poka Yokes ensure that the right conditions exist before a process step is executed, thus preventing defects from occurring in the first place. Where this is not possible, Poka Yokes perform a detective function, eliminating defects in the process as early as possible.

Poka Yoke is any mechanism in a Lean manufacturing process that helps to avoid mistakes. Its purpose is to eliminate product defects by preventing, correcting, or drawing attention to human errors as they occur.

One of the most common examples is that the driver of a car with a manual gearbox must press the clutch pedal (a process step, the Poka Yoke) before starting the engine. The interlock prevents unintended movement of the car.
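
The same interlock idea translates directly to code. Here is a minimal, hypothetical sketch (Engine and Clutch are illustrative names) that prevents the mistake by making the invalid state unconstructible, rather than detecting it afterwards:

```java
// Hypothetical software Poka Yoke: the API enforces the precondition.
public final class Engine {
    private final Clutch clutch;
    private Engine(Clutch clutch) { this.clutch = clutch; }

    // Poka Yoke: you cannot even obtain an Engine to start
    // unless the clutch is pressed; the invalid state is unrepresentable.
    public static Engine withClutchPressed(Clutch clutch) {
        if (!clutch.isPressed()) {
            throw new IllegalStateException("Press the clutch before starting");
        }
        return new Engine(clutch);
    }

    public void start() { /* safe to start: precondition guaranteed */ }

    public static void main(String[] args) {
        Engine.withClutchPressed(() -> true).start();   // usage example
    }
}

interface Clutch { boolean isPressed(); }
```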

When and how to use it?

The Poka Yoke technique can be used wherever a mistake could occur or something could be done wrong, preventing all kinds of errors:

  • Processing error: Process operation missed or not performed per the standard operating procedure.
  • Setup error: Using the wrong tooling or setting machine adjustments incorrectly.
  • Missing part: Not all parts included in the assembly, welding, or other processes.
  • Improper part/item: Wrong part used in the process.
  • Operations error: Carrying out an operation incorrectly; having the incorrect version of the specification.
  • Measurement error: Errors in machine adjustment, test measurement or dimensions of a part coming in from a supplier.

If you are keen to know more, read the full article What is the Poka Yoke Technique?

References

1. Karin Dames, “10 Ways to Build Quality Into Software – Exploring the possibilities of high-quality software”.

2. Scaled Agile, Inc., “Built-In Quality”.

3. kanbanize.com, “The Built-In Quality Management of Continuous Improvement”.

4. kanbanize.com, “What is the Poka Yoke Technique?”.

Automation in isolation is more of a problem!

by Vijay Kumar Gambhiraopet

Businesses view software testing as a physical activity of executing tests. Test execution, being the interfacing activity between software development and business, is most visible and often perceived as the primary reason for delivery delays. Hence, any solution to expedite this activity is readily accepted. 

Run, run, run!

Automating test execution is seen as a silver bullet to address this issue. Tests can be generated by script-less tools, which do not require scripting, and those with AI embedded promise to generate numerous tests in no time. Once the tests are generated and their execution automated, the tests can be run on demand with no dependency on humans and in a fraction of the time taken by them, eventually producing a defect-free product.

Going open source & script-less

Leaders are demanding solutions based on open source due to the ease of use, flexibility to integrate with enterprise solutions and availability of skills. Open source tools are often coupled with robotic solutions to offer a comprehensive solution for enterprise systems based on technologies accumulated over the years.

The race to automate in order to align with Agile practices has triggered demand for a plethora of script-less test automation tools. The percentage of tests automated is a high-focus metric in governance reports. However, the automated scripts generated by these script-less tools are not portable – to protect the business of the solution providers!

The catch-up game

Even as Agile is being embraced by customers, the automated execution of scripts is often limited to the regression test suite. The functionality developed in a sprint is tested manually, and upon successful testing a representative set is identified for inclusion in the regression suite; these are the candidates for automation. Hence, automation is usually playing catch-up, spanning sprints.

Complete automation of a regression suite is targeted at the release schedule. While the velocity of a team is expected to improve over time, the effort to identify test scenarios, source test data, set up test environments and work defect cycles is ignored. The effort for these tasks accumulates over sprints, leading to team burn-out, and the eventual casualty is automation.

Testability is key to being Agile

An ideal automation strategy, thus, is to automate progressively, for a program to be truly agile. The strategy should include how to identify test cases for automation, make testability a mandatory requisite for developing a functionality, and establish procedures that ensure a dedicated, on-demand test environment and test data, with commitment secured from the stakeholders.

“The last best experience that anyone has anywhere becomes the minimum expectation for the experience they want everywhere.”

– Bridget van Kralingen, IBM leader

“NEW” expectations from QA


To meet this insatiable demand for quality, the responsibility on the tester community is ever increasing. Testers must look beyond the confines of their team: participate in product meetings and agile ceremonies, present a user perspective on the requirements, and convey user priorities to make quality intrinsic to the requirements. Apart from looking deeper into functionality, a tester should start looking higher, into the business objectives. Testers should make quality everyone’s responsibility.

Summary

Automating test execution in isolation ends up being more of a problem than a solution. Any automation solution, whether to enhance quality or to improve test cycles, should encompass tasks across the whole test discipline. Automation should be considered a lever to meet the business objectives, not an objective in itself.


About the author

Vijay works at IBM as Test Automation leader for North America. He has been engaged with multiple clients across geographies and domains.  His professional profile is at  https://www.linkedin.com/in/gambhiraopet/

Design for Testability – An overview

T Ashok @ash_thiru

Summary

This article outlines what testability is, the background of testability in hardware, the economic value of DFT, why testability is important, design principles that enable testability, and guidelines to ease testability of a codebase. It draws upon five interesting articles on DFT and presents a quick overview.


Introduction

Software testability is the degree to which a software artefact (i.e. a software system, software module, requirements or design document) supports testing in a given test context. If the testability of the software artefact is high, then finding faults in the system (if it has any) by means of testing is easier.

The correlation of ‘testability’ to good design can be observed by seeing that code that has weak cohesion, tight coupling, redundancy and lack of encapsulation is difficult to test. A lower degree of testability results in increased test effort. In extreme cases a lack of testability may hinder testing parts of the software or software requirements at all.
(From [1] “Software testability”  )

Testability is a product of effective communication between development, product, and testing teams. The more the ability to test is considered when creating the feature and the more other team members ask for the input of testers in this phase, the more effective testing will be.

(From [2] “Knowledge is Power When It Comes to Software Testability” )

Background 

Design for Testability (DFT) is not a new concept. It has been used in electronic hardware design for over 50 years. If you want to be able to test an integrated circuit, both during the design stage and later in production, you have to design it so that it can be tested. You have to put the “hooks” in when you design it. You can’t simply add testability later; once the circuit is already in silicon, you can’t change it.

DFT is a critical non-functional requirement that affects almost every aspect of electronic hardware design. Similarly, complex agile software systems require testing both during design and in production, and the same principles apply. You have to design your software for testability, else you won’t be able to test it when it’s done.
(From [3] “Design for Testability: A Vital Aspect of the System Architect Role in SAFe” )

The Economic Value of DFT 

Agile testing covers two specific business perspectives: (1) enabling critique of the product, minimising the impact of defects being delivered to the user, and (2) supporting iterative development by providing quick feedback within a continuous integration process.

These are hard to achieve if the system does not allow for simple system/component/unit-level testing. This implies that Agile programs that sustain testability through every design decision will enable the enterprise to achieve a shorter runway for business and architectural epics. DFT helps reduce the impact of large system scope and affords agile teams the luxury of working with something more manageable, reducing the cost of delay in development by assuring that the assets developed are of high quality and needn’t be revisited.
(From [3] “Design for Testability: A Vital Aspect of the System Architect Role in SAFe“)

Why is testability important?

Testability impacts deliverability. When it’s easier for testers to locate issues, they get debugged more quickly, and the application gets to the user faster and without hidden glitches. With higher testability, product/dev teams benefit from faster feedback, enabling frequent fixes and iterations.

Shift-Left – Rather than waiting until test, a whole-team approach to testability means giving your application thoughtful consideration during planning, design, and development as well. This includes emphasising multiple facets such as documentation, logging, and requirements. The more knowledge a tester has of the product or feature, its purpose, and its expected behavior, the more valuable their testing and test results will be.
(From [2] “Knowledge is Power When It Comes to Software Testability” )

Exhaustive Testing

Exhaustive testing is practically feasible and easily achievable if applied in isolation to every component, on all possible measures; this adds to quality, instead of trying to test the finished product with use cases that attempt to address all components. This raises another question: “Are all components testable?” The answer is: build components to be as highly testable as possible.

However, in addition to all these isolated tests, an optimal system-level test should also be carried out to ensure end-to-end completeness.

Exhaustive testing is placing the right set of tests at the right levels, i.e. more isolated tests and optimal system tests.

VGP

(From [4] “Designing the Software Testability”)
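
As a small, hypothetical illustration of “more isolated tests”: for a small pure component the whole input domain can genuinely be swept. The LeapYear class below is illustrative, with java.time.Year serving as an independent oracle:

```java
// Hypothetical sketch: an isolated component tested exhaustively over its domain.
import static org.junit.jupiter.api.Assertions.assertEquals;
import org.junit.jupiter.api.Test;

class LeapYear {
    static boolean isLeap(int year) {
        return (year % 4 == 0 && year % 100 != 0) || year % 400 == 0;
    }
}

class LeapYearExhaustiveTest {
    @Test
    void matchesCalendarRuleForEveryYearInRange() {
        for (int year = 1; year <= 9999; year++) {
            boolean expected = java.time.Year.of(year).isLeap();  // independent oracle
            assertEquals(expected, LeapYear.isLeap(year), "year " + year);
        }
    }
}
```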

“SOLID” design principles

Here are some principles and guidelines that can help you write easily-testable code, which is not only easier to test but also more flexible and maintainable, owing to its better modularity.

(1) Single Responsibility Principle (SRP) – Each software module should only have one reason to change.

(2) Open/Closed Principle (OCP) –  Classes should be open for extension but closed to modifications.

(3) Liskov Substitution Principle (LSP) – Objects of a superclass shall be replaceable with objects of its subclasses without breaking the application.

(4) Interface Segregation Principle (ISP)- No client should be forced to depend on methods it does not use

(5) Dependency Inversion Principle (DIP) – High-level modules should not depend on low-level modules; both should depend on abstractions. Abstractions should not depend on details. Details should depend upon abstractions.

[SOLID = SRP+OCP+LSP+ISP+DIP]
(From [5] “Writing Testable Code” )
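
Here is a minimal sketch of the Dependency Inversion Principle, the one most directly tied to testability; all class names are illustrative:

```java
// Hypothetical sketch of DIP: the high-level ReportService depends on
// an abstraction, not on a concrete database class.
interface ReportStore {                         // the abstraction
    void save(String report);
}

class SqlReportStore implements ReportStore {   // low-level detail
    public void save(String report) { /* write to a SQL database */ }
}

class ReportService {                           // high-level module
    private final ReportStore store;
    ReportService(ReportStore store) { this.store = store; }  // depends on abstraction
    void publish(String report) { store.save(report); }
}
```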

Law of Demeter (LoD)

Another “law” which is useful for keeping the code decoupled and testable is the Law of Demeter. This principle states the following: Each unit should have only limited knowledge about other units: only units “closely” related to the current unit. Each unit should only talk to its friends; don’t talk to strangers. Only talk to your immediate friends.
(From [5] “Writing Testable Code” )
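
A tiny, hypothetical sketch of the Law of Demeter: instead of callers reaching through Customer and Address (order.getCustomer().getAddress().getCity()), Order exposes what its callers need:

```java
// Hypothetical sketch of the Law of Demeter; all names are illustrative.
class Address { final String city; Address(String city) { this.city = city; } }

class Customer {
    private final Address address;
    Customer(Address address) { this.address = address; }
    Address address() { return address; }
}

class Order {
    private final Customer customer;
    Order(Customer customer) { this.customer = customer; }

    // Demeter-friendly: callers ask their immediate friend (Order),
    // instead of talking to the strangers Customer and Address.
    String shippingCity() { return customer.address().city; }
}
```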

Guidelines to ease testability of codebase

(1) Make sure your code has seams – A seam is a place where you can alter behaviour in your program without editing that place.

(2) Don’t mix object creation with application logic
Have two types of classes: application classes and factories. Application classes are those that do real work and have all the business logic, while factories are used to create objects and their respective dependencies.

(3) Use dependency injection
A class should not be responsible for fetching its dependencies, either by creating them, using global state (e.g. Singletons) or getting dependencies through other dependencies (breaking the Law of Demeter). Preferably, dependencies should be provided to the class through its constructor.

(4) Don’t use global state
Global state makes code more difficult to understand, as the user of those classes might not be aware of which variables need to be instantiated. It also makes tests more difficult to write due to the same reason and due to tests being able to influence each other, which is a potential source of flakiness.

(5) Avoid static methods
Static methods are procedural code and should be avoided in an object-oriented paradigm, as they don’t provide the seams required for unit testing.

(6) Favour composition over inheritance
Composition allows your code to better follow the Single Responsibility Principle, making code easy to test while avoiding an explosion in the number of classes. Composition provides more flexibility, as the behaviour of the system is modelled by different interfaces that collaborate, instead of a class hierarchy that distributes behaviour among business-domain classes via inheritance.
(From [5] “Writing Testable Code” )
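
Putting guidelines (1) to (3) together, here is a minimal sketch, with illustrative names, in which the constructor is the seam, the dependency is injected, and a hand-rolled fake replaces it in a JUnit 5 test:

```java
// Hypothetical sketch of seams + dependency injection + no global state.
import static org.junit.jupiter.api.Assertions.assertTrue;
import org.junit.jupiter.api.Test;

interface Clock { long now(); }                      // abstraction = the seam

class SessionChecker {
    private final Clock clock;
    SessionChecker(Clock clock) { this.clock = clock; }   // (3) dependency injection
    boolean isExpired(long startedAt, long ttlMillis) {
        return clock.now() - startedAt > ttlMillis;
    }
}

class SessionCheckerTest {
    @Test
    void expiresAfterTtl() {
        Clock fixed = () -> 10_000L;                 // (1) alter behaviour at the seam
        SessionChecker checker = new SessionChecker(fixed);
        assertTrue(checker.isExpired(0L, 5_000L));   // (4) no global state, no statics
    }
}
```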

References

[1] “Software testability”, https://en.wikipedia.org/wiki/Software_testability

[2] “Knowledge is Power When It Comes to Software Testability”, https://smartbear.com/blog/test-and-monitor/knowledge-is-power-when-it-comes-to-software-testa/

[3] “Design for Testability: A Vital Aspect of the System Architect Role in SAFe”, https://www.scaledagileframework.com/design-for-testability-a-vital-aspect-of-the-system-architect-role-in-safe © Scaled Agile, Inc.

[4] “Designing the Software Testability”, https://medium.com/testengineering/designing-the-software-testability-2ef03c983955

[5] “Writing Testable Code”, https://medium.com/feedzaitech/writing-testable-code-b3201d4538eb


Dissecting the human/machine test conundrum

T Ashok @ash_thiru

Summary

It is common to see testing discussions veer into a dichotomy of “manual vs automated testing” and how the latter is the order of the day. Sadly, I find this discussion seriously flawed. In this article I dissect the way we test as being human-powered and machine-assisted, and outline how the combined power of humans and machine assistance is paramount to testing smartly, rapidly and super efficiently.

Introduction

The way we test has been trivialised by the general milieu into two buckets: manual and automated testing. Firstly, the phrase “manual testing” seems to connote a menial, intensely labour-oriented job, which it is not; the phrase is therefore highly incorrect. Secondly, the notion of automated testing seems to connote writing scripts and running them frequently to detect issues. What is forgotten is that once a test script uncovers an issue and the issue is fixed, the script becomes a health ascertainer rather than a ‘bug finder’, and that test cases/strategy need to be constantly updated to target newer issues.

Human-Machine testing NOT Manual-Automated testing

The more appropriate term would be HUMAN testing, as it connotes a combination of BODYily activity with INTELLECTual thinking powered by the MIND. MACHINE, as a term, is probably more appropriate in signifying an aid to test in a holistic manner than AUTOMATED testing, which seems to connote only build and execute.

HUMAN Testing = intellect+body+mind

Philosophically, a human is seen as a composition of Body, Mind and Intellect. Using the same idea in the context of testing, I see the act of physically observing, hearing, doing and feeling as BODYily activity, while INTELLECTual activity powers some of the key activities of testing and the MIND enables appropriate thinking.

Human Based Testing = INTELLECT + BODY + MIND

The backdrop to dissection

To dissect the various activities and understand what needs to be done by a HUMAN and what can be done by a MACHINE, I am going to use the test activity list that outlines the key activities in the lifecycle of testing:

  • Understanding the System under test
  • Strategising and planning the test
  • Designing test scenarios, cases, data sets
  • Executing tests including automation
  • Reporting – issues, progress
  • Analyse – issues, test progress, learnings

Now we will analyse the various HUMAN-powered and MACHINE-assisted activities for each key activity.

HUMAN-MACHINE Test Map

The complete HUMAN-MACHINE conundrum is dissected here:

Note that this is not intended to be comprehensive, filled with all tool aids or all the human activities, as the map would lose its utility then! Use this as an aid to understand the HUMAN-MACHINE test conundrum and, for heaven’s sake, STOP USING the phrases “MANUAL and AUTOMATED testing”.

“It is time we recognised that it takes smart HUMANs assisted by MACHINES (really tools/tech) to test less, test rapidly and accomplish more”




It takes right brain thinking to go beyond the left

Right-brained creative thinking comes in handy to go beyond the left, enabling us to vary paths, discover new paths and improve outcomes. Thinking creatively is about thinking visually, thinking contextually and thinking socially: using pictures to think spatially, using the application context to react, experiment and question, and then morphing into an end-user, respectively.

Click here to read the full article published in Medium.


Left brain thinking to building great code

Logical ‘left brain’ thinking is essential to good testing. Testing is not just an act, but an intellectual examination of what may be incorrect and how to perturb it effectively and efficiently. This can be seen as a collection of thinking styles, forward, backward and approximate, using methods that may be well-formed techniques or higher-order principles, based on an approach of disciplined process, good habits and learning from experience.

Click here to read the full article published in Medium.


High-performance thinking using the power of language

This is the first article in the series of twelve articles “XII Perspectives to High-Performance QA”, outlining interesting & counter-intuitive perspectives to high-performance QA aligned on four themes of Language, Thinking, Structure & Doing.

In this article under the ‘LANGUAGE’ theme, we examine how language helps in enabling a mindset of brilliant clarity to ‘High-Performance Thinking”. Here I outline how various styles of writing, various sentence constructs & sentence types play a key role in the activities we do, as a producer of brilliant code from the QA angle.

Click here to read the article published in Medium.


15 Facets to Problem Solving

T Ashok @ash_thiru

Summary:
We use many terms like philosophy, mindset, framework, model, process, practice and technique in SW dev/test. This article attempts to simplify and put together a nice image of how they all fit in, to enable clear thinking for brilliant problem solving.


Given that the act of developing software is “problem solving”, we are bombarded by many interesting terms like philosophy, mindset, framework, model, process, practice, technique etc. I am sure we have encountered these terms: Deming philosophy, CMM Model, Scaled Agile Framework, Lean process, white-box techniques etc.

What are these? Are they just jargon that complicates our thinking? Well, these are really different facets of problem solving. Crisp definitions of these are listed below, picked up from dictionary.com.

Crisp definitions of the 15 facets as a table

And here is a simple depiction of these, inspired by the Matryoshka doll!

The 15 facets depicted as a picture, inspired by the Matryoshka doll

A problem-solving philosophy requires a mindset nurtured by good organisational culture, realised via a model, framework or methodology, applying a set of processes/procedures using guidelines, principles, techniques and heuristics, aided by tools, templates and checklists.


About SmartQA

The theme of SmartQA is to explore various dimensions of smartness to leapfrog into the new age of software development, to accomplish more with less by exploiting our intellect along with technology. Towards this, we strive to showcase interesting thoughts and expert industry views through high-quality content: articles, posters, videos and surveys, outlined in a weekly SmartQA Digest emailer. SmartBites are “soundbites from smart people”: ideas, thoughts and views to inspire you to think differently.