
Defect Details and Documents in Software Testing



DEFINITION
A Software Defect / Bug is a condition in a software product which does not meet a software requirement (as stated in the requirement specifications) or end-user expectations (which may not be specified but are reasonable). In other words, a defect is an error in coding or logic that causes a program to malfunction or to produce incorrect/unexpected results.
  • A program that contains a large number of bugs is said to be buggy.
  • Reports detailing bugs in software are known as bug reports. (See Defect Report)
  • Applications for tracking bugs are known as bug tracking tools.
  • The process of finding the cause of bugs is known as debugging.
  • The process of intentionally injecting bugs in a software program, to estimate test coverage by monitoring the detection of those bugs, is known as bebugging.
Software Testing proves that defects exist but NOT that defects do not exist.
CLASSIFICATION
Software Defects/ Bugs are normally classified as per:
  • Severity / Impact (See Defect Severity)
  • Probability / Visibility (See Defect Probability)
  • Priority / Urgency (See Defect Priority)
  • Related Dimension of Quality (See Dimensions of Quality)
  • Related Module / Component
  • Phase Detected
  • Phase Injected
Related Module / Component
Related Module / Component indicates the module or component of the software where the defect was detected. This provides information on which module / component is buggy or risky.
  • Module/Component A
  • Module/Component B
  • Module/Component C
Phase Detected
Phase Detected indicates the phase in the software development lifecycle where the defect was identified.
  • Unit Testing
  • Integration Testing
  • System Testing
  • Acceptance Testing
Phase Injected
Phase Injected indicates the phase in the software development lifecycle where the bug was introduced. Phase Injected is always earlier in the software development lifecycle than the Phase Detected. Phase Injected can be known only after a proper root-cause analysis of the bug.
  • Requirements Development
  • High Level Design
  • Detailed Design
  • Coding
  • Build/Deployment
Note that the categorizations above are just guidelines and it is up to the project/organization to decide on what kind of categorization to use. In most cases, the categorization depends on the defect tracking tool that is being used. It is essential that project members agree beforehand on the categorization (and the meaning of each categorization) so as to avoid arguments, conflicts, and unhealthy bickering later.
DEFECT SEVERITY Fundamentals
Defect Severity or Impact is a classification of software defect (bug) to indicate the degree of negative impact on the quality of software.
ISTQB Definition
  • severity: The degree of impact that a defect has on the development or operation of a component or system.
DEFECT SEVERITY CLASSIFICATION
The actual terminologies, and their meaning, can vary depending on people, projects, organizations, or defect tracking tools, but the following is a normally accepted classification.
  • Critical: The defect affects critical functionality or critical data. It does not have a workaround. Example: Unsuccessful installation, complete failure of a feature.
  • Major: The defect affects major functionality or major data. It has a workaround, but the workaround is neither obvious nor easy. Example: A feature is not functional in one module, but the task can still be done if 10 complicated indirect steps are followed in another module (or modules).
  • Minor: The defect affects minor functionality or non-critical data. It has an easy workaround. Example: A minor feature that is not functional in one module but the same task is easily doable from another module.
  • Trivial: The defect does not affect functionality or data. It does not even need a workaround. It does not impact productivity or efficiency. It is merely an inconvenience. Example: Petty layout discrepancies, spelling/grammatical errors.
Severity is also denoted as:
  • S1 = Critical
  • S2 = Major
  • S3 = Minor
  • S4 = Trivial
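As a sketch, the severity levels above can be modeled in code so that defects sort worst-first; the class and names below are illustrative, not taken from any particular defect tracking tool:

```python
from enum import IntEnum

class Severity(IntEnum):
    """Defect severity; lower value = higher impact (S1..S4)."""
    CRITICAL = 1  # S1: critical functionality/data affected, no workaround
    MAJOR = 2     # S2: difficult, non-obvious workaround exists
    MINOR = 3     # S3: easy workaround exists
    TRIVIAL = 4   # S4: cosmetic; no workaround needed

# Because lower values mean more severe, sorting puts S1 first.
worst_first = sorted([Severity.TRIVIAL, Severity.CRITICAL, Severity.MINOR])
```

Using an ordered enum like this lets a defect list be sorted or filtered by severity directly.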
CAUTION:
Defect Severity is one of the most common causes of feuds between Testers and Developers. A typical situation is one in which a Tester classifies the severity of a defect as Critical or Major while the Developer refuses to accept that, believing the defect to be of Minor or Trivial severity.
Though this article provides guidelines on how to interpret each level of severity, this is still a very subjective matter and chances are high that one party will not agree with the other's interpretation. You can, however, lessen the chances of differing opinions in your project by discussing (and documenting, if necessary) what each level of severity means and by agreeing to at least some standards (substantiated with examples, if necessary).
ADVICE: Go easy on this touchy defect dimension and good luck!

DEFECT PROBABILITY Fundamentals
Defect Probability (Defect Visibility or Bug Probability or Bug Visibility) indicates the likelihood of a user encountering the defect / bug.
  • High: Encountered by all or almost all the users of the feature
  • Medium: Encountered by about 50% of the users of the feature
  • Low: Encountered by very few or no users of the feature
Defect Probability can also be denoted in percentage (%).
The measure of Probability/Visibility is with respect to the usage of a feature and not the overall software. Hence, a bug in a rarely used feature can have a high probability if the bug is easily encountered by users of the feature. Similarly, a bug in a widely used feature can have a low probability if the users rarely detect it.
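Since probability is measured against users of the feature rather than all users of the software, a back-of-the-envelope calculation might look like the following; the helper function and the numbers are purely illustrative:

```python
def defect_probability(users_hitting_bug, users_of_feature):
    """Probability (%) that a user of the feature encounters the defect."""
    if users_of_feature == 0:
        return 0.0
    return 100.0 * users_hitting_bug / users_of_feature

# A bug in a rarely used feature can still have HIGH probability:
# only 40 people use the feature, but 38 of them hit the bug.
rare_feature = defect_probability(38, 40)           # 95.0% -> High

# A bug in a widely used feature can have LOW probability:
# 100,000 users of the feature, but only 500 ever trigger the bug.
popular_feature = defect_probability(500, 100_000)  # 0.5% -> Low
```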

DEFECT PRIORITY Fundamentals
Defect Priority (Bug Priority) indicates the importance or urgency of fixing a defect. Though priority may be initially set by the Software Tester, it is usually finalized by the Project/Product Manager.
Priority can be categorized into the following levels:
  • Urgent: Must be fixed in the next build.
  • High: Must be fixed in one of the upcoming builds and must be included in the release.
  • Medium: May be fixed after the release / in the next release.
  • Low: May or may not be fixed at all.
Priority is also denoted as P1 for Urgent, P2 for High and so on.
NOTE: Priority is quite a subjective decision; do not take the categorizations above as authoritative. However, at a high level, priority is determined by considering the following:
  • Business need for fixing the defect
  • Severity/Impact
  • Probability/Visibility
  • Available Resources (Developers to fix and Testers to verify the fixes)
  • Available Time (Time for fixing, verifying the fixes and performing regression tests after the verification of the fixes)
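One way to picture how these factors combine is a simple scoring sketch. The weights, scales, and thresholds below are invented for illustration only; in practice, priority is a judgment call that the Project/Product Manager finalizes:

```python
def suggest_priority(severity, probability_pct, business_need):
    """Suggest a priority from severity (1=Critical..4=Trivial),
    probability in percent, and business need (1=low..5=high).
    Purely illustrative; real teams decide priority by discussion."""
    score = (5 - severity) * 2 + probability_pct / 25 + business_need
    if score >= 12:
        return "P1"  # Urgent
    if score >= 9:
        return "P2"  # High
    if score >= 6:
        return "P3"  # Medium
    return "P4"      # Low

# A critical, highly visible defect the business must fix:
urgent = suggest_priority(severity=1, probability_pct=90, business_need=5)
```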
ISTQB Definition:
  • priority: The level of (business) importance assigned to an item, e.g. defect.
Defect Priority needs to be managed carefully in order to avoid product instability, especially when there is a large number of defects.


Defect Life Cycle (Bug Life cycle) is the journey of a defect from its identification to its closure. The Life Cycle varies from organization to organization and is governed by the software testing process the organization or project follows and/or the Defect tracking tool being used.

Nevertheless, the life cycle in general resembles the following:
Status       Alternative Status
NEW          –
ASSIGNED     OPEN
DEFERRED     –
DROPPED      REJECTED
COMPLETED    FIXED, RESOLVED, TEST
REASSIGNED   REOPENED
CLOSED       VERIFIED
Defect Status Explanation
  • NEW: Tester finds a defect and posts it with the status NEW. This defect is yet to be studied/approved. The fate of a NEW defect is one of ASSIGNED, DROPPED, or DEFERRED.
  • ASSIGNED / OPEN: Test / Development / Project lead studies the NEW defect and if it is found to be valid it is assigned to a member of the Development Team. The assigned Developer’s responsibility is now to fix the defect and have it COMPLETED. Sometimes, ASSIGNED and OPEN can be different statuses. In that case, a defect can be open yet unassigned.
  • DEFERRED: If a valid NEW or ASSIGNED defect is decided to be fixed in upcoming releases instead of the current release it is DEFERRED. This defect is ASSIGNED when the time comes.
  • DROPPED / REJECTED: Test / Development/ Project lead studies the NEW defect and if it is found to be invalid, it is DROPPED / REJECTED. Note that the specific reason for this action needs to be given.
  • COMPLETED / FIXED / RESOLVED / TEST: Developer ‘fixes’ the defect that is ASSIGNED to him or her. Now, the ‘fixed’ defect needs to be verified by the Test Team and the Development Team ‘assigns’ the defect back to the Test Team. A COMPLETED defect is either CLOSED, if fine, or REASSIGNED, if still not fine.
  • If a Developer cannot fix a defect, some organizations may offer the following statuses:
    • Won’t Fix / Can’t Fix: The Developer will not or cannot fix the defect due to some reason.
    • Can’t Reproduce: The Developer is unable to reproduce the defect.
    • Need More Information: The Developer needs more information on the defect from the Tester.
  • REASSIGNED / REOPENED: If the Tester finds that the ‘fixed’ defect is in fact not fixed or only partially fixed, it is reassigned to the Developer who ‘fixed’ it. A REASSIGNED defect needs to be COMPLETED again.
  • CLOSED / VERIFIED: If the Tester / Test Lead finds that the defect is indeed fixed and is no longer of any concern, it is CLOSED / VERIFIED. This is the happy ending.
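The transitions described above can be sketched as a small state machine. The transition table below is one plausible reading of the life cycle, not a specification from any particular defect tracking tool:

```python
# Allowed transitions in the defect life cycle described above.
TRANSITIONS = {
    "NEW":        {"ASSIGNED", "DROPPED", "DEFERRED"},
    "ASSIGNED":   {"COMPLETED", "DEFERRED", "DROPPED"},
    "DEFERRED":   {"ASSIGNED"},          # picked up again when the time comes
    "DROPPED":    set(),                 # terminal (a reason must be recorded)
    "COMPLETED":  {"CLOSED", "REASSIGNED"},
    "REASSIGNED": {"COMPLETED"},         # must be fixed and completed again
    "CLOSED":     set(),                 # terminal: the happy ending
}

def can_transition(current, new):
    """Return True if moving a defect from `current` to `new` is allowed."""
    return new in TRANSITIONS.get(current, set())
```

Encoding the life cycle this way also makes it easy for a tool to reject shortcut status changes, which keeps defect metrics trustworthy.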
Defect Life Cycle Implementation Guidelines
  • Make sure the entire team understands what each defect status exactly means. Also, make sure the defect life cycle is documented.
  • Ensure that each individual clearly understands his/her responsibility as regards each defect.
  • Ensure that enough detail is entered in each status change. For example, do not simply DROP a defect but provide a reason for doing so.
  • If a defect tracking tool is being used, avoid entertaining any ‘defect related requests’ without an appropriate change in the status of the defect in the tool. Do not let anybody take shortcuts. Or else, you will never be able to get up-to-date defect metrics for analysis.
DEFECT REPORT Fundamentals
After uncovering a defect (bug), testers generate a formal defect report. The purpose of a defect report is to state the problem as clearly as possible so that developers can replicate the defect easily and fix it.
DEFECT REPORT TEMPLATE
In most companies, a defect reporting tool is used and the elements of a report can vary. However, in general, a defect report can consist of the following elements.
  • ID: Unique identifier given to the defect. (Usually automated)
  • Project: Project name.
  • Product: Product name.
  • Release Version: Release version of the product. (e.g. 1.2.3)
  • Module: Specific module of the product where the defect was detected.
  • Detected Build Version: Build version of the product where the defect was detected. (e.g. 1.2.3.5)
  • Summary: Summary of the defect. Keep this clear and concise.
  • Description: Detailed description of the defect. Describe as much as possible without repeating anything or using complex words. Keep it simple but comprehensive.
  • Steps to Replicate: Step-by-step description of how to reproduce the defect. Number the steps.
  • Actual Result: The actual result you observed after following the steps.
  • Expected Result: The expected result.
  • Attachments: Any additional information, such as screenshots and logs.
  • Remarks: Any additional comments on the defect.
  • Defect Severity: Severity of the defect. (See Defect Severity)
  • Defect Priority: Priority of the defect. (See Defect Priority)
  • Reported By: The name of the person who reported the defect.
  • Assigned To: The name of the person assigned to analyze/fix the defect.
  • Status: The status of the defect. (See Defect Life Cycle)
  • Fixed Build Version: Build version of the product where the defect was fixed. (e.g. 1.2.3.9)
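The template above maps naturally onto a simple record type. The field names follow the list above; the types and defaults are assumptions for illustration, since every defect tracking tool defines its own schema:

```python
from dataclasses import dataclass, field

@dataclass
class DefectReport:
    """Fields of a typical defect report (tool-specific fields may differ)."""
    id: str
    project: str
    product: str
    release_version: str            # e.g. "1.2.3"
    module: str
    detected_build_version: str     # e.g. "1.2.3.5"
    summary: str
    description: str
    steps_to_replicate: list        # numbered steps, one per entry
    actual_result: str
    expected_result: str
    severity: str                   # e.g. "S1".."S4" (see Defect Severity)
    priority: str                   # e.g. "P1".."P4" (see Defect Priority)
    reported_by: str
    assigned_to: str = ""
    status: str = "NEW"             # see Defect Life Cycle
    fixed_build_version: str = ""   # e.g. "1.2.3.9", once fixed
    attachments: list = field(default_factory=list)
    remarks: str = ""
```

A freshly filed report starts in status NEW with no assignee, matching the life cycle described earlier.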
REPORTING DEFECTS EFFECTIVELY
It is essential that you report defects effectively so that time and effort are not wasted in trying to understand and reproduce the defect. Here are some guidelines:
  • Be specific:
    • Specify the exact action: Do not say something like ‘Select ButtonB’. Do you mean ‘Click ButtonB’ or ‘Press ALT+B’ or ‘Focus on ButtonB and press ENTER’? Of course, if the defect can be reached in all three ways, it is okay to use a generic term such as ‘Select’, but bear in mind that you might get the fix only for the ‘Click ButtonB’ scenario. [Note: This may be an unlikely example, but it is hoped that the message is clear.]
    • In case of multiple paths, mention the exact path you followed: Do not say something like “If you do ‘A and X’ or ‘B and Y’ or ‘C and Z’, you get D.” Understanding all the paths at once will be difficult. Instead, say “Do ‘A and X’ and you get D.” You can, of course, mention elsewhere in the report that “D can also be obtained if you do ‘B and Y’ or ‘C and Z’.”
    • Do not use vague pronouns: Do not say something like “In ApplicationA, open X, Y, and Z, and then close it.” What does the ‘it’ stand for? ‘Z’ or, ‘Y’, or ‘X’ or ‘ApplicationA’?”
  • Be detailed:
    • Provide more information (not less). In other words, do not be lazy. Developers may or may not use all the information you provide but they sure do not want to beg you for any information you have missed.
  • Be objective:
    • Do not make subjective statements like “This is a lousy application” or “You fixed it real bad.”
    • Stick to the facts and avoid the emotions.
  • Reproduce the defect:
    • Do not be impatient and file a defect report as soon as you uncover a defect. Replicate it at least once more to be sure. (If you cannot replicate it, try to recall the exact test conditions and keep trying. If you still cannot replicate it after many attempts, submit the report anyway for further investigation, stating that you are no longer able to reproduce the defect and attaching any evidence of the defect you have gathered.)
  • Review the report:
    • Do not hit ‘Submit’ as soon as you write the report. Review it at least once. Remove any typos.

