The role of <strong>QA (Quality Assurance)</strong> is to monitor the quality of the development process so that it produces a quality product.<br/> <strong>Software testing</strong>, on the other hand, is the process of checking the functionality of the final product to see whether it meets the user's requirements.
Testware is the subset of software that helps in testing an application. It is the term given to the combination of software applications and utilities required to test a software package.
A test case is used to test a specific element of functionality. It contains information such as test steps, prerequisites, test environment and expected outputs.
The document that describes the user actions and system responses for a particular piece of functionality is known as a use case. It includes a cover page, revision history, table of contents, flow of events, special requirements, pre-conditions and post-conditions.
In verification, all the key aspects of software development are taken into account: code, specifications, requirements and design documents. Verification is done on the basis of four things: a list of issues, checklists, walkthroughs and inspection meetings. Validation follows verification; it involves actual testing, and all the aspects covered by verification are checked thoroughly during validation.
<strong>Black box testing</strong> is a testing strategy based solely on requirements and specifications. Black box testing requires no knowledge of the internal paths, structures, or implementation of the software being tested.<br/> <strong>White box testing</strong> is a testing strategy based on the internal paths, code structures, and implementation of the software being tested. White box testing generally requires detailed programming skills.<br/> In <strong>gray box testing</strong> we look into the "box" being tested just long enough to understand how it has been implemented. Then we close up the box and use that knowledge to choose more effective black box tests.
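The distinction can be sketched with a small example. The triangle classifier below is a hypothetical function under test, not from the original text; the black-box assertions come purely from its specification, while the white-box assertion deliberately targets one internal comparison.

```python
# Hypothetical function under test: classifies a triangle by its side lengths.
def classify_triangle(a, b, c):
    if a == b == c:
        return "equilateral"
    if a == b or b == c or a == c:
        return "isosceles"
    return "scalene"

# Black-box tests: derived purely from the specification,
# with no knowledge of how the function is written.
assert classify_triangle(3, 3, 3) == "equilateral"
assert classify_triangle(5, 5, 3) == "isosceles"
assert classify_triangle(3, 4, 5) == "scalene"

# White-box test: targets a specific internal branch -- the `a == c`
# comparison that only fires when the first and third sides match.
assert classify_triangle(4, 7, 4) == "isosceles"
```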
No, an increase in testing does not always mean improvement of the product, company, or project. In real test scenarios only about 20% of test plans are critical from a business angle, and running those critical test plans assures that the testing is properly done. If you under-test a system, the number of defects that escape will increase; if you over-test it, your cost of testing will increase even as the number of defects comes down.
<strong>1.</strong> A Master Test Plan covers all the testing and risk areas of the application, whereas a test case document contains the test cases themselves.<br/> <strong>2.</strong> A Master Test Plan contains the details of every individual test to be run during the overall development of the application, whereas a test plan describes the scope, approach, resources and schedule of one particular testing effort.<br/> <strong>3.</strong> A Master Test Plan describes every test that will be performed on the application during the testing cycle (unit test, system test, beta test, etc.), whereas a test plan describes only a subset of them.<br/> <strong>4.</strong> A Master Test Plan is created for large projects; the equivalent document for a small project is simply called a test plan.
The bug life cycle consists of the various stages a bug goes through after it is discovered. Once the Quality Assurance engineer discovers the bug, it is marked as 'New'. The concerned authority then assigns it to a developer, changing its status to 'Assigned'. Once assigned, the bug is either 'Fixed' or 'Rejected'. In both cases the QA engineer verifies the bug and marks it 'Reopened' (if it is not fixed properly) or 'Closed' if it no longer exists.
What would happen if you turned on a newly bought TV and smoke came out of it? <strong>Smoke testing</strong> is basically ensuring that the basic functionality of the product (in the TV's case, displaying video when turned on) works fine. So you identify the most basic test cases you need to execute and perform them.<br> <strong>Sanity testing</strong> is similar: it ensures that the system or product functions without any logical errors. If you are testing a calculator app, you might multiply a number by 9 and check whether the sum of the digits of the answer is divisible by 9.
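The calculator sanity check above can be automated directly. This is a minimal sketch; `multiply` is a hypothetical stand-in for the calculator under test.

```python
# Sanity check for a hypothetical calculator's multiply function:
# any multiple of 9 has a digit sum that is itself divisible by 9.
def multiply(a, b):
    return a * b  # stand-in for the calculator under test

def digit_sum(n):
    return sum(int(d) for d in str(abs(n)))

for n in (7, 123, 98765):
    result = multiply(n, 9)
    assert digit_sum(result) % 9 == 0, f"sanity check failed for {n}"
```

The check does not prove the calculator correct; like any sanity test, it only catches gross logical errors cheaply.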
Unit testing<br/> Integration testing and regression testing<br/> Shakeout testing<br/> Smoke testing<br/> Functional testing<br/> Performance testing<br/> White box and Black box testing<br/> Alpha and Beta testing<br/> Load testing and stress testing<br/> System testing
Data-driven testing is an automation technique in which input and expected output values are read directly from data files such as CSV files, Excel sheets or data pools. It is useful when the same test logic must run against values that change over time.
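A minimal sketch of the idea: the rows below stand in for an external CSV file, and each row drives one execution of the same test logic. The `add` function is a hypothetical system under test.

```python
import csv
import io

# Stand-in data file; in practice the rows would come from a .csv
# or Excel file maintained alongside the test suite.
DATA = """a,b,expected
1,2,3
10,-4,6
0,0,0
"""

def add(a, b):  # hypothetical function under test
    return a + b

# Each row of the data file drives one run of the same test logic.
for row in csv.DictReader(io.StringIO(DATA)):
    result = add(int(row["a"]), int(row["b"]))
    assert result == int(row["expected"]), row
```

Changing the test data then requires editing only the data file, not the test code.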
<strong>Build:</strong> a number given to installable software that is handed over to the testing team by the development team.<br/> <strong>Release:</strong> a number given to installable software that is handed over to the customer.
<strong>Bug release</strong> is when software or an application is handed over to the testing team with the knowledge that a defect is present in the release. The priority and severity of such a bug are low, as it can be removed before the final handover.<br/> <strong>Bug leakage</strong> is when a bug missed by the testing team during testing is discovered by the end users or customer.
There is actually no difference between a 'bug' and a 'defect': both denote unexpected behaviour of the software. An 'error' falls in the same category, but errors are often well defined, for example a 404 error in HTML pages.
<strong>1.</strong> Once the bug is identified by the tester, it is assigned to the development manager in OPEN status.<br/> <strong>2.</strong> If the bug is a valid defect, the development team will fix it; if it is not a valid defect, it is ignored and marked as REJECTED.<br/> <strong>3.</strong> The next step is to check whether it is in scope; if the bug is not part of the current release, it is POSTPONED.<br/> <strong>4.</strong> If the defect was raised earlier, the tester assigns it DUPLICATE status.<br/> <strong>5.</strong> When the bug is assigned to a developer to fix, it is given IN-PROGRESS status.<br/> <strong>6.</strong> Once the defect is repaired, the status changes to FIXED; finally the tester gives it CLOSED status if it passes the final test.
Structure-based testing techniques are dynamic techniques that use the internal structure of the software to derive test cases.<br/> They are usually termed 'white-box' or 'glass-box' techniques.<br/> They require you to know how the software is implemented and how it works.
Component testing is also termed unit, module or program testing. <br/> It looks for defects in the software and verifies its functioning. <br/> It can be done in isolation from the rest of the system, depending on the context of the development life cycle and the system. <br/> Stubs and drivers are commonly used to replace missing software and simulate the interface between software components in a simple manner.<br/> A stub is called from the software component to be tested, while a driver calls the component to be tested.<br/>
Decision tables are specification-based techniques that are more focused on business logic or business rules. <br/> A decision table is useful in dealing with combinations of things such as inputs.<br/> This technique is sometimes also called a 'cause-effect' table, as there exists an associated logic diagramming technique called 'cause-effect graphing' which is at times used to help derive the decision table. <br/> The techniques of equivalence partitioning and boundary value analysis are usually applied to specific situations or inputs.
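A decision table translates naturally into test code: one test case per row covers every combination of conditions. The loan-approval rule below is a hypothetical example, not from the original text.

```python
# A tiny decision table for a hypothetical loan-approval rule:
# each row maps a combination of input conditions to the expected action.
DECISION_TABLE = [
    # (good_credit, has_income) -> expected decision
    ((True,  True),  "approve"),
    ((True,  False), "refer"),
    ((False, True),  "refer"),
    ((False, False), "reject"),
]

def decide(good_credit, has_income):  # stand-in business rule under test
    if good_credit and has_income:
        return "approve"
    if good_credit or has_income:
        return "refer"
    return "reject"

# One test case per table row exercises every combination of conditions.
for (good_credit, has_income), expected in DECISION_TABLE:
    assert decide(good_credit, has_income) == expected
```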
In the exploratory testing approach, testers do minimal planning and maximum test execution.<br/> The planning includes creation of a test charter: a short declaration of the scope of a short, time-boxed test effort, the objectives and the possible approaches to be used. <br/> Test design and test execution are performed in parallel, without formal documentation of test conditions, test cases or test scripts. However, this does not imply that other, more formal testing techniques will not be used.
The pre-requisites for white-box testing are similar to that of black-box testing with one major difference: During white-box testing, the testers have access to the application logic. The tester should ask for access to detailed functional specs and requirements, design documents (both high-level and detailed), and source code. The tester analyzes the source code and prepares functional tests to ensure that the application behaves in compliance with both the requirements and the specs.
Drivers and stubs are a part of incremental testing.<br/> The two approaches used in incremental testing are the top-down and the bottom-up methods.<br/> Drivers are used for the bottom-up approach. <br/> Drivers are modules that run the components being tested.<br/> A stub is used for the top-down approach. <br/> It stands in for a lower-level component that the component under test calls.
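Both roles can be sketched in a few lines. The module names and functions below are hypothetical illustrations, not part of any real system.

```python
# Top-down: module X is real, and the module it calls (Y) is replaced by a stub.
def y_stub(data):
    # Stub: returns a canned answer in place of real module Y's logic.
    return "processed"

def module_x(data, y=y_stub):
    # Component under test; `y` defaults to the stub until module Y exists.
    return f"X saw {y(data)}"

assert module_x("payload") == "X saw processed"

# Bottom-up: module Y is real, and a small driver runs it with test inputs.
def module_y(data):
    return data.upper()

def driver():
    # Driver: exercises the completed lower-level component directly.
    return [module_y(d) for d in ("a", "b")]

assert driver() == ["A", "B"]
```

In practice a mocking library (e.g. `unittest.mock` in Python) often plays the stub role automatically.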
Branch testing exercises every branch of the application at least once, while boundary testing focuses on the limit conditions of the software.
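Boundary testing places test values exactly on and just beside each limit. The age validator below is a hypothetical example chosen to illustrate this.

```python
# Boundary testing for a hypothetical age validator that accepts 18..65.
def is_eligible(age):
    return 18 <= age <= 65

# Test values sit exactly on and just beside each boundary.
boundaries = {17: False, 18: True, 19: True, 64: True, 65: True, 66: False}
for age, expected in boundaries.items():
    assert is_eligible(age) == expected, age
```

Off-by-one errors (e.g. writing `<` instead of `<=`) are caught precisely by the values on the boundary.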
Testing objectives<br/>Testing scope<br/>Testing timeframe<br/>The environment<br/>Reason for testing<br/> The criteria for entry and exit<br/>Deliverables<br/>Risk factors<br/>
A quality audit is a systematic and independent examination used to determine the quality of activities. It cross-checks whether the planned arrangements have been properly implemented.
Client/server applications are complex because the dependencies between clients and servers are high.<br/> The testing needs are extensive, as servers, communications and hardware are interdependent, and integration and system testing must be completed in a limited period of time.
Selenium<br/> Firebug<br/> OpenSTA<br/> WinSCP<br/> YSlow for Firebug<br/> Web Developer toolbar for Firefox
<strong>Load Testing: </strong>Testing an application under heavy but expected load is known as Load Testing. Here, the load refers to the large volume of users, messages, requests, data, etc.<br/> <strong>Stress Testing: </strong>When the load placed on the system is raised or accelerated beyond the normal range then it is known as Stress Testing.<br/> <strong>Volume Testing: </strong> The process of checking the system, whether the system can handle the required amounts of data, user requests, etc. is known as Volume Testing.
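A load test at its simplest fires a burst of concurrent requests and checks that every one succeeds. This is a minimal sketch using a stand-in service function; real load tests target an actual deployed endpoint with dedicated tools.

```python
from concurrent.futures import ThreadPoolExecutor

def handle_request(i):  # hypothetical service endpoint under test
    return {"id": i, "status": 200}

def run_load(n_requests, n_workers=8):
    # Fire n_requests concurrently and count successful responses.
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        responses = list(pool.map(handle_request, range(n_requests)))
    return sum(1 for r in responses if r["status"] == 200)

# Expected load: all 200 simulated requests should come back OK.
assert run_load(200) == 200
```

Raising `n_requests` far beyond the expected volume turns the same harness into a crude stress test.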
Set requirements criteria: the requirements of the software should be complete, clear and agreed upon by all.<br/> Next is a realistic schedule, with time for planning, designing, testing, fixing bugs and re-testing.<br/> Adequate testing: start testing immediately after one or more modules are developed.<br/> Use rapid prototyping during the design phase so that it is easy for customers to know what to expect.<br/> Use group communication tools.
Thread testing is a top-down testing approach in which the progressive integration of components follows the implementation of subsets of the requirements, as opposed to integrating components by successively lower levels.
It is a process to control and document any changes made during the life of a project. Release control, Change control and Revision control are the important aspects of configuration management.
It is a testing phase where the tester tries to break the system by randomly trying the system’s functionality. It can include negative testing as well.
The phase of development where functionality is implemented in its entirety, with only bug fixes remaining.<br/> All functions from the functional specifications are implemented.
<strong>Code Coverage : </strong>This is an analysis method which determines which parts of the software have already been covered by the test case suite and which are remaining. <br/> <strong>Code Inspection : </strong>A formal testing technique where the programmer reviews source code with a group who ask questions analyzing the program logic, analyzing the code with respect to a checklist of historically common programming errors, and analyzing its compliance with coding standards.
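What a coverage tool measures can be illustrated by hand. The example below manually records which branches a test suite executed; in practice a tool such as coverage.py does this bookkeeping automatically.

```python
# Minimal illustration of what a coverage tool measures: record which
# branches of a function the test suite actually executed.
covered = set()

def grade(score):  # hypothetical function under test
    if score >= 50:
        covered.add("pass-branch")
        return "pass"
    covered.add("fail-branch")
    return "fail"

# A suite that only tests passing scores leaves one branch uncovered.
assert grade(70) == "pass"
assert "fail-branch" not in covered

# Adding a failing-score case completes branch coverage.
assert grade(30) == "fail"
assert covered == {"pass-branch", "fail-branch"}
```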
A software quality assurance engineer's tasks include the following:<br/> Writing source code<br/> Software design<br/> Control of source code<br/> Reviewing code<br/> Change management<br/> Configuration management<br/> Integration of software<br/> Program testing<br/> Release management process<br/>
The stub is called from the software component to be tested; it is used in the top-down approach. The driver calls the component to be tested; it is used in the bottom-up approach. Suppose we need to test the interface between modules X and Y, and only module X has been developed. We cannot test module X on its own, but a dummy module standing in for Y lets us do so: that dummy is a stub. Conversely, if the lower-level module is developed first, it cannot receive or send data on its own, so data must be passed to it by an external piece of code. This external piece of code is referred to as a driver.
A bug triage is a process to ensure bug report completeness. It involves:<br/> Analyzing the bug<br/> Assigning the bug to the proper owner<br/> Adjusting the bug's severity properly<br/> Setting an appropriate bug priority<br/>
Web testing is software testing focusing on web applications. <br/> It helps in addressing issues before making the application live to the public. <br/> Security of the web application, basic functionality of the site, accessibility for users with disabilities as well as fully able users, readiness for expected traffic and numbers of users, and the ability to survive a massive traffic spike are some of the issues handled in web testing.
A decision table is a good way to deal with combinations of things (e.g. inputs).<br/> The techniques of equivalence partitioning and boundary value analysis are often applied to specific situations or inputs, and tend to be more focused on the user interface.<br/> If different combinations of inputs result in different actions being taken, this can be more difficult to show using equivalence partitioning and boundary value analysis.<br/> The other two specification-based techniques, decision tables and state transition testing, are more focused on business logic or business rules.
Unstable/ Incomplete software as they are still undergoing changes<br/> Test scripts which are run once in a while<br/> Code and document review<br/>
SilkTest is a software testing automation tool developed by Segue Software, Inc.<br/> The methodology behind SilkTest is a six-phase testing process:<br/> <strong>1.</strong> Plan - Determine the testing strategy and define specific test requirements.<br/> <strong>2.</strong> Capture - Classify the GUI objects in your application and build a framework for running your tests.<br/> <strong>3.</strong> Create - Create automated, reusable tests. Use recording and/or programming to build test scripts written in Segue's 4Test language.<br/> <strong>4.</strong> Run - Select specific tests and execute them against the AUT.<br/> <strong>5.</strong> Report - Analyze test results and generate defect reports.<br/> <strong>6.</strong> Track - Track defects in the AUT and perform regression testing.
<strong>SilkTest Host - </strong>SilkTest component that manages and executes test scripts. It usually runs on a machine different than the machine where AUT (Application Under Test) is running.<br/> <strong>SilkTest Agent - </strong>SilkTest component that receives testing commands from the SilkTest Host and interacts with AUT (Application Under Test) directly. SilkTest Agent usually runs on the same machine where AUT is running.<br/> <strong>SilkTest Project - </strong>A SilkTest project is a collection of files that contains required information about a test project.
Beta testing takes place when the product is about to be released to the end users, whereas pilot testing takes place in an earlier phase of the development cycle.<br/> In beta testing, the application is given to a few users to make sure it meets their requirements and contains no showstoppers, whereas in pilot testing, team members give their feedback to improve the quality of the application.
Regression scenarios are run on all the test cases that failed during manual testing because of the bug in the software. Checking the history of the bug may help in identifying the regression scenarios.
A bug may not be reproducible for the following reasons:<br/> <strong>1. </strong>Low memory. <br/> <strong>2. </strong>Addressing a non-available memory location. <br/> <strong>3. </strong>Events happening in a particular sequence.<br/> A tester can do the following things to deal with a non-reproducible bug:<br/> Include steps that are close to the error statement.<br/> Evaluate the test environment.<br/> Examine and evaluate the test execution results.<br/> Keep resource and time constraints in view.
To support testing during the development of an application, the following tools can be used:<br/> Test Management Tools: JIRA, Quality Center, etc.<br/> Defect Management Tools: Test Director, Bugzilla.<br/> Project Management Tools: SharePoint.<br/> Automation Tools: RFT, QTP, and WinRunner.
A cause-effect graph is a graphical representation of inputs and their associated output effects that can be used to design test cases.
In software testing, a test metric is a standard of test measurement: a statistic describing the structure or content of a testing effort. Metrics include:<br/> Total tests<br/> Tests run<br/> Tests passed<br/> Tests failed<br/> Tests deferred<br/> Tests passed the first time<br/>
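The metrics listed above are simple counts over a run log. A minimal sketch, assuming a hypothetical log where each entry is a test outcome:

```python
# Computing basic test metrics from a hypothetical run log.
results = ["pass", "pass", "fail", "pass", "deferred", "fail"]

metrics = {
    "total":    len(results),
    "run":      sum(1 for r in results if r != "deferred"),
    "passed":   results.count("pass"),
    "failed":   results.count("fail"),
    "deferred": results.count("deferred"),
}
assert metrics == {"total": 6, "run": 5, "passed": 3, "failed": 2, "deferred": 1}
```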
A test matrix maps test cases to the specified requirements, and is used to verify that the test scripts cover them.
Retesting is carried out to check that defect fixes work, while regression testing is performed to check whether a defect fix has any impact on other functionality.
Software quality practices include:<br/> Review the requirements before starting the development phase<br/> Code review<br/> Write comprehensive test cases<br/> Session-based testing<br/> Risk-based testing<br/> Prioritize bugs based on usage<br/> Form a dedicated security and performance testing team<br/> Run a regression cycle<br/> Perform sanity tests on production<br/> Simulate customer accounts on production<br/> Include software QA test reports<br/>
The types of documents in QA are:<br/> Requirement Document<br/> Test Metrics<br/> Test cases and Test plan<br/> Task distribution flow chart<br/> Transaction Mix<br/> User profiles<br/> Test log<br/> Test incident report<br/> Test summary report<br/>
QA testing documents should include:<br/> The number of defects detected, listed by severity level<br/> An explanation of each requirement or business function in detail<br/> Inspection reports<br/> Configurations<br/> Test plans and test cases<br/> Bug reports<br/> User manuals<br/> Separate reports prepared for managers and users
MR stands for Modification Request, also referred to as a defect report; it is written to report errors, problems or suggestions in the software.
Validation activities should be conducted using the following techniques:<br/> Hire a third party for independent verification and validation.<br/> Assign internal staff members who are not involved in development to the verification and validation activities.<br/> Independent evaluation.