Most software quality assurance and program testing concepts can be described using the information below. Please note that this information is a reference only and should be used solely as a guide to ITOC's quality assurance and program testing.
What is Software Quality Assurance? [Return to Top]
Quality assurance ensures that the project is completed according to the previously agreed specifications, standards, and required functionality, without defects or other problems. To achieve this, QA monitors and improves the development process from the beginning of the project. It is oriented toward "prevention".
When should QA testing start in a project, and Why? [Return to Top]
QA should be involved in the project from the beginning. Early involvement helps the teams communicate and understand problems and concerns, and it allows time to set up the testing environment and configuration. Actual testing, however, starts only after the test plans have been written, reviewed, and approved based on the design documentation.
What is Software Testing? [Return to Top]
Software testing is oriented toward "detection". It examines a system or application under controlled conditions, intentionally trying to make things go wrong in order to verify that events happen when they should and do not happen when they should not.
What is Software Quality? [Return to Top]
Quality software is reasonably bug-free, delivered on time and within budget, meets requirements and/or expectations, and is maintainable.
What is Verification and Validation? [Return to Top]
Verification is a preventive mechanism for detecting possible failures before testing begins. It involves reviews, meetings, inspections, and the evaluation of documents, plans, code, and specifications. Validation occurs after verification; it is the actual testing that finds defects against the functionality or the specifications.
What is a Test Plan? [Return to Top]
A test plan is a document that describes the objectives, scope, approach, and focus of a software testing effort.
What is a Test Case? [Return to Top]
A test case is a document that describes an input, action, or event and an expected response, to determine if a feature of an application is working correctly. A test case should contain particulars such as test case identifier, test case name, objective, test conditions/setup, input data requirements, steps, and expected results.
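As a minimal sketch, the particulars listed above can be captured in a simple record. The identifier, names, and data below are hypothetical, not part of any ITOC standard:

```python
# A hypothetical test case record following the fields listed above.
test_case = {
    "id": "TC-001",                            # test case identifier (hypothetical)
    "name": "Login with valid credentials",
    "objective": "Verify that a registered user can log in",
    "setup": "User 'alice' exists with password 'secret'",   # test conditions/setup
    "input_data": {"username": "alice", "password": "secret"},
    "steps": [
        "Open the login page",
        "Enter the username and password",
        "Click the Login button",
    ],
    "expected_result": "User is redirected to the dashboard",
}

# A record can be checked for completeness before it is executed.
REQUIRED_FIELDS = {"id", "name", "objective", "setup",
                   "input_data", "steps", "expected_result"}
assert REQUIRED_FIELDS <= test_case.keys(), "test case is missing fields"
```

Keeping test cases in a structured form like this makes it easy to review them for missing particulars and to feed them into automated test runners later.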
What is Good Code? [Return to Top]
Good code is code that works according to the requirements and is bug-free, readable, expandable, and easily maintainable.
What is Good Design? [Return to Top]
In a good design, the overall structure is clear, understandable, easily modifiable, and maintainable. It works correctly when implemented, and its functionality can be traced back to customer and end-user requirements.
Who is a Good Test Engineer? [Return to Top]
A good test engineer has the ability to think the unthinkable, a "test to break" attitude, a strong desire for quality, and attention to detail.
What is a Walkthrough? [Return to Top]
A walkthrough is a quick, informal meeting held for evaluation purposes.
What is the Software Life Cycle? [Return to Top]
The Software Life Cycle begins when an application is first conceived and ends when it is no longer in use. It includes aspects such as initial concept, requirements analysis, functional design, internal design, documentation planning, test planning, coding, document preparation, integration, testing, maintenance, updates, retesting, phase-out, and other aspects.
What is an Inspection? [Return to Top]
The purpose of an inspection is to find defects and problems, mostly in artifacts such as test plans, specifications, test cases, and code. An inspection finds and reports problems but does not fix them. It is one of the most cost-effective methods of ensuring software quality. Many people can join an inspection, but normally one moderator, one reader, and one note taker are required.
What are the benefits of Automated Testing? [Return to Top]
Automated testing is very valuable for long-term and ongoing projects. You can automate some or all of the tests that need to be run repeatedly or that are difficult to test manually. Automation saves time and effort and makes it possible to run tests outside working hours. Automated tests can be reused by different people many times in the future. In this way, you also standardize the testing process and can depend on the results.
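As a minimal sketch of what such a repeatable automated test might look like, the example below uses Python's standard `unittest` module. The function under test, `discount_price`, is hypothetical:

```python
import unittest

def discount_price(price, percent):
    """Hypothetical function under test: apply a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class DiscountRegressionTests(unittest.TestCase):
    """These checks run identically every time, by anyone, at any hour."""

    def test_typical_discount(self):
        self.assertEqual(discount_price(100.0, 25), 75.0)

    def test_no_discount(self):
        self.assertEqual(discount_price(80.0, 0), 80.0)

    def test_invalid_percent_rejected(self):
        with self.assertRaises(ValueError):
            discount_price(50.0, 150)

if __name__ == "__main__":
    unittest.main(exit=False)
```

Because the suite encodes the expected behaviour once, any team member (or a scheduled overnight job) can rerun it after every change and get the same dependable pass/fail result.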
What are the main problems of working in a geographically distributed team? [Return to Top]
The main problem is communication. Getting to know the team members and sharing as much information as possible, whenever it is needed, is very valuable for resolving problems and concerns. In addition, increasing written communication as much as possible and setting up regular meetings help reduce miscommunication.
What are the common problems in Software Development Process? [Return to Top]
Poor requirements, an unrealistic schedule, inadequate testing, miscommunication, and additional requirement changes after development begins.
What are Test Types? [Return to Top]
· black box testing - testing that requires no knowledge of the internal design or the code; it is based mainly on requirements, specifications, and functionality.
· white box testing - testing based on knowledge of the internal design and code. Tests cover code statements, branches, paths, conditions, etc.
· unit testing - the most 'micro' scale of testing; to test particular functions or code modules. Typically done by the programmer and not by testers, as it requires detailed knowledge of the internal program design and code. Not always easily done unless the application has a well-designed architecture with tight code, may require developing test driver modules or test harnesses.
· incremental integration testing - continuous testing of an application as new functionality is added; requires that various aspects of an application's functionality be independent enough to work separately before all parts of the program are completed, or that test drivers be developed as needed; done by programmers or by testers.
· integration testing - testing of combined parts of an application to determine if they function together correctly. The 'parts' can be code modules, individual applications, client and server applications on a network, etc. This type of testing is especially relevant to client/server and distributed systems.
· functional testing - black-box type testing geared to functional requirements of an application; this type of testing should be done by testers. This doesn't mean that the programmers shouldn't check that their code works before releasing it (which of course applies to any stage of testing.)
· system testing - black-box type testing that is based on overall requirements specifications; covers all combined parts of a system.
· end-to-end testing - similar to system testing; the 'macro' end of the test scale; involves testing of a complete application environment in a situation that mimics real-world use, such as interacting with a database, using network communications, or interacting with other hardware, applications, or systems if appropriate.
· sanity testing or smoke testing - typically an initial testing effort to determine if a new software version is performing well enough to accept it for a major testing effort. For example, if the new software is crashing systems every 5 minutes, bogging down systems to a crawl, or corrupting databases, the software may not be in a 'sane' enough condition to warrant further testing in its current state.
· regression testing - re-testing after fixes or modifications of the software or its environment. It can be difficult to determine how much re-testing is needed, especially near the end of the development cycle. Automated testing tools can be especially useful for this type of testing.
· acceptance testing - final testing based on specifications of the end-user or customer, or based on use by end-users/customers over some limited period of time.
· load testing - testing an application under heavy loads, such as testing of a web site under a range of loads to determine at what point the system's response time degrades or fails.
· stress testing - term often used interchangeably with 'load' and 'performance' testing. Also used to describe such tests as system functional testing while under unusually heavy loads, heavy repetition of certain actions or inputs, input of large numerical values, large complex queries to a database system, etc.
· performance testing - term often used interchangeably with 'stress' and 'load' testing. Ideally 'performance' testing (and any other 'type' of testing) is defined in requirements documentation or QA or Test Plans.
· usability testing - testing for 'user-friendliness'. Clearly this is subjective, and will depend on the targeted end-user or customer. User interviews, surveys, video recording of user sessions, and other techniques can be used. Programmers and testers are usually not appropriate as usability testers.
· install/uninstall testing - testing of full, partial, or upgrade install/uninstall processes.
· recovery testing - testing how well a system recovers from crashes, hardware failures, or other catastrophic problems.
· failover testing - typically used interchangeably with 'recovery testing'.
· security testing - testing how well the system protects against unauthorised internal or external access, wilful damage, etc; may require sophisticated testing techniques.
· compatibility testing - testing how well software performs in a particular hardware/software/operating system/network/etc. environment.
· exploratory testing - often taken to mean a creative, informal software test that is not based on formal test plans or test cases; testers may be learning the software as they test it.
· ad-hoc testing - similar to exploratory testing, but often taken to mean that the testers have significant understanding of the software before testing it.
· context-driven testing - testing driven by an understanding of the environment, culture, and intended use of software. For example, the testing approach for life-critical medical equipment software would be completely different than that for a low-cost computer game.
· user acceptance testing - determining if software is satisfactory to an end-user or customer.
· comparison testing - comparing software weaknesses and strengths to competing products.
· alpha testing - testing of an application when development is nearing completion; minor design changes may still be made as a result of such testing. Typically done by end-users or others, not by programmers or testers.
· beta testing - testing when development and testing are essentially completed and final bugs and problems need to be found before final release. Typically done by end-users or others, not by programmers or testers.
· mutation testing - a method for determining if a set of test data or test cases is useful, by deliberately introducing various code changes ('bugs') and retesting with the original test data/cases to determine if the 'bugs' are detected. Proper implementation requires large computational resources.
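The idea behind the last item, mutation testing, can be illustrated in miniature: deliberately plant a bug in a copy of the code and check whether the existing test data catches ("kills") it. The function, mutant, and test data below are hypothetical:

```python
def max_of_two(a, b):
    """Original function under test."""
    return a if a >= b else b

def max_of_two_mutant(a, b):
    """Mutant copy: the comparison operator was deliberately changed."""
    return a if a <= b else b   # injected 'bug': >= became <=

# Test data chosen to exercise the function: ((args), expected result).
test_data = [((3, 5), 5), ((7, 2), 7), ((4, 4), 4)]

def kills_mutant(original, mutant, cases):
    """The test data 'kills' a mutant if any case distinguishes it."""
    return any(original(*args) != mutant(*args) for args, _ in cases)

# The original passes every case...
assert all(max_of_two(*args) == expected for args, expected in test_data)
# ...and the test data is strong enough to detect the injected bug.
assert kills_mutant(max_of_two, max_of_two_mutant, test_data)
```

Real mutation-testing tools automate this by generating many such mutants, which is why proper implementation requires large computational resources: every mutant means another full run of the test suite.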