Q: What is verification?
A: Verification ensures the product is designed to deliver all functionality to the customer. It typically involves reviews and meetings to evaluate documents, plans, code, requirements and specifications, and can be done with checklists, issues lists, walk-throughs and inspection meetings.
Q: What is validation?
A: Validation ensures that functionality, as defined in requirements, is the intended behavior of the product; validation typically involves actual testing and takes place after verifications are completed.
Q: What is a walk-through?
A: A walk-through is an informal meeting held for evaluation or informational purposes. A walk-through is also a process at an abstract level: the process of inspecting software code by following paths through the code (as determined by input conditions and choices made along the way). The purpose of a code walk-through is to ensure the code fits its purpose.
Walk-throughs also offer opportunities to assess an individual’s or team’s competency.
Q: What is an inspection?
A: An inspection is a formal meeting, more formalized than a walk-through and typically consists of 3-10 people including a moderator, reader (the author of whatever is being reviewed) and a recorder (to make notes in the document). The subject of the inspection is typically a document, such as a requirements document or a test plan. The purpose of an inspection is to find problems and see what is missing, not to fix anything. The result of the meeting should be documented in a written report. Attendees should prepare for this type of meeting by reading through the document, before the meeting starts; most problems are found during this preparation. Preparation for inspections is difficult, but is one of the most cost-effective methods of ensuring quality, since bug prevention is more cost effective than bug detection.
Q: What is quality?
A: Quality software is software that is reasonably bug-free, delivered on time and within budget, meets requirements and expectations and is maintainable. However, quality is a subjective term. Quality depends on who the customer is and their overall influence in the scheme of things. Customers of a software development project include end-users, customer acceptance test engineers, customer contract officers, customer management, the development organization’s management, test engineers, testers, salespeople, software engineers, stockholders and accountants. Each type of customer will have his or her own slant on quality. The accounting department might define quality in terms of profits, while an end-user might define quality as user friendly and bug free.
Q: What is good code?
A: Good code is code that works, is free of bugs, and is readable and maintainable. Organizations usually have coding standards all developers should adhere to, but every programmer and software engineer has different ideas about what is best and what are too many or too few rules. We need to keep in mind that excessive use of rules can stifle both productivity and creativity. Peer reviews and code analysis tools can be used to check for problems and enforce standards.
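As a sketch of the kind of automated code-analysis check mentioned above, the snippet below flags overly long functions using Python's standard `ast` module. The 25-line limit is an arbitrary example, not a recognized standard, and the check is deliberately minimal; real tools such as linters apply many more rules.

```python
import ast

def long_functions(source, max_lines=25):
    """Return (name, length) pairs for functions longer than max_lines."""
    tree = ast.parse(source)
    flagged = []
    for node in ast.walk(tree):
        if isinstance(node, ast.FunctionDef):
            # end_lineno/lineno give the function's span in the source.
            length = node.end_lineno - node.lineno + 1
            if length > max_lines:
                flagged.append((node.name, length))
    return flagged

sample = "def short():\n    return 1\n"
print(long_functions(sample))  # [] -> no overly long functions
```

A check like this can run in a pre-commit hook or CI job, which is one lightweight way to enforce a coding standard consistently rather than relying on reviewers to notice violations.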
Q: What is good design?
A: Design can mean many things, but it often refers to functional design or internal design. Good functional design is indicated by software functionality that can be traced back to customer and end-user requirements. Good internal design is indicated by software code whose overall structure is clear, understandable, easily modifiable and maintainable; that is robust, with sufficient error handling and status logging capability; and that works correctly when implemented.
Q: What is software life cycle?
A: The software life cycle begins when a software product is first conceived and ends when it is no longer in use. It includes phases such as initial concept, requirements analysis, functional design, internal design, documentation planning, test planning, coding, document preparation, integration, testing, maintenance, updates, re-testing and phase-out.
Q: How do you introduce a new software QA process?
A: It depends on the size of the organization and the risks involved. For large organizations with high-risk projects, serious management buy-in is required and a formalized QA process is necessary. For medium-sized organizations with lower-risk projects, management and organizational buy-in and a slower, step-by-step process are required. Generally speaking, QA processes should be balanced with productivity, in order to keep any bureaucracy from getting out of hand. For smaller groups or projects, an ad-hoc process is more appropriate. A lot depends on team leads and managers; feedback to developers and good communication among customers, managers, developers, test engineers and testers are essential. Regardless of the size of the company, the greatest value for effort is in managing the requirements process, where the goal is requirements that are clear, complete and testable.
Q: What is the role of documentation in QA?
A: Documentation plays a critical role in QA. QA practices should be documented so that they are repeatable. Specifications, designs, business rules, inspection reports, configurations, code changes, test plans, test cases, bug reports and user manuals should all be documented. Ideally, there should be a system for easily finding and obtaining documents and for determining which document will have a particular piece of information. Use documentation change management, if possible.
Q: Why are there so many software bugs?
A: Generally speaking, there are bugs in software because of unclear requirements, software complexity, programming errors, changes in requirements, errors made in bug tracking, time pressure, poorly documented code and/or bugs in tools used in software development.
- There are unclear software requirements because there is miscommunication as to what the software should or shouldn’t do.
- Software complexity. All of the following contribute to the exponential growth in software and system complexity: Windows interfaces, client-server and distributed applications, data communications, enormous relational databases and the sheer size of applications.
- Programming errors occur because programmers and software engineers, like everyone else, can make mistakes.
- As to changing requirements, in some fast-changing business environments, continuously modified requirements are a fact of life. Sometimes customers do not understand the effects of changes, or understand them but request them anyway. The changes require redesign of the software and rescheduling of resources; some of the work already completed has to be redone or discarded, and hardware requirements can be affected, too.
- Bug tracking can result in errors because the complexity of keeping track of changes can itself introduce new errors.
- Time pressures can cause problems, because scheduling of software projects is not easy and it often requires a lot of guesswork and when deadlines loom and the crunch comes, mistakes will be made.
- Code documentation is tough to maintain, and it is also tough to modify code that is poorly documented. The result is bugs. Sometimes there is no incentive for programmers and software engineers to document their code and write clearly documented, understandable code. Sometimes developers get kudos for quickly turning out code, or programmers and software engineers feel they cannot have job security if everyone can understand the code they write, or they believe that if the code was hard to write, it should be hard to read.
- Software development tools, including visual tools, class libraries, compilers, scripting tools, can introduce their own bugs. Other times the tools are poorly documented, which can create additional bugs.
Q: Give me five common problems that occur during software development.
A: Poorly written requirements, unrealistic schedules, inadequate testing, adding new features after development is underway and poor communication.
- Requirements are poorly written when requirements are unclear, incomplete, too general, or not testable; therefore there will be problems.
- The schedule is unrealistic if too much work is crammed in too little time.
- Software testing is inadequate if no one knows whether or not the software is any good until customers complain or the system crashes.
- It’s extremely common that new features are added after development is underway.
- Miscommunication either means the developers don’t know what is needed, or customers have unrealistic expectations and therefore problems are guaranteed.
Q: Do automated testing tools make testing easier?
A: Yes and no.
For larger projects, or ongoing long-term projects, they can be valuable. But for small projects, the time needed to learn and implement them is usually not worthwhile.
A common type of automated tool is the record/playback type. For example, a test engineer clicks through all combinations of menu choices, dialog box choices, buttons, etc. in a GUI and has an automated testing tool record and log the results. The recording is typically in the form of text, based on a scripting language that the testing tool can interpret.
If a change is made (e.g. new buttons are added, or some underlying code in the application is changed), the application is then re-tested by just playing back the recorded actions and compared to the logged results in order to check effects of the change.
One problem with such tools is that if there are continual changes to the product being tested, the recordings have to be changed so often that it becomes a very time-consuming task to continuously update the scripts.
Another problem with such tools is that the interpretation of the results (screens, data, logs, etc.) can be a time-consuming task.
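The record/playback idea described above can be sketched in a few lines. In this hypothetical example, a "recording" is a list of (action, expected result) pairs captured from a first run, and playback re-runs the actions and compares against the log; the `app` function is a made-up stand-in for the GUI under test, not a real tool's API.

```python
def app(action):
    # Toy "application under test": reverses whatever input it receives.
    return action[::-1]

# "Recording" phase: perform the actions and log each result.
actions = ["open", "save", "exit"]
recording = [(a, app(a)) for a in actions]

# "Playback" phase after a change: replay the actions and compare
# against the logged results, collecting any mismatches.
def playback(recording, app):
    return [a for a, expected in recording if app(a) != expected]

print(playback(recording, app))  # [] -> no regressions detected
```

The maintenance problem is visible even here: any intentional change to `app`'s behavior invalidates the logged expectations, so the recording must be re-captured or hand-edited, which is exactly what makes these scripts costly on fast-changing products.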
Q: Give me five solutions to problems that occur during software development.
A: Solid requirements, realistic schedules, adequate testing, firm requirements and good communication.
1. Ensure the requirements are solid, clear, complete, detailed, cohesive, attainable and testable. All players should agree to requirements. Use prototypes to help nail down requirements.
2. Have schedules that are realistic. Allow adequate time for planning, design, testing, bug fixing, re-testing, changes and documentation. Personnel should be able to complete the project without burning out.
3. Do testing that is adequate. Start testing early on, re-test after fixes or changes, and plan for sufficient time for both testing and bug fixing.
4. Avoid new features. Stick to initial requirements as much as possible. Be prepared to defend design against changes and additions, once development has begun and be prepared to explain consequences. If changes are necessary, ensure they’re adequately reflected in related schedule changes. Use prototypes early on so customers’ expectations are clarified and customers can see what to expect; this will minimize changes later on.
5. Communicate. Require walkthroughs and inspections when appropriate; make extensive use of e-mail, networked bug-tracking tools, tools of change management. Ensure documentation is available and up-to-date. Do use documentation that is electronic, not paper. Promote teamwork and cooperation.
Q: What makes a good test engineer?
A: Good test engineers have a “test to break” attitude. We, good test engineers, take the point of view of the customer, have a strong desire for quality and pay attention to detail. Tact and diplomacy are useful in maintaining a cooperative relationship with developers, as is an ability to communicate with both technical and non-technical people. Previous software development experience is also helpful: it provides a deeper understanding of the software development process, gives the test engineer an appreciation for the developers’ point of view, and reduces the learning curve in automated test tool programming.
Rob Davis is a good test engineer because he has a “test to break” attitude, takes the point of view of the customer, has a strong desire for quality and pays attention to detail. He’s also tactful and diplomatic, and has good communication skills, both oral and written. He has previous software development experience, too.
Q: What is a requirements test matrix?
A: The requirements test matrix is a project management tool for tracking and managing testing efforts, based on requirements, throughout the project’s life cycle.
The requirements test matrix is a table, where requirement descriptions are put in the rows of the table, and the descriptions of testing efforts are put in the column headers of the same table.
The requirements test matrix is similar to the requirements traceability matrix, which is a representation of user requirements aligned against system functionality.
The requirements traceability matrix ensures that all user requirements are addressed by the system integration team and implemented in the system integration effort.
The requirements test matrix is a representation of user requirements aligned against system testing.
Similarly to the requirements traceability matrix, the requirements test matrix ensures that all user requirements are addressed by the system test team and implemented in the system testing effort.
Q: Give me a requirements test matrix template!
A: For a simple requirements test matrix template, you want a basic table that you can use for cross-referencing purposes.
How do you create one? You can create a requirements test matrix template in the following six steps:
Step 1: Find out how many requirements you have.
Step 2: Find out how many test cases you have.
Step 3: Based on these numbers, create a basic table. Let’s suppose you have a list of 90 requirements and 360 test cases. Based on these numbers, you want to create a table of 91 rows and 361 columns.
Step 4: Focus on the first column of your table. One by one, copy all your 90 requirement numbers, and paste them into rows 2 through 91 of your table.
Step 5: Focus on the first row of your table. One by one, copy all your 360 test case numbers, and paste them into columns 2 through 361 of your table.
Step 6: Examine each of your 360 test cases, and, one by one, determine which of the 90 requirements they satisfy. If, for the sake of this example, test case 64 satisfies requirement 12, then put a large “X” into cell 13-65 of your table… and then you have it; you have just created a requirements test matrix template that you can use for cross-referencing purposes.
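The six steps above can be sketched in code. This is a minimal illustration, assuming made-up requirement and test case IDs: the matrix is built as a nested dictionary keyed by requirement and test case, and a coverage map says which requirements each test case satisfies.

```python
def build_matrix(requirements, test_cases, coverage):
    """coverage maps each test case ID to the requirement IDs it satisfies."""
    # Start with an empty cell for every requirement/test-case pair.
    matrix = {req: {tc: "" for tc in test_cases} for req in requirements}
    # Mark each covered pair with an "X", as in Step 6.
    for tc, reqs in coverage.items():
        for req in reqs:
            matrix[req][tc] = "X"
    return matrix

# Illustrative IDs only (a real project would have its own numbering):
requirements = ["REQ-01", "REQ-02", "REQ-03"]
test_cases = ["TC-001", "TC-002", "TC-003", "TC-004"]
coverage = {"TC-001": ["REQ-01"], "TC-002": ["REQ-01", "REQ-02"],
            "TC-003": ["REQ-03"], "TC-004": []}

matrix = build_matrix(requirements, test_cases, coverage)
print(matrix["REQ-01"]["TC-002"])  # X
```

A useful by-product of building the matrix this way is that uncovered requirements (rows with no "X") and redundant test cases (columns with no "X") fall out immediately.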
Q: What is reliability testing?
A: Reliability testing is designing reliability test cases, using accelerated reliability techniques (e.g. step-stress, test/analyze/fix, and continuously increasing stress testing techniques), AND testing units or systems to failure, in order to obtain raw failure time data for product life analysis.
The purpose of reliability testing is to determine product reliability, and to determine whether the software meets the customer’s reliability requirements.
In the system test phase, or after the software is fully developed, one reliability testing technique we use is a test/analyze/fix technique, where we couple reliability testing with the removal of faults.
When we identify a failure, we send the software back to the developers, for repair. The developers build a new version of the software, and then we do another test iteration. We track failure intensity (e.g. failures per transaction, or failures per hour) in order to guide our test process, and to determine the feasibility of the software release, and to determine whether the software meets the customer’s reliability requirements.
Q: Give me an example on reliability testing.
A: For example, our products are defibrillators. From direct contact with customers during the requirements gathering phase, our sales team learns that a large hospital wants to purchase defibrillators with the assurance that 99 out of every 100 shocks will be delivered properly.
In this example, the fact that our defibrillator is able to run for 250 hours without any failure, in order to demonstrate reliability, is irrelevant to these customers. In order to test for reliability, we need to translate terminology that is meaningful to the customers into equivalent delivery units, such as the number of shocks. We describe the customer needs in a quantifiable manner, using the customer’s terminology. For example, our quantified reliability testing goal becomes the following: our defibrillator will be considered sufficiently reliable if 10 (or fewer) failures occur in 1,000 shocks.
Then, for example, we use a test/analyze/fix technique, and couple reliability testing with the removal of errors. When we identify a failed delivery of a shock, we send the software back to the developers, for repair. The developers build a new version of the software, and then we deliver another 1,000 shocks into dummy resistor loads.
We track failure intensity (i.e. number of failures per 1,000 shocks) in order to guide our reliability testing, and to determine the feasibility of the software release, and to determine whether the software meets our customers’ reliability requirements.
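The failure-intensity tracking described above can be sketched as a simple calculation per test/analyze/fix iteration, checked against the goal from the defibrillator example (10 or fewer failures per 1,000 shocks). The failure counts below are illustrative numbers, not real data.

```python
RELIABILITY_GOAL = 10 / 1000  # max acceptable failures per shock (example goal)

def failure_intensity(failures, deliveries):
    """Failures per delivery unit (here, per shock)."""
    return failures / deliveries

def meets_goal(failures, deliveries, goal=RELIABILITY_GOAL):
    return failure_intensity(failures, deliveries) <= goal

# Failures observed in successive 1,000-shock iterations, after each fix cycle:
iterations = [42, 23, 11, 7]
for i, failures in enumerate(iterations, start=1):
    ok = meets_goal(failures, 1000)
    print(f"Iteration {i}: {failures} failures/1,000 shocks -> release? {ok}")
```

Tracking the trend across iterations, rather than a single number, is what guides the process: intensity should fall toward the goal as faults are removed, and release feasibility is assessed against the customer's stated threshold.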
Q: What is the role of test engineers?
A: We, test engineers, speed up the work of your development staff, and reduce the risk of your company’s legal liability. We give your company the evidence that the software is correct and operates properly. We also improve your problem tracking and reporting. We maximize the value of your software, and the value of the devices that use it. We also assure the successful launch of your product by discovering bugs and design flaws, before users get discouraged, before shareholders lose their cool, and before your employees get bogged down. We help the work of your software development staff, so your development team can devote its time to building up your product. We also promote continual improvement. We provide documentation required by the FDA, the FAA, other regulatory agencies, and your customers. We save your company money by discovering defects EARLY in the design process, before failures occur in production, or in the field. We save the reputation of your company by discovering bugs and design flaws, before bugs and design flaws damage the reputation of your company.
Q: What is a QA engineer?
A: We, QA engineers, are test engineers but we do more than just testing. Good QA engineers understand the entire software development process and how it fits into the business approach and the goals of the organization. Communication skills and the ability to understand various sides of issues are important. We, QA engineers, are successful if people listen to us, if people use our tests, if people think that we’re useful, and if we’re happy doing our work. I would love to see QA departments staffed with experienced software developers who coach development teams to write better code. But I’ve never seen it. Instead of coaching, we, QA engineers, tend to be process people.
Q: What is the difference between software fault and software failure?
A: Software failure occurs when the software does not do what the user expects to see. Software fault, on the other hand, is a hidden programming error.
A software fault becomes a software failure only when the exact computation conditions are met, and the faulty portion of the code is executed on the CPU. This can occur during normal usage. Or, when the software is ported to a different hardware platform. Or, when the software is ported to a different compiler. Or, when the software gets extended.
Q: What is the role of a QA engineer?
A: The QA engineer’s role is as follows: We, QA engineers, use the system much like real users would, find all the bugs, find ways to replicate the bugs, submit bug reports to the developers, and provide feedback to the developers, i.e. tell them if they’ve achieved the desired level of quality.
Q: What are the responsibilities of a QA engineer?
A: Let’s say an engineer is hired for a small software company’s QA role, and there is no QA team. Should he take responsibility for setting up a QA infrastructure/process, and for the testing and quality of the entire product? No, because taking this responsibility is a classic trap that QA people get caught in. Why? Because we QA engineers cannot assure quality, and because QA departments cannot create quality. What we CAN do is detect lack of quality, and prevent low-quality products from going out the door. What is the solution? We need to drop the QA label, and tell the developers they are responsible for the quality of their own work. The problem is, sometimes, as soon as the developers learn that there is a test department, they will slack off on their testing. We need to offer to help with quality assessment only.
Q: How do you perform integration testing?
A: To perform integration testing, first, all unit testing has to be completed. Upon completion of unit testing, integration testing begins. Integration testing is black box testing. The purpose of integration testing is to ensure distinct components of the application still work in accordance with customer requirements. Test cases are developed with the express purpose of exercising the interfaces between the components. This activity is carried out by the test team.
Integration testing is considered complete when actual results and expected results are either in line, or differences are explainable or acceptable based on client input.
Q: What is integration testing?
A: Integration testing is black box testing. The purpose of integration testing is to ensure distinct components of the application still work in accordance with customer requirements. Test cases are developed with the express purpose of exercising the interfaces between the components. This activity is carried out by the test team. Integration testing is considered complete when actual results and expected results are either in line, or differences are explainable/acceptable based on client input.
Q: How do test plan templates look like?
A: The test plan document template helps to generate test plan documents that describe the objectives, scope, approach and focus of a software testing effort. Test document templates are often in the form of documents that are divided into sections and subsections. One example of a template is a 4-section document where section 1 is the description of the “Test Objective”, section 2 is the description of “Scope of Testing”, section 3 is the description of the “Test Approach”, and section 4 is the “Focus of the Testing Effort”.
All documents should be written to a certain standard and template. Standards and templates maintain document uniformity. They also help in learning where information is located, making it easier for a user to find what they want. With standards and templates, information will not be accidentally omitted from a document. Once Rob Davis has learned and reviewed your standards and templates, he will use them. He will also recommend improvements and/or additions.
A software project test plan is a document that describes the objectives, scope, approach and focus of a software testing effort. The process of preparing a test plan is a useful way to think through the efforts needed to validate the acceptability of a software product. The completed document will help people outside the test group understand the why and how of product validation.
Q: What is a bug life cycle?
A: Bug life cycles are similar to software development life cycles. At any time during the software development life cycle errors can be made during the gathering of requirements, requirements analysis, functional design, internal design, documentation planning, document preparation, coding, unit testing, test planning, integration, testing, maintenance, updates, re-testing and phase-out.
The bug life cycle begins when a programmer, software developer, or architect makes a mistake and creates an unintentional software defect, i.e. a bug, and ends when the bug is fixed and no longer in existence.
What should be done after a bug is found? When a bug is found, it needs to be communicated and assigned to developers that can fix it. After the problem is resolved, fixes should be re-tested.
Additionally, determinations should be made regarding requirements, software, hardware, safety impact, etc., for regression testing to check the fixes didn’t create other problems elsewhere.
If a problem-tracking system is in place, it should encapsulate these determinations. A variety of commercial, problem-tracking, management software tools are available. These tools, with the detailed input of software test engineers, will give the team complete information so developers can understand the bug, get an idea of its severity, reproduce it and fix it.
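The life cycle just described can be sketched as a small state machine of the kind a problem-tracking system enforces. The state names and allowed transitions below are illustrative assumptions; real trackers define their own workflows.

```python
# Allowed transitions between bug states (illustrative workflow).
TRANSITIONS = {
    "new":      {"open"},
    "open":     {"assigned"},
    "assigned": {"fixed"},
    "fixed":    {"retest"},
    "retest":   {"closed", "reopened"},  # re-test either passes or fails
    "reopened": {"assigned"},
    "closed":   set(),
}

class Bug:
    def __init__(self, summary):
        self.summary = summary
        self.state = "new"

    def move_to(self, new_state):
        # Reject any transition the workflow doesn't allow.
        if new_state not in TRANSITIONS[self.state]:
            raise ValueError(f"cannot go from {self.state} to {new_state}")
        self.state = new_state

bug = Bug("Crash when saving an empty file")
for state in ("open", "assigned", "fixed", "retest", "closed"):
    bug.move_to(state)
print(bug.state)  # closed
```

The key point the sketch captures is that a fix must pass through a re-test state before closure, and a failed re-test sends the bug back to the developers rather than quietly closing it.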
Q: When do you choose automated testing?
A: For larger projects, or ongoing long-term projects, automated testing can be valuable. But for small projects, the time needed to learn and implement the automated testing tools is usually not worthwhile. Automated testing tools sometimes do not make testing easier. One problem with automated testing tools is that if there are continual changes to the product being tested, the recordings have to be changed so often that it becomes a very time-consuming task to continuously update the scripts. Another problem with such tools is that the interpretation of the results (screens, data, logs, etc.) can be a time-consuming task.
Q: What other roles are in testing?
A: Depending on the organization, the following roles are more or less standard on most testing projects: Testers, Test Engineers, Test/QA Team Leads, Test/QA Managers, System Administrators, Database Administrators, Technical Analysts, Test Build Managers, and Test Configuration Managers.
Depending on the project, one person can and often will wear more than one hat. For instance, we, Test Engineers, often wear the hat of Technical Analyst, Test Build Manager and Test Configuration Manager as well.
Q: Which of these roles are the best and most popular?
A: As to popularity, if we count the number of applicants and resumes, software developer positions tend to be the most popular among software engineers. As to testing, tester roles tend to be the most popular. Less popular roles are the roles of System Administrators, Test/QA Team Leads, and Test/QA Managers.
As to “best” roles, the best ones are the ones that make YOU happy. The best job is the one that works for YOU, using the skills, resources, and the talents YOU have.
To find the “best” role, you want to experiment and “play” different roles. Persistence, combined with experimentation, will lead to success!
Q: What’s the difference between priority and severity?
A: The word “priority” is associated with scheduling, and the word “severity” is associated with standards. “Priority” means something is afforded or deserves prior attention; a precedence established by urgency or order of importance.
Severity is the state or quality of being severe; severe implies adherence to rigorous standards or high principles and often suggests harshness; severe is marked by or requires strict adherence to rigorous standards or high principles. For example, a severe code of behavior.
The words priority and severity do come up in bug tracking. A variety of commercial, problem-tracking / management software tools are available. These tools, with the detailed input of software test engineers, give the team complete information so developers can understand the bug, get an idea of its severity, reproduce it and fix it. The fixes are based on project priorities and severity of bugs. The severity of a problem is defined in accordance to the end client’s risk assessment, and recorded in their selected tracking tool. Buggy software can severely affect schedules, which, in turn can lead to a reassessment and renegotiation of priorities.
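As a small illustration of how the two attributes interact in a tracker, the sketch below sorts a bug queue by priority first and severity second. The numeric scales (1 = highest) and the sample bugs are assumptions for the example, not a standard.

```python
bugs = [
    {"id": 1, "summary": "Typo on About page", "priority": 3, "severity": 4},
    {"id": 2, "summary": "Data loss on save",  "priority": 1, "severity": 1},
    {"id": 3, "summary": "Slow report export", "priority": 2, "severity": 3},
]

# Schedule work by priority; severity breaks ties within a priority level.
work_queue = sorted(bugs, key=lambda b: (b["priority"], b["severity"]))
print([b["id"] for b in work_queue])  # [2, 3, 1]
```

Note that the two values can legitimately diverge: a severe crash in a rarely used feature may get a low priority, while a cosmetic defect on a flagship screen may get a high one.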
Q: What is the difference between efficient and effective?
A: “Efficient” means having a high ratio of output to input; which means working or producing with a minimum of waste. For example, “An efficient engine saves gas.” Or, “An efficient test engineer saves time”.
“Effective”, on the other hand, means producing or capable of producing an intended result, or having a striking effect. For example, “For rapid long-distance transportation, the jet engine is more effective than a witch’s broomstick”. Or, “For developing software test procedures, engineers specializing in software testing are more effective than engineers who are generalists”.
Q: What is the difference between verification and validation?
A: Verification takes place before validation, and not vice versa.
Verification evaluates documents, plans, code, requirements, and specifications. Validation, on the other hand, evaluates the product itself.
The inputs of verification are checklists, issues lists, walkthroughs and inspection meetings, reviews and meetings. The input of validation, on the other hand, is the actual testing of an actual product.
The output of verification is a nearly perfect set of documents, plans, specifications, and requirements document. The output of validation, on the other hand, is a nearly perfect, actual product.
Q: What is upwardly compatible software?
A: “Upwardly compatible software” is software that is compatible with a later or more complex version of itself. For example, upwardly compatible software is able to handle files created by a later version of itself.
Q: What is upward compression?
A: In software design, “upward compression” means a form of demodularization in which a subordinate module is copied into the body of a superior module.
Q: What is usability?
A: “Usability” means ease of use; the ease with which a user can learn to operate, prepare inputs for, and interpret the outputs of a software product.
Q: What is V&V?
A: “V&V” is an acronym that stands for verification and validation.
Q: What is verification and validation (V&V)?
A: Verification and validation (V&V) is a process that helps to determine if the software requirements are complete, correct; and if the software of each development phase fulfills the requirements and conditions imposed by the previous phase; and if the final software complies with the applicable software requirements.
Q: What is a waterfall model?
A: Waterfall is a model of the software development process in which the concept phase, requirements phase, design phase, implementation phase, test phase, installation phase, and checkout phase are performed in that order, possibly with overlap, but with little or no iteration.
Q: What are the phases of the software development life cycle?
A: The software development life cycle consists of the concept phase, requirements phase, design phase, implementation phase, test phase, installation phase, and checkout phase.
Q: What is the difference between system testing and integration testing?
A: “System testing” is high-level testing, and “integration testing” is lower-level testing. Integration testing is completed first, not system testing. In other words, upon completion of integration testing, system testing is started, and not vice versa.
For integration testing, test cases are developed with the express purpose of exercising the interfaces between the components. For system testing, the complete system is configured in a controlled environment, and test cases are developed to simulate real life scenarios that occur in a simulated real life test environment.
The purpose of integration testing is to ensure distinct components of the application still work in accordance to customer requirements. The purpose of system testing is to validate an application’s accuracy and completeness in performing the functions as designed, and to test all functions of the system that are required in real life.
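The contrast above can be sketched in code. This is a minimal, hypothetical example: the two components (a tax calculator and an invoice formatter) and their names are made up for illustration, not taken from any real system.

```python
# Two hypothetical components and the "system" that wires them together.

class TaxCalculator:
    def total_with_tax(self, amount, rate):
        return round(amount * (1 + rate), 2)

class InvoiceFormatter:
    def format(self, total):
        return f"Amount due: ${total:.2f}"

def create_invoice(amount, rate):
    """The complete system: calculator feeding the formatter, end to end."""
    total = TaxCalculator().total_with_tax(amount, rate)
    return InvoiceFormatter().format(total)

# Integration test: exercises the interface between the two components --
# does the formatter accept exactly what the calculator produces?
def test_calculator_feeds_formatter():
    total = TaxCalculator().total_with_tax(100.00, 0.08)
    assert InvoiceFormatter().format(total) == "Amount due: $108.00"

# System test: drives the fully assembled system with a real-life scenario.
def test_invoice_end_to_end():
    assert create_invoice(100.00, 0.08) == "Amount due: $108.00"

test_calculator_feeds_formatter()
test_invoice_end_to_end()
print("all tests passed")
```

The integration test would still catch a mismatch (say, the calculator returning a string) even if each component passed its own unit tests; the system test only tells you whether the assembled whole behaves correctly.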
Q: What types of testing can you tell me about?
A: Each of the following represents a different type of testing: black box testing, white box testing, unit testing, incremental testing, integration testing, functional testing, system testing, end-to-end testing, sanity testing, regression testing, acceptance testing, load testing, performance testing, usability testing, install/uninstall testing, recovery testing, security testing, compatibility testing, exploratory testing, ad-hoc testing, user acceptance testing, comparison testing, alpha testing, beta testing, and mutation testing.
Q: What is disaster recovery testing?
A: “Disaster recovery testing” is testing how well a system recovers from disasters, crashes, hardware failures, or other catastrophic problems.
Q: How do you conduct peer reviews?
A: The peer review, sometimes called a PDR, is a formal meeting, more formalized than a walk-through, and typically consists of 3-10 people, including the test lead, the task lead (the author of whatever is being reviewed), and a facilitator (to take notes). The subject of the PDR is typically a code block, release, feature, or document. The purpose of the PDR is to find problems and see what is missing, not to fix anything. The result of the meeting is documented in a written report. Attendees should prepare for PDRs by reading through the documents before the meeting starts; most problems are found during this preparation.
Why is the PDR valuable? Because it is a cost-effective method of ensuring quality: bug prevention is more cost-effective than bug detection.
Q: How do you check the security of an application?
A: To check the security of an application, one can use security/penetration testing. Security/penetration testing is testing how well a system is protected against unauthorized internal or external access, or willful damage. This type of testing usually requires sophisticated testing techniques.
Q: What stage of bug fixing is the most cost effective?
A: Bug prevention techniques (i.e. inspections, peer design reviews, and walk-throughs) are more cost effective than bug detection.
Q: What types of white box testing can you tell me about?
A: Clear box testing, glass box testing, and open box testing; all three are synonyms for white box testing.
White box testing is a testing approach that examines the application’s program structure, and derives test cases from the application’s program logic.
Q: What black box testing types can you tell me about?
A: Functional testing, system testing, acceptance testing, closed box testing, and integration testing; all of these are forms of black box testing. Functional testing, for example, is black box testing geared to the functional requirements of an application; system testing, acceptance testing, closed box testing, and integration testing are black box testing as well.
Black box testing is functional testing, not based on any knowledge of internal software design or code. Black box testing is based on requirements and functionality.
Q: Is regression testing performed manually?
A: The answer to this question depends on the initial testing approach. If the initial testing was performed manually, then the regression testing is usually performed manually as well. Conversely, if the initial testing was automated, then the regression testing is usually automated, too.
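An automated regression suite can be as simple as re-running the product against a recorded baseline of expected results. The sketch below is hypothetical: the `discount` function and its baseline values are made up for illustration.

```python
# A minimal automated regression suite: expected results recorded when the
# tests first passed are re-checked against every later build.

def discount(price, percent):
    """The unit under test (a made-up example function)."""
    return round(price * (100 - percent) / 100, 2)

# Baseline captured from the initial passing run: (inputs) -> expected output.
REGRESSION_BASELINE = {
    (100.0, 10): 90.0,
    (80.0, 25): 60.0,
    (20.0, 0): 20.0,
}

def run_regression_suite():
    """Return a list of (inputs, expected, actual) for every regression found."""
    failures = []
    for (price, percent), expected in REGRESSION_BASELINE.items():
        actual = discount(price, percent)
        if actual != expected:
            failures.append(((price, percent), expected, actual))
    return failures

assert run_regression_suite() == []   # an empty list means no regressions
print("regression suite passed")
```

If a later change to `discount` altered any recorded result, the suite would report exactly which inputs regressed, which is the point of automating the re-run.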
Q: What is good about PDRs?
A: PDRs make perfect sense, because they’re for the mutual benefit of you and your end client.
Your end client requires a PDR, because they work on a product, and want to come up with the very best possible design and documentation. Your end client requires you to have a PDR, because when you organize a PDR, you invite and assemble the end client’s best experts and encourage them to voice their concerns as to what should or should not go into the design and documentation, and why.
When you’re a developer, designer, author, or writer, it’s also to your advantage to come up with the best possible design and documentation. Therefore you want to embrace the idea of the PDR, because holding a PDR gives you a significant opportunity to invite and assemble the end client’s best experts and make them work for you for one hour, for your own benefit. To come up with the best possible design and documentation, you want to encourage your end client’s experts to speak up and voice their concerns as to what should or should not go into your design and documentation, and why.
Q: Give me a list of ten good things about PDRs!
A: Number 1: PDRs are easy, because all your meeting attendees are your co-workers and friends.
Number 2: PDRs do produce results. With the help of your meeting attendees, PDRs help you produce better designs and better documents than you could come up with on your own.
Number 3: Preparation for PDRs helps a lot, but, in the worst case, if you had no time to read every page of every document, it’s still OK for you to show up at the PDR.
Number 4: It’s technical expertise that counts the most, but many times you can influence your group just as much, or even more so, if you’re dominant or have good acting skills.
Number 5: PDRs are easy, because, even at the best and biggest companies, you can dominate the meeting by being either very negative, or very bright and wise.
Number 6: It is easy to deliver gentle suggestions and constructive criticism. The brightest and wisest meeting attendees are usually gentle on you; they deliver gentle suggestions that are constructive, not destructive.
Number 7: You get many chances to express your ideas, every time a meeting attendee asks you to justify why you wrote what you wrote.
Number 8: PDRs are effective, because there is no need to wait for anything or anyone; because the attendees make decisions quickly (as to what errors are in your document). There is no confusion either, because all the group’s recommendations are clearly written down for you by the PDR’s facilitator.
Number 9: Your work goes faster, because the group itself is an independent decision making authority. Your work gets done faster, because the group’s decisions are subject to neither oversight nor supervision.
Number 10: At PDRs, your meeting attendees are the very best experts anyone can find, and they work for you, for FREE!
Q: What is the exit criteria?
A: The “exit criteria” is a checklist, sometimes known as the “PDR sign-off sheet”. It is a list of peer design review related tasks that have to be done by the facilitator or attendees of the PDR, either during or near the conclusion of the PDR.
By having a checklist, and by going through the checklist, the facilitator can verify that A) all attendees have inspected all the relevant documents and reports, B) all suggestions and recommendations for each issue have been recorded, and C) all relevant facts of the meeting have been recorded.
The facilitator’s checklist includes the following questions:
- Have we inspected all the relevant documents, code blocks, or products?
- Have we completed all the required checklists?
- Have I recorded all the facts relevant to this peer review?
- Does anyone have any additional suggestions, recommendations, or comments?
- What is the outcome of this peer review?
At the end of the PDR, the facilitator asks the attendees to make a decision as to the outcome of the PDR, i.e. “What is our consensus… are we accepting the design (or document or code)?” Or, “Are we accepting it with minor modifications?” Or, “Are we accepting it after it has been modified and approved through e-mails to the attendees?” Or, “Do we want another peer review?” This is a phase during which the attendees work as a committee, and the committee’s decision is final.
Q: What is the entry criteria?
A: The entry criteria is a checklist, or a combination of checklists that includes the “developer’s checklist”, “testing checklist”, and the “PDR checklist”. Checklists are lists of tasks that have to be done by developers, testers, or the facilitator, at or before the start of the PDR.
Using these checklists before the start of the PDR, the developer, tester, and facilitator can determine whether all the documents, reports, code blocks, or software products are ready to be reviewed, and whether the PDR’s attendees are prepared to inspect them. The facilitator can ask the PDR’s attendees if they have been able to prepare for the peer review; if they’re not well prepared, he can send them back to their desks, and even ask the task lead to reschedule the PDR.
The facilitator’s script for the entry criteria includes the following questions:
- Are all the required attendees present at the PDR?
- Have all the attendees received all the relevant documents and reports?
- Are all the attendees well prepared for this PDR?
- Have all the preceding life cycle activities been concluded?
- Are there any changes to the baseline?
Q: What is the difference between build and release?
A: Builds and releases are similar, because both builds and releases are end products of software development processes. Builds and releases are similar, because both builds and releases help developers and QA teams to deliver reliable software.
A build is a version of the software, typically one that is still in testing. A version number is usually given to a released product, but sometimes a build number is used instead.
Difference number one: “Build” refers to software that is still in testing, but “release” refers to software that is usually no longer in testing.
Difference number two: “Builds” occur more frequently; “releases” occur less frequently.
Difference number three: “Versions” are based on “builds”, and not vice versa. Builds (or a series of builds) are generated first, as often as one build per every morning (depending on the company), and then every release is based on a build (or several builds), i.e. the accumulated code of several builds.
Q: What is CMM?
A: CMM is an acronym that stands for Capability Maturity Model. As to efforts in developing and testing software, the idea of CMM is that concepts and experiences do not always point us in the right direction, therefore we should develop processes, and then refine those processes. There are five CMM levels, of which Level 5 is the highest…
- CMM Level 1 is called “Initial”.
- CMM Level 2 is called “Repeatable”.
- CMM Level 3 is called “Defined”.
- CMM Level 4 is called “Managed”.
- CMM Level 5 is called “Optimized”.
CMM assessments take two weeks. They’re conducted by a nine-member team led by an SEI-certified lead assessor. There are not many Level 5 companies, and most companies hardly need to be. Within the United States, fewer than 8% of software companies are rated CMM Level 4 or higher. The U.S. government requires that all companies with federal government contracts maintain a minimum of a CMM Level 3 assessment.
Q: What are CMM levels and their definitions?
A: There are five CMM levels, of which Level 5 is the highest.
CMM Level 1 is called “Initial”. The software process is at CMM Level 1, if it is an ad hoc process. At CMM Level 1, few processes are defined, and success, in general, depends on individual effort and heroism.
CMM Level 2 is called “Repeatable”. The software process is at CMM Level 2, if the subject company has some basic project management processes, in order to track cost, schedule, and functionality. Software processes are at CMM Level 2, if necessary processes are in place, in order to repeat earlier successes on projects with similar applications. Software processes are at CMM Level 2, if there are requirements management, project planning, project tracking, subcontract management, QA, and configuration management.
CMM Level 3 is called “Defined”. The software process is at CMM Level 3, if the software process is documented, standardized, and integrated into a standard software process for the subject company. The software process is at CMM Level 3, if all projects use approved, tailored versions of the company’s standard software process for developing and maintaining software. Software processes are at CMM Level 3, if there are process definition, training programs, process focus, integrated software management, software product engineering, intergroup coordination, and peer reviews.
CMM Level 4 is called “Managed”. The software process is at CMM Level 4, if the subject company collects detailed data on the software process and product quality, and if both the software process and the software products are quantitatively understood and controlled. Software processes are at CMM Level 4, if there are software quality management (SQM), and quantitative process management.
CMM Level 5 is called “Optimized”. The software process is at CMM Level 5, if there is continuous process improvement, if there is quantitative feedback from the process, and from piloting innovative ideas and technologies. Software processes are at CMM Level 5, if there are process change management, and defect prevention technology change management.
Q: What is the difference between bug and defect in software testing?
A: In software testing, the difference between “bug” and “defect” is small, and also depends on the end client. For some clients, bug and defect are synonymous, while others believe bugs are subsets of defects.
Difference number one: Defects tend to be easier to describe in bug reports.
Difference number two: It is easier to write descriptions of how to replicate defects; in other words, defects tend to require only brief explanations.
Commonality number one: We, software test engineers, discover both bugs and defects, before bugs and defects damage the reputation of our company.
Commonality number two: We, software QA engineers, use the software much like real users would, to find both bugs and defects, to find ways to replicate both bugs and defects, to submit bug reports to the developers, and to provide feedback to the developers, i.e. tell them if they’ve achieved the desired level of quality.
Commonality number three: We, software QA engineers, do not differentiate between bugs and defects. In our reports, we include both bugs and defects that are the results of software testing.
Q: What is grey box testing?
A: Grey box testing is a software testing technique that uses a combination of black box testing and white box testing. Gray box testing is not black box testing, because the tester does know some of the internal workings of the software under test. In grey box testing, the tester applies a limited number of test cases to the internal workings of the software under test. In the remaining part of the grey box testing, one takes a black box approach in applying inputs to the software under test and observing the outputs.
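The mix described above can be illustrated with a toy example. This sketch is hypothetical: the `Cache` class and its internal `_store` dict are invented for the illustration, and the point is only to show white-box checks (peeking at internals) alongside black-box checks (inputs in, outputs observed).

```python
# A minimal grey box testing sketch: the tester knows one internal detail of
# the unit under test and uses it in a few checks, while the rest of the
# checks treat the unit as a black box.

class Cache:
    def __init__(self):
        self._store = {}          # the internal working the grey box tester knows about

    def put(self, key, value):
        self._store[key] = value

    def get(self, key, default=None):
        return self._store.get(key, default)

cache = Cache()
cache.put("a", 1)

# Grey (white-box) check: peek at the internal dict to confirm put() stores.
assert "a" in cache._store

# Black box checks: only inputs and observable outputs.
assert cache.get("a") == 1
assert cache.get("missing", default=0) == 0
print("grey box checks passed")
```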
Q: What is the difference between version and release?
A: Both version and release indicate particular points in the software development life cycle, or in the life cycle of a document. Both terms, version and release, are similar, i.e. pretty much the same thing, but there are minor differences between them.
Minor difference number 1: Version means a variation of an earlier or original type. For example, you might say, “I’ve downloaded the latest version of XYZ software from the Internet. The version number of this software is _____”
Minor difference number 2: Release is the act or instance of issuing something for publication, use, or distribution. Release means something thus released. For example, “Microsoft has just released their brand new gaming software known as _______”
Q: What is data integrity?
A: Data integrity is one of the six fundamental components of information security. Data integrity is the completeness, soundness, and wholeness of the data that also complies with the intention of the creators of the data.
Databases often store important data, including customer information, orders, and pricing tables. In databases, data integrity is achieved by preventing the accidental, deliberate, or unauthorized insertion, modification, or destruction of data.
Q: How do you test data integrity?
A: Data integrity is tested by the following tests:
Verify that you can create, modify, and delete any data in tables.
Verify that sets of radio buttons represent fixed sets of values.
Verify that a blank value can be retrieved from the database.
Verify that, when a particular set of data is saved to the database, each value gets saved fully, and the truncation of strings and rounding of numeric values do not occur.
Verify that the default values are saved in the database, if the user input is not specified.
Verify compatibility with old data, old hardware, versions of operating systems, and interfaces with other software.
Why do we perform data integrity testing? Because we want to verify the completeness, soundness, and wholeness of the stored data. Testing should be performed on a regular basis, because important data could, can, and will change over time.
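One of the checks listed above, that saved values come back fully, with no string truncation or numeric rounding, can be sketched as a round-trip test. This example uses an in-memory SQLite database; the table and column names are made up for the illustration.

```python
# A minimal data integrity round-trip check against an in-memory database.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (customer TEXT, price REAL)")

row_in = ("A customer name long enough to expose truncation", 19.99)
conn.execute("INSERT INTO orders VALUES (?, ?)", row_in)
conn.commit()

row_out = conn.execute("SELECT customer, price FROM orders").fetchone()
assert row_out[0] == row_in[0]    # string saved fully, not truncated
assert row_out[1] == row_in[1]    # numeric value not rounded

# Verify that a blank value can be stored and retrieved as well.
conn.execute("INSERT INTO orders VALUES (?, ?)", ("", None))
conn.commit()
blank = conn.execute(
    "SELECT customer, price FROM orders WHERE customer = ''").fetchone()
assert blank == ("", None)
print("data integrity checks passed")
```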
Q: What is data validity?
A: Data validity is the correctness and reasonableness of data. Reasonableness of data means, for example, that account numbers fall within a range, numeric data contains only digits, dates have a valid month, day, and year, and proper names are spelled correctly. Data validity errors are probably the most common, and most difficult to detect, data-related errors.
What causes data validity errors? They are usually caused by incorrect data entries, when a large volume of data is entered in a short period of time. For example, a data entry operator enters 12/25/2010 as 13/25/2010 by mistake, and this data is therefore invalid. How can you reduce data validity errors? You can use one of the following two simple field validation techniques.
Technique 1: If the date field in a database uses the MM/DD/YYYY format, then you can use a program with the following two data validation rules: “MM” should not exceed “12”, and “DD” should not exceed “31”.
Technique 2: If the original figures do not seem to match the ones in the database, then you can use a program to validate data fields. You can compare the sum of the numbers in the database data field to the original sum of numbers from the source. If there is a difference between the two figures, it is an indication of an error in at least one data element.
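Both techniques are simple enough to sketch in a few lines. The function names below are made up for the illustration.

```python
# Technique 1: validate a MM/DD/YYYY date field -- MM must not exceed 12,
# DD must not exceed 31.
def valid_date_field(text):
    try:
        mm, dd, yyyy = text.split("/")
        return 1 <= int(mm) <= 12 and 1 <= int(dd) <= 31 and len(yyyy) == 4
    except ValueError:          # wrong number of parts, or non-numeric parts
        return False

assert valid_date_field("12/25/2010")
assert not valid_date_field("13/25/2010")    # the operator's typo is caught

# Technique 2: compare the sum of a numeric database field against the sum
# computed from the source; a difference signals an error in at least one
# data element.
def sums_match(source_values, database_values):
    return sum(source_values) == sum(database_values)

assert sums_match([10, 20, 30], [10, 20, 30])
assert not sums_match([10, 20, 30], [10, 20, 31])   # one mistyped element
print("validity checks passed")
```

Note that the sum comparison only detects that an error exists somewhere in the field; locating the bad element still requires a row-by-row comparison.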
Q: Tell me about the TestDirector®
A: The TestDirector® is a software tool that helps software QA professionals to gather requirements, to plan, schedule and run tests, and to manage and track defects/issues/bugs. It is a single browser-based application that streamlines the software QA process.
The TestDirector’s “Requirements Manager” links test cases to requirements, ensures traceability, and calculates what percentage of the requirements are covered by tests, how many of these tests have been run, and how many have passed or failed.
As to planning, the test plans can be created, or imported, for both manual and automated tests. The test plans then can be reused, shared, and preserved.
The TestDirector’s “Test Lab Manager” allows you to schedule tests to run unattended, or even run overnight.
The TestDirector’s “Defect Manager” supports the entire bug life cycle, from initial problem detection through fixing the defect, and verifying the fix.
Additionally, the TestDirector can create customizable graphs and reports, including test execution reports and release status assessments.
Q: Why should I use static testing techniques?
A: There are several reasons why one should use static testing techniques.
Reason number 1: One should use static testing techniques because static testing is a bargain, compared to dynamic testing.
Reason number 2: Static testing can be up to 100 times more effective than dynamic testing. Even in selective testing, static testing may be up to 10 times more effective. The most pessimistic estimates suggest a factor of 4.
Reason number 3: Since static testing is faster and achieves 100% coverage, the unit cost of detecting these bugs by static testing is many times lower than detecting bugs by dynamic testing.
Reason number 4: About half of the bugs, detectable by dynamic testing, can be detected earlier by static testing.
Reason number 5: If one uses neither static nor dynamic test tools, the static tools offer greater marginal benefits.
Reason number 6: If an urgent deadline looms on the horizon, the use of dynamic testing tools can be omitted, but tool-supported static testing should never be omitted.
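To make "tool-supported static testing" concrete, here is a tiny illustration: the code under test is analyzed without ever being executed. The checker below scans Python source for one classic bug pattern (comparing against a string literal with `is` instead of `==`); the sample source and function names are invented for the example.

```python
# A tiny static testing tool: find `is`/`is not` comparisons against string
# literals, without running the program under test.
import ast

SOURCE = """
def greet(name):
    if name is "admin":      # bug: should be ==
        return "hello, boss"
    return "hello"
"""

def find_is_literal_comparisons(source):
    """Return the line numbers of `is`/`is not` comparisons with a string literal."""
    issues = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Compare):
            for op, right in zip(node.ops, node.comparators):
                if (isinstance(op, (ast.Is, ast.IsNot))
                        and isinstance(right, ast.Constant)
                        and isinstance(right.value, str)):
                    issues.append(node.lineno)
    return issues

assert find_is_literal_comparisons(SOURCE) == [3]
print("static check found the bug on line 3")
```

Real static testing tools (lint-style checkers) apply hundreds of such pattern checks at once, which is why their cost per detected bug is so low.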
Q: What is smoke testing?
A: Smoke testing is a relatively simple check to see whether the product “smokes” when it runs. Smoke testing is often performed informally, without a formal test plan.
With many projects, smoke testing is carried out in addition to formal testing. If smoke testing is carried out by a skilled tester, it can often find problems that are not caught during regular testing. Sometimes, if testing occurs very early or very late in the software development life cycle, this can be the only kind of testing that can be performed.
Smoke testing, by definition, is not exhaustive, but, over time, you can increase your coverage of smoke testing.
A common practice at Microsoft, and some other software companies, is the daily build and smoke test process. This means, every file is compiled, linked, and combined into an executable file every single day, and then the software is smoke tested.
Smoke testing minimizes integration risk, reduces the risk of low quality, supports easier defect diagnosis, and improves morale. Smoke testing does not have to be exhaustive, but should expose any major problems. Smoke testing should be thorough enough that, if it passes, the tester can assume the product is stable enough to be tested more thoroughly. Without smoke testing, the daily build is just a time wasting exercise. Smoke testing is the sentry that guards against any errors in development and future problems during integration. At first, smoke testing might be the testing of something that is easy to test. Then, as the system grows, smoke testing should expand and grow, from a few seconds to 30 minutes or more.
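A daily build and smoke test driver can be sketched in miniature. This is a hypothetical example assuming a Python project: the "build" step byte-compiles a source file (a compile failure breaks the build), and the smoke test simply exercises the product's main entry point to see whether it "smokes". All file and function names here are invented.

```python
# A minimal build-and-smoke-test driver for a (made-up) one-file product.
import os
import py_compile
import tempfile

SOURCE = """
def main():
    return "hello from the product"
"""

def daily_build(source_text):
    """Build step: write the source and byte-compile it.
    py_compile raises PyCompileError on failure, i.e. a broken build."""
    path = os.path.join(tempfile.mkdtemp(), "product.py")
    with open(path, "w") as f:
        f.write(source_text)
    py_compile.compile(path, doraise=True)
    return path

def smoke_test(path):
    """Smoke test: run the main entry point end to end -- not exhaustive,
    just enough to show the product is stable enough for deeper testing."""
    namespace = {}
    with open(path) as f:
        exec(f.read(), namespace)
    return namespace["main"]() == "hello from the product"

built = daily_build(SOURCE)
assert smoke_test(built)
print("build OK, smoke test passed")
```

In a real project the build step would invoke the project's actual build system and the smoke test would grow over time, as the answer above describes, from a few seconds to 30 minutes or more.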
Q: What is the difference between monkey testing and smoke testing?
A: Difference number 1: Monkey testing is random testing, and smoke testing is nonrandom testing. Smoke testing is nonrandom testing that deliberately exercises the entire system from end to end, with the goal of exposing any major problems.
Difference number 2: Monkey testing is performed by automated testing tools, while smoke testing is usually performed manually.
Difference number 3: Monkey testing is performed by “monkeys”, while smoke testing is performed by skilled testers.
Difference number 4: “Smart monkeys” are valuable for load and stress testing, but not very valuable for smoke testing, because they are too expensive for smoke testing.
Difference number 5: “Dumb monkeys” are inexpensive to develop, are able to do some basic testing, but, if we used them for smoke testing, they would find few bugs during smoke testing.
Difference number 6: Monkey testing is not a thorough testing, but smoke testing is thorough enough that, if the build passes, one can assume that the program is stable enough to be tested more thoroughly.
Difference number 7: Monkey testing either does not evolve, or evolves very slowly. Smoke testing, on the other hand, evolves as the system evolves from something simple to something more thorough.
Difference number 8: Monkey testing takes “six monkeys” and a “million years” to run. Smoke testing, on the other hand, takes much less time to run, i.e. from a few seconds to a couple of hours.
Q: Tell me about daily builds and smoke tests.
A: The idea is to build the product every day, and test it every day. The software development process at Microsoft and many other software companies requires daily builds and smoke tests. According to their process, every day, every single file has to be compiled, linked, and combined into an executable program; and then the program has to be “smoke tested”.
Smoke testing is a relatively simple check to see whether the product “smokes” when it runs.
Please note that you should add revisions to the build only when it makes sense to do so. You should establish a build group and build daily; set your own standard for what constitutes “breaking the build”; create a penalty for breaking the build; and check for broken builds every day.
In addition to the daily builds, you should smoke test the builds, and smoke test them daily. You should make the smoke test evolve as the system evolves. You should build and smoke test daily, even when the project is under pressure.
Think about the many benefits of this process! The process of daily builds and smoke tests minimizes the integration risk, reduces the risk of low quality, supports easier defect diagnosis, improves morale, enforces discipline, and keeps pressure cooker projects on track. If you build and smoke test DAILY, success will come, even when you’re working on large projects!
Q: What is the purpose of test strategy?
A: Reason number 1: The number one reason for writing a test strategy document is to have a signed, sealed, and delivered document, approved by, for example, the FDA or FAA, that includes a written testing methodology, test plan, and test cases.
Reason number 2: Having a test strategy does satisfy one important step in the software testing process.
Reason number 3: The test strategy document tells us how the software product will be tested.
Reason number 4: The creation of a test strategy document presents an opportunity to review the test plan with the project team.
Reason number 5: The test strategy document describes the roles, responsibilities, and the resources required for the test and schedule constraints.
Reason number 6: When we create a test strategy document, we have to put into writing any testing issues requiring resolution (and usually this means additional negotiation at the project management level).
Reason number 7: The test strategy is decided first, before lower level decisions are made on the test plan, test design, and other testing issues.
Q: Give me one test case that catches all the bugs!
A: On the negative side, if there were a “magic bullet”, i.e. the one test case that could catch ALL the bugs, or at least the most important bugs, it would be a challenge to find it, because test cases depend on requirements; requirements depend on what customers need; and customers have a great many different needs that keep changing. As software systems change and grow increasingly complex, writing test cases becomes increasingly challenging.
On the positive side, there are ways to create “minimal test cases” which can greatly simplify the test steps to be executed. But, writing such test cases is time consuming, and project deadlines often prevent us from going that route. Often the lack of enough time for testing is the reason for bugs to occur in the field.
However, even with ample time to catch the “most important bugs”, bugs still surface with amazing spontaneity. The fundamental challenge is, developers do not seem to know how to avoid providing the many opportunities for bugs to hide, and testers do not seem to know where the bugs are hiding.
Q: What is a test scenario?
A: The terms “test scenario” and “test case” are often used synonymously. Test scenarios are test cases or test scripts, and the sequence in which they are to be executed. They are test cases that ensure that all business process flows are tested from end to end. Test scenarios may be independent tests, or a series of tests that follow each other, each dependent upon the output of the previous one. They are prepared by reviewing functional requirements and preparing logical groups of functions that can be further broken into test procedures, and they are designed to represent both typical and unusual situations that may occur in the application. Test engineers define unit test requirements and unit test scenarios, and also execute them. It is the test team that, with the assistance of developers and clients, develops test scenarios for integration and system testing. Test scenarios are executed through the use of test procedures or scripts, which define a series of steps necessary to perform one or more test scenarios; a test procedure or script may cover multiple test scenarios.
Q: What is the difference between a test plan and a test scenario?
A: Difference number 1: A test plan is a document that describes the scope, approach, resources, and schedule of intended testing activities, while a test scenario is a document that describes both typical and atypical situations that may occur in the use of an application.
Difference number 2: Test plans define the scope, approach, resources, and schedule of the intended testing activities, while test procedures define test conditions, data to be used for testing, and expected results, including database updates, file outputs, and report results.
Difference number 3: A test plan is a description of the scope, approach, resources, and schedule of intended testing activities, while a test scenario is a description of test cases that ensure that a business process flow, applicable to the customer, is tested from end to end.
Q: Can you give me a few sample test cases?
A: For instance, if one of the requirements is, “The brake lights shall be on, when the brake pedal is depressed”, then, based on this one requirement, I would write all of the following test cases:
Test case number “101”: Inputs: The headlights are on. The brake pedal is depressed. Expected result: The brake lights are on. Verify that the brake lights are on, when the brake pedal is depressed.
Test case number “102”: Inputs: The left turn lights are on. The brake pedal is depressed. Expected result: The brake lights are on. Verify that the brake lights are on, when the brake pedal is depressed.
Test case number “103”: Inputs: The right turn lights are on. The brake pedal is depressed. Expected result: The brake lights are on. Verify that the brake lights are on, when the brake pedal is depressed.
As you might have guessed, to verify this one particular requirement, one could write many additional test cases, but you get the idea.
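Since the three test cases above follow one pattern, they can also be written as a single table-driven test. The `brake_lights_on` function below is a toy model of the requirement, invented for the illustration; in a real project it would be replaced by a call into the system under test.

```python
# A table-driven version of test cases 101-103.

def brake_lights_on(headlights, left_turn, right_turn, brake_pedal):
    """Toy model of the requirement: the brake lights follow the brake
    pedal, regardless of which other lights are on."""
    return brake_pedal

TEST_CASES = [
    # (case id, headlights, left turn, right turn, brake pedal, expected)
    ("101", True,  False, False, True, True),
    ("102", False, True,  False, True, True),
    ("103", False, False, True,  True, True),
]

for case_id, head, left, right, pedal, expected in TEST_CASES:
    actual = brake_lights_on(head, left, right, pedal)
    assert actual == expected, f"test case {case_id} failed"
print("all brake light test cases passed")
```

Adding further combinations (headlights plus both turn signals, pedal released, and so on) is then a one-line change per test case.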
Q: What is a requirements test matrix?
A: The requirements test matrix is a project management tool for tracking and managing testing efforts, based on requirements, throughout the project’s life cycle.
The requirements test matrix is a table, where requirement descriptions are put in the rows of the table, and the descriptions of testing efforts are put in the column headers of the same table.
The requirements test matrix is similar to the requirements traceability matrix, which is a representation of user requirements aligned against system functionality. The requirements traceability matrix ensures that all user requirements are addressed by the system integration team and implemented in the system integration effort.
The requirements test matrix is a representation of user requirements aligned against system testing. Similarly to the requirements traceability matrix, the requirements test matrix ensures that all user requirements are addressed by the system test team and implemented in the system testing effort.
Q: Can you give me a requirements test matrix template?
A: For a requirements test matrix template, you want to visualize a simple, basic table that you create for cross-referencing purposes.
Step 1: Find out how many requirements you have.
Step 2: Find out how many test cases you have.
Step 3: Based on these numbers, create a basic table. If you have a list of 90 requirements and 360 test cases, you want to create a table of 91 rows and 361 columns.
Step 4: Focus on the first column of your table. One by one, copy all your 90 requirement numbers, and paste them into rows 2 through 91 of the table.
Step 5: Now switch your attention to the first row of the table. One by one, copy all your 360 test case numbers, and paste them into columns 2 through 361 of the table.
Step 6: Examine each of your 360 test cases, and, one by one, determine which of the 90 requirements they satisfy. If, for the sake of this example, test case number 64 satisfies requirement number 12, then put a large “X” into cell 13-65 (row 13, column 65) of your table… and there you have it; you have just created a requirements test matrix template that you can use for cross-referencing purposes.
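The six steps above can be sketched in a few lines; the requirement and test case counts are the ones from the example, and the nested list is just one possible representation of the table:

```python
# Steps 1 and 2: 90 requirements and 360 test cases (example figures).
requirements = [f"R{i}" for i in range(1, 91)]
test_cases = [f"T{j}" for j in range(1, 361)]

# Step 3: a table of 91 rows by 361 columns; row 0 and column 0 are headers.
matrix = [[""] * (len(test_cases) + 1) for _ in range(len(requirements) + 1)]

# Step 4: requirement numbers down the first column (rows 2-91, 1-based).
for row, req in enumerate(requirements, start=1):
    matrix[row][0] = req

# Step 5: test case numbers across the first row (columns 2-361, 1-based).
for col, tc in enumerate(test_cases, start=1):
    matrix[0][col] = tc

# Step 6: mark which requirement a test case satisfies.  The example's
# "test case 64 satisfies requirement 12" lands in 1-based cell 13-65,
# which is index [12][64] in this 0-based table.
matrix[12][64] = "X"
```

The same structure maps directly onto a spreadsheet, which is how the matrix is usually maintained in practice.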
Q: What is reliability testing?
A: Reliability testing is designing reliability test cases using accelerated reliability techniques - for example, step-stress, test/analyze/fix, and continuously increasing stress testing techniques - and testing units or systems to failure, in order to obtain raw failure time data for product life analysis.
The purpose of reliability testing is to determine product reliability, and to determine whether the software meets the customer’s reliability requirements.
In the system test phase, or after the software is fully developed, one reliability testing technique we use is a test / analyze / fix technique, where we couple reliability testing with the removal of faults.
When we identify a failure, we send the software back to the developers, for repair. The developers build a new version of the software, and then we do another test iteration.
Then we track failure intensity - for example failures per transaction, or failures per hour - in order to guide our test process, and to determine the feasibility of the software release, and to determine whether the software meets the customer’s reliability requirements.
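A minimal sketch of the failure-intensity calculation described above; the failure log, test duration, and the 0.25 failures-per-hour objective are all assumed figures:

```python
# Hypothetical failure log from one test iteration: hours into the
# test at which each failure was observed.
failure_times = [2.0, 5.5, 9.0, 20.0, 41.0]
test_hours = 50.0

# Failure intensity, expressed here as failures per hour.
failures_per_hour = len(failure_times) / test_hours

# Assumed release criterion: the customer requires fewer than
# 0.25 failures per hour before the software can ship.
objective = 0.25
release_ok = failures_per_hour < objective
```

Tracking this figure across iterations shows whether the test/analyze/fix loop is actually driving failure intensity down toward the customer's requirement.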
Q: What is stress testing?
A: Stress testing is testing that investigates the behavior of software (and hardware) under extraordinary operating conditions. For example, when a web server is stress tested, testing aims to find out how many users can be on-line, at the same time, without crashing the server. Stress testing tests the stability of a given system or entity. It tests something beyond its normal operational capacity, in order to observe any negative results. For example, a web server is stress tested, using scripts, bots, and various denial of service tools.
Q: What is load testing?
A: Load testing simulates the expected usage of a software program, by simulating multiple users that access the program’s services concurrently. Load testing is most useful and most relevant for multi-user systems and client/server models, including web servers. For example, the load placed on the system is increased above normal usage patterns, in order to test the system’s response at peak loads.
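A minimal sketch of simulating concurrent users with threads; `simulated_user` is a hypothetical stand-in that sleeps instead of issuing real requests, and the user count is an assumed peak-load figure:

```python
import threading
import time

def simulated_user(user_id, results):
    """Stand-in for one user exercising the system; a real load test
    would issue requests against the server under test instead."""
    start = time.perf_counter()
    time.sleep(0.01)  # placeholder for one request round trip
    results[user_id] = time.perf_counter() - start

concurrent_users = 20  # assumed peak-load figure
results = {}
threads = [threading.Thread(target=simulated_user, args=(i, results))
           for i in range(concurrent_users)]
for t in threads:
    t.start()
for t in threads:
    t.join()

avg_response = sum(results.values()) / len(results)
```

Real load tests use dedicated tools rather than hand-rolled threads, but the shape is the same: many concurrent simulated users, with per-user response times collected for analysis.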
Q: What is the difference between stress testing and load testing?
A: The term, stress testing, is often used synonymously with performance testing, reliability testing, volume testing, and load testing. Load testing is a blanket term that is used in many different ways across the professional software testing community. Load testing generally stops short of stress testing. During stress testing, the load is so great that the expected results are errors, though there is gray area in between stress testing and load testing.
Q: What is the difference between performance testing and load testing?
A: Load testing is a blanket term that is used in many different ways across the professional software testing community. The term, load testing, is often used synonymously with stress testing, performance testing, reliability testing, and volume testing. Load testing generally stops short of stress testing. During stress testing, the load is so great that errors are the expected results, though there is gray area in between stress testing and load testing.
Q: What is the difference between reliability testing and load testing?
A: The term, reliability testing, is often used synonymously with load testing. Load testing is a blanket term that is used in many different ways across the professional software testing community. Load testing generally stops short of stress testing. During stress testing, the load is so great that errors are the expected results, though there is gray area in between stress testing and load testing.
Q: What is incremental testing?
A: Incremental testing is partial testing of an incomplete product. The goal of incremental testing is to provide early feedback to software developers.
Q: What is alpha testing?
A: Alpha testing is final testing before the software is released to the general public. First (and this is called the first phase of alpha testing), the software is tested by in-house developers. They use either debugger software or hardware-assisted debuggers. The goal is to catch bugs quickly. Then (and this is called the second phase of alpha testing), the software is handed over to us, the software QA staff, for additional testing in an environment that is similar to the intended use.
Q: What is beta testing?
A: Following alpha testing, “beta versions” of the software are released to a group of people, and limited public tests are performed, so that further testing can ensure the product has few bugs. Other times, beta versions are made available to the general public, in order to receive as much feedback as possible. The goal is to benefit the maximum number of future users.
Q: What is the difference between alpha and beta testing?
A: Alpha testing is performed by in-house developers and in-house software QA personnel. Beta testing is performed by a few select prospective customers or the general public. Beta testing is performed after the alpha testing is completed.
Q: What is clear box testing?
A: Clear box testing is the same as white box testing. It is a testing approach that examines the application’s program structure, and derives test cases from the application’s program logic.
Q: What is boundary value analysis?
A: Boundary value analysis is a technique for test data selection. A test engineer chooses values that lie along data extremes. Boundary values include maximum, minimum, just inside boundaries, just outside boundaries, typical values, and error values. The expectation is that, if a system works correctly for these extreme or special values, then it will work correctly for all values in between. An effective way to test code is to exercise it at its natural boundaries.
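A minimal sketch of boundary value selection; the 1..100 valid range and the `is_valid` rule are assumed examples, not a real specification:

```python
# Boundary value selection for a field that accepts integers 1..100
# (an assumed example range).
minimum, maximum = 1, 100

boundary_values = [
    minimum - 1,               # just outside the lower boundary (error value)
    minimum,                   # the minimum itself
    minimum + 1,               # just inside the lower boundary
    (minimum + maximum) // 2,  # a typical mid-range value
    maximum - 1,               # just inside the upper boundary
    maximum,                   # the maximum itself
    maximum + 1,               # just outside the upper boundary (error value)
]

def is_valid(value):
    """The hypothetical validation rule under test."""
    return minimum <= value <= maximum

results = {value: is_valid(value) for value in boundary_values}
```

Seven well-chosen values cover the boundaries of a range that would take a hundred exhaustive cases, which is the whole point of the technique.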
Q: What is ad hoc testing?
A: Ad hoc testing is a testing approach; it is the least formal testing approach.
Q: What is gamma testing?
A: Gamma testing is testing of software that has all the required features, but did not go through all the in-house quality checks. Cynics refer to such software releases as “gamma testing”.
Q: What is glass box testing?
A: Glass box testing is the same as white box testing. It’s a testing approach that examines the application’s program structure, and derives test cases from the application’s program logic.
Q: What is open box testing?
A: Open box testing is the same as white box testing. It’s a testing approach that examines the application’s program structure, and derives test cases from the application’s program logic.
Q: What is black box testing?
A: Black box testing is a type of testing that considers only externally visible behavior. Black box testing considers neither the code itself, nor the “inner workings” of the software.
Q: What is functional testing?
A: Functional testing is the same as black box testing. Black box testing is a type of testing that considers only externally visible behavior. Black box testing considers neither the code itself, nor the “inner workings” of the software.
Q: What is closed box testing?
A: Closed box testing is the same as black box testing. Black box testing is a type of testing that considers only externally visible behavior. Black box testing considers neither the code itself, nor the “inner workings” of the software.
Q: What is bottom-up testing?
A: Bottom-up testing is a technique of integration testing. A test engineer creates and uses test drivers for components that have not yet been developed, because, with bottom-up testing, low-level components are tested first. The objective of bottom-up testing is to call low-level components first, for testing purposes.
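A minimal sketch of a test driver; `parse_record` is a hypothetical low-level component, and `driver` is the throwaway harness that exercises it before any higher-level caller has been developed:

```python
# Low-level component, already implemented and under test.
def parse_record(line):
    """Parse a 'name = value' configuration line into a tuple."""
    name, value = line.split("=", 1)
    return name.strip(), value.strip()

# Test driver: a temporary harness that calls the low-level component,
# standing in for the higher-level module that does not exist yet.
def driver():
    cases = [
        ("timeout = 30", ("timeout", "30")),
        ("host=localhost", ("host", "localhost")),
    ]
    for line, expected in cases:
        assert parse_record(line) == expected
    return "all driver cases passed"
```

Once the real higher-level module exists and calls `parse_record` itself, the driver is discarded; its only job is to let the low-level component be tested first.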
Q: How do you know when to stop testing?
A: This can be difficult to determine. Many modern software applications are so complex and run in such an interdependent environment that complete testing can never be done. Common factors in deciding when to stop are…
- Deadlines, e.g. release deadlines, testing deadlines;
- Test cases completed with certain percentage passed;
- Test budget has been depleted;
- Coverage of code, functionality, or requirements reaches a specified point;
- Bug rate falls below a certain level; or
- Beta or alpha testing period ends.
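Several of these factors can be checked mechanically; a minimal sketch of an exit-criteria check, where the thresholds and current status figures are assumed examples:

```python
# Assumed exit criteria, each as a (required, actual) pair.
criteria = {
    "pass_rate":     (0.95, 0.97),  # fraction of test cases passed
    "code_coverage": (0.80, 0.84),  # fraction of code covered
    "bugs_per_week": (5, 3),        # bug rate must be at or below the limit
}

def can_stop_testing(criteria):
    pass_required, pass_actual = criteria["pass_rate"]
    cov_required, cov_actual = criteria["code_coverage"]
    bug_limit, bug_actual = criteria["bugs_per_week"]
    return (pass_actual >= pass_required
            and cov_actual >= cov_required
            and bug_actual <= bug_limit)
```

Deadlines and budget, of course, remain management judgments that no script can make.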
Q: What is configuration management?
A: Configuration management (CM) covers the tools and processes used to control, coordinate and track code, requirements, documentation, problems, change requests, designs, tools, compilers, libraries, patches, changes made to them and who makes the changes.
Rob Davis has had experience with a full range of CM tools and concepts. He can easily adapt to your software tool and process needs.
Q: What should be done after a bug is found?
A: When a bug is found, it needs to be communicated and assigned to developers that can fix it.
After the problem is resolved, fixes should be re-tested. Additionally, determinations should be made regarding requirements, software, hardware, safety impact, etc., for regression testing to check the fixes didn’t create other problems elsewhere.
If a problem-tracking system is in place, it should encapsulate these determinations. A variety of commercial, problem-tracking/ management software tools are available. These tools, with the detailed input of software test engineers, give the team complete information so developers can understand the bug, get an idea of its severity, and reproduce and fix it.
Q: What is a test plan?
A: A software project test plan is a document that describes the objectives, scope, approach and focus of a software testing effort. The process of preparing a test plan is a useful way to think through the efforts needed to validate the acceptability of a software product. The completed document helps people outside the test group to understand the why and how of product validation. It should be thorough enough to be useful, but not so thorough that no one outside the test group can understand it.
Q: What if there isn’t enough time for thorough testing?
A: Since it’s rarely possible to test every possible aspect of an application, every possible combination of events, every dependency, or everything that could go wrong, risk analysis is appropriate to most software development projects. Use risk analysis to determine where testing should be focused. This requires judgment skills, common sense and experience. The checklist should include answers to the following questions:
- Which functionality is most important to the project’s intended purpose?
- Which functionality is most visible to the user?
- Which functionality has the largest safety impact?
- Which functionality has the largest financial impact on users?
- Which aspects of the application are most important to the customer?
- Which aspects of the application can be tested early in the development cycle?
- Which parts of the code are most complex and thus most subject to errors?
- Which parts of the application were developed in rush or panic mode?
- Which aspects of similar/related previous projects caused problems?
- Which aspects of similar/related previous projects had large maintenance expenses?
- Which parts of the requirements and design are unclear or poorly thought out?
- What do the developers think are the highest-risk aspects of the application?
- What kinds of problems would cause the worst publicity?
- What kinds of problems would cause the most customer service complaints?
- What kinds of tests could easily cover multiple functionalities?
- Which tests will have the best high-risk-coverage to time-required ratio?
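The last question in the checklist can be approximated numerically; the areas, risk-coverage scores, and time estimates below are assumed examples:

```python
# Candidate test areas as (name, risk-coverage score 1-5, hours required).
areas = [
    ("payment flow",    5, 8),
    ("user profile UI", 2, 2),
    ("report export",   3, 6),
]

# Rank by risk coverage gained per hour of testing effort.
ranked = sorted(areas, key=lambda a: a[1] / a[2], reverse=True)
```

The scoring itself is subjective, but making it explicit forces the judgment calls out into the open where the team can debate them.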
Q: What if the project isn’t big enough to justify extensive testing?
A: If the project isn’t big enough to justify extensive testing, you need to consider the impact of project errors, not the size of the project.
However, if extensive testing is still not justified, risk analysis is again needed and the considerations listed under “What if there isn’t enough time for thorough testing?” do apply; and then the test engineer should do “ad hoc” testing, or write up a limited test plan based on the risk analysis.
Q: What can be done if the requirements are changing continuously?
A: If the requirements are changing continuously, you want to work with management early on to understand how requirements might change, so that alternate test plans and strategies can be worked out in advance.
It is helpful if the application’s initial design allows for some adaptability, so that any later changes do not require redoing the application from scratch.
Additionally, try to…
- Ensure the code is well commented and well documented; this makes changes easier for the developers;
- Use rapid prototyping whenever possible; this will help customers feel sure of their requirements and minimize changes;
- Allow for some extra time commensurate with probable changes in the project’s initial schedule;
- Move new requirements to the ‘Phase 2′ version of the application and use the original requirements for the ‘Phase 1′ version;
- Negotiate to allow only easily implemented new requirements into the project; move more difficult, new requirements into future versions of the application;
- Ensure customers and management understand scheduling impacts, inherent risks and costs of significant requirements changes; then let management or the customers decide if the changes are warranted; after all, that’s their job;
- Balance the effort put into setting up automated testing with the expected effort required to redo them to deal with changes;
- Design some flexibility into automated test scripts;
- Focus initial automated testing on application aspects that are most likely to remain unchanged;
- Devote appropriate effort to risk analysis of changes, in order to minimize regression-testing needs;
- Design some flexibility into test cases; this is not easily done; the best bet is to minimize the detail in the test cases, or set up only higher-level generic-type test plans;
- Focus less on detailed test plans and test cases and more on ad-hoc testing with an understanding of the added risk this entails.
Q: What if the application has functionality that wasn’t in the requirements?
A: It can take a serious effort to determine if an application has significant unexpected or hidden functionality, which can indicate deeper problems in the software development process.
If the functionality isn’t necessary to the purpose of the application, it should be removed, as it may have unknown impacts or dependencies that were not taken into account by the designer or the customer.
If not removed, design information will be needed to determine added testing needs or regression testing needs. Management should be made aware of any significant added risks as a result of the unexpected functionality. If the unexpected functionality only affects minor areas, e.g. small improvements in the user interface, then it may not be a significant risk.
Q: Why do you recommend that we test during the design phase?
A: I recommend that we test during the design phase because testing during the design phase can prevent defects later on. I recommend verifying three things…
- Verify the design is good, efficient, compact, testable and maintainable.
- Verify the design meets the requirements and is complete (i.e. specifies all relationships between modules, how to pass data, what happens in exceptional circumstances, starting state of each module, and how to guarantee the state of each module).
- Verify the design incorporates enough memory and I/O devices, and a fast enough runtime, for the final product.
Q: What is parallel/audit testing?
A: Parallel/audit testing is a type of testing where the tester reconciles the output of the new system to the output of the current system, in order to verify the new system operates correctly.
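A minimal sketch of that reconciliation; the record ids and values are hypothetical:

```python
# Hypothetical outputs of the current (legacy) system and the new
# system for the same batch of inputs, keyed by record id.
legacy_output = {"A1": 100.0, "A2": 250.5, "A3": 75.0}
new_output    = {"A1": 100.0, "A2": 250.5, "A3": 80.0}

# Every record where the new system disagrees with the current one,
# mapped to the (legacy, new) pair for investigation.
discrepancies = {
    key: (legacy_output[key], new_output.get(key))
    for key in legacy_output
    if legacy_output[key] != new_output.get(key)
}
```

Each discrepancy must then be explained: either the new system is wrong, or it is an intended behavior change that the legacy system lacked.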
Q: What is end-to-end testing?
A: End-to-end testing is similar to system testing. It is the ‘macro’ end of the test scale, i.e. the testing of the complete application in a situation that mimics real world use, such as using network communication, interacting with a database, other hardware, application, or system.
Q: What is regression testing?
A: Regression testing is the type of testing that ensures the software remains intact. A baseline set of data and scripts is maintained and executed to verify that changes introduced during the release have not “undone” any previous code. Expected results from the baseline are compared to results of the software under test. All discrepancies have to be highlighted and accounted for before the testing proceeds to the next level.
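A minimal sketch of comparing baseline expected results against the software under test; the test names and outcomes are hypothetical:

```python
# Baseline expected results recorded from the previous release, and
# results from the current software under test.
baseline = {"login": "pass", "checkout": "pass", "search": "pass"}
current  = {"login": "pass", "checkout": "fail", "search": "pass"}

# Any test that no longer matches its baseline result is a regression
# that must be accounted for before testing proceeds.
regressions = [test for test, expected in baseline.items()
               if current.get(test) != expected]
```

In practice the baseline comparison is usually automated inside the test harness, but the logic is exactly this membership-and-compare step.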
Q: What is sanity testing?
A: Sanity testing is cursory testing, performed whenever cursory testing is sufficient to prove the application is functioning according to specifications. Sanity testing is a subset of regression testing. It normally includes a set of core tests, such as basic GUI functionality, to demonstrate connectivity to the database, application servers, printers, etc.
Q: What is installation testing?
A: Installation testing is testing full, partial, upgrade, install, or uninstall processes. The installation test for production release is conducted with the objective of demonstrating production readiness. Installation testing includes the inventory of configuration items, performed by the application’s system administrator, the evaluation of data readiness, and dynamic tests focused on basic system functionality. Following installation testing, a sanity test is performed, if needed.
Q: What is security/penetration testing?
A: Security/penetration testing is testing how well the system is protected against unauthorized internal access, external access, or willful damage. Security/penetration testing usually requires sophisticated testing techniques.
Q: What is recovery/error testing?
A: Recovery/error testing is testing how well a system recovers from crashes, hardware failures, or other catastrophic problems.
Q: What are the parameters of performance testing?
A: Performance testing verifies loads, volumes, and response times, as defined by requirements. Performance testing is a part of system testing, but it is also a distinct level of testing.
The term ‘performance testing’ is often used synonymously with stress testing, load testing, reliability testing, and volume testing.
Q: What is the difference between volume testing and load testing?
A: Load testing is a blanket term that is used in many different ways across the professional software testing community. The term, load testing, is often used synonymously with stress testing, performance testing, reliability testing, and volume testing. Load testing generally stops short of stress testing. During stress testing, the load is so great that errors are the expected results, though there is gray area in between stress testing and load testing.
source:http://linuxpoison.blogspot.com/2007/10/135781758015340.html