Monday, October 30, 2006

Six Sigma Terms and Definitions

According to Jack Welch, former CEO of General Electric (GE) and one of the most successful corporate leaders ever, who used Six Sigma as a major quality-improvement method:
Six Sigma is one of the biggest management innovations of the past 25 years. Six Sigma improves development procedures, brings products to market faster with fewer defects, and reduces cost. The biggest, but least praised, advantage of Six Sigma is its potential to cultivate excellent leaders. Six Sigma is not about averages; it is about eliminating variance from your relationship with the customers.
Six Sigma definitions from the GE website:
Quality Approaches and Models
DFSS – (Design for Six Sigma) is a systematic methodology utilizing tools, training and measurements to enable us to design products and processes that meet customer expectations and can be produced at Six Sigma quality levels.
DMAIC – (Define, Measure, Analyze, Improve and Control) is a process for continued improvement. It is systematic, scientific and fact based. This closed-loop process eliminates unproductive steps, often focuses on new measurements, and applies technology for improvement.
Six Sigma – A vision of quality which equates with only 3.4 defects per million opportunities for each product or service transaction. Strives for perfection.
Quality Tools
Associates are exposed to various tools and terms related to quality. Below are just a few of them.
Control Chart – Monitors variance in a process over time and alerts the business to unexpected variance which may cause defects.
Defect Measurement – Accounting for the number or frequency of defects that cause lapses in product or service quality.
Pareto Diagram – Focuses efforts on the problems that have the greatest potential for improvement by showing relative frequency and/or size in a descending bar graph. Based on the proven Pareto principle: 20% of the sources cause 80% of the problems.
Process Mapping – Illustrated description of how things get done, which enables participants to visualize an entire process and identify areas of strength and weaknesses. It helps reduce cycle time and defects while recognizing the value of individual contributions.
Root Cause Analysis – Study of original reason for nonconformance with a process. When the root cause is removed or corrected, the nonconformance will be eliminated.
Statistical Process Control – The application of statistical methods to analyze data, study and monitor process capability and performance.
Tree Diagram – Graphically shows any broad goal broken into different levels of detailed actions. It encourages team members to expand their thinking when creating solutions.
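The control chart idea above can be sketched in a few lines: compute the mean and standard deviation of a process measurement and flag anything outside the mean ± 3σ control limits. The measurements below are invented for illustration.

```python
import statistics

# Illustrative daily defect counts for a process being monitored.
measurements = [7, 5, 6, 8, 5, 7, 6, 9, 5, 6]

mean = statistics.mean(measurements)
sigma = statistics.stdev(measurements)

# Classic individuals-chart control limits: mean plus/minus 3 sigma.
ucl = mean + 3 * sigma
lcl = max(mean - 3 * sigma, 0)  # a defect count cannot go below zero

# Any point outside the limits is unexpected variance worth investigating.
out_of_control = [x for x in measurements if not lcl <= x <= ucl]
print(f"mean={mean:.2f} UCL={ucl:.2f} LCL={lcl:.2f} alerts={out_of_control}")
```

With this sample data all points fall inside the limits, so the alert list is empty; a real chart would be recomputed as new measurements arrive.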
Quality Terms
Black Belt – Leaders of teams responsible for measuring, analyzing, improving and controlling key processes that influence customer satisfaction and/or productivity growth. Black Belts are full-time positions.
Control – The state of stability, normal variation and predictability. Process of regulating and guiding operations and processes using quantitative data.
CTQ: Critical to Quality (Critical "Y") – Element of a process or practice which has a direct impact on its perceived quality.
Customer Needs, Expectations – Needs, as defined by customers, which meet their basic requirements and standards.
Defects – Sources of customer irritation. Defects are costly to both customers and to manufacturers or service providers. Eliminating defects provides cost benefits.
Green Belt – Similar to Black Belt but not a full-time position.
Master Black Belt – First and foremost teachers, who also review and mentor Black Belts. Selection criteria for Master Black Belts are quantitative skills and the ability to teach and mentor. Master Black Belts are full-time positions.
Variance – A change in a process or business practice that may alter its expected outcome.
Six Sigma is best suited to repeatable internal processes and complex new product development. Forcing Six Sigma onto creative activities makes little sense and causes a lot of commotion.

Bug Priority Vs. Severity

Differentiate Priority and Severity. The effect of a bug on the software does not automatically correlate with the priority for fixing it. A severe bug that crashes the software only once in a blue moon for 1% of the users is lower priority than a mishandled error condition resulting in the need to re-enter a portion of the input for every user every time.
Therefore: Track priority and severity separately, then triage appropriately. It helps to have input from others on the team on priority. The importance of a bug is a project decision, different from the bug's perception by the Customer. In some cases it makes sense to track Urgency, the customer's point of view, separately.
Microsoft uses a four-point scale to describe the severity of bugs. Severity 1 is a crash or anything that loses persistent data, i.e., messing up your files on disk. Sev 2 is a feature that doesn't work. Sev 3 is an aspect of a feature that doesn't work. Sev 4 is for purely cosmetic problems: misspellings in dialogs, redraw issues, etc. This system works very well. (Interestingly, sev 4 bugs end up getting set to priority 1 fairly often, because they are frequently very annoying to users, and fixing them is generally easy and doesn't destabilize things.)
Keep clear the difference between severity and priority. The example given says: "a start-up splash screen with your company logo backwards and the name misspelled is purely a cosmetic problem. However, most companies would treat it as a high-priority bug."
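Tracking severity and priority as separate fields, as suggested above, can be sketched like this. The scale names and sample bugs are illustrative, not any real tracker's schema:

```python
from dataclasses import dataclass
from enum import IntEnum

class Severity(IntEnum):
    # Modeled loosely on the four-point scale described above.
    CRASH_OR_DATA_LOSS = 1
    FEATURE_BROKEN = 2
    FEATURE_ASPECT_BROKEN = 3
    COSMETIC = 4

class Priority(IntEnum):
    P1 = 1
    P2 = 2
    P3 = 3
    P4 = 4

@dataclass
class Bug:
    title: str
    severity: Severity
    priority: Priority  # set at triage, independently of severity

bugs = [
    Bug("Crashes once in a blue moon for 1% of users",
        Severity.CRASH_OR_DATA_LOSS, Priority.P3),
    Bug("Every user must re-enter input on every error",
        Severity.FEATURE_ASPECT_BROKEN, Priority.P1),
    Bug("Logo misspelled on splash screen", Severity.COSMETIC, Priority.P1),
]

# Triage order is driven by priority, not severity.
triage_order = sorted(bugs, key=lambda b: b.priority)
for bug in triage_order:
    print(bug.priority.name, bug.severity.name, bug.title)
```

Note how the severe-but-rare crash sorts below the cosmetic splash-screen bug, exactly the situation the text describes.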

ISO9126 Software Quality Attributes

ISO/IEC 9126 provides a framework for the evaluation of software quality.
It defines six software quality attributes, also called quality characteristics:

Functionality: are the required functions available, including interoperability and security?
Reliability: maturity, fault tolerance and recoverability
Usability: how easy it is to understand, learn and operate the software system
Efficiency: performance and resource behaviour
Maintainability: how easy is it to modify the software?
Portability: can the software easily be transferred to another environment, including installability?

Almost all of these attributes are non-functional. However, it is not uncommon for software requirements and testing activities to focus mostly on functionality. Problems with non-functional requirements detected late in the project can have a major impact on the schedule; often significant changes to the architecture are needed to resolve non-functional quality issues.
Use quality attributes to specify the required product quality, both for software development and for software evaluation. For example, select the top three attributes and define them so that they are Specific, Measurable, Acceptable, Realisable and Traceable (SMART).
Do you give enough attention to the non-functional requirements of your development project, and are you using the relevant review and testing techniques to verify and validate them?
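One way to make the top attributes SMART is to pair each with a measurable metric and a pass/fail rule. The attribute names come from ISO 9126; the metrics and thresholds below are invented examples, not ISO-mandated values:

```python
# Hypothetical SMART targets for three ISO 9126 attributes.
targets = {
    "Reliability": ("mean time between failures (hours)", lambda v: v >= 500),
    "Efficiency": ("95th-percentile response time (ms)", lambda v: v <= 200),
    "Usability": ("tasks completed without help (%)", lambda v: v >= 90),
}

# Measurements from a hypothetical evaluation run.
measured = {"Reliability": 620, "Efficiency": 240, "Usability": 93}

# Each attribute either meets its target or it does not; no room for debate.
results = {attr: check(measured[attr]) for attr, (_, check) in targets.items()}

for attr, (metric, _) in targets.items():
    status = "PASS" if results[attr] else "FAIL"
    print(f"{attr}: {metric} = {measured[attr]} -> {status}")
```

The point of the exercise is that a failed non-functional target (here, response time) surfaces as early and as unambiguously as a failed functional test.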

10 Critical Traps and 10 Success Criteria For Software Development

It seems obvious, but even great companies with great products fail due to these traps.

Critical Traps
1. Unclear ownership of product quality.
2. No overall test program design or goals.
3. Non-existent or ill-defined test plans and cases.
4. Testing that focuses narrowly on functional cases.
5. No ad hoc, stress or boundary testing.
6. Use of inconsistent or incorrect testing methodology.
7. Relying on inexperienced testers.
8. Improper use of tools and automation, resulting in lost time and reduced ROI.
9. No meaningful metrics for tracking bugs and driving quality back into development.
10. Incomplete regression cycle before software release.
To avoid these traps, incorporate best practices into your quality assurance process: evaluate where you are with quality assurance today, define your QA goals, identify the gaps in the process, and finally build a roadmap to reach those goals. Only after these steps have been taken can you avoid these quality assurance traps.

Success Criteria
1. User Involvement
2. Executive Management Support
3. Clear Statement of Requirements
4. Proper Planning
5. Realistic Expectations
6. Smaller Project Milestones
7. Competent Staff
8. Ownership
9. Clear Vision & Objectives
10. Hard-Working, Focused Staff

In order to make order out of the chaos, we need to examine why projects fail. Each major software failure must be investigated, studied, reported and shared. Failure begets knowledge. Out of knowledge you gain wisdom, and it is with wisdom that you can become truly successful.

What is Software Quality?

Definitions of Quality:
(IEEE 610.12-1990) Standard Glossary of Software Engineering Terminology:

"the degree to which a system. component, or process meets (1) specified requirements, and (2) customer or user needs or expectations".

(ISO 9003-3-1991) Guidelines for the application of ISO 9001 to the Development, Supply and Maintenance of Software:

"the totality of features and characteristics of a product or service that bear on its ability to satisfy specified or implied needs".

Meeting customer needs is key in these definitions. But only with adequate quality assurance techniques can quality give you a competitive advantage.
One could argue that "the quality of your software makes your customers happy", or that "the customers define the quality of your software". For the latter, throwing a lot of testing at the product can improve its external quality to an extent that satisfies your customers, but at a significant cost.
Testing shows you the (lack of) external quality: correctness, efficiency, reliability. External quality displays the visible symptoms when there are issues, but the roots are invisible internal quality attributes: program structure, complexity, coupling, testability, reusability, readability, maintainability, ...
A nice metaphor is the Software Quality Iceberg (Code Complete, Steve McConnell).
Software Quality Assurance (SQA) is the set of methods used to improve internal and external qualities. SQA aims at preventing, identifying and removing defects throughout the development cycle as early as possible, thereby reducing test and maintenance costs.

Thursday, October 26, 2006

What is ACID Testing?

One of the primary tests for reliability of a database management system (DBMS) is the ACID test. ACID-compliant systems possess certain properties that offer greater protection to stored data in the event of an unexpected hardware or software failure, even if the database is being read from or written to at the time the failure occurs. In general, an ACID-compliant DBMS is greatly preferred to a non-compliant DBMS. In applications where the availability and integrity of the stored data are critical, an ACID-compliant database is required, and non-compliant systems should be automatically rejected. The ACID test alone does not guarantee reliability: other factors, such as the reliability of the host environment (both hardware and software components) and a strictly observed backup policy, are also crucial in maintaining any DBMS.
The four ACID properties are Atomicity, Consistency, Isolation and Durability.
Atomicity: When a transaction that updates the database occurs, either all of the update occurs or none of it does, even if a hardware or software failure occurs during the transaction.

For example, suppose that a particular transaction is supposed to update a record consisting of ten fields of data (name, gender, age, etc.) in a customer database. Further suppose that an unexpected software failure occurs halfway through the transaction. If the DBMS is not atomic, when the database comes back online the record will be in an unknown state: all, some or none of the fields may have been updated. Therefore, a future transaction that depends on the record may be relying upon incorrect information. In contrast, an atomic DBMS in the same situation would void the already completed parts of the transaction and return to the state before the transaction was attempted.

The most widely used mechanism for providing atomicity is the transactional commit/rollback mechanism. A group of write operations in a transaction is attempted. If all of the writes succeed, they are committed to the database; that is, the writes are made permanent. If any of the writes fails, the database is rolled back to the point before the transaction was started.
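The commit/rollback mechanism can be demonstrated with SQLite from Python: if any write in a transaction fails, the whole group of writes is rolled back and the already-completed writes leave no trace. The account table and amounts are invented for illustration.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (name TEXT PRIMARY KEY, balance INTEGER)")
conn.execute("INSERT INTO accounts VALUES ('alice', 100), ('bob', 0)")
conn.commit()

try:
    # "with conn" opens a transaction: commit on success, rollback on exception.
    with conn:
        conn.execute(
            "UPDATE accounts SET balance = balance - 150 WHERE name = 'alice'")
        # Simulate a failure halfway through the transaction.
        (bal,) = conn.execute(
            "SELECT balance FROM accounts WHERE name = 'alice'").fetchone()
        if bal < 0:
            raise ValueError("insufficient funds")
        conn.execute(
            "UPDATE accounts SET balance = balance + 150 WHERE name = 'bob'")
except ValueError:
    pass  # transaction rolled back; the partial debit is undone

# Alice still has 100: the already-executed UPDATE left no trace.
print(conn.execute(
    "SELECT balance FROM accounts WHERE name = 'alice'").fetchone()[0])
```

Without the transaction, the first UPDATE would have persisted on its own and left the database in exactly the inconsistent half-updated state described above.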
Consistency: Any change to the value of an instance is consistent with all other changes to other values of that instance.

For example, suppose a student checks out a 2N2222 transistor. The 2N2222 must be charged against the student's account, the number of 2N2222 transistors available must be decreased by one and the number of 2N2222 transistors in use must be increased by one. If all of these changes do not occur, the database is in an inconsistent state.

An ACID-compliant DBMS provides the tools to enforce consistency, usually in the form of rules checking. However, it is up to the designer to implement consistency enforcement.
Isolation: Isolation prevents changes in concurrent transactions from conflicting with each other. It also allows multiple users to each use the database as if they were the only user.

Isolation is primarily accomplished through locking. Locking a table or record prevents other transactions from reading or writing the data in that table or record until the current transaction is finished. This process ensures that no transaction reads data that is no longer valid.

Locking a record is preferable to locking an entire table. Pending transactions on a locked table must wait for the entire table to be unlocked, even if only one record in the table is being updated. In contrast, with record-level locking, only transactions that depend upon the locked record must wait; other transactions can proceed without waiting.
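The difference matters in practice: with one lock per record, writers to different records proceed in parallel, while a single table lock would serialize them all. A minimal sketch with Python threads, using the transistor example (the parts catalogue is invented):

```python
import threading

# An in-memory "table" of parts, and one lock per record rather than
# one lock for the whole table.
table = {
    "2N2222": {"available": 10, "in_use": 0},
    "2N3904": {"available": 5, "in_use": 0},
}
record_locks = {part: threading.Lock() for part in table}

def check_out(part: str) -> None:
    # Lock only the record being updated; transactions on other
    # records do not have to wait.
    with record_locks[part]:
        rec = table[part]
        rec["available"] -= 1
        rec["in_use"] += 1

# Ten concurrent check-outs of the same part.
threads = [threading.Thread(target=check_out, args=("2N2222",))
           for _ in range(10)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(table["2N2222"])
```

Because each read-modify-write runs under the record's lock, all ten check-outs are applied consistently; without the lock, two threads could read the same "available" value and lose an update.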
Durability: When a hardware or software failure occurs, the information in the database must be accurate up to the last committed transaction before the failure.

This durability is required even if the failure causes the operating system to crash or the server to shut down. The only exception is a hard disk failure, at which point the database is valid up to the last successful backup made before the failure.

All durable database management systems are atomic, but not all atomic database management systems are durable.

Friday, October 20, 2006

Test Results - Do's and Don'ts

Any testing activity should, at the end, always be accompanied by the test results. Test results cover both the defects and the outcomes of the test cases that were executed during testing.
The Do’s
1. Ensure that a defect summary report is sent to the Project Lead after each release is tested. At a high level this can cover the number of open/reopened/closed/fixed defects; to drill down, the report can also contain the priority of the open and reopened defects.
2. Ensure that a test summary report is sent to the Project Lead after each release is tested. This can contain the total number of test cases, how many were executed, how many passed, how many failed, how many were not run and how many were not applicable. "Not run" here means the test cases could not be run, for example due to non-availability of the production environment, non-availability of real-time data or some other dependency; looking at the non-run test cases should therefore give a clear picture of which areas were not tested. It should not include test cases that were not run merely due to lack of time.
3. If the above details are tracked at a high level for all releases, they should give a clear picture of the growing stability of the application.
4. Track the metrics identified during the planning stage.
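A test summary of the kind described in point 2 is easy to derive from raw test-case records. The record format, IDs and statuses below are illustrative:

```python
from collections import Counter

# Hypothetical test-case records: (id, status), where status is one of
# "passed", "failed", "not_run", "not_applicable".
results = [
    ("TC-01", "passed"), ("TC-02", "passed"), ("TC-03", "failed"),
    ("TC-04", "not_run"), ("TC-05", "passed"), ("TC-06", "not_applicable"),
]

counts = Counter(status for _, status in results)
total = len(results)
executed = counts["passed"] + counts["failed"]

# One line the Project Lead can read at a glance.
print(f"total={total} executed={executed} "
      f"passed={counts['passed']} failed={counts['failed']} "
      f"not run={counts['not_run']} n/a={counts['not_applicable']}")
```

Run per release and kept side by side, these one-line summaries are exactly the stability trend point 3 asks for.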
The Don’ts
1. Do not attempt to overwhelm anyone with a huge amount of test-result information; it has to be precise. You need not list the execution steps of every failed test case, since it would be tedious for anyone to sit and go through them.
2. Finally, what matters is how easily the test-result information can be interpreted. Do not leave room for assumptions while interpreting the test metrics. Make it simple!

Software Testing - Do's and Don'ts

A good test engineer should always work towards breaking the product, right from the first release till the final release of the application (the killer attitude). This section focuses not just on testing but on all the activities related to it, be it defect tracking, configuration management or testing itself.
The Do’s
1. Ensure that the testing activities are in sync with the test plan.
2. Identify areas where you are not technically strong and might need assistance or training during testing. Plan and arrange for these technical trainings to solve the issue.
3. Strictly follow the test strategies identified in the test plan.
4. Try to get release notes from the development team containing the details of the release made to QA for testing. These should normally contain the following details:
a) The version label of code under configuration management
b) Features part of this release
c) Features not part of this release
d) New functionalities added/Changes in existing functionalities
e) Known Problems
f) Fixed defects etc.
5. Stick to the entry and exit criteria for all testing activities. For example, if the entry criterion for a QA release is sanity-tested code from the development team, ask for the sanity test results.
6. Update the test results for the test cases as and when you run them
7. Report the defects found during testing in the tool identified for defect tracking
8. Take the code from the configuration management (as identified in plan) for build and installation.
9. Ensure if code is version controlled for each release.
10. Classify defects (P1, P2, P3, P4, or Critical/High/Medium/Low, or any other scheme) in mutual agreement with the development team, so as to help developers prioritize which defects to fix.
11. Do a sanity test as and when a release is made by the development team.
The Don’ts
1. Do not update the test cases while executing them. Track the changes and update the cases based on a written reference (SRS, functional specification, etc.). People normally tend to update test cases based on the look and feel of the application.
2. Do not track defects in many places, i.e. in spreadsheets as well as in a defect tracking tool; this increases the time needed to track all the defects. Use one centralized repository for defect tracking.
3. Do not take code from a developer's sandbox for testing if there is an official release from the development team.
4. Do not spend time testing features that are not part of this release.
5. Do not focus your testing on non-critical areas (from the customer's perspective).
6. Even if a defect identified is of low priority, do not fail to document it.
7. Do not leave room for assumptions while verifying fixed defects. Clarify and then close!
8. Do not hastily mark test cases as passed without actually running them, assuming that they worked in earlier releases. Such preconceived notions become big trouble if that functionality suddenly stops working and is later found by the customer.
9. Do not focus on negative paths that consume a lot of time but will be least used by the customer. Though these need to be tested at some point, the idea is to prioritize tests.

Test Case Design - Do's and Don'ts

Any testing effort is only as good as its test cases, since test cases reflect the test engineer's understanding of the application requirements. A good test case is one that identifies errors not yet discovered.

The Do’s:
1. Identify test cases for each module
2. Write test cases so that each step is executable.
3. Design more functional test cases.
4. Clearly identify the expected results for each test case
5. Design the test cases for the workflow so that they follow a sequence in the web application during testing. For example, a mail application such as Yahoo has to start with the registration process for new users, then signing in, composing mail, sending mail and so on.
6. Security is a high priority in web testing, so document enough test cases related to application security.
7. Develop a traceability matrix to understand the test-case coverage of the requirements.
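A traceability matrix need not be elaborate; even a mapping from each requirement to its covering test cases, plus a check for uncovered requirements, does the job. The requirement and test-case IDs below are invented:

```python
# Map each requirement to the test cases that cover it. An empty list
# flags a requirement with no coverage. All IDs are hypothetical.
coverage = {
    "REQ-001 user registration": ["TC-01", "TC-02"],
    "REQ-002 compose mail": ["TC-03"],
    "REQ-003 send mail": [],
}

uncovered = [req for req, cases in coverage.items() if not cases]

for req, cases in coverage.items():
    print(f"{req}: {', '.join(cases) if cases else 'NOT COVERED'}")
print("uncovered requirements:", uncovered)
```

In a real project the mapping would be generated from requirement tags on the test cases, but the review question is the same: is the uncovered list empty, and if not, is each gap deliberate and recorded in the test plan?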

The Don’ts
1. Do not write repetitive UI test cases. These lead to high maintenance, since the UI will evolve in due course.
2. Do not write more than one execution step in each test case.
3. Do not concentrate on negative paths for user acceptance test cases if the business requirements clearly indicate the application's behavior and usage by the business users.
4. Do not fail to get the test cases reviewed by the individual module owners of the development team. This keeps the entire team on the same page.
5. Do not leave any functionality uncovered by the test cases unless it is specified in the test plan under features not tested.
6. Do not write test cases for error messages based on assumptions. Document error-message validation test cases only if the exact error message to be displayed is given in the requirements.

How To Prepare A Killer Bug Report ?

After a defect has been found, it must be reported to development so that it can be fixed. Much has been written about identifying defects and reproducing them, but very little about the reporting process and what developers really need.

Overview of Bugs
No matter what a system does, what language it is written in, what platform it runs on, or whether it is client/server based, its basic functions are the same. They break down into the following categories:

1. Entry
2. Storage
3. Output
4. Process

As the interaction between data and the system increases, so usually does the severity of the bug, and the detail needed in a report.

Bug severity can be categorized as follows:

1. Cosmetic
2. Inconvenience
3. Loss of Function
4. System Crash or Hang
5. Loss of Data

Cosmetic bugs are the simplest to report, and affect the system the least. They are simply instances where things look wrong: spelling errors, screen anomalies and the like.

Bugs classified as an inconvenience are just that: something that makes the system harder to use. These are slightly more nebulous, since part of their effect is subjective, which also makes it harder to describe what the actual problem is.

When a bug results in a loss of function, reporting becomes a bit more complicated and the urgency to fix the bug is greater. These bugs do not affect the data, but they render a process useless until it is fixed. Because of this, the report again becomes more complicated.

Bugs that cause the system to crash or hang can be the hardest to reproduce, and therefore the hardest to adequately describe. If you experience a crash or hang in testing, it is imperative to see if you can reproduce the problem, documenting all the steps taken along the way. On these occasions, it is also important to include the data used in causing the system to crash/hang.

The final classification is the worst: bugs that result in the loss of data. Data is the heart of almost every system, and anything that threatens its integrity must be fixed as quickly as possible. Therefore, more than any other bug type, it must be documented as thoroughly as possible.

Reporting Guidelines
The key to making a good report is providing the development staff with as much information as necessary to reproduce the bug. This breaks down into five points:
1) Give a brief description of the problem
2) List the steps that are needed to reproduce the bug or problem
3) Supply all relevant information such as version, project and data used.
4) Supply a copy of all relevant reports and data including copies of the expected results.
5) Summarize what you think the problem is.
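The five points map naturally onto a fixed report template, which also ensures that no field is forgotten. The field names and the sample report below are invented, not any particular tracker's schema:

```python
from dataclasses import dataclass

# A minimal bug-report template covering the five points above.
@dataclass
class BugReport:
    summary: str              # 1) brief description of the problem
    steps_to_reproduce: list  # 2) steps needed to reproduce it
    version: str              # 3) version / project information
    dataset: str              # 3-4) data and reports used
    expected: str             # 4) expected results
    actual: str               # 5) what actually happened

    def render(self) -> str:
        steps = "\n".join(f"  {i}. {s}"
                          for i, s in enumerate(self.steps_to_reproduce, 1))
        return (f"Summary: {self.summary}\nVersion: {self.version}\n"
                f"Data: {self.dataset}\nSteps:\n{steps}\n"
                f"Expected: {self.expected}\nActual: {self.actual}")

report = BugReport(
    summary="Monthly totals off by one day",
    steps_to_reproduce=["Open Reports > Monthly", "Select March", "Run report"],
    version="2.4.1",
    dataset="march_sales_before.csv",
    expected="Totals for March 1-31",
    actual="Totals for March 2-31",
)
print(report.render())
```

A template like this keeps simple reports simple while still prompting for the extra detail that processing errors need.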

When you are reporting a defect the more information you supply, the easier it will be for the developers to determine the problem and fix it.

Simple problems can have a simple report, but the more complex the problem, the more information the developer is going to need.

For example: cosmetic errors may only require a brief description of the screen, how to get it and what needs to be changed.

However, an error in processing will require a more detailed description, such as:

1) The name of the process and how to get to it.
2) Documentation on what was expected. (Expected results).
3) The source of the expected results, if available. This includes spreadsheets, an earlier version of the software and any formulas used.
4) Documentation on what actually happened. (Perceived results).
5) An explanation of how the results differed.
6) Identify the individual items that are wrong.
7) If specific data is involved, a copy of the data both before and after the process should be included.
8) Copies of any output should be included.

As a rule, the detail of your report will increase based on a) the severity of the bug, b) the level of the processing, and c) the complexity of reproducing the bug.

Postmortem of a Bug Report
Bug reports need to do more than just describe the bug. They have to give developers something to work with so that they can successfully reproduce the problem.

In most cases, the more correct information given, the better. The report should explain exactly how to reproduce the problem and exactly what the problem is.

The basic items in a report are as follows:

Version: This is very important. In most cases the product is not static; developers will have been working on it, and if they've found a bug it may already have been reported or even fixed. In either case, they need to know which version to use when testing out the bug.

Product: If you are developing more than one product, identify the product in question.

Data: Unless you are reporting something very simple, such as a cosmetic error on a screen, you should include a dataset that exhibits the error.

If you’re reporting a processing error, you should include two versions of the dataset, one before the process and one after. If the dataset from before the process is not included, developers will be forced to try and find the bug based on forensic evidence. With the data, developers can trace what is happening.

Steps: List the steps taken to recreate the bug. Include all proper menu names; don't abbreviate and don't assume anything.

After you've finished writing down the steps, follow them: make sure you've included everything you type and do to get to the problem. If there are parameters, list them. If you have to enter any data, supply the exact data entered. Go through the process again and see if there are any steps that can be removed.

When you report the steps they should be the clearest steps to recreating the bug.

Description: Explain what is wrong. Try to weed out any extraneous information, but detail the problem, and include a list of what was expected. Remember to report one problem at a time; don't combine bugs in one report.

Supporting Documentation:
If available, supply documentation. If the process is a report, include a copy of the report with the problem areas highlighted, along with what you expected. If you have a report to compare against, include it and its source information (if it's a printout from a previous version, include the version number and the dataset used).

This information should be stored in a centralized location so that Developers and Testers have access to the information. The developers need it to reproduce the bug, identify it and fix it. Testers will need this information for later regression testing and verification.

Organization is one of the most important tools available. If your reporting process is organized and standardized it will serve you well. Take the time to develop a standardized method of reporting and train Testers, QA and Beta-testers in its use.

If at all possible, use a tracking system for your defect/development tracking and make sure that everyone using it understands the fields and their importance.

Document your data samples to match up with the bugs/defects reported. These will be useful both to development when fixing the bug and to Testing/QA when it comes time for regression testing.

A bug report is a case against a product. To work, it must supply all the information necessary not only to identify the problem but also to fix it.

It is not enough to say that something is wrong. The report must also say what the system should be doing.

The report should be written in clear, concise steps, so that someone who has never seen the system can follow them and reproduce the problem. It should include information about the product, including the version number and what data was used.

The more organized information provided, the better the report will be.

Tuesday, October 17, 2006

Different Types of Software Testing

I think I am the trillionth person on this planet to post a blog on this topic! But still I am writing, because this is the most fundamental thing a would-be 'Software Tester' must know. There can be many more types of software testing that could be appended to this list, but these are the most widely used and most widely accepted types.

Unit testing: This type of testing tests individual application objects or methods in an isolated environment before being integrated into the system. It verifies the smallest unit of the application to ensure the correct structure and the defined operations. Unit testing is the most efficient and effective means to detect defects or bugs at the most basic level. The testing tools available in the market today are capable of creating unit test scripts. This type of testing is mostly done by the developers.

Integration testing: This testing evaluates the proper functioning of the integrated modules (objects, methods) that make up a subsystem. The focus of integration testing is on cross-functional tests rather than on unit tests within one module. Available testing tools usually provide support for creating stubs and mock objects for this test.

System testing: System testing should be executed as soon as an integrated set of modules has been assembled to form the application. System testing verifies the product by testing the application in the integrated system environment.
Regression testing: Regression testing ensures that code modification, bug correction, and any postproduction activities have not introduced any additional bugs into the previously tested code. This test often reuses the test scripts created for unit and integration testing. Software testing tools offer harnesses to manage these test scripts and schedule the regression testing.

Usability testing: Usability testing ensures that the presentation, data flow, and general ergonomics of the application meet the requirements of the intended users. This testing phase is critical to attract and keep customers. Usually, manual testing methods are inevitable for this purpose.

Stress testing: Stress testing makes sure that the features of the software and hardware continue to function correctly under a pre-designed set and volume of test scenarios. The purpose of stress testing is to ensure that the system can hold up and operate efficiently under different load conditions. Thus, the possible hardware platforms, operating systems and other applications used by the customers should be considered for this testing phase. Stress tests are typically run with bare-minimum resources.

Load testing: This test involves feeding the system more than it can handle. In this sense, this test is the opposite of stress testing.

Performance testing: Performance testing measures the response times of the systems to complete a task and the efficiency of the algorithms under varied conditions. Therefore, performance testing also takes into consideration the possible hardware platforms, operating systems, and other applications used by the customers.
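A minimal sketch of response-time measurement using Python's `timeit` module; the sorting task is a hypothetical stand-in for a real transaction:

```python
import timeit

# Hypothetical task whose response time we want to measure:
# sorting a reversed list of 1000 integers.
def task():
    return sorted(range(1000, 0, -1))

# Run the task repeatedly and keep the best of five timings,
# which reduces noise from other processes on the machine.
timings = timeit.repeat(task, number=100, repeat=5)
best = min(timings)
```

On different hardware platforms and operating systems the same measurement can be repeated to compare the efficiency of the algorithm under varied conditions.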

Repetitive testing: This test involves repeating the same test over and over until it eventually exposes a bug. This testing can help discover memory leaks in the software.

The following tests are done to determine if an application is world-ready.

Globalization Testing: The goal of globalization testing is to detect potential problems in application design that could inhibit globalization. It makes sure that the code can handle all international support without breaking functionality that would cause either data loss or display problems. Globalization testing checks proper functionality of the product with any of the culture/locale settings using every type of international input possible.
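A tiny globalization check might be sketched as follows; `greet` is a hypothetical function under test, and the point is simply that international input must survive intact, with no data loss or corruption:

```python
# Hypothetical function under test: a formatter that must not
# corrupt non-ASCII input.
def greet(name):
    return "Hello, {}!".format(name)

# International inputs covering several scripts.
samples = ["José", "Müller", "山田", "Ελένη", "Андрей"]

results = [greet(name) for name in samples]

# Every input must round-trip intact (no data loss, no mojibake).
ok = all(name in out for name, out in zip(samples, results))
```

A real globalization pass would extend this idea across every input path of the product and every culture/locale setting.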

Localization Testing: Localization translates the product UI and occasionally changes some initial settings to make it suitable for another region. Localization testing checks the quality of a product's localization for a particular target culture/locale. This test is based on the results of globalization testing, which verifies the functional support for that particular culture/locale. Localization testing can be executed only on the localized version of a product. Localizability testing does not test for localization quality.

Localizability Testing: Localizability testing verifies that you can easily translate the user interface of the program to any target language without re-engineering or modifying code. It catches, ahead of time, the bugs that would normally be found during product localization, so actual localization of the program is not required to complete this test. As such, localizability testing is essentially a hybrid of globalization testing and localization testing.

Compatibility Testing: Compatibility testing verifies that the application or software is compatible with all the platforms and other end-user settings.

Accessibility Testing: Accessibility testing verifies that the application is well designed for all kinds of users and remains usable even when audio-visual capabilities are unavailable.

Intel to Launch Quad-Core Chips on November 13

In a race with rival Advanced Micro Devices, Intel will bring its quad-core chips to market in a new line of Hewlett-Packard workstations due to be introduced on November 13.

HP sent out invitations to the event but did not specify exact models and prices. The computers will probably use Intel’s planned Xeon 5300 chip, and will be designed to run high-end applications like seismic analysis and visualization technologies from Ansys, Autodesk, Landmark Graphics, and Parametric Technology.

The launch would mean that Intel brings quad-core processors to market before AMD, a crucial win in a year when Intel has made as many headlines for its layoffs and missed earnings targets as for its technology.

Microsoft Agrees to Changes in Vista Security

From Vista Live:

“Bowing to pressure from European antitrust regulators and rival security vendors, today, Microsoft has agreed to modify Windows Vista to better accommodate third-party security software makers. In a press conference Friday, Microsoft said it would configure Vista to let third-party anti-virus and other security software makers bypass ‘PatchGuard,’ a feature in 64-bit versions of Windows Vista designed to bar access to the Windows kernel.

Microsoft said it would create an API to let third-party vendors access the kernel and to disable the Windows Security Center so that users would not be prompted by multiple alerts about operating system security. In addition, Redmond said it would modify the welcome screen presented to Vista users to include links to security software other than Microsoft’s own OneCare suite. From the article: ‘It looks like Microsoft was really testing the waters here, sort of pushing the limits of antitrust and decided they probably couldn’t cross that line just yet.’”

Monday, October 16, 2006

Can Software be defect free?

This topic has been discussed any number of times. But whenever software comes up, the discussion automatically turns to defects and the damage they cause. Businesses, people, and organizations are all affected by buggy software.
Every day we read about new methodologies, new tools, and new approaches for making software defect free. Still, it's the same story everywhere. This January, Microsoft released a security update for the WMF vulnerability. But this is not a problem with any one particular software application. Defects are everywhere, and 10-20 years down the line the usage of computers is going to increase tremendously; software will be used in almost every part of our daily life. So if defects keep occurring at such a frequency, it really will be a chaotic situation.
So, what do you think: how can software be made defect free?

Why doesn't software testing become a success story?


Software testing essentially reveals the mistakes made by a human mind when building up a piece of code. However, in some cases software testing can become a never-ending story. The testing team completes the first testing cycle, a number of defects are found, the development team fixes the defects, testing is carried out again, some more defects crop up, and so on.

If project development goes this way, then the project manager's tension builds up and the estimates go haywire. The release date gets extended by days, weeks, and sometimes by a month or two. However, this kind of situation can be avoided if a few things are taken into consideration:

1. The functionality of the application to be developed should be clear and well documented, with the support of a good change management process
2. The development phase of the project should be complete
3. Test cases must cover the entire functionality of the application and must be executed in a controlled environment
4. A robust process for determining the severity and priority of defects
5. Analysis of the testing phase: the number of defects found against the number of test cases executed; and, if the application is in its second stage of testing, the number of defects that reoccurred and the time taken to fix defects according to their severity
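The analysis in item 5 can be sketched as a simple defect-per-test-case ratio; the figures below are hypothetical:

```python
# Hypothetical figures from two testing cycles of the same application.
cycles = [
    {"cycle": 1, "tests_executed": 200, "defects_found": 40},
    {"cycle": 2, "tests_executed": 200, "defects_found": 12},
]

# Defects found per test case executed, per cycle.
for c in cycles:
    c["defect_rate"] = c["defects_found"] / c["tests_executed"]

# A falling defect rate between cycles suggests the fixes are
# holding and the application is stabilizing.
improving = cycles[1]["defect_rate"] < cycles[0]["defect_rate"]
```

Tracking this ratio cycle over cycle gives the project manager an early signal of whether the release date is realistic.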

Saturday, October 14, 2006

Myths and Realities of Software Testing

Myths arise from a lack of direct experience. In the absence of information, we form beliefs based on what we think we know, often with a skeptical feeling towards what we don't know. In the realm of software development, myths can make it difficult to approach real-world problems objectively, thus putting budgets and schedules at increased risk. Here are some myths that are prevalent in the software testing industry.
Myth: In a majority of software development projects, we quickly code a prototype application to reduce risk and prove a concept, with the intention of throwing this code away.
Reality: There is nothing wrong with this approach. But, because of schedule pressure or because the results are encouraging, we do not discard that code. The reality is that our prototyping is actually early coding. That code becomes the foundation and framework for our new application. However, because it was created under the presumption that it would be thrown away, it bypassed requirements reviews, design reviews, code reviews, and unit testing. Our next-generation application is now built on an indeterminate foundation.
In an iterative development lifecycle, continuous experimentation is viewed as a good thing, and early prototyping is encouraged in each development iteration. But any code that is submitted into the product needs to follow best practices to assure its stability, reliability, and maintainability. One way to encourage proper software development practices from the very start of a project is to use the initial coding effort to plan, prove, and present the processes you intend to use throughout the various iterative development phases. As part of the iterative testing method, you can use your early coding cycles to test your product's concept and flush out glitches in your development processes at the same time.
Myth: Initiating testing activities earlier in the development cycle increases delivery time while reducing the number of features in the product.
Reality: Testing is not the time-consuming activity in a development lifecycle. Diagnosing and fixing defects are the time-consuming, bottleneck activities.
Testing is not the obstacle that shipwrecks our projects -- testing is the lighthouse that keeps our projects off the rocks. Defects are in the product whether we look for them or not. Iterative testing moves the detection of problems closer to where and when they were created. This minimizes the cost of correcting the bug as well as its impact on schedules.
Myth: You can't test if you don't have a product to test.
Reality: Iterative testing isn't limited to testing code.
Every artifact your team produces can be verified against your success criteria for acceptance. Likewise, each process or procedure you use to produce your deliverables can be validated against your success criteria for quality. This includes the product concepts, the architecture, the development framework, the design, the customer usage flows, the requirements, the test plans, the deployment structure, the support and service techniques, the diagnostic and troubleshooting methods, and even the procedures you follow to produce the product.
As you can see from this partial list, the majority of these items don't involve code. Therefore, you miss opportunities to support quality and reduce risk by waiting until you actually have deliverable code. I disagree with the widely accepted notion that "you can't test quality into a product." You can test quality into a product. You just need to start early enough.
Myth: You are more efficient if you have one developer (or a single source) specialized in each development area. In the simplest argument, if you have thirty developers and employ pair programming, you can code fifteen features. If you assign one developer per feature, you can code thirty features in the same amount of time. With thirty features, you have a fuller and more complete product.
Reality: The risk associated with having one developer per feature or component is that no one else can maintain and understand that feature. This strategy creates bottlenecks and delays. Defects, enhancement requests, or modifications are now queued to that single resource. To stay on schedule, your developers must work weekends and extra hours for extended periods, because they are the only ones who can continue new feature development and fix defects in this area. Your entire project schedule is now dependent on heroic efforts by a number of "single resources."
When that resource leaves the team, goes on vacation, or becomes overwhelmed, it causes delays in your schedule. Because of the way you've chosen to implement and manage your development strategy, you are now unable to provide your team with relief.
Pair programming, pair testing, code reviews, and design reviews are sound practices that not only increase the quality of the product, but also educate others on each feature or component, such that they increase your pool of resources to fix and maintain project code. The two members of a pair don't have to be equally sophisticated or knowledgeable in their area. They can be just knowledgeable enough to eliminate the inevitable bottlenecks created by specialization as discussed above.
Moreover, dividing development activities into logical, smaller, independent chunks (a.k.a. sprints) allows developers of different skill levels to work efficiently on the various pieces. With multiple capable resources, you can avoid actually assigning the different tasks to a specific developer. When we have more than one resource capable of accomplishing the task, assigning specific tasks to specific individuals creates a false dependency. Similar to multiple bank tellers servicing a single line of waiting customers, efficiency improves when developers instead sign up for the next task when they've completed the last task.
Myth: Producing code is the developer's primary task.
Reality: The primary task of everyone on the development team is to produce a product that customers will value. This means that during the requirements review activities, for example, the developer's primary task is "requirement review." During the design activities, the developer's primary task is creating and reviewing the design documents. During the coding activities, the developer's primary task is generating bug-free and customer-relevant code. During the documentation review activities, the developer's primary task is making sure the user assistance materials and error messages serve to flatten the customer's learning curve. During installation and setup, the developer's primary task is to make sure customers can set up and configure your product easily, so that they can get their "real" job done as efficiently as possible. The greater the effort required to use software to accomplish a task, the lower the customer's return on investment and the greater the abandonment rate for the application.
Myth: Requirements need to be stable and well defined early for software development and testing to take place efficiently.
Reality: If our ultimate goal was "efficient software development and testing," then having stable and well-defined requirements up-front might be a must. But our actual goal is to produce a product that customers will value. Most of us can readily admit that we don't always know what we want. Since there are constant changes in the marketplace, such as new products and options, we often change our minds after being introduced to new concepts and information. Acknowledging the above illustrates one key reason why it is rarely effective to keep requirements static. Frequent interaction with the product during development continually exposes the customer to our efforts and allows us to modify our course to better meet customers' changing needs and values.
Myth: When the coding takes longer than originally scheduled, reducing test time can help get the project back on schedule.
Reality: Typically, coding delays occur because of unexpected difficulties or because the original design just didn't work. When it's apparent that we've underestimated project complexity, slashing test time is a very poor decision. Instead, we need to acknowledge that, since the test effort was probably based on the underestimated coding effort, the test effort was probably underestimated as well. We may therefore need to schedule more test time to correct the original estimation, not less.
Iterative testing increases test time without affecting the overall schedule by starting the testing earlier in each iteration. Also, the quality of the product determines the amount of testing that is required -- not the clock. For instance, if the product is solid and no new defects are being found in various areas, then the testing will go very smoothly and you can reduce testing time without reducing the number of tests or the test coverage. If the product is unstable, and many defects are being discovered and investigated, you'll need to add test cycles until the quality criteria are met. And don't forget that testing is not the project's time-consuming activity in the first place.
Myth: Finding and fixing all the defects will create a quality product.
Reality: Recent studies illustrate that only 10 percent of software development activities, such as creating customer-requested features, actually add value for the customer. A percentage of features, for example, are developed in order to stay competitive in the market, but are not necessarily used or desired by the customers. Finding and fixing defects related to these features also does not add customer value, because the customers might never have encountered these bugs.
Iterative testing, on the other hand, actually reduces defect inventory and customer wait time based upon what is of value to the customer. By involving customers in each iteration, iterative testing compresses the delivery cycle to the design partner customers, while maximizing the value of the application to this customer.
Myth: Continually regression-testing everything every time we change code is tedious and time consuming…but, in an ideal world, it should be done.
Reality: Regression testing doesn't mean "testing everything, every time."
Iterative regression testing means testing what makes sense in each phase and iteration. It also means modifying our coverage based on the impact of the change, the history of the product, and the previous test results.
If your regression tests are automated, then go ahead and run all of them all the time. If not, then be selective about what tests you run based on what you want the testing to accomplish. For instance, you might run a "sanity regression suite" or "set of acceptance tests" prior to "accepting" the product to the next phase of testing. Since the focus of each iteration is not necessarily the same, the tests don't need to be the same each time. Focus on features and tests that make sense for the upcoming deliverables and phase. For instance, if you're adopting components from a third party, like a contractor or an open source product, the sanity regression suites would focus on the integration points between the external and internal components. If, during your initial sanity regression testing of this third party module, you find defects or regressions, you may choose to alter your regression suites by adding additional tests based on the early results.
If, on the other hand, you're adopting a set of defect fixes that span the entire product that is within your control, the sanity regression suite would be focused and structured entirely on your end-to-end, high profile customer use cases. If the change is confined to just one area and the product has a stable quality track record, you could focus the regression suite on just that area. Likewise, in the end-game, you may want a very small sanity regression suite that covers media install, but not in-depth or end-to-end tests. Once again, the focus of the sanity or acceptance regression suite depends upon what was tested in the previous cycle, the general stability of the product, and the focus of the next iteration.
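Selecting a regression suite based on what changed might be sketched like this; the area-to-suite mapping and test names are hypothetical:

```python
# Hypothetical mapping from product areas to their regression suites.
suites = {
    "billing": ["test_invoice", "test_tax"],
    "auth": ["test_login", "test_password_reset"],
    "reports": ["test_export", "test_charts"],
}

# End-to-end sanity tests that run regardless of what changed.
sanity = ["test_startup", "test_checkout_flow"]

def select_regression_tests(changed_areas):
    """Pick the sanity suite plus the suites for the areas that changed."""
    selected = list(sanity)
    for area in changed_areas:
        selected.extend(suites.get(area, []))
    return selected

# A change confined to billing triggers only the billing suite
# on top of the sanity tests.
plan = select_regression_tests(["billing"])
```

The mapping itself would be tuned iteration by iteration, based on the stability history of each area and the focus of the upcoming deliverable.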
Myth: It's not a bug -- the feature is working as designed.
Reality: The over-explanation of why the product is doing what it's doing is a common trap. Sometimes we just know too much. When defects are triaged and reviewed, we often explain away the reasons for the defect. Sometimes we tag defects as "works as designed" or "no plans to fix" because the application is actually working as it was designed, and it would be too costly or risky to make the design change. Similarly, we explain many usability concerns as "it's an external component" or "it's a bell and whistle." Our widgets or UI controls may have known limitations. Or we may even tell ourselves that "once the user learns to do it this way, they'll be fine."
Myth: A tester's only task is to find bugs.
Reality: This view of the tester's role is very limited and adds no value for the customer. Testers are experts with the system, application, or product under test. Unlike the developers, who are responsible for a specific function or component, the tester understands how the system works as a whole to accomplish customer goals. Testers understand the value added by the product, the impact of the environment on the product's efficiency, and the best ways to get the most out of the product.
Taking advantage of this product knowledge expands the value and role of our testers. Expanding the role of the tester to include customer-valued collateral (like tips, techniques, guidelines, and best practices for use) ultimately reduces the customer's cost of ownership and increases the tester's value to the business.
Myth: We don't have enough resources or time to fully test the product.
Reality: You don't need to fully test the product -- you need to test the product sufficiently to reduce the risk that a customer will be negatively affected.
The reality of changing market demands generally means that, indeed, it's actually not possible to exhaustively test a product in the specified timeframe. This is why we need a pragmatic approach to testing. Focus on your customers' business processes to identify your testing priorities. Incorporate internal customers to system test your product. These steps increase your testing resources, while providing real-world usability feedback. You can also do your system testing at an external customer lab to boost your real-world environment experience without increasing your maintenance or system administration activities.
Myth: Testing should take place in a controlled environment.
Reality: The more the test environment resembles the final production environment, the more reliable the testing. If the customer's environment is very controlled, then you can do all your testing in a controlled environment. But if the final production environment is not controlled, then conducting 100 percent of your testing in a controlled environment will cause you to miss some important test cases.
While unpredictable events and heterogeneous environments are difficult to emulate, they are extremely common and therefore expected. In our current global market, it is very likely that your application will be used in flexible, distributed, and diverse situations. In iterative testing, we therefore schedule both business usage model reviews and system testing activities with customers whose environments differ. The early business usage reviews identify the diversity of the target customer market, prior to coding. System testing at customer sites exercises our product in the real world. Although these "pre-released" versions of the product are still in the hands of our developers and running on our workstations, they are tested against customer real-world office (or lab) environments and applications. While this strategy doesn't cover every contingency, it acknowledges the existence of the unexpected.
Myth: All customers are equally important.
Reality: Some customers are more equal than others, depending upon the goal of a particular release. For example, if the release-defining feature for the January release is the feature that converts legacy MyWidget data to MyPalmPilot data, then the reactions of my customers that use MyWidget and MyPalmPilot are more important for this particular release than the input of other customers.
All our customers are important, of course. But the goal of iterative testing is to focus on testing the most important features for this particular iteration. If we're delivering feature XYZ in this iteration, we want expert customer evaluation of XYZ from users familiar with prior XYZ functionality. While we welcome other feedback, such as the impressions of new users, the XYZ feature takes precedence. At this stage of development, users new to the market cannot help us design the "right XYZ feature."
Myth: If we're finding a lot of bugs, we are doing important testing.
Reality: The only thing that finding a lot of bugs tells us is that the product has a lot of bugs. It doesn't tell us about the quality of the test coverage, the severity of the bugs, or the frequency with which customers will actually hit them. It also doesn't tell us how many bugs are left.
The only certain way to stop finding bugs is to stop testing. It seems ridiculous, but the thought has merit. The crux of this dilemma is to figure out what features in the product actually need to work. I've already mentioned that there are many workflows in a product that aren't actually used -- and if they aren't used, they don't need to work. Incorporating customer usage knowledge directly into your test planning and defect triage mechanism improves your ability to predict the customer impact and risk probability associated with a defect. Incorporating both risk- and customer-based analysis into your test plan solution will yield a more practical and pragmatic test plan. Once you're confident in your test plan, you can stop testing after you've executed the plan.
How do you build that kind of confidence? Start, in your test planning, by identifying all the areas that you need to test. Get customer review and evaluation on business processes and use cases so that you understand the frequency and importance of each proposed test case. Take special care to review for test holes. Continually update and review your test plans and test cases for each iteration. Your goal is to find what's not covered. One way to do this is to map bug counts by software areas and categories of tests. If a software area doesn't have defects logged against it, it could mean that this area is extremely solid or that it hasn't been tested. Look at the timestamps of the defect files. If the last defect was reported last year, maybe it hasn't been tested in a while. Finding patterns of missing bugs is an important technique for verifying test coverage.
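Mapping bug counts by software area, as suggested above, can be sketched in a few lines of Python; the defect log and area names are hypothetical:

```python
from collections import Counter

# Hypothetical defect log: (software_area, date_reported) pairs.
defects = [
    ("billing", "2006-09-01"),
    ("billing", "2006-10-02"),
    ("auth", "2006-10-05"),
]
all_areas = ["billing", "auth", "reports", "install"]

counts = Counter(area for area, _ in defects)

# Areas with no logged defects are either rock-solid or untested --
# either way, they deserve a closer look.
suspect_areas = [a for a in all_areas if counts[a] == 0]
```

A real version would also inspect the timestamp of the most recent defect per area, as described above, to spot areas that have gone stale.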
Myth: Thorough testing means testing 100 percent of the requirements.
Reality: Testing the requirements is important, but not sufficient. You also need to test for what's missing. What important requirements aren't listed?
Finding what's not there is an interesting challenge. How does one see "nothing?" Iterative testing gets customers involved early. Customers understand how their businesses work and how they do their jobs. They can tell you what's missing in your application and what obstacles it presents that stops them from completing their tasks.
Myth: It's an intermittent bug.
Reality: There are no intermittent bugs. The problem is consistent -- you just haven't figured out the right conditions to reproduce it. Providing serviceability tools that continually monitor performance, auto-calibrate at the application's degradation thresholds, and automatically send the proper data at the time of the degradation (prior to the application actually crashing) reduces both in-house troubleshooting time and customer downtime. Both iterative testing and iterative serviceability activities reduce the business impact of undiscovered bugs.
Better diagnostic and serviceability routines increase the customer value of your product. By proactively monitoring the environment when your product starts to degrade, you can reduce analysis time and even avoid shutdown by initiating various auto-correcting calibration and workaround routines. These types of autonomic service routines increase your product's reliability, endurance, and run-time duration, even if the conditions for reproduction of a bug are unknown.
In a sense, autonomic recovery routines provide a level of continuous technical support. Environment logs and transaction trace information are automatically collected and sent back to development for further defect causal analysis, while at the same time providing important data on how your product is actually being used.
If we acknowledge that bugs are inevitable, we also need to realize the importance of appropriate serviceability routines. These self-diagnostic and self-monitoring functions are effective in increasing customer value and satisfaction because they reduce the risk that the customer is negatively affected by bugs. Yet even though these routines increase customer value, few development cycles are devoted to putting these processes in place.
Myth: Products should be tested under stress for performance, scalability, and endurance.
Reality: The above is true. But so is its opposite. Leaving an application dormant, idle, or in suspend mode for a long period emulates customers going to lunch or leaving the application suspended over the weekend, and often uncovers some issues.
I recommend including sleep, pause, suspension, interrupt, and hibernating mode recovery in your functional testing methods. Emulate a geographically-distributed work environment in which shared artifacts and databases are changing (such as when colleagues at a remote site work during others' off-hours) while your application is in suspend or pause mode. Test what occurs when the user "wakes it up," and the environment is different from when they suspended it. Better yet, put your product in a real customer environment and perform system testing that emulates some of the above scenarios.
Myth: The customer is always right.
Reality: Maybe it's not the right customer. You can't make everyone happy with one release. Therefore, be selective in your release-defining feature set. Target one type of customer with a specific, high-profile testing scenario. Then, for the next release or iteration, select a different demographic. You'll test more effectively; and, as your product matures, you'll increase your customer base by adding satisfied customers in phases.
Myth: Iterative development doesn't work.
Reality: It's human nature to be skeptical of the unknown and the untried. In the iterative development experience, benefits accrue gradually with each successive phase. Therefore, the full benefit of the iterative approach is appreciated only towards the end of the development cycle. For first-timers, that delayed gratification can truly be an act of faith. We have no experience that the approach will work, and so we don't quite trust that it will. When we perceive that time is running out, we lose faith and abandon the iterative process. In panic mode, we fall back into our old habits. More often than not, iterative development didn't actually fail. We just didn't give it a chance to succeed.
Iterative testing provides visible signs at each iteration of the cumulative benefits of iterative development. When teams share incremental success criteria (e.g., entrance and exit criteria for each iteration), it's easier to stay on track.
Because we are continually monitoring our results against our exit criteria, we can more easily adjust our testing during the iterations to help us meet our end targets. For instance, in mid-iteration, we might observe that our critical and high defect counts are on the rise and that we're finding bugs faster than we can fix them. Because we've recognized this trend early, we can redistribute our resources to sharpen our focus on critical-path activities. We might reassign developers working on "nice-to-have" features to fix defects in "release-defining" features or remove nice-to-have features associated with high defect counts.

Thursday, October 12, 2006

Mercury launches new software testing products

On Monday, Mercury released two testing products, an upgrade of the Systinet component registry, and new SOA management capabilities within the Business Availability Center, a suite of software management tools. SOA is a type of distributed computing in which standards-based interfaces are used to connect application functionality. Data is traded by way of technology based on Extensible Markup Language, or XML.

SOA, formerly called distributed objects architecture, was often used initially to carry data to in-house Web portals. The technology is gradually moving up the IT food chain to handle integration of supply-chain applications between companies, and to automate in-house business processes.

With the increasing complexity has come the need for better tools to manage the spider web of components, often called services, trading XML data packets. Mercury starts its mission with version 2.1 of Systinet 2, a registry and repository with a set of governance capabilities that include service publishing and discovery, policy management, contract management, interoperability and lifecycle management.

On the testing side, Mercury unveiled its new Service Test 8.1 and Service Test Management products. The former conducts both functional and performance tests for services, and the latter manages the process of testing new services and service changes.

Service Test Management is an integrated module of Mercury Quality Center 9.0. The new tool can be used to perform service test planning, execution, defect evaluation and analytics.

Service Test is a standalone product built on Mercury’s LoadRunner technology for testing applications. The new product can automatically generate tests for services without graphical user interfaces; and can perform functional, boundary, compliance and interoperability testing.

Tuesday, October 10, 2006

Apollo 13 and acceptance testing

Let’s look back again at the Apollo 13 “missed opportunities”.

During the routine countdown rehearsals, oxygen tank #2 was filled with oxygen but could not be emptied. The ground crew thought a loose nozzle fitting was the source of the difficulty, but there was no thorough investigation. The loose fitting problem was not fixed since gaseous oxygen still passed through the nozzle as needed. Instead of thoroughly investigating the problem, they worked around the problem.

When the normal procedure to empty the tank failed to work, ground crews improvised a procedure and used heaters and fans to empty the tank. (Notice that living with and working around an unexplained problem in the Apollo program is similar to the Space Shuttle program accepting the unexplained problem of foam strikes.)

The improvised detanking procedures had never been used before, and the tank had not been qualified for the conditions experienced. (Notice a similarity between this and the Challenger launching in cold temperatures for which the vehicle had not been qualified.)

In reviewing the improvised procedures, officials at NASA, North American, and Beech, and even the flight crew, did not recognize the hazard of overheating.

Many of the managers were not even aware of the extended heater operation.

Neither qualification nor acceptance testing required switch cycling under load as should have been done. This was a missed opportunity.

Had the improvised detanking procedure never been performed, the problem could have gone completely unnoticed and the Apollo 13 flight completed without the anomaly: the switch would have remained cool and closed during flight, and could probably have taken a momentary 65 V DC charge without failing. Imagine if the Columbia foam strike had been just slightly smaller; perhaps the incident would never have occurred and the problem would have gone unnoticed.

The thermostatic switch failures probably would have been caught if the heater current readings had been checked during the detanking operation.

Oxygen tank #2 had been dropped during installation at North American Aviation, which caused the fitting to become loose, but there was no investigation.

The tank heaters were equipped with 28-volt thermostatic switches, rated for the power supplied by the spacecraft fuel cells. During the countdown rehearsal, however, they were powered by a 65-volt ground power supply, and the 65-volt load likely caused the switches to fail. The ground crew kept the heaters on, assuming the thermostatic switches would open if the tank temperature exceeded 80° Fahrenheit, but the heaters did not shut off, and temperatures reached 1000° F. This heat burned the Teflon insulation off the fan motor wiring, leaving bare wires that would short-circuit during the mission. The ground crew should have noticed the high temperature or a burning smell. Apparently nobody was aware the temperature had reached such a high reading, or else they simply did not report the anomaly. Perhaps they were in a hurry to complete the task; after all, the heaters had been on for six hours. The electrical parts were damaged, and the stage was set for potential disaster.

The thermostatic switch's 28-volt specification, dating to 1962, was revised in 1965 to carry the 65-volt Kennedy Space Center ground supply. However, Beech Aircraft Corporation, which manufactured the switches, did not make the needed change. This opportunity was missed by Beech, either intentionally or as an oversight, and also missed by North American and NASA in all of their design, documentation, and flight review systems.

The Apollo 13 problem was right in front of everyone, including the astronauts, just as the foam strike problems had been known and accepted since the very first space shuttle flight.

Monday, October 09, 2006

Software Testing Principles

Software testing is an extremely creative and intellectually challenging task. When testing follows the principles given below, the creative element of test design and execution rivals any of the preceding software development steps.
Testing must be done by an independent party.
Testing should not be performed by the person or team that developed the software, since they tend to defend the correctness of the program. The developer is driven by delivery deadlines and will try to finish testing as early as possible; he is gentle with his own code and has a soft spot for it. An independent tester is driven by quality and will try to break the code. Hence testing should be done by an independent party.
Assign best personnel to the task.
Because testing requires high creativity and responsibility, only the best personnel should be assigned to design, implement, and analyze test cases, test data, and test results. Note also that, per the first principle, testing is done by an independent tester; the system is new to him, and he must understand the entire system in order to test it. It is possible to find bugs and issues in software only when you know the system thoroughly.
Testing should not be planned under the tacit assumption that no errors will be found.
Testing is the process of executing a program with the intent of finding errors. Good testing will find bugs, and testing is never complete; it is potentially infinite. There should be no assumption that the software has no errors: such an assumption would leave tremendous holes in the system.
Test for invalid and unexpected input conditions as well as valid conditions.
The program should generate correct results when valid input is given; this is called positive testing. The software should also give correct error messages when invalid input is encountered; this is called negative testing. We need to supply inputs across a range of values. For example, if an input field takes a positive integer, we should try all sorts of integers (positive, negative, zero, very large), as well as characters and strings, and expect the correct error message.
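A minimal sketch of positive and negative testing for the positive-integer example above. Note that `parse_quantity()` is a hypothetical function invented purely for illustration.

```python
# A minimal sketch of positive vs. negative testing for the
# positive-integer input example. parse_quantity() is a hypothetical
# function invented purely for illustration.
def parse_quantity(text):
    value = int(text)  # raises ValueError for non-numeric input
    if value <= 0:
        raise ValueError("quantity must be a positive integer")
    return value

# Positive testing: valid input produces the correct result.
assert parse_quantity("42") == 42

# Negative testing: invalid and boundary inputs raise the expected error.
for bad in ["0", "-7", "abc", "3.5", ""]:
    try:
        parse_quantity(bad)
    except ValueError:
        pass  # the correct error path was taken
    else:
        raise AssertionError(f"{bad!r} should have been rejected")
```

Both halves matter: the positive case checks correctness, while the negative loop checks that every invalid input is rejected with the expected error rather than silently accepted.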
The probability of the existence of more errors in a module or group of modules is directly proportional to the number of errors already found.
If you find a bug in a particular module, developers tend simply to resolve that issue and close the bug. But there is a higher probability that more issues exist in that area. Look around: ask why that particular bug is present, and search for similar issues in that module; you are likely to hit many more.
Keep software static during test.
The program must not be modified during execution of the designed set of test cases. Modifying the program once testing has started leaves loopholes in the testing: you may have finished testing module A and started on module B, but by that time module A has changed, and the changes go untested.
Document test cases and test results.
This is very important for testers. Document the test cases so they can be reused in later testing, perhaps for the next release. Document the test results so they can be analyzed, if required, at a later stage.
Provide expected test results if possible.
A necessary part of test documentation is the specification of expected results, wherever providing such results is practical. Using the expected results, you can verify the software under test and declare whether or not it meets expectations.

Include Google Gadgets Into Any Page

Google announced that some of the gadgets available for the Google Personalized Homepage can be embedded into any web page. The list of gadgets includes Google Calendar Viewer, Google Calculator, US Traffic Information, Moon Phase, Picasa Album Viewer, and more. This way, you can enrich your web page with live information.
Google gives an example of usage:

"For example, let's say you are in charge of your club soccer team's website, and you want to add a current weather forecast so your fans can plan for your games or you want to include a daily brainteaser on your site without having to come up with something new everyday. Google Gadgets lets you do this easily. Just visit the directory of "Google Gadgets for your webpage" to find gadgets that you'd like to add to your own page and select your preferences for how the gadget will appear on your page. Then, copy and paste the HTML from the window into the HTML code for your own website. It's an easy way to get the content you need and want without spending hours writing code!"

Speeding up Software Testing

How do we do that? Barbara on the Business Analyst Blog has an answer: “Write Better Requirements”.

This post raises a very fundamental question: “If you don’t have excellent requirements, how do you evaluate the software’s ability to address the business problem?”

Poor requirement definition by the stakeholders, and poor understanding (and hence documentation) of requirements by the development team, are two major reasons why testers take so long in testing (and also why so many software projects fail).

Everyone has suggestions about how to improve your testing—implement a testing process or methodology, utilize IEEE standards, work towards CMMI compliance, etc. No one mentions that improving requirements will improve testing!

Without testable and verifiable requirements, testing is always a difficult and time-consuming job.

Fatal bugs in history of software

In an increasingly software-dependent world, software quality is more critical than ever before. I just came across an article in Baseline Magazine listing the "Eight Fatal Software-Related Accidents". Of them, the most devastating was a software bug that hobbled a radar, causing a Korean jet crash that killed 225 people in 1997.

On similar lines, I read an interesting article on Wired News entitled "History's Worst Software Bugs".

The article states,
"Sixty years later, computer bugs are still with us, and show no sign of going extinct. As the line between software and hardware blurs, coding errors are increasingly playing tricks on our daily lives. Bugs don't just inhabit our operating systems and applications — today they lurk within our cell phones and our pacemakers, our power plants and medical equipment. And now, in our cars."

While it is an accepted fact that software can never be completely bug-free, QA and test teams should at least be able to ensure that the remaining bugs are not fatal to individuals and businesses. And that's where the real challenge lies.

Raising alarm bells

For members of a test team, it is very important to raise the alarm at the right instances. By the right instances, I mean that the test team should raise the alarm well in advance if they have even the slightest apprehension about product quality and the resolution of its bugs.

Raising the alarm is an important technique for a test team because they have a clear perspective on overall product quality in terms of functional stability and correctness. While the project manager's perspective is largely governed by task orientation and execution, the test team has the exact status of overall quality. This means that, to an extent, test engineers are also risk analyzers: they strive to weed out the majority of project risk by identifying the most crucial and important bugs.

Raising the alarm well in advance helps the project manager manage client expectations. When the client is sitting at the other end expecting the product to be shipped, it makes no sense to raise an alarm at the last moment. Doing so paralyzes the project manager and leaves little time for him and the client to manage the delays. Most often, clients become difficult to manage when product shipment is withheld at the last moment, and a manageable situation turns into a crisis.

Don’t test more, develop better...

The following quote by Steve McConnell emphasizes the importance of process in software projects. Poor quality results from a weak process structure. Processes that enable teams to do the right things the first time not only improve overall effectiveness but also improve quality. Test results are just a measure of the overall product quality.

Testing by itself does not improve software quality. Test results are an indicator of quality, but in and of themselves, they don't improve it. Trying to improve software quality by increasing the amount of testing is like trying to lose weight by weighing yourself more often. What you eat before you step onto the scale determines how much you will weigh, and the software development techniques you use determine how many errors testing will find. If you want to lose weight, don't buy a new scale; change your diet. If you want to improve your software, don't test more; develop better.
Steve McConnell, "Code Complete: A Practical Handbook of Software Construction", ISBN 1556154844

Bug Reporting is an Art !!!

The work of a software tester is much like a doctor's diagnosis. There is no room for ambiguity when a doctor diagnoses: before a doctor reveals his diagnosis, a great deal of symptom analysis and thought goes into it. A doctor has to be very precise in reporting problems as he diagnoses them. "Patient X has something like cancer" would kill the patient out of sheer shock. In any field, effective reporting is largely an art.

Similarly, when diagnosing issues with software under test, there is no room for ambiguity. As testers, we must remember that each reported bug means work for developers: they have to understand the context of the issue, try to reproduce it, and resolve it once it is reproduced. Many a time, a project manager needs to understand which severe issues are open and manage the project accordingly.

For this, testers must clearly and succinctly report each finding, with appropriate severity and priority assigned. Suppose you are a developer and you see a bug stating "The values in the category combo box are not proper." How would you react to it?

Following are pointers to effectively report software issues:

1. Each bug should be clearly identifiable by its title.
2. Each bug should be reported after establishing the proper context: what are the pre-conditions for reproducing it?
3. Write down steps to reproduce the bug.
4. Be very clear and precise. Use short and meaningful sentences.
5. Cite examples wherever necessary, especially if bugs are a result of a combination of values.
6. Give references to specifications wherever required, e.g., refer to the invoices module on page 14 of the SRS.
7. Keep the descriptions simple.
8. Be objective. Report what you are supposed to without passing any kind of judgement in the bug descriptions.
9. Thoughtfully assign severity and priority to the bug. Remember, a minor issue may have high priority and vice versa.
Reporting a bug is no rocket science, but it surely requires a lot of common sense. I have seen people write mini-essays in bug reports, and others who report only one-liners.

Reported bugs should not add an overhead of understanding to the developers but help them instead to reproduce the bug and effectively resolve it.
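Applying the pointers above to the vague combo-box report quoted earlier, a clearer version might read as follows. This is an entirely hypothetical example; the module, page reference, and details are invented for illustration.

```
Title: Category combo box on New Invoice page lists inactive categories
Pre-conditions: Logged in as admin; at least one category marked inactive
Steps to reproduce:
  1. Open Invoices > New Invoice.
  2. Click the Category combo box.
Expected: Only active categories are listed (see invoices module, SRS p. 14)
Actual: Inactive categories appear and can be selected
Severity: Major    Priority: High
```

A report in this shape gives the developer the context, the steps, and the expected-versus-actual gap in one glance, with no overhead of interpretation.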

Google Code Search

Google launched a new code search feature the day before yesterday. At least two sites already offer this functionality, but a great deal of attention follows Google wherever they go.

Code search is a great resource for web developers and programmers, but like any newly searchable body of information, it has handed flashlights to people interested in exploring dark corners. Here are some things that people have uncovered already:

1. Key generation algorithm for WinZip (via airbag)
2. Wordpress usernames and passwords. Looks like a lot of these are the result of people zipping/tarring up their Wordpress files and putting the zip/tar file in a publicly accessible directory. I imagine other such applications are just as susceptible to this issue. (via airbag)
3. Likewise for Movable Type. This only turns up one username/password, but it's for Gawker, which in turn reveals an open directory with all sorts of code and username/password goodies... but they restricted access to it after being notified of the problem.
4. Possible buffer overflow points. (via live aus der marschrutka)
5. Tons of nerd jokes, like "here be dragons".
6. Confidential code and code with restricted rights. (via digg)
7. Coders complaining about stupid users.
8. All sorts of code that needs to be fixed.
9. Programmers who want to get a new job. In the office just now, we were talking about turning Google Code Search into a job posting board by inserting "Like our code? Come work for us!" text ads in the comments of source code which is then distributed and crawled by Google.
10. Kludge-y code.
11. You can also use it for vanity searches. A surprisingly small amount of code is returned on a search for Linus Torvalds. Jamie Zawinski. Alan Cox. There have to be more prolific programmers out there...
12. Programmers coding while drunk. Also: "I am drunk and coding like I am the greatest coder of all time."
13. Customer databases with names, addresses, zip codes, phone numbers, and weakly encrypted passwords. Ouch. (No link to this one because I don't really want to get anyone's data out there.)
14. Expressions of which programming language sucks more. For instance, Python sucks.
15. Code vulnerabilities: "this will crash".
16. Listing of some backdoor passwords.
Got any other Google code search goodies? Send them along.