Wednesday, January 17, 2007

Software Testing: Why is it so Difficult?

Software testing is probably the most complex task in the software development cycle. It often takes longer to test the software than it does to write the code. The problems discovered during testing add more time onto the coding phase, which in turn delays the product’s release, and so the vicious cycle continues.
It’s nearly impossible to attribute the problems that arise during the software testing cycle to any single factor. Before going any further, one point needs clarifying. Software that works, but not the way it is supposed to work, is not considered an error in the coding (also known as a bug). Rather, it is the result of an error in the design phase. Software engineers capture this distinction with the terms “fault” and “failure”. Faults can turn into failures, but that’s a subject for another time.
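To make the fault-versus-failure idea concrete, here is a minimal, hypothetical sketch (my own, not from any cited source): the fault is a latent mistake sitting in the code, and it only becomes a failure when a particular input exposes it.

```python
# The fault is the unguarded division: it sits silently in the code.
def average(values):
    return sum(values) / len(values)   # fault: an empty list is never considered

# No failure is observed for typical inputs...
print(average([70, 80, 90]))           # prints 80.0

# ...but the fault turns into a failure the moment someone passes an empty list:
# average([])  ->  ZeroDivisionError at run time
```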

So why is software testing such a time-consuming and frustrating process? One factor is the complexity of the software being developed. The more complex the project is, the more there is to test. Also, complex projects typically involve multiple development team members, including those working directly for the company and those working as sub-contractors.

Software testing troubles directly attributable to the human factor include poorly documented code and employee turnover. And if the person who is not properly documenting the project is the same person who leaves the company midway through the project cycle, the problem quickly compounds.

Another software testing difficulty arises when those developing the software are the same ones testing it. These individuals have a much higher level of understanding of software and computer systems. They’re not likely to make the same kinds of mistakes during the software testing phase that end users might make with the finished software. For example, software engineers understand that you never press the power button before properly closing all applications. End users, in their rush to burst out of the office at 5:00 pm, just want the computer off, and some won’t wait until every application closes properly.

Plenty of software testing applications are available to help during the software testing phase. These products are designed to facilitate and enhance the software testing phase, provided they can be made to work with the software being tested.

The main purpose of software testing is to discover software failures so that they can be corrected. However, software testing cannot guarantee that the software is problem-free. Testing finds many bugs, but even the most extensive testing process cannot fix failures it never uncovers. Never rely on software testing as proof that no problems exist; that is software verification, an entirely different process.

Regardless of the difficulties involved with software testing, one truth remains: software failures discovered early on cost far less to correct than those found in later stages of development, or worse, after the software has been released to the general public.

Tuesday, January 16, 2007

Black Box Testing Video Tutorial !

This educational video clarifies the difference between various types of software testing, including black box testing, glass box testing, acceptance testing, unit testing, system testing, integration testing and other types of software testing. Watch the Black Box Testing video.
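As a rough illustration of the black box versus glass box distinction (my own sketch with a made-up function, not taken from the video): black box tests exercise only the documented behaviour, while glass box tests are chosen by reading the code itself, for example to cover every branch.

```python
# Hypothetical function under test.
def discount(price, is_member):
    return price * 0.9 if is_member else price

# Black box style: derived from the specification alone, no knowledge of internals.
assert discount(100, True) == 90
assert discount(100, False) == 100

# Glass box (white box) style: inputs chosen after reading the source, so that
# both branches of the conditional are exercised, including a boundary value.
assert discount(0, True) == 0
assert discount(0, False) == 0
```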

Story of a Software Tester !

On a dark and foggy night, a small figure lay huddled on the railway tracks leading to the Chennai station. I was taken aback to see someone in that position at midnight with no one around. With curiosity taking the front seat, I went near the body and tried to investigate. There was blood all over the body, which was lying face down. It seemed that a ruthless blow from the last train could have caused the end of this body, which seemed to be that of a guy around my age. Amidst the gory blood flow, I could see a folded white envelope fluttering in the midnight wind. Carefully I picked up the blood-stained envelope and was surprised to see the phrase "appraisal letter" on it. With curiosity rising every moment, I wasted no time in opening the envelope to see if I could find some details about the dead guy. The tag around the body's neck and the jazzy appraisal cover gave me the hint that he might be a software engineer. I opened the envelope to find a shining paper on which the appraisal details were typed in flying colors. Thunder broke into my ears and lightning struck my heart when I saw the appraisal amount of the dead guy!!!!! My God, it was not even as much as the cost of the letter on which the appraisal details were printed.... My heart poured out for the guy, and loud calls were heard inside my mind saying, "no wonder this guy died such a miserable death"... As a fellow worker in the same industry, I thought I should mourn for him for the sake of respect, and stood there with a heavy heart thinking of the shock he would have experienced when his manager placed the appraisal letter in his hand. I am sure his heart would have stopped and his eyes would have gone blank for a few seconds looking at the next-to-nothing increment in his salary.

While I mourned for him, for a second my hands froze when I saw the employee's name in the appraisal letter... hey, what a strange coincidence, this guy's name was the same as mine, including the initials. This was interesting. With some mental strength, I turned the body over and nearly fainted. The guy not only had my name, but also looked exactly like me. Same looks, same build, same name.... it was me who was dead there!!!!!!!! While I was lost in that shock, I felt someone patting my shoulder. My heart stopped completely, I could not breathe, and I sprang up in fear to see who was behind me......... splash!!! went the glass of water onto my laptop screen as I came out of my wild dream to see my manager standing behind my chair, patting my shoulder and saying, "Wake up, man! Come to meeting room number two. I have your appraisal letter ready"!!!

Friday, January 12, 2007

I have found this bug in Google !

Unbelievable! But 100% TRUE. I myself am a big fan of Google. In fact, my dream is to work as a Software Tester at Google. But today I was shocked (or rather delighted!) to see this bug in Google Suggest. I have already blogged about it on my other blog. You can read it here.
I am proud to have caught a bug in the great Google!
Please write back with your opinions.

Thursday, January 11, 2007

Why Software Quality Stinks – Is it due to lack of Adequate Testing ?

According to a recent survey, it may not get better until attitudes change—from the top down. The Cutter Consortium, an Arlington, Mass.-based IT consulting and advisory service, recently surveyed more than 150 software development organizations across several industries (computer software, financial services, education and more) and found that 38 percent of developers said their companies do not have an adequate software quality assurance program. A startling 31 percent said their companies had no quality assurance personnel at all. In spite of this, the perception among developers is that most senior managers are satisfied with the quality of software that their companies are producing. It's time for senior management to provide visible support for software quality.

Top 5 Best Practices
If your organization doesn't have a software quality team in place, follow these five steps to get an effective group up and running.
1. Get support from senior management. If developers know that a CIO, CTO or CEO is backing the software quality assurance manager, they'll be more likely to produce cleaner code. Get the attention of executives by connecting software quality to the bottom line.
2. Establish a quality organization (with processes, staff and an experienced manager). You may be able to form a group from in-house staff; however, E.M. Bennatan, senior consultant at the Cutter Consortium, says having an experienced, strong quality manager is vital. "You need someone who has spent a few years in the trenches and has gotten products out the door," he says.
3. Train developers too (along with your testing team). Don't save quality training just for the quality assurance group. Developers will pay closer attention to quality issues if they know what to watch out for.
4. Listen to your customers (or user group). Get customers involved in the development process. Offer them a beta version of software to test. "Their feedback is invaluable," says Bennatan.
5. Collect metrics. The quality process should be data-driven, according to Bennatan. Demonstrate that the quality of your products is improving; a small sketch of what such metrics might look like follows this list.
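As one hypothetical example of data-driven quality tracking (my own illustration, not something from the survey), defect density per release is a simple metric that lets you show, rather than claim, that quality is improving:

```python
# Hypothetical release data: (release name, defects reported, KLOC shipped).
releases = [
    ("1.0", 120, 80),
    ("1.1", 95, 90),
    ("1.2", 60, 95),
]

for name, defects, kloc in releases:
    density = defects / kloc            # defects per thousand lines of code
    print(f"Release {name}: {density:.2f} defects/KLOC")
```

A falling defects/KLOC trend across releases is exactly the kind of evidence senior management can act on.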

Saturday, January 06, 2007

How to survive the Testing Tsunami? A Tester’s View !

As a Software Test Engineer, when I’m hired to test an application under development, I am aware of the looming testing tsunami. Unfortunately, many organizations are not aware of this phenomenon, so they don’t plan for it, which puts their projects at risk.

What is the testing tsunami? A tsunami is a tidal wave. The testing tsunami is the tidal wave of testing work that occurs at the end of development. Consider the workload curves in software development. The developers’ workload starts high and progressively decreases until all work is completed. I will ignore feature creep here, because this does not change the fundamental curve. Ditto for those who might say, “Doesn’t it spike here and there?” Fluctuations are not important – just the overall trend.

This is just the opposite of the testers’ workload. As more code is completed, the testers’ workload increases. New features must be tested. Old features must be retested. A full regression test should be performed before each release into production. Integration testing is required if the application interfaces with other systems. Specialized testing such as load, stress, performance and security testing usually begins towards the end of development. Making matters worse, the largest amount of testing happens just as the delivery deadline and/or budget is coming to an end. Armed with this knowledge, what should an organization do to mitigate the effects of the testing tsunami?

The way to lessen the effects of the testing tsunami is to move testing activities up as early as possible. This means hiring testers early on in the project. While this advice is by no means new, few companies follow it (I may have a skewed view here, since I am under the impression that testers tend to be hired late in the project’s lifecycle, or onto projects already in trouble). Make sure testers are budgeted for from day one of the project.

What would testers do this early in development, when the requirements haven’t even been gathered? Set up the testing infrastructure. Software development is a chaotic process. Testing requires a controlled environment, which translates into a large investment in testing infrastructure. The infrastructure needs will vary depending on the company’s testing maturity. Are defect tracking processes, dedicated test servers, testers’ workstations, test tool licenses, test databases and development processes already in place? Project archeology (digging up and understanding old artifacts) may be needed if this is another attempt at a failed project or a new version of an old system.

Testers should also be involved in requirements gathering meetings, but as observers. During this time they get familiar with the application and its end users and begin to think of test case ideas. Requirements and specifications are usually the source material for test cases, but the information in these documents is usually sparse and needs to be distilled into test cases. These documents also tend to focus on positive scenarios; for each positive scenario, there are usually many negative scenarios that must be developed and tested, as the sketch below illustrates.
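Here is a minimal, hypothetical sketch of that fan-out (the login() function and its requirement are made up for illustration): one positive scenario from the spec yields several negative scenarios the spec never mentions.

```python
import unittest

# Hypothetical system under test: login() succeeds only for a known user
# supplying the correct password.
USERS = {"alice": "s3cret"}

def login(username, password):
    return USERS.get(username) == password

class LoginTests(unittest.TestCase):
    # The positive scenario, taken straight from the requirement.
    def test_valid_user_can_log_in(self):
        self.assertTrue(login("alice", "s3cret"))

    # Negative scenarios the requirement does not spell out.
    def test_wrong_password_is_rejected(self):
        self.assertFalse(login("alice", "wrong"))

    def test_unknown_user_is_rejected(self):
        self.assertFalse(login("mallory", "s3cret"))

    def test_empty_credentials_are_rejected(self):
        self.assertFalse(login("", ""))

if __name__ == "__main__":
    unittest.main()
```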

A Proof of Concept (PoC) is another best practice for reducing the testing tsunami, especially if specs are weak or non-existent. The sooner the testers get their hands on the application under test (AUT), the better. If test automation tools are being used, the PoC becomes key, as these tools should be evaluated against it. If the tool has already been purchased, you will get a good idea of how well it understands the AUT. What will you do if your test automation tool doesn’t work with your application? Select a different tool, change your application to increase its testability or abandon test automation?

Daily builds are another best practice, especially automated daily builds. Developers check in their code whenever they finish it, and each night a build is made. This has many advantages. First, developers get used to performing builds (or having builds performed); builds are no longer a big deal fraught with errors. Second, it allows testers to log bugs against new features earlier rather than later. Why wait for five features to be completed before building and releasing into test, and create a mini-tsunami? Get it into test ASAP. Finally, it gives faster feedback to the developers, which in turn gives management an accurate status of the project. It’s frightening to hear about testing organizations that will not accept code to test until the application is complete, or that stop testing as soon as they find a bug. Both of these are real examples I’ve encountered.
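For illustration, here is a minimal, hypothetical sketch of an automated nightly build (the repository, build command and test location are placeholders, not anything from this post); a scheduler such as cron would run it each night:

```python
# Hypothetical nightly-build script: fetch the day's check-ins, build, and
# run a quick smoke suite so testers get a fresh, working build every morning.
import datetime
import subprocess

def run(step, command):
    print(f"[{datetime.datetime.now():%H:%M}] {step}: {' '.join(command)}")
    return subprocess.run(command).returncode == 0

def nightly_build():
    steps = [
        ("update sources", ["git", "pull"]),                        # placeholder repo
        ("build",          ["make", "all"]),                        # placeholder build step
        ("smoke tests",    ["python", "-m", "pytest", "tests/smoke"]),
    ]
    for name, command in steps:
        if not run(name, command):
            print(f"Nightly build FAILED at step: {name}")
            return False
    print("Nightly build succeeded; testers can pick it up in the morning.")
    return True

if __name__ == "__main__":
    nightly_build()
```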

Instead of viewing testing as a quality function, view it as a project management function. Testing is a tool that provides a true, up-to-date status of the project. When wondering, “When should we start testing?” just ask yourself “When would I like to find out something is wrong or incomplete?”

By being aware of the massive workload that awaits testers at the end of the project, do everything possible to move up testing activities as early as possible. This will lessen, although not eliminate, the testing tsunami.

Tested by “Tester Tested” !

I am a regular reader of Pradeep Soundararajan's blog. Today I was reading his older posts and was surprised (or rather shocked) to see my own name in his post "Indian testing community - start blogging!"

I am quoting a few lines from his blog post. I hope Pradeep doesn’t mind that :).

.
.
.
Why were testers who tried aping Tester Tested unable to continue? (could be one of the following...)

1. They wanted to get success in a short span without putting much effort.
2. They thought it is easy to write a blog and maintain it.
3. They started with an intention to have a huge reader base.
4. They started because it worked for me.
5. They weren't passionate about testing.

Recently a tester from Chennai named Debasis received a surprise comment from James Bach on his blog, and that must have made him one of the happiest testers for the day.

Another Indian tester who has blogged with real passion is Shrinivas Kulkarni. He is a Senior Test Manager at iGate, Bangalore. He gets James Bach to comment on his blog on a regular basis.
.
.
.

Nice to see that some serious testers are reading my blogs too… :) Pradeep has been hired by James Bach as an Indian representative of Satisfice Inc. Read this in James Bach's Blog! I wish him all the best on becoming the "first Indian" representative of Satisfice in India! All the best, Pradeep…

Thursday, January 04, 2007

What do you do to deliver High-Quality Software?

How many times have you heard this question being asked in your workplace? You might have heard it in an interview, in a meeting, from your manager, from your client, or maybe from your colleagues… While the question remains more or less the same, the answer differs. I have heard a variety of answers to this single question, sometimes funny and sometimes rather pathetic. Some interesting answers include "Oh, the programmers are very sincere. They work really hard." or "After we write the code we run a lot of tests we found somewhere." or my favorite one, "Well, you see, I am Swiss, and I am very precise."

When I get an answer like this I smile more, and say, "Yes, but what exactly do you do to make your product high quality?" Some people still don't get it. I asked one prospective supplier my question, and they answered with a lot of charts about the bug rate in their shipped products. They were very proud that as the product matured, the bug rate went down. Another prospective supplier had a better answer: they talked about their systematic testing, generated from an assertion language and validated by a coverage tool. But these are both about how they recover from quality problems, not about how they put quality in.

If anybody asks me this nasty question, I want to be ready with a good solid answer. Ideally, everybody in my team should know the answer, believe in it, contribute to it, and be able to evangelize it. Our answer should be true, and way beyond what the competition can claim. It should be obvious to the user that our quality is phenomenal. We should do four things:

1. Commit to zero defects
2. Proceed systematically
3. Check everything
4. Improve continuously

Zero defects: A friend was telling me about the landscaping he was having done. The landscapers had made some mistakes, and he said, "I paid for a perfect job. They'll have to fix it." The customer expects a perfect job from us. Even in teams that already have a wonderful commitment to quality, some folks are scared of promising zero defects... but that's what we expect from others. And any tiny defect can blow the reputation of the whole system in the user's eyes. (This is the cockroach theory: if you see a roach in a restaurant, you don't say, "Oh, there's just that one roach." You say, "Let's get out of here, this place is infested.")

Proceed systematically: Architecture, procedures, well-reviewed designs, high-level languages and discipline for using them, plans for thorough testing, and measurement of results are all evidence for a development process that's under control. In addition, the team should have documented methods for design, defect prevention, and inline testing, and tools to mechanize these.

Check everything: Every work product should be checked somehow. Most teams review designs, review source code, and test compiled code. The same principle can be applied to anything else we produce for a "client", anywhere that quality might suffer. If this principle becomes part of our culture, each step can concentrate on preventing or removing its own defects, counting on clean input from the prior step.
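As one small illustration of "check everything" applied to an intermediate work product (my own hypothetical sketch, not the author's example), even a generated data file can be checked automatically before the next step consumes it:

```python
# Hypothetical check on an intermediate work product: a CSV of test results
# that a later reporting step depends on.
import csv

REQUIRED_COLUMNS = {"test_id", "result", "duration_sec"}

def check_results_file(path):
    with open(path, newline="") as handle:
        rows = list(csv.DictReader(handle))
    assert rows, f"{path} is empty - nothing was generated"
    missing = REQUIRED_COLUMNS - set(rows[0])
    assert not missing, f"{path} is missing columns: {missing}"
    for row in rows:
        assert row["result"] in ("pass", "fail"), f"bad result value: {row['result']!r}"
    return rows

# The next step can then count on clean input from the prior step:
# results = check_results_file("nightly_results.csv")
```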

Improve continuously: If we do all the above, it won't be enough. We need to learn from our mistakes, look for better ways, and question authority. Our methods and processes have to keep evolving to meet changing conditions.

Wednesday, January 03, 2007

Role of Technical Reviews in Software Testing

Technical Reviews:
Technical reviews include all the kinds of reviews that are used to detect defects in requirements, design, code, test cases, and other work products. Reviews vary in level of formality and effectiveness, and they play a more critical role in maximizing development speed than testing does.

The least formal and most common kind of review is the walkthrough, which is any meeting at which two or more developers review technical work with the purpose of improving its quality. Walkthroughs are useful to rapid development because you can use them to detect defects earlier than you can with testing.

Code reading is a somewhat more formal review process than a walkthrough but nominally applies only to code. In code reading, the author of the code hands out source listings to two or more reviewers. The reviewers read the code and report any errors to the code’s author. A study at NASA’s Software Engineering Laboratory found that code reading detected about twice as many defects per hour of effort as testing (Card 1987). That suggests that, on a rapid-development project, some combination of code reading and testing would be more schedule-effective than testing alone.

Inspections are the most formal kind of technical review, and they have been found to be extremely effective in detecting defects throughout a project. Developers are trained in the use of inspection techniques and play specific roles during the inspection process. The "moderator" hands out the material to be inspected before the inspection meeting. The "reviewers" examine the material before the meeting and use checklists to stimulate their reviews. During the inspection meeting, the "author" paraphrases the material, the reviewers identify errors, and the "scribe" records the errors. After the meeting, the moderator produces an inspection report that describes each defect and indicates what will be done about it. Throughout the inspection process you gather data about defects, hours spent correcting defects, and hours spent on inspections so that you can analyze the effectiveness of your software-development process and improve it.

Because they can be used early in the development cycle, inspections have been found to produce net schedule savings of 10 to 30 percent (Gilb and Graham 1993). One study of large programs even found that each hour spent on inspections avoided an average of 33 hours of maintenance, and inspections were up to 20 times more efficient than testing (Russell 1991).
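To put the Russell figure into perspective with some back-of-the-envelope numbers of my own (the project figures below are hypothetical, not from the study):

```python
# Hypothetical illustration of Russell (1991): each inspection hour is said to
# avoid roughly 33 hours of later maintenance effort.
inspection_hours = 40                      # assumed effort spent inspecting a component
maintenance_hours_avoided_per_hour = 33    # figure quoted from the study

hours_avoided = inspection_hours * maintenance_hours_avoided_per_hour
net_savings = hours_avoided - inspection_hours
print(f"{inspection_hours} inspection hours avoid about {hours_avoided} maintenance hours,")
print(f"a net saving of roughly {net_savings} hours.")
```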

Comment on Technical Reviews:
Technical reviews are a useful and important supplement to testing. Reviews find defects earlier, which saves time and is good for the schedule. They are more cost effective on a per-defect-found basis because they detect both the symptom of the defect and the underlying cause of the defect at the same time. Testing detects only the symptom of the defect; the developer still has to isolate the cause by debugging. Reviews tend to find a higher percentage of defects (Jones 1986). And reviews serve as a time when developers share their knowledge of best practices with each other, which increases their rapid-development capability over time. Technical reviews are thus a critical component of any development effort that aims to achieve the shortest possible schedule.

Tuesday, January 02, 2007

Agile testing strategies


An article about different approaches to testing on agile development projects. The philosophy outlined is:

1. First, you want to test as early as you possibly can, because the potential impact of a defect rises exponentially over time (this isn’t always true, but it’s something to be concerned about). In fact, many agile developers prefer a test-first approach; a small sketch of it follows this list.
2. Second, you want to test as often as possible, and more importantly, as effectively as possible, to increase the chance that you’ll find defects. Although this increases your costs in the short term, studies have shown that greater investment in testing reduces the total cost of ownership of a system due to improved quality.
3. Third, you want to do just enough testing for your situation: Commercial banking software requires a greater investment in testing than membership administration software for your local Girl Scouts group.
4. Fourth, pair testing, just like pair programming and modeling with others, is an exceptionally good idea. My general philosophy is that software development is a lot like swimming—it’s very dangerous to do it alone.
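As a minimal sketch of the test-first approach mentioned in the first point (the shipping_cost() function and its requirement are made up for illustration), the test is written before the code that makes it pass:

```python
import unittest

# Written first: these tests fail until shipping_cost() below is implemented.
class ShippingTests(unittest.TestCase):
    def test_orders_of_fifty_or_more_ship_free(self):
        self.assertEqual(shipping_cost(60.00), 0.00)

    def test_smaller_orders_pay_a_flat_rate(self):
        self.assertEqual(shipping_cost(20.00), 4.99)

# Written second: the simplest implementation that makes the tests pass.
def shipping_cost(order_total):
    return 0.00 if order_total >= 50.00 else 4.99

if __name__ == "__main__":
    unittest.main()
```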

Automated test tool that supports manual processes


While some software testing will always be manual, elements of automation practice can readily be applied to any software quality assurance environment. Original Software is delivering active support for manual practices through its TestDrive application suite. In particular, manual testing practices stand to gain productivity and shed errors by automating test step execution, content verification, database verification, error reproduction, defect creation and documentation.

TestDrive delivers active support for manual testing using industry-proven capabilities such as the company’s BusySense, Self-healing and Code-free innovations. By enabling manual testers to use these functions with or without full automation, Original Software provides a small-footprint, quick-to-learn option for quality professionals within any IT organization, helping them deliver the reliable applications that keep their organizations competitive.

HP ready for Vista !

From: Ameinfo

HP has announced its portfolio of ‘Microsoft-Ready’ products, services and solutions to help ensure customers experience a seamless transition to Windows Vista, the 2007 Microsoft Office system and Microsoft Exchange Server 2007.

HP’s early installation of beta releases for both Microsoft Office SharePoint Server 2007 and Microsoft Exchange Server 2007 provided an opportunity for preliminary performance and scalability testing. HP has deployed both applications in a pilot phase and plans to upgrade to each next year as part of its own IT transformation and data center consolidation.