Tuesday, September 26, 2006

SQA & Testing – Some Myths

Quality Assurance is a comparatively new phenomenon relative to testing. Here I would like to be specific in defining quality, its assurance, and how it differs from traditional testing methodologies.

A Glance at the Software Industry and Quality Artifacts:

Unlike the industrial revolution in the West, the software revolution didn't take centuries to sow its seed and then boom with a vertical graph; it took the software industry barely half a century to evolve, flourish, and experience its crests and troughs. Quite rightly, it is the offspring of the baby-boomer generation, post-world-war scenarios, the superpowers' marathon, and the industrial revolution.

Although the software industry learned fast, the path it followed was the same one taken by its predecessor industries. By that I mean: look at traditional industry; it took centuries to understand and digest the need for a quality product and a satisfied customer. Similarly, the software industry started without any knowledge of customers' exact needs, without any framework, and without any diversified targets.

I think many readers would agree with me if I say that, in its infant days, the software industry had the sole aim of weapons superiority. The fact of the matter is that today's software industry owes a lot to the US Department of Defense (DoD) and its subsidiaries. The buzzwords we utter today, like CMM, Malcolm Baldrige (MBNQA), and SPICE, grew out of DoD products and requirements.

To streamline its software acquisition and outsourcing, the DoD came up with these various parameters, standards, and tools to measure where an organization stands when it intends to do business with the DoD by meeting the DoD's software needs.

But still, do you know what quality is? How and why do different people and organizations have different sets of processes and standards? If they don't all share the same framework, then who is performing the actual quality work, and how can we categorize them?

To deduce the actual trend and definition of quality, let's discuss a few examples. For NASA, a software product's reliability and performance matter most. For Mr. X, software is good enough if it looks good and its GUIs are aesthetic; performance and reliability are secondary issues for him, at least initially. Hence, for two different customers, the primary and secondary issues swap. The same is true of the quality of this software: if NASA's software is not very aesthetic in look and feel but performs excellently, accurately, and with complete reliability, NASA would call it high quality, whereas if Mr. X were made to use very reliable but hard-to-use software with crude interfaces, he would certainly resist. For Mr. X, the quality parameters are not the same as for NASA.

So we conclude that quality is a relative term, a variable that keeps on changing. That's why I keep saying, "Quality lies in the eyes of the beholder."

Now, if we try to define a few major quality artifacts, we end up with a checklist that looks something like this:

Quality software must
1. Meet the requirements of the user
2. Have sufficient and up-to-date documentation
3. Be cost effective
4. Be user friendly
5. Be easy to learn
6. Be reliable
7. Be manageable
8. Be maintainable
9. Be secure


“To every quality artifact there is an equal and opposite resisting force”

Interestingly, within the same organization the definition of quality changes, and it becomes a nightmare for software engineers and project managers to fulfill each user's needs. For example, the manager wants to control the security system and doesn't want anybody to breach it, whereas the data entry operator or the accountant says, "What the hell! I am human and have made a mistake; why can't I change my posted data without bringing it to the notice of my bosses? I always did that when I worked manually." There is a clash of requirements here, which in the longer run can be treated as a quality artifact, and to resolve it software houses need not only good software developers and project managers but also the best analysts.

A Quality Assurance group contains some of the best analysts, who can foresee and help remove not only potential bugs but can also put themselves in different users' shoes, work with the software from each user's perspective, and come up with the potential problems the software may cause within the organization. Otherwise, such a tussle might keep the software from ever running, a complete loss for the software firm. This can only be achieved if you have a quality assurance group, not mere testers, in your organization, and if the QA group is involved in all phases of the SDLC.

That was the case on the client end; what about quality awareness on the software development end? If you visit or work in any software house you will be told that "QUALITY is our main IDEA," yet unfortunately one finds that most of these are mere lip-service slogans and that management has only one goal: to earn money. Yes, money is an important thing in business, but it's not the only thing. What if you earn millions in a day and, due to the poor quality of your product or a lack of customer satisfaction, you end up losing the market for good, or in a courtroom, where you may have to repay your unsatisfied client more than you earned? Most importantly, any such disaster brings a bad reputation to the company and its employees too. It may end with good human resources leaving the company, since no one wants to be part of a bad team.

It has also been observed quite often that in a crunch, the department that suffers first and most is the quality assurance department. This act in itself shows top management's lack of commitment and vision toward quality. Interestingly, this is not the case if the organization has a small testing group and no QA group. Do you know why? Because the difference between QA and testing is still jumbled in people's minds, and testers are given more credit than QA people, who are not only good testers but also good analysts.

Readers may also have experienced that when conflict arises between different teams or departments, management or project managers are often more inclined toward the developers than toward the quality assurance group. I have even worked with some top IT professionals who think quality assurance or testing people are just there to prove some point, and that otherwise the code was of very high quality with defects of no importance.

This attitude toward QA and testing is also reflected in our educational institutes, where during a three- or four-year degree program students hardly take any quality assurance courses. Even when an institute offers such a course, the faculty and management fail to bring forth the importance of the subject and the potential of the SQA field.

So why is testing different from QA? To test anything, one need not know, think about, or cater to the different analytical aspects; anyone with domain knowledge and the software's functional documents will do. A tester is not concerned with what the actual requirements of the software were, what went wrong, where and why, or whether this is what the client actually needs. The only thing he or she verifies is that the software works as per the functional specification (FS), and if the FS is wrong, that is not his or her headache. A tester analyzes only a specific area, and how to crash the application remains within scope. A tester can become a good QA resource if he or she has a gift for analysis, but this also needs dedication, motivation, drive, and support from top management.

Let's take the example of the automobile industry. Interestingly, the person who takes a car out for a test drive is just a tester; he or she need not be an expert or an automobile engineer. What the test driver can report or verify is whether the automobile is smooth, performs well, is comfortable, and makes the driver happy. The automobile engineers, who watch the whole development cycle of the car as well as the test drive, are the ones who gather feedback from the testers, mechanics, engineers, and designers and make recommendations for enhancing the features and quality of the machine.

Another basic difference between a tester and a QA person is that the tester is a black box to the client: the client may never need, or never come, to know who the tester working on his or her software is. Similarly, for the tester, the client is never a directly approachable interface. From experience I have learned that there should always be a QA person on the team who is in direct contact with the end client. This helps in numerous ways, which I will discuss some time later.


Quality is a relative term, a variable that changes its definition according to needs, scope, culture, environment, and geopolitical effects.

Quality Assurance people are not mere critics or testers; incidentally, you can find some of the best analysts in the QA department.

A good tester can become a brilliant QA person, but it is not possible to be an excellent QA person without testing experience.

A Tester’s Tips for Dealing with Developers

Is the tester doing a good job or a bad job when she proves that the program is full of bugs? It’s a bad job from some developers’ points of view. Ridiculous as it seems, there are project managers blaming testers for the late shipment of a product and developers complaining (often jokingly) that “the testers are too tough on the program.” Obviously, there is more to successful testing than bug counts. Here are some tips about how testers can build successful relationships with developers.

When I started my career as a software tester, I was made aware of an ongoing antagonism between developers and testers. And it took me no time or effort to be convinced that this is all too common. I received the kind of unwelcome response from developers that I think all testers experience at some point during their careers.

From indifferent shrugs to downright hostility (sometimes cloaked as sympathetic smiles), a tester has to endure a lot from developers. It can be hard to keep a positive attitude. But it’s up to us to keep our priorities straight, and push toward a quality project.
I picked up a beautiful line from Cem Kaner’s Testing Computer Software: “The best tester is not the one who finds the most bugs or who embarrasses the most developers. The best tester is the one who gets the most bugs fixed.”
So how can we do that?

Be Cordial and Patient
As a tester you may find it more difficult to convince a developer about a defect you’ve found. Often, if a tester exposes one bug, the programmer will be ready with ten justifications. It’s sometimes difficult for developers to accept the fact that their code is defective—and someone else has detected it.
Developers need support from the testing team, who can assure them that finding new bugs is desirable, healthy, and important in making the product the best it can be. A humanistic approach will always help the tester know the programmer better. Believe me, in no time the same person could be sitting with you and laughing at mistakes that introduced bugs. Cordiality typically helps in getting the developer to say “yes” to your bug report. An important first step!
Be Diplomatic
Try presenting your findings tactfully, and explain the bug without blame: "I am sure this is a minor bug that you could handle in no time. This is an excellent program so far." Developers will usually welcome that.
Take a psychological approach. Praise the developer’s job from time to time. The reason why most developers dislike our bug reports is very simple: They see us as tearing down their hard work. Some testers communicate with developers only when there is a problem. For most developers, the software is their own baby, and you are just an interfering outsider. I tell my developers that because of them I exist in the company and because of me their jobs are saved. It’s a symbiotic and profitable relationship between a tester and a developer.
Don’t Embarrass
Nobody likes mistakes to be pointed out. That’s human nature. Try explaining the big-picture need for fixing that particular bug rather than just firing bulky bug reports at developers. A deluge of defects not only irritates the developer, it makes your hard work useless for them. Just as one can’t test a program completely, developers can’t design programs without mistakes, and they need to understand this before anything else. Errors are expected; they’re a natural part of the process.
You Win Some, You Lose Some
I know of testers who make their bug reports as rigid as possible. They won’t even listen to the developer’s explanations for not being able to fix a bug or implement a feature. Try making relaxed rules for yourself. Sit with the developer and analyze the priority and severity of a bug together. If the developer has a valid and sensible explanation behind her reluctance to change something, try to understand her. Just be sure to know where to draw the line in protecting the ultimate quality of your product.
Be Cautious
Diplomacy and flexibility do not replace the need to be cautious. Developers often find an excuse to say that they refused to fix a bug because they did not realize (or you did not tell them) how serious the problem was. Design your bug reports and test documents in a way that clearly lays out the risks and seriousness of issues. What’s even better is to conduct a meeting and explain the issues to them.
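One lightweight way to make the risk impossible to miss is to treat it as a required field of the bug report itself. The sketch below is purely illustrative; the field names and the Severity scale are my own assumptions, not taken from any particular tracking tool:

```python
from dataclasses import dataclass
from enum import IntEnum


class Severity(IntEnum):
    """Higher values mean more serious impact."""
    COSMETIC = 1
    MINOR = 2
    MAJOR = 3
    CRITICAL = 4


@dataclass
class BugReport:
    """A report that states its risk up front, so no one can later
    claim they did not realize how serious the problem was."""
    summary: str
    steps_to_reproduce: list
    severity: Severity
    risk: str  # concrete business or user impact if left unfixed

    def headline(self) -> str:
        return f"[{self.severity.name}] {self.summary} -- risk: {self.risk}"


report = BugReport(
    summary="Posted ledger entries can be edited without an audit trail",
    steps_to_reproduce=["Post an entry", "Reopen it", "Edit and save"],
    severity=Severity.CRITICAL,
    risk="silent data tampering; fails any financial audit",
)
print(report.headline())
```

Because the risk field is mandatory, a report cannot even be filed without spelling out the consequences, which is exactly the paper trail you want when a "we didn't know it was serious" conversation starts.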
A smart tester is one who keeps a balance between listening and implementing. If a developer can’t convince you a bug shouldn’t be fixed, it’s your duty to convince him to fix it.

Thursday, September 21, 2006

What makes a good software tester?

Many myths abound, such as that testers are mildly sadistic or merely able to handle dull, repetitive work. As a one-time test manager and currently a consultant to software development and testing organizations, I've formed a picture of the ideal software tester: they share many of the qualities we look for in programmers, but there are also some important differences. Here's a quick summary of the sometimes contradictory lessons I've learned.

1. Know Programming.

Might as well start out with the most controversial one. There's a popular myth that testing can be staffed with people who have little or no programming knowledge. It doesn't work, even though it is an unfortunately common approach. There are two main reasons why it doesn't work.
(1) They're testing software. Without knowing programming, they can't have any real insights into the kinds of bugs that come into software and the likeliest place to find them. There's never enough time to test "completely", so all software testing is a compromise between available resources and thoroughness. The tester must optimize scarce resources and that means focusing on where the bugs are likely to be. If you don't know programming, you're unlikely to have useful intuition about where to look.
(2) All but the simplest (and therefore, ineffectual) testing methods are tool- and technology-intensive. The tools, both as testing products and as mental disciplines, all presume programming knowledge. Without programmer training, most test techniques (and the tools based on those techniques) are unavailable. The tester who doesn't know programming will always be restricted to the use of ad-hoc techniques and the most simplistic tools.
Does this mean that testers must have formal programmer training, or have worked as programmers? Formal training and experience is usually the easiest way to meet the "know programming" requirement, but it is not absolutely essential. I met a superb tester whose only training was as a telephone operator. She was testing a telephony application and doing a great job. But, despite the lack of formal training, she had a deep, valid intuition about programming and had even tried a little of it herself. Sure she's good. Good, hell! She was great. How much better would she have been, and how much earlier would she have achieved her expertise, if she had had the benefits of formal training and working experience? She would have been a lot better a lot earlier.
I like to see formal training in programming such as a university degree in Computer Science or Software Engineering, followed by two to three years of working as a programmer in an industrial setting. A stint on the customer-service hot line is also good training.
I don't like the idea of taking entry-level programmers and putting them into a test organization because:
(1) Loser Image. Few universities offer undergraduate training in testing beyond "Be sure to test thoroughly." Entry-level people expect to get a job as a programmer, and if they're offered a job in a test group, they'll often look upon it as a failure on their part: they believe they didn't have what it takes to be a programmer in that organization. This unfortunate perception exists even in organizations that value testers highly.
(2) Credibility With Programmers. Independent testers often have to deal with programmers far more senior than themselves. Unless they've been through a coop program as an undergraduate, all their programming experience is with academic toys: the novice often has no real idea of what programming in a professional, cooperative, programming environment is all about. As such, they have no credibility with their programming counterparts who can sluff off their concerns with "Look, kid. You just don't understand how programming is done here, or anywhere else, for that matter." It is setting up the novice tester for failure.
(3) Just Plain Know-How. The programmer's right. The kid doesn't know how programming is really done. If the novice is a "real" programmer (as contrasted to a "mere tester"), then the senior programmer will often take the time to mentor the junior and set her straight: but for a non-productive "leech" from the test group? Never! It's easier for the novice tester to learn all that nitty-gritty stuff (such as doing a build, configuration control, procedures, process, etc.) while working as a programmer than to have to learn it, without actually doing it, as an entry-level tester.
2. Know the Application.
That's the other side of the knowledge coin. The ideal tester has deep insights into how the users will exploit the program's features and the kinds of cockpit errors that users are likely to make. In some cases, it is virtually impossible, or at least impractical, for a tester to know both the application and programming. For example, to test an income tax package properly, you must know tax laws and accounting practices. Testing a blood analyzer requires knowledge of blood chemistry; testing an aircraft's flight control system requires control theory and systems engineering, and being a pilot doesn't hurt; testing a geological application demands geology. If the application has a depth of knowledge in it, then it is easier to train the application specialist into programming than to train the programmer into the application. Here again, paralleling the programmer's qualification, I'd like to see a university degree in the relevant discipline followed by a few years of working practice before coming into the test group.
3. Intelligence.
Back in the '60s, there were many studies done to try to predict the ideal qualities for programmers. There was a shortage, and we were dipping into other fields for trainees. The most infamous of these was IBM's Programmer's Aptitude Test (PAT). Strangely enough, despite the fact that IBM later repudiated this test, it continues to be (ab)used as a benchmark for predicting programmer aptitude. What IBM learned in follow-on research is that the single most important quality for programmers is raw intelligence: good programmers are really smart people, and so are good testers.
4. Hyper-Sensitivity to Little Things.
Good testers notice little things that others (including programmers) miss or ignore. Testers see symptoms, not bugs. We know that a given bug can have many different symptoms, ranging from innocuous to catastrophic. We know that the symptoms of a bug are arbitrarily related in severity to the cause. Consequently, there is no such thing as a minor symptom, because a symptom isn't a bug. It is only after the symptom is fully explained (i.e., fully debugged) that you have the right to say whether the bug that caused that symptom is minor or major. Therefore, anything at all out of the ordinary is worth pursuing. The screen flickered this time, but not last time: a bug. The keyboard is a little sticky: another bug. The account balance is off by 0.01 cents: a great bug. Good testers notice such little things and use them as an entree to finding a closely related set of inputs that will cause a catastrophic failure and therefore get the programmers' attention. Luckily, this attribute can be learned through training.
5. Tolerance for Chaos.
People react to chaos and uncertainty in different ways. Some cave in and give up, while others try to create order out of chaos. If the tester waits for all issues to be fully resolved before starting test design or testing, she won't get started until after the software has been shipped. Testers have to be flexible and able to drop things when blocked and move on to something that isn't blocked. Testers always have many (unfinished) irons in the fire. In this respect, good testers differ from programmers. A compulsive need to achieve closure is not a bad attribute in a programmer (it certainly serves them well in debugging); in testing, it means nothing gets finished. The testers' world is inherently more chaotic than the programmers'. A good indicator of the kind of skill I'm looking for here is the ability to do crossword puzzles in ink. This skill, research has shown, correlates well with programmer and tester aptitude, and it is very similar to the kind of unresolved chaos with which the tester must deal daily. Here's the theory behind the notion. If you do a crossword puzzle in ink, you can't put down a word, or even part of a word, until you have confirmed it by a compatible crossword. So you keep a dozen tentative entries unmarked, and when, by some process or another, you realize that there is a compatible crossword, you enter them both. You keep score by how many corrections you have to make, not by merely finishing the puzzle, because that's a given. I've done many informal polls of this aptitude at my seminars and found a much higher percentage of crosswords-in-ink aficionados than you'd get in a normal population.
6. People Skills.
Here's another area in which testers and programmers can differ. You can be an effective programmer even if you are hostile and anti-social; that won't work for a tester. Testers can take a lot of abuse from outraged programmers. A sense of humor and a thick skin will help the tester survive. Testers may have to be diplomatic when confronting a senior programmer with a fundamental goof. Diplomacy, tact, a ready smile-all work to the independent tester's advantage. This may explain one of the (good) reasons that there are so many women in testing. Women are generally acknowledged to have more highly developed people skills than comparable men-whether it is something innate on the X chromosome as some people contend or whether it is that without superior people skills women are unlikely to make it through engineering school and into an engineering career, I don't know and won't attempt to say. But the fact is there and those sharply honed people skills are important.
7. Tenacity.
An ability to reach compromises and consensus can come at the expense of tenacity. That's the other side of the people skills. Being socially smart and diplomatic doesn't mean being indecisive or a limp rag that anyone can walk all over. The best testers are both: socially adept and tenacious where it matters. The best testers are so skillful at it that the programmer never realizes that they've been had. Tenacious: my picture is that of an angry pit bull fastened on a burglar's rear end. Good testers don't let go. You can't intimidate them, even by pulling rank. They'll need high-level backing, of course, if they're to get you the quality your product and market demand.
8. Organized.
I can't imagine a scatter-brained tester. There's just too much to keep track of to trust to memory. Good testers use files, databases, and all the other accouterments of an organized mind. They make up checklists to keep themselves on track. They recognize that they too can make mistakes, so they double-check their findings. They have the facts and figures to support their position. When they claim that there's a bug, believe it, because if the developers don't, the tester will flood them with well-organized, overwhelming evidence. A consequence of a well-organized mind is a facility for good written and oral communications. As a writer and editor, I've learned that the inability to express oneself clearly in writing is often symptomatic of a disorganized mind. I don't mean that we expect everyone to write deathless prose like a Hemingway or Melville. Good technical writing is well organized, clear, and straightforward, and it doesn't depend on a 500,000-word vocabulary. True, there are some unfortunate individuals who express themselves superbly in writing but fall apart in an oral presentation, but they are a pathological exception. Usually, a well-organized mind results in clear (even if not inspired) writing, and clear writing can usually be transformed through training into good oral presentation skills.
9. Skeptical.
That doesn't mean hostile, though. I mean skepticism in the sense that nothing is taken for granted and that all is fit to be questioned. Only tangible evidence in documents, specifications, code, and test results matter. While they may patiently listen to the reassuring, comfortable words from the programmers ("Trust me. I know where the bugs are") and do it with a smile-they ignore all such in-substantive assurances.
10. Self-Sufficient and Tough.
If they need love, they don't expect to get it on the job. They can't be looking for the interaction between them and programmers as a source of ego-gratification and/or nurturing. Their ego is gratified by finding bugs, with few misgivings about the pain (in the programmers) that such finding might engender. In this respect, they must practice very tough love.
11. Cunning.
Or as Gruenberger put it, "low cunning." "Street wise" is another good descriptor, as are insidious, devious, diabolical, fiendish, contriving, treacherous, wily, canny, and underhanded. Systematic test techniques such as syntax testing and automatic test generators have reduced the need for such cunning, but the need is still with us and undoubtedly always will be because it will never be possible to systematize all aspects of testing. There will always be room for that offbeat kind of thinking that will lead to a test case that exposes a really bad bug. But this can be taken to extremes and is certainly not a substitute for the use of systematic test techniques. The cunning comes into play after all the automatically generated "sadistic" tests have been executed.
12. Technology Hungry.
They hate dull, repetitive work; they'll do it for a while if they have to, but not for long. The silliest thing for a human to do, in their mind, is to pound on a keyboard when they're surrounded by computers. They have a clear notion of how error-prone manual testing is, and in order to improve the quality of their own work, they'll find ways to eliminate all such error-prone procedures. I've seen excellent testers re-invent the capture/playback tool many times. I've seen dozens of home-brew test data generators. I've seen excellent test design automation done with nothing more than a word processor, or earlier, with a copy machine and lots of bottles of whiteout. I've yet to meet a tester who wasn't hungry for applicable technology. When asked why they didn't automate such and such, the answer was never "I like to do it by hand." It was always one of the following: (1) "I didn't know that it could be automated", (2) "I didn't know that such tools existed", or, worst of all, (3) "Management wouldn't give me the time to learn how to use the tool."
13. Honest.
Testers are fundamentally honest and incorruptible. They'll compromise if they have to, but they'll righteously agonize over it. This fundamental honesty extends to a brutally realistic understanding of their own limitations as human beings. They accept the idea that they are no better and no worse, and therefore no less error-prone than their programming counterparts. So they apply the same kind of self-assessment procedures that good programmers will. They'll do test inspections just like programmers do code inspections. The greatest possible crime in a tester's eye is to fake test results.

Gmail now plays MP3s

Microsoft just became very slightly redundant. If you send an MP3 file as an attachment to someone's Gmail account, they can now play it from within Gmail.

Presumably if it is a song you'll want to store it, rate it, organize it... but this is just a tiny example of how online apps are eating away at Windows...

A Guide to Game Design

Play games. No seriously, play games! You should develop your interest and knowledge of games for inspiration. Looking at other games helps you see what works, what doesn’t, what has been popular, and what has sunk without trace.
Taking inspiration from other games needn't be a euphemism for stealing. Honest! There are times when the way a game implements something really seems like the optimum way of doing it, so why try to change it? Take the player energy bar in, say, Tekken. When your character gets hit, it shows the amount of damage the hit inflicted before shrinking the bar. Can you think of a better way of showing energy in a fighting game? If you can't, don't change it!
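That damage-display behaviour can be sketched in a few lines. This is a toy model of the general idea, not Tekken's actual implementation; the drain rate and method names are my own assumptions:

```python
class EnergyBar:
    """Fighting-game style energy bar: a hit instantly carves out a
    highlighted 'pending' chunk showing the damage taken, then that
    chunk drains away over the following frames."""

    def __init__(self, max_energy=100):
        self.max_energy = max_energy
        self.energy = max_energy   # solid part of the bar
        self.pending = 0           # highlighted damage, not yet drained

    def hit(self, damage):
        damage = min(damage, self.energy)
        self.energy -= damage
        self.pending += damage     # the player sees how big the hit was

    def tick(self, drain_per_frame=2):
        """Call once per rendered frame; shrinks the highlighted chunk."""
        self.pending = max(0, self.pending - drain_per_frame)


bar = EnergyBar()
bar.hit(30)                        # bar immediately shows a 30-point chunk
assert (bar.energy, bar.pending) == (70, 30)
for _ in range(15):                # chunk drains over the next 15 frames
    bar.tick()
assert bar.pending == 0
```

The point of the two-stage display is readability under pressure: the player gets an instant, lingering cue of how hard the hit was, even if they blinked during the impact frame.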
Concept is everything. Pick your favourite game. I bet you can describe the game concept in a paragraph or so. Of course there’s lots more to the game than that, but the general premise is always pretty simple. If you’re trying to put together a design for your game (and these days they really are mandatory) then you need to be able to sum up the concept succinctly. If you can’t then the chances are that you’re trying to do something way too complicated. But more importantly, if someone can’t be bothered to read your concept, then they sure as hell won’t bother playing your game.
So you have your concept. Flesh this out to a couple of pages, including only the necessary details. At this stage you don’t need to know about the reflective particles that spray forth from the great sword when you pick up the orb of destiny, or whatever.
Don’t run off and start building the game just yet. Don’t even fire up your C compiler. You need to run it by a few folks before you start going mad and begin coding. Getting it all down on paper will help organize your ideas, and will probably highlight a few omissions on the way. Show it to your friends, and see what they think. Take on board suggestions because from here on in there’ll be lots more.
Getting started
But let’s begin very simply. You can’t plough on in there and just write a game. Even the best-laid plans will encounter some problems along the way. Implementation is the proof of your concept, and it may need altering.
So you should begin by making a test-bed, and you might have to write some tools or converters to help you. It's probably worth mentioning at this point that games are generally developed on a PC rather than a Mac, so getting a good PC is an important first step. To kick off with, how are you making your graphics? (Even if you're not going to draw the final graphics yourself, you can use "placeholders" that give the gist of what you're trying to achieve.) Are you using Paint Shop Pro, DPaint, or Photoshop? And how are you getting the BMPs, GIFs, TIFs, TGAs or whatever into your code? A browse on the web should get you some free code that'll let you decompress graphics formats, or even a paint package if you need one.
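As an example of the kind of free format-reading code you can find or write yourself, here is a minimal BMP header reader in Python. It only pulls the image dimensions out of the standard BITMAPFILEHEADER/BITMAPINFOHEADER layout and ignores the pixel data; a real loader would also need to handle compression, palettes, and the other formats mentioned above:

```python
import struct


def bmp_dimensions(data: bytes):
    """Read width/height from a Windows BMP. In the standard layout the
    width is a little-endian int32 at byte 18 and the height at byte 22."""
    if data[:2] != b"BM":
        raise ValueError("not a BMP file")
    width, height = struct.unpack_from("<ii", data, 18)
    return width, abs(height)      # height can be negative (top-down rows)


def make_test_bmp(width, height):
    """Build a minimal uncompressed 24-bit BMP in memory, just to
    exercise the parser without needing a file on disk."""
    row = width * 3
    padded = (row + 3) & ~3        # each pixel row is padded to 4 bytes
    pixels = b"\x00" * (padded * height)
    file_header = struct.pack("<2sIHHI", b"BM", 54 + len(pixels), 0, 0, 54)
    info_header = struct.pack("<IiiHHIIiiII", 40, width, height, 1, 24,
                              0, len(pixels), 2835, 2835, 0, 0)
    return file_header + info_header + pixels


print(bmp_dimensions(make_test_bmp(2, 3)))   # -> (2, 3)
```

Twenty lines like these are often all a prototype needs; swap in a proper image library once the game itself is worth the effort.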
Your test-bed program should deal with the simple, yet crucial, elements of your game concept. Displaying your game, the controls, etc. Don’t be surprised if it doesn’t actually work out quite how you expected it to. This is where you can tweak it to make sure everything works and that it’s fun. Again, get other people to look at it and give you feedback.
The nightmare scenario at this point is that even after tweaking no-one thinks the game is any fun to play at all. It does happen, even with the most promising game concepts, and it’s better to find out quickly than waste even more time and money forcing it to work.
Play the game. Play it some more. Play it until you bleed. And have everyone else do the same. This is when you'll find bugs in the code, your logic and so on. You may end up changing the design, but so long as the concept remains intact you’re laughing. By the end of this stage your game play should be spot on. (Here’s hoping!)
Art & Twinkly Stuff
If you’ve reached the lucky state where your game is playing well, and the feedback from your guinea pigs is positive, then it’s about time you got some proper graphics in there. If you’re not an artist then don’t draw your own - "programmer graphics" are fine for placeholders, but leave it at that. You need to get some high-quality graphics in there; they’re not going to help the game play, but they will transform your game into something people will be drawn to.
Now is a great time to add all the superfluous, but nice, effects to your game, whether that’s particle system explosions, flashy lighting, 3D effects or whatever else. And put those game-enhancing sound effects in; they really are important, yet so many people just whack them in at the last minute with no regard for their quality.
Interface your game
Play the game even more. And put your user interface in. And test it. The user interface is going to vary a great deal, depending on the type of game you’ve done. If it’s some kind of point-and-click game (perhaps real time strategy) then you’re going to be using icons. They must be really, really clear.
Another aspect to user interface is your front-end. This is really important. You should assume that the attention span of games players is absolutely minimal. How many button presses does it take to start a new game? It should be as few as possible. When they fire up a game the user should be presented with a small number of choices, start game, load game, options, for example. If you have a menu with pages of different choices then you’re over-complicating things. As before, look at other games, and see how they do it.
Well, the game is pretty much there, but you need to get the levels built. Sadly this is the tedious bit. All the excitement of starting the game has gone, and the long haul has begun. But don’t despair, you’ve come this far, and you believe in your game (don’t you?).
So get to designing and building those levels. And yes, play them. Predictably you’re going to have to get other people to play them and react to their comments. You won’t design the perfect level straight away. You won’t build a level and never touch it again. Don’t even think about that, it’s not going to happen. You will get sick of your game, and especially the levels you have built, but everybody does, and the pain has to come before the pleasure, OK?
Back to Reality
You’ve been working for some time now, you’re probably swearing quite a lot, but getting this game finished is the only thing on your mind. But you have to take a moment and stand back. You can be your own worst critic, and you have to be. How good is your game? Be honest with yourself and don’t shirk the responsibility of going back and changing stuff if it needs changing! So, deep breath now…
Once you get through this stage then treat yourself: do some more twinkly stuff, you deserve it.
Finish the bloody thing!
Fix your bugs. Tweak your code. Touch up your graphics. Clip your sound. Ensure that the game is easy to get into, and gets progressively harder. Make sure that none of your testers have died of boredom. Test it, test it, test it.
It’s all over!
The game has left your hands. It’s a tearful farewell, but you’re glad to see the back of it. Never again. Never again! But then you had this idea about a game where you’re a plumber right, and… Of course, the development of a major game probably won’t follow this exact route but it’ll certainly include all these stages in some order or other. Hopefully this has given you an idea of the complexity of what goes into making a game in the 21st century. So, do you think you can cut it?

Tuesday, September 19, 2006

Internet Project Risks and Mitigation Strategies:

Risk: Personnel shortfalls
Bring on a skilled core team.
Have the team mentor new people.
Make training and teamwork part of the culture.
Hire top-notch personnel while the market remains soft.

Risk: Misalignment with business goals
Align development with business goals and highlight the importance of development.

Risk: Unrealistic customer and schedule expectations
Make the customer part of the team.
Set schedule goals around frequent deliveries of varying functionality.

Risk: Volatile technology
Introduce new technology slowly, according to a plan.
Use technology because it supports business goals, not because it is the latest and greatest thing to do.

Risk: Unstable software releases
Stabilize requirements and designs as much as practical.
Plan to re-factor releases from the start.
Don't deliver applications when quality is poor and systems crash (say "no").

Risk: Constant changes in software functionality
Manage functionality using releases.
Deliver working prototypes before you target new functionality.

Risk: Even newer methods and more unstable tools
Introduce new methods and tools slowly, as justified by the business case, not merely because they are new and appealing.
Make sure methods and tools are of production quality.

Risk: High turnover
Set clear expectations and measures of success.
Make staff feel they are learning, growing, and gaining valuable experience.

Risk: Friction within the team
Staff the team carefully with a compatible workforce.
Build the team and provide it with leadership.
Manage conflicts to ease friction.

Risk: Unproductive office space
Acquire dedicated workspace for the team.
Provide appropriate collaboration tools.
Make plenty of space available for meetings (and pizza).

Source: "Ten Deadly Risks in Internet and Intranet Software Development", Donald Reifer, IEEE Software, March/April 2002
Methods & Tools - News, Facts & Comments Edition - November 2003

As Internet-based applications form a major trend in software development these days, many new projects face these risks. Some of the risks and mitigation strategies have perhaps lost some of their pertinence (high turnover, for instance), but most of them are still valuable.

Test Driven Development proves useful at Google:
Google is building very sophisticated products, with complex cutting-edge technologies and never-before-tried algorithms, optimizations, and heuristics. Their applications have significant scalability needs and have to deal with difficult issues such as spam, bots and attacks.

Testing is very important because of all of these challenges, plus the software needs to be durable--it can't crash.

Google believes that great code comes from happy engineers. Their engineering structure is very very flat. Engineers are largely self-managing and take on a lot of responsibility. There is a very strong peer review culture, and engineers are empowered to set their own goals. This structure creates an organization of very motivated and productive engineers. This makes the engineers feel empowered to build quality software.

Google has gone through tremendous growth in code base, users, and engineers. More systematic processes for testing and analysis have been added.

Google has focused a lot on the early part of the development process: quality via design and review. Design documents are required for all non-trivial projects, and a formal peer review process is followed. All changes to the code base require peer review. There are strict programming style guidelines, with a formal initiation to those guidelines for all new engineers. Great code comes from a good early design and review process! The process moves a bit slower because of this, but quality and end results are better.
The goals of testing and analysis are a smooth development process without build breakage (Unit Testing and XP have made a big impact here), functional correctness and backward compatibility, robustness, scalability and performance, and understanding user needs/improving functionality.
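The unit-testing practice behind "no build breakage" is straightforward to sketch. This minimal example uses Python's unittest (analogous to the JUnit mentioned later); the function under test is a made-up stand-in for production code:

```python
import unittest

def normalize_query(q: str) -> str:
    """Toy stand-in for production code under test (hypothetical)."""
    return " ".join(q.lower().split())

class NormalizeQueryTest(unittest.TestCase):
    # In a continuous build, a failing test here flags the breakage
    # before the change reaches the shared code base.
    def test_collapses_whitespace(self):
        self.assertEqual(normalize_query("  Hello   World "), "hello world")

    def test_empty_input(self):
        self.assertEqual(normalize_query(""), "")

# Run the suite programmatically, as a continuous-build script might.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(NormalizeQueryTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print("build OK" if result.wasSuccessful() else "build broken")
```

A continuous build simply runs suites like this on every check-in and refuses the change (or pages someone) when the run is not successful.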

Standard Google practices include unit tests and functional tests, continuous builds, a last known good build, release branches with authorized bug-fix check-ins, focused teams for release engineering, production engineering and QA, bug tracking, and logging of production runs.

Google brings in XP consultants to educate engineers, employs extreme feedback mechanisms like monitors and ambient orbs for visual feedback. They have specialized test and analysis tools during production and prior to production. Sometimes, they have fix-it weeks for fixing bugs, writing tests, improving documentation, etc.

When Google introduced XP to improve quality and other metrics, they hired a team of XP consultants and paired consultants with engineers. They created short projects for the employees with testing/XP as a theme. The teams focused on understanding code base and building good unit tests, and other TDD aspects. How to use infrastructure such as JUnit. XP was introduced at Google about 8 months ago. Engineers are not forced to use XP, but XP adoption is going well. Already, they have seen improvements in key metrics and in stability.

First steps were to build functional tests for existing code, develop tests that fail for bugs in the bugs database, unit tests for existing servlet handlers, unit tests and TDD for new code, and fix-it weeks devoted to developing unit tests and functional tests.

Current status: very stable builds due to unit tests, and better backwards compatibility due to unit and functional tests. More TDD is planned, with many more tests. The goal is to reach a stage where XP and testing offer benefits beyond build stability and backwards compatibility--much more quality in production software.

Google keeps extensive logs of production runs and has tools and APIs to process them. Rule-based and stream-based tools feed off the logs and produce graphs, call pages, etc. Exception traces during production are extracted from the logs, and each stack trace is assigned to a particular engineer for further analysis (done with clever correlation with the SCM system to find the right engineer).
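A toy version of that trace-routing idea can make it concrete. Everything below - the log text, the ownership map, the path-prefix routing rule - is invented for illustration; the real system correlates with the SCM history instead:

```python
import re

LOG = """\
2006-09-19 10:02:11 INFO request ok
Traceback (most recent call last):
  File "search/frontend.py", line 42, in handle
    results = rank(query)
ValueError: empty query
2006-09-19 10:02:12 INFO request ok
"""

# Hypothetical ownership map: file path prefix -> engineer to notify.
OWNERS = {"search/": "alice", "ads/": "bob"}

def assign_trace(log: str):
    """Route the first stack trace found by the file named in its frames."""
    m = re.search(r'File "([^"]+)"', log)
    if not m:
        return None  # no traceback in this log
    path = m.group(1)
    for prefix, engineer in OWNERS.items():
        if path.startswith(prefix):
            return engineer
    return "unassigned"

print(assign_trace(LOG))  # -> alice
```

The payoff of automating this is that production exceptions become actionable bug reports with an owner, rather than noise scrolling past in a log file.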
Servers are packaged together with heap inspection methods, which can be invoked through special commands to the server. This produces a full memory dump which is then analyzed off-line by tools to locate problems. This is necessary because it is impossible to replicate the production system in its full complexity.

Google has many databases and uses an O-R mapping layer to hide the details of access. Often, performance issues are related to database access. Tools are used to identify the database queries resulting from different servlet handlers, and ratchet tests fail if database activity exceeds a threshold.
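A ratchet test of this kind is easy to sketch. Nothing below reflects Google's actual tooling; the counting wrapper and the handler are hypothetical stand-ins for an instrumented database layer:

```python
QUERY_CEILING = 3  # current known-good maximum for this handler (hypothetical)

class CountingDB:
    """Hypothetical wrapper that counts every query issued through it."""
    def __init__(self):
        self.query_count = 0

    def execute(self, sql):
        self.query_count += 1
        return []  # stub result set

def list_orders_handler(db):
    # Hypothetical handler: two lookups per request today.
    db.execute("SELECT * FROM orders")
    db.execute("SELECT * FROM customers")
    return "ok"

def ratchet_test():
    db = CountingDB()
    list_orders_handler(db)
    # Fails the moment a change pushes query volume past the recorded ceiling.
    assert db.query_count <= QUERY_CEILING, (
        f"handler issued {db.query_count} queries; ceiling is {QUERY_CEILING}")

ratchet_test()
print("ratchet test passed")
```

The "ratchet" name comes from how the ceiling is maintained: whenever the handler is genuinely optimized, the ceiling is lowered to the new count, so performance can only ever tighten, never quietly regress.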

Multi-threaded program behavior is a difficult area to test, as these programs may demonstrate bad behavior (e.g. race conditions) only under circumstances that may be impossible to reproduce. Static analysis alone can't reliably catch this behavior, so Google has a hybrid system that runs on the system under test for a short time and then performs static analysis on the resulting execution trace.

Key messages: self-motivated engineers, grass-roots adoption of XP and other new techniques, and continuously improving productivity and quality are what help Google keep up with their tremendous growth. Unit testing has helped them improve their infrastructure and work better together.

Monday, September 18, 2006

Contingency Planning:
Your Organizational How-To Guide

At first pass, your organization's contingency planning and testing seems time-consuming and non-value added. And it can be! It also seems pessimistic: planning worst-case scenarios can be depressing work when most folks would rather be entering the future with a spirit of optimism. However, the very process of contingency planning can get an entire organization positively thinking about the importance of various business systems. In a fast-paced environment, the contingency planning exercise can lead to implementing better systems and processes overall.

A few years ago, Y2K underscored the urgency for contingency planning. Today, most quality driven organizations will have a contingency plan and contingency planning process. Why?
1. You need to effectively deal with a rapidly changing business and technology environment
2. You need to understand and document the business processes that are vital to your company's business.
Bottom line? An organizational contingency plan can reduce business risk.
A solid procedure can make contingency planning a manageable and positive experience that produces a workable plan.

Steps for creating a contingency plan
First, senior staff needs to decide who is the lead for contingency planning. Usually a Strategic Planning or Quality department manager is best suited for this task. A contingency plan is a requirement for many quality systems - you may want to go to your Quality Department for guidance on plans or processes that are already in development.
The company-wide contingency plan leader provides tools, skills and a knowledge base so that each department can write its own contingency plans. (A common misconception is that the contingency plan leader should be writing all contingency plans. This would be near-impossible: subject matter experts closest to the system have the best working knowledge, and therefore are best suited to writing and brainstorming with their department.) The leader's key functions are to provide a common means for writing and reporting; to train; to set deadlines; to promote enthusiasm and to mentor.
For example, there are many ways to write and store plans. Many templates and databases are available for an organization. The lead decides how plans are organized: will the organization use a similar set of folders? A database? A special network drive? The intranet? The company-wide lead provides the organization with common tools and training so that everyone is following a similar process that produces a standardized plan.
After the leader trains and equips a person in every department to act as an area leader, the localized contingency planning process includes the following elements:
1. List every business process in the department. (Example: Payroll might be listed in the Human Resources plan.)
2. List the tasks for every business process and the steps it takes to complete these tasks.
3. For every step, list every dependency (computer hardware, software, external & internal suppliers.)
4. Rate the likelihood for each dependency to fail. (Prioritize! Usually 1-High, 2-Medium or 3-Low works well. Coding with H, M or L usually doesn't work as well, because those three letters don't sort alphabetically in priority order. Remember this when you design your database!)
5. Assume that every dependency will fail, beginning with 1-High dependencies. Write a contingency action that accomplishes the task without relying upon the dependency.
Once you’ve analyzed business functions this way, you’ll be able to create contingencies at the appropriate places. In many areas, the contingency will be at the task level; in other areas at the process level; still others may be at the department level.
In some cases, no viable contingency is possible. If power goes down, and you have no generator, you aren't doing any business. If this is the situation with any specific process, make a note of it and describe what you’ll do if the dependency fails.
Structure your contingency plan positively, and involve the appropriate people - and the right number of people. It’s a big task, after all, and it will require input from many people.
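The warning in step 4 about letter codes is easy to demonstrate: sorted numerically, priority codes come out in priority order, but H/M/L sorted alphabetically puts Low before Medium. (The dependency names below are invented.)

```python
# Numeric priority codes sort in priority order.
numeric = sorted([(2, "phone system"), (1, "payroll server"), (3, "coffee machine")])
print([code for code, _ in numeric])   # -> [1, 2, 3]  i.e. High, Medium, Low

# Letter codes do not: alphabetically, H < L < M.
letters = sorted([("M", "phone system"), ("H", "payroll server"), ("L", "coffee machine")])
print([code for code, _ in letters])   # -> ['H', 'L', 'M']  i.e. High, Low, Medium
```

Any database, spreadsheet, or report that sorts on the priority column will exhibit exactly this behavior, which is why the numeric scheme is the safer default.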

Testing Your Contingency Plan
Testing every contingency in your plan is time- and cost-prohibitive. To make testing manageable, test in four stages. Each stage should build on the results of the previous stage. If an area proves to be unsound, or if it conflicts with other contingency plans, you can re-write and re-test the plan.
Stage 1 - Senior Staff Review
The senior staff selects an internally-publicized date and time to review all contingency plans. Aside from ensuring overall business soundness, this review also serves to recognize people who have thoughtfully completed their assignment. Knowledge of a firm date for a senior staff review will increase quality, accuracy and timeliness.
Stage 2 - Interdepartmental Reviews
Each department should review another department’s plans. The goal of this stage is to find bottlenecks, identify conflicts and allocate resources. If possible, departments that are "downstream" in the business process can review the plans of "upstream" departments.
Stage 3 - Failures in Critical Systems
This testing can be localized within departments. It involves simulating system or vendor failures. You don't actually have to shut down critical equipment or processes - you can role-play a "what if" scenario. You can either run a "surprise" drill or plan a role-playing event for a specific time.
Stage 4 - The Real Deal
This testing involves short-term shutdowns in key areas. If possible, these tests should be conducted in a real-time environment. The goal, of course, is to fully test the contingency plan. Concentrate this last phase of testing only on areas that have a high business priority and a high risk for failure.
By implementing testing in four stages, you can optimize your time and accomplish the goal of proving that the contingency plan is valid.

Creating and Testing: Summary
While creating and testing contingency plans may seem like a time-consuming, non-value-added investment in resources, it can be planned to create positive change within a company. When people take a closer look at their everyday assumptions about work to ask a variety of "What if. . . .?" type questions, the results can often lead to more efficient processes.

Remember: the Chinese word for "crisis" is often said to combine the characters for "danger" and "opportunity."

Top 20 Funny Replies by Software Programmers to Software Testers when their programs don’t work...
20. "That's weird..."
19. "It's never done that before."
18. "It worked yesterday."
17. "How is that possible?"
16. "It must be a hardware problem."
15. "What did you type in wrong to get it to crash?"
14. "There is something funky in your data."
13. "I haven't touched that module in weeks!"
12. "You must have the wrong version."
11. "It's just some unlucky coincidence."
10. "I can't test everything!"
9. "THIS can't be the source of THAT."
8. "It works, but it hasn't been tested."
7. "Somebody must have changed my code."
6. "Did you check for a virus on your system?"
5. "Even though it doesn't work, how does it feel?"
4. "You can't use that version on your system."
3. "Why do you want to do it that way?"
2. "Where were you when the program blew up?"
1. "It works on my machine"

The Innovative Tester
"After becoming a QA professional, you started finding fault in everything." This was my mother’s response when I pointed out the excess salt in my dinner. I then told her the story of the testing company that lost its business when the representative of the company claimed that they only "test to pass the system."
Initially, testing was a subset of the Software Development Lifecycle (SDLC): the developers performed validation on the software programs they had coded themselves, then released them to the users to verify the critical business flows before going into production. The volume of business losses due to bugs inherent in these systems has led businesses to seek independent verification and validation.
Stand Over the Developers
While the developers concentrate on the allocated programs and modules, the testers need to see the overall perspective of the application. If a programmer understands the functional and design specifications of the functions allotted to him, he can do a better job. But testers should broaden their vision to understand the entire functionality of the application. Testers should be comfortable with inputs, expected results, and data flows between modules and interfaces.
Unless you have a thorough knowledge of the functionality of the system, it is difficult to gain overall insight into the application. So, before you involve yourself in testing an application, make sure you spend enough time understanding the complexities of the application under test. Knowledge of the system will help you: suggest valid improvements to the system, perform complete, meaningful tests on the system, improve your leadership capabilities by extending help to other testers, and substantiate your arguments while defending the defects found in the system.
Plan to Crack the System
Perhaps you have read the 70:30 Project Management story in circulation about two different types of Project Managers. Structured planning is the basis for sound testing. Instead of hasty, unstructured planning, you should spend 70 percent of your time in systematic planning and preparation of the test strategy, testware, and test execution methodologies. By doing that, you can execute the tests unblemished within the scheduled timelines. Always remember to:
Plan the execution schedule.
Review all the test documents.
Add traceability to ensure coverage.
Prepare the test cases and scripts.
Plan the data requirements and availability.
Decide on the proper strategy depending upon the types of testing required.
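The traceability point in the list above can be made concrete with a small sketch. The requirement and test-case IDs below are invented; in practice the mapping might live in a spreadsheet or a test management tool:

```python
# Hypothetical requirement -> test case traceability matrix.
traceability = {
    "REQ-001 user login":     ["TC-01", "TC-02"],
    "REQ-002 password reset": ["TC-03"],
    "REQ-003 audit logging":  [],  # gap: no test covers this yet
}

# Coverage check: any requirement with no test cases is a hole in the plan.
uncovered = [req for req, cases in traceability.items() if not cases]
print("Requirements without coverage:", uncovered)
```

Running a check like this before test execution starts is a cheap way to catch requirements that would otherwise ship untested.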
Crack the System
Change yourself. Crack the system. Create implicit test conditions and cases. See the system from the user's perspective. The role of QA is gaining importance because the various systems in production are still afflicted with bugs. These defects lead to unexpected downtime for the business, and the financial loss caused by bugs and downtime can be enormous. Bugs in mission-critical applications may be catastrophic. Increasingly, the board is held responsible for unexpected business losses due to bugs.
As a test engineer, you perform your role: verify and validate the system against the business user requirements. Even if you detect hundreds of bugs, nobody will applaud, because you are simply doing your job. But when one unknown defect is unearthed in production, it will backfire on the entire QA team. Your role is to ensure that no known bugs exist in the system.
Identify and Improve Processes
Although test engineers need to work within the defined boundaries of the processes and procedures established within their work environment, there is always room for continuous process improvement. Expect the unexpected. Identify the loose ends in the process and bridge the gaps with innovative quality improvement processes. Analyze the pattern in the defects identified in the previous releases of the application. Tighten the test cases to capture the hidden defects. Perform a complete review for ambiguous statements in the requirements documents, which may give rise to different interpretations and ultimately to bugs in the system.
Develop a Killer Instinct
Follow this list to help you develop a killer instinct.
1. Deliver quality.
2. Avoid repetition.
3. Test to destroy.
4. Improve the process.
5. Create a knowledge base.
6. Analyze the test results.
7. Verify the test environment.
8. Help others learn from you.
9. Do not hesitate to take help from others.
10. Read between the lines in the base line documents.
11. Generate required data to execute all the test cases.
12. Use lateral thinking in developing test cases.
13. Identify the gaps between the various baseline documents.
14. Learn the new testing technologies and tools around you.
15. Keep updating your business knowledge; it is that business that keeps us in work.
16. Communicate properly. Poor presentation has sent several testers home.

You cannot have your cake and eat it too. Unless the system is put through rigorous testing, the credibility of the QA team will be undermined. Place yourself in the shoes of the business users. Your interaction with others may give rise to different perspectives, which will fine-tune your test cases and help you unearth a hidden bug.