September 26, 2008

A bit about ITIL

The company I work for creates software that helps organizations with huge IT infrastructures manage their IT and tie it closely to their business needs.

As IT organizations mature and are pressed to improve and deliver more value for money, we see they tend to look for "best practices" for managing their IT business. ITIL is one of the sources they go to, so we decided to learn ITIL in order to "speak" the language our customers speak and understand the way they see their domain.

ITIL stands for Information Technology Infrastructure Library. It is actually a collection of books, describing different aspects of managing an IT organization. It is organized into several parts, including:
  • Service strategy - how to decide what services to provide to bring value to your customers
  • Service design - how to design and implement the service
  • Service transition - how to move the new or updated service from the lab to production
  • Service operation - how to operate the service and support the users on an ongoing basis
  • And...
  • Continuous service improvement - how to keep improving these processes and make them even better (maybe "bester"?)

BTW, "Best Practices" reminds me of Dilbert, asking the pointy-haired boss: if everybody is doing a "best practice", doesn't it become a "mediocre practice"?

I rather enjoyed the training - we had an excellent instructor with YEARS of tough industry experience, and excellent examples for everything.

To read more about ITIL, go to:
ITIL official site - http://www.itil-officialsite.com/AboutITIL/WhatisITIL.asp
Wikipedia - http://en.wikipedia.org/wiki/Information_Technology_Infrastructure_Library

ISTQB? Check. ITIL? Check.

I am now ISTQB and ITIL foundation certified, and I have an observation about these certifications. In both cases I studied and took the exam alongside people who are new to the field of IT and software testing. I saw that for the experienced pros, the study was focused on agreeing on words and terms to describe processes and objects they work with daily. These folks had a rather easy time, both when preparing and in the exam itself.

The other part of the crowd was inexperienced and tried to learn the glossaries by heart, memorizing them as best they could for the exam. Some of them didn't manage to do so. I believe that learning glossaries by heart, without an experience base to tie the concepts to, is futile.

The test itself was hard for non-native English speakers. It is a time-limited multiple-choice test, filled with semi-tricky questions. I happen to be quite good at this sort of test, but others were disappointed and said the ITIL test was "nasty".

Bottom line: in the future, I will recommend 2 things to my team members and colleagues:
1) go do the certification when you have enough experience to make it valuable
2) arrange study groups - this way you can share experience with others and enjoy the company while learning for the exams

P.S.
The ISTQB certification comes framed in a nice glass frame, looking very professional - a lot more "impressive" than the academic degree that took me years to complete. Feels more like PR than a certificate of achievement...

August 01, 2008

More bugs in the ISTQB certification exam

In the last post I wrote about 2 bugs found in the ISTQB certification exam. Well, it turns out there was another one - or actually 2:
  1. The ITCB site (Israeli Testing Certification Board) said that results would be emailed after 2 weeks. The instructor who supervised the exam told us it would take 3 weeks. This sort of bug is called an inconsistency bug. The main effect of inconsistency bugs is user confusion and loss of trust.
  2. The results were promised 3 weeks after the test. That date is behind us now. This is a bug, since a bug can be defined as a gap between user expectations and actual behavior.

So, at this point I am a bit disappointed at not getting a score for a 40-question multiple-choice exam within 3 weeks. The surprise for me is that I am actually waiting for the score - not so cool :-P

July 14, 2008

ISTQB certification exam - done

I took the ISTQB foundation level exam last Thursday. I feel pretty confident about my answers, but must admit that I had to guess a few of them.

As you might have guessed, you cannot gather a bunch of testers in a room without them finding bugs and talking down the product quality :-) We found 2 bugs in the test:
1. Medium (and resolved on the spot): 1 character omitted, but it was a significant one, differentiating between sentence II and sentence III...
2. High (and gave us a free correct answer): a question regarding a certain decision table did not include... the decision table! Well, we are going to get this one for free.

June 30, 2008

Some updates and promises for posts-to-come

Been away for a while - funny thing about broken habits: our wireless router broke down, and I didn't manage to fix it (or get myself to buy a new one), so I stopped connecting to the internet with my laptop when I am home. Immediate result: lost my home-internet habits. Call it a vacation. (BTW, I am writing this post from the kids' PC.)

Things that I need to update about in posts-to-come:
  1. ISTQB self-study group: the certification exam is next week (July 10th). I feel ready, and have things to say about the ISTQB syllabus.
  2. Ruby: made some progress (thanks for the reference to Aptana Studio, Rich!); need to share something I don't get in Ruby, as well as rcov - a nice (too?) simple code coverage tool.
  3. SIGIST conference: been to the Israeli Special Interest Group In Software Testing (SIGIST) conference. I gave a presentation about how to extract test cases from UML.
  4. Mind maps: I am hooked on mind maps, using them as my new external brain and memory. Is this a passing thing?

May 07, 2008

Cool things in Ruby

I have been using Ruby for quite some time now, but only now am I beginning to see what I missed...

It started when we began using WATIR to shoot HTTP transactions at the application we test. WATIR is based on Ruby, driving Internet Explorer's COM object - I recommend it for quick, useful scripting.

What caught my attention was the Ruby under WATIR - the scripts were easy to write, readable, and contained clever "tricks" that saved a lot of scripting time (I'll give examples later).

Well, as they say: "I came for WATIR and stayed for Ruby".


cc Tom Goskar

Ruby is a cute scripting language that gives a much nicer feeling from a usability perspective than Perl. For starters, it is OO. It is very flexible with regard to syntax. It is rich with functionality and extensibility options.

Don't worry, there are downsides too (remember that I am only starting to understand Ruby, so there is a high chance I am BSing here). For example, I still do not have a convenient working environment for scripting and debugging. The editor that comes bundled with Ruby for Windows (SciTE) colors keywords and lets you run the script from the editor, and the interactive Ruby command line (irb) is a nice option that lets you script and run interactively, but this is not enough for me - I am used to the rich coding experience of Eclipse. So what I did for now is install the Ruby plug-in for Eclipse, and I'll update later on how convenient it is.

Well, I want to give a few examples for why I like Ruby and why it is cool.

I like Ruby's arrays. They come with many services built in as methods and let you use the array as a stack or queue with no effort. Also, you can add arrays, subtract them, keep only the unique elements, sort, etc. - ultra convenient.
myArray = [1,2,3,4,5]
myOtherArray = [2,4]
myDiffArray = myArray - myOtherArray # result will be [1,3,5]
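The stack and queue claim above can be sketched in a few more lines (a toy example of mine, using only core Array methods):

```ruby
# An Array as a stack (LIFO): push adds to the end, pop removes from the end
stack = []
stack.push(1)
stack.push(2)
top = stack.pop          # top is 2

# The same Array class as a queue (FIFO): push at the end, shift from the front
queue = []
queue.push("a")
queue.push("b")
first = queue.shift      # first is "a"

# The set-like conveniences mentioned above, chained together
merged = ([1, 2, 3] + [3, 4]).uniq.sort   # merged is [1, 2, 3, 4]
```

No separate Stack or Queue class needed - the one Array class covers both.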

I also like Ruby's built-in iterators, allowing you to split anything (more or less) into pieces and operate on each piece until you do them all:
myArray = [1,2,3,4,5]
myArray.each do |element| # element is a variable that gets each array element in turn
  puts element # puts prints to the console
end
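And each is only the beginning; a couple more built-in iterators that make the "split into pieces and operate on each piece" idea concrete (again, toy examples of mine):

```ruby
myArray = [1, 2, 3, 4, 5]

# map builds a new array by transforming each element
doubled = myArray.map { |n| n * 2 }       # [2, 4, 6, 8, 10]

# select keeps only the elements for which the block is true
evens = myArray.select { |n| n.even? }    # [2, 4]

# each_slice literally splits the array into pieces of a given size
slices = myArray.each_slice(2).to_a       # [[1, 2], [3, 4], [5]]
```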

Another thing I like about Ruby is its rich core library and even richer collection of extensions, covering (probably) everything I need. As with other user-contributed libraries, quality and completeness are not guaranteed, and you have to work hard to separate the good things from the bad.
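To give a tiny taste of that core library, the standard distribution alone already covers things like sets and date arithmetic (a toy sketch of mine, standard library only):

```ruby
require 'set'
require 'date'

# Set from the standard library: uniqueness for free
seen = Set.new
[1, 2, 2, 3].each { |n| seen << n }       # duplicates are silently dropped

# Date arithmetic without any extra gems: subtraction gives days between dates
gap = Date.new(2008, 7, 10) - Date.new(2008, 6, 30)
```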

I just started climbing the learning curve here, so I'll update later on things I find.

If you want to see Ruby in action in a few minutes, do the Ruby in 20 minutes tutorial.

April 29, 2008

Back from family trip to the USA

Back from a ten-day tour of New York City and Orlando - now I understand why some say that Disney World is a "once in a lifetime" experience - it will take me a long time to recover from the masses of people and the long lines.

But enough crying, got a bug and a feature to share.

Bug:
See if you can spot the bug in this postcard (click to enlarge):


Feature:
The ride I enjoyed most in Disney World was Epcot's "Test Track". It simulates a General Motors test department, with loads of displays telling the story of the automotive testing industry and its silent heroes - the crash-test dummies. Among the things I enjoyed seeing were test result cards, formatted as you would expect, with test ID, test scenario, expected result, actual result, passed/failed, etc. - the entire story! I felt at home.



When you get to the actual ride, you sit in a car and participate in a test session directed by a test manager and a test engineer, who take you and the car through a series of challenges, including steep hills, extreme weather, emergency braking (with and without ABS) and an extreme-speed drive around the facility.

What more can a tester want? (hint: not to find real bugs in the ride :-#)

April 06, 2008

"Error Guessing" as a software testing method

I admit that good testers can guess where bugs will be found (or at least say "I told you!" after they find them). BUT, I don't like seeing "Error Guessing" listed as a test technique in software testing books. If it can't be taught, it is not a technique (IMHO).

Look at ISTQB glossary definition (pdf!):
error guessing: A test design technique where the experience of the tester is used to
anticipate what defects might be present in the component or system under test as a result
of errors made, and to design tests specifically to expose them.

So, the tester should apply the old intuition to draw on her past and guess where the errors will hit this time. This sounds trivial on the one hand (learning from experience should be a basic human feature), but also very vague: suppose I have a tester with zero experience - can I teach her to guess errors? If I have 2 testers with the same experience - will they guess the same errors?

So, what can be done to improve your error guessing techniques?
  • Improve your memory
  • Improve your technical understanding:
    • Go into the code, see how things are implemented, understand concepts like buffer overflow, null pointer assignment, array index boundaries, iterators, etc.
    • Learn about the technical context in which the software is running, special conditions in your OS, DB or web server.
  • Remember to look for errors not only in the code
    • Errors in requirements
    • Errors in design
    • Errors in coding
    • Errors in build
    • Errors in testing (we never make mistakes, do we?)
    • Errors in usage
An error-guessing story (wake up ego!):
A new team member was asked to plan tests for a web application screen that displayed items in batches of 25. I reviewed his test cases and asked him to add a test to search for missing items, or duplicated items, around the "stitch" between the batches. After executing the tests, he came to me, saying that indeed the last item in each batch was duplicated, and appeared again as the first item in the next batch. He looked at me with awe and asked, "How did you know that this would happen?" I knew because this is exactly what I missed when testing a paged list a few years back - a customer found it for me... (Ego, back to sleep.)
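That batch-boundary bug is easy to model in a few lines of Ruby (a toy sketch of mine, not the actual application's code): an inclusive end index hands each batch one item too many, so the last item of one batch reappears as the first item of the next.

```ruby
ITEMS = (1..100).to_a
BATCH = 25

# Buggy paging: the range end should be exclusive, but isn't
def buggy_batch(items, page, size)
  items[page * size..(page + 1) * size]   # off-by-one: includes one extra item
end

# Fixed paging: start index plus length yields exactly one batch
def fixed_batch(items, page, size)
  items[page * size, size]
end

# The duplicate at the "stitch": item 26 ends batch 0 AND starts batch 1
duplicated = buggy_batch(ITEMS, 0, BATCH).last == buggy_batch(ITEMS, 1, BATCH).first
```

A test like the one I asked for simply compares the last item of each batch with the first item of the next - it catches this in one run.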


(CC)

April 05, 2008

ISTQB self-study group

We started an ISTQB self-study group at work last week. We got around 18 people on the interested-leads list; I expect the actual number will be much lower.

The idea is to use the book "Foundations of Software Testing: ISTQB Certification" as a study guide, meeting once a week to discuss the current sections/chapters.

I read the first 3 sections, which cover the basic questions "what is testing" and "why is testing necessary". I felt the book was oversimplified and naive. To give you an example, the authors tend to go into lengthy analogies that bring little value (for example, comparing a software product to a tomato in order to discuss definitions of "quality").

On the other hand, I think that for beginners it may do the trick, and serve as a clear introduction to software testing. I will write more as we progress.

April 01, 2008

bug/feature: dusty BSOD, Many Eyes

Bug:
A lengthy and detailed report of analyzing a Blue Screen Of Death situation ends up with cleaning dust off the CPU fan and heat sink, and presto! The BSOD is gone.
Memo to myself - try it on my laptop (got a couple of BSODs last week).

Feature:
Many Eyes is a wonderful site that lets you upload data to share with others and use visual analysis to study your data and make points about it to the world in a persuasive way. You can also play around with data provided by others, to see what you can extract from it.
The collaborative nature of this site allows users to discuss the data itself, and the statistical inferences made by data users.

Innovative data visualization tools are a must in this age of accelerating data accumulation. The best example I have seen is Hans Rosling's presentation at TED - I suggest investing 20 minutes in this entertaining and genius presentation. Done? Go play with data on the Gapminder website.

March 21, 2008

Amazing art/tech: animatic sculptures

Go see this genius in action, building impossible animal-like sculptures.
So we have "animatic" inputs both from the military industry (see previous post) and from art.

Following last post: the nature of testing

People asked me what I meant about the all-terrain robot being tested. Besides the obvious (lab footage of the robot completing tasks, such as climbing bricks and jumping over an obstacle), there is a single moment in the video that represents, for me, an inner truth about testing:

Testing requires a certain cruelty...

Look at how the tester kicks the robot at 00:35. Let me describe it for you: the team is working on this ingenious invention, a wonderful machine with unbelievable capabilities; they must love it (OK, except the annoying sound). I bet they have a nickname for it. This is a condition in which you want to see your loved one succeed.

Instead, they take it out to test it in the worst ever conditions - in this case literally walking on thin ice. To top it, this guy just kicks it out of balance, in the most vicious way!

We are required to do the same thing with the software we test: bring it to the limit, then push and observe whether it fails gracefully, crashes into an ugly crisis or recovers brilliantly. Leaving sentiments aside - we must kick it hard. If we don't, reality will.

March 19, 2008

Testing an all-terrain robot

Amazing video of a military all-terrain robot (looks like a headless donkey). The video also covers tests in the lab; it is interesting to see how it handles different surfaces.

March 17, 2008

feature: www.lexisum.com - quick wiki definitions

Feature only this time....
www.lexisum.com is a simple UI that allows you to type a word or concept and get definitions for it from Wikipedia. I usually use Google's "define:" for that, but this little gadget is nice.

March 13, 2008

James Lyndsay: Why Can't Testers Code?

A new testing magazine is accessible online at www.testingexperience.com - congratulations!

James Lyndsay, a British software testing consultant, caught my eye with a challenging short piece called "Why can't testers code?".

The main claim is that there is a growing skills gap between testers and developers, because testers can't code while developers can test. This skills gap pushes testers into manual chores, while developers write and execute large numbers of high-quality test cases, usually at the unit test level. Let me add that there is a growing number of sophisticated tools that allow developers to automatically create, run and measure unit tests (Agitar, TestNG and Clover are a few examples).

Although this is not the case in many testing teams, I had a live demo of what James was talking about: a friend called me and said that his management had decided to move all test automation activities from the test team to the development team. In James Lyndsay's words:
"this will be a call to arms: your colleagues are doing the interesting parts of your job, and you're rolling over to let them"

I totally agree with James and his call to testers to take test coding back into their own hands. A tester who cannot express herself in code is limited in her capabilities and dependent on others. A tester who cannot code is limited in her understanding of the things that can go wrong (and will...) in different application areas. A tester who can't automate will burn out quickly, repeating the same boring tasks again and again.

We are not a bunch of button-pushers, we are software people, capable of writing testware ourselves.

March 08, 2008

bug/feature: Acid3 test, avoiding left-turns

Bug:
I am usually pretty happy with my Firefox 2. Today I ran the Acid3 test on it, only to discover that it scored 51% (failed...). The test includes 100 automated browser spec compatibility tests that cover DOM structure and functionality, advanced CSS and other browser features I am not familiar with. Try running Acid3 on your browser.

What puzzles me is this: if Firefox scored 100% instead of 51%, would I be twice as happy with it?


Image source: http://en.wikipedia.org/wiki/Acid3


Feature:

You know what it feels like to wait for the opposite lane to clear while making a left turn, right? UPS added a feature to its navigation software to avoid left turns as much as possible, leading to vast savings in time and money across their huge truck fleet.

I got this one from Slashdot, where the reporter speculated that this feature was probably devised by a grad student rather than by solving the traditional traveling salesman problem...

March 06, 2008

bug/feature: crashing voting machines, cool wii 3d

Bug:
Reports of voting machines crashing due to an unhandled event: users dragging their fingers on the touch screens fired a "drag & drop" event which was not handled. Boom.

Feature:
Genius Johnny Chung Lee presents head tracking for navigation in 3D worlds, based on the Wii remote. Hackerish, wild and inspiring.


(screen shot from the YouTube clip)

bug/feature: Nissan Navara, Nokia Morph

Bug
The Nissan Navara was graded poor in NCAP crash tests. Turns out this was due to a software failure, firing the airbags too late (see the impact on the driver's head in the NCAP video). This bug was fixed in later versions, says Nissan.



Feature

Nokia presents "Morph", a futuristic combination of cellular technology with nanotech components. Morph can be charged by sunlight using nano-scale grass (!), fold into any shape, be worn and even smell. See the demo, read more.

Post 1: What's it all about

bug/feature presents pairs of interesting bugs and features that I stumble upon from time to time.
Other posts may suggest test-related ideas or insights.
I am a software test engineer by trade. This is my playing field.