Rather than stand around watching, I headed back to my desk to burn some time checking email.
A few testy minutes later I returned to my quest for morning caffeination. The newly refurbished Keurig single-serve coffee machine stood ready to dispense its liquid black magic. I lifted the handle, inserted a single-serve package (aka K cup) of Breakfast Blend, and closed the lid, prepared to press the blinking button with the icon of a large steaming cup.
Instead, the small text display sneered at me. "No K cup detected. Continue anyway?"
A colleague standing nearby chimed in. "It did that to me too. Just hit continue and it will brew."
I asked, "The guy just installed it. Didn't he test it?"
"He made sure it powered up okay. And then he checked that it would dispense hot water."
"But he didn't actually try making a cup of coffee?"
So the technician had run the smoke test and the unit test and was satisfied. That seemed reasonable to my colleague (a developer :) ). But the tech had not tried the machine in the environment in which it was intended to run, or in the manner in which customers were likely to use it. Nor, apparently, had the technician who refurbished it.
As the long-awaited caffeinated liquid lubricated my brain, I could not help but relate this incident back to my own role as a software tester. We increasingly mandate that testing, particularly system testing, be done with realistic customer scenarios and data. Why? Not only is this effective at finding bugs, it also finds the bugs that would be most problematic to our customers.
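To put the coffee-machine analogy in testing terms, here is a minimal sketch of the gap between the checks the technician ran and a realistic customer scenario. The `CoffeeMachine` class and its methods are purely hypothetical stand-ins invented for illustration; nothing here comes from a real API or from the actual machine.

```python
class CoffeeMachine:
    """Hypothetical model of the refurbished brewer; all names are illustrative."""

    def __init__(self):
        self.k_cup = None

    def dispense_hot_water(self):
        # The component the installer verified in isolation.
        return "hot water"

    def insert_k_cup(self, blend):
        self.k_cup = blend

    def _sensor_detects_k_cup(self):
        # The refurbishing bug: the sensor never reports the inserted K cup.
        return False

    def brew(self):
        if not self._sensor_detects_k_cup():
            raise RuntimeError("No K cup detected. Continue anyway?")
        return f"cup of {self.k_cup}"


def test_dispenses_hot_water():
    # Roughly what the technician checked: one component, in isolation. Passes.
    assert CoffeeMachine().dispense_hot_water() == "hot water"


def test_customer_brews_breakfast_blend():
    # The realistic customer scenario: insert a K cup and brew an actual cup.
    # This is the test that exposes the sensor bug the narrow checks missed.
    machine = CoffeeMachine()
    machine.insert_k_cup("Breakfast Blend")
    assert machine.brew() == "cup of Breakfast Blend"  # fails on the buggy unit
```

Run under pytest, the first test passes and the second fails, which is exactly the point: the bug only shows up when you walk through the workflow the customer actually cares about.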
And bugs found before morning coffee are evil.