Apple, Inc. learned the hard way what happens when engineering isn’t complete. In particular, when verification and/or validation aren’t performed thoroughly.
Verification is ensuring that what you’re up to meets requirements. “ON PAPER.” BEFORE you commit to making the product. It’s the part where you do some analysis to figure out whether what you think will work will actually do what you expect it to do. Such as walking through an algorithm or an equation by hand to make sure the logic is right and the math is right. Or stepping through some code to see what’s going on before you assume it’s behaving. Just because something you built passes tests doesn’t mean it is verified. All passing tests means is just that: you passed tests. Passing tests assumes the tests themselves are correct, so if you’re going to rely on tests instead of verifying the requirements, the design, and so on, then the tests are what need to be verified. Another problem with tests is that too many organizations only test at the end. Verification looks a lot more like incremental testing. Hey wait! Where’ve we seen that sort of stuff before?
Had Apple’s verification efforts been more robust, they would have caught the algorithm error that incorrectly displayed the signal strength (a.k.a., “number of bars”) on the iPhone 4. This is why peer review is so central to most verification steps. The purpose of peer review, and of verification, is to catch defective thinking. OK, that’s a bit crude and rude… it’s not that people’s thinking is defective, per se, but that one person’s thinking alone doesn’t catch everything, which is why we like to have other people look at our thinking. Even Albert Einstein submitted his work for peer review.
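To make that verification point concrete, here’s a minimal sketch. Everything in it is invented for illustration (this is not Apple’s algorithm; the thresholds and the “requirement” are made up): a bars-mapping function and a test of it. A passing test only confirms the code matches the test author’s expectations; verification is walking those expectations back to the stated requirement by hand, or, better, having a peer do it.

```python
# Hypothetical requirement (invented for illustration, not Apple's spec):
#   "Display 5 bars at -51 dBm or stronger, 4 bars down to -91 dBm,
#    3 bars down to -101 dBm, 2 bars down to -107 dBm,
#    1 bar down to -113 dBm, and 0 bars below that."

def bars(rssi_dbm: float) -> int:
    """Map a received signal strength (dBm) to a 0-5 bar display."""
    thresholds = [-113, -107, -101, -91, -51]   # weakest cutoff first
    return sum(1 for cutoff in thresholds if rssi_dbm >= cutoff)

def test_bars():
    # These expected values are what the TEST author believed.
    # A test can encode the same wrong assumption as the code and still pass,
    # so these expectations have to be verified against the requirement above.
    assert bars(-50) == 5
    assert bars(-95) == 3
    assert bars(-120) == 0

if __name__ == "__main__":
    test_bars()
    print("tests passed -- but were the thresholds verified against the requirement?")
```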
Validation is ensuring the product will work as intended when placed in the users’ environments. In other words, it’s as simple as asking, “when real users use our product, how will they use it, and will our product work the way we/they expect it to work?” Sometimes this is not something that can be done on paper and you need some sort of “real” product, so you build a prototype. Just as often it’s not something that can be done “for real,” because you don’t get an opportunity (yet) to take your product into orbit before it has to go into orbit to work. Sometimes you only get one shot, so you do what you can to best approximate the real working environment. But neither of these extreme conditions excuses Apple from validating whether or not the phone would work as expected while being held by a user making calls.
Had Apple’s validation been operating on all bars, they likely would have caught this while in the lab. With the phone sitting in its sterile, padded vise in some small anechoic chamber, after great care had been taken to ensure there were no unintended signals and nothing metallic touching the case, someone might’ve noticed, “gee, do you think our users might actually make calls this way?” And, instead of responding, “that’s not what we’re testing here,” someone might’ve stepped up and said, “hey, does our test plan have anything in it where we run this test while someone’s actually using the phone?”
Again, testing isn’t enough. Why not!? After all, isn’t putting it in a lab, with or without someone holding the phone, a test? True… However, I go back to the same issue we saw when using testing as the primary means of performing verification: testing is too often at the end. Validating at the end is too late. You need to validate along the way. In fact, it’s entirely possible that Apple *did* do validation “tests” of the case separately from the complete system, and that in *those* tests, where the case/antenna were mere components being tested in the lab, they performed fine, and only when the unit was assembled and tested as a complete system would the issue have been found. In such a scenario we learn that component testing (elsewhere known as “unit testing”) is not enough. We also need system testing (in the lab) and user testing (in real life). Back we go to iterative and incremental…
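Here’s an equally rough sketch of that scenario, again with every number invented for illustration (the signal levels and grip loss are assumptions, not real RF data): the antenna “component” passes its check in isolation, and only the system-level check that models a hand bridging the antenna gap exposes the problem.

```python
# Purely illustrative numbers -- not real RF measurements.
NOMINAL_SIGNAL_DBM = -85.0      # signal seen in the lab fixture
MIN_USABLE_DBM = -105.0         # assumed "call still works" floor
HAND_GRIP_LOSS_DB = 24.0        # assumed loss when a hand bridges the antenna gap

def call_works(signal_dbm: float, grip_loss_db: float = 0.0) -> bool:
    """Does the radio still see a usable signal after any grip loss?"""
    return (signal_dbm - grip_loss_db) >= MIN_USABLE_DBM

if __name__ == "__main__":
    # Component-level lab check: antenna alone, padded vise, anechoic chamber.
    print("component test (no hand):", call_works(NOMINAL_SIGNAL_DBM))            # True
    # System/user-level check: same hardware, held the way people actually hold phones.
    print("system test (hand grip): ", call_works(NOMINAL_SIGNAL_DBM,
                                                  HAND_GRIP_LOSS_DB))             # False
```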
So you see… we have a lot we can apply from ordinary engineering, from agile, and from performance improvement. Not only does this… uh… validate(?) that “agile” and “CMMI” can work together, but it also shows that, in some situations, others can learn from applying both.
In full disclosure, as a new owner of an iPhone 4, I am very pleased with the device. I can really see why people love it and become devotees of Apple’s products. Honestly, it kicks the snot out of my prior “smart” phone in every measurable and qualitative way. And, just so I’m not leaving anything out, the two devices are pretty much equally balanced in functionality (web, email, social, wifi, etc.) – even with the strange behaviors that are promised to be fixed. For a few years, this iPhone will rule the market and I’ll be happy to use it.
Besides being embarrassing, these will be expensive engineering oversights for Apple to fix. And they were entirely avoidable with an up-front investment in engineering at an infinitesimal fraction of the cost/time it will take to fix them. For less than one day of their engineering and deployment team’s salary, AgileCMMI can make this never happen again.
In this quote, Capt. Kirk wants Dr. “Bones” McCoy to do something McCoy feels he’s not qualified to do because he doesn’t know how to treat the species.
I’m using it to explain that organizations looking for a lead appraiser to work with them towards an appraisal and/or to perform an appraisal ought to think of what we do as they would think of a doctor, not a laborer or vendor.
Do you really want the lowest price doctor?
For that matter, is the highest price doctor necessarily the best in town?
When reaching out and interviewing for a lead appraiser or CMMI consultant, you:
Want the person who is right for the job.
Want someone who is qualified (definitely not under-qualified, but preferably not over-qualified either).
Don’t simply want the lowest bid.
Seriously, whoever you hire for this effort has in their power the ability to make or break your future. They literally have the health and well-being of your organization in their hands. They can put you in the dump just as easily as they can take you to the next level.
They should see themselves that way as well.
Unfortunately, I’ve got too many sad stories of appraisers/consultants who definitely see that they can make or break you, but who don’t feel they personally own the responsibility for what happens to you when they’re done.
If it costs too much? So what? If you get no value? Not their problem. Didn’t see any benefit? Didn’t learn anything? Things take longer and cost more and you’re not seeing internal efficiencies improve? YOU must be doing something wrong, not them.
In an AgileCMMI approach, your CMMI consultant and/or lead appraiser would see themselves as, and act like, a coach, and would put lean processes and business value ahead of anything else. And an AgileCMMI approach recognizes that when processes work, they add value; when they add value, people like them and use them; and when people like and use them, the next “level” is a big no-brainer-nothing. You get it in your sleep.
Let me know if you want help finding the right lead appraiser or consultant.