First, implement lean, Goldratt’s TOC, Deming’s ideas, Kanban, and other related concepts, then get busy with CMMI.
What you may not know is that lean is easier and faster than CMMI, and generates better performance results sooner.
Lean resolves delivery problems sooner than process improvement alone. Improved delivery improves revenue, stabilizes cash flow, increases margins, makes customers happier, and results in more sales.
In other words, lean means better flow and better flow means better business.
CMMI is great, but it’s often deployed as a first line of offense against issues it’s not meant to deal with. CMMI is meant to improve flow, not define it, and lean helps define flow.
Assuming there are unfulfilled orders in the sales pipeline, a lack of revenue is due to a lack of flow. Typically, the trouble is what’s in the flow, how much is in it, and how clearly and cleanly the operation’s flow is aligned. Using CMMI to "fix" issues with flow is like using the Brownian motion of steeping tea to power a random-number generator: it’s just too much machinery too soon. Process issues are themselves symptoms of flow issues.
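To make "how much is in it" concrete, here’s a rough back-of-the-envelope sketch using Little’s Law (average work-in-process = throughput × average cycle time). The numbers are invented purely for illustration:

```python
# Little's Law: WIP = throughput x cycle time, so cycle time = WIP / throughput.
# All numbers below are invented for illustration.

wip = 30            # orders currently somewhere in the pipeline
throughput = 2.0    # orders completed (and billed) per week

cycle_time = wip / throughput
print(f"Average order-to-revenue time: {cycle_time:.1f} weeks")   # 15.0

# Halve what's in the flow -- no new "process maturity" required --
# and revenue shows up twice as fast:
print(f"With half the WIP: {(wip / 2) / throughput:.1f} weeks")   # 7.5
```

No process-area implementation changes that math; reducing what’s in the flow does.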
Deal with the symptoms first. Then, tackle the processes.
Two events to put on your radar:
Lean Software and Systems Conference: Boston, 13-18 May (Lean Camp & Lean Action Kitchen on Sunday, the conference Monday-Wednesday, and tutorials Thursday & Friday). I’m helping to organize the conference, speaking at it, and running a tutorial on this topic on Thursday.
Kanban Change Agent Masterclass: Miami, 23-25 May. I’ll be participating as a special guest to demonstrate how Kanban helps achieve CMMI ratings, including High Maturity.
Apple, Inc. learned the hard way what happens when engineering isn’t complete. In particular, when verification and/or validation aren’t performed thoroughly.
Verification is ensuring that what you’re building meets requirements. ON PAPER. BEFORE you commit to making the product. It’s the part where you do some analysis to figure out whether what you think will work will actually do what you expect it to do. For example: walking through an algorithm or an equation by hand to make sure the logic or the math is right, or stepping through some code to see what’s going on before you assume it is behaving.

Just because something you built passes tests doesn’t mean it is verified. All passing tests means is just that: you passed tests. Passing tests assumes the tests are correct. So if you’re going to rely on tests instead of verifying the requirements or the design, then the tests themselves need to be verified. Another problem with tests is that too many organizations only test at the end. Verification looks a lot more like incremental testing. Hey, wait! Where’ve we seen that sort of thing before?
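As a purely hypothetical illustration of how a passing test can still be unverified, imagine a signal-strength-to-bars mapping whose test was written from the same (wrong) thresholds as the code. Everything below is invented; it is not Apple’s actual algorithm:

```python
# Hypothetical sketch: a dBm-to-bars mapping and the test that "verifies" it.
# The threshold values are invented for illustration.

def bars(dbm: int) -> int:
    """Map received signal strength (dBm) to a 0-5 bar display."""
    thresholds = [-101, -95, -89, -83, -77]  # one threshold per bar (invented)
    return sum(dbm >= t for t in thresholds)

def test_bars():
    # These expected values were derived from the SAME (wrong) thresholds
    # as the code, so the test passes even if the thresholds misrepresent
    # real-world signal quality.
    assert bars(-110) == 0
    assert bars(-90) == 2
    assert bars(-70) == 5

test_bars()  # green -- but "passing tests" only means the tests passed
```

The test is green, but only verifying the thresholds themselves against the actual requirement (say, the carrier’s signal-quality spec) would catch the error.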
Had Apple’s verification efforts been more robust, they would have caught the algorithm error that incorrectly displays the signal strength (a.k.a. "number of bars") on the iPhone 4. This is why peer review is so central to most verification steps. The purpose of peer review, and of verification, is to catch defective thinking. OK, that’s a bit crude and rude… it’s not that people’s thinking is defective, per se, but that one person’s thinking alone doesn’t catch everything, which is why we like to have other people look at our thinking. Even Albert Einstein submitted his work for peer review.
Validation is ensuring the product will work as intended when placed in the users’ environments. In other words, it’s as simple as asking, “When real users use our product, how will they use it, and will our product work like we (and they) expect it to work?” Sometimes this can’t be done on paper and you need some sort of “real” product, so you build a prototype. Just as often it can’t be done “for real” because you don’t get an opportunity (yet) to take your product into orbit before it has to go into orbit to work. Sometimes you only get one shot, so you do what you can to best approximate the real working environment. But neither of these extreme conditions can be used by Apple as an excuse for not validating whether the phone will work as expected while being held by the user to make calls.
Had Apple’s validation been operating on all bars, they likely would have caught this while in the lab. While the phone sits in its sterile, padded vise in some small anechoic chamber, after great care has been taken to ensure there are no unintended signals and nothing metallic touching the case, someone might’ve noticed, “Gee, do you think our users might actually make calls this way?” And instead of the response being, “That’s not what we’re testing here,” someone might’ve stepped up and asked, “Hey, does our test plan have anything in it where we run this test while someone’s actually using the phone?”
Again, testing isn’t enough. Why not!? After all, isn’t putting it in a lab, with or without someone holding the phone, a test? True. But I go back to the same issue we saw when using testing as the primary means of performing verification: testing is too often at the end, and validating at the end is too late. You need to validate along the way. In fact, it’s entirely possible that Apple *did* do validation “tests” of the case separately from the complete system, and that in *those* tests, where the case and antenna were mere components being tested in the lab, everything performed fine; only when the unit was assembled and tested as a complete system would the issue have been found. In such a scenario we learn that component testing (known elsewhere as “unit testing”) is not enough. We also need system testing (in the lab) and user testing (in real life). Back we go to iterative and incremental…
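Here’s a sketch of that “components pass, system fails” scenario. All names and numbers are made up for illustration; this is not Apple’s data:

```python
# Hypothetical sketch: why a component can pass in isolation while the
# assembled system, in a user's hand, fails. All values are invented.

ANTENNA_GAIN_DBI = 2.0           # antenna alone meets its bench spec
HAND_GRIP_LOSS_DB = 20.0         # detuning when a hand bridges the antenna gap
MIN_USABLE_SIGNAL_DBM = -100.0

def received_signal(tower_dbm: float, held: bool = False) -> float:
    signal = tower_dbm + ANTENNA_GAIN_DBI
    if held:
        signal -= HAND_GRIP_LOSS_DB   # only appears at the system + user level
    return signal

for label, held in [("component on the bench", False),
                    ("assembled phone, held by a user", True)]:
    s = received_signal(-95.0, held=held)
    verdict = "PASS" if s > MIN_USABLE_SIGNAL_DBM else "FAIL"
    print(f"{label}: {s:.1f} dBm -> {verdict}")

# component on the bench: -93.0 dBm -> PASS
# assembled phone, held by a user: -113.0 dBm -> FAIL
```

The component-level result is honestly green; the failure only exists at the level of the assembled system in a real user’s hand, which is exactly where late, lab-only validation never looks.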
So you see… we have a lot we can apply from ordinary engineering, from agile, and from performance improvement. Not only does this… uh… validate(?) that “agile” and “CMMI” can work together, but also that in some situations there’s much to be gained from applying both.
In full disclosure, as a new owner of an iPhone 4, I am very pleased with the device. I can really see why people love it and become devotees of Apple’s products. Honestly, it kicks the snot out of my prior “smart” phone in every measurable and qualitative way. And, just so I’m not leaving anything out, the two devices are pretty much equally balanced in functionality (web, email, social, wifi, etc.) – even with the strange behaviors that are promised to be fixed. For a few years, this iPhone will rule the market and I’ll be happy to use it.
Besides being embarrassing, this will be an expensive couple of engineering oversights for Apple to fix. And they were entirely avoidable with an up-front investment in engineering at an infinitesimal fraction of the cost and time it will take to fix them. For even less than one day of their engineering and deployment team’s salary, AgileCMMI could make sure this never happens again.
Sorry, folks, no fun (or not-so-fun as you may prefer) video today. Not even any pictures I took at SEPG. In fact, as far as today went, I don’t have much to report from the sessions.
Again, I missed the plenary session, this time on account of a phone meeting with a client in another time zone. So my first session to attend was the other of my two collaborative efforts with Judah Mogilensky, on SCAMPI Evidence from Agile Projects. As with anything Judah is part of, it went rather nicely. Many generous bits of feedback. I felt really good about my role, and Judah was his usual incomparable self.
My friend and colleague Eileen Forrester of the SEI was kind enough to give me some supremely powerful feedback. I am, and will be, grateful for it. I was then roped into shop talk about CMMI for Services in advance of the second half of the orientation workshop I’m helping her with. Thus I missed out on my buddy Jeff Dalton’s excellent (so I’m told from many reports) job with Encapsulated Process Objects.
One point made to me later by another of the few “agile-friendly” lead appraisers, Neil Potter, about a bit of content in the presentation does require some follow-up. In the presentation we short-cut the details of a discussion regarding the potential design aspects of test-driven development relative to engineering design. To be clear: TDD is NOT the same as design, but depending on how TDD is planned and performed, it can include design-like attributes that could accomplish the design expectations in the engineering process areas of CMMI-DEV. So don’t anyone out there go around blabbing some “Hillel said TDD is design!” crap. Mm’K?
After lunch, my job was to keep people from falling asleep with a session on Love and Marriage: CMMI and Agile Need Each Other. From the response, I think it went rather well. I, personally, was quite pleased with how it came off by a “talk per slide” metric. A good friend, Tami Zemel, later admitted that she “takes back” her earlier criticism of Monday’s presentation. She’d said it had too many words, and didn’t believe me when I told her why. She complimented not only the picture-centricity of today’s pitch but also the delivery, style, and content. That was very generous; thank you.
From then to the end of the day, I was scheming, strategizing, schmoozing, and networking with too many people to mention. (No offense.) A client who came to the conference (who never holds back and only inflates the truth when it’s funny to do so) got very serious when a prospect I’d recently met off-the-cuff asked whether he’d recommend me. I won’t repeat his answer because it really was just crazy nice. Today’s interesting photo is in his honor. (And also because my boys love transportation.)
The last “session” was a Peer 2 Peer double-header on the topic I mentioned on Monday, which I co-created with Michele Moss. She and I are also on the SEPG Conference Program Committee. We used the feedback and other data from the Peer 2 Peer as input to a retrospective on this year’s conference, which will inform strategies for next year’s conference in Portland, OR.
You can also read an entry I gave to the SEI for their official blog about my impressions of this year’s conference-goers.
Dinner conversation back at the hotel with Michele returned to the subject of our Peer 2 Peer session. Net result: we single-handedly wrote the 1-3-5 year plan for all SEPGs. Or at least we think so.