Tuesday, November 25, 2008

Senate Bill S.3384

In reading Michael Krigsman's blog, I was introduced to Senate bill S.3384, "Information Technology Investment Oversight Enhancement and Waste Prevention Act of 2008." The bill was placed on the Senate Legislative Calendar on 10/1/08.

This bill was introduced by Senator Thomas Carper (D-DE) and cosponsored by Sen. Norm Coleman (R-MN), Sen. Joseph Lieberman (ID-CT), Sen. Susan Collins (R-ME), Sen. Claire McCaskill (D-MO), and Sen. George Voinovich (R-OH). The bipartisan support (two Democrats, three Republicans, and an independent) in itself makes the bill noteworthy.

The bill requires that any government IT investment project that has "significantly deviated" from its projections be reported up the chain of command, ultimately reaching the responsible CIO. A project that is off its projections by more than 20% is considered "significantly deviated." Projects in "gross deviation," meaning more than 40% off their projections, must additionally be reported to the appropriate congressional committee.

Deviations are measured relative to the Earned Value Management (EVM) benchmark, which tracks, at regular intervals, project expenditure, project completion, and project deliverables.

S.3384 requires that this reporting be done on a quarterly basis. This means that deviations must be measured (and reported) long before the project is actually completed.
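
To make the arithmetic concrete, here is a minimal sketch, in Python, of what such a quarterly check might look like. The EVM variance formulas are the standard ones; the project figures and the classify() helper are my own inventions for illustration, not anything the bill specifies.

```python
# Sketch: standard EVM variance measures feeding S.3384's thresholds.
# The project numbers and the classify() helper are illustrative only;
# the bill's text does not specify this exact calculation.

def evm_variances(planned_value, earned_value, actual_cost):
    """Standard EVM cost and schedule variances, as fractions."""
    cost_variance = (earned_value - actual_cost) / earned_value
    schedule_variance = (earned_value - planned_value) / planned_value
    return cost_variance, schedule_variance

def classify(deviation):
    """Map a deviation onto the bill's two reporting tiers."""
    d = abs(deviation)
    if d > 0.40:
        return "gross deviation: report to congressional committee"
    if d > 0.20:
        return "significant deviation: report up to the CIO"
    return "within tolerance"

# Quarterly status check for a hypothetical project (figures in $M):
# $10M of work was scheduled by now, $7M was actually earned, $11M spent.
cv, sv = evm_variances(planned_value=10.0, earned_value=7.0, actual_cost=11.0)
print(f"cost variance     {cv:+.0%}  ->  {classify(cv)}")
print(f"schedule variance {sv:+.0%}  ->  {classify(sv)}")
```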

State governments often follow the lead of the federal government, especially with "watch-dog" legislation. It therefore seems likely that S.3384 will have an impact beyond the federal government, reaching into all areas of the public sector.

EVM benchmarks are highly dependent on the ability of an organization to accurately forecast both deliverables and costs. These are both areas in which the government has historically been weak. I have frequently written about governmental IT projects that have under-delivered, run over budget, finished late, or all of the above. If S.3384 is going to be successful, it will require a major new approach to how the government does IT.

The biggest problem that the government has with IT is its chronic failure to manage complexity. The more complex a project, the harder it is to project cost and predict deliverables, two absolute requirements for an EVM analysis. There is no possible way that S.3384 can be implemented successfully unless the government first takes steps to understand and manage IT complexity.

I strongly support any requirement that the government do a better job with IT. But I think the Senate needs to be realistic. Passing S.3384 without giving governmental IT bodies the tools they need to address IT complexity is setting them up for failure.

Let's phase in S.3384. First, introduce SIP (or some other equivalent methodology) to the government, showing how highly complex systems can be reorganized as collections of much smaller, simpler, autonomous systems. Then hold the government accountable for how it delivers those simpler systems.
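
To see why the first step matters so much, consider a back-of-the-envelope calculation. Robert Glass has observed that a 25% increase in functionality roughly doubles complexity, which implies that complexity grows as roughly the 3.1 power of the number of functions. The sketch below assumes that exponent; all of the numbers are illustrative.

```python
# Toy comparison of monolithic vs. partitioned complexity.
# Assumption: complexity ~ functions ** 3.1. The exponent follows from
# Glass's observation (+25% functionality -> +100% complexity), since
# 1.25 ** 3.1 is approximately 2. All numbers are illustrative.

EXPONENT = 3.1

def complexity(functions):
    return functions ** EXPONENT

monolith = complexity(100)          # one system with 100 business functions
partitioned = 10 * complexity(10)   # ten autonomous systems of 10 functions

print(f"monolith:    {monolith:,.0f} complexity units")
print(f"partitioned: {partitioned:,.0f} complexity units")
print(f"ratio:       {monolith / partitioned:.0f}x simpler when partitioned")
```

The total functionality is identical in both cases; the gain comes entirely from keeping unrelated functions out of each other's systems.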

If the government takes the two steps of IT complexity management and IT accountability, we can transform how the government does IT. We will have a government that delivers IT projects on time, on budget, and meeting the needs of the citizens who pay for those projects. That is you and me.

Wednesday, November 12, 2008

Carving up an Enterprise Architecture

Melissa A. Cook recently published an article in ComputerWorld titled Enterprise Architecture: Lessons from the cutting board. In this article, Melissa likens developing an enterprise architecture to cutting up a chicken for dinner. She gives three lessons for creating an enterprise architecture.

Lesson 1: Cut through the joints; in other words, look for natural boundaries around business processes. As she points out, “if you get the natural boundaries right, it simplifies the number and complexity of interfaces.” How do we do this? Melissa says “we need to take a gander at the enterprise goose, the whole thing.” A good checkpoint is the data: if you see “a ton of data elements or columns of data on an interface, you probably have the boundaries wrong.”

Lesson 2: Natural breakpoints or boundaries are similar for similar organizations. In other words, see what others in similar businesses have done and use that as a model.

Lesson 3: Boundaries don’t change based on size of the organization. Big turkeys carve much like little chickens.

I agree with Melissa strongly on two points. First, if you get your enterprise architecture right, you will make a great leap forward in managing the complexity of your systems. Second, your success in getting the architecture right is reflected in the data structures moving around your system.

Having said that, I feel several points need further comment. Let’s start with her thoughts on data. While I agree that poor enterprise architectures result in complex data patterns, I disagree that this is a good test for evaluating your enterprise architecture. The reason is simple. By the time you can see the problems in the data, it is far too late to do anything about the problems in the architecture. You have already implemented your systems, and any changes to the enterprise architecture at that point are going to be very expensive.

So what is the solution? Any enterprise architecture should be tested against theoretical models for enterprise architectures based on the mathematical laws of complexity. These tests can be conducted BEFORE the architecture is implemented, and they can pinpoint problem areas before money is spent on implementation, while those areas are still relatively inexpensive to fix.

Another area that we need to look at more closely is her lesson 2. While it is true that similar organizations have similar boundaries, this is not particularly helpful information. Why? Because “similar” is not good enough. Even small errors in “cutting up the chicken” can lead to huge problems with complexity. I personally have seen millions of dollars lost due to a single poorly placed function. Again, the answer is to methodically check each function for correct placement, not based on a particular chef’s preferences, but on the strict mathematics of synergies.
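
What might such a methodical check look like? Here is a minimal sketch, assuming we can enumerate which pairs of functions are synergistic, that is, which must change together. The function names, placements, and scoring rule are all invented for illustration.

```python
# Sketch of a mechanical placement check, assuming we can enumerate which
# pairs of business functions are synergistic (must change together).
# The functions, placements, and scoring rule are invented for illustration.

synergies = {
    ("process-order", "validate-order"),
    ("bill-customer", "issue-invoice"),
    ("bill-customer", "post-ledger"),
}

placement = {
    "process-order": "Sales",
    "validate-order": "Sales",
    "bill-customer": "Finance",
    "issue-invoice": "Finance",
    "post-ledger": "Sales",   # suspicious: all of its synergy sits in Finance
}

def misplaced(function):
    """Flag a function with more synergy to a foreign subsystem than its own."""
    votes = {}
    for a, b in synergies:
        if function == a:
            other = placement[b]
        elif function == b:
            other = placement[a]
        else:
            continue
        votes[other] = votes.get(other, 0) + 1
    home = placement[function]
    home_votes = votes.get(home, 0)
    return any(n > home_votes for sub, n in votes.items() if sub != home)

for f in placement:
    if misplaced(f):
        print(f"check the placement of {f!r}")   # flags 'post-ledger'
```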

Don’t get me wrong. Theoretical analysis can only take you so far. But it can take you a lot further than guesswork, checking data patterns, or even looking at industry patterns.

Monday, November 10, 2008

The Job of the Enterprise Architect

ZDNet recently published an article on My Awesome IT Job: Enterprise Architect, IBM. The article is an interview with Martine Combes, an Enterprise Architect working with the CIO Organization of IBM in France. In the article, Martine says, "It is very easy to become overwhelmed by complexity and it is very difficult to maintain simplicity, especially when the scope is large, the number of team members is high. So better to have a simple architecture from the start, clearly showing the benefits for the Business."

On one hand, I applaud Martine's awareness of the importance of simplicity when it comes to Enterprise Architecture. On the other hand, I feel there is some degree of naivety in this understanding of the nature of complexity.

Martine appears to be saying that if you build your architectures simple from the beginning, you will have addressed the problem of complexity. There are several problems with this understanding of complexity. First of all, making an architecture "simple" depends on the eyes of the viewer. What is simple to me may be complex to you. A better goal than making architectures "simple" is to make architectures "as simple as possible." It is not possible to prove that an architecture is "simple," since "simple" is subjective. But it is possible to show that an architecture is "as simple as possible." The test for "as simple as possible" involves showing that the functionality follows the rules of synergistic partitioning. If you can find a subset of functionality that violates synergistic partitioning, then you know the architecture can be made simpler. If there is no such subset, the architecture is as simple as it can possibly get, and any further efforts to simplify it will actually make it more complex.
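
To make "find a violating subset" mechanical, here is a minimal sketch. It assumes, as I read synergistic partitioning, that synergy behaves as an equivalence relation on business functions, so the "as simple as possible" partition is just the set of its equivalence classes. The functions and synergy pairs are invented.

```python
# Sketch: treating synergy as an equivalence relation on business functions
# (a working assumption), the simplest partition is its set of equivalence
# classes, computable with a union-find. Functions and pairs are invented.

def simplest_partition(functions, synergy_pairs):
    parent = {f: f for f in functions}

    def find(f):
        while parent[f] != f:
            parent[f] = parent[parent[f]]   # path compression
            f = parent[f]
        return f

    for a, b in synergy_pairs:
        parent[find(a)] = find(b)           # union the synergistic pair

    classes = {}
    for f in functions:
        classes.setdefault(find(f), set()).add(f)
    return list(classes.values())

functions = ["quote", "order", "ship", "invoice", "ledger"]
pairs = [("quote", "order"), ("invoice", "ledger")]
for subset in simplest_partition(functions, pairs):
    print(sorted(subset))
# -> ['order', 'quote']  ['ship']  ['invoice', 'ledger']
# An architecture that splits one of these subsets across systems, or
# merges two of them into one system, can be made simpler.
```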

My second issue with Martine's understanding of complexity involves the implicit assumption that because an architecture starts simple, it will stay simple. In fact, architectures tend towards increasing complexity unless ongoing effort is put into keeping them simple. It is harder to keep an architecture simple than it is to make it simple in the first place. It requires a continuing analysis of new functionality and rigorous placement of all new functionality according to the rules of synergistic partitioning.
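
Under the same assumption, the ongoing discipline reduces to a mechanical rule: every new function goes in the subset it shares synergy with, never where it happens to be convenient to build. A sketch, reusing the invented names from the example above:

```python
# Sketch of the ongoing discipline: each new function is placed with the
# functions it is synergistic with, never where it is merely convenient.
# Names are invented; the partition matches the sketch above.

def place(new_function, partition, synergy_pairs):
    """Add a new function to the subset it shares synergy with."""
    linked = set()
    for a, b in synergy_pairs:
        if a == new_function:
            linked.add(b)
        elif b == new_function:
            linked.add(a)
    for subset in partition:
        if subset & linked:            # synergy found: it belongs here
            subset.add(new_function)
            return subset
    new_subset = {new_function}        # no synergy: a new autonomous system
    partition.append(new_subset)
    return new_subset

# Example: 'credit-check' must change together with 'invoice'.
partition = [{"quote", "order"}, {"ship"}, {"invoice", "ledger"}]
print(sorted(place("credit-check", partition, [("credit-check", "invoice")])))
# -> ['credit-check', 'invoice', 'ledger']
```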

Whose job is it to make sure that architectures start out "as simple as possible" and stay that way for the system's lifespan? In my view, this is the primary responsibility of the enterprise architect. Only this person has both the business perspective needed to identify synergies and the technical perspective needed to understand partitioning.

An organization that lacks a viable program in enterprise architecture will pay a severe cost in IT complexity. Complexity is the enemy. Enterprise Architecture is the solution. The only solution.

- Roger Sessions