Wednesday, December 3, 2008

ObjectWatch Newsletter: Enterprise Architecture in Hard Times

We are facing some very difficult financial times. Can enterprise architecture survive? From the article: EA can survive hard times, even thrive. But first we must make some major changes in how we view EA. No more poorly defined payoffs that are years in the future. The payoffs must be immediate. They must be real. And they must be compelling. What are these necessary changes? Read the article here and find out.

Tuesday, November 25, 2008

Senate Bill S.3384

In reading Michael Krigsman's blog, I was introduced to Senate bill S.3384, "Information Technology Investment Oversight Enhancement and Waste Prevention Act of 2008." The bill was placed on the Senate Legislative Calendar on 10/1/08.

This bill was introduced by Senator Thomas Carper (D-DE) and cosponsored by Sen. Norm Coleman (R-MN), Sen. Joseph Lieberman (ID-CT), Sen. Susan Collins (R-ME), Sen. Claire McCaskill (D-MO), and Sen. George Voinovich (R-OH). The bipartisan support (two Democrats, three Republicans, and an independent) in itself makes the bill noteworthy.

The bill requires that any government IT investment project that has "significantly deviated" from its projections must be reported up the chain of command, ultimately ending with the responsible CIO. Any project that is off its projections by more than 20% is considered "significantly deviated." Projects in "gross deviation" are further reported to the appropriate congressional committee. "Gross deviation" means a deviation of more than 40% from projections.

Deviations are measured relative to the Earned Value Management (EVM) benchmark. This benchmark includes timely measures for project expenditure, project completion, and project deliverables.
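
To make these thresholds concrete, here is a minimal Python sketch. The bill does not spell out the formulas, so the variance calculations below use standard EVM quantities (planned value, earned value, actual cost) and are purely my own illustration.

    # A minimal sketch, assuming standard EVM quantities; the bill's
    # actual measurement rules may differ.
    def classify_deviation(planned_value, earned_value, actual_cost):
        """Classify a project against the 20% / 40% thresholds."""
        # Cost deviation: how far actual spending is off the value earned.
        cost_dev = abs(earned_value - actual_cost) / earned_value
        # Schedule deviation: how far earned value is off the plan.
        sched_dev = abs(earned_value - planned_value) / planned_value
        worst = max(cost_dev, sched_dev)
        if worst > 0.40:
            return "gross deviation"        # reported to Congress
        if worst > 0.20:
            return "significant deviation"  # reported up to the CIO
        return "on track"

    # Example: $10M of work planned, $8M earned, $12M spent so far.
    print(classify_deviation(10_000_000, 8_000_000, 12_000_000))
    # -> gross deviation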

S.3384 requires that this reporting be done on a quarterly basis. This means that deviations must be measured (and reported) long before the project is actually completed.

State governments often follow the lead of the federal government, especially in "watch-dog" type legislation. Therefore S.3384 is likely to have an impact beyond the federal government, reaching into all areas of the public sector.

EVM benchmarks are highly dependent on the ability of an organization to accurately forecast both deliverables and costs. These are both areas in which the government has historically been weak. I have frequently written about governmental IT projects that have under-delivered, run over budget, finished past due, or all of the above. If S.3384 is going to be successful, it will require a major new approach in how government does IT.

The biggest problem that the government has with IT is its chronic failure to manage complexity. The more complex a project, the harder it is to project cost and predict deliverables, two absolute requirements for an EVM analysis. There is no possible way that S.3384 can be implemented successfully unless the government first takes steps to understand and manage IT complexity.

I strongly support any requirement that the government do a better job with IT. But I think the Senate needs to be realistic. Passing S.3384 without giving governmental IT bodies the tools they need to address IT complexity is setting them up for failure.

Let's phase in S.3384. First, introduce SIP (Simple Iterative Partitions), or some other equivalent methodology, to the government to show how highly complex systems can be organized as much smaller, simpler, autonomous systems. Then hold the government accountable for how it delivers those simpler systems.

If the government takes the two steps of IT complexity management and IT accountability, we can transform how the government does IT. We will have a government that delivers IT projects on time, on budget, and meeting the needs of the citizens who pay for those projects. That is you and me.

Wednesday, November 12, 2008

Carving up an Enterprise Architecture

Melissa A. Cook recently published an article in ComputerWorld titled Enterprise Architecture: Lessons from the cutting board. In this article, Melissa likens developing an enterprise architecture to cutting up a chicken for dinner. She gives three lessons for creating an enterprise architecture.

Lesson 1: Cut through the joints; in other words, look for natural boundaries around business processes. As she points out, “if you get the natural boundaries right, it simplifies the number and complexity of interfaces.” How do we do this? Melissa says “we need to take a gander at the enterprise goose, the whole thing.” A good checkpoint is looking at the data: with “a ton of data elements or columns of data on an interface, you probably have the boundaries wrong.”

Lesson 2: Natural breakpoints or boundaries are similar for similar organizations. In other words, see what others that are in similar businesses have done and use this for a model.

Lesson 3: Boundaries don’t change based on size of the organization. Big turkeys carve much like little chickens.

I agree with Melissa strongly on two points. First, if you get your enterprise architecture right, you will make a great leap forward in managing the complexity of your systems. Second, your success in getting the architecture right is reflected in the data structures moving around your system.

Having said that, I feel several points deserve further comment. Let’s start with her thoughts on data. While I agree that poor enterprise architectures result in complex data patterns, I disagree that this is a good test for evaluating your enterprise architecture. The reason is simple: by the time you notice the problems with the data, it is far too late to do anything about the problems with the architecture. You have already implemented your systems, and any changes to the enterprise architecture at this point are going to be very expensive.

So what is the solution? Any enterprise architecture should be tested against theoretical models for enterprise architectures based on the mathematical laws of complexity. These tests can be conducted BEFORE the architecture is implemented, and they can pinpoint problem areas before money is spent on implementation, while they are still relatively inexpensive to fix.
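
To give a feel for what such a pre-implementation test might look like, here is a small Python sketch. It is my own back-of-the-envelope rendering, not a formal method: it uses the rule of thumb (quoted elsewhere on this blog) that every 25% increase in functionality doubles complexity, which implies complexity grows as roughly the 3.1 power of the number of functions.

    import math

    # Exponent implied by "25% more functionality doubles complexity":
    # complexity ~ functions ** (log 2 / log 1.25), about 3.1.
    EXPONENT = math.log(2) / math.log(1.25)

    def complexity(partition):
        """Relative complexity of a candidate architecture, given as a
        list of subset sizes (business functions per subset)."""
        return sum(n ** EXPONENT for n in partition)

    # One 300-function monolith versus three 100-function subsets:
    print(round(complexity([300])))            # ~49,500,000
    print(round(complexity([100, 100, 100])))  # ~4,900,000

The absolute numbers mean nothing; the ratio is the point. The same functionality, differently carved, can differ in complexity by an order of magnitude before a single line of code is written.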

Another area that we need to look at more closely is her lesson 2. While it is true that similar organizations have similar boundaries, this is not particularly helpful information. Why? Because “similar” is not good enough. Even small errors in “cutting up the chicken” can lead to huge problems with complexity. I personally have seen millions of dollars lost due to a single poor function placement. Again, the answer is to methodically check each function for correct placement, not based on a particular chef’s preferences, but based on the strict mathematics of synergies.

Don’t get me wrong: theoretical analysis can only take you so far. But it can take you a lot further than guesswork, checking data patterns, or even looking at industry patterns can.

Monday, November 10, 2008

The Job of the Enterprise Architect

ZDNet recently published an article on My Awesome IT Job: Enterprise Architect, IBM. The article is an interview with Martine Combes, an Enterprise Architect working with the CIO Organization of IBM in France. In the article, Martine says, "It is very easy to become overwhelmed by complexity and it is very difficult to maintain simplicity, especially when the scope is large, the number of team members is high. So better to have a simple architecture from the start, clearly showing the benefits for the Business."

On one hand, I applaud Martine's awareness of the importance of simplicity when it comes to Enterprise Architecture. On the other hand, I feel there is some degree of naivety in this understanding of the nature of complexity.

Martine appears to be saying that if you build your architecture simple from the beginning, you will have addressed the problem of complexity. There are several problems with this understanding of complexity. First of all, making an architecture "simple" depends on the eyes of the viewer. What is simple to me may be complex to you. A better goal than making architectures "simple" is to make architectures "as simple as possible."

It is not possible to prove that an architecture is "simple," since "simple" is subjective. But it is possible to show that an architecture is "as simple as possible." The test for "as simple as possible" involves showing that the functionality follows the rules of synergistic partitioning. If you can find a subset of functionality that violates synergistic partitioning, then you know the architecture can be made simpler. If there is no such subset, the architecture is as simple as it can possibly get, and any further efforts to simplify it will actually make it more complex.
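
As a rough sketch of the kind of mechanical check this enables, consider the following Python fragment. The function names and the synergy judgments are illustrative assumptions (the actual rules of synergistic partitioning are defined in my book, and the synergy judgments themselves must come from the business side):

    from itertools import combinations

    def violations(partition, synergies):
        """partition: dict mapping function name -> subset name.
        synergies: set of frozensets({f1, f2}) judged synergistic.
        Returns the pairs that break the rules."""
        bad = []
        for f1, f2 in combinations(partition, 2):
            together = partition[f1] == partition[f2]
            synergistic = frozenset((f1, f2)) in synergies
            # Synergistic functions belong together; non-synergistic
            # functions belong apart.
            if together != synergistic:
                bad.append((f1, f2))
        return bad

    partition = {"price_order": "Sales", "check_credit": "Sales",
                 "ship_order": "Fulfillment"}
    synergies = {frozenset(("price_order", "check_credit"))}
    print(violations(partition, synergies))  # [] -> as simple as possible

An empty list of violations is the "as simple as possible" condition; any non-empty list identifies exactly where the architecture can be simplified.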

My second issue with Martine's understanding of complexity involves the implicit assumption that because an architecture starts simple, it will stay simple. In fact, architectures tend toward increasing complexity unless ongoing effort is put into keeping them simple. It is harder to keep an architecture simple than it is to make it simple in the first place. It requires a continuing analysis of new functionality and rigorous placement of all new functionality according to the rules of synergistic partitioning.

Whose job is it to make sure that architectures start out "as simple as possible" and stay that way for the system's lifespan? In my view, this is the primary responsibility of the enterprise architect. Only this person has the business perspective needed to identify synergies and the technical perspective needed to understand partitioning.

An organization that lacks a viable program in enterprise architecture will pay a severe cost in IT complexity. Complexity is the enemy. Enterprise Architecture is the solution. The only solution.

- Roger Sessions

Thursday, October 30, 2008

The Future of Enterprise Architecture

Kristian Hjort-Madsen recently posted a blog entry about the future of Enterprise Architecture in government. Having done some work on architectural simplification for several public sector clients, I got to thinking not only about the future of EA in government, but about the future of EA in general.

It is my belief that we need to rethink Enterprise Architecture from being something that is focused on understanding the Enterprise to something that is focused on delivering value to specific projects. To some extent, this calls into question the name, "Enterprise Architecture", but I still think the name has value because it implies that we are considering project level issues that can only be addressed at the enterprise level.

To go even further, the main value that "Enterprise Architecture" can deliver to projects is in reducing their overall complexity. There are a number of secondary issues on which we can also focus, such as improving business/IT communications, but I believe that most of these will greatly benefit just from reduced project complexity.

Why do we need to focus on complexity at the Enterprise Level? Why not the IT level or the business level? The answer requires an overview of the Science of Simplicity. That is a bit much to cover here, but it is the topic of my latest book, Simple Architectures for Complex Enterprises, and numerous white papers at my web site. In brief, simplification requires a good understanding of both synergistic relationships (which are best understood by the business side of the enterprise) and strong partition boundaries (which are best understood by the IT side of the enterprise). We can thus only address these issues at the juncture point between business and IT, and that juncture point is what we usually call "Enterprise Architecture".

I think that government is perhaps the area that can benefit most from a focus on simplification. Government projects tend to be highly complex and often involve multiple vendors. When complexity is not managed methodically and early on, failure is too often the result. An excellent example of the problems inherent in such projects is the UK NHS system known as NPfIT (National Programme for Information Technology). This system will likely cost 50-100 billion dollars and will most probably end up in failure. Why? Catastrophic complexity.

Why is complexity such a problem? Several studies (and numerous personal observations) have shown that increasing the functionality of a system by 25% increases the complexity of the system by 100%, unless steps are taken to manage the complexity. This means that taking a system from 10 functions to 100 functions increases the complexity more than 1,000-fold. (A ten-fold growth in functionality is about 10.3 successive 25% increases, and 2^10.3 is roughly 1,300.) Clearly this is not acceptable. The only way we can dampen this increase is by focusing on the problem of complexity. And Enterprise Architecture is where we have the opportunity to do so.

So let me summarize my thoughts on where Enterprise Architecture needs to go in the near term, especially in the public sector. First, it needs to move from an enterprise focus to a project focus. Second, it needs to narrow its focus to those problems that can only be solved from an enterprise (that is, a broad business/IT) perspective. Third, it needs to hone in on the one major problem that can only be solved from an enterprise perspective: complexity.

If we, as Enterprise Architects, make these adaptations, we will start making important, measurable project contributions and delivering highly visible short term value. And, in doing so, ensure our own long term survival.

Tuesday, October 21, 2008

Can an EA be "correct"?

In a recent blog post, Nick Malik asserted that EA is not about building apps right, but about building the right apps, and that the correctness of an EA is a point of view.

This was my response:

In my view, EA is not about either building apps right OR building the right apps. It is about creating a structured framework within which meaningful and collaborative dialogues can occur between the business and IT groups. The term that I use to describe this "structured framework" is Enterprise Architecture.

In most organizations, "meaningful and collaborative dialogues" between business and IT do not occur. You must ask why this is so. Is it because the business folks are overly demanding and unreasonable? Is it because the IT folks are unable to understand what the business is saying?

In my view, the problem is neither the business nor IT. The problem is that EA has either not been done at all or, if it has been done, has been done incorrectly. Which brings us back to Nick's original point. Is there a "correct" way to do EA?

The answer is absolutely, yes. But the starting point needs to be an understanding of what we mean by "correct". I define "correct" as "as simple as possible". So of two EAs that both accurately define the enterprise, the more correct one is the simpler of the two. And the EA that is "absolutely" correct is the EA that is the simplest possible EA.

Why do I believe that simplicity is the raison d'etre for EA? The answer goes back to my original question. What is it that gets in the way of meaningful dialogues between business and IT? In my opinion, what gets in the way is complexity.

When IT blames the business for fuzzy requirements, IT is wrong. When the business blames IT for inability to deliver, the business is wrong. The problem is neither IT nor the business. The problem is complexity. Until complexity is solved, dialogue is impossible. Complexity is the enemy, and it is the common enemy of both IT and the business.

So can you determine if an EA is as simple as possible? Again, the answer is yes. But to do so requires both an understanding of the mathematics of complexity and a rational process for testing an EA for being "as simple as possible".

These are both topics I have covered extensively in my book, "Simple Architectures for Complex Enterprises", so if you are interested in these ideas, that is where to go. I also have a number of papers on this topic at my website, http://www.objectwatch.com/.

Best Wishes,
Roger Sessions

Thursday, August 21, 2008

Complexity is not a technology problem

In a recent issue of CIO Magazine, John Parkinson wrote an article titled Simplifying IT Management is Anything But. In it, John discussed the difficulty of seeing through vendor presentations to determine which technologies would really make a difference to an organization.

In my view, if you are trying to understand what technologies to use to address complexity, you are probably too late. The solution to complexity is never found in technology. It is found in architecture. And architecture is mostly technology neutral. My perspective on complexity is that it can only be addressed at the enterprise architectural level, and then only if an organization specifically focuses on it as the overriding issue.

In my book, Simple Architectures for Complex Enterprises, I address the issue of architectural complexity in depth. In summary, we must first understand complexity, then model it, then build processes that can be proven to address it. But the single most important factor is attitude. We need to view complexity as an enemy and simplicity as a core business asset. Every business analyst and IT architect needs to unite in the battle against complexity. Most IT systems fail, and the bigger they are, the harder they fall. When an IT system fails, the reason is almost always unmanaged complexity.

Friday, August 15, 2008

Simple Architectures - The Book (Discussion)

Simple Architectures For Complex Enterprises is now available. You can order it at Amazon. This book introduces a new way of thinking about Enterprise Architecture. The focus of Enterprise Architecture should not be on efficient use of IT, improving alignment between IT and the business, or any of the other traditional concerns of Enterprise Architecture. Why? Because none of these are the real problem, they are merely symptoms of the problem. The real problem is enterprise complexity. This book discusses the nature of enterprise complexity, why complexity causes so many problems for IT, how complexity can be understood, and, most important, how complexity can be attacked.

Would you like to discuss some of the issues raised in Simple Architectures? This is as good a place as any!

Thursday, May 29, 2008

The Five Causes of IT Complexity

Complexity is a big problem for IT. Complex systems cost too much, fail too often, and usually do not meet basic business and architectural requirements.

I believe that all IT complexity can be traced to five causes. Eliminate the five root causes of complexity, and you can eliminate unnecessary complexity.

These root causes are as follows:

  • Partitioning Failures – that is, systems in which data and/or functionality has been divided into subsets that do not represent true partitions (a minimal test for a true partition is sketched after this list).
  • Decompositional Failures – that is, systems that have not been decomposed into small enough subsets.
  • Recompositional Failures – that is, systems that have been decomposed into subsets that are too small and have not been recomposed appropriately.
  • Boundary Failures – that is, systems in which the boundaries between subsets are weak.
  • Synergistic Failures – that is, systems in which functionality and/or data placement does not pass the test for synergy.
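
As a taste of what the first of these failures means in practice, here is a minimal Python sketch (the function names are illustrative): a true partition requires subsets that are disjoint and that jointly cover the whole.

    def is_true_partition(whole, subsets):
        """True if the subsets are disjoint and cover the whole set."""
        union = set().union(*subsets)
        disjoint = sum(len(s) for s in subsets) == len(union)
        covering = union == set(whole)
        return disjoint and covering

    functions = {"quote", "order", "invoice", "ship"}
    good = [{"quote", "order"}, {"invoice", "ship"}]
    bad = [{"quote", "order"}, {"order", "invoice", "ship"}]  # "order" overlaps

    print(is_true_partition(functions, good))  # True
    print(is_true_partition(functions, bad))   # False -> partitioning failure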

I’m planning on exploring these five causes in my next ObjectWatch Newsletter, so if you are interested, stay tuned.

Can anybody think of a cause of IT complexity that is not covered above?

Tuesday, January 22, 2008

How Not To Make IRS Systems Secure

The GAO recently came out with a study (GAO-08-211) that highlights a problem that is frequently associated with IT complexity: poor security. The GAO, for those of you who are not familiar with it, is the United States Government Accountability Office. This is the organization charged with making sure that the U.S. Government is spending our tax dollars wisely.

In this report, the GAO severely chastised the Internal Revenue Service for “pervasive weaknesses” in the security of the IRS IT systems. According to this report, these weaknesses “continue to threaten the confidentiality and availability of IRS’s financial processing systems and information, and limit assurance of the integrity and reliability of its financial and taxpayer information.”

Now you might wonder why the IRS would do such a poor job with IT security. Surely the IRS is aware of the need for IT security!

The reason for the IRS problems is simple. The IRS systems are highly complex. And highly complex systems are notoriously difficult to make secure. Among the problems noted by the GAO:
  • The IRS does not limit user rights to only what is needed to perform specific job functions.
  • The IRS does not encrypt sensitive data.
  • The IRS does not effectively monitor changes on its mainframe.

This is ironic, given how important the IRS considers security. But the GAO finding is an excellent illustration of a point I have made many times: controlling complexity is more important than controlling security. A system whose complexity has been controlled can be made secure relatively easily. A system whose complexity has not been controlled cannot be made secure, regardless of how much effort is expended.

Those readers familiar with my approach to controlling complexity (SIP) know that I advocate a form of mathematical partitioning to greatly reduce an IT system’s complexity. This process results in a number of sets of synergistic business functionality. I call these sets ABCs for autonomous business capabilities.

These sets represent a mathematical partition, and that partition extends through to data ownership. Because of this, it is relatively easy to fix all three of the problems the GAO noted with the IRS systems.

For example, it is relatively easy to ensure that a given user is granted no more access rights than needed to complete a specific job, since rights are granted to business functionality, not to data.

It is relatively easy to encrypt data, since data moves between business functions only through well-defined messages, and these messages are easily encrypted.

It is relatively easy to effectively monitor all changes made to the mainframe and associate those changes with specific business events and specific users, since any data is owned by a specific ABC and is never visible outside that ABC (except through messaging contracts).
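
To illustrate the pattern (with invented names; this is a sketch of the idea, not the SIP methodology's actual artifacts), here is a minimal Python rendering of an ABC: its data is private, it is reachable only through messages, and rights attach to business operations rather than to the data itself.

    class TaxpayerRecordsABC:
        """A sketch of an autonomous business capability."""
        def __init__(self):
            self._records = {}  # owned data: never exposed directly

        def handle(self, message, user_roles):
            # Rights are granted per business operation, not per table.
            allowed = {"get_balance": {"agent", "auditor"},
                       "adjust_balance": {"auditor"}}
            if not user_roles & allowed.get(message["op"], set()):
                raise PermissionError(message["op"])
            if message["op"] == "get_balance":
                return {"balance": self._records.get(message["taxpayer"], 0)}
            if message["op"] == "adjust_balance":
                self._records[message["taxpayer"]] = message["amount"]
                return {"ok": True}

    abc = TaxpayerRecordsABC()
    abc.handle({"op": "adjust_balance", "taxpayer": "A1", "amount": 100},
               user_roles={"auditor"})
    print(abc.handle({"op": "get_balance", "taxpayer": "A1"},
                     user_roles={"agent"}))  # {'balance': 100}

Because every interaction is a message, encryption reduces to encrypting the message channel, and monitoring reduces to logging messages per user.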

The good news is that the IRS has responded to the GAO report by stating that it is addressing all of the issues raised. The bad news is that it is going about this in exactly the wrong way.

According to Linda E. Stiff, Acting Commissioner of the IRS, “…the IRS has obtained additional expert-level technical support to assist in the development of a comprehensive security analysis of the architecture, processes, and operations of the mainframe computing center complex in order to develop a roadmap and strategy to address several of the issues noted by the GAO in the report.”

In other words, the IRS is going to continue making the same mistakes that led to its current problems: worrying about security and ignoring the real problem, complexity.

This is unfortunate. It means that for the near term, we can expect the IRS systems to continue to be “unprotected from individuals and groups with malicious intent who can intrude and use their access to obtain sensitive information, commit fraud, disrupt operations, or launch attacks against other computer systems and networks,” as the GAO describes the IRS systems today.

Friday, January 4, 2008

Feedback on The Top Seven Mistakes CIOs Will Make in 2008

My article on The Top Seven Mistakes CIOs Will Make in 2008 drew many comments with excellent observations. Let me respond to what I have seen so far.

Mark Blomsma questions whether we can partition an enterprise architecture into subsets that do not influence each other. He points out that a small failure in one subset can lead to a massive failure in another subset.

Mark is correct that it is difficult to create subsets that do not influence each other. I call the interactions between subsets thin spots. Our job in enterprise architecture is to minimize these thin spots. Partitioning is based on the theoretical ideal that there is no interaction between subsets of enterprise functionality (no thin spots). Of course, we know that this ideal is not attainable. So we need to compromise, allowing interoperability between subsets while minimizing the thin-spot degradation.

In general, I believe our best hope of minimizing thin spot degradation of an enterprise architecture comes from having interactions between subsets occur through software systems rather than through business processes. There are two reasons for this. First, the more we can automate workflow, the better off we are in general. Second, we have a better understanding of managing software thin spots than we do for managing business process thin spots. As one such example, see the post I did on data partitioning.

One of the primary tasks of the Enterprise Architect is boundary management, that is, architecting the boundaries between subsets of the enterprise. The stronger the boundaries, the better the partition and the better we can manage overall complexity. In my upcoming book, Simple Architectures for Complex Enterprises, I have a whole chapter dedicated to managing software boundaries.

The problem the enterprise architect faces is that the boundaries between subsets are hard to define early in the architecture and are under never-ending attack by well-intentioned developers for the entire life cycle of the architecture. One of the best approaches I have found to boundary management is getting the enterprise as a whole to recognize the critical importance of complexity control. Once complexity control has been embraced, it is easier to get people to understand the role of strong boundaries in controlling complexity.

KCassio Macedo notes the importance of tying EA to business value.

This is a great point. An enterprise architecture that does not deliver business value is not worth doing. One of the advantages of partitioning an enterprise architecture is that we end up with autonomous subsets, that is, subsets of functionality that can be iteratively delivered. I call these subsets ABCs, for autonomous business capabilities. These ABCs are very compatible with Agile implementation approaches.

Partitioning is our best approach to managing complexity and mapping the enterprise into autonomous bite-size pieces that can be delivered in an agile and iterative manner.

I believe business value needs to be measured not only by ROI (return on investment) but also by TTV (time to value). TTV is a measure of how quickly the enterprise sees value from having undertaken the enterprise architecture exercise. Most organizations are initially skeptical of enterprise architecture, and the shorter you can keep the TTV, the sooner you can transform skepticism into support. This builds momentum and further improves the chances of future success.

Mrinal Mitra notes the difficulty in partitioning business processes and points out that some partitioning might even require a business reorganization. This is an excellent point: partitioning does sometimes require business reorganization.

However, why are we doing an enterprise architecture in the first place? As KCassio Macedo points out in the last comment, enterprise architecture must be tied to business value.

There is rarely any business value in merely documenting what we have today. We are more often trying to find a better way of doing things. This is where the business value of an enterprise architecture comes in. A partitioned enterprise is less complex. A less complex enterprise is easier to understand. The easier we can understand our enterprise, the more effectively we can use IT to meet real business needs.

When I am analyzing an existing enterprise for problems, I usually don’t start by partitioning the enterprise, but instead, analyzing how well the enterprise is already partitioned. If it is already well partitioned, then there is little value in undertaking the partitioning exercise. If it isn’t, then we can usually find areas for improvement before we actually invest in the partitioning exercise.

Anonymous gives an analogy of barnacles as an accumulation of patches, reworks, and processes that gradually accumulate and, over time, reduce a system that may have been initially well designed into a morass of complexity.

This is an excellent point. It is not enough to create a simple architecture and then forget about it. Controlling complexity is a never ending job. It is a state of mind rather than a state of completion.

In physics, we have the Law of Entropy. The Law of Entropy states that all systems tend toward a maximal state of disorder unless energy is continually put into the system to control the disorder. This is just as true for enterprise and IT architectures as for other systems.

We have all seen IT systems that were initially well designed, and then a few years later, were a mess of what Anonymous calls barnacles. Enterprise architecture is not a goal that is attained and then forgotten. It is a continuing process. This is why governance is such an important topic in enterprise architecture. In my view, the most important job of the enterprise architecture group is understanding the complexity of the enterprise and advocating for its continual control.

Anonymous also points out that complex IT solutions often reflect poorly conceived business processes. Quite true. This dovetails nicely with Mrinal Mitra's observation in the previous comment. And it is also why we need to see enterprise architecture as encompassing both business processes and IT systems.

Alracnirgen points out that the many people involved in projects, each with their own perspective and interpretation, are a major source of complexity.

This is another good point. There are enterprise architectural methodologies that focus specifically on addressing the issues of perspective, interpretation, and communications. Perspective, for example, is the main issue addressed by Zachman’s framework. Communications is the main issue addressed by VPEC-T.

My own belief is that we must first partition the enterprise, focusing exclusively on the goal of reducing complexity. Interpretation will still be an issue, but we can limit the disagreements about interpretation to only those that directly impact partitioning. Once we have a reasonable partition of subsets, then we can, within those autonomous regions, focus on perspective, interpretation, and communications in the broader picture.

Terry Young asks if the relationship between complexity and cost is linear. Terry’s experience is that “doubling the complexity more than doubles the cost to build and maintain the system.” This relationship between complexity and cost that Terry brings up is very important.

I agree with Terry that the relationship between complexity and cost often appears to be non-linear, however I believe that this is somewhat of an illusion. Let me explain.

I believe that the relationship between complexity and cost is, in fact, linear. However the relationship between functionality and complexity is non-linear. Adding a small amount of functionality to a system greatly increases the complexity of that system. And since complexity and cost are linear, adding a small amount of functionality to a system therefore greatly increases its cost. It is this exponential relationship between functionality and cost that I believe Terry is observing.

Just to give you one example, Cynthia Rettig wrote a paper in the MIT Sloan Management Review (The Trouble with Enterprise Software) in which she quoted studies showing that every 25% increase in the functionality of a software system increases the complexity of that system by 100%. In other words, adding 25% more functionality doubles the complexity of the software system.

Let’s assume that we start out with a system that has 100 different functions. Adding 25% more functionality to this would be equivalent to adding another 25 functions to the original 100. Rettig tells us that this 25% increase in functionality doubles the complexity, so the software with 125 functions is now twice as complex as the one with 100 functions. Another 25% increase in functionality gives us a total of 156 functions, with four times the original complexity. Another 25% increase in functionality gives us a total of 195 functions with eight times the original complexity.

This analysis shows that by doubling the functionality of the system (195 functions is roughly double the original 100), we increase its complexity (and cost) roughly eightfold. If we continue the analysis, tripling the functionality of the original system (about 305 functions) increases its complexity roughly 32 times. In other words, system B with 3 times the functionality of system A is roughly 32 times more complex than system A. And, since complexity and cost are linear, System B is also roughly 32 times more expensive than system A.
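
To make the compounding explicit, here is the same arithmetic as a small Python sketch (my own illustration of the numbers above):

    functions, complexity = 100.0, 1.0
    for step in range(6):
        print(f"{functions:>4.0f} functions -> {complexity:>3.0f}x complexity")
        functions *= 1.25  # add 25% more functionality...
        complexity *= 2    # ...and complexity doubles

    # 100 -> 1x, 125 -> 2x, 156 -> 4x, 195 -> 8x, 244 -> 16x, 305 -> 32x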

But this is the pessimistic way of looking at things. The optimistic way is to note that by taking System B and partitioning it into three subsystems, each about equal in functionality to System A, we reduce its complexity (and cost) to less than 10% of where it started.

If you are trying to explain the value of partitioning to your CEO, this argument is guaranteed to get his or her attention!

Wednesday, January 2, 2008

The Top Seven Mistakes CIOs Will Make in 2008

CIO Magazine recently interviewed 250 CIOs from a variety of organizations and asked what their top ten goals are for 2008. As I read this article, I realized that of the top ten goals, seven have virtually no hope of being attained. Why is this?

In my January ObjectWatch Newsletter, I discussed why so many CIOs are heading down a path of failure in the coming year. The article is available without cost or registration at the ObjectWatch Web Site. Feel free to comment on the article here.