Monday, April 28, 2008

Political correctness is just noise

In signal processing there's a measurement called Signal-to-Noise Ratio, or SNR. A given 'channel' can only carry a limited, finite amount of information during a given time period and will have a particular SNR. Here's a practical example. Say you're on the phone with someone who is standing on a windy street. Some of what you hear is the wind blowing across their microphone. That's noise. Your voices are the signal. Talking louder may help, but as the wind gets stronger you will have to start repeating yourselves in order to be understood. If the wind is strong enough, it will eventually get to a point where you cannot understand anything the other person is saying. As the noise increases, the amount of information that can be conveyed drops.
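That intuition has a precise form: Shannon's capacity formula, C = B log2(1 + S/N), says the maximum error-free bit rate of a channel falls as noise rises. Here's a minimal sketch in Python (the bandwidth and power figures are illustrative, not measurements from any real call):

```python
import math

def channel_capacity(bandwidth_hz, signal_power, noise_power):
    """Shannon capacity: maximum error-free bit rate of a noisy channel."""
    snr = signal_power / noise_power
    return bandwidth_hz * math.log2(1 + snr)

# An illustrative 3 kHz voice channel: as the wind (noise) picks up,
# the information the channel can carry drops.
for noise in [0.01, 0.1, 1.0, 10.0]:
    cap = channel_capacity(3000, signal_power=1.0, noise_power=noise)
    print(f"noise power {noise:>5}: {cap / 1000:.1f} kbit/s")
```

Repeating yourself on a windy call is just a crude error-correction code: you spend more of the channel's shrinking capacity re-sending the same information.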

The same is true of political correctness. Have you ever felt you couldn't speak directly about a problem because if you did, people would become offended or angry? They may then dismiss what you're saying because their defensiveness gets in the way of placing any value on it. This need to couch uncomfortable realities in niceties is a problem. All of the finessing that goes into our politically correct conversations is noise, preventing the information - the valuable part - from getting through. This doesn't mean we should dispense with manners or politeness. As Peter Drucker says in his essay, Managing Oneself,
"Manners are the lubricating oil of an organization. It is a law of nature that two moving bodies in contact with each other create friction."

Rudeness and insensitivity are no more justifiable when done in the name of 'honesty'. Let's consider the following two approaches:
"Although it was a sound decision to delay the move on our process improvement effort until some of the uncertainty around our contracts and revenue was sorted out at the time, we really need to reconsider our position on this given the new competitive information we have..."

"We made a mistake in delaying this decision. We're now behind the eight-ball and haven't got the efficiencies we should have. Our bid on the such and such contract failed because we were too expensive, too inefficient. Now that we know this a) we'd better act to fix it and b) we need to challenge our thinking more. We shouldn't have delayed. What other negative outcomes are facing us now because of this inability to make tough decisions we seem to have? How are we going to learn from this?"

The first approach is soft and non-confrontational. No one could possibly get offended. No one will feel any urgency to change. The sins of the past and present will be repeated in the future. You can bet on it. The honesty and directness of the second approach lets us see the problems so we can make the changes we need to. It also shows you have confidence in the maturity of your colleagues - that they are capable of confronting and dealing with reality, even if it's unpleasant.

Sunday, April 27, 2008

Why cloud computing is risky

I enjoy Nicholas Carr's blog Roughtype as he addresses a lot of issues IT professionals and vendors are not willing to look at because a) if they did they might have to change and b) if they did they might have to change. He's been doing some writing about cloud computing of late, prompting my response to one of his posts. Nicholas believes IT Doesn't Matter. If you haven't heard of his articles and book on this subject you can look it up here.

He discusses the move of IT from a strategic model to a utility model in which it is no less important than electricity, but no more strategic either. After all, what company gains strategic value today from being hooked up to the power grid?

Here is my response to his blog post:

You say, "When the Amazon system was only used by the Amazon store, in contrast, its diversity factor and capacity utilization were woefully low - a trait it had in common with most private corporate IT operations." I see your point. If we liken IT to a utility model - the more customers you have with demand coming at different hours, different intervals, different volumes - the smoother your demand (I wonder how you translate power factor to IT?). Ultimately with an infinite number of customers spanning the globe your load will be flat, allowing you to right-size your supply vs. oversize to deal with spikes.

Correct me if I'm wrong, but this can be achieved at the enterprise level by consolidating loads from different applications with varying demands onto the same physical box (virtualization of memory, CPUs, networks, storage). It gets even better if the loads are global and on the same hardware. Of course, if you extend this far enough you'll achieve the same loads and efficiencies as Amazon. Today's virtualization technologies are functional and valuable but still immature. Give them time and the efficiencies will improve. All that to say, I think enterprises have a way to go before they run out of ways to manage capacity and value better.
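The smoothing effect from consolidation can be sketched numerically. The peak of the sum of many independent, bursty loads grows much more slowly than the sum of their individual peaks, so a consolidated host can be provisioned closer to average demand. A toy simulation (the load model is a made-up illustration, not real utilization data):

```python
import random

random.seed(42)

def peak_of_sum(n_loads, samples=1000):
    """Peak aggregate demand across n independent bursty loads,
    each drawing between 0 and 1 unit of capacity at any instant."""
    peak = 0.0
    for _ in range(samples):
        total = sum(random.uniform(0, 1) for _ in range(n_loads))
        peak = max(peak, total)
    return peak

for n in [1, 10, 100]:
    consolidated = peak_of_sum(n)
    separate = n * 1.0  # each load alone must be sized for its own peak
    print(f"{n:>3} loads: provision {consolidated:6.1f} units consolidated "
          f"vs {separate:5.0f} units on separate boxes")
```

The relative headroom needed shrinks as loads are aggregated - the same statistical multiplexing that lets a power utility size its plant well below the sum of every customer's peak draw.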

The downside of all of this, including all the cloud efforts, is the underlying complexity. Not only is Google's hardware investment growing yearly, but so is the support infrastructure required (people and systems). This leads me to a different point. IT utilities are different from electrical ones. Electrical utilities are geographically limited while IT is not. You just need to put bigger 'pipes' in and the data could be flowing to servers across the globe rather than across the city. Clouds can balance load across a geographically distributed infrastructure. This becomes problematic when you consider that more complex systems have a higher tendency toward catastrophic failure. What would happen if half the world's computers shut down at the same time? This can never happen with local computing (except at that one location), which is why inefficiency is desirable - it's required for redundancy. Imagine if the power outage that affected parts of Eastern Canada and the US East Coast a few years ago (due to the system's complexity) had affected a quarter of the planet. Hmmm.
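The redundancy argument can be made concrete with a toy probability model (the failure rates are illustrative assumptions, nothing more): if n sites fail independently with probability p each, the chance they are all down at once is p^n, while a single shared dependency takes everyone down whenever it alone fails.

```python
def p_all_down_independent(p_site, n_sites):
    """Probability that every one of n independent local sites
    is down at the same moment."""
    return p_site ** n_sites

def p_all_down_shared(p_shared):
    """With one shared dependency, its failure takes everyone down."""
    return p_shared

# Assumed 1% outage probability per site at any moment (illustrative).
print("5 independent sites all down:", p_all_down_independent(0.01, 5))
print("1 shared dependency down:    ", p_all_down_shared(0.01))
```

The 'inefficient' duplicated infrastructure buys an exponential reduction in the odds of a total blackout - which is exactly what a globally load-balanced cloud gives back up if a common-mode fault can propagate across it.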

Wednesday, April 09, 2008

Massive project failures are really massive leadership failures in disguise

We can now add another colossal IT failure to the list already in the heads of CIOs. ZDNet has an excellent article chronicling the Heathrow Terminal 5 project, a joint British Airways and British Airports Authority £4.3bn ($8.5 billion) effort, of which a reported £175m ($346 million) represents IT systems. Apparently Queen Elizabeth herself gave the opening speech, calling it “a 21st Century gateway to Britain.” I'm sure that in her mind she was not thinking about "canceled flights (54 short-haul in one day), lost baggage, and substantial delays". Hmmm, maybe she should have picked up The Standish Group's Chaos Report first. I doubt BA's CEO was anticipating a 3% one-day share price drop when news of the extensive problems became public. Since I started writing this post (1 1/2 weeks ago), two senior executives - a director of operations and a director of customer services - have been fired over the T5 problems. I'm surprised the CIO still has a job... maybe they're afraid to fire him until the problems have been fixed.

We know about Nike's runaway i2 supply-chain implementation that resulted in excess inventory imbalances, triggering a 20% share price drop and a $100 million quarterly earnings shortfall. It prompted then Nike Chairman Phil Knight to ask, "This is what you get for $400 million?" Nicholas Carr, famous - or infamous - for his May 2003 Harvard Business Review article, "IT Doesn't Matter", wrote another article, "Does Not Compute", in the January 22, 2005 Op-Ed section of the New York Times. In it he details a number of high-profile IT failures, including the FBI's $170 million virtual paperweight and Ford's supply-chain project - abandoned when it was $200 million over budget.

Here's what I find alarming. We've known the root cause of project failures for some time. A number of really solid project management methodologies, proven to work, have arisen to counter these risks. So what's the problem?

The Challenger disaster was ultimately not a technology failure; it was caused by broken and dysfunctional lines of communication through the NASA hierarchy.

Management - and I include all involved parties from the CEO and CIO down - continues to ignore best practices time and again, or simply fails in their execution. I'm not going to get into iterative development and integration, nor will I discuss the principle behind detailed (but iterative) analysis or architectural prototypes. I'm not going to talk about project failure rates and their correlation with total person-years or budget - I'll review these in later posts. The failure is not due to a lack of data or of sufficient guidance in effective implementation methods. Every time you see a project failure like this, you are seeing evidence of management failure in the organization. Management that does not equip employees with the 'process' tools they need to get their jobs done. Management that does not deal with breakdowns in communication across the company's departments (BA's failures look to be at least partly due to this). Management that is not hands-on and fully engaged in the project, passing the buck down through the organization instead.

What are the solutions? Make sure departmental responsibilities are not only defined but also backed by accountability. The CEO should be fully engaged in major projects. Deal with non-performers in your organization. Make sure the project's business case is results-oriented. Once the project is over, compare actual results to planned results. In short, senior management needs to create a culture of execution throughout the company - one that starts at the C-level and flows down through the departments and project teams.

If you expect people to deliver results and they know it, the ones you'll want to keep in your organization are the same ones who will find the right methods to get those results. They're the people who hate to fail. Within a culture of execution they are also the ones who will thrive.

Saturday, March 03, 2007

The Buy versus Build debate

Most CIOs and senior IT managers would agree that it is better to buy already-built IT systems than to design them from scratch. This decision becomes more of an imperative the more complex the system is. Not all business people understand this, however. Some of them believe programmers can do anything, and that managers' refusal to develop 'from scratch' is due to either plain old unwillingness or, even worse, incompetence.

Over the years I have been involved with the Software Engineering Institute. I standardized the software development groups of a couple of organizations I led on SEI-inspired methods and on automation from Rational Software. While we never followed the CMM to the letter, we did adapt it to our organization's size and needs. We also used principles from another of their process models, EPIC (Evolutionary Process for the Integration of COTS-based Systems), in the IS Business Analyst section I ran for a Canadian bank. In all of these experiences, with significant emphasis on software quality by design and by testing, I have come to the conclusion that the development of quality software is not for the faint of heart. It requires exceptional rigor from all participants (management, architects, developers, and testers). It is expensive regardless of whether you build quality in or pay through post-production failure-fix cycles - built-in quality being not only the preferable of the two but also the less expensive.

Some general managers are not willing to pay the up-front costs for quality and then gripe after the software causes interruptions to their business. Many IS managers are not insistent enough about the resourcing required to build it right, because real insistence sometimes means a willingness to resign if necessary - and if you have to threaten this, it's better to just start looking. There are more than enough of these two cases to account for the numerous failed software projects we've heard about. By the way, the teams I've led have made good quality software. No business has experienced interruptions due to bugs in our code. I say this to assure you that a) good software can be built and b) I'm talking from a position of strength and not weakness. It has always been expensive, though, and I have often questioned whether we have delivered adequate value for the investment.

There are many examples of failed software. The more complex the system, the more difficult it is to design. In this article we see what happens when a green-field software development project for a complex system goes awry post-production. What was probably very little malfunctioning code resulted in a potential billion-dollar disaster and, more importantly, the potential loss of ten or more lives. And in this case it is very likely the development team used a disciplined quality process to deliver the system - it's the only way to get something this complex to work at all.

Software development is not, nor should it be, a core competency of businesses outside the technology industry. Businesses should adopt COTS (Commercial Off-The-Shelf) systems. If you need functionality not found in existing systems, find one that delivers most of what you need and build the small pieces that are missing. Or else integrate multiple systems using an iterative method (RUP, EPIC) where possible. The systems you do build should be either relatively simple or non-mission-critical. If you have to build a larger system, use an iterative process to deliver a smaller system first and build on it. There are many solutions; the big-bang approach is the least likely to succeed. Keep in mind, though, that every time you build a medium-to-large complex system you take your career into your hands.

Monday, October 30, 2006

ERP Systems are Neither Silver Bullets nor the Bane of Business

CIO Magazine has a story entitled "ERP Systems on Steroids: Is it time for a no-tolerance policy?" in which the author discusses the faults and failures of ERP systems. As I mention in my comment to the post, I am not an ERP advocate; however, I found his arguments lacked consistency and depth. Read the article and then my comments following it.

Tuesday, October 03, 2006

Once You've Outsourced Everything, What's Left?

Many of us have witnessed the rise of outsourcing across numerous business sectors. IT outsourcing was big in the post-Y2K days. Outsource everything and cut your costs was the utopian's - I mean consultant's - cry. Many companies did just that. I know of one organization that has since repatriated core strategic functions like Enterprise Architecture, Project Management, and Business Analysis from its outsourcer. Why? Architecture will determine your organization's flexibility and ability to adapt to new demands. It will also determine the cost burden you bear for the maintenance of your infrastructure. In the wrong hands (which are any that aren't directly attached to the organization), this can spell disaster, albeit stretched over a 5-10 year period. Similar things can be said for the Project Management and Analysis disciplines.

Since the early days of outsourcing and offshoring the focus has shifted from technology services to business services. Outsource your call/contact centre. Outsource HR, manufacturing, payroll, even your business strategy (many companies do this by bringing in management consultants to do what their own managers should be doing).

Some of these make a lot of sense. Payroll is largely a low value-add and commodity business process that can easily be outsourced without risking a loss in cost-effectiveness or strategic leverage. What about manufacturing?

There is no need to list all the manufacturers that are moving their operations overseas. Goodyear is moving a portion of its manufacturing to China. IBM, Intel, Cisco, and others have built or are building R&D centres in India. Offshore your knowledge! Hey, that's smart! Or is it? How will these companies answer the hard strategic questions ahead when they've offshored their brains?

When you remove manufacturing from North America, you remove from society the skills required to build plants and develop factory automation. People go where the job demands are high. How many MBA students with a focus on manufacturing are there in Canada and the US today? How about the rest of the world? Although I'm interested in knowing what the hard numbers are and what the last decade's trends show, I don't really need to - it's clear. Read the newspapers and management journals and then look back at the history. MIT's Sloan Management Review has an excellent article on what management 'gurus' did to (accidentally) shift the power over product pricing from the manufacturer to the distributor and mega-retailer (say, Wal-Mart). These experts taught that to compete head-on with higher-quality manufacturers in Japan, companies had to divest themselves of activities that were not part of their core competencies. They did. They got rid of sales and distribution, among other functions. From the article:
"Although Goodyear started using alternate distribution channels in the 1970s, the shift away from its dealer network accelerated dramatically in the early 1990s when the company introduced its tires to Sears and Wal-Mart stores. As a direct result, the company went from having a global network of loyal and faithful dealers and strong brand loyalty to becoming the manufacturer of a commodity that could be purchased at an ever-growing number of outlets for a lower price...The prices of Goodyear tires to consumers fell precipitously...A slow degeneration of the company began...unable to raise its prices ...Goodyear was faced with the inevitable: the removal of costly manufacturing centres from within the United States."
What some managers fail to account for when making major structural changes to their organizations is their systemic impact. Major changes made within an isolated framework - such as thinking only about cost-cutting or improving quality (as in Goodyear's case) - can have significant negative impacts on an organization.

Government and policymakers need to start attacking this problem today. Again, it is systemic. Labor costs are only part of the problem. Education is another, as is the relative complacency of today's 'Western' workforce. Is protectionism the answer? I don't think it is that simple, but countries like Canada and the US - which are in the process of losing control over their economies, a fair extrapolation of the final outcome of offshoring - have to make some tough and likely radical decisions.

Sunday, August 20, 2006

Staffing in the 21st Century - Older Employees

Knowledge Workers are an important part of any IT strategy. Who you hire in key positions can make or break your organization. Who you hire in non-key but core positions can make or break its effectiveness and efficiency.

An excellent article, "Age at Work", in the June/July 2006 issue of Scientific American Mind discusses the differences between younger and older (over 50) workers:
...although older people may be slower at some tasks, they are actually faster at others, and in most cases they are less prone to mistakes. The research also reveals that only certain brain functions are affected by possible age-related deficits and that simple changes in the workplace can compensate for them.

The issue of hiring older workers will become more pertinent as the mean age of potential employees continues its migration north. Lower birth rates and deferred retirements will mean that more of our potential pool of expertise will come from this group.

I have been involved in a number of engagements with various branches of the consulting arm of a company I'll call Company A, and I have been impressed by the number of older workers they employ. They are more seasoned, less likely to rush into things, bring a wealth of practical project experience to bear on any given task, and are revered by their younger colleagues. When I mentioned that I thought one Company A employee's heat-capacity calculations for our computer room were wrong, I heard a younger colleague say, "Really? I don't think he's ever been wrong." In the end he was right - I was wrong (it's rare).

I have worked with more senior (over 50) individuals in this organization than in any other. When I discuss my concerns with younger individuals at Company A and other organizations, they don't always understand simple things - like why a commitment made by one of their colleagues and broken by them is a problem. In my mind it is clear that Company A made the commitment, not any individual. I watch them struggle to understand it on a gut, emotional level. The older ones just get it and take ownership of their corporate entity's actions, not just their own. This is a very simple, almost trivial example - there are many more. I am also only using Company A as one example; I have worked with other organizations, both large and small, and have had similar experiences in all of them.

Older experienced seasoned staff can be worth their weight in gold.