Saturday, July 26, 2008

The Pendulum Swings

I think there are few industries – if any – that change their minds as fast as the IT industry.

I like to use the analogy of a pendulum to discuss this topic. Because at a high level, the decisions go back and forth, swinging right to left. And the one certain fact of a pendulum that you can count on is that it will only swing so far to the right, stop, and swing back again to the left.

In this analogy, you can count on the fact that what is old will soon be new again.

Take the example of the mainframe computer.

For a while – in the early years – it was the only game in town. The only devices you could hook up to the first ones were card punch readers to act as your input, and printers to act as your output.

Then the terminal came along. It had a keyboard that users could type their instructions into, and see the output on a monitor.

At the end of this great pendulum swing – the first swing – all data was centrally stored in one location with no redundancy of data in multiple places.

Then the personal computers started emerging. And the first applications available were word processing programs, spreadsheets, and database managers.

But the data the users of the PCs wanted was stored in the mainframe. A user would call the computer room and request a report from the mainframe. The computer guys, after a little hassle, would send the PC user a printed report. The PC user would then type the printed report into the PC – into a spreadsheet or database.

The result? Redundant data resided in the corporation – all over the place. Every PC that contained a list of customers, products, sales forecasts or results had a different set that was updated only on that PC.

The PC – after a slight civil war between the data processing department and every other department in the company – sparked the invention of the 'terminal emulation card' – which required cabling to be run throughout a building, poking out the walls of each office that had a PC, and plugging into the card that had been inserted in the PC.

The result? The mainframe was still a separate and independent data system. The PC user would have to switch their computer from PC mode to terminal mode. There was still no bridge to get the data from the mainframe to the PC. But as these cards evolved, new features were added to allow the output from a mainframe batch process to be used by a PC. The work for the PC user was still tedious.

And the data redundancy problem was getting worse – not better.

By the way, this same era also introduced the popularity of the computer consultant – a supposedly skilled individual who claimed they could automate the tedious aspects for the PC user – but that is fodder for a later discussion.

This is the end of the first swing back – to a collection of disparate systems, each holding unique resources.

Terminal cards evolved to become network cards and the Local Area Network was born. At first the LAN was used to allow multiple persons to share a common device, like a printer. And the mainframe was still a separate sacred entity.

But then network servers started to appear. PCs that had no user – but acted mostly as a central storage site for documents, spreadsheets, and PC databases.

The redundancy issue was reduced, but not eliminated. There were still no direct ties to the mainframe. So the term "data source of record" became popular to describe the elite status of the mainframe data content.

The pendulum was now midway through its swing back to the central data repository state.

The company was now littered with two levels of program applications – those that ran on the mainframe – and those that ran on the local PCs – and eventually across the LAN.

The ground was now set for the most daring venture of all: the ability to allow PC-based applications to access the data on the mainframe using various client (PC) to server (host) data transfer methods – among them ODBC. Now the logic could reside on the PC – the data was retrieved from the mainframe to the PC, processed by the logic on the PC, and written back to the database on the mainframe.
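That retrieve–process–write-back loop is easy to sketch in modern terms. The following is a minimal illustration, not the actual tooling of the era: it uses Python's built-in sqlite3 module as a stand-in for an ODBC connection (in practice something like pyodbc with a mainframe DSN would play that role), and the table and column names are purely hypothetical.

```python
import sqlite3

# Stand-in for an ODBC connection to the host database.
# With a real ODBC driver this would look more like:
#   conn = pyodbc.connect("DSN=MAINFRAME;UID=...;PWD=...")
conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Hypothetical sales table living on the "host".
cur.execute("CREATE TABLE sales (region TEXT, amount REAL)")
cur.executemany("INSERT INTO sales VALUES (?, ?)",
                [("east", 100.0), ("west", 250.0), ("east", 50.0)])

# 1. Retrieve the data from the host to the PC.
rows = cur.execute("SELECT region, amount FROM sales").fetchall()

# 2. Process it with logic that lives on the PC.
totals = {}
for region, amount in rows:
    totals[region] = totals.get(region, 0.0) + amount

# 3. Write the result back to the host database.
cur.execute("CREATE TABLE region_totals (region TEXT, total REAL)")
cur.executemany("INSERT INTO region_totals VALUES (?, ?)", totals.items())
conn.commit()

print(cur.execute(
    "SELECT region, total FROM region_totals ORDER BY region").fetchall())
# → [('east', 150.0), ('west', 250.0)]
```

The point of the sketch is the shape of the flow, not the library: the logic sits on the client, and only raw rows and finished results cross the wire – which is exactly why a bug in that client-side logic could quietly corrupt the host's data.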

The result? Much better access to the data by the company – and the distribution of responsibilities to maintain that data. But the problem was that the logic on the PC was often flawed and the integrity of the data on the mainframe was compromised.

This marks the end of the pendulum swing of the data processing department – and the birth of the Information systems department.

As time and technologies progressed – and the logic of programs running on PCs was consistently tweaked – now by the IS department – the data integrity issues started to wane.

But then we saw the birth of the Internet. And the Internet allowed people to share data in completely different ways. In fact, more than data was being shared. Communications became an integral part of the company's Information Systems as email exploded.

But the most influential aspect of this paradigm shift was that it was now better understood how to have the logic of a program reside on the host server and be downloaded to the PC every time it was to be run. One central source for code.

And the pendulum had swung all the way back to a single central repository of logic and data.

But such a function was a bit too much processing to take place on a single host – it could not be expected to respond quickly to requests for both data and logic. So the two responsibilities were split across two different hosts.

But with the adoption of TCP/IP taking over corporate LANs, security became an issue. Anybody on the outside could get in, view and steal sensitive information, and even maliciously harm the system. So a new level of host was invented. The security server would sit independently of all other resources and act as a sentry to keep out all but the authorized network requests.

So the pendulum is now in mid-swing back to disparate systems – and again facing data redundancy problems – albeit much smaller than in the first swing back.

By this time, the IS department had grown significantly to support all these various services – from server administration, software development, PC application support and development, to training. In most circumstances, the IS department now employed more staff than any other department.

Think about this.

Let's say your company makes widgets. You sell widgets. You fix widgets. You ship and receive. You do accounting. You do research and development to make better widgets.

Each of your departments at Widgets Inc. has totally different needs – and a few common needs. So each of those departments has totally different – yet urgent – needs for its set of information system components. The IS department has now grown to support all those needs. And as technology changes so fast, much more time is spent training IT staff on new technologies than in any other part of your business.

Because information systems give the widget company the potential to make better widgets and reach newer markets. Quite frankly – and perhaps generally – you can judge the success of the IS department by the success of the company.

So the key to optimal business success is to make your IS resources as efficient as possible.

So what does this pendulum swing allow us to forecast?

  1. You can only allow your IS staff to grow so large.
  2. This means the amount of effort to maintain so many disparate – yet tightly integrated systems must be eased.
  3. This means that simpler systems for security, server administration, database management, and source code logic must become more prevalent and usable.
  4. This means that IS staff can now manage more technologies more easily.
  5. This means a shift back to the central repository of services is inevitable.

It will be interesting to see how things will progress as we move forward. It will be interesting to see how corporations like Microsoft – so dependent on the disparate systems model – will fare as companies like Google move us closer to the central repository model.

By the way, is there a school left in the world that still teaches COBOL?

Wednesday, July 23, 2008

Incompetence Rises

Over the last thirty years of my working life, I have been continually amazed at how often a person who fails or does poorly on a project or large task gets promoted to manage the outcome of their failure.

It happens time and time again.

The Peter Principle states that “in a hierarchy every employee tends to rise to his level of incompetence”. In some cases though, the Dilbert Principle is a more appropriate scenario – where success occurs despite the earliest signs of incompetence.

Such a scenario basically unfolds where there are limited resources to assign to a task of some visibility and importance. The management at the time has no option but to assign the task to someone who is either not properly trained, lacks a deep understanding of the task, or is simply incapable.

The project always runs far beyond the projected timeline, and tremendously over budget. The planning that should have been put into the task at the beginning was either not performed, based on false assumptions, or completely misleading in its result.

As the project progresses, sympathies increase for the person performing or leading the task. Their visibility increases as they are always the orator on the issues at hand – and as time progresses, they are seen as the subject matter expert on the task.

At some point, the project completes. Everybody takes a deep sigh of relief, only to find shortly thereafter that the problems cascading from the result of the task now require more and more people to deal with them. More firefighters, if you will.

So this incompetent oaf who started the project and took it down the misguided roads to ruin has actually made a positive name for themselves by the sheer genius of their incompetence.

As a result, they get promoted to manage the fiasco. And most of the time they do so lovingly and passionately. They do so in a defensive, reactive mindset. Suggestions made to fix the outstanding problems are often rebutted by the new manager with such phrases as “trust me, you don’t want to go there”, “that’s how we have always done it”, or “the problem is just too complicated for you to understand”.

Oftentimes this person's climb up the corporate ladder does not end there. They remain visible as a tough person who has fought battles and understands the underlying issues.

This is only one example of the Dilbert extension of the Peter Principle in motion.

And this, my friend, is one way in which incompetence rises to the top of the heap.

The Main Three IT Professional Environments

Every IT shop I have ever seen is different.

The roles are different, as the purpose of IT in every environment is unique.

When I discuss this topic with people, I am always quick to point out that there are three main categories of environments that every shop falls under:

  • The software development company
  • The IT department in a company or corporate environment
  • The IT consulting firm

The software development company

The software development company has a set of products that they have developed, packaged, and sell on the market. Their focus is to increase their market share among their niche consumers.

Their new products and enhancements to existing products are driven by the market, and their perception of what their target market wants.

Creativity is encouraged in this environment, and products are quite often put into the marketplace sooner to be the first to draw attention to their short-lived status of being “unique”.

The IT department in a company or corporate environment

The IT department in a company or corporate environment is more concerned with keeping the systems of the company up and working, and enhancing their systems where advantageous.

Quite often creativity is stifled as the concern is to get a project completed to meet the end user requirements. This environment is often caught with aging legacy systems with complicated hooks into new functions.

The consulting firm

The consulting firm, more often than not, is a hybrid of the software development company and the IT department of a company or corporation. Relationships with customers are most often short lived. But there are usually marketplace products developed and sold. Their approach is usually to sell a product to a company or corporation, and leverage the implementation of that product in terms of customization, support, and extending their presence by performing needs analysis for potential projects the company or corporation may feel their own IT department is under-qualified to tackle.

I have worked in all three of these environments. All three have their benefits. All three have their drawbacks.

The roles of the employees in these three environments differ to support the different purposes of each environment.

The ebb and flow of each of these three environments

The software development company environment is most often a pro-active one.

The software development company is usually driven by a marketing team who perform needs analysis.

They meet with a product manager to describe their concepts. The product manager uses those on their team to design a prototype to present to the marketing team and gain feedback and hopefully sign-off or commitment to proceed. From there the development team builds, tests, and continues to present to and get feedback from the marketing team. The finished product is test marketed, and if successful – put to market, led by the marketing team’s campaign.

Once on the market, the product manager, in conjunction with the marketing team, monitors the success and feedback of the product to determine a long-term strategy for enhancements, support, and direction.

The IT department of a company or corporation’s environment is most often a reactive one – although the corporate culture of the environment may be a pro-active one.

The IT department usually has two functions. The first is to respond to the requests of company departments – within the constraints of each department’s budget – for new software tools to perform tasks currently performed manually, or not satisfied by their existing tools.

The hierarchy of responsibilities is much deeper than that of either a development shop or a consulting firm. The CIO (Chief Information Officer) is usually a vice president level position, followed by various levels of managers depending on the size of the company and the breadth of the IT department.

Needs are supplied to the IT department by the business analysts of the requesting departments and supplied to the architects. The architects then blue-print a solution.

Components of the blue-print are then distributed to various teams, led by a manager, systems engineer, or systems analyst, who define the exact scope of their assigned components. Each sub-component is then designed and specced out.

The analysts and architects of these components meet frequently to review their specifications to ensure these components will fit together. The software developer is then let loose to write the code to meet the specifications – and nothing more than the specifications. The finished code is tested by both the developer and usually a person dedicated to the role of testing. Throughout the process, the various components are tested together to ensure cohesion of the overall system. Once testing is complete, the project as a whole is implemented in a staging environment to be tested by the future users of the system. When signoff is achieved by the users, the project is implemented and the system is put into production.

The second function sits on the other side of the curtain: a team responds to problems the users of a system are experiencing. Problems are recorded, prioritized, and fixed by either finding and resolving the bug in the system, or setting the data used by the system to a proper state so the system can resume operation. The cycle for such fixes is much shorter, with less testing. The priorities are based on the criticality of the system and the urgency for correction. This environment can be extremely stressful and wearing on the staff.

The consulting firm is usually a much looser organization than either the software development company or the IT departments of companies and corporations. This environment is very proactive and is geared towards establishing relationships with the customers which they hope they can translate into trust. Once the relationship exists, the consultant will search out flaws in both the customer's processes and systems, and assure the customer that their solution will be inexpensive and effective.

But this promise usually conceals their intention of extending the duration of the relationship, exceeding timelines and budgets. The objective – truly and not facetiously – is to do as little as possible for as much as possible. In most cases there is no long-term commitment to support the result.

Your experience with these groups may be different than mine. You may even identify more categories of environments than I have here.

Let us know what you think, what your experiences have been, and even your preferences or your opinions on each.

Let’s start ProjecTalking.

Welcome to ProjecTalk

There is more to having a successful IT team than simply knowing how to write source code.

Yet as I surf the Internet, the vast majority of the discussion is about how to write rock-solid code using various techniques, technologies, and best practices.

But techniques, technologies and best practices are not constrained only to source code development. There are so many more roles in today’s IT environment, and it is my hope that we can use this space to discuss the fundamentals and intricacies of such work areas as:

  • Systems Architecture and the project identification process
  • Needs analysis and requirement gathering
  • Project scope control
  • Analysis and design
  • Integration, user, and post-implementation testing
  • Customer expectation management
  • Communicating progress and status to project stakeholders

In short, we will focus on everything but the actual practice of writing code.

Over the last 20 plus years, I have held positions up and down the IT role ladder. So I will be sharing my thoughts and experiences. But I do not intend to simply tell you what I think.

Instead I am hoping I can inspire conversation and debate as we discuss these topics … but in a way uncommon to most IT environments – as calm, rational, professionals.

There are several blog sites I follow religiously – one sports blog in particular. And the lesson I have learned from observing these sites is that while the authors of these blogs are knowledgeable in their writing, the real insight comes more often than not in the discussions and debates.

I will certainly do my best to keep the topics frequent, relevant, and my own insights as clear and concise as possible. I am also hoping that those of you that wish to participate as authors will send me a note.

I look forward to this new adventure. And I am really excited to meet those of you out there who share the same interest, and learn from your experiences and opinions.

At the same time, I will continue writing my essays on my original Head Stuffing site. Those stories, after all, are truly what I love to write.
© 2010 Fred Brill - all rights reserved