Legacy system

A legacy system is an existing computer system or application program that continues to be used because the user (typically an organization) does not want to replace or redesign it. The term is often applied to systems regarded as "antiquated".

Legacy systems are considered potentially problematic by many software engineers (see, for example, Bisbal et al., 1999) for several reasons. They often run on obsolete (and usually slow) hardware, for which spare parts can become increasingly difficult to obtain. They are often hard to maintain, improve, and expand because there is a general lack of understanding of the system; the original designers may have left the organization, leaving no one who can explain how it works. This lack of understanding can be exacerbated by inadequate documentation, or by manuals being lost over the years. Integration with newer systems may also be difficult because new software may use completely different technologies.

Despite these problems, organizations can have compelling reasons for keeping a legacy system, such as:

  • The costs of redesigning the system are prohibitive because it is large, monolithic, and/or complex.
  • The system requires close to 100% availability, so it cannot be taken out of service, and the cost of designing a new system with a similar availability level is high.
  • The way the system works is not well understood. Such a situation can occur when the designers of the system have left the organization, and the system has either not been fully documented or such documentation has been lost.
  • The user expects that the system can easily be replaced when this becomes necessary, so there is no pressing need to replace it now.
  • The system works satisfactorily, and the owner sees no reason to change it; re-learning a new system would have a prohibitive attendant cost in lost time and money.

If legacy software runs only on antiquated hardware, the cost of maintaining the system may eventually outweigh the cost of replacing both the software and the hardware, unless some form of emulation or backward compatibility allows the software to run on new hardware. However, many of these systems still meet the basic needs of the organization; the systems that handle customers' accounts in banks are one example. Such organizations cannot afford to stop running these systems, yet some cannot afford to update them either.

Extremely high availability is commonly demanded of computer reservation systems, air traffic control, energy distribution (power grids), nuclear power plants, military defense installations, and other systems critical to safety, security, traffic throughput, and/or economic profit. See, for example, the TOPS database system.

Some organizations are switching to Automated Business Process (ABP) software, which generates complete systems. These generated systems can then interface with the organization's legacy systems and use them as data repositories. This approach can provide significant benefits: users are insulated from the inefficiencies of the legacy systems, and changes can be incorporated quickly and easily into the ABP software; at least, that is the intention.
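
As a minimal illustration of this interfacing idea, the sketch below (in Java, with entirely hypothetical class and method names, not the design of any particular ABP product) has the new or generated code program against a small repository interface, while an adapter translates those calls into whatever the existing legacy routine expects, so the legacy system serves purely as a data store:

    // CustomerRepository is what the new or generated code programs against.
    interface CustomerRepository {
        String findCustomerName(String customerId);
    }

    // Hypothetical stand-in for the legacy routine; in practice this would call
    // into the existing system (a database view, a transaction screen, a batch
    // interface, etc.).
    class LegacyCustomerMaster {
        String lookupNameByAccountNumber(String accountNumber) {
            return "Customer #" + accountNumber;   // placeholder result
        }
    }

    // Adapter that hides the legacy access details behind the modern interface.
    class LegacyCustomerRepository implements CustomerRepository {
        private final LegacyCustomerMaster legacy = new LegacyCustomerMaster();

        public String findCustomerName(String customerId) {
            return legacy.lookupNameByAccountNumber(customerId);
        }
    }

    public class RepositoryDemo {
        public static void main(String[] args) {
            CustomerRepository repo = new LegacyCustomerRepository();
            // The caller never touches the legacy code directly.
            System.out.println(repo.findCustomerName("42"));
        }
    }

Because callers depend only on the interface, the legacy back end can later be replaced without touching the generated front end.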

Note that "legacy" has little to do with the size or even the age of a system; mainframes run 64-bit Linux and Java alongside 1960s-vintage code. In fact, some of the thorniest legacy problems organizations now face involve leveraging or replacing existing "fat client" Visual Basic code as customers demand reliable Web access.[citation needed]

Alternative view

There is an alternative point of view, which has been growing since the "dot-com" bubble burst in 2000, that legacy systems are simply computer systems that are both installed and working. In other words, the term is not pejorative at all; quite the opposite. Perhaps "legacy" is merely a term used by computer industry salespeople to generate artificial churn and encourage the purchase of unneeded technology.

IT analysts estimate that the cost to replace business logic is about five times that of reuse, and that estimate does not count the risks involved in wholesale replacement. Shareholders and managers are increasingly asking, "Why are we spending so much money on new technology with so little to show for it?" Ideally, businesses would never have to rewrite most core business logic: debits must equal credits, and they always will. Businesses and governments are also recoiling at well-publicized system failures and security breaches that all too commonly arrive with new software, failures which in many cases are utterly catastrophic. (A regional airline fired its CEO after the failure of a relatively new crew scheduling system during Christmas 2004, for example.[1]) There is also a growing backlash against large, packaged software products (SAP, Oracle, PeopleSoft, and others) which were oversold and in some cases have proven too costly, inflexible, and poorly matched to business needs.

Increasingly, the IT industry is responding to these understandable business concerns. "Legacy modernization" and "legacy transformation" are now popular terms; they mean reusing and refactoring existing core business logic by providing new user interfaces (typically Web interfaces) and service-enabled access (e.g., through Web services). These techniques allow organizations to understand their existing code assets (using discovery tools), provide new user and application interfaces to existing code, improve workflow, contain costs, minimize risk, and retain classic qualities of service (near 100% uptime, security, scalability, etc.).[citation needed] Technology companies involved in "enterprise transformation", including IBM, are growing and profiting from what many people feel is a more rational approach toward legacy systems.[citation needed]
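
As a rough illustration of the service-enabling approach, the sketch below (in Java, using the JDK's built-in com.sun.net.httpserver package) exposes a hypothetical legacy balance-lookup routine through a simple HTTP endpoint. The routine name, the URL, and the JSON format are assumptions made for the example, not part of any particular modernization product:

    import com.sun.net.httpserver.HttpServer;
    import java.io.IOException;
    import java.io.OutputStream;
    import java.net.InetSocketAddress;
    import java.nio.charset.StandardCharsets;

    public class BalanceService {
        // Hypothetical stand-in for an existing legacy routine.
        static String legacyAccountBalance(String accountId) {
            return "{\"account\": \"" + accountId + "\", \"balance\": \"1234.56\"}";
        }

        public static void main(String[] args) throws IOException {
            HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
            // New clients call GET /balance?id=..., never the legacy routine itself.
            server.createContext("/balance", exchange -> {
                String query = exchange.getRequestURI().getQuery();
                String id = (query != null && query.startsWith("id="))
                        ? query.substring(3) : "unknown";
                byte[] body = legacyAccountBalance(id).getBytes(StandardCharsets.UTF_8);
                exchange.getResponseHeaders().set("Content-Type", "application/json");
                exchange.sendResponseHeaders(200, body.length);
                try (OutputStream out = exchange.getResponseBody()) {
                    out.write(body);
                }
            });
            server.start();
        }
    }

A request to http://localhost:8080/balance?id=42 would then return the legacy result as JSON, without the client needing any knowledge of the underlying system.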

The reexamination of attitudes toward legacy systems is also inviting more reflection on what makes legacy systems as durable as they are. Technologists are relearning that sound architecture, practiced up front, helps businesses avoid costly and risky rewrites in the first place. The longest-lived legacy systems tend to be those that embraced well-known IT architectural principles, with careful planning and strict methodology during implementation; poorly designed systems often do not last. (Visual Basic, for example, encouraged violation of the age-old architectural principle of separating business logic from presentation logic and data access.[citation needed]) Thus, many organizations are rediscovering not only the value of their legacy systems but also the philosophical underpinnings of those systems.
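
As a minimal, hypothetical sketch of that separation principle (all names invented for the example, in Java), the code below keeps data access, business rules, and presentation in distinct classes, so any one layer can be replaced or given a new interface without rewriting the others:

    // Data-access layer: the only code that knows where balances are stored.
    class AccountStore {
        double loadBalance(String accountId) {
            return 100.00;   // stub; a real system would read a database or file
        }
    }

    // Business-logic layer: rules only, no I/O and no formatting.
    class InterestCalculator {
        private final AccountStore store;
        InterestCalculator(AccountStore store) { this.store = store; }

        double yearlyInterest(String accountId, double rate) {
            return store.loadBalance(accountId) * rate;
        }
    }

    // Presentation layer: formatting only, no business rules.
    public class ConsoleReport {
        public static void main(String[] args) {
            InterestCalculator calc = new InterestCalculator(new AccountStore());
            System.out.printf("Interest due: %.2f%n", calc.yearlyInterest("42", 0.05));
        }
    }

Either the storage or the presentation can then be modernized independently of the business rules.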

References

  • Bisbal, J., Lawless, D., Wu, B., & Grimson, J. (1999). Legacy Information System Migration: A Brief Review of Problems, Solutions and Research Issues. IEEE Software, 16, 103–111.

This article was originally based on material from the Free On-line Dictionary of Computing, which is licensed under the GFDL.
