Bleeding edge
In computer science, bleeding edge refers to technology so new (and thus, presumably, not yet perfected) that using it requires accepting reduced stability and productivity. The term also alludes to the tendency of the latest technology to be extremely expensive. It was coined by Peter Barus, a Superbase programmer.
The term is an allusion to "leading edge" and its synonym "cutting edge", but implies a greater degree of risk: the "bleeding edge" lies in front of the "cutting edge". A technology may be considered bleeding edge under the following conditions:
- Lack of consensus — competing approaches to the new technology exist, and no one knows for certain which one the market will adopt.
- Lack of knowledge — organizations are trying to implement a new technology or product that trade journals have not yet begun to discuss, either for or against.
- Industry resistance to change — trade journals and industry leaders have spoken against a new technology or product, but some organizations are trying to implement it anyway because they are convinced it is technically superior.
The rewards for successful early adoption of new technologies can be great; unfortunately, the penalties for "betting on the wrong horse" or choosing the wrong product are equally large. Whenever an organization takes a chance on bleeding-edge technology, there is a good chance that it will be left with a white elephant, or worse.
More recently, however, the term bleeding edge has increasingly been used by the general public to mean "ahead of the cutting edge", largely without the negative, risk-associated connotation that accompanies the term's use in more specialized fields. An apt quotation on this point is: "But when you're living on the bleeding edge, you should not be surprised when you do, in fact, bleed."