Logic model
A logic model (also known as a logical framework, theory of change, or program matrix) is a tool used most often by managers and evaluators of programs to assess a program's effectiveness. Logic models are usually a graphical depiction of the logical relationships between the resources, activities, outputs and outcomes of a program.[1] While logic models can be presented in many ways, the underlying purpose of constructing one is to assess the "if-then" (causal) relationships between the elements of the program: if the resources are available for a program, then the activities can be implemented; if the activities are implemented successfully, then certain outputs and outcomes can be expected. Logic models are most often used during the evaluation stage of a program, but they can also be used during planning and implementation.[2]
Versions
In its simplest form, a logic model has four components:[3]
| Inputs | Activities | Outputs | Outcomes/impacts |
|---|---|---|---|
| What resources go into a program | What activities the program undertakes | What is produced through those activities | The changes or benefits that result from the program |
| e.g. money, staff, equipment | e.g. development of materials, training programs | e.g. number of booklets produced, workshops held, people trained | e.g. increased skills/knowledge/confidence, leading in the longer term to promotion, a new job, etc. |
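Read as a data structure, the basic model is simply four ordered lists linked by the "if-then" logic described above. The following minimal sketch (in Python, with hypothetical class and field names that are not part of any standard logic model tool) records the example from the table:

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class LogicModel:
    """Illustrative container for the four basic logic model components."""
    inputs: List[str] = field(default_factory=list)      # resources that go into the program
    activities: List[str] = field(default_factory=list)  # what the program undertakes
    outputs: List[str] = field(default_factory=list)     # what is produced through those activities
    outcomes: List[str] = field(default_factory=list)    # changes or benefits that result


# Hypothetical example, populated from the table above.
training_program = LogicModel(
    inputs=["money", "staff", "equipment"],
    activities=["develop materials", "run training workshops"],
    outputs=["booklets produced", "workshops held", "people trained"],
    outcomes=["increased skills, knowledge and confidence",
              "longer term: promotion, new job"],
)
```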
Following the early development of the logic model in the 1970s by Carol Weiss, Joseph Wholey and others, many refinements and variations have been added to the basic concept. Many versions of logic models set out a series of outcomes/impacts, explaining in more detail the logic of how an intervention contributes to intended or observed results.[4] This will often include distinguishing between short-term, medium-term and long-term results, and between direct and indirect results.
Some logic models also include assumptions (the beliefs prospective grantees hold about the program, the people involved, the context, and the way they expect the program to work) and external factors (the environment in which the program operates, including the outside influences that interact with and affect the program's action).
University Cooperative Extension Programs in the US have developed a more elaborate logic model, called the Program Action Logic Model, which includes six steps:
- Inputs (what we invest)
- Outputs:
  - Activities (the actual tasks we do)
  - Participation (who we serve; customers and stakeholders)
  - Engagement (how those we serve engage with the activities)
- Outcomes/Impacts:
  - Short term (learning: awareness, knowledge, skills, motivations)
  - Medium term (action: behavior, practice, decisions, policies)
  - Long term (consequences: social, economic, environmental, etc.)
Preceding the Inputs is a description of the Situation and Priorities: the considerations that determine what inputs will be needed.
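The Program Action Logic Model can be sketched in the same way as the basic model above, with the outputs and outcomes broken into their sub-categories and the situation and priorities recorded ahead of the inputs. This is an illustrative sketch only; the class and field names are hypothetical and not drawn from the Extension materials.

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class Outputs:
    activities: List[str] = field(default_factory=list)     # the actual tasks we do
    participation: List[str] = field(default_factory=list)  # who we serve
    engagement: List[str] = field(default_factory=list)     # how those we serve engage


@dataclass
class Outcomes:
    short_term: List[str] = field(default_factory=list)   # learning
    medium_term: List[str] = field(default_factory=list)  # action
    long_term: List[str] = field(default_factory=list)    # consequences


@dataclass
class ProgramActionLogicModel:
    situation: str = ""                                    # the situation the program addresses
    priorities: List[str] = field(default_factory=list)   # considerations that determine the inputs
    inputs: List[str] = field(default_factory=list)       # what we invest
    outputs: Outputs = field(default_factory=Outputs)
    outcomes: Outcomes = field(default_factory=Outcomes)
```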
The University of Wisconsin Extension offers a series of guidance documents[5] on the use of logic models. There is also an extensive bibliography[6] of work on this program logic model.
Advantages
Describing work in this way gives managers a clearer way to define the work and measure it, and performance measures can be drawn from any of the steps. One key insight of the logic model is the importance of measuring final outcomes or results: it is quite possible to waste time and money (inputs), "spin the wheels" on work activities, or produce outputs without achieving the desired outcomes. These outcomes (impacts, long-term results) are the only justification for doing the work in the first place. For commercial organizations, outcomes relate to profit; for not-for-profit or governmental organizations, outcomes relate to the successful achievement of mission or program goals.
Uses of the logic model
Program planning
One of the most important uses of the logic model is for program planning. Here it helps managers to 'plan with the end in mind' (Stephen Covey), rather than just consider inputs (e.g. budgets, employees) or only the tasks that must be done. In the past, program logic was justified by explaining the process from the perspective of an insider. Paul McCawley (no date) outlines how this process was approached:
- We invest this time/money so that we can generate this activity/product.
- The activity/product is needed so people will learn how to do this.
- People need to learn that so they can apply their knowledge to this practice.
- When that practice is applied, the effect will be to change this condition.
- When that condition changes, we will no longer be in this situation.
While logic models have been used successfully in this way, Millar et al. (2001) have suggested that following the above sequence, from inputs through to outcomes, can limit one's thinking to existing activities, programs and research questions. Instead, by using the logic model to focus on the intended outcomes of a particular program, the question changes from 'what is being done?' to 'what needs to be done?'. McCawley (no date) suggests that, using this reasoning, a logic model for a program can be built by asking the following questions in sequence:
- What is the current situation that we intend to impact?
- What will it look like when we achieve the desired situation or outcome?
- What behaviors need to change for that outcome to be achieved?
- What knowledge or skills do people need before the behavior will change?
- What activities need to be performed to cause the necessary learning?
- What resources will be required to achieve the desired outcome?
By placing the focus on ultimate outcomes or results, planners can think backwards through the logic model to identify how best to achieve the desired results. Planners therefore need to understand the difference between the categories of the logic model.
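As a rough illustration of this backward planning, the answers to the six questions can be collected and then read back in reverse to populate a simple logic model. The function name, dictionary keys and the mapping of answers onto the model's categories are assumptions made for this sketch, not part of McCawley's framework.

```python
from typing import Dict, List


def build_model_backwards(answers: Dict[str, List[str]]) -> Dict[str, List[str]]:
    """Assemble a simple logic model from answers to the six planning questions,
    working from the desired outcome back to the required resources."""
    return {
        "situation": answers["current_situation"],         # Q1: what we intend to impact
        "inputs": answers["resources"],                     # Q6: resources required
        "activities": answers["activities"],                # Q5: activities that cause the learning
        "short_term_outcomes": answers["learning"],         # Q4: knowledge and skills needed
        "medium_term_outcomes": answers["behaviors"],       # Q3: behaviors that must change
        "long_term_outcomes": answers["desired_outcome"],   # Q2: the desired situation
    }


# Hypothetical answers for a job-training program.
model = build_model_backwards({
    "current_situation": ["high local unemployment"],
    "desired_outcome": ["participants in stable employment"],
    "behaviors": ["participants apply for and keep jobs"],
    "learning": ["interview and trade skills"],
    "activities": ["run skills workshops"],
    "resources": ["trainers", "venue", "funding"],
})
```

Note that the questions are answered outcome-first, but the assembled model still reads in the conventional inputs-to-outcomes direction.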
Performance evaluation
The logic model is often used in government or not-for-profit organizations, where the mission and vision are not aimed at achieving a financial benefit. In such situations, where profit is not the intended result, it may be difficult to monitor progress toward outcomes; a program logic model supplies the needed indicators, in the form of output and outcome measures of performance. It is therefore important in these organizations to specify the desired results carefully and to consider how they will be monitored over time. Often, as in education or social programs, the outcomes are long-term and mission success lies far in the future. In these cases, intermediate or shorter-term outcomes may be identified that provide an indication of progress toward the ultimate long-term outcome.
Traditionally, government programs were described only in terms of their budgets. It is easy to measure the amount of money spent on a program, but this is a poor indicator of mission success. Likewise it is relatively easy to measure the amount of work done (e.g. number of workers or number of years spent), but the workers may have just been 'spinning their wheels' without getting very far in terms of ultimate results or outcomes. The production of outputs is a better indicator that something was delivered to customers, but it is still possible that the output did not really meet the customer's needs, was not used, etc. Therefore, the focus on results or outcomes has become a mantra in government and not-for-profit programs.
The President's Management Agenda[7] is an example of the increasing emphasis on results in government management. It states:
"Government likes to begin things — to declare grand new programs and causes. But good beginnings are not the measure of success. What matters in the end is completion. Performance. Results."[8]
However, although outcomes are used as the primary indicators of program success or failure, they are insufficient on their own. Outcomes may be achieved through processes independent of the program, and an evaluation of those outcomes alone would suggest program success when in fact external factors were responsible (Rossi, Lipsey and Freeman, 2004). In this respect, Rossi, Lipsey and Freeman (2004) suggest that a typical evaluation study should concern itself with measuring how the process indicators (inputs and outputs) have affected the outcome indicators. A program logic model needs to be assessed or designed for an evaluation of this kind to be possible. The logic model can, and indeed should, be used in both formative evaluations (conducted during implementation, offering the chance to improve the program) and summative evaluations (conducted after the program's completion).
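One way to operationalize this is to attach a performance indicator to each stage of the logic model and review the indicators both during implementation (formative) and after completion (summative). The sketch below is illustrative only; the indicator names, targets and helper function are hypothetical.

```python
from dataclasses import dataclass
from typing import Dict, List


@dataclass
class Indicator:
    """A single performance measure tied to one stage of the logic model."""
    name: str
    target: float
    actual: float = 0.0

    def met(self) -> bool:
        return self.actual >= self.target


# Hypothetical indicators for a training program, keyed by logic model stage.
indicators: Dict[str, List[Indicator]] = {
    "outputs": [Indicator("workshops held", target=12, actual=7)],
    "short_term_outcomes": [Indicator("participants passing a skills test", target=100, actual=85)],
    "long_term_outcomes": [Indicator("participants employed after 12 months", target=60)],
}


def off_track(stage: str) -> List[str]:
    """Formative check: list indicators for a stage that are below target."""
    return [i.name for i in indicators.get(stage, []) if not i.met()]


# During implementation, review output and short-term indicators; long-term
# outcome indicators are only meaningful in a summative evaluation.
print(off_track("outputs"))              # ['workshops held']
print(off_track("short_term_outcomes"))  # ['participants passing a skills test']
```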
The logic model and other management frameworks
There are numerous other popular management frameworks that have been developed in recent decades. This often causes confusion, because the various frameworks have different functions. It is important to select the right tool for the job. The following list of popular management tools is suggested to indicate where they are most appropriate (this list is by no means complete).
Organizational assessment tools
Fact-gathering tools for a comprehensive view of the as-is situation in an organization, but without prescribing how to change it:
- Baldrige Criteria for Performance Excellence (United States)
- EFQM (Europe)
- SWOT Analysis (Strengths, Weaknesses, Opportunities, Threats)
- Skills audits
- Customer surveys
Strategic planning tools
For identifying and prioritizing major long-term desired results in an organization, and strategies to achieve those results:
- Strategic Vision (Writing a clear "picture of the future" statement)
- Strategy maps
- Portfolio Management (Managing a portfolio of interdependent projects)
- Participatory Impact Pathways Analysis (An approach for project staff and stakeholders to jointly agree on a vision, develop a logic model and an evaluation plan)
- Weaver's Triangle[9] (simply asks organizations to identify inputs, outcomes and outputs)
Program planning and evaluation tools
For developing details of individual programs (what to do and what to measure) once overall strategies have been defined:
- Program logic model (this entry)
- Work Breakdown Structure
- Managing for Results model
- Earned Value Management
- PART - Program Assessment Rating Tool (US federal government)
Performance measurement tools
For measuring, monitoring and reporting the quality, efficiency, speed, cost and other aspects of projects, programs and/or processes:
- Balanced scorecard systems
- KPI - key performance indicators
- Critical success factors
Process improvement tools
For monitoring and improving the quality or efficiency of work processes:
- PDCA - Plan-do-check-act (Deming)
- TQM - Total Quality Management (Shewhart, Deming, Juran)
- Six Sigma
- BPR - Business Process Reengineering
- Organizational Design
Process standardization tools
For maintaining and documenting processes or resources to keep them repeatable and stable:
- ISO 9000
- CMMI - Capability Maturity Model Integration
- Business Process Management (BPM)
- Configuration management
- Enterprise Architecture
Notes
- ↑ McCawley, Paul. "The logic model for program planning and evaluation". University of Idaho.
- ↑ Innovation Network. "Logic model workbook". Retrieved 28 August 2012.
- ↑ W. K. Kellogg Foundation (2001). W. K. Kellogg Foundation Logic Model Development Guide.
- ↑ Weiss, C.H. (1972). Evaluation Research. Methods for Assessing Program Effectiveness. Prentice-Hall, Inc., Englewood Cliffs, New Jersey
- ↑ guidance documents
- ↑ bibliography
- ↑ President's Management Agenda (2002)
- ↑ Results.gov: President's Management Agenda
- ↑ http://www.evaluationsupportscotland.org.uk/article.asp?id=9&node=gettingstarted
References
The following references on the logic model were compiled by Alan Listiak from a discussion on EVALTALK, the listserv of the American Evaluation Association.
- Mayeske, George W. and Michael T. Lambur (2001). How to Design Better Programs: A Staff Centered Stakeholder Approach to Program Logic Modeling. Crofton, MD: The Program Design Institute. Highly Recommended.
- Mayeske, George W. (2002). How to Develop Better Programs & Determine Their Results: An Organic & Heuristic Client & Staff Centered Approach with Stakeholder Involvement. Bowie, MD: The Program Design Institute. Highly Recommended.
The first manual (How to Design Better Programs) is a step-by-step guide to developing and implementing logic models. The second manual (How to Develop Better Programs & Determine Their Results) focuses on how to develop experiential educational programs "based on, but not restricted to, the use of program logic models which serve as a tool for the development process" (from the Foreword).
Both manuals are available from The Program Design Institute, c/o Dr. George W. Mayeske, 12524 Knowledge Lane, Bowie, MD 20715-2622. The logic modeling manual is $28.00 (including shipping) and the Better Programs manual is $45.00 (including shipping); checks only. Both manuals can be purchased at a discount; contact Dr. Mayeske for details at gwmayeske@aol.com.
- W. K. Kellogg Foundation (2001). W. K. Kellogg Foundation Logic Model Development Guide.
Available at no cost online. This guide is not as detailed as the Program Design Institute guides on the nuts and bolts of logic modeling, but is better at discussing program theory and its application, and it is free to download. Highly Recommended.
Also see: W. K. Kellogg Foundation (1998). W. K. Kellogg Foundation Evaluation Handbook. Also available at no cost online.
- Devine, Patricia (1999). Using Logic Models in Substance Abuse Treatment Evaluations. Fairfax, VA: National Evaluation Data and Technical Assistance Center, Caliber Associates. Highly Recommended.
This paper discusses the use of logic models in planning and evaluating substance abuse treatment services. The best part is the "sample data maps" that specify evaluation questions, measures, and variables. The paper is part of the Integrated Evaluation Methods Package for substance abuse treatment programs developed under the auspices of the Center for Substance Abuse Treatment, Department of Health and Human Services. The full discussion of this evaluation framework, concepts, and tools is presented in: Devine, Patricia (1999). A Guide for Substance Abuse Treatment Knowledge-Generating Activities. Fairfax, VA: National Evaluation Data and Technical Assistance Center, Caliber Associates.
There are other papers in the Integrated Evaluation Methods Package available at http://www.calib.com/home/work_samples/pubs.cfm under the heading Substance Abuse Research and Evaluation, Evaluation Tools and Resources. These include:
  - Devine, Patricia (1999). A Guide to Process Evaluation of Substance Abuse Treatment Services. Fairfax, VA: National Evaluation Data and Technical Assistance Center, Caliber Associates.
  - Devine, Patricia, Bullman, Stephanie, & Zeaske, Jessica (1999). Substance Abuse Treatment Evaluation Product Outlines Notebook. Fairfax, VA: National Evaluation Data and Technical Assistance Center, Caliber Associates.
  - Devine, Patricia, Christopherson, Eric, Bishop, Sharon, Lowery, Jacquelyn, & Moore, Melody (1999). Self-Adjusting Treatment Evaluation Model. Fairfax, VA: National Evaluation Data and Technical Assistance Center, Caliber Associates.
  - Moore, Melody (1998). Building Team Capability to Fully Implement and Utilize the Self-Adjusting Treatment Evaluation Model. Fairfax, VA: National Evaluation Data and Technical Assistance Center, Caliber Associates. [Discusses logic models and the Self-Adjusting Treatment Evaluation Model, provides step-by-step procedures for building capability to use the model, and includes assessment instruments.]
- The University of Wisconsin-Cooperative Extension has an online course entitled Enhancing Program Performance with Logic Models. The course contains two modules: Module 1, "Logic Model Basics," is an introduction to logic models, and Module 2, "Introducing The Community Nutrition Education Logic Model," is an application of logic models to community nutrition education programs. Each module has various interactive elements, including practice activities designed to help students better understand the course content. The citation is: Taylor-Powell, E., Jones, L., & Henert, E. (2002). Enhancing Program Performance with Logic Models. Retrieved December 1, 2003, from the University of Wisconsin-Extension web site: http://www1.uwex.edu/ces/lmcourse/.
- United Way of America (1996). Measuring Program Outcomes: A Practical Approach. This manual can be purchased for $5.00 plus S&H by calling 1-800-772-0008 and ordering item number 0989. The manual's table of contents and excerpts are available on the United Way web site.
- Harrell, Adele, with Burt, Martha, Hatry, Harry, Rossman, Shelli, Roth, Jeffrey, and Sabol, William (no date). Evaluation Strategies for Human Service Programs - A Guide for Policymakers and Providers. Washington, DC: The Urban Institute.
This guide focuses on developing a logic model and selecting and implementing an evaluation design. Gives an example of a logic model for a children-at-risk program.
- Hernandez, M. & Hodges, S. (2003). Crafting Logic Models for Systems of Care: Ideas into Action. [Making children's mental health services successful series, volume 1]. Tampa, FL: University of South Florida, The Louis de la Parte Florida Mental Health Institute, Department of Child & Family Studies. This monograph is a guide to developing a system of care using a theory-based approach. System stakeholders can use the theory of change approach to move from ideas to action-oriented strategies to achieve their goals and understand the relationships among the populations that the system is intended to serve.
- Millar, A., R.S. Simeone, and J.T. Carnevale. (2001). Logic models: a systems tool for performance management. Evaluation and Program Planning. 24:73-81.
- Rossi, P., Lipsey, M.W., and Freeman, H.E. (2004). Evaluation. A systematic approach (7th ed.). Thousand Oaks, CA: Sage.
- McCawley, P.F. (no date). The Logic Model for Program Planning and Evaluation. University of Idaho Extension. Retrieved from http://www.uiweb.uidaho.edu/extension/LogicModel.pdf
Other resources
- Alter, C. & Murty, S. (1997). Logic modeling: A tool for teaching practice evaluation. Journal of Social Work Education, 33(1), 103-117.
- Conrad, Kendon J., & Randolph, Frances L. (1999). Creating and using logic models: Four perspectives. Alcoholism Treatment Quarterly, 17(1-2), 17-32.
- Hernandez, Mario (2000). Using logic models and program theory to build outcome accountability. Education and Treatment of Children, 23(1), 24-41.
- Innovation Network's Point K Logic Model Builder (2006). A set of three online evaluation tools that includes a Logic Model Builder (requires registration).
- Julian, David A. (1997). The utilization of the logic model as a system level planning and evaluation device. Evaluation and Program Planning, 20(3), 251-257.
- McLaughlin, J. A., & Jordan, G. B. (1999). Logic models: A tool for telling your program's performance story. Evaluation and Program Planning, 22(1), 65-72.
- Stinchcomb, Jeanne B. (2001). Using logic modeling to focus evaluation efforts: Translating operational theories into practical measures. Journal of Offender Rehabilitation, 33(2), 47-65.
- Unrau, Y.A. (2001). Using client exit interviews to illuminate outcomes in program logic models: A case example. Evaluation and Program Planning, 24(4), 353-361.
- Usable Knowledge (2006). A 15-minute Flash-based tutorial on logic models.
Additional references
- den Heyer, M. (2001). A Bibliography for Program Logic Models / Logframe Analysis. This is a list of articles that evaluate program logic models as a set of tools and how they have been used.
- Shaping Outcomes. An online curriculum in outcomes-based planning and evaluation designed for museum and library professionals and students.
- http://www.innonet.org/client_docs/File/logic_model_workbook.pdf. A workbook on how to build a logic model.
- Design, Plan, Implement - Modeling Tool (http://www.impactmap.org)