Evidence-based practice
Evidence-based practice (EBP) is an interdisciplinary approach to clinical practice that has been gaining ground since its formal introduction in 1992. It started in medicine as evidence-based medicine (EBM) and spread to other fields such as audiology, speech-language pathology, dentistry, nursing, child life specialty, psychology, social work, education, and library and information science. EBP is traditionally defined in terms of a "three-legged stool" integrating three basic principles: (1) the best available research evidence bearing on whether and why a treatment works, (2) clinical expertise (clinical judgment and experience), used to rapidly identify each patient's unique health state and diagnosis and the individual risks and benefits of potential interventions, and (3) client preferences and values.[1][2]
Evidence-based behavioral practice (EBBP) "entails making decisions about how to promote health or provide care by integrating the best available evidence with practitioner expertise and other resources, and with the characteristics, state, needs, values and preferences of those who will be affected. This is done in a manner that is compatible with the environmental and organizational context. Evidence is research findings derived from the systematic collection of data through observation and experiment and the formulation of questions and testing of hypotheses".[3]
EBP and the history of medicine and education
In recent years, EBP has been stressed by professional organizations such as the American Psychological Association, the American Occupational Therapy Association, the American Nurses Association, and the American Physical Therapy Association, which have also strongly recommended that their members carry out investigations to provide evidence supporting or rejecting the use of specific interventions. Similar recommendations apply to the Canadian counterparts of these associations. Pressure toward EBP has also come from public and private health insurance providers, which have sometimes refused coverage of practices lacking systematic evidence of usefulness.
Areas of professional practice, such as medicine, psychology, psychiatry, and rehabilitation, have had periods in their pasts when practice was based on loose bodies of knowledge. Some of that knowledge was lore that drew upon the experiences of generations of practitioners, and much of it had no valid scientific evidence with which to justify the practices involved.
In the past, this often left the door open to quackery perpetrated by individuals with no training in the domain who wished to convey the impression that they had such training, whether for profit or for other motives. As the scientific method became increasingly recognized as the means of providing sound validation for such practices, the need for a way to exclude quack practitioners became clear, not only as a way of preserving the integrity of the field (particularly medicine) but also as a way of protecting the public from the dangers of their "cures". Furthermore, even where overt quackery was not present, it was recognized that there was value in identifying what actually works, so that it could be improved and promoted.
The notion of evidence-based practice has also had an influence in the field of education. Here, some commentators have suggested that the putative lack of any conspicuous progress is attributable to practice resting on the unconnected and noncumulative experience of thousands of individual teachers, each re-inventing the wheel and failing to learn from hard scientific evidence about 'what works'. Opponents of this view argue that hard scientific evidence is a misnomer in education; knowing that a drug works (in medicine) is entirely different from knowing that a teaching method works, for the latter will depend on a host of factors, not least those to do with the style, personality and beliefs of the teacher and the needs of the particular children (Hammersley 2013). Some opponents of EBP in education suggest that teachers need to develop their own personal practice, dependent on personal knowledge garnered through their own experience. Others argue that this must be combined with research evidence, but without the latter being treated as a privileged source.[4]
Evidence-based practice
Evidence-based practice is an approach that tries to specify the way in which professionals or other decision-makers should make decisions, by identifying such evidence as there may be for a practice and rating it according to how scientifically sound it is. Its goal is to eliminate unsound or excessively risky practices in favor of those that have better outcomes.
EBP uses various methods (e.g., carefully summarizing research, putting out accessible research summaries, educating professionals in how to understand and apply research findings) to encourage, and in some instances to force, professionals and other decision-makers to pay more attention to evidence that can inform their decision-making. Where EBP is applied, it encourages professionals to use the best evidence possible, i.e., the most appropriate information available.
The current implementation of EBP
The core activities at the root of evidence-based practice can be identified as:
- a questioning approach to practice leading to scientific experimentation
- meticulous observation, enumeration, and analysis replacing anecdotal case description, as exemplified by resources such as EBSCO's DynaMed (http://www.ebscohost.com/dynamed)
- recording and cataloguing the evidence for systematic retrieval.[5]
Much of the credit for today's EBP techniques belongs to Archie Cochrane, an epidemiologist and author of the book Effectiveness and Efficiency: Random Reflections on Health Services.[6] Cochrane suggested that, because resources will always be limited, they should be used to provide forms of health care that have been shown in properly designed evaluations to be effective.[7] Cochrane maintained that the most reliable evidence was that which came from randomised controlled trials (RCTs).
One of the main reasons that EBPs have been so successfully incorporated into treatment services is the large number of studies linking them to improved client health outcomes, together with the general attitude that treatments should be based on scientific evidence (Institute of Medicine, 2001; Sackett & Haynes, 1995). It is now assumed that professionals must be well-informed and up-to-date with the newest knowledge in order to best serve their clients and remain professionally relevant (Gibbs, 2003; Pace, 2008; Patterson et al., 2012).
Evidence-based practice vs. tradition
Evidence-based practice (EBP) involves complex and conscientious decision-making which is based not only on the available evidence but also on patient characteristics, situations, and preferences. It recognizes that care is individualized and ever changing and involves uncertainties and probabilities.
EBP develops individualized guidelines of best practices to inform the improvement of whatever professional task is at hand. Evidence-based practice is a philosophical approach that is in opposition to rules of thumb, folklore, and tradition. Examples of a reliance on "the way it was always done" can be found in almost every profession, even when those practices are contradicted by new and better information.
However, in spite of the enthusiasm for EBP over the last decade or two, some authors have redefined EBP in ways that contradict, or at least add other factors to, the original emphasis on empirical research foundations. For example, EBP may be defined as treatment choices based not only on outcome research but also on practice wisdom (the experience of the clinician) and on family values (the preferences and assumptions of a client and his or her family or subculture).[8]
Research-oriented scientists, as opposed to such authors, test whether particular practices work better for different subcultures or personality types, rather than simply accepting received wisdom. For example, the MATCH study, run at many sites around the US by the National Institute on Alcohol Abuse and Alcoholism (NIAAA), tested whether particular types of clients with alcohol dependence would benefit differentially from three different treatment approaches to which they were randomly assigned.[9] The idea was not to test the approaches themselves but the matching of clients to treatments, and though this missed the question of client choice, it did demonstrate a lack of difference between the approaches regardless of most client characteristics. The exception was that clients with high anger scores did better with the non-confrontational Motivational Enhancement approach, which has been shown to be superior in a meta-analysis of alcohol treatment outcome research and which required only four sessions, as opposed to twelve, within Project MATCH.
The theories of evidence-based practice are becoming more commonplace in nursing care. Nurses who are baccalaureate prepared "are expected to seek out and collaborate with other types of nurses to demonstrate the positives of a practice that is based on evidence." Examining a few articles to see how this type of practice has influenced the standard of care is informative, but such reviews are rarely internally valid, and the articles seldom specify their biases. Evidence-based practice has earned its reputation by examining the reasons why procedures, treatments, and medicines are given at all, which is important for refining practice so that the goal of assuring patient safety is met.[10]
Research-based evidence
Evidence-based design and development decisions are made after reviewing information from repeated rigorous data gathering, instead of relying on rules, single observations, or custom. Evidence-based medicine and evidence-based nursing practice are the two largest fields employing this approach. In psychiatry and community mental health, evidence-based practice guides have been created by organizations such as the Substance Abuse and Mental Health Services Administration and the Robert Wood Johnson Foundation, in conjunction with the National Alliance on Mental Illness. Evidence-based practice has now spread into a diverse range of areas outside of health, where the same principles are known by names such as results-focused policy, managing for outcomes, and evidence-informed practice.
This model of care has been studied for 30 years in universities and is gradually making its way into the public sector. It effectively moves away from the old "medical model" ("You have a disease; take this pill.") to an "evidence presented model" that uses the patient as the starting point in diagnosis. EBPs are being employed in the fields of health care, juvenile justice, mental health, and social services, among others.[10]
Key elements in using the best evidence to guide the practice of any professional include the development of questions using research-based evidence, the level and types of evidence to be used, and the assessment of effectiveness after completing the task or effort. One obvious problem with EBP in any field is the use of poor quality, contradictory, or incomplete evidence. Evidence-based practice continues to be a developing body of work for professions as diverse as education, psychology, economics, nursing, social work and architecture.
Psychology
Evidence-based practice of psychology requires practitioners to follow psychological approaches and techniques that are based on the best available research evidence (Sackett, Straus, Richardson, Rosenberg, & Haynes, 2000). Evidence suggests that some therapy approaches work better than others. Criteria for empirically supported therapies have been defined by Chambless and Hollon (1998). Accordingly, a therapy is considered "efficacious and specific" if there is evidence from at least two settings that it is superior to a pill or psychological placebo or to another bona fide treatment. If there is evidence from two or more settings that the therapy is superior to no treatment, it is considered "efficacious". If there is support from one or more studies in just a single setting, the therapy is considered possibly efficacious, pending replication. Following these guidelines, cognitive behavior therapy (CBT) stands out as having the most empirical support for a wide range of symptoms in adults, adolescents, and children.[11] The term "evidence-based practice" is not always used in such a rigorous fashion, however, and many psychologists claim to follow "evidence-based approaches" even when the methods they use do not meet established criteria for "efficacy" (Berke, Rozell, Hogan, Norcross, & Karpiak, 2011). In reality, not all mental health practitioners receive training in evidence-based approaches, and members of the public are often unaware that evidence-based practices exist; consequently, patients do not always receive the most effective, safe, and cost-effective treatments available. At the same time, there is no guarantee that mental health practitioners trained in "evidence-based approaches" are more effective or safer than those trained in other modalities. To improve dissemination of evidence-based practices, the Association for Behavioral and Cognitive Therapies (ABCT) and the Society of Clinical Child and Adolescent Psychology (SCCAP, Division 53 of the American Psychological Association) maintain updated information on their websites about evidence-based practices in psychology for practitioners and the general public. "Evidence-based" is, moreover, a technical term: many treatments with decades of evidence supporting their efficacy are nonetheless not considered "evidence-based".
Some discussions of EBP in clinical psychology settings distinguish it from "empirically supported treatments" (ESTs). ESTs have been defined as "clearly specified psychological treatments shown to be efficacious in controlled research with a delineated population".[12] Those who distinguish EBP from ESTs highlight the greater emphasis in EBP on integrating the "three legs" of research evidence, clinician expertise, and client values. From this perspective, ESTs are understood to place primary or exclusive emphasis on the first "leg", namely research evidence.[13][14]
Levels of evidence and evaluation of research
Because conclusions about research results are made in a probabilistic manner, it is impossible to sort outcome research reports into two simple categories. Research evidence does not fall simply into "evidence-based" and "non-evidence-based" classes, but can lie anywhere on a continuum between the two, depending on factors such as the way the study was designed and carried out. The existence of this continuum makes it necessary to think in terms of "levels of evidence", or categories of stronger or weaker evidence that a treatment is effective. To classify a research report as strong or weak evidence for a treatment, it is necessary to evaluate the quality of the research as well as the reported outcome.[15]
Evaluation of research quality can be a difficult task requiring meticulous reading of research reports and background information. It may not be appropriate simply to accept the conclusion reported by the researchers; for example, in one investigation of outcome studies, 70% were found to have stated conclusions unjustified by their research design.[16]
Although early consideration of EBP issues by psychologists provided a stringent but simple definition of EBP, requiring two independent randomized controlled trials supporting the effectiveness of a treatment,[12] it became clear that additional factors needed to be considered. These included both the need for lower but still useful levels of evidence, and the need to require even the "gold standard" randomized trials to meet further criteria.
A number of protocols for the evaluation of research reports have been suggested. Some divide research evidence dichotomously into EBP and non-EBP categories, while others employ multiple levels of evidence. Although the criteria used by the various protocols overlap to some extent, they do not do so completely.
The Kaufman Best Practices Project approach did not use an EBP category per se, but instead provided a protocol for selecting the most acceptable treatment from a group of interventions intended to treat the same problems.[17] To be designated as "best practice", a treatment would need to have a sound theoretical base, general acceptance in clinical practice, and considerable anecdotal or clinical literature. This protocol also requires absence of evidence of harm, at least one randomized controlled study, descriptive publications, a reasonable amount of necessary training, and the possibility of being used in common settings. Missing from this protocol are any treatment of nonrandomized designs (in which clients or practitioners decide whether an individual will receive a certain treatment), any requirement to specify the type of comparison group used, and attention to confounding variables, the reliability and validity of outcome measures, the type of statistical analysis required, and a number of other factors required by some evaluation protocols.[15]
A protocol suggested by Saunders et al.[18] assigns research reports to six categories, on the basis of research design, theoretical background, evidence of possible harm, and general acceptance. To be classified under this protocol, there must be descriptive publications, including a manual or similar description of the intervention. This protocol does not consider the nature of any comparison group, the effect of confounding variables, the nature of the statistical analysis, or a number of other criteria. The six categories are:
- Category 1, well-supported, efficacious treatment: two or more randomized controlled outcome studies compare the target treatment to an appropriate alternative treatment and show a significant advantage to the target treatment.
- Category 2, supported and probably efficacious treatment: positive outcomes from nonrandomized designs with some form of control, which may involve a non-treatment group.
- Category 3, supported and acceptable treatment: support from one controlled or uncontrolled study, from a series of single-subject studies, or from work with a population different from the one of interest.
- Category 4, promising and acceptable treatment: no support except general acceptance and clinical anecdotal literature; any evidence of possible harm excludes a treatment from this category.
- Category 5, innovative and novel treatment: not thought to be harmful, but not widely used or discussed in the literature.
- Category 6, concerning treatment: the possibility of doing harm, together with unknown or inappropriate theoretical foundations.
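As a rough illustration, a protocol of this kind can be expressed as a decision procedure. The sketch below encodes the six Saunders et al. categories over a simplified description of an intervention's evidence base; the field names, boolean flags, and check order are assumptions made for illustration, since the actual protocol relies on fuller expert judgment than a few flags can capture.

```python
# A simplified, hypothetical encoding of the Saunders et al. six-category
# protocol. Real classification requires expert judgment; these fields
# and the check order are illustrative assumptions only.
from dataclasses import dataclass

@dataclass
class EvidenceProfile:
    rct_wins: int            # randomized controlled studies favoring the treatment
    controlled_support: bool # support from nonrandomized designs with some control
    any_study_support: bool  # one study, a single-subject series, or work
                             # with a different population
    generally_accepted: bool # general acceptance / clinical anecdotal literature
    evidence_of_harm: bool   # any documented possibility of harm

def saunders_category(p: EvidenceProfile) -> int:
    if p.evidence_of_harm:
        return 6  # concerning treatment
    if p.rct_wins >= 2:
        return 1  # well-supported, efficacious
    if p.controlled_support:
        return 2  # supported and probably efficacious
    if p.any_study_support:
        return 3  # supported and acceptable
    if p.generally_accepted:
        return 4  # promising and acceptable
    return 5      # innovative and novel

print(saunders_category(EvidenceProfile(2, True, True, True, False)))  # -> 1
```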
A protocol for evaluation of research quality was suggested by a report from the Centre for Reviews and Dissemination, prepared by Khan et al. and intended as a general method for assessing both medical and psychosocial interventions.[19] While strongly encouraging the use of randomized designs, this protocol noted that such designs were useful only if they met demanding criteria, such as true randomization and concealment of the assigned treatment group from the client and from others, including the individuals assessing the outcome. The Khan et al. protocol emphasized the need to make comparisons on the basis of "intention to treat" in order to avoid problems related to greater attrition in one group. The Khan et al. protocol also presented demanding criteria for nonrandomized studies, including matching of groups on potential confounding variables and adequate descriptions of groups and treatments at every stage, and concealment of treatment choice from persons assessing the outcomes. This protocol did not provide a classification of levels of evidence, but included or excluded treatments from classification as evidence-based depending on whether the research met the stated standards.
An assessment protocol has been developed by the U.S. National Registry of Evidence-Based Practices and Programs (NREPP).[20] Evaluation under this protocol occurs only if an intervention has already had one or more positive outcomes reported (with a probability of less than .05), if these have been published in a peer-reviewed journal or an evaluation report, and if documentation such as training materials has been made available. The NREPP evaluation, which assigns quality ratings from 0 to 4 on certain criteria, examines the reliability and validity of outcome measures used in the research, evidence for intervention fidelity (predictable use of the treatment in the same way every time), levels of missing data and attrition, potential confounding variables, and the appropriateness of statistical handling, including sample size.
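A minimal sketch of how criterion ratings of this kind might be aggregated appears below. The 0–4 scale and the criterion names follow the description above, but the aggregation by simple mean is an assumption for illustration; it is not NREPP's published scoring procedure.

```python
# Illustrative NREPP-style quality ratings: each criterion is scored
# 0-4, and here the scores are averaged into an overall rating.
# The ratings and the use of a simple mean are assumptions for illustration.
criteria = {
    "reliability_of_measures": 3,
    "validity_of_measures": 3,
    "intervention_fidelity": 2,
    "missing_data_and_attrition": 4,
    "potential_confounds": 2,
    "appropriateness_of_analysis": 3,
}

overall = sum(criteria.values()) / len(criteria)
print(f"overall quality rating: {overall:.1f} / 4")
```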
A protocol suggested by Mercer and Pignotti[15] uses a taxonomy intended to classify interventions on both research quality and other criteria:
- Evidence-based interventions: supported by work with randomized designs employing comparisons to established treatments, with independent replications of results, blind evaluation of outcomes, and the existence of a manual.
- Evidence-supported interventions: supported by nonrandomized designs, including within-subjects designs, and meeting the remaining criteria of the previous category.
- Evidence-informed treatments: case studies or interventions tested on populations other than the targeted group, without independent replications; a manual exists, and there is no evidence of harm or potential for harm.
- Belief-based interventions: no published research reports, or reports based on composite cases; they may rest on religious or ideological principles or may claim a basis in accepted theory without an acceptable rationale; a manual may or may not exist, and there is no evidence of harm or potential for harm.
- Potentially harmful treatments: harmful mental or physical effects have been documented, or a manual or other source shows the potential for harm.
Protocols for evaluation of research quality are still in development. So far, the available protocols pay relatively little attention to whether outcome research is relevant to efficacy (the outcome of a treatment performed under ideal conditions) or to effectiveness (the outcome of the treatment performed under ordinary, expectable conditions).
Production of evidence
A process has been specified that provides a standardised route for those seeking to produce evidence of the effectiveness of interventions.[21] Originally developed to establish processes for the production of evidence in the housing sector, the standard is general in nature and is applicable across a variety of practice areas and potential outcomes of interest.
Meta-analyses and systematic research syntheses
When there are many small or weak studies of an intervention, a statistical meta-analysis can be used to combine the studies' results and to draw a stronger conclusion about the outcome of the treatment. This can be an important contribution to establishing a foundation of evidence about an intervention.
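As an illustration of the underlying statistics, the sketch below pools several hypothetical study results using fixed-effect, inverse-variance weighting, a common approach in which more precise studies contribute more to the combined estimate. The effect sizes and standard errors are invented for illustration and do not correspond to any published analysis.

```python
import math

# Hypothetical study results: each tuple is (effect size, standard error),
# e.g., a standardized mean difference and its precision.
studies = [
    (0.30, 0.15),
    (0.45, 0.20),
    (0.10, 0.25),
]

# Inverse-variance weighting: each study is weighted by 1/SE^2, so more
# precise (typically larger) studies carry more weight in the pooled result.
weights = [1.0 / (se ** 2) for _, se in studies]
pooled = sum(w * es for (es, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1.0 / sum(weights))

# Approximate 95% confidence interval for the pooled effect.
low, high = pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se
print(f"pooled effect = {pooled:.3f}, 95% CI [{low:.3f}, {high:.3f}]")
```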
In other situations, facts about a group of study outcomes may be gathered and discussed in the form of a systematic research synthesis (SRS).[22] An SRS can be more or less useful depending on the evaluation protocol chosen, and errors in the choice or use of a protocol have led to fallacious reports.[23] The meaningfulness of an SRS report on an intervention is limited by the quality of the research under consideration, but SRS reports can be helpful to readers seeking to understand EBP-related choices.
Miller et al. provide an instructive example of the use of meta-analysis in examining treatment outcome research, incorporating the principles of rigorous empirical research from the strong end of the continuum of levels of evidence.[24] This textbook also explains how the included research was selected (e.g., controlled studies comparing two different approaches, published in peer-reviewed journals, with sufficient power to find significant differences if they occurred) and how each study was checked for validity (how was the outcome measured?) and reliability (did the researchers do what they said they did?). These checks were used to create a Cumulative Evidence Score weighted by the quality of the study (and not by the outcome), so that better studies with stronger designs and higher methodological quality ratings carry more weight than weaker studies. The results yield a rank ordering of the 48 treatment modalities included and provide a basis for selecting supportable treatment approaches beyond anecdote, tradition, and lore.
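A quality-weighted score of this general kind can be sketched as follows. The scoring rule here (adding each study's quality rating, signed by whether its outcome favored the treatment) is a simplified assumption for illustration and is not the exact formula used by Miller et al.

```python
# A simplified, hypothetical version of a quality-weighted cumulative
# evidence score: each study contributes its methodological-quality
# rating, signed by whether its outcome favored the treatment.
from dataclasses import dataclass

@dataclass
class Study:
    quality: int   # methodological quality rating (higher = stronger design)
    favored: bool  # True if the outcome favored the target treatment

def cumulative_evidence_score(studies: list[Study]) -> int:
    # Weight by study quality, not by outcome size, so stronger designs
    # carry more weight whichever way their results came out.
    return sum(s.quality if s.favored else -s.quality for s in studies)

# Hypothetical studies of one treatment modality.
trials = [Study(quality=12, favored=True),
          Study(quality=8, favored=True),
          Study(quality=10, favored=False)]
print(cumulative_evidence_score(trials))  # 10 -> modest net support
```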
Social policy
There are increasing demands for the whole range of social policy and other decisions and programs run by government and the NGO sector to be based on sound evidence as to their effectiveness. This has led to an increased emphasis on the use of a wide range of evaluation approaches directed at obtaining evidence about social programs of all types. A research collaboration called the Campbell Collaboration has been set up in the social policy area to provide evidence for evidence-based social policy decision-making; it follows the approach pioneered by the Cochrane Collaboration in the health sciences.[25] Using an evidence-based approach to social policy has a number of advantages, because it has the potential to decrease the tendency to run programs that are socially acceptable (e.g., drug education in schools) but that often prove ineffective when evaluated.[26] More recently, the Alliance for Useful Evidence has been established to champion the use of evidence in social policy and practice. It is a UK-wide network that promotes the use of high-quality evidence to inform decisions on strategy, policy and practice. In 2016 the network published a practice guide with Nesta's Innovation Skills Team on the effective use of research evidence.
See also
- Epidemiology
- Evidence-based design
- Evidence-based education
- Evidence Based Library and Information Practice
- Evidence-based management
- Evidence-based medicine
- Evidence-based nursing
- Evidence-based pharmacy in developing countries
- Dynamic treatment regimes
- Improvement Science Research Network
- National & Gulf Center for Evidence Based Health Practice
Notes
- ↑ Spring, Bonnie (5 June 2007). "Evidence-based practice in clinical psychology: What it is, why it matters; what you need to know". Journal of Clinical Psychology. Wiley Periodicals, Inc. 63 (7): 611–32. PMID 17551934. doi:10.1002/jclp.20373. Retrieved 17 May 2015.
- ↑ Lilienfeld SO; Ritschel LA; Lynn SJ; Cautin RL; Latzman RD (November 2013). "Why many clinical psychologists are resistant to evidence-based practice: root causes and constructive remedies". Clinical Psychology Review. 33 (7): 883–900. PMID 23647856. doi:10.1016/j.cpr.2012.09.008.
- ↑ http://www.ebbp.org
- ↑ Thomas, G. and Pring, R. (Eds.) (2004). Evidence-based Practice in Education. Open University Press.
- ↑ Peile, E. (2004). "Reflections from medical practice: balancing evidence-based practice with practice based evidence". In Thomas, G.; Pring, R. Evidence-based Practice in Education. Open University Press. pp. 102–16. ISBN 0335213340.
- ↑ Cochrane, A.L. (1972). Effectiveness and Efficiency. Random Reflections on Health Services. London: Nuffield Provincial Hospitals Trust. ISBN 0900574178. OCLC 741462.
- ↑ Cochrane Collaboration (2003) http://www.cochrane.org/about-us/history/archie-cochrane
- ↑ Buysse, V.; Wesley, P.W. (2006). "Evidence-based practice: How did it emerge and what does it really mean for the early childhood field?". Zero to Three. 27 (2): 50–55. ISSN 0736-8038.
- ↑ "Matching Alcoholism Treatments to Client Heterogeneity: Project MATCH posttreatment drinking outcomes". J. Stud. Alcohol. 58 (1): 7–29. January 1997. PMID 8979210. Archived from the original on 2013-01-27.
- ↑ Duffy P, Fisher C, Munroe D (February 2008). "Nursing knowledge, skill, and attitudes related to evidenced based practice: Before or After Organizational Supports". MEDSURG Nursing. 17 (1): 55–60. PMID 18429543.
- ↑ Lambert MJ, Bergin AE, Garfield SL (2004). "Introduction and Historical Overview". In Lambert MJ. Bergin and Garfield's Handbook of Psychotherapy and Behavior Change (5th ed.). New York: John Wiley & Sons. pp. 3–15. ISBN 0-471-37755-4.
- ↑ Chambless DL, Hollon SD (February 1998). "Defining empirically supported therapies". J Consult Clin Psychol. 66 (1): 7–18. PMID 9489259. doi:10.1037/0022-006X.66.1.7.
- ↑ "Evidence-based practice in psychology" (PDF). American Psychologist. 61 (4): 271–85. May–June 2006.
- ↑ La Roche, M.L., and Christopher, M.S. (2009). "Changing paradigms from empirically supported treatment to evidence-based practice: A cultural perspective". Professional Psychology: Research and Practice. 40 (4): 396–402. doi:10.1037/a0015240.
- ↑ Mercer, J.; Pignotti, M. (2007). "Shortcuts cause errors in systematic research syntheses: Rethinking evaluation of mental health interventions". Scientific Review of Mental Health Practice. 5 (2): 59–77. ISSN 1538-4985.
- ↑ Rubin, A.; Parrish, D. (2007). "Problematic phrases in the conclusions of published outcome studies". Research on Social Work Practice. 17 (3): 334–47. doi:10.1177/1049731506293726.
- ↑ Kaufman Best Practices Project. (2004). Kaufman Best Practices Project Final Report: Closing the Quality Chasm in Child Abuse Treatment; Identifying and Disseminating Best Practices. Retrieved July 20, 2007, from http://academicdepartments.musc.edu/ncvc/resources_prof/reports_prof.thm.
- ↑ Saunders, B., Berliner, L., & Hanson, R. (2004). Child physical and sexual abuse: Guidelines for treatments. Retrieved September 15, 2006, from http://www.musc.edu/cvc.guidel.htm
- ↑ Khan, K.S., et al. (2001). CRD Report 4. Stage II. Conducting the review. phase 5. Study quality assessment. York, UK: Centre for Reviews and Dissemination, University of York. Retrieved July 20, 2007 from http://www.york.ac.uk/inst/crd/pdf/crd_4ph5.pdf
- ↑ National Registry of Evidence-Based Practices and Programs (2007). NREPP Review Criteria. Retrieved March 10, 2008 from http://www.nrepp.samsha.gov/review-criteria.htm
- ↑ Vine, Jim (2016). Standard for Producing Evidence – Effectiveness of Interventions – Part 1: Specification (StEv2-1). Standards of Evidence. HACT. ISBN 978-1-911056-01-0.
- ↑ Cooper, H. (2003). "Editorial". Psychological Bulletin. 129 (1): 3–9. doi:10.1037/0033-2909.129.1.3.
- ↑ Pignotti, M.; Mercer, J. (2007). "Holding Therapy and Dyadic Developmental Psychotherapy are not supported and acceptable social work interventions". Research on Social Work practice. 17 (4): 513–19. doi:10.1177/1049731506297046.
- ↑ Miller, W. R.; Wilbourne, P.L.; Hettema, J.E. (2003). "Ch. 2: What Works? A summary of alcohol treatment outcome research". In Hester, R.; Miller, W.R. Handbook of alcoholism treatment approaches: Effective alternatives (3rd ed.). Allyn & Bacon. pp. 13–63. ISBN 0205360645.
- ↑ http://www.cochrane.org
- ↑ Raines, J.C. (2008). Evidence Based Practice in School Mental Health. Oxford University Press. ISBN 978-0-19-971072-0.
References
- Dale AE (2005). "Evidence-based practice: compatibility with nursing". Nurs Stand. 19 (40): 48–53. PMID 15977490. doi:10.7748/ns2005.06.19.40.48.c3892.
- Berke, D.M.; Rozell, C.A.; Hogan, T.P.; Norcross, J.A.; Karpiak, C.P. (April 2011). "What clinical psychologists know about evidence-based practice: familiarity with online resources and research methods". Journal of Clinical Psychology. 67 (4): 329–39. doi:10.1002/jclp.20775.
- Chambless DL, Hollon SD (February 1998). "Defining empirically supported therapies". J Consult Clin Psychol. 66 (1): 7–18. PMID 9489259. doi:10.1037/0022-006X.66.1.7.
- DiCenso, A.; Cullum, N.; Ciliska, D. (1998). "Implementing evidence-based nursing: some misconceptions". Evidence Based Nursing. 1 (2): 38–40. doi:10.1136/ebn.1.2.38.
- French P (February 2002). "What is the evidence on evidence-based nursing? An epistemological concern". J Adv Nurs. 37 (3): 250–57. PMID 11851795. doi:10.1046/j.1365-2648.2002.02065.x.
- Mason, Diana J.; Leavitt, Judith K.; Chaffee, Mary W. (2006). Policy and Politics in Nursing and Health Care (5th ed.). Elsevier. ISBN 1416067167.
- Melnyk, B.M.; Fineout-Overholt, E. (2005). "Making the case for evidence-based practice". In Melnyk, B.M.; Fineout-Overholt, E. Evidence-based practice in nursing & healthcare. A guide to best practice. Lippincott Williams & Wilkins. pp. 3–24. ISBN 0781744776.
- Mitchell GJ (January 1999). "Evidence-based practice: critique and alternative view". Nurs Sci Q. 12 (1): 30–35. PMID 11847648. doi:10.1177/08943189922106387.
- Patterson-Silver Wolf, D.A.; Dulmus, C.N.; Maguin, E. (2012). "Empirically supported treatment’s impact on organizational culture and climate". Research on Social Work Practice. 22 (6): 665–71. doi:10.1177/1049731512448934.
- Sackett, D.L.; Straus, S.E.; Richardson, W.S.; Rosenberg, W.; Haynes, R.B. (2000). Evidence-based medicine: How to practice and teach EBM. 2. London: Churchill Livingstone. ISBN 0443062404.
- Spring, B.; Hitchcock, K. (2010). "Evidence-based practice in psychology". In Weiner, I.B.; Craighead, W.E. Corsini’s Encyclopedia of Psychology and Behavioral Science. 2 (4th ed.). Wiley. pp. 604–. ISBN 0470170263.
- Spring, B.; Neville, K. (2014). "Evidence-based practice in clinical psychology". In Barlow, D.H. The Oxford Handbook of Clinical Psychology. Oxford Library of Psychology Series. Oxford University Press. pp. 128–49. ISBN 978-0-19-932871-0.
External links
- Association for Behavioral and Cognitive Therapies (ABCT)
- Evidence-Based Behavioral Practice Project
- NREPP National Registry of Evidence-based Programs and Practices
- Evidence-Based Practice @ Keele
- Evidence Based Practice Definitions
- The Joanna Briggs Institute – International Collaborative on Evidence-based Practice in Nursing.
- Indiana Center for Evidence Based Nursing Practice: A JBI Collaborating Center
- Evidence – Communicate leading research in order to promote international cooperation and evidence-based treatments
- Evidence based Practice clinic initiatives
- Evidence Based Practice studies at the University of Massachusetts
- Center for the Advancement of Evidence-Based Practice (CAEP) at Arizona State University College of Nursing and Health Innovation
- The National Nursing Practice Network
- Website about Evidence-Based teaching
- Improvement Science Research Network
- Academic Center for Evidence-Based Practice