What does the acronym RUMBA mean?

The sense (or nonsense) of certification in intensive care medicine

Summary

Today, certification is an indispensable part of quality management. The processes adopted for this purpose were originally developed for industrial applications and later adapted to the hospital sector. They are able to capture structure and process quality, but for medicine, and for intensive care medicine in particular, they capture outcome quality only inadequately, since outcome quality in its complex structure cannot be represented one-dimensionally as a mortality rate. This overview shows the content-related requirements that an indicator system must meet in order to be suitable for the certification of a specific institution or facility, in this case an intensive care unit in particular. The second part describes the efforts currently being made to develop such an indicator system for intensive care medicine in Germany. This is necessary because no certification system exists in Germany that can depict the quality of intensive care units in a sufficiently valid, objective and reliable manner. Until such a system has been validated, further certifications of intensive care units have only limited informative value with regard to their quality.

Abstract

Certification is a compulsory element of today's quality management. However, the instruments used for certification have mostly originally been developed for industrial purposes. Even with tried and tested adaptation to hospital structures, transferring these instruments to the medical environment implies partial negligence of outcome quality. This fact is due to the multidimensional structure of medical outcome quality, which cannot be reduced to only one indicator. This review describes the necessity to develop a specific indicator system, which is needed for an objective, reliable and valid system of certification for intensive care units. The second part of the review describes the current efforts which are being undertaken to develop such a certification system for German intensive care units. Until this new system has been validated, certification of intensive care units is of limited value for evaluating the quality of intensive care units in Germany.

This article critically examines which prerequisites for meaningful certification currently exist in intensive care medicine and which are still missing. The available data on the effects of certification on quality in hospitals and individual structural units are presented.

Today, certification is an indispensable part of quality management in hospitals, clinics, institutes and departments. While only a few hospitals dealt with this topic just a few years ago, increasing economic pressure has not only raised the pressure on service providers to demonstrate the quality of their work to the outside world, to publish it in the interest of patient recruitment and to face comparative competition [27]; hospitals are also legally obliged to carry out quality assurance according to Section 137 of Book V of the German Social Code (SGB V). As a result, the number of external and internal quality assurance programs has also increased.

Certification according to uniform criteria and by independent experts currently plays an increasing role, as can be seen from the growing number of hospitals, clinics, departments and institutes certified according to, for example, the Cooperation for Transparency and Quality in Health Care (KTQ), proCum Cert, DIN EN ISO 9001:2000, the European Foundation for Quality Management (EFQM), Joint Commission International Accreditation (JCIHA) or other systems [35]. However, the question remains open whether the certification systems currently most widespread in German-speaking countries (KTQ and DIN EN ISO 9001) are really applicable to intensive care medicine and whether they guarantee "the quality" they are supposed to promise. In addition, the question arises as to what organizational, personnel and financial expenditure is required to carry out a meaningful certification.

Quality indicators in intensive care medicine

Requirements

How does “certification” work?

A central element of quality management with certification is the “indicator concept.” But what are indicators?

Indicators are purely auxiliary quantities for quantification that are able to predict certain negative events in a valid manner. Indicators should show whether the probability of negative or adverse events is, ideally, falling or, in the worst case, rising. They imply a method of corporate management; they have therefore also been incorporated into instruments such as the balanced scorecard [20].

Indicators are primarily parameters with the help of which a statement can be made about results and possibly also about a process. From the point of view of quality management, however, this definition is not a sufficient specification. For a quality indicator, the corresponding parameter does not directly represent the variable of interest of the unit under investigation; instead, it must have reliable predictive value for a superordinate area of care. In addition, a quality indicator must be evaluated with regard to feasibility, reliability and validity [10].

Definition

"Quality indicators are auxiliary quantities that indirectly depict the quality of a unit through numbers or numerical ratios" [22].

The following parameters from the field of intensive care medicine do not meet the above-mentioned conditions for a quality indicator and therefore, even though they are frequently used as such, cannot be considered indicators in the sense of a quality management system [12]:

  • Mortality rate in the intensive care unit,

  • Hospital mortality rate of all patients treated in the intensive care unit,

  • 30 day mortality rate,

  • Number of patients treated,

  • average length of stay,

  • "Case-mix" index,

  • Use of scoring systems [Acute Physiology and Chronic Health Evaluation (APACHE) II and III, Sepsis-related Organ Failure Assessment (SOFA), Simplified Acute Physiology Score (SAPS) II and III, Therapeutic Intervention Scoring System (TISS) 28, Core-TISS etc.],

  • Ventilation hours,

  • Number of unplanned (accidental) extubations,

  • Number of accidental catheter removals,

  • Number of resuscitations on the ward,

  • Number of bronchoscopies,

  • Number of tracheotomies,

  • Availability of special procedures,

  • Participation in external quality assurance and

  • Personnel composition (service system, structure, specialist presence, etc.).

The identification of suitable indicators for intensive care medicine is made more difficult by the fact that the systems used for certification (e.g. KTQ and DIN EN ISO 9001) do not use individual indicators but rather indicator systems! For these systems, not only each indicator but also each composition and combination of indicators has to be validated.

The following characteristics must apply to each individual indicator. It needs to be:

  • "relevant",

  • "understandable" (understandable for the staff),

  • "measurable",

  • "behaviourable" and

  • "achievable" (can be influenced by measures).

The acronym "RUMBA" is a good memory aid for these criteria.

The biggest problem in meeting all of these criteria lies in the high complexity of medical care and, in particular, of intensive care, which is usually characterized by interdisciplinarity, intradisciplinary discontinuities (shift work systems, on-call systems) and the interdependent collaboration of various professional groups.

Since differences in quality management parameters between two hospitals or institutions can be due to different severities of comorbidity in the treated patients as well as to deficits in structural quality, the factors that cannot be influenced by a hospital or clinic/department must be neutralized as far as possible, i.e. a complex adjustment has to be carried out.

In addition, indicators must be ascertainable without great effort, not least because of today's significantly increased workload. As part of the validation of indicators, clarity must also be obtained about the consequences of missing or incorrect measurements or values. To do this, the classic test and quality criteria must first be examined.

Objectivity, reliability and validity are classic quality criteria.

When it comes to validity, a distinction must be made again between the following dimensions:

  • Face validity,

  • Criterion validity and

  • Construct validity.

One should be aware that indicators do not have to adhere strictly to the division into structure, process and outcome quality introduced by Donabedian [3], but can reflect aspects of different quality characteristics. Following the Joint Commission on Accreditation of Health Care Organizations (JCAHO), indicators can be divided into [19]:

  • global indicators (e.g. mortality, nosocomial infections),

  • subject-specific indicators (e.g. number of unplanned extubations, frequency of hypoglycaemia),

  • diagnosis-specific indicators [e.g. time to first antibiotic administration in sepsis, proportion of lung-protective ventilation in acute respiratory distress syndrome (ARDS) patients] and

  • overarching indicators (e.g. patient satisfaction [26], completeness of medical records)

However, all of these indicator groups must always be considered and adjusted, taking into account the patient population and risk profiles, in order to enable a comparison.

In order to meet the requirements for quality indicators in intensive care medicine mentioned in the first part, it is necessary to think about the selection of indicators, their abstraction, their purposefulness, their feasibility and their validation [28].

Selection means that the indicators have to be chosen from a very large number of possible quality-relevant parameters. For example, the catalog of the Spanish Society for Intensive Care Medicine currently comprises more than 100 parameters.

Abstraction means that the indicators themselves do not have to be directly relevant to quality (although they can be). For example, the completeness of the medical record can be a more relevant indicator for the process "patient transport to the operating theater" than the average time required.

Purposefulness means that indicators must be clearly aimed at a quality problem.

Feasibility describes the reality that employees have to be motivated and have to collect data correctly. Automation through IT systems is an important key here [12, 21].

And finally, indicators have to be checked to see whether they are sufficiently precise (reliability) and whether they reflect what they should reflect (validity).

This means that very high demands must be placed on the development of indicators: in addition to careful selection, a high degree of methodical precision (identification of a problem, definition of the requirements for the measuring instrument, selection of a measuring instrument, validation) is required. Schrappe [28] gives as an example that choosing the otherwise scientifically customary significance level of 0.05 would suggest an unfounded, nonexistent lack of quality in 5 out of 100 cases. In times of benchmarking, this can have massive economic consequences, for example through a drop in patient recruitment as a result of supposedly poor quality.
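To make Schrappe's point concrete, the following minimal sketch (Python; the complication rate, the benchmark and the patient numbers are purely hypothetical assumptions, not data from this article) simulates 100 ICUs whose true quality is identical to the benchmark. Testing each unit at a significance level of 0.05 nevertheless flags roughly 5 of them as having a nonexistent quality deficit.

```python
# Minimal sketch (hypothetical numbers): even if 100 ICUs all have the same true
# complication rate, testing each one against the benchmark at alpha = 0.05 flags
# about 5 of them as "deviant" purely by chance.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
true_rate = 0.10          # identical true complication rate in every ICU (assumed)
benchmark = 0.10          # benchmark the ICUs are compared against (assumed)
n_patients = 500          # patients per ICU and year (assumed)

false_alarms = 0
for _ in range(100):      # 100 ICUs with *no* real quality deficit
    complications = rng.binomial(n_patients, true_rate)
    # one-sided binomial test: "is this ICU worse than the benchmark?"
    p = stats.binomtest(complications, n_patients, benchmark,
                        alternative="greater").pvalue
    if p < 0.05:
        false_alarms += 1

print(f"ICUs wrongly flagged as having a quality problem: {false_alarms}")
```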

With regard to the selection of indicators, it should be noted that one indicator can be very useful for identifying one problem, but at the same time it is completely unsuitable for describing a second problem.

Indicators must not be adopted uncritically.

Their purpose must be precisely defined and taken into account during the assessment. The development of indicators should therefore always take place within the framework of process analyses [30], but must not lead to too large a selection of parameters, as these only generate unnecessary amounts of data.

The verification of the “feasibility” of the collection of data must be carried out in the context of pilot studies; only then may the reliability and validity be checked.

Testing reliability requires, for example, the use of retests, parallel tests or the performance of a measurement with several methods, the results of which are then compared statistically.
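As an illustration of such a statistical comparison, a minimal sketch follows (Python; the indicator, the 15 records and all values are assumptions for illustration only): the same binary indicator is abstracted twice from the same patient records and the agreement is summarized as Cohen's kappa, one common reliability measure.

```python
# Minimal sketch (hypothetical data): a simple retest check for a binary indicator.
# Two independent abstractions of the same 15 patient records are compared with
# Cohen's kappa; values close to 1 suggest the indicator can be measured reliably.
import numpy as np

def cohen_kappa(a, b):
    """Cohen's kappa for two binary ratings of the same cases."""
    a, b = np.asarray(a), np.asarray(b)
    p_observed = np.mean(a == b)                    # raw agreement
    p_yes = np.mean(a) * np.mean(b)                 # chance agreement on "yes"
    p_no = (1 - np.mean(a)) * (1 - np.mean(b))      # chance agreement on "no"
    p_chance = p_yes + p_no
    return (p_observed - p_chance) / (1 - p_chance)

# "Was the indicator fulfilled?" extracted twice from the same records (assumed)
first_pass  = [1, 1, 0, 1, 0, 1, 1, 1, 0, 1, 1, 0, 1, 1, 0]
second_pass = [1, 1, 0, 1, 1, 1, 1, 1, 0, 1, 1, 0, 1, 0, 0]

print(f"Cohen's kappa: {cohen_kappa(first_pass, second_pass):.2f}")
```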

Validity can be checked on the aforementioned levels of face validity, criterion validity and construct validity. Face validity can, for example, be rated within the framework of the Delphi method. Face validity is therefore not objective, but purely subjective.

Criterion validity as the most important measure of indicator development uses a method that reliably measures a certain quality as a gold standard and examines the degree of conformity of the indicator with the gold standard.

Construct validity is the most demanding criterion, as it compares the indicator with data that have already been confirmed or attempts to confirm hypotheses that have been put forward.

From what has been presented so far, it can be seen that the development of indicators in general, and for intensive care medicine in particular, is a very lengthy, highly complex and labor-intensive task. Such reliable and usable indicators do not yet exist for intensive care medicine. They are currently under development in Germany, namely at the level of the validity check, at present at the stage of checking face validity. This is being done for a selection of possible criteria made by intensive care physicians in Germany according to the Delphi methodology already mentioned, as described in the second part of this article.

Development

Quality in healthcare is defined as care or treatment that is safe, timely, effective, efficient, appropriate and patient-centered [2]. In order to achieve this, a quality management system must be implemented, for which indicators are necessary so that a continuous quality improvement can be achieved and the results of the quality management in terms of structure, process and result quality can be regularly and reliably checked.

While the main focus of the established certification procedures is on checking structure and process quality, in intensive care medicine in particular, for which the previous certification procedures were not specifically developed, the important outcome quality must also be checked, even though many confounders can come into play here.

There are a large number of publications on outcome quality in intensive care medicine. Models have been developed that show the risk-adjusted as well as the standardized mortality rate. But even this standardized measurement of mortality has important limitations and can only partially represent the quality of individual intensive care units [6]. In addition to mortality as the primary endpoint of outcome quality, other parameters are important, such as morbidity due to, for example, nosocomial infections or venous thromboembolism. The outcome quality of intensive care medicine therefore cannot be represented one-dimensionally in the sense of a mortality statistic, but only multidimensionally. Length of stay, ventilation days, the number of nosocomial infections, etc. can also represent important outcome parameters in a broader sense.
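For illustration, a minimal sketch of a standardized mortality ratio (SMR) calculation follows (all numbers are hypothetical; in practice the predicted risks would come from a validated severity score such as SAPS II or APACHE II):

```python
# Minimal sketch (all numbers hypothetical): a standardized mortality ratio (SMR)
# compares observed deaths with the deaths expected from each patient's predicted
# risk. SMR > 1 suggests more deaths than expected, SMR < 1 fewer, but only after
# adequate risk adjustment, and with the limitations noted in the text [6].
predicted_risk = [0.05, 0.10, 0.40, 0.70, 0.15, 0.25, 0.55, 0.08]  # per patient
died           = [0,    0,    1,    1,    0,    0,    0,    0]      # observed outcome

observed = sum(died)
expected = sum(predicted_risk)   # expected deaths = sum of individual risks
smr = observed / expected

print(f"Observed deaths: {observed}, expected deaths: {expected:.2f}, SMR: {smr:.2f}")
```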

In order to measure the quality in an intensive care unit and to improve it based on this, it is necessary to develop quality indicators. Since the knowledge of the medical staff about their own quality is often based on subjective feelings and thus it is very difficult to improve quality, it is necessary to objectify quality [24]. The aim must therefore be to develop quality indicators that are valid, reliable and practicable. They must reflect the current state of science and offer the opportunity to improve quality. In addition to the steps listed in Tab. 1 for developing the quality indicators, these must also meet the RUMBA rules (see section “Requirements”).

The quality indicators developed and selected in this way must be accepted by the intensive care team, and their measurement must be continuous and objective. The data must be interpretable and the associated measurements must be feasible. This means that the indicators must be answerable with "yes" or "no" and must be derivable from the routine documentation.
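As a simple illustration of such a yes/no indicator derived from routine documentation, the following sketch (Python; the field names, the tidal volume threshold used as an operational definition and the data are assumptions for illustration only) checks whether lung-protective ventilation was documented on every ventilation day of an ARDS patient:

```python
# Minimal sketch (field names and data assumed): a yes/no indicator derived from
# routine electronic documentation, here "was a tidal volume of at most 6 ml/kg
# predicted body weight documented for this ARDS patient on every ventilation day?"
def lung_protective_ventilation(days):
    """Return 'yes' if every documented day meets the assumed target, else 'no'."""
    return "yes" if all(d["tidal_volume_ml_per_kg"] <= 6.0 for d in days) else "no"

patient_days = [
    {"date": "2024-01-01", "tidal_volume_ml_per_kg": 5.8},
    {"date": "2024-01-02", "tidal_volume_ml_per_kg": 6.0},
    {"date": "2024-01-03", "tidal_volume_ml_per_kg": 6.4},
]
print(lung_protective_ventilation(patient_days))   # -> "no"
```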

The quality indicators can vary due to the different structures of the intensive care units. The Spanish Society of Intensive Care Medicine and Coronary Care Units (SEMICYUC) has developed over 100 quality indicators and presented the corresponding quality goals (http://www.calidad.semicyuc.org/). The presentation of the indicators is structured according to a uniform pattern (Tab. 2, Tab. 3).

Since the development of indicators is very complex and affordable only for a few departments, the German Society for Anesthesiology and Intensive Care Medicine (DGAI) translated the quality indicators of the Spanish intensive care society with their kind permission and, where possible or necessary, updated them with the latest evidence-based medical literature.

In a two-stage Delphi process, the indicators were prioritized and the first 10 indicators recommended for further review (Tab. 4). The implementation of the parameters is currently being evaluated as part of a prevalence study (publication in preparation).

The regular recording and evaluation of the indicators enables internal benchmarking over time [16]. It can be shown whether a quality improvement can actually be achieved by adhering to the indicators. However, a further link with the outcome parameters (mortality, morbidity, length of stay, ventilation time, etc.) is necessary. These further links have to be validated again in further projects in order to have reliable indicators available.
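What such internal benchmarking over time could look like is shown in the following sketch (Python; the data are hypothetical, and the simple run chart with approximate 3-sigma limits is only one possible way of presenting an indicator over time, not a method prescribed by this article):

```python
# Minimal sketch (hypothetical data): internal benchmarking over time as a simple
# run chart of one indicator, here the quarterly rate of unplanned extubations per
# 100 ventilation days, with approximate 3-sigma control limits around the mean
# (binomial approximation, treating each ventilation day as one opportunity).
import numpy as np

events = np.array([12, 10, 14, 9, 7, 6, 8, 5])         # unplanned extubations/quarter
vent_days = np.array([800, 760, 820, 790, 810, 780, 800, 770])

rate = events / vent_days * 100                         # per 100 ventilation days
center = events.sum() / vent_days.sum() * 100           # overall mean rate
sigma = np.sqrt(center * (100 - center) / vent_days)    # approximate standard error
upper, lower = center + 3 * sigma, np.clip(center - 3 * sigma, 0, None)

for q, (r, u, l) in enumerate(zip(rate, upper, lower), start=1):
    flag = "out of control" if (r > u or r < l) else "within limits"
    print(f"Q{q}: {r:.2f} per 100 ventilation days ({flag})")
```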

Discussion

At this point the questions remain: What does certification currently bring, and what does it do once indicators have been developed? Is there any evidence that certification can improve the quality of critical care medicine?

If one looks at the reports of experience from different clinics or departments, at first glance the positive assessments predominate despite the high financial and personnel costs [18, 14, 33]. The certification process is described as a "value-adding process", "as an impetus to implement projects that have long been planned", and the benefit in uncovering weak points is emphasized as being greater than the effort. If one analyzes the reports more closely, one finds that all certification processes were implemented with considerable financial and personnel effort (additional positions, commitment outside working hours) and a high communication effort. As a rule, a flat hierarchy and a general interest in hospital-wide solutions rather than only in departmental "island solutions" were cited as conditions for implementation.

A 2006 survey by Prof. Schubert of Witten-Herdecke of 26 hospitals certified in 2003, with a questionnaire response rate of 76%, shows, all in all, only an average success of the certification measures, especially with regard to the retention of referring physicians, a (nonexistent) increase in the number of cases, or a (nonexistent) reduction of complications [14]. The "strongest" changes within the framework of certification are described in the "soft" areas of a hospital. These are:

  • Transparency of the clinic,

  • Quality awareness in-house,

  • Employee motivation,

  • Identification of employees with the hospital,

  • Degree of organization of important processes and

  • Patient safety.

The strongest negative changes were seen in the large increase in administrative work. Unfortunately, the costs of external certification programs have not yet been sufficiently published, or the existing studies show elementary methodological deficiencies [5]. In Germany, however, more than EUR 1.5 million has been spent on the certification of around 300 rehabilitation clinics since 2000. According to published reports, smaller facilities such as institutes of pathology required at least 2 additional full-time positions for the certification measures. Since these are relatively manageable structures that allow very structured work processes, the requirements for clinical facilities that are larger and, like intensive care medicine, work in an interdisciplinary manner with a high proportion of unpredictable acute medicine are likely to be many times higher.

The cost of developing evidence-based guidelines is estimated at EUR 300,000-400,000. The effort required to develop indicators for intensive care medicine and to implement them in routine practice will be at least of the same order of magnitude.

However, guidelines have so far produced no or only moderate improvements in the quality of treatment, and no relation to patient outcome has yet been demonstrated [8, 32]. Existing data show that the effectiveness of, for example, feedback strategies is greater the lower the initial status. So far, there is no evidence of an improvement in the quality of care through certification; at most, certification affects parts of process quality, but not outcome quality. Studies on the effect of certification that stand up to evidence-based criteria have so far been virtually completely absent [7, 9, 32].

So what does certification in intensive care medicine bring us today?

An impartial third party (the "certifier") confirms that standards and work instructions are available for all parties involved. The relevance of these existing work instructions remains a matter of controversy, especially since certification says nothing about the correctness of the work instructions or standards, nor about their implementation in everyday practice. Likewise, current certification says nothing about numerically or technically sufficient human resources or about the efficiency of processes in intensive care medicine. The question of international or even national validity and comparability has also not yet been clarified, as there are no validated indicators or indicator systems for intensive care medicine, so that today's certification in intensive care medicine represents more of a minimalist approach of "just being good enough".

Particularly when it comes to the certification of entire hospitals, the question of specificity and thus relevance and applicability for intensive care medicine arises quite openly. There is still no evidence of the cost effectiveness of certification procedures in general, and in particular for intensive care medicine.

And finally, it should be noted that the currently widespread certification procedures themselves only have a rather superficial quality management; at least, according to the authors' knowledge, effects or successes of the certifiers have still not been disclosed openly.

In the opinion of the authors, the “Audits by Peer Reviews” procedure is currently a cost-effective alternative to certification [4, 15].

"" Audits through peer reviews "as a cost-effective alternative to certification"

In combination with the recording and evaluation of simple quality parameters, this procedure, which has more of an advisory than a controlling character, represents a good method for objectively demonstrating quality improvement and thus quality development.

The combination of both procedures is characterized in particular by its close proximity to the nursing, medical and organizational core services. Experience with this system exists in Germany in gynecology and internal medicine [1] as well as in the intensive care networks of Baden-Württemberg and Hamburg [17]. But here too, the lower the initial status, the greater the "successes" [9].

When the indicators for intensive care medicine are finally available, the degree of the achievable qualitative and economic increase in efficiency will have to be checked again within the framework of research projects. Only then does it possibly make sense to have a formal certification in intensive care medicine carried out.

Conclusion for practice

Certification is en vogue as a quality instrument in medicine. With regard to a specific description of the quality of an intensive care unit, the currently existing methods only allow the general certification of an entire hospital (KTQ) or certification via DIN ISO 9001:2000. Both instruments are indicator systems whose indicators or indicator combinations have not yet been validated for intensive care medicine. In addition to the general points of discussion around certification with regard to increasing efficiency, effectiveness and improving outcome, intensive care medicine faces the specific problem that objectively no suitable instruments for certification exist. These instruments are currently being created in a complex process, but are not yet available. Until these quality indicators are validated, certification in intensive care medicine will remain a "Potemkin village" that does not deliver what it promises. The financial and personnel expenditure required for a certification today should be in a favorable ratio to the expected or hoped-for benefit. Even if the certificate may induce a positive publicity effect, at the present time it in no way describes the real medical quality in intensive care medicine. From today's perspective, a sensible alternative to the certification process is the peer review process in combination with the regular recording and evaluation of established quality parameters.

References

  1. Blum K, Hanel E, Mündermann-Hahn A et al (2002) Guide: Clinical audit. Volume 143 of the series of publications of the Federal Ministry of Health. Nomos, Baden-Baden, p 80

  2. Committee on Quality of Health Care in America (2001) Crossing the quality chasm. A new health system for the 21st century. Formulating new rules to redesign and improve care. National Academy Press, Washington, DC, p 61

  3. Donabedian A (1986) Criteria and standards for quality assessment and monitoring. QRB Qual Rev Bull 12:99-108

  4. Ellerbeck EF, Kresowik TF, Hemann RA et al (2000) Impact of quality improvement activities on care for acute myocardial infarction. Int J Qual Health Care 12:305-310

  5. Gandjour A, Lauterbach KW (2002) On the profitability of quality improvement measures in the health care system. Med Klin 97:499-502

  6. Gerlach H, Toussaint S (2006) Sepsis therapy: why change management can reduce the lethality of sepsis. Anästhesiol Intensivmed Notfallmed Schmerzther 10:614-623

  7. Glattacker M, Jäckel WH (2007) Evaluation of quality assurance: current data and consequences for research. Gesundheitswesen 69:277-282

  8. Grol R (2000) Between evidence-based practice and total quality management: the implementation of cost-effective care. Int J Qual Health Care 12:297-304

  9. Jamtvedt G, Young JM, Kristoffersen DT et al (2003) Audit and feedback: effects on professional practice and health care outcomes. Cochrane Database Syst Rev 3:CD000259

  10. Joint Commission on Accreditation of Health Care Organizations (JCAHO) (1991) Primer on indicator development and application. Oakbrook Terrace, Illinois

  11. Kazandjian VA (1995) Indicators of performance or the search for the best pointer dog. In: Kazandjian VA (ed) The epidemiology of quality. Aspen, Gaithersburg, pp 25-37

  12. Kazandjian VA, Wood P, Lawthers J (1995) Balancing science and practice in indicator development: the Maryland Hospital Association Quality Indicator Project. Int J Qual Health Care 7:39-46

  13. Krier C, Martin J (2006) Process optimization, DRGs, SOPs, clinical pathways, KTQ: when can I still be a doctor? Anästhesiol Intensivmed Notfallmed Schmerzther 41:135-136

  14. Schubert HJ (2008) Cost/benefit assessments for certification according to KTQ or proCumCert. Cooperation for Transparency and Quality in Health Care (KTQ). http://www.ktq.de, accessed 15 July 2008

  15. MacClellan WM, Hodgin E, Pastan S et al (2004) A randomized evaluation of two health care quality improvement program interventions to improve the adequacy of hemodialysis care of ESRD patients: feedback alone versus intensive intervention. J Am Soc Nephrol 15:754-760

  16. Martin J, Wegermann P, Bause H et al (2007) Quality management in intensive care medicine. Anaesthesiol Intensivmed 48:S40-S47

  17. Mende H (2007) Our intensive care medicine has to get better! Ärztebl Baden-Württ 62:622-623

  18. Metz-Schimmerl S, Schima W, Herold CJ (2002) Certification according to ISO 9001: a waste of time or a necessity? Radiologe 42:380-386

  19. Nadzam DM (1991) Development of medication-use indicators by the Joint Commission on Accreditation of Healthcare Organizations. Am J Hosp Pharm 48:1925-1928

  20. Oliveira J (2001) The balanced scorecard: an integrative approach to performance evaluation. Healthc Financ Manage 55(5):42-46

  21. Paltiel O, Salakhov E, Ronen I et al (2001) Management of severe hypokalemia in hospitalized patients: a study of quality of care based on computerized databases. Arch Intern Med 161:1089-1095

  22. Pietsch-Breitfeld B, Sens B, Rais S (1996) Terms and concepts of quality management. Informat Biometr Epidemiol 27:200-230

  23. Pronovost PJ, Berenholtz SM, Ngo K et al (2003) Developing and pilot testing quality indicators in the intensive care unit. J Crit Care 18:145-155

  24. Pronovost PJ, Jenckes MW, Dorman T et al (1999) Organizational characteristics of intensive care units related to outcomes of abdominal aortic surgery. JAMA 14:1330-1331

  25. Satzinger W (2002) Information for quality management in hospitals: on the function and methodology of patient and staff surveys. Med Klin 97:104-110

  26. Schindler AW, Schindler N, Vagts DA (2007) Marketing in health care: the hospital as an example. Anästhesiol Intensivmed Notfallmed Schmerzther 42:552-556

  27. Schrappe M (1999) Performance and quality comparison in hospitals: the future of comparing hospital operations? Betriebswirtschaftl Forsch Prax 5:499-511

  28. Schrappe M (2001) The indicator concept: a central element of quality management. Med Klin 96:642-647

  29. Selbmann HK (2004) Evaluation and certification of acute hospitals in Germany. Bundesgesundheitsbl Gesundheitsforsch Gesundheitsschutz 47:103-110

  30. Sheldon T (1998) Promoting health care quality: what role performance indicators? Qual Health Care 7:45-50

  31. Siebert H, Sturm J (1999) Quality assurance, quality management, certification. Unfallchirurg 102:906-908

  32. Simoes E, Boukamp K, Mayer ED, Schmahl FW (2004) Is there evidence of the impact of quality assurance/quality-promoting processes in other countries? Gesundheitswesen 66:370-379

  33. Thüsing C (2005) Quality management in hospitals: relevance of KTQ. Med Klin 100:149-153

  34. Weiler T, Hoffmann R, Strehlau-Schwoll H (2003) Quality management and certification: process optimization in the hospital. Unfallchirurg 106:692-697

  35. Vagts DA, Martin J, Dahmen K (2008) Quality management in hospitals. Anästhesiol Intensivmed Notfallmed Schmerzther 43:156-160


Conflict of interest

The corresponding author declares that there is no conflict of interest.

Author information

Affiliations

  1. Clinic and Polyclinic for Anaesthesiology and Intensive Therapy, University of Rostock, Schillingallee 35, 18057, Rostock, Germany

    PD Dr. D. A. Vagts, MSc Hospital Management, DEAA, EDIC

  2. Anaesthesiology and OR Management, Göttingen University Medical Center, Göttingen, Germany

    M. Bauer

  3. Clinics of the district of Göppingen gGmbH, Göppingen, Germany

    J. Martin

Corresponding author

Correspondence to PD Dr. D. A. Vagts, MSc Hospital Management, DEAA, EDIC.