Because medical learners and practicing physicians may exhibit professional behaviors “only in the hallways and conference rooms, rarely with patients,”76 efforts should be made to gather data regarding their behaviors when working with patients (e.g. from other learners, colleagues, nurses, and allied health staff). The identities of individuals participating in 360-degree reviews should be kept confidential, and institutions should have policies that prohibit retaliation related to such reviews.

Another tool for assessing professionalism among medical learners and practicing physicians is patient feedback. Patients can be surveyed at the point of care or at a later date. In a systematic review, 12 of 15 studies (80%) showed improved educational outcomes in physicians as a result of patient feedback. All studies that assessed Kirkpatrick level 1 (valuation), level 2 (learning), and level 3 (intended behavior) outcomes demonstrated positive results. However, only four of seven studies that assessed level 4 (change in actual performance or results) demonstrated positive results.77 According to the study authors, these discrepant results might be due to a lack of precision in assessing actual performance, under-reporting of poor experiences by patients, or a true absence of effect of patient feedback. Future research should determine the reasons for these discrepant results. In the meantime, it is reasonable for institutions to develop and implement methods for improving performance in medical learners and practicing physicians who receive poor feedback from patients.

Other methods of assessing professionalism include objective structured clinical examinations,5,73 simulation,78 “critical incident reports,”79 reviews of patient complaints and professionalism lapses, and tests of knowledge (i.e. of the “cognitive base”).
A recent review describes these methods in detail.74 Notably, efforts to validate the scores generated by professionalism assessment tools have lagged behind the creation of the tools themselves. Future research should determine the validity of professionalism assessment tools.80 Professionalism assessments should be relevant to the individual’s level of education and specialty setting.5 Assessments should commence during medical school, be conducted regularly during residency and fellowship training, and continue throughout a physician’s career. Individuals should know they are being assessed for professionalism. Finally, observations should be made over a long period of time.81

The data generated by these multiple assessment tools can be used to create a “professionalism portfolio,” the totality of which should represent a picture of the individual’s professionalism.82 Portfolios can be used for formative feedback (i.e. feedback and action plans for improvement) and summative evaluation (e.g. disciplining individuals with unacceptable professionalism lapses). The data can also be used to reward exemplars.5,76 (Notably, the author, who is the chair of a division comprising about 90 faculty-level general internists, incorporates review of each faculty member’s “professionalism portfolio” into the annual review process, during which other data [e.g. the individual’s clinical productivity, teaching and research activities, and career goals] are also reviewed; similar portfolios are used by training programs within the Mayo Clinic College of Medicine.33) Finally, the data can be used to develop and improve professionalism curricula and to generate research hypotheses regarding professionalism.
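To make the portfolio idea concrete, the following is a minimal, purely illustrative sketch of how assessments from multiple sources (peers, nurses, patients, etc.) might be aggregated into a portfolio summary. All class names, fields, and thresholds are hypothetical assumptions, not drawn from the article; the sketch simply mirrors two requirements stated above: rater identities are never reported, and poor patient feedback triggers a formative follow-up.

```python
from dataclasses import dataclass, field
from statistics import mean

# Hypothetical illustration only: names, fields, and the 3.0 threshold
# are assumptions, not taken from the article or any real system.

@dataclass
class Assessment:
    source: str      # e.g. "peer", "nurse", "patient", "osce"
    score: float     # normalized 1-5 rating
    rater_id: str    # kept internally; never included in reports

@dataclass
class ProfessionalismPortfolio:
    assessments: list = field(default_factory=list)

    def add(self, a: Assessment) -> None:
        self.assessments.append(a)

    def summary(self) -> dict:
        """Mean score per source. Rater identities are omitted,
        mirroring the confidentiality requirement for 360-degree reviews."""
        by_source: dict = {}
        for a in self.assessments:
            by_source.setdefault(a.source, []).append(a.score)
        return {src: round(mean(scores), 2) for src, scores in by_source.items()}

    def needs_followup(self, source: str = "patient", threshold: float = 3.0) -> bool:
        """Flag for a formative action plan when feedback from a source is poor."""
        scores = [a.score for a in self.assessments if a.source == source]
        return bool(scores) and mean(scores) < threshold

portfolio = ProfessionalismPortfolio()
portfolio.add(Assessment("peer", 4.5, "r1"))
portfolio.add(Assessment("nurse", 4.0, "r2"))
portfolio.add(Assessment("patient", 2.5, "r3"))
portfolio.add(Assessment("patient", 3.1, "r4"))

print(portfolio.summary())         # per-source means, no rater identities
print(portfolio.needs_followup())  # True: mean patient score 2.8 < 3.0
```

In a real implementation the portfolio would also carry free-text comments, longitudinal timestamps, and links to the formative action plans described above; this sketch only shows the aggregation-and-flagging shape of the idea.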