Abstract
After years of dissatisfaction with existing instruments, a tool for preceptors to evaluate undergraduate students' clinical performance was developed with preceptors' input into its construction. A 2-year pilot evaluation revealed notable problems, including excessively high preceptor ratings and significant disparities between faculty and preceptor ratings. Further revisions were made: the indicators were reduced to those that preceptors can realistically evaluate on an everyday basis, and a scoring rubric was developed. Additional recommendations to bolster the quality of ratings include improving the orientation and guidance of preceptors and modifying the procedures for giving feedback.