An essential component of learning is assessment. Formative assessment provides feedback during the learning process to increase retention and transfer, while summative assessment evaluates learning after the fact, typically to support decisions such as awarding a certificate. It is vital that we design our assessments thoughtfully so that they are as reliable and valid as possible for their intended purpose. The field of psychometrics is dedicated to this endeavor; this presentation provides an introduction to the principles and best practices of developing assessments that are reliable, valid, and defensible.
Moreover, the greater the stakes involved in the assessment, the greater our obligation to ensure that the assessment is as strong as possible. A low-stakes formative quiz in HR training might warrant minimal test development effort, but consider being tasked with training and certifying technicians to work on MRI machines. This is where the best practices of psychometrics and the test development cycle are absolutely crucial.
The presentation will begin by defining foundational concepts such as reliability, validity, and difficulty. We then cover a framework for the test development cycle, which serves both as a project management tool and as a means of maximizing reliability and validity. Steps in the framework include: job analysis or curriculum definition, item writing and review, standard setting, test assembly/publishing, delivery, score reporting, and psychometric analysis. A running example of a professional credentialing test will be used to drive the discussion.
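Two of the foundational statistics named above can be made concrete. The sketch below (illustrative only; the response data and function names are not from the presentation) computes classical item difficulty (the proportion of examinees answering each item correctly) and Cronbach's alpha, a common index of score reliability, from a small dichotomous response matrix:

```python
# Illustrative sketch: classical item difficulty and Cronbach's alpha
# for a 0/1 (incorrect/correct) response matrix. Data are fabricated.

def item_difficulty(responses):
    """Proportion correct per item (the classical p-value)."""
    n = len(responses)
    k = len(responses[0])
    return [sum(row[i] for row in responses) / n for i in range(k)]

def variance(xs):
    """Population variance."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def cronbach_alpha(responses):
    """alpha = k/(k-1) * (1 - sum of item variances / total-score variance)."""
    k = len(responses[0])
    item_vars = [variance([row[i] for row in responses]) for i in range(k)]
    total_var = variance([sum(row) for row in responses])
    return k / (k - 1) * (1 - sum(item_vars) / total_var)

scored = [  # rows = examinees, columns = items; 1 = correct
    [1, 1, 1, 0],
    [1, 0, 1, 1],
    [0, 0, 1, 0],
    [1, 1, 1, 1],
    [0, 1, 0, 0],
]

print(item_difficulty(scored))        # -> [0.6, 0.6, 0.8, 0.4]
print(round(cronbach_alpha(scored), 3))  # -> 0.519
```

In practice these statistics are computed on real operational data during the psychometric-analysis step; the low alpha here simply reflects the tiny fabricated sample.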
Session outcomes include: