An essential component of learning is assessment. Formative assessment provides feedback during the learning process to increase retention and transfer, while summative assessment evaluates learning after the fact, typically to inform decisions such as awarding a certificate. It is vital that we design our assessments thoughtfully so they are reliable, of high quality, and as valid as possible for their intended purpose. The field of psychometrics is dedicated to this endeavor; this presentation introduces the principles and best practices for developing assessments that are reliable, valid, and defensible.
Moreover, the higher the stakes of an assessment, the greater our obligation to make it as strong as possible. A low-stakes formative quiz in HR training might require minimal test development effort, but consider being tasked with training and certifying technicians to work on MRI machines. This is where the best practices of psychometrics and the test development cycle become absolutely crucial.
The presentation will start by defining foundational concepts such as reliability, validity, and difficulty. We then cover a framework for the test development cycle, which serves both as a project management tool and as a means of maximizing reliability and validity. Steps in the framework include job analysis or curriculum definition, item writing and review, standard setting, test assembly/publishing, delivery, score reporting, and psychometric analysis. A running example of a professional credentialing test will be used to drive the discussion.
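To make the foundational concepts concrete, here is a small illustrative sketch (not material from the session itself) of two standard classical test theory statistics: item difficulty as proportion-correct, and Cronbach's alpha as an internal-consistency reliability estimate. The response matrix is made-up example data.

```python
# Classical item statistics for a 0/1-scored response matrix
# (rows = examinees, columns = items). Example data is invented.

def item_difficulty(responses):
    """Proportion-correct (p-value) per item: higher = easier."""
    n = len(responses)
    k = len(responses[0])
    return [sum(row[j] for row in responses) / n for j in range(k)]

def cronbach_alpha(responses):
    """Cronbach's alpha: k/(k-1) * (1 - sum(item variances) / variance(totals))."""
    k = len(responses[0])

    def sample_var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    item_vars = [sample_var([row[j] for row in responses]) for j in range(k)]
    total_scores = [sum(row) for row in responses]
    return (k / (k - 1)) * (1 - sum(item_vars) / sample_var(total_scores))

data = [
    [1, 1, 0, 1],
    [1, 0, 0, 1],
    [1, 1, 1, 1],
    [0, 1, 0, 0],
    [1, 1, 0, 1],
]
print(item_difficulty(data))          # → [0.8, 0.8, 0.2, 0.8]
print(round(cronbach_alpha(data), 3))  # → 0.513
```

In practice these statistics are computed during the psychometric analysis step of the cycle and fed back into item review, flagging items that are too easy, too hard, or that drag down reliability.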
Session outcomes include:
I am passionate about improving assessment via technology and psychometrics, and about how they can improve aspects of our society from education to medicine to professional certification to pre-employment testing. Tests are used to make decisions about people every day. Bad tests make bad decisions. I want to provide software tools and education that help practitioners build better tests.
My background is as a PhD psychometrician specializing in item response theory (IRT) and computerized adaptive testing (CAT); since both remain vastly underutilized, a specific goal of mine is to provide software, training, and other materials that allow IRT/CAT to become more widespread. I wish to elevate the profession by automating the many mundane tasks psychometricians perform, allowing us to focus on solving important measurement problems.
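For readers unfamiliar with IRT, a minimal sketch of the standard two-parameter logistic (2PL) model may help: the probability of a correct response is modeled from examinee ability (theta), item discrimination (a), and item difficulty (b). The parameter values below are illustrative only.

```python
import math

def p_correct(theta, a, b):
    """2PL item response function: P = 1 / (1 + exp(-a * (theta - b)))."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def item_information(theta, a, b):
    """Fisher information for a 2PL item: a^2 * P * (1 - P).

    An adaptive test (CAT) selects the next item to maximize this
    quantity at the examinee's current ability estimate.
    """
    p = p_correct(theta, a, b)
    return a * a * p * (1.0 - p)

# When ability equals item difficulty, P = 0.5 and information peaks.
print(p_correct(0.0, 1.0, 0.0))        # → 0.5
print(item_information(0.0, 1.0, 0.0))  # → 0.25
```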
| Registration Type | Amount | Dates Available |
| --- | --- | --- |
| | $0.00 | February 1, 2018 5:00 PM – March 15, 2018 1:00 PM |
| | $45.00 | February 1, 2018 12:00 AM – March 15, 2018 12:00 AM |