Ignite Sessions

Common Validation and Fairness Threats When Evaluating Programs Against Accreditation, Testing, and Measurement Standards

Speaker: Manny Straehle, Assessment, Education, and Research Experts

Over the past few years, the presenter has conducted numerous evaluations of credentialing programs to determine whether they meet validity and fairness standards. Credentialing organizations often evaluate their programs to assess readiness for accreditation standards such as ISO/IEC 17024:2012 or NCCA's Standards for the Accreditation of Certification Programs. While useful, these accreditation standards are often general and vague, leaving credentialing organizations to determine the detailed methodologies and activities needed to meet them.

Consequently, these organizations frequently focus on meeting the accreditation standards while unintentionally overlooking other resources, such as evidence-based materials (e.g., the Handbook of Test Development) and other testing and research standards (the Standards for Educational and Psychological Testing, AAPOR's Standard Definitions, NCES Statistical Standards). As a result of pursuing ISO and NCCA accreditation while ignoring resources that could strengthen a validity and fairness claim, credentialing organizations often share many common threats to validity and fairness.

In this session, the presenter will discuss their own experience evaluating these programs and the common threats to validity and fairness across the psychometric/test development lifecycle (including maintenance) and the program and management processes of credentialing programs. Examples of common threats include: (1) SME representation, (2) the required number of SMEs for various activities, (3) defining a detailed scope, (4) lack of a survey-based job analysis, and (5) lack of policies such as security and appeals policies. Attendees will learn about these common validity and fairness threats in their own programs and how to potentially strengthen their programs' validity and fairness claims.

 

Getting Fired Up About Market Research

Speaker: Beth Kalinowski, PSI Services

Market research provides decision makers with information on the effectiveness of an organization's current state, while also providing insights into potential issues. Market research can also be used for decision making and developing long-term plans. The ultimate goal of any market research, however, is to create products and services that satisfy customer demand. Many credentialing organizations hold extensive market research data without even realizing it, and this data can be used to develop strategic initiatives, improve customer satisfaction, and ensure that value messaging is being heard.

 

Open World Games and Assessment

Speaker: Douglas Whatley, BreakAway Games

Games are a hot topic in assessment, and there is significant confusion about exactly what a game 'is'. There are many types of gameplay - action, puzzles, trivia, casual - and each has its own uses in assessment. Open world games are a type of game in which the virtual game world is open for the player to explore as they desire. There may be story elements motivating them to take certain actions, but they are free to enjoy the game in any way they prefer. Within this open sandbox there can be mini-games or other simulation elements. How can this large open world be used for assessment and/or to scaffold mini-game assessment elements? This discussion will expose listeners to the range of options in these games and then examine the science behind using them for more comprehensive assessments.

 

Predicting College and Career Readiness: How do we know we’re on the right track?

Speakers: O'Neal Hampton, Scantron Corporation, and Sue Steinkamp, Scantron Corporation

Research has shown that predictions of performance in secondary and even post-secondary education can be made using academic data starting as early as 3rd grade. Academic indicators can be used to identify low-performing students in need of academic intervention and gifted students in need of enrichment activities. Such programs use data gathered within primary and secondary schools to identify and track early warning indicators (e.g., grades, assessment results, attendance, and discipline) and proactively work with students based on those results. Implications of predictive analytics could include earlier and more effective adjustments to educational approaches, as well as specialized programs that leverage student strengths to prepare them to enter the workforce.

While educators may have access to an abundance of academic data, they often struggle to turn that data into predictive and rich analytical insight.

Questions this session seeks to address include: (1) what approaches and tools can be employed for the most beneficial view of current and likely future performance, (2) what measures provide the best early warning indicators of a student's probability of success well before they enter high school, and (3) how can we help educational institutions harness the power of predictive analytics to proactively determine the likelihood of future academic performance and potential career success?

The purpose of this session is to start the discussion around these topics and help educational organizations address these questions to ensure student success. The session will cover: (1) the use and value of predictive analytics, including specific education examples, (2) processes to track and identify early warning indicators, and (3) recommendations for the data sets with the most significant value.

 

Proactive Planning for Testing Irregularities

Speaker: Andrew Wiley, ACS Ventures

The concept of testing irregularities is receiving increased attention in the testing industry. Loosely defined, a testing irregularity is any incident that impedes the ability to deliver and score tests. Scenarios that have been encountered include (1) students being unable to log in to the systems that deliver tests, (2) distributed denial-of-service (DDoS) attacks that prevent tests from being delivered to test centers, and (3) students and candidates being kicked out of the testing system in the middle of a test. These problems have occurred in a wide variety of settings, including states such as Florida, Tennessee, and Indiana.

But even with a variety of preventive activities, testing irregularities still occur and will continue to occur. All too often, when these irregularities are encountered, testing centers and publishers are left scrambling to recover and restart test administration as soon as possible. Given how frequently these irregularities seem to occur, more proactive planning is needed.

The purpose of this Ignite session is to quickly review a set of policies and procedures that could be introduced to help plan for a testing irregularity. After the session, participants will be invited to join a roundtable discussion of the plans presented as well as other means of proactive planning. This discussion will be a critical part of the conversation, allowing participants to brainstorm and develop new or more efficient models. The session facilitator will take notes and share the outcomes of the roundtable with any interested attendee.

 

View the Full Program
March 5-8, 2017 | Westin Kierland Resort & Spa | Scottsdale, AZ