ePosters

ePosters highlight key industry trends and insights in a fun, interactive way. Each ePoster is a series of slides displayed on a large monitor; a series typically lasts about 6 minutes and repeats throughout the 2-hour ePoster session. Presenting ePosters electronically enhances the visual experience and provides greater interactivity between attendees and presenters. Not only will you get a chance to ask these presenters questions and dig deeper into their results and insights, but you will also have an opportunity to meet with our Exhibitors to learn more about how they can help you solve the challenges you are facing in your program. You will not want to miss these ePosters or the reception!

AP-CAT: A Comprehensive, Adaptive Web-Based Assessment Platform with Diagnostic Features for AP Statistics

Presented by: Cheng Liu & Ying Cheng, University of Notre Dame

Cognitive Diagnostic Computerized Adaptive Testing for AP Statistics (AP-CAT) is funded by the National Science Foundation and aims to enhance high school statistics education by combining modern information technology with state-of-the-art testing approaches. Our item bank contains 842 well-designed items mapped to a knowledge tree with four sections, 16 main topics, and 157 learning attributes. This allows us to provide individual diagnostic feedback to students regarding their strengths and weaknesses, and group-level score reports to teachers to help them adjust teaching strategies and content. The platform adopts an integrated test-delivery strategy that serves multiple purposes: 1) (bi-)weekly assignments assembled by teachers by selecting items from the large bank; 2) bi-monthly sectional linear tests designed by our team to assess student learning on different topics; and 3) an annual adaptive test designed by our team that mimics the content composition of the actual AP Statistics exam.
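
For readers unfamiliar with how diagnostic feedback can be generated from an attribute-mapped item bank, the minimal sketch below is a generic illustration (hypothetical items and attribute names, not the AP-CAT implementation): a Q-matrix tags each item with the attributes it measures, and a student's scored responses are summarized per attribute.

```python
# Hypothetical sketch, not AP-CAT code: summarize a student's strengths and
# weaknesses by learning attribute using a Q-matrix (item-by-attribute tags).
import numpy as np

# 5 illustrative items x 3 illustrative attributes (1 = item measures attribute)
Q = np.array([
    [1, 0, 0],   # item 1 -> attribute A
    [1, 1, 0],   # item 2 -> attributes A, B
    [0, 1, 0],   # item 3 -> attribute B
    [0, 1, 1],   # item 4 -> attributes B, C
    [0, 0, 1],   # item 5 -> attribute C
])
responses = np.array([1, 1, 0, 0, 1])  # one student's scored responses (1 = correct)

# Proportion correct among the items that tap each attribute
per_attribute = (responses @ Q) / Q.sum(axis=0)
for name, score in zip(["A", "B", "C"], per_attribute):
    print(f"attribute {name}: {score:.2f}")
```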

 

Are Learning Sciences and Psychometrics Strange Bedfellows? Challenges and Opportunities in Applying Learning Sciences to the Design of Assessments

Presented by: Natalie Jorion, PSI

How compatible are learning sciences and psychometrics? Both fields make inferences about candidate cognition based on manifested behaviors. However, they diverge in how they conceptualize the design and use of assessments. The aim of this presentation is to highlight fundamental differences between the two paradigms, suggest ways these differences can be addressed, and propose implications and opportunities in the landscape of assessment.

 

Developing Errant Paths in a Simulation Testing Environment: A How-to Guide for Assessment Professionals

Presented by: Sean Gyll, Western Governors University

Computer simulations as examinations represent a much-needed effort to move beyond the shortcomings of today’s forms-based assessments. Within computer simulations, we assess competency and problem-solving skills rather than the content memorization typically supported by multiple-choice assessments. This paper discusses one of the primary dilemmas impeding the development of high-fidelity computer simulation examinations: determining the appropriate number of errant paths needed to render a computer simulation exam valid. I briefly explore the history of simulations as examinations and differentiate between low- and high-fidelity assessments in a simulation environment. I also explore end-user navigation requirements and their relationship to developing the appropriate number of errant paths within a computer simulation. Finally, I provide several tools and templates to aid assessment professionals in the development process.

 

How SAP Has Digitally Transformed Its Certification Program

Presented by: Daniela Kelemen, SAP SE; John Kleeman, Questionmark

SAP is the market leader in enterprise application software. SAP runs a cloud-based certification program for employees, customers, and partners, offering more than 150 certifications and delivering exams in up to 20 languages, making it a truly global program.

This session describes the latest developments in the SAP certification program, focusing particularly on the advantages and disadvantages of certification in the cloud and on how it is possible to run a certification program that is truly global. We will describe SAP’s “stay current” process, which is extremely agile and allows exams to be updated rapidly in order to keep pace with SAP’s software changes and the frequent release cycles of its cloud solutions.

We will cover the following areas:

  • Improvements in process made possible by use of the cloud.
  • How translations are managed and processed.
  • Candidate experience of certification in the cloud.
  • Challenges with the cloud model.
  • Integration with digital badges.
  • Technology/infrastructure used.

 

Improving Multiple-Choice Items: Three Options v. Four Options

Presented by: Jason Meyers, Western Governors University

Despite recent technological advancements in assessment, multiple-choice (MC) items remain a staple due to the speed and low cost of writing and scoring them. Four-option items appear to be the gold standard, as they strike a balance between the ease of creating plausible options and limiting the odds of answering correctly by guessing. However, research (Rodriguez, 2005) challenges the commonly held belief that four options are superior to three.
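
To make the guessing trade-off concrete, here is a small illustrative simulation (assumed 60-item test length and purely random guessing; not data or code from this study): a blind guesser expects about one third of items correct with three options versus one quarter with four.

```python
# Illustrative only: expected blind-guessing gain on an assumed 60-item test
# with three-option versus four-option MC items.
import random

def simulate_guessing(n_items, n_options, n_examinees=10000, seed=0):
    """Mean number of items answered correctly by pure random guessing."""
    rng = random.Random(seed)
    totals = [sum(rng.randrange(n_options) == 0 for _ in range(n_items))
              for _ in range(n_examinees)]
    return sum(totals) / n_examinees

for k in (3, 4):
    print(f"{k} options: ~{simulate_guessing(60, k):.1f} of 60 correct by guessing "
          f"(theoretical {60 / k:.1f})")
```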

This study evaluates the current functioning of four-option MC items across all objective assessments within one university and tests the hypothesis that three-option MC items are quicker and cheaper to write than four-option items, through an item-writing activity in which three-option MC items are written alongside four-option items. Final costs will be tallied and compared, and focus groups and surveys will be conducted with item writers to gain insight into the speed and ease of constructing the two item types.

 

Improving the Utility of Competency-Based Performance Assessments Via Cognitive Design

Presented by: Heather Hayes, Western Governors University

As online, competency-based higher education grows in prevalence among working adults, the quality of its assessments must evolve to meet the standards necessary to predict work-related competencies. Performance-based assessments that simulate job tasks represent a viable method of assessing these competencies. To maximize the validity of score use, it is necessary to delve deeper into the cognitive components of task stimuli and the response process and to link them to the underlying competency using cognitive theory and design. One can then effectively target the range of the competency by systematically varying the difficulty level of assessment tasks. Discussion centers on the effectiveness of cognitive design in competency-based performance assessments for IT courses, with implications for differentiating mastery and non-mastery students and for customizing assessments so that students receive competency-level feedback at various stages toward course completion.

 

Is This an Outcome to Add to My Report?

Presented by: Anton Beguin & Hendrik Stratt, Cito

One of the key questions in test design is: "How many items do I need to reliably report on a learning goal within a test?" In this presentation, we provide techniques for determining the number of items necessary in different situations and test designs.

Using Bayesian evaluation of diagnostic hypotheses, we can distinguish between the response behavior of masters and non-masters. In practice, this means that, for a response pattern on a group of items, we evaluate whether the pattern is more in line with the behavior of a master or of a non-master. We guide the audience through the steps needed to apply this technique and help them evaluate their own test or test design.
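
As a rough illustration of this kind of evaluation (a sketch with assumed per-item probabilities, not the presenters' actual model), one can compare the likelihood of an observed response pattern under a "master" hypothesis and a "non-master" hypothesis and report the resulting odds:

```python
# Minimal sketch with assumed values: compare how likely a response pattern is
# under a "master" versus a "non-master" hypothesis and report posterior odds.
from math import prod

def pattern_likelihood(responses, p_correct):
    """Likelihood of a 0/1 response pattern given per-item P(correct)."""
    return prod(p if r == 1 else 1 - p for r, p in zip(responses, p_correct))

responses = [1, 1, 0, 1, 1, 1]                      # pattern on a 6-item learning goal
p_master = [0.85, 0.80, 0.75, 0.85, 0.80, 0.90]     # assumed P(correct) for masters
p_nonmaster = [0.45, 0.40, 0.35, 0.45, 0.40, 0.50]  # assumed P(correct) for non-masters

bayes_factor = (pattern_likelihood(responses, p_master)
                / pattern_likelihood(responses, p_nonmaster))
prior_odds = 1.0                                    # equal prior belief in each hypothesis
print(f"posterior odds (master : non-master) = {prior_odds * bayes_factor:.1f}")
```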

We also provide the results of a small simulation study evaluating different hypotheses for mastery and non-mastery, different numbers of items, and the effect of harder or easier items.

 

Iterative Item Incubation for Incessant Inspection: Developing and Maintaining Highly Scrutinized Longitudinal Assessments

Presented by: Allie Daugherty & Robert Furter, American Board of Pediatrics

Longitudinal assessment programs are becoming more prevalent in the world of certification/licensure. These programs incorporate components of adult learning theory and spaced education to increase the formative value of testing to the test taker, while still providing the testing organization with sufficient information to make summative decisions. To accomplish this goal, longitudinal programs focus on more continuous assessment and feedback throughout a practitioner’s career.

This ePoster session will describe the end-to-end development and evaluation cycle of exam items in a longitudinal assessment program seeking to fulfill both summative and formative purposes. A case study of a longitudinal program built from the ground up will demonstrate the challenges posed in rapidly developing content that is suitable for multi-platform administration and viewing, and in keeping that content available to test takers for review in near perpetuity following the initial administration.

 

JAWS Doesn’t Bite! Experience Test Delivery with Assistive Technologies

Presented by: Tim Burnett & Leon Hampson, Surpass, Powering Assessment

Balancing innovation, comparability, and accessibility can seem like a huge undertaking for any test creator, even in 2020. We all want to move beyond a purely "box-ticking" exercise and make inclusivity core to all test development processes. With the right software and mindset, accommodating screen readers and the test takers who use them is a huge step forward for many test creators on the journey to making their exams truly accessible.

In this interactive ePoster session, we invite you to take a short test using accessibility tools commonly used by candidates with severe visual impairments. You will experience first-hand the JAWS screen reader working within a test driver, along with other examples of content that can make a significant difference to a candidate's experience.

You’ll leave the session with a stronger insight into the experience of candidates with accessibility needs and with a few simple steps you can take to improve the assessment experience for everybody.

 

Learning Credits: Creating a New Learning Currency

Presented by: Jackie Berdy, Xvoucher

The developer of one of the hottest technical certifications and a leader in the gaming and interactive industries, this certification body worked directly with two partners to introduce a Learning Credit currency. Training and enterprise customers can now acquire, spend, and manage Learning Credits.

These customers issue learning products to candidates, enhancing adoption, retention, and expansion of the community. Through a branded ecommerce marketplace, training and enterprise customers can purchase Learning Credits, gaining versatility, visibility, and management of approved learning products.

Meeting a critical business need, the certification body can now view these various accounts from a centralized hub, which enables it to monitor both sales and consumption across the Learning Credit ecosystem.

 

Scoring Short-Answer Items on a High-Stakes Medical Licensing Examination: An Application of Natural Language Processing and Machine Learning

Presented by: Maxim Morin & Andre De Champlain, Medical Council of Canada

The increasing reliance on constructed-response (CR) items in large-scale assessment reflects an interest in broadening the constructs measured in a rapidly evolving landscape. However, the use of CR items, while promising, raises many psychometric and practical challenges, largely because these items rely heavily on human scoring. Automated scoring (AS) offers a promising alternative for supplementing or even replacing human scoring of CR items, but it must be implemented in a manner that upholds psychometric best practices.

While automated essay scoring has been, and continues to be, used operationally in many testing programs, applications of AS to short-answer items have received less attention. The current session will outline an application of AS for short-answer items included in a bilingual, high-stakes medical licensing examination.
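
As a generic illustration of what an AS pipeline for short answers can look like in its simplest form (made-up responses, bag-of-words features, and a linear classifier; not the approach used in this examination), consider the following sketch:

```python
# Hypothetical sketch of automated short-answer scoring: TF-IDF features plus
# a simple classifier trained on human-scored responses.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny made-up training set: short-answer text with human scores (1 = credit)
answers = [
    "prescribe a beta blocker and reassess blood pressure",
    "order an ECG to rule out arrhythmia",
    "tell the patient to rest",
    "no treatment is needed",
]
scores = [1, 1, 0, 0]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(answers, scores)

# Score a new, unscored response (output depends on the training data)
print(model.predict(["order an ECG and start a beta blocker"]))
```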

 

Trust But Verify: Analyzing Vendor Data for Anomalies

Presented by: Maria Incrocci, Tara McNaughton & Nicholas Williams, American Osteopathic Association

When working with vendors for computerized testing, it is important to remain vigilant for potential anomalies in the data collected. Vendors generally work with many different clients and programs, but there is no guarantee that their results and processes are error-free, so the quality of all data should be reviewed. Verification of data is typically well understood from a test-publication perspective, as there are multiple reviews of the converted examination content. However, it is equally important to review the response data collected post-examination. Vendor metadata, such as key strings, reports, candidate comments, survey data, response times, and demographic information, are all examples of data that may flag issues requiring further investigation. Identifying and addressing anomalies plays an important role in improving the quality assurance of a program.
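
As one small, hypothetical example of this kind of review (assumed data and an arbitrary threshold, not the presenters' procedure), a simple screen can flag items whose mean response times sit far from the rest of the form, which may point to keying or rendering problems:

```python
# Illustrative sketch with made-up data: flag items whose mean response time
# on a form drifts far from the form-wide norm, a basic screen for key or
# content errors in vendor result files.
import statistics

# item_id -> mean response time in seconds on the current administration
mean_times = {"ITM001": 48.2, "ITM002": 51.7, "ITM003": 7.9, "ITM004": 55.0,
              "ITM005": 49.4, "ITM006": 112.6, "ITM007": 50.3}

values = list(mean_times.values())
mu, sigma = statistics.mean(values), statistics.stdev(values)

# Arbitrary cutoff for illustration: more than 1.5 SDs from the form mean
flagged = {item: t for item, t in mean_times.items() if abs(t - mu) / sigma > 1.5}
print(f"mean={mu:.1f}s, sd={sigma:.1f}s, flagged for review: {flagged}")
```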

 
