Featured Speakers

We are excited to announce our featured speaker sessions for this year’s Innovations in Testing Conference! Check back as we will be adding more sessions.

Avoiding Misuse of Translated Psychological Assessment Scales

Psychological assessment scales, such as the WISC-IV and the Bayley-III, are powerful, comprehensive tools for assessing children's developmental delays. Issues arise, however, when translated psychological assessment scales are used across cultures. For example, a study in China using the Bayley Scales of Infant and Toddler Development concluded that more than 53% of children in the less developed Chinese countryside are developmentally delayed in cognitive skills or have an intellectual disability. That conclusion seems to defy common sense, as the international average is only around 16% (the share of scores falling more than one standard deviation below the mean IQ). Other independent studies in China using different clinical scales with local and national norms indicated that the corresponding percentage ranges from only 10% to 16%. This paper will discuss the issues encountered when using translations of clinical assessment scales in different cultures and their impact on a scale's construct, items, and norms. Keywords: clinical assessment; Bayley Scales of Infant and Toddler Development; intellectual disability.
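
For context on that benchmark: the ~16% figure is simply the normal-curve tail one standard deviation below the mean. A minimal sketch of the arithmetic (illustrative only, not taken from the study):

```python
# Share of a normal distribution falling more than one standard
# deviation below the mean: Phi(-1) ~= 15.9%, i.e., roughly 16%.
from statistics import NormalDist

iq = NormalDist(mu=100, sigma=15)  # conventional IQ scaling
cutoff = iq.mean - iq.stdev        # one SD below the mean: 85
print(f"P(score < {cutoff:.0f}) = {iq.cdf(cutoff):.1%}")  # 15.9%
```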

Speaker(s): Dr. Zhiming Yang, Hunan Normal University, China

 

The Promise and Perils of Using Artificial Intelligence in Testing

This panel brings together experts in artificial intelligence and assessment to consider both the likely benefits and the possible pitfalls of using AI within assessment to make decisions about people. Although artificial intelligence promises advancements in precision, efficiency, cost reduction, and unbiased, data-driven decision-making, AI systems also present many challenges, including the potential for privacy intrusion, discriminatory or biased outcomes, and legal and ethical risks in how AI-derived data is used and decisions are made. These potential benefits and risks apply when using AI for all types of assessment, including certification, licensure, employment selection, and educational and psychological testing. Given the importance of fairness and validity to the assessment community and the increasing use of AI in assessment systems and processes, understanding how to thoughtfully navigate these issues has never been more important. In this context, the panelists will discuss the following questions during this session:

  • What do we mean by the term artificial intelligence?
  • What are the ways in which AI is already being used in the context of testing and assessment, and how else might it be used in the near future?
  • What are the potential impacts on broader society that might result from using AI in assessment?
  • How can we create AI systems that are not biased or discriminatory, and are there standards to follow in this regard?
  • What are the legal and regulatory trends regarding the use of AI for assessment decisions and how do these differ in the United States, Europe and elsewhere in the world?
  • What should ATP be doing in the field of AI and assessment?

Speaker(s): John Kleeman, Questionmark; Marc Weinstein, Marc J. Weinstein, PLLC and Caveon, LLC; Alina von Davier, Duolingo; Ken Johnston, Microsoft; Yohan Lee, Riiid Labs

 

Beyond Assessments: Why You Don’t Want to be a Test Provider in a Solutions Marketplace

The backlash against testing that the industry faces can be effectively addressed not by defending testing for the sake of the tests, but by showing that the tests themselves are the entryway to solutions, whether in education, offender risk management, employment testing, or any other kind of testing program. If all we do is continue to defend testing for the sake of testing, we are fighting a losing battle. The answer lies in focusing on the real value of assessments - providing insight into the challenges faced by individuals and using those insights to create real and meaningful solutions at both the individual and the organizational level.

Speaker(s): Hazel Wheldon, MHS; Kimberly Nei, Hogan

 

Should Interim Assessments Replace Summative Assessments for K-12 Accountability?

Due to COVID-19 and the fact that students were not in schools, all accountability testing was canceled in the spring of 2020. Some states have indicated their desire to also suspend accountability testing in the spring of 2021 and have asked the US Department of Education for a waiver under the requirements of ESSA. Many schools continue to administer -- often remotely at home -- interim assessments and other low-stakes assessments that have typically been used only for diagnosis and prescription. Opinions have been expressed for years that aggregated results of low-stakes assessments can be used for accountability in lieu of high-stakes summative assessments administered on demand in the spring of each year. Publishers and administrators of high-stakes summative assessments are concerned that canceling their administration, or making it optional, for two years in a row would spell the end of a very large segment of the assessment business. A more important concern is that many measurement experts question whether aggregating the results of interim and formative assessments can replace the quality of information that comes from on-demand summative assessments designed specifically for that purpose. This session will address the concerns on both sides of that argument and their short-term and long-term effects on the educational assessment industry.

Speaker(s): John Oswald, The Oswald Group; Scott Marion, Center for Assessment; Dirk Mattson, Curriculum Associates; JT Lawrence, Cambium Assessments

 

Using AI and Biometrics: The Good, The Bad, and The Cost

Artificial intelligence and biometrics have offered employers new tools for security, hiring, training, and promotion. At the same time, advocates have raised concerns that these tools erode privacy and create bias that may discriminate against protected classes.

In testing, AI and biometrics have offered testing programs enhanced tools for securing test content, protecting the integrity of the testing process, and identifying examinees. How will the legal issues that employers face play out in the testing arena? The reality is that incorporating these tools can increase hard and soft costs and may raise legal issues, including privacy and discrimination concerns. In fact, use of AI in the workforce continues to be scrutinized from a legal perspective in connection with privacy and discrimination issues, with predictions of more litigation in the future. Experienced legal and security professionals will share their experiences, review the practical benefits offered by AI and biometrics for exam security, discuss the risks and limitations of the current use of this technology, and address the sometimes unanticipated costs associated with incorporating these tools. This session will address security and privacy issues in the workforce and lessons learned in that space for testing, as well as opportunities and considerations for expanding the use of AI in testing.

Speaker(s): Donna McPartland, McPartland Privacy Advising; Jennifer Semko, Baker McKenzie; Camille Thompson, The College Board

 

Ask the DPOs

Ask the EU Data Protection Officers (DPOs) of two association member organizations about their roles, how companies can use their services more effectively, what they see as potential stumbling blocks amid ever-evolving privacy and data security concerns, the repercussions of the invalidation of Privacy Shield, Standard Contractual Clauses (SCCs), and more.

Speaker(s): Joseph Srouji, Prometric; Laura Stoll, Hogrefe

 

Facing the Courts

Security and privacy incidents often include both legal and public relations implications, and potentially even more so as testing programs have rapidly shifted to remote testing in response to the global pandemic. The reality is that any incident may be judged not only in the court of law, but also the court of public opinion, and results may differ. In fact, it is entirely possible to win in the court of law, but lose in the court of public opinion. Every testing program should be familiar with the legal remedies and defenses available to them in protecting their program, as well as philosophies and activities available in engaging with others through traditional and social media. Join legal and media relations experts as they discuss these issues and reflect on the lessons learned over the past 12 months to prepare testing organizations to “face the courts.”

Speaker(s): Rachel Schoenig, Cornerstone Strategies; Sarah Pauling, Examity; Stephanie Dille, Yardstick-ProctorU; Jennifer Semko, Baker McKenzie

 

How the COVID-19 Pandemic Has Escalated Digital Transformation for Testing Companies

Two months into the COVID-19 pandemic, Satya Nadella, CEO of Microsoft, was quoted as saying that organizations had experienced two years of digital transformation in two months. By the end of the summer of 2020, the Rotman Institute in Toronto reported that the rate of change had accelerated to ten years of transformation in six months. This change has brought to light a raft of benefits and challenges for the industry. This session will focus on the impact of digital transformation across all the assessment divisions and examine how companies are dealing with the challenges and taking advantage of the opportunities the change has brought.

Speaker(s): Mike Sparling, MHS; Andre Allen, Fifth Theory; Ashley Norris, Yardstick-ProctorU

 

Post-Pandemic Asia EdTech: Growth and Deepening in Learning Assessment

With school closures due to the coronavirus pandemic, public schools were forced to accelerate their adoption of e-learning solutions. EdTech-based learning in Asia has seen rapid adoption in public K-12 schools to create online classroom experiences, deepening market penetration in both the traditional classroom and the after-school training market. Post-pandemic, EdTech learning has become a critical component of the K-12 education ecosystem.

At the 2020 ATP Virtual Conference, the Asia ATP panel presented country snapshots of EdTech market drivers, leading companies, and the impact on testing in China, Japan, and Korea. At the 2021 ATP Virtual Conference, this panel will drill down into the most popular use case in each country, focusing on the “smart classroom” in elementary, middle, and high schools in China, Japan, and Korea. The discussion will cover teacher and student use cases to investigate assessment innovation in smart classrooms, data capture, and data use.

Speaker(s): Dr. Changhua Rich, ACT; Norihisa Wada, EduLab; Won Kim, Seoulfull Tech; Alex Tong, ATA Online China

 

Leading Through the Pandemic: Leadership Lessons and Insights from Industry CEOs

2020 and the COVID-19 pandemic tested every one of us professionally and personally. For CEOs, steering their organizations through financial uncertainty, the digital shift, the mass adoption of work from home, and the planning of a return to the office, all while keeping day-to-day operations going, offered both an added layer of challenge and an opportunity for growth. Hear from a panel of industry CEOs, moderated by ATP Executive Director William G. Harris, on how they kept their organizations moving forward, what they learned throughout the pandemic, and the leadership lessons and insights they would like to share.

Speaker(s): William G. Harris, ATP; Lars Pederson, Questionmark; Sangeet Chowfla, GMAC; Hazel Wheldon, MHS

 

Bias In, Bias Out: How Can We Responsibly Use Artificial Intelligence and Machine Learning Algorithms?

Artificial intelligence and machine learning (AI/ML) models are powerful tools for assessment professionals to have in their toolkit. AI/ML can efficiently help us improve our ability to predict important outcomes and, as a result, help us and our clients make better decisions. But there are important concerns about the quality of the data used to train those models and the tendency of models to replicate the underlying biases in those data. Put succinctly, although appropriately executed AI/ML can help reduce bias (in employment decisions, etc.), poorly executed AI/ML can increase it. The benefits of AI/ML models and the concerns around their misuse are not unique to a single ATP division or industry vertical. What can we learn from other divisions and their use of these models? How are they effectively using AI/ML models to improve their work? And how are they identifying, correcting, and preventing algorithmic bias? Join us for a conversation with a panel of experts from multiple divisions to find out.
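
One concrete form that identifying algorithmic bias can take is an adverse-impact check on model decisions. A minimal sketch with hypothetical data, using the conventional four-fifths rule as the flag threshold (an illustration, not any panelist's method):

```python
# Four-fifths rule: a group's selection rate below 80% of the
# highest group's rate is a conventional red flag for adverse impact.

def selection_rate(outcomes):
    """Fraction of candidates selected (1 = selected, 0 = rejected)."""
    return sum(outcomes) / len(outcomes)

def adverse_impact_ratios(outcomes_by_group):
    """Ratio of each group's selection rate to the highest group's rate."""
    rates = {g: selection_rate(o) for g, o in outcomes_by_group.items()}
    top = max(rates.values())
    return {g: rate / top for g, rate in rates.items()}

# Hypothetical model decisions for two applicant groups.
decisions = {
    "group_a": [1, 1, 0, 1, 1, 0, 1, 1],  # 75.0% selected
    "group_b": [1, 0, 0, 1, 0, 0, 1, 0],  # 37.5% selected
}
print(adverse_impact_ratios(decisions))
# {'group_a': 1.0, 'group_b': 0.5} -> 0.5 < 0.8 flags group_b
```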

Speaker(s): Clay Richey, Pearson; Mike Sparling, MHS; Fred Oswald, Rice University

 

Online Proctoring and Test Center Equivalence & Consideration: A Comprehensive Look

Online assessment is demonstrating tremendous potential as a tool to expand access and streamline the experience for test-takers. At the same time, concerns about privacy and security and the wide variety of approaches are creating confusion about equivalency and appropriate applications of online proctoring.

During this session, a panel of assessment experts from a variety of backgrounds will present the findings from their research exploring various comparisons between exams and programs using online proctoring and test center proctoring to help attendees decide the right modality to apply for each of their programs. Topics covered will include:

  • Security issues/prevalence of cheating
  • Demonstrating equivalence between the different variations of test centers and OP options (one statistical framing is sketched after this list)
  • Delivery metrics: what should be measured, what are some benchmarks, and how should these metrics be compared across modalities
  • Delivery issues: how they compare across delivery modalities, candidate populations, and tradeoffs with convenience
  • Considerations for testing programs and types of workforce users of credentials
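
As a concrete illustration of the equivalence item above: equivalence between modalities is often framed statistically, for example with two one-sided tests (TOST). A minimal sketch, with hypothetical scores and an assumed equivalence margin (not drawn from any panelist's research):

```python
# TOST: conclude the modality difference in mean scores lies within
# +/-delta only if BOTH one-sided null hypotheses are rejected.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
center = rng.normal(500, 100, 400)  # hypothetical test-center scores
online = rng.normal(503, 100, 400)  # hypothetical online-proctored scores

delta = 10.0  # equivalence margin: a policy choice, not a statistic

# Test 1: H0 is "mean(online) - mean(center) <= -delta"
p_low = stats.ttest_ind(online + delta, center, alternative="greater").pvalue
# Test 2: H0 is "mean(online) - mean(center) >= +delta"
p_high = stats.ttest_ind(online - delta, center, alternative="less").pvalue

print(f"TOST p-value: {max(p_low, p_high):.4f}")  # < 0.05 => equivalent
```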

Speaker(s): Isabelle Gonthier, Yardstick; Rachel Schoenig, Cornerstone Strategies; Bill West, Examity; Tony Zara, Pearson VUE

 

Treating Your Pandemic Hangover – a Workforce Perspective

In response to the global pandemic, our world has experienced radical shifts in a short period of time. The need to social distance has caused educators and employers to think differently about distance learning, working from home, and the use of technology to train and monitor individuals. Will those shifts remain and, if so, how will that impact the testing industry? Within our industry, the use of remote proctoring, artificial intelligence, and personal protective equipment has changed how tests are administered. Will testing programs continue to employ those tools after pandemic concerns are alleviated, or have we established new expectations for how test takers and staff experience testing? How will testing be viewed and used in a post-pandemic world? Looking ahead to these questions can prepare us to effectively manage our “post-pandemic hangover” and better position testing programs to support their stakeholders, no matter what the future holds.

Speaker(s): Carl Matheson, Australian Medical Council; Camille Thompson, The College Board

 

How Can the Educational Testing Industry Help Close the Achievement Gap Widened by Covid-19?

A continuing concern in K-12 education in the United States has been the pernicious achievement gap between under-served children – children of color, children with special education needs, and children in low-SES schools – and more privileged children. During the 2019-2020 school year, many things happened to widen the achievement gap, such as the digital divide and its effect on distance learning, and the suspension of the spring statewide summative assessments that would have provided data on students' achievement and the status of the gap. Social issues intensified in 2020 around concerns of systemic racism in the United States, increasing attention to equity in education on many levels. This session will address the role of educational assessments as a component in addressing those challenges. It will include a summary of research showing the widening achievement gap and then invite discussion by educational policy experts from a large school district perspective and from a state perspective.

Speaker(s): John Oswald, Moderator, The Oswald Group; Kristen Huff, Curriculum Associates; Raymond Hart, Council of Great City Schools; Scott Norton, Council of Chief State School Officers; Annie Holmes, Council of Chief State School Officers

 

Remote Proctoring: A Practitioners’ Guide to Implementation

The COVID-19 pandemic has significantly disrupted traditional methods of test administration and accelerated the adoption of remote proctoring technology. Remote proctoring is providing a pathway for learners and job seekers to access education and professional opportunities, and it enables testing organizations to continue serving stakeholders during the pandemic. While remote proctoring is an accepted practice in the assessment industry, it is not well understood by the public, and the media and test users have raised concerns regarding access and privacy surrounding the technology.

In this session, leading education and technology experts will engage in a debate on the various viewpoints on the implementation of remote proctoring. The debate will be followed by a discussion in which panelists will examine the successes, challenges, and opportunities for implementing remote proctoring in different segments of the testing community. Is remote proctoring right for your program? Join the conversation and find out!

Moderators: Ada Woo, Ascend Learning; Chad Buckendahl, ACS Ventures

Panelists: Carey Wright, Mississippi Department of Education; Jarrod Morgan, ProctorU; Jim Larimore, Riiid Labs; Mike Olsen, Proctorio

 

Register Today!