How the Archival Certification Examination is Developed and Evaluated

Kevin J. Williams
Associate Professor of Psychology, State University of New York at Albany
Originally published in “ACA News,” July 1999.

To obtain the title of Certified Archivist, individuals must pass the Academy of Certified Archivists’ national examination. A passing score on this exam signifies that the individual possesses the level of knowledge necessary to practice successfully as an archivist. But did you ever wonder how certification exams are constructed and validated? Or how associations such as the ACA can be reasonably sure that the passing score on their certification exam accurately reflects the level of knowledge necessary to be successful in the field? Such questions are important ones, and ACA officers field them often. Accordingly, I have been asked by the ACA to write a description of the test development and evaluation process for its members.

I have been performing the psychometric analysis for the national certification examination for the past four years, and have performed similar services for other associations and state licensing agencies. In this article I will lay out the general strategy that is used by the ACA to ensure that the examination for certified archivists provides a valid assessment of one’s capability to practice successfully in the field.

I’ll begin by stating that the ACA exam is constructed and evaluated according to sound scientific principles. In its scientific rigor, it stands apart from some other examinations I have seen. The exam is based on a thorough scientific analysis of the work activities of archivists, the criterion for passing is established and validated using the judgments of subject matter experts, and the pool of test items is continually evaluated and updated. These practices ensure that the ACA exam is a valid assessment of one’s ability to perform competently in the profession.

A key component in this process is the ACA’s Examination Development Committee. Members of this committee use their experience and expertise in steering the test validation process. In the following sections of this article, I will review (1) the objective of the examination, (2) the process by which the test is constructed, and (3) the procedures used to evaluate the test (i.e., to make sure it’s reliable and valid).

Objective of the Certification Examination

Like all certification examinations, the ACA national certification exam assesses the competence of professionals after they have completed their professional training. The objective of a certification exam is to identify candidates who possess the knowledge, skills, and abilities to perform entry-level work in the profession competently. This objective is different from that of other tests used by organizations, such as those used for hiring or promotion. For hiring and promoting individuals, the goal of a test is to discriminate among individuals in terms of their total capacity to perform the job in question. In other words, organizations want to be able to rank-order individuals from highest to lowest in terms of their knowledge, skills, and abilities. Organizations follow “top-down” selection procedures for hiring and promotion: positions are offered to individuals with the highest test scores, with subsequent offers extended down the list until all the available slots are filled.
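
To make the contrast concrete, here is a minimal sketch of top-down selection in Python. The candidate names, scores, and number of openings are all invented for illustration; nothing like this is used in certification.

    # Illustrative "top-down" hiring selection with invented data.
    candidates = {"Avery": 88, "Blake": 76, "Casey": 92, "Drew": 81}
    openings = 2

    # Rank-order candidates from highest to lowest test score.
    ranked = sorted(candidates, key=candidates.get, reverse=True)

    # Offers go down the list until all available slots are filled.
    offers = ranked[:openings]
    print(offers)  # ['Casey', 'Avery']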

Certification tests, by contrast, are more concerned with identifying all candidates who pass the “threshold for competence” in a profession. From a psychometric perspective, we want to make sure that all individuals who pass the test and become certified are competent workers, and that all competent individuals pass the test and become certified. This implies that there are two errors we wish to avoid: we do not want to certify someone who is not competent (a “false positive”) or fail to certify someone who is competent (a “false negative”). Such errors can be minimized by (1) ensuring that the knowledge and skills assessed by the exam are related to critical work activities of the profession, and (2) establishing a valid passing point for the examination. As you will see in the next section, considerable attention goes into developing test items and determining the pass point for the archivists’ certification exam.
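
Once a cutoff is chosen, the two error types can be counted directly. In the sketch below, the scores, competence labels, and the cutoff of 70 are all hypothetical, chosen only to show how each error is defined.

    # Invented data: raw scores paired with (unknowable in practice)
    # true competence labels, plus an assumed pass point of 70.
    scores    = [55, 62, 68, 71, 74, 78, 83, 90]
    competent = [False, False, True, False, True, True, True, True]
    cutoff    = 70

    # False positive: certified (score >= cutoff) but not competent.
    false_positives = sum(s >= cutoff and not c
                          for s, c in zip(scores, competent))

    # False negative: competent but not certified (score < cutoff).
    false_negatives = sum(s < cutoff and c
                          for s, c in zip(scores, competent))

    print(false_positives, false_negatives)  # 1 1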

The Test Construction Process

A scientifically and legally defensible examination is based on a detailed analysis of the job in question. The foundation for the exam is what is called a job analysis or role delineation study. A typical job analysis study for a national certification exam involves surveying job incumbents across the country about the specific tasks involved in their work. The survey responses provide a detailed description of the job activities of professionals. From these data, the job analyst defines the major job domains of the profession, along with the specific tasks that comprise each domain.

Next, subject matter experts are consulted to help identify the knowledge and skills a worker must possess to perform each domain task competently. Knowledge statements are then written for each domain task. Together, the job domains, tasks, and knowledge statements encompass the commonly accepted duties that professionals perform in the course of their daily work and provide the blueprint for the certification examination.

In essence, the job analysis provides the test specifications for a certification examination. Using the task and knowledge statements, subject matter experts write questions for the exam. Test items are constructed to assess the extent to which people have the knowledge and skills (as outlined in the knowledge statements) required to perform the critical tasks in each job domain.

For the ACA exam, members of the Examination Development Committee write the test items, with guidance from psychometric experts. The Committee as a whole then scrutinizes all test items, again in consultation with psychometricians and other subject matter experts. This process ensures that only items that measure knowledge and competency relevant to the archivist role are selected and placed on the exam.

To make this process more meaningful, let me describe some of the specific outcomes of the job analysis for the ACA exam. The latest job analysis conducted for the ACA exam identified seven key performance domains in the archivist work role, with several critical tasks within each domain. The seven domains are:

  • selection of documents
  • arrangement and description of documents
  • reference services and access to documents
  • preservation and protection of documents
  • outreach, advocacy, and promotion of document collections
  • managing archival programs
  • professional ethical and legal responsibilities

Each domain has between three and six tasks associated with it. One of the tasks within the domain of preservation and protection of documents, for example, is “analyze the current physical condition of documents and determine appropriate preservation actions and priorities.” A knowledge statement related to this task is “archivists know and can apply knowledge about the causes and consequences of the deterioration of paper and other media.” Thus, an appropriate item for the exam would ask candidates about factors that cause deterioration of paper.
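
If it helps to picture the blueprint, one slice of it might be represented as the nested structure below. The layout is my own illustration, not an ACA data format, though the domain, task, and knowledge statement are taken from the example just given.

    # Hypothetical sketch of one slice of the exam blueprint; the
    # nesting mirrors domain -> tasks -> knowledge statements.
    blueprint = {
        "preservation and protection of documents": [
            {
                "task": ("analyze the current physical condition of "
                         "documents and determine appropriate "
                         "preservation actions and priorities"),
                "knowledge": [
                    "causes and consequences of the deterioration "
                    "of paper and other media",
                ],
            },
            # ...remaining tasks for this domain (three to six in all)
        ],
        # ...six more domains
    }

Each exam item is then written against one of these knowledge statements, which is what gives every question a traceable link back to a critical work task.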

Jobs and professions are not static, but tend to change over time because of advances in technology and changes in the social, political, and legal environments related to work. Thus, job analyses need to be conducted periodically to track changes in the work activities of a profession. For certification examinations, job analysis studies are typically conducted every five years to make sure that the exam stays current with the profession. (So, when you get the next job analysis survey from the ACA Examination Development Committee, make sure you fill it out and return it!) As you can see, developing and writing test questions is an ongoing process.

After the test is constructed, one final critical step remains: the cutoff score for passing the exam must be identified. There are different methods for determining cutoff scores or pass points, all of which contain an element of subjectivity. It is crucial, however, for the passing score to be consistent with normal expectations of proficiency for entry-level professionals. Unfortunately, it is not uncommon for associations to set a pass point arbitrarily (e.g., 70% correct). The ACA uses a more scientific approach. Following procedures used for many licensing exams (a judgmental standard-setting procedure commonly known as the Angoff method), a panel of expert judges who are experienced in the profession, in the ACA’s case the Examination Development Committee, carefully examines each item on the test and rates it in terms of the proportion of just-competent persons who would answer it correctly. The ratings are then averaged across judges and combined across items to arrive at the pass point. The main advantage of this approach, which is the most common one used by test developers and psychometricians, is that expert judges, rather than consultants or outsiders, use their knowledge and experience to determine minimum performance standards for the profession.
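
Computationally, the procedure is simple. In the sketch below the ratings are hypothetical, but the arithmetic, averaging each item’s ratings across judges and then summing across items, is one standard way of turning Angoff ratings into a raw-score cutoff.

    # Hypothetical ratings: rows are judges, columns are items. Each
    # entry is a judge's estimate of the proportion of just-competent
    # candidates who would answer that item correctly.
    ratings = [
        [0.80, 0.60, 0.90, 0.70],  # judge 1
        [0.75, 0.65, 0.85, 0.60],  # judge 2
        [0.85, 0.55, 0.95, 0.65],  # judge 3
    ]

    n_items = len(ratings[0])

    # Average each item's ratings across the panel of judges...
    item_means = [sum(judge[i] for judge in ratings) / len(ratings)
                  for i in range(n_items)]

    # ...then sum across items: the expected raw score of a candidate
    # who is just at the threshold of competence.
    pass_point = sum(item_means)
    print(f"pass point: {pass_point:.2f} of {n_items} items")  # 2.95 of 4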

I have gone into quite a bit of detail about the content and construction of the exam because the most appropriate method for validating a certification examination is through content validation procedures. Since archivists work in many different contexts, it is practically impossible to validate test scores against a common performance criterion. Instead, psychometricians turn their attention toward the content of the exam itself and make sure that it measures the normative activities of the profession.

To summarize the test construction process, a job analysis delineates the major domains of archival practice, along with specific work tasks nested within each domain. Knowledge statements are then written for each work task. Together, the domains, tasks, and knowledge statements provide the blueprint for the certification examination. Subject matter experts then write questions for the exam and define the passing score for certification.

Evaluating the Test

After each administration of the exam, analyses are conducted to assess test outcomes and item performance. These analyses are conducted to ensure that the present test outcomes (i.e., pass/fail decisions) are fair and valid and to examine the appropriateness of test items for future exams. Analyses are conducted at both the test and item level.

At the test level, the distribution of raw scores across candidates is examined and estimates of error of measurement are obtained. The distribution of raw scores for certification exams typically shows a negative skew, meaning that scores tend to bunch on the passing side of the distribution. This is because certification exams are designed to test minimally acceptable or threshold knowledge and skill for practice. A candidate’s raw test score provides a point estimate of his or her total knowledge related to the profession.

Obviously, one’s “true” score or knowledge may differ from the actual test score, but unfortunately we can never know a candidate’s true score for certain. There are, however, methods of estimating measurement error. These estimates are examined after each test, and if the estimates of error are large, adjustments are made to the cutoff score. Fortunately, when the item writing procedures described above are followed, error estimates tend to be low.
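
One standard estimate is the standard error of measurement (SEM), computed from the standard deviation of the scores and an internal-consistency reliability coefficient such as KR-20. The numbers below are hypothetical; I am illustrating the general formula, not the ACA’s exact computation.

    import math

    # Hypothetical test statistics, for illustration only.
    sd_scores   = 8.5   # standard deviation of raw scores
    reliability = 0.90  # internal-consistency estimate (e.g., KR-20)

    # SEM: the typical distance between a candidate's observed score
    # and his or her (unknowable) true score.
    sem = sd_scores * math.sqrt(1 - reliability)
    print(f"SEM = {sem:.2f} raw-score points")  # SEM = 2.69

Roughly speaking, a candidate’s observed score falls within one SEM of his or her true score about two-thirds of the time, which is why large error estimates matter most for candidates near the cutoff.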

At the item level, analyses are conducted to ensure that each of the test items is of appropriate difficulty (not too easy or too difficult) and capable of distinguishing between high test scorers and low test scorers. Ideally, each item on the exam should discriminate between those who possess the requisite knowledge for practice and those who do not. Different analyses are conducted to determine the extent to which each item on the test does in fact differentiate between those who do well on the test (and presumably have high levels of knowledge and skill) and those who do poorly on the test. Items that are too easy or too difficult, or that fail to discriminate, are flagged and discarded from the examination item pool.
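
For the statistically curious, the two workhorse statistics here are item difficulty (the proportion of candidates who answer the item correctly) and item discrimination (commonly the point-biserial correlation between the item and the total score). Here is a sketch with invented response data.

    import statistics  # statistics.correlation requires Python 3.10+

    # Invented scored responses: rows are candidates, columns are
    # items (1 = correct, 0 = incorrect).
    responses = [
        [1, 1, 0, 1],
        [1, 0, 0, 1],
        [1, 1, 1, 1],
        [0, 0, 0, 1],
        [1, 1, 0, 0],
    ]

    totals = [sum(row) for row in responses]  # each candidate's raw score

    for i in range(len(responses[0])):
        item = [row[i] for row in responses]
        difficulty = sum(item) / len(item)  # proportion answering correctly
        # Point-biserial discrimination: correlation between success
        # on this item and the total score.
        discrimination = statistics.correlation(item, totals)
        print(f"item {i + 1}: p = {difficulty:.2f}, r = {discrimination:.2f}")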

I hope that this overview has provided insight into the certification testing process used by the ACA. The test construction and evaluation process is consistent with the highest standards in the testing industry, and safeguards are instituted to help ensure the integrity of the examination. Job analysis, item writing, and item analysis are all part of an ongoing process of test construction and validation. These procedures help ensure that the test is meeting its objective: to certify individuals as competent to practice as archivists.
