LPI is committed to providing the IT community with exams of the highest quality, relevance, and accuracy. This commitment requires that our exam development process be highly detailed, participatory, and consultative, and that it employ many of the proven techniques used by most other IT certification programs.
LPI prefers the written exam method; the reasons are explained in the final section of this document.
Psychometrics, the study of testing and measuring mental capacity, is used throughout LPI certification development to ensure that our exams reflect the needs of the IT community and industry.
As a part of our ongoing certification development process we continually monitor the needs of the Linux and IT markets to ensure our exams effectively evaluate candidates on the most relevant skills.
When we began this process in the late 1990s, we launched a two-tiered certification track that became what is known today as LPIC-1 and LPIC-2. Over the years we have expanded our offerings to include a third tier in the LPIC professional track, LPIC-3, with its three enterprise specialties. We have also introduced an entry-level certificate program, Linux Essentials, for those seeking to add some Linux to their credentials.
After development of a program structure and a job description for an exam or series, the next stage is to scientifically determine the skills, tasks and areas of knowledge needed for the job. The challenge: Anyone could come up with a list of tasks they think a Linux professional should be able to do. If you ask 10 Linux professionals what a “junior-level” professional should do, you might get 10 lists.
Which list is correct?
Our solution: We ask a large number of Linux professionals for their lists of necessary job duties, and then compile the responses to find the common and most important tasks. The most important tasks show up on all lists.
This process is called a job analysis study, or job task analysis. LPI has completed extensive job analysis surveys of Linux professionals to help ensure exams are unbiased and constructed fairly.
First we work with a large pool of subject-matter experts to compile an exhaustive list of all the tasks that they think might be performed by the target audience of the certification.
Next, the tasks collected during the pre-survey go into a job analysis survey. This survey asks practicing Linux professionals to rate each task in several ways:
Finally, we conduct statistical analysis of the survey responses. We compute statistics indicating, on average, how critical respondents rated each task. This analysis guides the determination of the final job task list.
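As a rough illustration of that aggregation step, the sketch below computes a mean criticality rating per task and keeps the highest-rated tasks. The task names, the 1–5 rating scale, and the cutoff are invented for this example; LPI's actual survey dimensions and thresholds are not shown here.

```python
# Hypothetical sketch of aggregating job-analysis survey ratings.
# Task names, the 1-5 scale, and the cutoff are invented examples.
from statistics import mean

# Each respondent rates each task's criticality on a 1-5 scale.
survey_responses = {
    "manage file permissions": [5, 5, 4, 5, 4],
    "configure a boot loader": [4, 3, 4, 4, 3],
    "write kernel modules":    [2, 1, 2, 1, 2],
}

def final_task_list(responses, cutoff=3.0):
    """Keep tasks whose mean criticality meets the cutoff,
    ordered from most to least critical."""
    scored = {task: mean(ratings) for task, ratings in responses.items()}
    return [task for task, score in
            sorted(scored.items(), key=lambda kv: -kv[1])
            if score >= cutoff]

print(final_task_list(survey_responses))
```

Here the low-rated kernel-module task drops off the final list, mirroring how tasks that few respondents consider critical are excluded from the final job task list.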
The third major stage of development is using the results of the job analysis study to develop the actual objectives for the exam(s). Objectives express specific things that Linux professionals must be able to do. Each objective is assigned a weighting value indicating its importance relative to other objectives.
First, a small group of people with knowledge of both Linux technical issues and psychometric principles drafts an initial set of test objectives, basing them on the results of the job analysis study.
After the draft objectives are created, they are placed in a web-based system for public review and comment. This system organizes objectives by exam and content topic, displaying the objectives along with links to additional documentation about them. Public comments are collected, and supervisors then review the comments and revise the objectives as necessary. The most recent review and revision of the objectives is posted publicly on our LPI Exam Development wiki, and we send them to our community and ExamDev mailing list for comments and input.
When the objectives are finalized, we post them to LPI.org and let our community as well as courseware and training providers know, so that the training materials can be updated to reflect the new exam material.
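One practical consequence of the weighting values mentioned above is that higher-weighted objectives receive more questions on a fixed-length exam. The sketch below is purely illustrative: the objective IDs, weights, exam length, and the largest-remainder rounding scheme are our assumptions, not LPI's published method.

```python
# Illustrative only: translating objective weights into question
# counts on a fixed-length exam. Values here are invented.
def questions_per_objective(weights, total_questions):
    """Allocate questions proportionally to objective weight,
    using largest-remainder rounding so the total stays fixed."""
    total_weight = sum(weights.values())
    exact = {o: w * total_questions / total_weight
             for o, w in weights.items()}
    alloc = {o: int(x) for o, x in exact.items()}
    remaining = total_questions - sum(alloc.values())
    # Hand leftover questions to the largest fractional remainders.
    for o in sorted(exact, key=lambda o: exact[o] - alloc[o],
                    reverse=True)[:remaining]:
        alloc[o] += 1
    return alloc

weights = {"101.1": 4, "101.2": 3, "101.3": 1}
print(questions_per_objective(weights, 16))
```

With these invented weights, objective 101.1 gets twice as many questions as 101.2 would per unit weight suggests, i.e. the allocation follows the 4:3:1 ratio exactly.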
Once the objectives are finalized, we begin writing the exam questions, called items. Security is a major concern in item development. All items are kept as confidential as possible: everyone involved in the process signs a non-disclosure agreement, and LPI takes other undisclosed security precautions as well.
Historically, the process used to develop the items for most other IT certification exams was to fly a group of subject-matter experts into a location for a week or more, give them training in how to write items, and then have them work intensely to create the questions.
But this technique is expensive and exclusive. At LPI, during our initial exam development phase, we leveraged the power of the community through the internet, encouraging everyone who was interested and knowledgeable to help with item writing.
Since then, LPI has developed new items for exam rotation in-house by tapping the knowledge of subject-matter experts, online volunteers, and participants in item-writing workshops.
Supervisors screen all submitted exam items and accept, reject, or reword them. They focus on three criteria:
Next, LPI uses a group of Linux experts to put items through a technical review. Each item is reviewed by multiple experts. Each expert classifies items as approved, rejected or “other” for rewording or review by others.
The primary technical criteria:
Supervisors then collect the reviews to determine whether each item is:
The next stage of development involves assembling items into exams for global deployment. Each test has multiple forms. If a candidate fails one form and retakes the exam, they receive a different form on the retake attempt.
The Pearson VUE test engine randomly orders the questions of each form when someone takes the exam to ensure two candidates taking the same exam are not tested on the same questions in the same order.
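The idea of per-delivery shuffling can be sketched in a few lines. This is only a toy illustration of the concept; the actual Pearson VUE engine is proprietary, and the seeding scheme here is an invented assumption.

```python
import random

# Toy sketch of per-candidate question shuffling. Not the real
# Pearson VUE engine; the session-seed idea is an assumption.
def delivered_order(form_questions, session_seed):
    """Return the form's questions in a randomized order that is
    reproducible for a given delivery session."""
    rng = random.Random(session_seed)
    order = list(form_questions)
    rng.shuffle(order)
    return order

form_a = ["q1", "q2", "q3", "q4", "q5"]
# Two candidates sitting the same form see the same questions,
# but generally in different orders.
print(delivered_order(form_a, session_seed=1))
print(delivered_order(form_a, session_seed=2))
```

Both candidates are tested on exactly the same item set; only the presentation order differs, which is the point of the randomization.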
Once the LPI psychometric staff has determined the composition of the forms, the exam must be converted from text-based items into the actual exam file format to be deployed globally through LPI's network of testing centers.
The exam enters a period of initial testing to determine if the questions are in fact measuring skills and competencies. In IT certification, this period is known as the beta testing period.
During the beta period, candidates can register for tests and complete them at local events. They receive credit, but do not receive scores immediately after the exam. Beta exams often include extra questions in an extended-time format, as well as additional survey and demographic questions. Several simultaneous processes then determine the cut score so that the exams can be evaluated and scored.
Before the passing score can be set, LPI must accumulate an adequate number of exams taken by people who match the target job description. As our support has grown, our target data numbers have continued to grow, helping to generate more accurate results. As part of the beta exam process, psychometric staff take demographic data into account when reviewing the validity of questions.
As test results roll in, psychometric staff begin to examine the data, asking questions such as: Are there questions that everyone gets correct? Are there questions that everyone fails? Exam comments collected during the process are also reviewed, and questions and concerns are addressed.
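A minimal sketch of this kind of classical item analysis is shown below, assuming simple right/wrong response data. The item IDs, response data, and the 5%/95% flagging thresholds are invented for illustration, not LPI's actual criteria.

```python
# Sketch of classical item analysis on 0/1 beta responses.
# Thresholds and data are invented examples.
def item_difficulty(responses):
    """Proportion of candidates answering the item correctly
    (the classical p-value)."""
    return sum(responses) / len(responses)

def flag_items(results, low=0.05, high=0.95):
    """Flag items that nearly everyone passes or nearly everyone
    fails -- neither kind discriminates between candidates."""
    flags = {}
    for item_id, responses in results.items():
        p = item_difficulty(responses)
        if p >= high:
            flags[item_id] = "too easy"
        elif p <= low:
            flags[item_id] = "too hard"
    return flags

beta_results = {
    "item-101": [1, 1, 1, 1, 1, 1, 1, 1, 1, 1],  # everyone correct
    "item-102": [1, 0, 1, 1, 0, 1, 0, 1, 1, 0],  # mixed performance
    "item-103": [0, 0, 0, 0, 0, 0, 0, 0, 0, 0],  # everyone wrong
}
print(flag_items(beta_results))
```

Items flagged this way are the ones reviewed, repaired, or removed before the exam is finalized; the mixed-performance item passes through unflagged.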
While psychometric staff review the incoming data, a separate pool of subject-matter experts simultaneously participates in a modified Angoff study. Their goal is to provide the psychometric staff with additional data to validate questions and assist in setting the passing score.
During this process, the experts:
Ideally, the results from the Angoff study should parallel the actual results from the exams in the beta period. Beyond validating item performance, the results of the Angoff study are also used in helping to establish the passing score for exams.
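The core Angoff computation can be sketched briefly: each expert estimates, per item, the probability that a minimally qualified candidate would answer correctly, and the recommended raw cut score is the sum of the per-item averages. The expert estimates below are made up for illustration, and real studies add refinements (discussion rounds, data feedback) not shown here.

```python
# Sketch of the basic Angoff cut-score computation.
# Expert probability estimates are invented examples.
def angoff_cut_score(estimates):
    """estimates: one list per expert, holding the probability that
    a minimally qualified candidate answers each item correctly.
    Returns the recommended raw passing score."""
    n_experts = len(estimates)
    n_items = len(estimates[0])
    per_item_mean = [
        sum(expert[i] for expert in estimates) / n_experts
        for i in range(n_items)
    ]
    return sum(per_item_mean)

expert_estimates = [
    [0.9, 0.6, 0.4, 0.8],  # expert A
    [0.8, 0.5, 0.5, 0.7],  # expert B
    [0.9, 0.7, 0.3, 0.9],  # expert C
]
# Raw cut score on a 4-item exam: about 2.67 items correct.
print(angoff_cut_score(expert_estimates))
```

Comparing these expert-derived per-item probabilities against the actual beta pass rates is what lets the psychometric staff check whether item performance matches expectations.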
After all of the data collection, the analysis and the Angoff study, the psychometric staff set a passing score, and distribute scores to exam takers who participated in the beta.
Once the beta has been completed, the passing score has been set, and any bad items have been removed or fixed, the exam is ready to be re-published. This work involves significant review and can take a month or more to complete. Once this final review process is complete, we coordinate with Pearson VUE and our partner network to publish the finalized exams for all test takers worldwide.
Multiple choice is a common standard for most certification and licensure exams. Whether you want to be a doctor, lawyer, or chartered accountant, most professions require that you pass a multiple-choice exam. The procedures for producing high-quality multiple choice exams are firmly established. No such standard exists for hands-on exams. Therefore they tend to be ad hoc and rarely include pilot testing, item analysis, formal standard setting, and equating.
Written job knowledge exams have approximately the same levels of predictive validity as job simulations (Roth et al., 2005).
Written exams with individual questions are more efficient than exams with more complex item types. For example, Jodoin (2003) examined innovative item types on an IT certification exam that required examinees to construct answers (e.g. draw a network diagram). He found that these constructed response items provided more information but also took additional time. As a result, he concluded that, “multiple-choice items provide more information per unit time.”
Certification exams (like LPI’s) cover a broad range of knowledge areas. Using individual items, written exams can easily ensure adequate coverage of all objectives. Because of practical constraints, hands-on testing must either sample narrowly from these objectives or cover a much smaller body of knowledge.
Hands-on testing is typically more expensive in all phases, including item development, pilot testing, administration, and scoring. If hands-on testing is more expensive but not more reliable or valid, then it offers less value.
Scoring practices for open-ended exams vary considerably, but the literature on scoring constructed responses suggests that subjective scoring is often less reliable than the scoring of traditional items. The literature on job performance suggests that objective measures of performance are also unreliable.