Our Certification Development Standards

Linux Professional Institute (LPI) is committed to providing the IT community with exams of the highest quality, relevance, and accuracy. This commitment requires that our exam development process be highly detailed, participatory, and consultative, and that it employ many of the proven techniques used by most other IT certification programs.

LPI prefers the written exam method; see the Why Written Exams section below to find out why.

Exam Development

LPI’s exam development process is detailed, thorough, participatory, and collaborative, and it employs many proven techniques used by the best IT certification programs.

Psychometrics

Psychometrics, the science of measuring knowledge, skills, and mental abilities through testing, is used throughout LPI certification development to ensure that our exams reflect the needs of the IT community and industry.

Development Structure

As part of our ongoing certification development process, we continually monitor the needs of the Linux and IT markets to ensure that our exams effectively evaluate candidates on the most relevant skills.

When we began this process in the late 1990s, we launched a two-tiered certification track that became what is known today as LPIC-1 and LPIC-2. Over the years we have expanded our offerings to include a third tier in the LPIC professional track, the three enterprise specialties of LPIC-3. We have also introduced an entry-level certificate program, Linux Essentials, for those seeking to add some Linux to their credentials.

Job Task Analysis

After development of a program structure and a job description for an exam or series, the next stage is to determine scientifically the skills, tasks, and areas of knowledge needed for the job. The challenge: anyone could come up with a list of tasks they think a Linux professional should be able to do. If you ask 10 Linux professionals what a “junior-level” professional should do, you might get 10 different lists.

Which list is correct?

Our solution: We ask a large number of Linux professionals for their lists of necessary job duties, and then compile the responses to find the common and most important tasks. The most important tasks show up on all lists.

This process is called a job analysis study, or job task analysis. LPI has completed extensive job analysis surveys of Linux professionals to help ensure that exams are unbiased and constructed fairly.

How we do it:

Pre-survey

First, we work with a large pool of subject-matter experts to compile an exhaustive list of all the tasks they think might be performed by the target audience of the certification.

Job Analysis Survey

Next, the tasks collected during the pre-survey go into a job analysis survey. This survey asks practicing Linux professionals to rate each task in several ways:

  • Frequency: How often they perform the task.
  • Importance: How important it is for an administrator to be able to perform the task.

Data Analysis

Finally, we conduct statistical analysis of the survey responses. We compute statistics indicating, on average, how critical respondents rated each task. This analysis guides the determination of the final job task list.
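
To make the idea concrete, the sketch below (in Python) shows one common way such survey data can be summarized: a per-task criticality index that combines mean importance and mean frequency ratings. The ratings, scale, and weighting are hypothetical; this illustrates the general technique, not LPI's actual formula.

    # Illustrative sketch only: summarize job-analysis survey responses with a
    # per-task "criticality" index. Ratings and weighting are hypothetical.
    from statistics import mean

    # Each task maps to lists of respondent ratings (1-5 Likert scales).
    responses = {
        "manage user accounts": {"frequency": [5, 4, 5, 4], "importance": [5, 5, 4, 5]},
        "configure a kernel":   {"frequency": [2, 1, 2, 1], "importance": [3, 2, 3, 2]},
        "edit files with vi":   {"frequency": [5, 5, 4, 5], "importance": [4, 4, 5, 4]},
    }

    def criticality(freq_ratings, imp_ratings, w_importance=0.6):
        """Weighted average of mean importance and mean frequency."""
        return w_importance * mean(imp_ratings) + (1 - w_importance) * mean(freq_ratings)

    # Rank tasks from most to least critical; tasks below some cutoff would be
    # dropped from the final job task list.
    ranked = sorted(
        ((task, criticality(r["frequency"], r["importance"])) for task, r in responses.items()),
        key=lambda pair: pair[1],
        reverse=True,
    )
    for task, score in ranked:
        print(f"{score:.2f}  {task}")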

Objective Development

The third major stage of development is converting the results of the job analysis study into the actual objectives for the exam(s). Objectives express specific things that Linux professionals must be able to do. Each objective is assigned a weighting value indicating its importance relative to other objectives.
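
As a hypothetical illustration of how such weights can be used, the sketch below allocates questions on an exam form roughly in proportion to each objective's weight. The objectives, weights, exam length, and rounding rule are examples only, not LPI's actual blueprint.

    # Hedged illustration: turn objective weights into a rough exam blueprint by
    # allocating questions in proportion to weight. All values are hypothetical.
    objective_weights = {
        "Work on the command line": 4,
        "Process text streams using filters": 2,
        "Perform basic file management": 4,
        "Use streams, pipes and redirects": 3,
    }

    exam_length = 60  # total number of items on the form (hypothetical)
    total_weight = sum(objective_weights.values())

    allocation = {
        objective: round(exam_length * weight / total_weight)
        for objective, weight in objective_weights.items()
    }

    # Rounding may leave the total slightly off the target; in practice a
    # blueprint like this would be adjusted by hand.
    for objective, n_items in allocation.items():
        print(f"{n_items:2d} items  {objective}")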

Initial Objective Drafting

First, a small group of people with knowledge of both Linux technical issues and psychometric principles drafts an initial set of test objectives, basing them on the results of the job analysis study.

Objective Review and Revision

After the draft objectives are created, they are placed online in a web-based system for public review and comment. This system organizes objectives by exam and content topic, displaying the objectives themselves along with links to additional documentation about them. Public comments are collected, and supervisors then review the comments and revise the objectives as necessary. The most recent review and revision of the objectives is posted publicly on our LPI Exam Development wiki, and we send it to our community and the ExamDev mailing list for comments and input.

When the objectives are finalized, we post them to LPI.org and notify our community as well as courseware and training providers, so that training materials can be updated to reflect the new exam material.

Item Development

Once the objectives are finalized, we begin the process of writing questions, called items, for the exams. Security is a major concern in item development. All items are kept as confidential as possible: everyone involved in the process signs a non-disclosure agreement, committing not to disclose item content to anyone. LPI also takes other undisclosed security precautions.

Item Writing

Historically, the process used to develop the items for most other IT certification exams was to fly a group of subject-matter experts into a location for a week or more, give them training in how to write items, and then have them work intensely to create the questions.

But this technique is expensive and exclusive. At LPI, during our initial exam development phase we leveraged the power of the community through the internet, encouraging everyone who was interested and knowledgeable to help with item writing.

Since then, LPI has developed new items for exam rotation in-house by tapping the knowledge of subject-matter experts, online volunteers, and participants in item-writing workshops.

Item Screening

Supervisors screen all submitted exam items and accept, reject, or reword them. They focus on three criteria:

  • Redundancy: Items that are substantially identical to previously submitted items are rejected.
  • Phrasing and Clarity: Items phrased in confusing or otherwise inappropriate ways are rejected or reworded. Supervisors pay attention to ensure that questions can be understood by non-native English speakers.
  • Accuracy: Supervisors reject or reword items that are not technically accurate.

Item Technical Review

Next, LPI uses a group of Linux experts to put items through a technical review. Each item is reviewed by multiple experts. Each expert classifies items as approved, rejected or “other” for rewording or review by others.

The primary technical criteria:

  • Correctness
  • Appropriateness of distractors (for multiple-choice items): Reviewers ensure that the distractor answer choices are incorrect but reasonably plausible.
  • Phrasing and clarity: Reviewers ensure items are worded in appropriate language.
  • Relevance
  • Expected difficulty

Supervisors then collect the reviews to determine whether each item is:

  • Accepted based on consensus
  • Rejected based on consensus
  • Accepted after further review: If reviewers did not agree, the supervisor might accept it, perhaps based on the opinion of another reviewer.
  • Rejected after further review: If reviewers did not agree, the supervisor might reject it, perhaps based on the opinion of another reviewer.
  • Accepted after revision: In some cases, reviewers might suggest rewording the item and the supervisor might accept the item after rewording it.

Exam Creation

Live Form Creation

The next stage of development involves assembling items into exams for global deployment. Each test has multiple forms. If a candidate fails one form and retakes the exam, they receive a different form on the retake attempt.

The Pearson VUE test engine randomly orders the questions of each form when someone takes the exam, ensuring that two candidates taking the same exam are not presented with the same questions in the same order.

Initial Exam Publishing

Once the LPI psychometric staff has determined the composition of the forms, the exam must be converted from text-based items into the actual exam file format to be deployed globally through LPI’s network of testing centers.

The exam enters a period of initial testing to determine if the questions are in fact measuring skills and competencies. In IT certification, this period is known as the beta testing period.

During the beta period, candidates can register for tests and complete them at local events. They receive credit for the exam, but they do not receive scores immediately afterward. Beta exams often include extra questions in an extended-time format, as well as additional survey and demographic questions. Several processes then run in parallel to determine the cut score, so that the exams can be evaluated and scored.

Obtaining Enough Exams

Before the passing score can be set, LPI must accumulate an adequate number of exams taken by people who match the target job description. As our support has grown, our target sample sizes have continued to grow, helping to generate the most accurate results. As part of the beta exam process, demographic data is taken into account by psychometric staff when reviewing the validity of questions.

Reviewing the Questions

As test results roll in, psychometric staff begin to examine the data, asking questions such as: Are there questions that everyone gets correct? Are there questions that everyone fails? Exam comments collected during the process are reviewed, and questions and concerns are addressed.
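
The sketch below illustrates the kind of item statistics this review typically involves: a per-question difficulty (the proportion of candidates answering correctly) and a simple discrimination index (the point-biserial correlation between the item and the rest of the test). The data and flagging thresholds are invented for the example and do not reflect LPI's internal criteria.

    # Illustrative item-analysis sketch; the response data and thresholds are
    # made up and this is not LPI's code.
    from statistics import mean, pstdev

    # responses[candidate][item] = 1 if answered correctly, 0 otherwise
    responses = [
        {"q1": 1, "q2": 1, "q3": 0},
        {"q1": 1, "q2": 0, "q3": 0},
        {"q1": 1, "q2": 1, "q3": 1},
        {"q1": 1, "q2": 0, "q3": 0},
    ]

    items = sorted(responses[0])
    totals = [sum(r.values()) for r in responses]

    for item in items:
        scores = [r[item] for r in responses]
        p = mean(scores)  # difficulty: proportion of candidates answering correctly

        # Point-biserial correlation between the item score and the rest of the
        # test; low or negative values flag items that discriminate poorly.
        rest = [t - s for t, s in zip(totals, scores)]
        if pstdev(scores) == 0 or pstdev(rest) == 0:
            r_pb = float("nan")  # everyone answered this item the same way
        else:
            cov = mean(s * r for s, r in zip(scores, rest)) - mean(scores) * mean(rest)
            r_pb = cov / (pstdev(scores) * pstdev(rest))

        flag = " <- review" if p in (0.0, 1.0) or r_pb < 0.2 else ""
        print(f"{item}: difficulty={p:.2f} discrimination={r_pb:.2f}{flag}")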

Modified Angoff Study

While psychometric staff review the incoming data, a separate pool of subject-matter experts simultaneously participates in a modified Angoff study. Their goal: to provide the psychometric staff with additional data to validate questions and assist in setting the passing score.

During this process, the experts:

  • Receive copies of the exam questions on each form.
  • Look at each question, independently and in consultation with each other, and make judgments about how likely it is that a minimally qualified person, meeting the job requirements described in a specification sheet, would answer the question correctly. In other words, the experts consider the question from the perspective of someone who only just meets the competence requirements for the job.
  • Rate each question with their estimate of the percentage of such minimally qualified candidates who will answer it correctly, keeping in mind that on multiple-choice questions some people will get the right answer by guessing.

Ideally, the results of the Angoff study should parallel the actual results from the exams in the beta period. Beyond validating item performance, the Angoff results also help establish the passing score for the exams.
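
As a simplified, hypothetical illustration of how Angoff ratings can feed into a cut score: each judge's per-item probability estimates are summed to give that judge's expected score for a minimally qualified candidate, and the judges' values are averaged to produce a raw cut score. The figures below are invented, and the real standard-setting process involves additional review and adjustment.

    # Hedged sketch of turning modified-Angoff ratings into a raw cut score.
    # All ratings are hypothetical; this simplifies the actual process.
    from statistics import mean

    # ratings[judge][item_index] = estimated proportion of minimally qualified
    # candidates expected to answer the item correctly (0.0 - 1.0).
    ratings = {
        "judge_a": [0.90, 0.60, 0.45, 0.75, 0.30],
        "judge_b": [0.85, 0.55, 0.50, 0.70, 0.35],
        "judge_c": [0.95, 0.65, 0.40, 0.80, 0.25],
    }

    # Expected score of a minimally qualified candidate, per judge.
    per_judge_cut = {judge: sum(item_ratings) for judge, item_ratings in ratings.items()}

    # Averaging across judges gives a raw cut score in "number of items correct";
    # dividing by the item count expresses it as a percentage.
    raw_cut = mean(per_judge_cut.values())
    n_items = len(next(iter(ratings.values())))
    print(f"raw cut score: {raw_cut:.2f} of {n_items} items ({raw_cut / n_items:.0%})")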

Distributing Score Results

After all of the data collection, the analysis and the Angoff study, the psychometric staff set a passing score, and distribute scores to exam takers who participated in the beta.

Exam Republishing

Once the beta has been completed, the passing score has been set, and any bad items have been removed or fixed, the exam is ready to be re-published. This work involves significant review and can take a month or more to complete. Once this final review process is complete, we coordinate with Pearson VUE and our partner network to publish the finalized exams for all test takers worldwide.

Why Written Exams

Written exams are a global standard.

Multiple choice is a common standard for most certification and licensure exams. Whether you want to be a doctor, lawyer, or chartered accountant, most professions require that you pass a multiple-choice exam. The procedures for producing high-quality multiple-choice exams are firmly established. No such standard exists for hands-on exams; therefore, they tend to be ad hoc and rarely include pilot testing, item analysis, formal standard setting, and equating.

Written exams are valid.

Written job knowledge exams have approximately the same levels of predictive validity as job simulations (Roth et al., 2005).

Written exams are more efficient.

Written exams with individual questions are more efficient than exams with more complex item types. For example, Jodoin (2003) examined innovative item types on an IT certification exam that required examinees to construct answers (e.g., draw a network diagram). He found that these constructed-response items provided more information but also took additional time. As a result, he concluded that “multiple-choice items provide more information per unit time.”

Written exams cover all objectives.

Certification exams (like LPI’s) cover a broad range of knowledge areas. Using individual items, written exams can easily ensure adequate coverage of all objectives. Because of practical constraints, hands-on testing must either sample narrowly from these objectives or cover a much smaller body of knowledge.

Written exams are more valuable.

Hands-on testing is typically more expensive in all phases, including item development, pilot testing, administration, and scoring. If hands-on testing is more expensive but not more reliable or valid, then it offers less value.

Written exams are more reliable and objective.

Scoring practices for open-ended exams vary considerably, but the literature on scoring constructed responses suggests that subjective scoring is often less reliable than the scoring of traditional items. The literature on job performance suggests that objective measures of performance are also unreliable.