What are the indicators of test reliability?
Four indicators are most commonly used to determine the reliability of a clinical laboratory test. Two of these, accuracy and precision, reflect how well the test method performs day to day in a laboratory. The other two, sensitivity and specificity, deal with how well the test is able to distinguish disease from absence of disease.
The accuracy and precision of each test method are established and frequently monitored by professional laboratory personnel. Sensitivity and specificity data are determined by research studies and are generally found in the medical literature. Although each test has its own performance measures and appropriate uses, laboratory tests are designed to be as precise, accurate, specific, and sensitive as possible. These basic concepts are the cornerstones of the reliability of your test results and underpin the confidence your health care provider has in using the clinical laboratory.
Accuracy and Precision
Statistical measurements of accuracy and precision reveal a lab test's basic reliability. These terms, which describe different sources of variability, are not interchangeable. A test method can be precise (reliably reproducible) without being accurate (measuring close to the true value of what it is supposed to measure), or vice versa.
A test method is said to be precise when repeated analyses on the same sample give similar results. When a test method is precise, the amount of random variation is small. The test method can be trusted because results are reliably reproduced time after time.
A test method is said to be accurate when the test value approaches the absolute “true” value of the substance (analyte) being measured. Results from every test performed are compared with known "control specimens" that have undergone multiple evaluations and have been measured against the "gold standard" for that assay, so results are held to the best testing standards available.
Although a test that is 100% accurate and 100% precise is ideal, in practice, test methodology, instrumentation, and laboratory operations all contribute to small but measurable variations in results. The small amount of variability that typically occurs does not usually detract from the test’s value and is statistically insignificant. The level of precision and accuracy that can be obtained is specific to each test method but is constantly monitored for reliability through comprehensive quality control and quality assurance procedures. Therefore, when your blood is tested more than once by the same laboratory, your test results should not change much unless your condition has changed. There may be some differences in precision and accuracy between laboratories due to different analytical instrumentation or methodologies; however, test results are reported with standardized reference intervals specific to that laboratory. This helps your health care provider interpret the results correctly relative to that reference interval.
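The two ideas above can be expressed numerically. A minimal sketch of how precision (scatter of repeated results) and accuracy (closeness of the average to a known value) might be quantified, using made-up repeat measurements and a hypothetical control value:

```python
# Illustrative only: the repeat results and the control's "true" value are invented.
from statistics import mean, stdev

true_value = 100.0                                  # assigned value of a control specimen (hypothetical)
repeat_results = [99.2, 100.5, 99.8, 100.1, 99.6]   # repeated analyses of the same sample

# Precision: how tightly repeated results cluster,
# expressed as the coefficient of variation (CV, %)
cv_percent = stdev(repeat_results) / mean(repeat_results) * 100

# Accuracy: how close the average result is to the true value,
# expressed as bias (%)
bias_percent = (mean(repeat_results) - true_value) / true_value * 100

print(f"CV = {cv_percent:.2f}%, bias = {bias_percent:.2f}%")
```

A method could show a tiny CV (very precise) while still carrying a large bias (inaccurate), which is why laboratories track both through quality control.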
Sensitivity and Specificity
The tests that a provider chooses in order to diagnose or monitor a medical condition are based on their inherent ability to distinguish whether you have the condition or do not have the condition. Depending on the symptoms and medical history, a provider will order tests to rule out a condition (tests with high sensitivity) or tests to confirm the condition (tests with high specificity).
Sensitivity is the ability of a test to correctly identify individuals who have a given disease or condition. For example, a certain test may have proven to be 90% sensitive. If 100 people are known to have a certain disease, the test that identifies that disease will correctly do so for 90 of those 100 cases (90%). The other 10 people (10%) tested will not show the expected result for this test. For that 10%, the finding of a "normal" result is misleading and is termed a false-negative.
A test's sensitivity becomes particularly important when you are seeking to exclude a dangerous disease, such as when testing for the presence of the HIV antibody. Screening for HIV antibody often uses an ELISA test method, which has a sensitivity of over 99%. However, a person may get a false-negative if tested too soon after the initial infection (less than 6 weeks). A false-negative result thus gives a person the sense of being disease-free when in fact they are not. The more sensitive a test, the fewer false-negative results it will produce.
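The 90%-sensitivity example above amounts to a simple ratio. A sketch of that arithmetic, using the hypothetical counts from the example:

```python
# Hypothetical counts from the worked example: 100 people known to have the disease.
diseased = 100
true_positives = 90                          # correctly detected by the test
false_negatives = diseased - true_positives  # missed cases ("normal" but actually ill)

# Sensitivity = true positives / (true positives + false negatives)
sensitivity = true_positives / (true_positives + false_negatives)
print(f"sensitivity = {sensitivity:.0%}")    # prints "sensitivity = 90%"
```

As the false-negative count shrinks toward zero, sensitivity climbs toward 100%, which is what makes highly sensitive tests useful for ruling out disease.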
Specificity is the ability of a test to correctly exclude individuals who do not have a given disease or condition. For example, a certain test may have proven to be 90% specific. If 100 healthy individuals are tested with that method, only 90 of those 100 healthy people (90%) will be found "normal" (disease-free) by the test. The other 10 people (who do not have the disease) will appear to be positive for that test. For that 10%, their "abnormal" findings are a misleading false-positive result. When it is necessary to confirm a diagnosis that requires dangerous therapy, a test's specificity is one of the crucial indicators. A patient who has been told that he is positive for a specific test yet truly does not have that disease may be subjected to potentially painful or dangerous treatment, additional expense, and unwarranted anxiety. The more specific a test, the fewer false-positive results it produces.
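Specificity follows the mirror-image calculation, this time over people who do not have the disease. A sketch using the hypothetical counts from the 90%-specificity example:

```python
# Hypothetical counts from the worked example: 100 people known to be healthy.
healthy = 100
true_negatives = 90                          # correctly found "normal" by the test
false_positives = healthy - true_negatives   # healthy people flagged as "abnormal"

# Specificity = true negatives / (true negatives + false positives)
specificity = true_negatives / (true_negatives + false_positives)
print(f"specificity = {specificity:.0%}")    # prints "specificity = 90%"
```

Note that sensitivity is computed only over people with the disease and specificity only over people without it; a single test is characterized by both numbers together.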
The FDA requires that developers and manufacturers of a new test provide target values for test results and provide evidence for the expected ranges, as well as information on test limitations and other factors that could generate false results. Thus, it is critical for the health care provider to correlate laboratory results with an individual's clinical condition to determine whether repeat testing is needed.