Metrology is defined by the International Bureau of Weights and Measures (BIPM) as "the science of measurement, embracing
both experimental and theoretical determinations at any level of uncertainty in any field of science and technology."
Metrology is a very broad field and may be divided into three subfields:
* Scientific or fundamental metrology concerns the establishment of measurement units, unit systems, the development
of new measurement methods, realisation of measurement standards and the transfer of traceability from these standards to
users in society.
* Applied or industrial metrology concerns the application of measurement science to manufacturing and other processes
and their use in society, ensuring the suitability of measurement instruments, their calibration and quality control of measurements.
* Legal metrology concerns regulatory requirements for measurements and measuring instruments, protecting health, public
safety, the environment and consumers, and enabling taxation and fair trade.
A core concept in metrology is (metrological) traceability, defined as "the property of the result of a measurement or
the value of a standard whereby it can be related to stated references, usually national or international standards, through
an unbroken chain of comparisons, all having stated uncertainties." The level of traceability establishes the level of
comparability of the measurement: whether a result can be compared to the previous one, to a result obtained a year ago,
or to a result obtained anywhere else in the world.
Traceability is most often obtained by calibration, establishing the relation between the indication of a measuring instrument
and the value of a measurement standard. These standards are usually coordinated by national metrology institutes, such as
the National Institute of Standards and Technology (NIST) in the USA and the National Physical Laboratory (NPL) in the UK.
An integral part of establishing traceability is evaluation of measurement uncertainty.
Calibration Fundamentals and Best Practices
Calibration, in its most basic form, is the measurement of an instrument against a standard. As instruments become more complex,
successfully identifying and applying best practices can reduce business expenses and improve organizational capabilities.
What is Calibration?
Calibration is the comparison of a measurement device (an unknown) against an equal or better standard. The standard in a
measurement is the reference: the one in the comparison taken to be the more correct of the two. Calibration determines
how far the unknown deviates from the standard.
A "typical" commercial calibration uses the manufacturer’s calibration procedure and is performed with a reference
standard at least four times more accurate than the instrument under test.
Why Calibrate?
Calibration can be an insurance policy because out-of-tolerance (OOT) instruments may give false information leading to
unreliable products, customer dissatisfaction and increased warranty costs. In addition, OOT conditions may cause good products
to fail tests, which ultimately results in unnecessary rework costs and production delays.
Calibration Terms
As found data—The reading of the instrument before
it is adjusted.
As left data—The reading of the instrument after
adjustment or "same as found," if no adjustment was made.
Optimization—Adjusting a measuring instrument to
make it more accurate is NOT part of a typical calibration and is frequently referred to as "optimizing" or "nominalizing"
an instrument.
Out-of-tolerance (OOT) condition—When an instrument’s
performance is outside its specifications, it is considered an out-of-tolerance (OOT) condition, resulting in the need to
adjust the instrument back into specification.
Limited calibration—When certain functions of an instrument are not utilized by the user, it may be more cost
effective to have a limited calibration covering only the functions that are used.
Test uncertainty ratio (TUR)—This is the ratio of
the accuracy of the instrument under test to the uncertainty of the measurement ensemble used to calibrate it.
Without data—Most calibration labs charge more to
provide the certificate with data and will offer a "no-data" option.
Calibration Quality Management Systems
Calibration is the key to quality control. In order to meet calibration standards, a good quality system needs to be in
place. Here are some of the requirements:
ISO 9001:2008 Calibration (International Organization for Standardization) - This type of calibration is crucial for many
industries and has the following requirements (in alphabetical order):
Accredited calibration lab—The calibration laboratory
must be certified to ISO 9001:2008 or be the original equipment manufacturer.
Comprehensive equipment list—To pass the ISO audit,
the company must demonstrate that it has a comprehensive equipment list with controls in place for additions, subtractions
and custodianship of equipment.
Calibrated and "no calibration required" items properly
identified—The equipment list must identify any units that do not require calibration, and controls must be in place
to ensure that these units are not used in an application that requires calibration.
Documented calibration procedures—The valid calibration
procedure is based on the manufacturer’s recommendations and covers all aspects of the instrument under test.
Equipment custodianship—There is an assignment of
responsibility for ensuring equipment is returned to the calibration lab.
An OOT investigation log—For any instrument found
OOT, an investigation must be performed and recorded.
Proper documentation—All critical aspects of the
calibration must be properly documented for the certificate to be recognized by an ISO auditor.
Proper recall system—A procedure should be established
that includes timeframes for recall notification, an escalation procedure and provisions for due-date extension.
Traceable
assets—The calibration provider must be able to demonstrate an unbroken chain of traceability back to the National
Institute of Standards and Technology (NIST).
Trained
technicians—The proper training of each technician must be documented for each discipline involved in performing the
calibration.
Calibration Program Best Practices
Any successful calibration program must begin with an accurate recall list of test, measurement and diagnostic equipment.
The recall list should contain:
• a unique identifier that can be used to track the instrument, the location and the instrument’s
custodian.
• modules, plug-ins and small handheld tools, along with any "home-made" measuring devices
(e.g., test fixtures).
Identify all of the instruments on the recall list that
may not require calibration.
After creating an accurate recall list:
• procedures must be established for adding new instruments, removing old or disposed instruments,
or making changes in instrument custodianship.
• recall reports should be run with sufficient time for both the end user and the service
provider to have the unit calibrated with minimal impact on production.
A late report identifying any units about to expire or
already expired helps ensure 100 percent conformity; a full-service calibration laboratory can supply such reports along with
special escalation reporting.
For efficiency, companies should consider a web-based
equipment management system for recall and late reports and electronic versions of calibration certificates.
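As a sketch of the recall and late reporting described above, assuming a simple in-memory equipment list with due dates (the record fields and identifiers are hypothetical), a recall report lists units coming due within a lead time, and a late report lists units already past due:

```python
from datetime import date, timedelta

# Hypothetical equipment records; a real system would pull these from a
# database or a web-based equipment management system.
equipment = [
    {"id": "DMM-001", "custodian": "Lab A", "cal_due": date(2024, 6, 1)},
    {"id": "SCOPE-17", "custodian": "Lab B", "cal_due": date(2024, 7, 15)},
    {"id": "PG-042", "custodian": "Lab A", "cal_due": date(2024, 5, 1)},
]

def recall_report(items, today, lead_days=30):
    """Units coming due within the lead time (not yet overdue)."""
    horizon = today + timedelta(days=lead_days)
    return [e["id"] for e in items if today <= e["cal_due"] <= horizon]

def late_report(items, today):
    """Units whose calibration due date has already passed."""
    return [e["id"] for e in items if e["cal_due"] < today]

today = date(2024, 5, 20)
print("Recall:", recall_report(equipment, today))  # DMM-001 is due within 30 days
print("Late:  ", late_report(equipment, today))    # PG-042 has already expired
```

Running the recall report with enough lead time gives both the end user and the service provider room to schedule the calibration with minimal impact on production.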
Determining Calibration Intervals
Calibration intervals are determined by the instrument "owner" based on the manufacturer’s recommendations. The OEM’s
(original equipment manufacturer) intervals are typically based on parameters like mean drift rates for the various components
within the instrument. However, when determining calibration intervals as an instrument "owner," several other factors should
be taken into consideration such as:
• the required accuracy vs. the instrument's accuracy,
• the impact an OOT condition will have on processes, and
• the performance history of the particular instrument in its application.