FMEA Detection Ratings: Evaluating Current Controls and Assigning Scores Consistently
Detection is the most misunderstood rating in FMEA. Teams routinely confuse it with occurrence, rate it backwards (assigning low scores when detection is poor), or default to middle-of-the-road values because they cannot decide how effective their current controls actually are. The result: detection ratings that look reasonable on paper but do not reflect reality on the shop floor.
This guide provides a structured approach to evaluating current controls and assigning FMEA detection ratings consistently. Instead of debating abstract definitions, use the decision path below to match your control type to a defensible rating.
What the FMEA Detection Rating Measures
Detection rates the ability of current controls to detect the failure mode or its cause before the product reaches the customer. It answers: if this failure occurs, how likely are we to catch it?
The detection scale is inverted compared to severity and occurrence:
- 1 = almost certain detection (best). An automatic error-proofing device physically prevents the defect from passing.
- 10 = no detection method (worst). No current control exists to catch this failure mode.
This inverse scale trips up practitioners who are used to “higher is worse” for severity and occurrence but forget that for detection, lower is better.
FMEA Detection Rating Scale: 1–10 With Control Types
| Rating | Detection Likelihood | Control Type | Manufacturing Example |
|---|---|---|---|
| 1 | Almost certain | Proven automatic error-proofing (poka-yoke) integrated into the process | Fixture prevents wrong-orientation loading; part physically cannot proceed if incorrect |
| 2 | Very high | Automatic detection with automatic stop/reject | In-line vision system rejects non-conforming parts before next operation; sensor stops machine on out-of-spec condition |
| 3 | High | Automatic detection with operator alert and manual intervention | CMM or automated gauge checks 100% of parts; alarm notifies operator who must disposition |
| 4 | Moderately high | Automatic measurement and SPC monitoring with control limits | SPC chart on critical dimension with automated data collection; out-of-control triggers investigation |
| 5 | Moderate | Manual inspection with go/no-go gauging (100% or sample-based) | Operator uses calibrated gauge at defined frequency; documented inspection instruction |
| 6 | Low-moderate | Manual visual inspection per documented criteria | Operator performs visual check against limit samples or boundary samples; relies on human judgment |
| 7 | Low | Double-check or audit inspection (not primary control) | Quality auditor samples finished goods at packaging; not every unit inspected |
| 8 | Very low | Indirect detection or inconsistent method | Defect detectable only during customer assembly; no in-process inspection for this characteristic |
| 9 | Remote | Control exists but is unreliable or unproven | Occasional spot check with no defined frequency or criteria; relies on operator experience |
| 10 | Virtually impossible | No current detection control | No inspection, test, or monitoring exists for this failure mode; defect passes to customer undetected |
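For teams that keep their FMEA in software, the scale is easy to carry as data. Below is a minimal Python sketch; the names `DETECTION_SCALE` and `better_detection` are illustrative choices, not from any standard:

```python
# Detection scale condensed from the table above (rating -> short label).
# Names and structure are illustrative, not from the AIAG-VDA handbook.
DETECTION_SCALE = {
    1: "almost certain (proven poka-yoke)",
    2: "very high (automatic detection, auto stop/reject)",
    3: "high (automatic detection, operator alert)",
    4: "moderately high (automatic measurement + SPC)",
    5: "moderate (manual go/no-go gauging)",
    6: "low-moderate (manual visual inspection)",
    7: "low (audit/sample inspection)",
    8: "very low (indirect or inconsistent method)",
    9: "remote (control unreliable or unproven)",
    10: "virtually impossible (no detection control)",
}

def better_detection(d1: int, d2: int) -> int:
    """Return the stronger of two detection ratings.

    Exists only to make the inverted scale explicit: lower is better,
    so min() picks the stronger control.
    """
    return min(d1, d2)
```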
Decision Path: Matching Your Controls to a Detection Rating
Instead of debating numbers directly, walk through this decision path for each failure mode. It evaluates three factors the AIAG-VDA handbook identifies as critical: the type of control, the timing in the process, and the reliability of the method. (A code sketch of the full path follows Question 4.)
Question 1: Is the control automatic or manual?
- Automatic (sensor, vision system, in-line gauge) → Start at ratings 1–4. Proceed to Question 2.
- Manual (operator inspection, visual check, audit) → Start at ratings 5–9. Proceed to Question 3.
- No control exists → Rating = 10. Stop here and flag this as an action item.
Question 2 (Automatic controls): Does the system prevent or only detect?
- Prevents the failure from proceeding (physical poka-yoke, automatic reject) → Rating = 1–2.
- Detects and alerts (alarm, SPC violation, vision system flag) → Rating = 3–4. Use 3 if 100% of parts are checked; use 4 if sample-based or SPC-monitored.
Question 3 (Manual controls): How structured is the inspection?
- Go/no-go gauging with defined criteria and frequency → Rating = 5.
- Visual inspection with limit/boundary samples → Rating = 6.
- Audit or sampling inspection (not every unit) → Rating = 7.
- No defined method; relies on operator discretion → Rating = 8–9.
Question 4 (Timing adjustment): When in the process does detection occur?
- At the station where the defect is created (in-process detection): No adjustment needed.
- At end-of-line or downstream operation: Consider adding +1 to the rating. Defects that travel through multiple operations before detection have more opportunity to cause secondary damage or be obscured.
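The decision path is mechanical enough to express in code. Below is a minimal Python sketch of Questions 1–4; the function name, parameter names, and the cap at 9 on the downstream adjustment are illustrative assumptions, not handbook requirements. The worked examples that follow walk the same logic by hand.

```python
def detection_rating(
    control_type,             # Q1: "automatic", "manual", or None if no control exists
    prevents=False,           # Q2: physically prevents or automatically rejects
    checks_every_part=False,  # Q2: 100% checked vs. sample-based / SPC-monitored
    manual_method=None,       # Q3: "go_no_go", "visual", "audit", or "undefined"
    in_process=True,          # Q4: detected at the station that creates the defect?
):
    """Walk the decision path (Q1-Q4) to a provisional detection rating."""
    # Q1: no control at all -> D = 10; flag as an action item
    if control_type is None:
        return 10

    if control_type == "automatic":
        # Q2: reserve 1 for proven physical poka-yoke; this sketch assigns 2
        # to prevent/auto-reject, 3 to 100% detect-and-alert, 4 to sampling/SPC
        rating = 2 if prevents else (3 if checks_every_part else 4)
    else:
        # Q3: structure of the manual inspection (the text allows 8-9 for
        # "undefined"; the worse value is used here)
        rating = {"go_no_go": 5, "visual": 6, "audit": 7, "undefined": 9}[manual_method]

    # Q4: downstream detection is one point worse, capped at 9 so that
    # 10 stays reserved for "no control exists"
    if not in_process:
        rating = min(rating + 1, 9)
    return rating
```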
Worked Example: Bracket Hole Diameter
A stamping operation produces a bracket with a critical hole diameter. The process has the following controls:
- In-line laser gauge checks every part (automatic, 100% inspection coverage)
- Out-of-spec parts trigger a machine stop and alarm
- Operator must clear the alarm and remove the rejected part before the machine restarts
Walking the decision path:
- Q1: Automatic control → start at ratings 1–4
- Q2: Detects and stops the machine (automatic reject equivalent) → Rating = 2
- Q4: Detection occurs at the station where the feature is created → no adjustment
Detection = 2. The in-line laser gauge with automatic stop provides very high detection reliability.
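Fed into the decision-path sketch above, this scenario reproduces the same result:

```python
# Q1 automatic; Q2 detects and stops the machine (auto-reject equivalent);
# Q4 at the creating station, so no adjustment
detection_rating("automatic", prevents=True, in_process=True)  # -> 2
```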
Now consider the same bracket, but with the hole diameter checked at end-of-line by an operator using a go/no-go plug gauge:
- Q1: Manual control → start at ratings 5–9
- Q3: Go/no-go gauging with defined criteria and frequency → Rating = 5
- Q4: End-of-line (not at the creating station) → adjust to Rating = 6
Detection = 6. Manual gauging at end-of-line is significantly less reliable than in-process automatic detection.
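The sketch reproduces this result as well:

```python
# Q1 manual; Q3 go/no-go gauging; Q4 end-of-line, so one point worse
detection_rating("manual", manual_method="go_no_go", in_process=False)  # -> 6
```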
How Detection Fits Into Risk Prioritization
Under the AIAG-VDA Action Priority (AP) system, detection is the third factor evaluated, after severity and occurrence. For safety-critical failure modes (severity 9–10), detection has essentially no influence: at anything above the lowest occurrence ratings, AP is High no matter what the detection score is. Detection matters most for moderate-severity failure modes, where the combination of occurrence and detection determines whether action is required.
In the older RPN calculation (S × O × D), detection has equal mathematical weight to severity and occurrence. This is one of RPN’s known flaws: a high detection rating can inflate the RPN of a low-severity failure above a high-severity failure with good detection, distorting priorities.
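A short worked example with hypothetical ratings makes the distortion concrete:

```python
# RPN = S x O x D, with all three factors weighted equally
rpn_cosmetic = 3 * 4 * 9   # low severity, frequent, poor detection -> 108
rpn_safety   = 9 * 3 * 2   # high severity, good detection          -> 54
# RPN ranks the cosmetic defect above the safety-related one:
# exactly the prioritization flaw described above.
```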
Use the RPN & Action Priority calculator to compare how detection changes affect risk ranking under both systems.
Improving Detection: The Control Upgrade Path
When a detection rating is too high (7–10), the team should recommend actions to improve detection. The upgrade path follows a predictable pattern, sketched in code after the table:
| Current State | Upgrade Action | Expected New Rating |
|---|---|---|
| No control (D=10) | Add manual inspection with criteria | D=5–7 |
| Visual inspection (D=6–7) | Add go/no-go gauging or boundary samples | D=5 |
| Manual gauging (D=5) | Implement SPC monitoring with automated data collection | D=4 |
| SPC monitoring (D=4) | Add automatic in-line detection with reject/stop | D=2–3 |
| Automatic detection (D=2–3) | Implement physical error-proofing (poka-yoke) | D=1 |
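The ladder condenses to a simple lookup. Below is a sketch based on the table above; the function name and the handling of D=8–9 (not listed as a table row) are assumptions:

```python
def next_upgrade(d: int) -> str:
    """Suggest the next detection-control upgrade for a current rating."""
    if d >= 8:   # D=8-9 folded into the table's first rung (assumption)
        return "add manual inspection with defined criteria (target D=5-7)"
    if d >= 6:
        return "add go/no-go gauging or boundary samples (target D=5)"
    if d == 5:
        return "implement SPC monitoring with automated data collection (target D=4)"
    if d == 4:
        return "add automatic in-line detection with reject/stop (target D=2-3)"
    if d >= 2:
        return "implement physical error-proofing, i.e. poka-yoke (target D=1)"
    return "already at best practical detection (D=1)"
```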
Common Pitfalls When Rating Detection
- Rating detection on the wrong failure mode: Ensure the detection control actually detects this specific failure mode. A CMM that measures hole position does not detect surface finish defects—those need a separate control and a separate detection rating.
- Giving credit for controls that are not validated: If a measurement system fails its GR&R (Gauge Repeatability and Reproducibility) study, the gauge cannot reliably detect the characteristic. Rate detection higher (worse) until the measurement system is qualified.
- Assuming customer detection: Do not rate detection based on the assumption that “the customer will catch it during their incoming inspection.” Detection rates your controls, not your customer’s.
- Ignoring the inverse scale: Always verify the team understands that D=1 is the best detection and D=10 is the worst. This confusion is especially common with team members who participate in FMEA infrequently.
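Some of these pitfalls can be guarded against mechanically during FMEA review. A sketch under the assumptions noted in the comments:

```python
def guarded_rating(proposed, detects_this_mode, grr_passed, relies_on_customer):
    """Apply the pitfall guards above to a proposed detection rating."""
    # No credit for controls that miss this failure mode or that lean on
    # the customer's incoming inspection: your controls provide no detection
    if not detects_this_mode or relies_on_customer:
        return 10
    # A measurement system that failed GR&R cannot be trusted; the floor
    # of 8 is an illustrative choice, not a handbook rule
    if not grr_passed:
        return max(proposed, 8)
    return proposed
```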
Key Takeaways
- Detection rates the ability of current controls to catch a failure before it reaches the customer, on an inverted 1–10 scale (lower is better).
- Use the decision path: automatic vs. manual → prevent vs. detect → structured vs. unstructured → timing adjustment.
- Rate actual control performance, not theoretical capability. A visual inspection that operators skip under time pressure is not a Detection = 6.
- Under AIAG-VDA Action Priority, detection matters most for moderate-severity failure modes. For severity 9–10, AP is High at all but the lowest occurrence ratings, regardless of detection.
- Improve detection by following the control upgrade path: no control → manual → gauging → SPC → automatic → error-proofing.