Divergence theory, applied as an innovation exercise, seeks to "open the aperture" and first expand the problem space, rather than immediately converging on a viable solution. This paper presents a use-case application of a simple but powerful divergence methodology as an analytical technique to retire a major program's critical risk, one believed by many to be technically impossible to address. The risk statement required system resilience against "critical omissions in the identification and classification of failure conditions, resultant hazards, and hazard severities." Many programs attack resilience by what-if-ing the problem until time and budget are exhausted and program leadership feels that due diligence has been served. But what else can be done to show resilience against truly unforeseen threats and critical omissions? Every time the what-if activity identifies a new threat, that threat moves out of the unforeseen bucket and into the known-threats bucket. How do you prove resilience for everything left in the unforeseen bucket, the things that nobody has ever thought of? This paper provides a method to do just that, one that was sufficient to retire the subject program's critical-omission risk.