Black swan risks are described as extreme outlier events that come as a surprise to the observer and that, in hindsight, the observer rationalizes as something they should have predicted. These risks have maximum impact but minimum likelihood, which makes them inherently hard to predict. How, then, can we draw the line between realistic risk scenarios and those that are perceived as unrealistic, yet remain possible and carry catastrophic outcomes?