Over the past decade, hundreds of nights on the world's largest telescopes have been spent searching for and directly detecting new exoplanets using high-contrast imaging (HCI). Two scientific goals are of central interest: first, to study the characteristics of the underlying planet population and to distinguish between different planet formation and evolution theories; second, to find and characterize planets in our immediate solar neighborhood. Both goals rely heavily on the metric used to quantify planet detections and nondetections. Current standards often rest on several explicit or implicit assumptions about the noise, for example, that the residual noise after data postprocessing is Gaussian. Although these assumptions are an integral part of the metric, they are rarely verified. This is problematic because any violation can introduce systematic biases, making it hard, if not impossible, to compare results across data sets or instruments with different noise characteristics. We revisit the fundamental question of how detection limits in HCI should be quantified, focusing our analysis on the error budget that results from violated assumptions. To this end, we propose a new metric based on bootstrapping that generalizes current standards to non-Gaussian noise. We apply our method to archival HCI data from the NACO instrument at the Very Large Telescope and derive detection limits for different types of noise. Our analysis shows that current standards tend to give detection limits that are about one magnitude too optimistic in the speckle-dominated regime; that is, HCI surveys may have excluded planets that could still exist.
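To illustrate the idea of a bootstrap-based detection metric, the following is a minimal sketch, not the paper's actual implementation. It assumes we have noise samples (e.g. aperture photometry values at a fixed separation from the star) and resamples them with replacement to estimate the distribution of a signal-to-noise-like test statistic under the true, possibly non-Gaussian noise. The detection threshold is then the quantile corresponding to a target false-positive fraction (FPF); the function name, signature, and the choice of test statistic are illustrative assumptions.

```python
import numpy as np

def bootstrap_detection_threshold(noise_samples, fpf=2.87e-7, n_boot=100_000, seed=0):
    """Estimate a detection threshold via bootstrapping (illustrative sketch).

    noise_samples : 1-D array of pure-noise measurements, e.g. aperture fluxes
                    at the separation of interest.
    fpf           : target false-positive fraction; the default 2.87e-7 is the
                    one-sided tail probability of a 5-sigma Gaussian threshold.
    n_boot        : number of bootstrap resamples.
    """
    rng = np.random.default_rng(seed)
    noise_samples = np.asarray(noise_samples, dtype=float)
    n = noise_samples.size
    # Draw n_boot resamples of the noise, each the same size as the original.
    resamples = rng.choice(noise_samples, size=(n_boot, n), replace=True)
    # Treat the first entry of each resample as the "planet" aperture and the
    # rest as reference apertures, mirroring the t-like statistic commonly
    # used under the Gaussian assumption (with the small-sample correction).
    signal = resamples[:, 0]
    noise = resamples[:, 1:]
    stats = (signal - noise.mean(axis=1)) / (
        noise.std(axis=1, ddof=1) * np.sqrt(1.0 + 1.0 / (n - 1))
    )
    # Threshold such that only a fraction `fpf` of pure-noise statistics exceed it.
    return np.quantile(stats, 1.0 - fpf)
```

Because the threshold is read directly off the resampled distribution of the test statistic, heavier-than-Gaussian tails (as in speckle-dominated noise) automatically push it upward, which is how the bootstrap generalizes Gaussian-based standards.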