The identification of new rare signals in data, the detection of sudden changes in a trend, and the selection among competing models are among the most challenging problems in statistical practice. In this talk I discuss how these challenges can be tackled via a hypothesis test in which a nuisance parameter is present only under the alternative, and how a computationally efficient solution can be obtained by Testing One Hypothesis Multiple times (TOHM). Specifically, a fine discretization of the space of the non-identifiable parameter is specified, and evidence in favor of the null or the alternative hypothesis is obtained by approximating the distribution of the supremum of the resulting stochastic process (or random field). The proposed methodology is highly generalizable and combines elements of extreme value theory, graph theory, and simulation methods to achieve ease of implementation and computational efficiency. Applications are discussed in the context of bump hunting, break-point regression, and non-nested model comparison.
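To make the scheme concrete, the sketch below illustrates the general idea in a toy bump-hunting setting: the bump location plays the role of the nuisance parameter identified only under the alternative, its range is finely discretized, a test statistic is evaluated at every grid point, and the null distribution of the supremum over the grid is approximated by Monte Carlo simulation. All specifics (uniform background on [0, 1], a fixed bump half-width, the standardized excess-count statistic, the grid and sample sizes) are illustrative assumptions, not the method presented in the talk.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (all choices hypothetical): n events on [0, 1], uniform
# background under H0; under H1 a localized excess at an unknown
# location mu -- the nuisance parameter present only under H1.
n = 500
grid = np.linspace(0.05, 0.95, 46)  # fine discretization of mu
width = 0.05                        # assumed, fixed bump half-width


def sup_statistic(data):
    """Supremum over the grid of a standardized excess count.

    At each candidate location mu, count events within +/- width and
    standardize against the expected uniform-background count; the
    sup over mu is the TOHM-style global test statistic.
    """
    expected = len(data) * 2 * width  # uniform background in each window
    stats = [
        (np.sum(np.abs(data - mu) < width) - expected) / np.sqrt(expected)
        for mu in grid
    ]
    return max(stats)


# "Observed" data: background only here, so no rejection is expected.
obs = rng.uniform(size=n)
t_obs = sup_statistic(obs)

# Monte Carlo approximation of the null distribution of the supremum;
# extreme value theory or graph-based corrections (as in the talk) can
# replace this brute-force step for efficiency.
null_sups = np.array([sup_statistic(rng.uniform(size=n)) for _ in range(500)])
p_value = np.mean(null_sups >= t_obs)
print(f"sup statistic: {t_obs:.2f}, global p-value: {p_value:.3f}")
```

Scanning the grid this way inflates the false-positive rate of any single pointwise test, which is exactly why inference must be based on the distribution of the supremum rather than on the best-fitting grid point alone.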