Threshold estimation: The state of the art

Neil A. Macmillan
Perception & Psychophysics, 2001
The estimation of thresholds is the oldest project in psychophysics, and the search for appropriate methodology has almost as long a history. There are two broad strategies for measuring thresholds. The first (and earlier) is the construction of complete psychometric functions, in which detectability or discriminability increases from zero or chance to perfect performance as a function of stimulus level or stimulus difference. In a detection experiment, a particular point on the psychometric function is taken as the absolute threshold, and in a discrimination experiment, the difference between two points is the difference threshold, or just-noticeable difference. In both cases, the slope of the function provides information about the reliability of the threshold estimate. The second, more modern strategy is to use an "adaptive" procedure, in which stimulus values are chosen on the basis of the observer's previous trial-by-trial performance.

Many, many thresholds have been measured since the time of Weber and Fechner, but the best way to find them quickly and accurately continues to be an active research question. How can this ancient set of problems still be unresolved? The answer lies in changes in psychophysical theory, statistical theory, computing power, and range of applications.

First, it has long been known that the use of different methods could lead to different psychometric functions and different thresholds. Relating such discrepant outcomes requires a psychophysical theory, and powerful theories of accuracy did not arise until Thurstone (1927). Many current approaches assume aspects of signal detection theory, initially presented in the 1950s.

Second, the statistical theory underlying psychometric functions and parameter estimation generally has advanced in recent decades with the development of bootstrap, jackknife, and related techniques. Other statistical approaches, such as maximum likelihood estimation, are older but have been applied to the problem relatively recently.

Third, the increasing power of computers enables a kind of stimulus control and data analysis (including the statistical methods just mentioned) that was not available to earlier investigators. The use of computers to simulate adaptive algorithms has proved critical in uncovering the (sometimes unexpected) implications of new testing procedures.
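As a concrete illustration of the two strategies (a sketch, not material from the paper), the following simulation pairs an assumed logistic psychometric function for a two-alternative forced-choice observer with a standard 2-down/1-up transformed staircase, one of the adaptive procedures the abstract alludes to. All parameter values here (slope, guess and lapse rates, step size) are hypothetical choices for the example.

```python
import math
import random

def psychometric(x, threshold, slope=1.0, guess=0.5, lapse=0.02):
    """Logistic psychometric function for a 2AFC task: probability correct
    rises from the guess rate (0.5) toward 1 - lapse as level x increases."""
    p = 1.0 / (1.0 + math.exp(-slope * (x - threshold)))
    return guess + (1.0 - guess - lapse) * p

def staircase(true_threshold, n_trials=200, start=5.0, step=0.5, seed=0):
    """2-down/1-up staircase: lower the stimulus level after two consecutive
    correct responses, raise it after each error. This rule converges near
    the 70.7%-correct point of the psychometric function."""
    rng = random.Random(seed)
    level, correct_run, last_dir = start, 0, 0
    reversals = []
    for _ in range(n_trials):
        # Simulated observer: correct with probability given by the function.
        correct = rng.random() < psychometric(level, true_threshold)
        if correct:
            correct_run += 1
            direction = -1 if correct_run == 2 else 0
            if direction:
                correct_run = 0
        else:
            correct_run = 0
            direction = +1
        # A reversal is a change in the direction of stimulus movement.
        if direction and last_dir and direction != last_dir:
            reversals.append(level)
        if direction:
            last_dir = direction
        level += direction * step
    # Average the levels at the last several reversals as the estimate.
    tail = reversals[-8:]
    return sum(tail) / len(tail)

estimate = staircase(true_threshold=2.0)
```

The estimate lands near (somewhat below) the nominal threshold, because the 70.7%-correct convergence point of this rule sits slightly below the logistic midpoint when guess and lapse rates are folded in; this is exactly the kind of procedure-dependent discrepancy that, as the abstract notes, requires a psychophysical theory to interpret.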
Many current methods can claim statistical optimality in some sense, although what is optimal always depends on characteristics of the observer. Finally, threshold measurement is increasingly conducted in populations that are special; that is, they are not healthy college students. Békésy audiometry, an old example of such an application, sacrifices precision for speed, an acceptable compromise in the clinic. But as studies of animals, infants, children, older participants, and those with sensory and cognitive impairments have proliferated, it has become more crucial to make informed choices about the many parameters of psychometric-function measurement.
doi:10.3758/bf03194542 pmid:11800456