Multi-Atlas Segmentation with Joint Label Fusion

Hongzhi Wang, J. W. Suh, S. R. Das, J. B. Pluta, C. Craige, P. A. Yushkevich
2013 IEEE Transactions on Pattern Analysis and Machine Intelligence  
Multi-atlas segmentation is an effective approach for automatically labeling objects of interest in biomedical images. In this approach, multiple expert-segmented example images, called atlases, are registered to a target image, and the deformed atlas segmentations are combined using label fusion. Among the proposed label fusion strategies, weighted voting with spatially varying weight distributions derived from atlas-target intensity similarity has been particularly successful. However, one limitation of these strategies is that the weights are computed independently for each atlas, without taking into account the fact that different atlases may produce similar label errors. To address this limitation, we propose a new solution to the label fusion problem, in which weighted voting is formulated in terms of minimizing the total expectation of labeling error, and in which pairwise dependency between atlases is explicitly modeled as the joint probability of two atlases making a segmentation error at a voxel. This probability is approximated using intensity similarity between a pair of atlases and the target image in the neighborhood of each voxel. We validate our method on two medical image segmentation problems: hippocampus segmentation and hippocampus subfield segmentation in magnetic resonance (MR) images. For both problems, we show consistent and significant improvement over label fusion strategies that assign atlas weights independently.

Multi-atlas segmentation compensates for potential bias associated with using a single atlas and applies label fusion to produce the final segmentation. This approach incurs higher computational costs but, as extensive empirical studies in the recent literature have verified, e.g. [16], [3], [22], it is more accurate than single-atlas segmentation. Enabled by the availability and low cost of multi-core processors, multi-atlas label fusion (MALF) is becoming more accessible to the medical image analysis community. Recently, the concept has also been applied in computer vision for segmenting natural images [37], [21].

Errors produced by atlas-based segmentation can be attributed to dissimilarity in structure (e.g., anatomy) and appearance between the atlas and the target image. Recent research has focused on addressing this problem. For instance, such errors can be reduced by optimally constructing a single atlas that is the most representative of the population using training data [12], [11], [18].
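The formulation summarized above admits a concrete numerical sketch (this is an illustration, not the authors' implementation; the function name and the regularization constant `alpha` are hypothetical choices). At a single voxel, a pairwise dependency matrix M is estimated from products of atlas-target intensity differences over a patch, and the weights that minimize the total expected labeling error then take the closed form M⁻¹1 / (1ᵀM⁻¹1):

```python
import numpy as np

def joint_fusion_weights(patch_diffs, alpha=0.1):
    """Voting weights at one voxel, accounting for pairwise atlas dependency.

    patch_diffs: (K, P) array of intensity differences between each of K
    warped atlases and the target over a P-voxel patch. M[i, j] estimates
    the joint error tendency of atlases i and j; the returned weights
    minimize the total expected label error under this model.
    """
    D = np.abs(patch_diffs)
    M = D @ D.T                       # pairwise dependency estimate
    M += alpha * np.eye(len(M))       # small ridge term for conditioning
    ones = np.ones(len(M))
    w = np.linalg.solve(M, ones)      # proportional to M^-1 1
    return w / w.sum()                # normalize to sum to one
```

The duplicated-atlas scenario discussed below makes the effect visible: two atlases with identical error patterns share weight between them, rather than each receiving the full weight an independent scheme would assign.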
Constructing multiple representative atlases from training data has been considered as well and usually works better than single-atlas approaches. Multi-atlas construction is done either by constructing one representative atlas for each mode obtained from clustering training images [5], [2], [32] or by simply selecting the most relevant atlases for the unknown image on the fly [30], [1], [41]. Either way, one needs to combine the segmentation results obtained by referring to different atlases to produce the final solution.

Most existing label fusion methods are based on weighted voting [30], [16], [3], [17], [33], where each atlas contributes to the final solution according to a non-negative weight, with atlases more similar to the target image receiving larger weights. Among weighted voting methods, those that derive weights from local similarity between the atlas and target, and thus allow the weights to vary spatially, have been most successful in practice [3], [17], [33]. One common property of these spatially variable weighted voting MALF methods is that the weights for each atlas are computed independently, taking into consideration only the similarity between the warped atlas and the target image. As such, these methods are less effective when the label errors produced by the atlases are not independent, e.g. when most atlases produce similar errors. As a simple example, suppose that a single atlas is duplicated multiple times in the atlas set. If weights are derived only from atlas-target similarity, the total contribution of the repeated atlas to the consensus segmentation will increase in proportion to the number of times the atlas is repeated, making it more difficult to correct the label error produced by the duplicated atlas. Likewise, if the atlas set is dominated by a certain kind of anatomical feature or configuration, there will be an inherent bias towards that feature, even when segmenting target images that do not share that feature.
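For reference, the independently weighted, spatially varying voting scheme described above can be sketched in one dimension (a hypothetical helper; the patch radius and the weight exponent `beta` are illustrative choices, not values from the paper):

```python
import numpy as np

def local_weighted_vote(target, atlases, labels, radius=1, beta=2.0):
    """1-D illustration of locally weighted voting with independent weights.

    target: (N,) target intensities; atlases: (K, N) warped atlas
    intensities; labels: (K, N) integer warped atlas labels. Each atlas's
    weight at a position comes only from its own patch-wise dissimilarity
    to the target -- no interaction between atlases is modeled.
    """
    K, N = atlases.shape
    n_labels = int(labels.max()) + 1
    votes = np.zeros((n_labels, N))
    kernel = np.ones(2 * radius + 1)
    for k in range(K):
        sq = (atlases[k] - target) ** 2
        # patch-wise sum of squared differences around each position
        ssd = np.convolve(sq, kernel, mode="same")
        w = np.exp(-beta * ssd)          # more similar -> larger weight
        for x in range(N):
            votes[labels[k, x], x] += w[x]
    return votes.argmax(axis=0)          # per-position consensus label
```

Note that a duplicated atlas simply contributes its weight once per copy here, which is exactly the failure mode the example above describes.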
As a result, the quality of the segmentation for less frequent anatomical features/configurations may be reduced.

Another class of label fusion methods performs majority voting among a small subset of atlases that globally or locally best match the target image, discarding the information from poorly matching atlases [3], [7]. These methods are less susceptible to the problem described above, since an atlas appearing multiple times would only be included in the voting if it is similar to the target image. However, by completely discarding information from poorer matches, these methods lose the attractive property of voting that arises from the central limit theorem: when all atlases are roughly equally similar to the target image, voting over only a small subset forgoes the variance reduction that averaging over the full atlas set would provide.
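The subset strategy just described can be sketched as follows (again a 1-D illustration with a hypothetical function name; `n_best` and the global dissimilarity measure are assumptions for the sketch):

```python
import numpy as np

def subset_majority_vote(target, atlases, labels, n_best=3):
    """Majority vote restricted to the n_best globally best-matching atlases.

    target: (N,) target intensities; atlases: (K, N) warped atlas
    intensities; labels: (K, N) integer warped atlas labels. Atlases
    outside the n_best closest matches are discarded entirely.
    """
    ssd = ((atlases - target) ** 2).sum(axis=1)  # global dissimilarity per atlas
    best = np.argsort(ssd)[:n_best]              # keep only the closest matches
    sel = labels[best]                           # (n_best, N) selected labels
    # per-position unweighted majority among the selected atlases
    return np.array([np.bincount(sel[:, x]).argmax()
                     for x in range(sel.shape[1])])
```

A duplicated atlas only enters this vote if it matches the target well, but the hard cutoff throws away every vote from the remaining atlases, however informative.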
doi:10.1109/tpami.2012.143 pmid:22732662 pmcid:PMC3864549