Abstract
Background and objectives: Saliency refers to the visual perception
quality that makes objects in a scene stand out from their surroundings
and attract attention. While computational saliency models can simulate
an expert's visual attention, there is little evidence about how these
models perform when used to predict a cytopathologist's eye fixations.
Saliency models may be the key to enabling fast object detection on
large Pap smear slides under real conditions with noise, artifacts, and
cell occlusions. This paper describes how our computational schemes
retrieve regions of interest (ROI) of clinical relevance using visual
attention models. We also compare the performance of different computed
saliency models on cell screening tasks, aiming to design a
computer-aided diagnosis system that supports cytopathologists.
Method: We record eye fixation maps from cytopathologists at work and
compare them with 13 saliency prediction algorithms, including deep
learning models. We develop cell-specific convolutional neural networks
(CNNs) to investigate the impact of bottom-up and top-down factors on
saliency prediction from real routine exams. By combining the
eye-tracking data from pathologists with the computed saliency models,
we assess each algorithm's reliability in identifying clinically
relevant cells.
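The abstract does not give implementation details, so as a minimal sketch of how a recorded fixation map can be scored against a computed saliency map, the following shows the standard Normalized Scanpath Saliency (NSS) metric: z-score the predicted map and average it at the human fixation locations. The function name and array shapes are our own illustrative assumptions, not the paper's method.

```python
import numpy as np

def normalized_scanpath_saliency(saliency_map, fixation_points):
    """NSS: mean of the z-scored saliency map at human fixation locations.

    saliency_map    : 2-D array of predicted saliency values.
    fixation_points : iterable of (row, col) fixation coordinates.
    Higher values mean the model's salient regions align with fixations;
    a chance-level prediction scores around zero.
    """
    # Normalize the map to zero mean and unit variance (epsilon guards
    # against a constant map, whose std is zero).
    s = (saliency_map - saliency_map.mean()) / (saliency_map.std() + 1e-8)
    rows, cols = zip(*fixation_points)
    return float(s[list(rows), list(cols)].mean())
```

A map whose peak coincides with the fixations yields a large positive NSS, while a flat map scores near zero, which is why NSS is a common choice for comparing saliency algorithms against eye-tracking data.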
Results: The proposed cell-specific CNN model outperforms all other
saliency prediction methods, particularly regarding the number of false
positives. Our algorithm also detects the most clinically relevant
cells, which fall within the top three salient regions, with accuracy
above 98% for all diseases except carcinoma (87%). Bottom-up methods
performed satisfactorily, producing saliency maps that enabled ROI
detection above 75% for carcinoma and 86% for the other pathologies.
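Since the results are reported over the top three salient regions, a simple way to extract such regions from a saliency map is greedy peak picking with non-maximum suppression. This sketch is our own illustration of that generic step, with an assumed suppression window, not the paper's detection pipeline.

```python
import numpy as np

def top_k_rois(saliency_map, k=3, suppress_radius=5):
    """Greedily pick the k highest-saliency locations.

    After each pick, a square window around it is suppressed so the
    returned ROIs do not overlap. Returns (row, col) centers in
    descending order of saliency.
    """
    s = saliency_map.astype(float).copy()
    rois = []
    for _ in range(k):
        r, c = np.unravel_index(np.argmax(s), s.shape)
        rois.append((int(r), int(c)))
        # Suppress the neighborhood of the chosen peak.
        r0, r1 = max(0, r - suppress_radius), r + suppress_radius + 1
        c0, c1 = max(0, c - suppress_radius), c + suppress_radius + 1
        s[r0:r1, c0:c1] = -np.inf
    return rois
```

Ranking ROIs this way is what makes saliency a data-reduction strategy: downstream cell classifiers only need to run on the few windows returned, rather than on the whole slide.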
Conclusions: ROI extraction using our saliency prediction methods
enabled ranking the most clinically relevant areas within an image, a
viable data-reduction strategy to guide automatic analyses of Pap smear
slides. Top-down factors for saliency prediction on cell images
increase the accuracy of the estimated maps, while bottom-up algorithms
proved useful for predicting the cytopathologist's eye fixations
depending on parameters such as the number of false positives and
negatives. Our contributions are: a comparison of 13 state-of-the-art
saliency models against cytopathologists' visual attention, and a
method that associates the most conspicuous regions with clinically
relevant cells. (C) 2019 Elsevier B.V. All rights reserved.