Abstract
Cell segmentation is crucial in the study of morphogenesis in developing embryos, but its accuracy was limited until the advent of machine learning methods for image segmentation such as U-Net. These methods, however, are computationally expensive. In this study, we provide a rapid method for cell segmentation using machine learning on a personal computer, termed Cell Segmentator using Machine Learning (CSML). On a personal computer, CSML took four seconds per image on average, far less than the time needed to acquire an image. The F-value of CSML segmentation was approximately 0.97, outperforming state-of-the-art methods such as RACE and the watershed algorithm in segmenting Xenopus ectodermal cells. CSML was also slightly more accurate and faster than other machine learning-based methods such as U-Net. CSML requires only a single whole-embryo image to train its Fully Convolutional Network classifier, and only two parameters. To validate its accuracy, we compared CSML to other methods on several indicators of cell shape. We also examined the generality of the approach by measuring its segmentation performance on independent images. Our data demonstrate the superiority of CSML, and we expect this application to improve the efficiency of cell shape studies.
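The F-value reported above is the standard harmonic mean of precision and recall. As a point of reference (not code from the paper), it can be computed as:

```python
def f_value(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall (the F1 measure)."""
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# When precision and recall are both 0.97, the F-value is also 0.97.
print(round(f_value(0.97, 0.97), 2))  # → 0.97
```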
Funding: JP20km0210085, Hiroshima University Amphibian Research Center through the National BioResource Project (NBRP) of AMED; KAKENHI 19H04948; KAKENHI 21K06183; KAKENHI 26440115.
Figure 1. Outline of the CSML segmentation procedure. CSML consists of two steps: a training step (a) and an inferring step (b). (a) First, training patches are obtained by preprocessing cropped training images. The classifier is trained on the difference between a patch inferred by the classifier and the corresponding labeled patch. (b) Raw inferred patches are obtained by cutting out, preprocessing, and inferring patches. Two (s/2, s/2)-pixel-shifted raw inferred images are obtained by connecting the raw inferred patches with a 1-px border ignored. Whole segmented images are obtained by postprocessing the merged raw inferred image. (c) The 13 layers of the FCN classifier.
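The patch-based inference in Figure 1b can be sketched as follows. This is a minimal illustration of the idea (tile the image, infer each patch, assemble two tilings offset by (s/2, s/2) while ignoring a 1-px patch border, then merge), not the authors' implementation; `tile_infer` and its signature are hypothetical.

```python
import numpy as np

def tile_infer(image, infer, s):
    """Cut `image` into s x s patches, run `infer` on each, and merge
    two reconstructions offset by (s/2, s/2).  A sketch only; the 1-px
    border of each inferred patch is ignored when connecting patches."""
    h, w = image.shape
    out = np.zeros((2, h, w), dtype=np.float32)
    for k, off in enumerate((0, s // 2)):
        for y in range(off, h - s + 1, s):
            for x in range(off, w - s + 1, s):
                patch = infer(image[y:y + s, x:x + s])
                # drop a 1-px border of the inferred patch
                out[k, y + 1:y + s - 1, x + 1:x + s - 1] = patch[1:-1, 1:-1]
    # merge the two shifted reconstructions by averaging
    return out.mean(axis=0)
```

With an identity `infer`, this returns an image of the same shape whose interior is covered by at least one of the two tilings; in practice `infer` would be the trained FCN classifier applied to a preprocessed patch.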
Figure 2. Assessment of accuracy and speed of CSML, U-Net, reduced U-Net, RACE, and watershed. (a) Comparison of precision, recall, and F-value of segmentation by CSML, U-Net, reduced U-Net, RACE, and watershed. (b) The proportion of topological errors of segmentation by CSML, U-Net, reduced U-Net, RACE, and watershed. The remaining proportion not shown in the plot indicates no error. (c) Comparison of computational time required for training the machine learning-based methods and for segmentation of 60 images (1024 px × 1024 px). CSML(G) is CSML with a GPU. (d) The variance of the F-value of CSML segmentation over 30 runs. (e) Comparison of shape indicators among GT, CSML, RACE, and watershed. Histograms of area, perimeter, solidity, axis ratio, eccentricity, and orientation are shown (n = 2412 cells). Cells whose areas were over 1000 px were regarded as background and ignored. (f) Comparison of samples of segmentation by CSML, RACE, and watershed (128 px × 128 px).
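Several of the shape indicators compared in Figure 2e (area, axis ratio, eccentricity, orientation) can be derived from the second central moments of a binary cell mask. The sketch below is a plain-numpy illustration of that computation, not the authors' code; the function name and the moment-based formulas are standard but chosen here for illustration (solidity and perimeter, which need a convex hull and a boundary trace, are omitted).

```python
import numpy as np

def shape_indicators(mask):
    """Area, axis ratio, eccentricity, and orientation of a binary
    mask, computed from second central moments of its pixel cloud."""
    ys, xs = np.nonzero(mask)
    area = ys.size
    cy, cx = ys.mean(), xs.mean()
    # second central moments of the pixel distribution
    mu20 = ((xs - cx) ** 2).mean()
    mu02 = ((ys - cy) ** 2).mean()
    mu11 = ((xs - cx) * (ys - cy)).mean()
    # eigenvalues of the 2x2 covariance matrix give squared axis lengths
    common = np.sqrt(4 * mu11 ** 2 + (mu20 - mu02) ** 2)
    lam1 = (mu20 + mu02 + common) / 2
    lam2 = (mu20 + mu02 - common) / 2
    orientation = 0.5 * np.arctan2(2 * mu11, mu20 - mu02)
    axis_ratio = np.sqrt(lam1 / max(lam2, 1e-12))
    eccentricity = np.sqrt(max(1 - lam2 / max(lam1, 1e-12), 0.0))
    return {"area": area, "axis_ratio": axis_ratio,
            "eccentricity": eccentricity, "orientation": orientation}
```

A square mask, for example, has equal moments along both axes, so its axis ratio is 1 and its eccentricity is 0; elongated cells give larger values of both.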
Figure 3. Comparison of accuracy under various training conditions. (a) F-value of CSML segmentation in relation to the number of training patches. (b) Precision, recall, and F-value of CSML segmentation in relation to the proportion of inserted and deleted lines in labeled images. (c) F-value of CSML segmentation using the whole image, the central region, or the peripheral region of an embryo as training images or inferring images. (d) The accuracy of CSML segmentation using images with high (#4-1) or low clarity (#4-2) as training images, and images with high (#2-3), medium (#1-2), or low clarity (#3-5) as inferring images. (e) F-values of CSML segmentation using images at each stage of the same embryo for training or inferring, #2 for (E1) and #3 for (E2). (median ± IQR, n = 10)
Figure 4. Comparison of accuracy under various training conditions. (a) F-value of CSML segmentation using the whole image, the central region, or the peripheral region of an embryo as training images or inferring images. (b) F-value of CSML segmentation using images with high (#2-3), medium (#1-2), or low clarity (#3-5) as inferring images and images with high (#4-1) or low clarity (#4-2) as training images. (c) F-value of CSML segmentation using images at each stage of the same embryo for training or inferring, #2 for (E1) and #3 for (E2). (median ± IQR, n = 10)