13. Image preprocessing


Fig. 13.1 Pre-processing of fusion images.

The segmentation process involves 3 valued (or grey-level) images: the seed image, the membrane image, and the morphosnake image.

These images may be transformed versions of the fused image: Fig. 13.1 presents the workflow used to obtain them.

First, the values (or intensities) of the fused image may be normalized into 1 or 2 unsigned byte(s), depending on the value of intensity_prenormalization. This first normalization has been introduced to deal with float-encoded images, and should not be used for images already encoded on 1 or 2 unsigned byte(s).

Second, this pre-normalized image can undergo 3 different processings, and the resulting image will be a combination of these 3 images.

  • intensity_transformation transforms the image values based on the image histogram and allows the intensities of the input image to be normalized into 1 or 2 unsigned byte(s), either globally or on a cell basis (the latter only in segmentation propagation). The resulting image can be further smoothed by a Gaussian kernel (if intensity_sigma > 0). See section Histogram based image value transformation.

  • intensity_enhancement transforms the image with a membrane dedicated process, either globally or on a cell basis (the latter only in segmentation propagation). See section Membrane dedicated enhancement.

  • outer_contour_enhancement adds a fake outer membrane, derived from the previous segmentation (it is thus only available for the segmentation propagation).

The combination mode is set by the reconstruction_images_combination variable.
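
For instance, a parameter file could configure these channels as follows. This is an illustrative sketch only; in particular, the 'maximum' value given for reconstruction_images_combination is an assumption, to be checked against the accepted values listed in section Preprocessing parameters.

intensity_prenormalization = 'identity'
intensity_transformation = 'normalization_to_u8'
intensity_enhancement = 'GLACE'
outer_contour_enhancement = False
reconstruction_images_combination = 'maximum'  # assumed value, for illustration only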

If the fused image is transformed before being segmented, the transformed image is named <EN>_fuse_t<timepoint>_xxx.inr, with xxx being either seed, membrane, or morphosnake, and is stored in the directory SEG/SEG_<EXP_SEG>/RECONSTRUCTION/ if the variable keep_reconstruction is set to True. Only one of them is computed if the pre-processing parameters are the same for the 3 images. None is computed if the 3 images are equal to the fused image.

Note that specifying

intensity_prenormalization = 'identity'
intensity_transformation = 'identity'
intensity_enhancement = None
outer_contour_enhancement = False

in the parameter file amounts to using the unprocessed fused image as the input image for seed extraction, watershed, and morphosnake computation.

A comprehensive list of the pre-processing parameters can be found in section Preprocessing parameters. Pre-processing parameters can be set differently for the seed, membrane, and morphosnake images by prefixing them (see section Prefixed parameters) with seed_, membrane_, or morphosnake_ (see sections astec_mars parameters and astec_astec parameters).
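
As a hypothetical example built only from the prefixing rule above (not an excerpt of a real parameter file), the following lines would apply the histogram based normalization to the seed and membrane images, add the membrane dedicated enhancement to the membrane image only, and keep the unprocessed fused image for the morphosnake computation:

seed_intensity_transformation = 'normalization_to_u8'
membrane_intensity_transformation = 'normalization_to_u8'
membrane_intensity_enhancement = 'GLACE'
morphosnake_intensity_transformation = 'identity'
morphosnake_intensity_enhancement = None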

13.1. Histogram based image value transformation

The option intensity_transformation can be set to one out of the three (segmentation of the first time point, see section astec_mars) or four (segmentation by propagation of the other time points, see section astec_astec) following values.

None

this pre-processing channel is not used, meaning that only the membrane dedicated process will produce the input for the segmentation.

'identity'

there is no transformation of the fused image.

'normalization_to_u8'

input images are usually encoded on 2 bytes. However, it is of some interest to have input images with similar intensity distributions, so that the segmentation parameters (e.g. the \(h\) for the regional minima computation) do not have to be tuned independently for each image or sequence.

This choice casts the input image onto a one-byte image (i.e. into the value range \([0, 255]\)) by linearly mapping the fused image values from \([I_{min}, I_{max}]\) to \([0, 255]\). \(I_{min}\) and \(I_{max}\) correspond respectively to the 1% and 99% percentiles of the fused image cumulative histogram. This allows a robust normalization into \([0, 255]\), unaffected by extremely low or high intensity values. Values below \(I_{min}\) are set to \(0\) while values above \(I_{max}\) are set to \(255\) (see the sketch at the end of this section).

The percentiles used for the casting can be tuned by means of the two variables

normalization_min_percentile = 0.01
normalization_max_percentile = 0.99

'cell_normalization_to_u8'

this choice can only be used for the segmentation propagation (see section astec_astec). It has been developed (and kept) for historical reasons but has not proven to be useful yet.

The segmentation (the image of cell labels) at time point \(t\), \(S^{\star}_t\), is first deformed onto the image at time \(t+1\) thanks to the transformation \(\mathcal{T}_{t \leftarrow t+1}\) from the image \(I^{t+1}_{fuse}\) at time \(t+1\) towards the image \(I^{t}_{fuse}\) at time \(t\) (this transformation is computed with the fused images). The deformed segmentation is denoted by \(S^{\star}_t \circ \mathcal{T}_{t \leftarrow t+1}\). Provided that the co-registration of the images \(I^{t+1}_{fuse}\) and \(I^{t}_{fuse}\) is successful, this deformed segmentation is an estimated segmentation (without any cell division) of \(I^{t+1}_{fuse}\).

Instead of computing one histogram for the whole image as in 'normalization_to_u8', and thus having one \(I_{min}\) and one \(I_{max}\) value for the whole image, histograms are here computed on a cell basis, and a couple \((I_{min}, I_{max})\) is computed for each label of \(S^{\star}_t \circ \mathcal{T}_{t \leftarrow t+1}\), yielding images of \(I_{min}\) and \(I_{max}\) values. Since this induces discontinuities at cell borders, these two images are smoothed (with a Gaussian filter of standard deviation cell_normalization_sigma) before casting into \([0, 255]\) (see also the sketch at the end of this section).

For each cell, different cell areas can be used to build the histograms from which \(I_{min}\) and \(I_{max}\) are computed.

cell_normalization_max_method

sets the cell area on which the histogram for the \(I_{max}\) value is computed, while

cell_normalization_min_method

sets the cell area on which the histogram for the \(I_{min}\) value is computed.

Cell areas can be defined as

cell

all the values of \(I^{t+1}_{fuse}\) within the aimed cell, as defined in \(S^{\star}_t \circ \mathcal{T}_{t \leftarrow t+1}\), are used for the histogram computation,

cellborder

only the values of \(I^{t+1}_{fuse}\) at the border of the aimed cell, as defined in \(S^{\star}_t \circ \mathcal{T}_{t \leftarrow t+1}\), are used for the histogram computation, and

cellinterior

all the values of \(I^{t+1}_{fuse}\) in the interior of the aimed cell (the border being excluded), as defined in \(S^{\star}_t \circ \mathcal{T}_{t \leftarrow t+1}\), are used for the histogram computation.

Default values are

cell_normalization_max_method = 'cellborder'
cell_normalization_min_method = 'cellinterior'

meaning that the \(I_{max}\) values are computed at the cells’ borders while the \(I_{min}\) values are computed in the cells’ interiors.
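
To make the mapping concrete, here is a minimal NumPy sketch of both normalizations. It only illustrates the computation described above; the function names are made up, it is not the actual implementation, and it uses the whole cell for both per-cell histograms for simplicity.

import numpy as np
from scipy.ndimage import gaussian_filter


def normalization_to_u8(image, min_percentile=0.01, max_percentile=0.99):
    """Map [I_min, I_max] (percentiles of the cumulative histogram)
    linearly onto [0, 255] and cast to one unsigned byte."""
    i_min, i_max = np.quantile(image, [min_percentile, max_percentile])
    out = (image.astype(np.float32) - i_min) / float(i_max - i_min)
    return np.clip(255.0 * out, 0, 255).astype(np.uint8)


def cell_normalization_to_u8(image, deformed_segmentation, sigma,
                             min_percentile=0.01, max_percentile=0.99):
    """Cell-based variant: one (I_min, I_max) couple per label of the
    deformed segmentation, then Gaussian smoothing of the I_min and I_max
    images to avoid discontinuities at cell borders. The actual method
    allows the border or the interior of each cell to be selected through
    the cell_normalization_{min,max}_method options; the whole cell is
    used here for simplicity."""
    i_min = np.zeros(image.shape, dtype=np.float32)
    i_max = np.zeros(image.shape, dtype=np.float32)
    for label in np.unique(deformed_segmentation):
        mask = deformed_segmentation == label
        lo, hi = np.quantile(image[mask], [min_percentile, max_percentile])
        i_min[mask] = lo
        i_max[mask] = hi
    i_min = gaussian_filter(i_min, sigma)  # sigma plays the role of cell_normalization_sigma
    i_max = gaussian_filter(i_max, sigma)
    out = (image.astype(np.float32) - i_min) / np.maximum(i_max - i_min, 1e-6)
    return np.clip(255.0 * out, 0, 255).astype(np.uint8)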

13.2. Membrane dedicated enhancement

The option intensity_enhancement can be set to one out of the two (segmentation of the first time point, see section astec_mars) or three (segmentation by propagation of the other time points, see section astec_astec) following values.

None

this pre-processing channel is not used, meaning that only the histogram based image value transformation will produce the input for the segmentation.

'GACE'

stands for Global Automated Cell Extractor. This is the method described in [MGFM14], [Mic16].

'GLACE'

stands for Grouped Local Automated Cell Extractor. It differs from GACE by one step: the threshold of the extrema image is not computed globally (as in GACE), but one threshold is computed per cell of \(S^{\star}_{t-1} \circ \mathcal{T}_{t-1 \leftarrow t}\), from the extrema values within the cell bounding box.

Both GACE and GLACE consist of the following steps.

  1. Membrane dedicated response computation. The Hessian is computed by convolution with the second derivatives of a Gaussian kernel (whose standard deviation is given by mars_sigma_membrane). The analysis of the eigenvalues and eigenvectors of the Hessian matrix allows the normal direction of a putative membrane to be recognized. A response is then computed based on a contour detector in the membrane normal direction (a simplified sketch of this step is given at the end of this section).

  2. Directional extrema extraction. Extrema of the response in the direction of the membrane normal are extracted. It yields a valued image of membrane centerplanes.

  3. Direction dependent automated thresholding.

It has been observed that the membrane contrast depends on the membrane orientation with respect to the microscope apparatus. Directional response histograms are therefore built, and a threshold is computed for each of them, which yields a direction-dependent threshold.

Thresholds are computed by fitting known distributions to the histograms. Fitting is done by means of an iterative minimization, after an automated initialization. The sensitivity option allows the threshold choice after the distribution fitting to be controlled.

Setting the manual parameter to True allows the distribution to be manually initialized before minimization, thanks to the manual_sigma option.

Last, the user can directly provide the threshold to be applied (this is then a global threshold that does not depend on the membrane direction) by setting the hard_thresholding option to True: the threshold to be applied has then to be given through the hard_threshold option.

  4. Sampling. Points issued from the previous binarization step will be further used for a tensor voting procedure. To decrease the computational cost, only a fraction of the binary membrane may be retained. This fraction is set by the sample option.

    Note

    Sampling is performed through pseudo-random numbers. To reproduce a segmentation experiment by either GACE or GLACE, the random seed can be set thanks to the mars_sample_random_seed option.

    If one wants to reproduce segmentation experiments, the verbosity of the experiments has to be increased by adding at least one -v to the command line of either astec_mars or astec_astec. This ensures that the necessary information will be written into the .log file. Then, to reproduce one given experiment, one has to retrieve the used random seed 'RRRRRRRRRR' from the line

    Sampling step : random seed = RRRRRRRRRR
    

    in the log file SEG/SEG_<EXP_SEG>/LOGS/astec_mars-XXXX-XX-XX-XX-XX-XX.log or SEG/SEG_<EXP_SEG>/LOGS/astec_astec-XXXX-XX-XX-XX-XX-XX.log, and then to add the line

    sample_random_seed = 'RRRRRRRRRR'
    

    in the parameter file to get the same sampling.

  5. Tensor voting. Each retained point of the binary image (together with its membrane normal direction) generates a tensor voting field, whose extent is controlled by the sigma_TV option (expressed in voxel units). These fields are added to yield a global tensor image, and a membraneness value is computed at each point, resulting in a scalar image.

  6. Smoothing. A final smoothing of this scalar image may optionally be applied, controlled by the sigma_LF option.
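
To illustrate step 1 above, the following sketch computes a plate-like membrane response from the Hessian of a Gaussian-smoothed image. It is a simplified reading of the published approach [MGFM14], not the ASTEC code itself: the function name is made up, and the magnitude of the most negative eigenvalue is used here as a stand-in for the contour-based response of the actual method.

import numpy as np
from scipy.ndimage import gaussian_filter


def membrane_response(image, sigma):
    # Hessian obtained by convolution with second derivatives of a Gaussian
    # kernel of standard deviation 'sigma' (cf. mars_sigma_membrane above).
    img = image.astype(np.float32)
    ndim = img.ndim
    hessian = np.empty(img.shape + (ndim, ndim), dtype=np.float32)
    for i in range(ndim):
        for j in range(ndim):
            order = [0] * ndim
            order[i] += 1
            order[j] += 1
            hessian[..., i, j] = gaussian_filter(img, sigma, order=order)
    # Eigen-analysis: eigenvalues in ascending order, eigenvectors as columns.
    eigenvalues, eigenvectors = np.linalg.eigh(hessian)
    # Bright plate-like structures have one strongly negative eigenvalue;
    # its eigenvector gives the putative membrane normal direction.
    response = np.clip(-eigenvalues[..., 0], 0.0, None)
    normals = eigenvectors[..., :, 0]
    return response, normals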