4. Tutorial¶
4.1. Fusion parameter settings¶
This section aims at explaining how to set the three parameters that describe the acquisition protocol (see section important parameters):

- acquisition_mirrors (or raw_mirrors) is a parameter indicating whether the right camera images have already been mirrored along the X axis (so that the X axis direction is the one of the left cameras) or not. Its value is either False or True. When acquisition_mirrors is set to False, the X-axis of the right camera images will be inverted.
- acquisition_leftcamera_z_stacking gives the order of stacking in the Z direction for the left camera images. When acquisition_leftcamera_z_stacking is set to 'inverse', the Z-axis of all images (from both left and right cameras) will be inverted.
- acquisition_orientation (or raw_ori) is a parameter describing the orientation of the acquisition of the stack #1 images with respect to the stack #0 ones. 'right': the frame (X, Z) of the left camera of stack #0 needs to be rotated clockwise (90 degrees along the Y axis) to correspond to the left camera of stack #1 (see figure 6.1). 'left': the frame (X, Z) of the left camera of stack #0 needs to be rotated counterclockwise (-90 degrees along the Y axis) to correspond to the left camera of stack #1 (see figure 6.2).
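As an illustration of what the two mirroring parameters imply, here is a minimal numpy sketch of the corresponding axis flips for images stored as (Z, Y, X) arrays. This is an illustration only, not the astec implementation; the function name and array layout are assumptions.

```python
import numpy as np

def apply_acquisition_conventions(left_img, right_img,
                                  acquisition_mirrors=False,
                                  acquisition_leftcamera_z_stacking='direct'):
    """Illustrative sketch (NOT the astec implementation) of the axis
    operations implied by the two parameters, for (Z, Y, X) arrays."""
    if not acquisition_mirrors:
        # right camera images were not mirrored at acquisition time:
        # invert their X axis so it matches the left camera frame
        right_img = right_img[:, :, ::-1]
    if acquisition_leftcamera_z_stacking == 'inverse':
        # invert the Z axis of all images (left and right cameras)
        left_img = left_img[::-1]
        right_img = right_img[::-1]
    return left_img, right_img
```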
The experiments presented below rely upon the raw data organized as below (see also Data organization). They aim at helping set the important parameters for fusion.
$ /path/to/experiment/
├── RAWDATA/
│ ├── stack_0_channel_0_obj_left/
│ │ ├── Cam_left_00000.lux.h5
│ │ ├── ...
│ │ └── ...
│ ├── stack_0_channel_0_obj_right/
│ │ ├── Cam_right_00000.lux.h5
│ │ ├── ...
│ │ └── ...
│ ├── stack_1_channel_0_obj_left/
│ │ ├── Cam_left_00000.lux.h5
│ │ ├── ...
│ │ └── ...
│ └── stack_1_channel_0_obj_right/
│ ├── Cam_right_00000.lux.h5
│ ├── ...
│ └── ...
4.1.1. Setting acquisition_mirrors¶
acquisition_mirrors (or raw_mirrors) is a parameter indicating whether the right camera images have
already been mirrored along the X axis (so that the X axis direction is the one of the left cameras) or not.
Looking at the acquired data provides a first means to decide how to set this parameter.
Above are the XY-sections (Z=145) of time point 0 of the Zidane series for both cameras (left and right) of stack #0. The right camera image is symmetrical with respect to the left one, demonstrating that the X axis has to be mirrored, hence that
acquisition_mirrors = False
has to be specified in the parameter file.
Obviously, the same symmetry can be observed in the images of the second stack (stack #1), acquired after the stage rotation.
Alternatively, the value of acquisition_mirrors can also be checked by performing a first fusion.
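The visual comparison above can also be turned into a rough numerical heuristic. The sketch below is an assumption for illustration, not an astec tool: it correlates the left camera image with the right one and with its X-mirrored version, and suggests a value for acquisition_mirrors.

```python
import numpy as np

def suggest_acquisition_mirrors(left_img, right_img):
    """Rough heuristic (illustrative assumption, not an astec tool):
    return a suggested value for acquisition_mirrors based on which
    version of the right image correlates best with the left one."""
    def corr(a, b):
        return float(np.corrcoef(a.ravel(), b.ravel())[0, 1])
    # if the X-mirrored right image matches the left image better, the
    # right images were NOT mirrored at acquisition: acquisition_mirrors = False
    return not (corr(left_img, right_img[..., ::-1]) > corr(left_img, right_img))
```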
4.1.2. Setting acquisition_leftcamera_z_stacking¶
The right values of both acquisition_mirrors and acquisition_leftcamera_z_stacking can be
checked by performing a first fusion.
Since acquisition_mirrors will cause the x-mirroring of the right camera images, and
acquisition_leftcamera_z_stacking will cause the z-mirroring of all images, performing a
fusion with the left and right camera images of the first stack (stack #0) is sufficient to
check whether the values of these parameters have been properly set.
Let us consider the following parameter file (saved in parameters.py, but any name can be chosen):
PATH_EMBRYO = "path/to/experiment/"
EN = "241010-Zidane"
DIR_RAWDATA = 'RAWDATA'
DIR_LEFTCAM_STACKZERO = 'stack_0_channel_0_obj_left'
DIR_RIGHTCAM_STACKZERO = 'stack_0_channel_0_obj_right'
DIR_LEFTCAM_STACKONE = ""
DIR_RIGHTCAM_STACKONE = ""
acquisition_leftcam_image_prefix = 'Cam_left_00'
acquisition_rightcam_image_prefix = 'Cam_right_00'
acquisition_resolution = (0.195, 0.195, 1.0)
acquisition_mirrors = True
acquisition_leftcamera_z_stacking = 'direct'
begin = 0
end = 0
target_resolution = 0.60
result_image_suffix = 'mha'
acquisition_z_cropping = True
registration_transformation_type = 'translation'
fusion_xzsection_extraction = True
fusion_weighting = 'ramp'
fusion_strategy = 'direct-fusion'
EXP_FUSE = 'test_true_direct'
Some remarks.

- The first lines describe how the data are organized (see Section 6.4). Please note that both DIR_LEFTCAM_STACKONE and DIR_RIGHTCAM_STACKONE are set to "". This discards the images of the second stack (stack #1) from the fusion; in other words, the fusion will be done only with the images of the first stack (stack #0).
- acquisition_mirrors and acquisition_leftcamera_z_stacking have intentionally been set incorrectly, for educational purposes.
- begin and end are both set to 0. Since the aim is to check the value of some parameters, only one time point will be fused (here 0).
- The target_resolution is set to some high value, to decrease the computational time.
- The type of the sought transformation (between the images to be fused) is set to 'translation'. It corresponds to a perfect acquisition. The 'affine' transformation type, which allows scaling, may find an extreme scaling factor if the two images to be registered are too far apart, making the interpretation more difficult.
- Very important: fusion_xzsection_extraction is set to True. This will allow XZ-sections to be computed after fusion, which is the best way to check whether acquisition_leftcamera_z_stacking has been properly set.
- Still important: the weighting function fusion_weighting has been set to 'ramp'. Another z-dependent weighting scheme could have been chosen as well.
Running the fusion amounts to calling the command line astec_fusion with the parameter file
astec_fusion -p parameters.py
After a (successful) run, a FUSE directory appears with a sub-directory
named after the variable EXP_FUSE
$ /path/to/experiment/
├── RAWDATA/
│ ├── stack_0_channel_0_obj_left/
│ │ ├── Cam_left_00000.lux.h5
│ . .
├── FUSE/
│ └── FUSE_test_true_direct/
│ ├── 241010-Zidane_fuse_t000.mha
│ ├── LOGS
│ └── XZSECTION_000
│ ├── 241010-Zidane_xz167_fuse.mha
│ ├── 241010-Zidane_xz167_stack0_lc_reg.mha
│ ├── 241010-Zidane_xz167_stack0_lc_weight.mha
│ ├── 241010-Zidane_xz167_stack0_rc_reg.mha
│ └── 241010-Zidane_xz167_stack0_rc_weight.mha
The FUSE_test_true_direct directory contains:

- the fusion image(s) (here only one image, 241010-Zidane_fuse_t000.mha),
- a LOGS directory, which contains a copy of the parameter file and monitoring information about the fusion computation, and
- an XZSECTION_000 directory named after the time point, which contains an XZ-section of the fusion image, of the co-registered images, and of the weight images (the weighted combination of those images allows recalculating the XZ-sections of the fusion image).

Note that result images are named after the EN (embryo name) variable.
XY-section of the fused image
XZ-section of the fused image
The XY-section (Z=165, extracted from the file 241010-Zidane_fuse_t000.mha) and the XZ-section
(Y=167, automatically extracted) demonstrate the improper choice of
acquisition_mirrors.
Below are the other images of the XZSECTION_000 directory.
XZ-section of left camera acquisition image
XZ-section of right camera acquisition image
XZ-section of left camera weight image
XZ-section of right camera weight image
It can be seen that the lower part of the XZ-section of the left camera image (the upper left image) is better defined than its upper part (and conversely for the XZ-section of the right camera image), while this lower part also receives a higher weight (see the corresponding weight image, at the lower left; note that high values correspond to white while low values correspond to black).
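The ramp weighting and the resulting linear combination can be sketched with a few lines of numpy. This is a minimal illustration assuming a simple linear ramp along Z; astec's exact weighting formula may differ.

```python
import numpy as np

def ramp_weights(nz, camera='left'):
    """Illustrative ramp weighting along Z (an assumption, not astec's
    exact formula): the left camera sees the low-Z side best, so its
    weight decreases with Z; the right camera gets the mirrored ramp."""
    w = np.linspace(1.0, 0.0, nz)          # high weight near z = 0
    return w if camera == 'left' else w[::-1]

def fuse(left_img, right_img):
    """Weighted linear combination of two co-registered (Z, Y, X) images."""
    nz = left_img.shape[0]
    wl = ramp_weights(nz, 'left')[:, None, None]
    wr = ramp_weights(nz, 'right')[:, None, None]
    return (wl * left_img + wr * right_img) / (wl + wr)
```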
Let us now consider the following parameter file where the values of acquisition_mirrors
and acquisition_leftcamera_z_stacking are correctly set.
PATH_EMBRYO = "path/to/experiment/"
EN = "241010-Zidane"
DIR_RAWDATA = 'RAWDATA'
DIR_LEFTCAM_STACKZERO = 'stack_0_channel_0_obj_left'
DIR_RIGHTCAM_STACKZERO = 'stack_0_channel_0_obj_right'
DIR_LEFTCAM_STACKONE = ""
DIR_RIGHTCAM_STACKONE = ""
acquisition_leftcam_image_prefix = 'Cam_left_00'
acquisition_rightcam_image_prefix = 'Cam_right_00'
acquisition_resolution = (0.195, 0.195, 1.0)
acquisition_mirrors = False
acquisition_leftcamera_z_stacking = 'inverse'
begin = 0
end = 0
target_resolution = 0.60
result_image_suffix = 'mha'
acquisition_z_cropping = True
registration_transformation_type = 'translation'
fusion_xzsection_extraction = True
fusion_weighting = 'ramp'
fusion_strategy = 'direct-fusion'
EXP_FUSE = 'test_false_inverse'
After a (successful) run of astec_fusion,
a new sub-directory appears in the FUSE directory, since the value of
EXP_FUSE has changed.
$ /path/to/experiment/
├── RAWDATA/
│ ├── stack_0_channel_0_obj_left/
│ │ ├── Cam_left_00000.lux.h5
│ . .
├── FUSE/
│ ├── FUSE_test_true_direct/
│ │ ├── ...
│ . │ .
│ └── FUSE_test_false_inverse/
│ ├── 241010-Zidane_xz167_fuse.mha
│ .
XZ-section of left camera acquisition image
XZ-section of right camera acquisition image
XZ-section of left camera weight image
XZ-section of right camera weight image
First, it has to be observed that changing the value of acquisition_leftcamera_z_stacking
from 'direct' to 'inverse' causes the acquisition images to be mirrored
along the Z direction.
Second, it can be seen that the better defined parts of the XZ-sections now correspond to high values in the weight images.
XZ-section of composite left/right images
XZ-section of the result fusion image
Above at the left, the composite view (the red and green channels are respectively the XZ-sections of the left and right camera images) allows one to visually assess the registration (recall that this is done here by a translation only). At the right, the XZ-section of the fused image (from the left and right cameras of the first stack).
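Such a composite view can be reproduced with a few lines of numpy (an illustrative sketch, not an astec command): the left camera section goes in the red channel and the right one in the green channel, so that well-registered structures appear yellow while misregistrations show up as pure red or green.

```python
import numpy as np

def composite_rg(left_sec, right_sec):
    """Build an RGB composite from two co-registered grayscale sections
    (illustrative sketch): left -> red channel, right -> green channel."""
    def norm(a):
        # min-max normalize to [0, 1] so both channels are comparable
        a = a.astype(float)
        rng = a.max() - a.min()
        return (a - a.min()) / rng if rng > 0 else np.zeros_like(a)
    rgb = np.zeros(left_sec.shape + (3,))
    rgb[..., 0] = norm(left_sec)   # red   <- left camera section
    rgb[..., 1] = norm(right_sec)  # green <- right camera section
    return rgb
```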
4.1.3. Setting acquisition_orientation¶
Assuming that acquisition_mirrors and acquisition_leftcamera_z_stacking
have been correctly set, a fusion image will be reconstructed with only the
left and right camera images of the second stack (after rotation of the stage,
see section multiview acquisition).
The stage rotation axis being the Y-axis of the acquired images, looking at XZ-sections
is definitively the most efficient means to assess whether acquisition_orientation
is properly set.
In the following parameter file
PATH_EMBRYO = "path/to/experiment/"
EN = "241010-Zidane"
DIR_RAWDATA = 'RAWDATA'
DIR_LEFTCAM_STACKZERO = ''
DIR_RIGHTCAM_STACKZERO = ''
DIR_LEFTCAM_STACKONE = 'stack_1_channel_0_obj_left'
DIR_RIGHTCAM_STACKONE = 'stack_1_channel_0_obj_right'
acquisition_leftcam_image_prefix = 'Cam_left_00'
acquisition_rightcam_image_prefix = 'Cam_right_00'
acquisition_resolution = (0.195, 0.195, 1.0)
acquisition_mirrors = False
acquisition_leftcamera_z_stacking = 'inverse'
acquisition_orientation = 'left'
begin = 0
end = 0
target_resolution = 0.60
result_image_suffix = 'mha'
acquisition_z_cropping = True
registration_transformation_type = 'translation'
fusion_xzsection_extraction = True
fusion_weighting = 'ramp'
fusion_strategy = 'direct-fusion'
EXP_FUSE = 'test_left'
only the data acquired from the second stack (stack #1) will be used for fusion
(notice that both DIR_LEFTCAM_STACKZERO and DIR_RIGHTCAM_STACKZERO have been
set to ''), fusion_xzsection_extraction has been set to True, and
acquisition_orientation has been set to 'left'.
A second parameter file, with acquisition_orientation set to 'right',
is also created, and both are passed to astec_fusion for computation.
Even if only the left and right cameras of the second stack are used for fusion,
the fused image is directly comparable to the left camera of the first stack
(whose geometry serves as reference). From the above, it can be seen
that acquisition_orientation has to be set to 'right'
for the fusion to be comparable with the one obtained with the cameras of the first stack.
4.2. Fusion (example)¶
The following parameter file performs the fusion of the 4 acquired views with the direct fusion strategy (see Section 6.7.1).
PATH_EMBRYO = "path/to/experiment/"
EN = "241010-Zidane"
DIR_RAWDATA = 'RAWDATA'
DIR_LEFTCAM_STACKZERO = 'stack_0_channel_0_obj_left'
DIR_RIGHTCAM_STACKZERO = 'stack_0_channel_0_obj_right'
DIR_LEFTCAM_STACKONE = 'stack_1_channel_0_obj_left'
DIR_RIGHTCAM_STACKONE = 'stack_1_channel_0_obj_right'
acquisition_leftcam_image_prefix = 'Cam_left_00'
acquisition_rightcam_image_prefix = 'Cam_right_00'
acquisition_resolution = (0.195, 0.195, 1.0)
acquisition_mirrors = False
acquisition_leftcamera_z_stacking = 'inverse'
acquisition_orientation = 'right'
begin = 0
end = 199
target_resolution = 0.30
result_image_suffix = 'mha'
acquisition_z_cropping = True
fusion_weighting = 'ramp'
fusion_strategy = 'direct-fusion'
4.3. Drift compensated fusion¶
A drift-compensated fusion requires several steps.

1. The fusion, with astec_fusion, of the left and right cameras of each single acquisition angle. By construction, there cannot be any mismatch between the two cameras, and the fusion will benefit from the two images. Sections 4.3.1 and 4.3.2 present examples of parameter files for this step.
2. The drift correction, with astec_drift, within each fusion series. Sections 4.3.3 and 4.3.4 present examples of parameter files for this step.
3. A drift correction between the two (one per angle) drift-compensated time series of stack images, still with astec_drift. Since the time series are supposed to be still (or almost still) after drift compensation, it should be sufficient to register one time point, and not every time point. Drifts are compensated with respect to a reference time point (by default the first one, defined by the begin parameter, but another one can be defined by reference_index). It is mandatory that this reference time point be the same for the two stacks. If there is no motion between the reference images of the two stacks, which can easily be verified by comparing the two fused images, this step is not necessary; however, it is required if some motion is observed. Section 4.3.5 presents an example of parameter file for this step.
4. Once the drifts have been estimated both intra-stack and inter-stack, the drift-compensated fusion can be done with astec_fusion. Section 4.3.6 presents an example of parameter file for this step.
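The steps above can be summarized as a sequence of command-line calls. The parameter file names below are hypothetical placeholders; each file would correspond to one of the parameter files detailed in the following sections.

```python
# Hypothetical parameter-file names (placeholders): adapt them to your own files.
# In practice each command would be launched, e.g. with subprocess.run(cmd, check=True).
pipeline = [
    ["astec_fusion", "-p", "fuse_stack0.py"],           # step 1: fuse cameras of stack #0
    ["astec_fusion", "-p", "fuse_stack1.py"],           # step 1: fuse cameras of stack #1
    ["astec_drift",  "-p", "drift_stack0.py"],          # step 2: intra-series drift, stack #0
    ["astec_drift",  "-p", "drift_stack1.py"],          # step 2: intra-series drift, stack #1
    ["astec_drift",  "-p", "drift_between_stacks.py"],  # step 3: stack-to-stack drift (if needed)
    ["astec_fusion", "-p", "fuse_release.py"],          # step 4: final drift-compensated fusion
]
for cmd in pipeline:
    print(" ".join(cmd))
```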
4.3.1. Stack #0 fusion (parameters)¶
PATH_EMBRYO = "."
EN = "241010-Zidane"
begin = 0
end = 199
acquisition_resolution = (0.195, 0.195, 1.0)
acquisition_orientation = 'right'
acquisition_mirrors = False
acquisition_leftcamera_z_stacking = 'inverse'
DIR_RAWDATA = 'RAWDATA'
DIR_LEFTCAM_STACKZERO = 'stack_0_channel_0_obj_left'
DIR_RIGHTCAM_STACKZERO = 'stack_0_channel_0_obj_right'
DIR_LEFTCAM_STACKONE = ''
DIR_RIGHTCAM_STACKONE = ''
fusion_weighting = 'ramp'
fusion_strategy = 'direct-fusion'
target_resolution = 0.60
EXP_FUSE = 'stack0'
- DIR_LEFTCAM_STACKONE and DIR_RIGHTCAM_STACKONE are set to '', so they will be ignored in the fusion process.
- target_resolution is set to a high value (here 0.6). It will not only speed up the fusion process, but also the subsequent drift computation.
Note
a larger target_resolution can be used to reconstruct the stacks than the one
that will be used for the final fusion.
4.3.2. Stack #1 fusion (parameters)¶
This step is similar to the previous one.
PATH_EMBRYO = "."
EN = "241010-Zidane"
begin = 0
end = 199
acquisition_resolution = (0.195, 0.195, 1.0)
acquisition_orientation = 'right'
acquisition_mirrors = False
acquisition_leftcamera_z_stacking = 'inverse'
DIR_RAWDATA = 'RAWDATA'
DIR_LEFTCAM_STACKZERO = ''
DIR_RIGHTCAM_STACKZERO = ''
DIR_LEFTCAM_STACKONE = 'stack_1_channel_0_obj_left'
DIR_RIGHTCAM_STACKONE = 'stack_1_channel_0_obj_right'
fusion_weighting = 'ramp'
fusion_strategy = 'direct-fusion'
target_resolution = 0.60
EXP_FUSE = 'stack1'
Warning
It is mandatory to specify the second stack (stack #1) left and right
acquisition directories in the variables
DIR_LEFTCAM_STACKONE and DIR_RIGHTCAM_STACKONE, not in the
variables dedicated to the first stack (stack #0).
4.3.3. Drift estimation of stack #0 (parameters)¶
PATH_EMBRYO = "."
EN = "241010-Zidane"
begin = 0
end = 199
EXP_FUSE = 'stack0'
xy_movie_fusion_images = True
xz_movie_fusion_images = True
yz_movie_fusion_images = False
only_initialisation = True
resolution = 0.60
template_type = 'FUSION'
template_threshold = 140
EXP_DRIFT = 'stack0'
The first run of astec_drift performs:

- a first intra-series registration of the fused first stack images,
- the drift corrections of this series, based on the threshold calculated from the scores issued from the intra-series co-registration transformations, and
- a second intra-series registration of the fused first stack images that incorporates the computed drift transformations.

However, to check the stack fusion parameters, or to adjust score_threshold, it may be desirable to assess the 2D+t movies (in the ITER0-MOVIES_t<begin>-<end> directory) and/or the score figure (in the ITER0-CO_SCORE directory) before computing the first correction round. To do so, the variable only_initialisation has to be set to True.

Some remarks.

- The first run of astec_drift, if only_initialisation is set to False, will generate ITER0-* and ITER1-* subdirectories. To assess the quality of drift estimation, one can look not only at the figures generated from the scores (see section 7.3) but also at the 2D+t movies (of the fused images resampled after drift compensation).
- Again, these movies can be computed with a high resolution value (here resolution = 0.6), since the aim is to evaluate whether the co-registration is correct (not accurate).
- Obviously, 2D+t movies along other sections (with the parameters xz_movie_fusion_images and yz_movie_fusion_images) may also be computed, as exemplified in the above parameter file.
- More importantly, the parameter template_threshold has to be set according to the intensity dynamics of the fused images, in order to limit the size of the template (see also section 12.6 and section 17.16).
- Based on the analysis of the first run results, it has to be decided whether another iteration (another run of astec_drift) has to be made. The selection based on the automated threshold (see section 7.5) may be too large; in that case, corrections have to be selected with the parameters score_threshold, corrections_to_be_done, or corrections_to_be_added (see section 17.11 for details).
- The value of the parameter rotation_sphere_radius controls the number of initial rotations to be tested, and hence the overall computational cost of drift correction.
Once the computed drift is satisfactory, some directories or files can be removed to save room on the physical device: see section 7.8.
4.3.4. Drift estimation of stack #1 (parameters)¶
This step is similar to the previous one, but for the second stack/angle.
PATH_EMBRYO = "."
EN = "241010-Zidane"
begin = 0
end = 199
EXP_FUSE = 'stack1'
xy_movie_fusion_images = True
xz_movie_fusion_images = True
yz_movie_fusion_images = False
only_initialisation = True
resolution = 0.60
template_type = 'FUSION'
template_threshold = 140
EXP_DRIFT = 'stack1'
4.3.5. Drift estimation between stacks (parameters)¶
In the two previous sections,
astec_drift was used to compute the transformations
between any two successive fused images of a given stack (obtained
by fusion of the left and right images of a given angle),
and these transformations can be further used for a drift-compensated fusion.
However, this assumes that the two drift-compensated series of stacks are already (almost)
co-registered.
One thus has to check first whether the two stacks are already co-registered. If not, one has to compute a transformation between the two stacks, which is the purpose of this section.
To check whether the two stacks are co-registered,
one has to check, by visual inspection, whether the images of
the time point reference_index (or begin
if reference_index was not set)
issued from the stack fusions
(section 4.3.2
and section 4.3.1)
can roughly be superimposed.
If yes, we can proceed to the drift-compensated fusion (section 4.3.6).
If no, the two stacks may have to be co-registered, which can be done
with astec_drift and the following parameter file.
PATH_EMBRYO = "."
EN = "241010-Zidane"
begin = 0
end = 0
EXP_FUSE = ['stack1', 'stack0']
score_threshold = 11.13
EXP_DRIFT = 'stack1'
- It is mandatory to specify the two directories in the second stack (stack #1) drift parameter file, and to put the fusion directory of the first stack (stack #0) after the fusion directory of the second stack.
- end has been set to the same value as begin. This way, only the stack-to-stack drift will be computed. If end has a larger value (say end = 199), a new iteration of intra-stack drift estimation (for the second stack, i.e. stack #1) will be performed.
- score_threshold has been set to the threshold computed at the first run. It will allow an earlier stop when testing the set of initial rotations.
See also section 7.7.
4.3.6. Drift compensated fusion (parameters)¶
PATH_EMBRYO = "."
EN = "241010-Zidane"
begin = 0
end = 199
acquisition_resolution = (0.195, 0.195, 1.0)
acquisition_orientation = 'right'
acquisition_mirrors = False
acquisition_leftcamera_z_stacking = 'inverse'
DIR_RAWDATA = 'RAWDATA'
DIR_LEFTCAM_STACKZERO = 'stack_0_channel_0_obj_left'
DIR_RIGHTCAM_STACKZERO = 'stack_0_channel_0_obj_right'
DIR_LEFTCAM_STACKONE = 'stack_1_channel_0_obj_left'
DIR_RIGHTCAM_STACKONE = 'stack_1_channel_0_obj_right'
fusion_weighting = 'ramp'
fusion_strategy = 'direct-fusion'
target_resolution = 0.30
EXP_FUSE = 'RELEASE'
EXP_DRIFT = ['stack0', 'stack1']
The only difference with a fusion parameter file without drift compensation is the line

EXP_DRIFT = ['stack0', 'stack1']

that indicates the DRIFT/ subdirectories where the drift transformations will be searched for. The order is important: the drift directory for the first stack has to be indicated first, and the stack-to-stack transformation, if any, will be searched for in the second directory.

The required resolution of the fusion, given by the parameter target_resolution, can be freely chosen (independently of the target_resolution used for stack fusion and of the resolution value used for drift estimation).
5. Tutorial (advanced)¶
5.1. Fusion with missing slices (without drift)¶
It may happen that some slices are missing (because the excitation laser went off during the acquisition) in the (left and right) images of one camera.
Since the final fusion image is a linear combination of the (transformed) acquisition images (see section 6.8), this can be corrected by putting ‘0’ in the areas of the weight images that correspond to the laser-off parts of the acquisition images. Unfortunately, the laser-off parts are unknown, so this requires some careful manipulations, which are exemplified below for the Joseph dataset, where the laser went off for part of the stack #0 acquisition at time 110.
First, one has to run the fusion process with the ‘-k’ option to keep
all the intermediate results (see below the parameter file).
It is mandatory to set fusion_acquisition_z_cropping
to False (else only one portion of the embryo will be kept in laser-off acquisitions,
because of the largest connected component extraction), or better to 'border'
(see section 6.6).
PATH_EMBRYO = "."
EN = "250416-Joseph"
DIR_RAWDATA = 'RAWDATA-105-115'
DIR_LEFTCAM_STACKZERO = 'stack_0_channel_0_obj_left'
DIR_RIGHTCAM_STACKZERO = 'stack_0_channel_0_obj_right'
DIR_LEFTCAM_STACKONE = 'stack_1_channel_0_obj_left'
DIR_RIGHTCAM_STACKONE = 'stack_1_channel_0_obj_right'
acquisition_leftcam_image_prefix = 'Cam_Left_000'
acquisition_rightcam_image_prefix = 'Cam_Right_000'
acquisition_orientation = 'right'
acquisition_mirrors = False
acquisition_leftcamera_z_stacking = 'direct'
acquisition_resolution = (0.195, 0.195, 1.0)
begin = 110
end = 110
target_resolution = 0.6
fusion_xzsection_extraction = True
fusion_weighting = 'ramp'
fusion_strategy = 'direct-fusion'
EXP_FUSE = 'direct-110'
fusion_acquisition_z_cropping = False
After the computation, the directory FUSE_direct-110 still contains the auxiliary calculation
images and transformations in the directories TEMP_110 and TRSF_110, thanks to the
-k option. In TEMP_110 and TRSF_110, the subdirectories ANGLE_0 and ANGLE_1
correspond respectively to the left and right cameras of stack #0, while
ANGLE_2 and ANGLE_3 correspond respectively to the left and right cameras of stack #1.
$ /path/to/experiment/
├── RAWDATA/
│ └── ...
├── FUSE/
│ └── FUSE_direct-110/
│ ├── 250416-Joseph_fuse_t110.mha
│ ├── LOGS
│ │ ├── astec_fusion-2025-06-28-19-42-19.log
│ │ └── ...
│ ├── TEMP_110
│ │ ├── ...
│ │ ├── ANGLE_0
│ │ │ ├── ...
│ │ │ ├── Cam_left_00110_init_weight.mha
│ │ │ ├── Cam_left_00110_pre_crop.mha
│ │ │ └── ...
│ │ ├── ANGLE_1
│ │ │ ├── ...
│ │ │ ├── Cam_right_00110_init_weight.mha
│ │ │ ├── Cam_right_00110_pre_crop.mha
│ │ │ ├── ...
│ │ │ ├── Cam_right_00110_pre_mirror.mha
│ │ │ └── ...
│ │ ├── ANGLE_2
│ │ │ └── ...
│ │ └── ANGLE_3
│ │ └── ...
│ ├── TRSF_110
│ │ ├── ANGLE_0
│ │ │ └── ...
│ │ ├── ANGLE_1
│ │ │ └── ...
│ │ ├── ANGLE_2
│ │ │ └── ...
│ │ └── ANGLE_3
│ │ └── ...
│ └── XZSECTION_110
│ └── ...
The log file astec_fusion-2025-06-28-19-42-19.log keeps track of the commands used to
compute the fusion image, which exhibits a darker area due to the missing slices
(see Fig. 5.1).
Fig. 5.1 XZ section of the fused image of Joseph at time 110. The missing slices of stack #0 acquisitions cause a darker area.¶
Below are the XZ-sections used for the linear combination (see Section 6.8)
that can be found in the directory XZSECTION_110. Because there was no cropping along the Z direction,
large background areas appear at the top and bottom of each image.
However, since fusion_cropping is set to True, the final result image
(see Fig. 5.1) is cropped along the Z direction.
left cam, stack #0
right cam, stack #0
left cam, stack #1
right cam, stack #1
To deal with the laser-off parts, weight images will be recalculated by putting ‘0’ in the areas contaminated by the laser-off sections (see the commands in Example 5.1). First, we have to identify these laser-off sections. Then, mask images will be built in the same reference frame as the initial weight images (lines 4 and 20), with 255 for laser-on areas and 0 for laser-off areas (lines 7 and 23). Then, they will be linearly resampled with the already computed transformations (lines 10 and 26). Non-contaminated areas will correspond to the 255 values after transformation. Contaminated areas will be set to 0 in the weight images (lines 14 and 16, and 30 and 32), and the linear combination will be recomputed (line 36).
Images used for registration are either *_pre_crop.* or *_pre_mirror.* images, depending on the
value of fusion_acquisition_mirrors.
A look at the contents of the TEMP_110 subdirectories allows identifying
respectively Cam_left_00110_pre_crop.mha
and Cam_right_00110_pre_mirror.mha in the subdirectories ANGLE_0 and ANGLE_1.
With the zpar command, one can easily get the image dimensions.
$ cd FUSE_direct-110/TEMP_110/ANGLE_0
$ zpar Cam_left_00110_pre_crop.mha
Cam_left_00110_pre_crop.mha: -x 331 -y 332 -z 411 -f -o 2 -vx 0.600000 -vy 0.600000 -vz 1.000000
$ cd ../ANGLE_1
$ zpar Cam_right_00110_pre_mirror.mha
Cam_right_00110_pre_mirror.mha: -x 330 -y 331 -z 411 -f -o 2 -vx 0.600000 -vy 0.600000 -vz 1.000000
$ cd ../../../
By visually browsing the sections of the images (Cam_left_00110_pre_crop.mha and Cam_right_00110_pre_mirror.mha),
e.g. with Fiji, it is straightforward to identify the non-excited sections (with Fiji,
these are the sections from Z=201 to Z=215; recall that Fiji section numbering starts at 1).
Let’s detail the commands for the left camera of stack #0.
- Line 4 creates a mask image with the same geometry as the initial weight image (calculated in the acquisition geometry, after cropping), filled with 255 values.
- Line 7 fills a rectangle with 0, starting at x=0, y=0, z=200 and ending at x=330, y=331, z=214. By convention, drawShapes starts axis numbering at 0: from x=0 to x=330 there are 331 indices, which corresponds to the X dimension of Cam_left_00110_pre_crop.mha, and z=200 to z=214 corresponds to the Z=201 to Z=215 sections identified with Fiji.
- Line 10 performs a linear resampling of the mask image. If a voxel value in the resulting image is less than 255, it means that it is contaminated by the laser-off sections.
- Line 14 thresholds the transformed mask image, which results in the non-contaminated areas in the reference frame of the fused image (before the final cropping, see section 6.9).
- Line 16 masks the transformed weight image.
- Line 36 recomputes the linear combination with the recomputed weight images.
- Line 46 mimics the same final cropping as originally done. The bounding box information can be found in the log file astec_fusion-2025-06-28-19-42-19.log; here the information was: crop from [0,331]x[0,332]x[0,685] to [0,331]x[0,332]x[174,566]
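The off-by-one conversion between Fiji's 1-based section numbering and drawShapes' 0-based indexing can be made explicit. The helper below is for illustration only, not an astec utility.

```python
def fiji_to_drawshapes_z(fiji_first, fiji_last):
    """Convert a 1-based, inclusive Fiji slice range into the 0-based,
    inclusive Z bounds expected by drawShapes (illustrative helper,
    not an astec utility)."""
    return fiji_first - 1, fiji_last - 1

# Fiji sections 201..215 (laser-off) -> drawShapes z bounds
z0, z1 = fiji_to_drawshapes_z(201, 215)
print(z0, z1)   # 200 214
```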
Once the fused image has been recomputed, the auxiliary subdirectories TEMP_110 and TRSF_110
can be deleted to save disk space.
1 DIRIM=./FUSE/FUSE_direct-110/TEMP_110/
2 DIRTR=./FUSE/FUSE_direct-110/TRSF_110/
3
4 createImage ${DIRIM}/ANGLE_0/Cam_left_00110_init_mask.mha \
5 -template ${DIRIM}/ANGLE_0/Cam_left_00110_init_weight.mha \
6 -value 255 -type u8
7 drawShapes ${DIRIM}/ANGLE_0/Cam_left_00110_init_mask.mha \
8 ${DIRIM}/ANGLE_0/Cam_left_00110_init_mask.mha \
9 -shape rectangle -origin 0 0 200 -end 330 331 214 -value 0
10 applyTrsf ${DIRIM}/ANGLE_0/Cam_left_00110_init_mask.mha \
11 ${DIRIM}/ANGLE_0/Cam_left_00110_mask.mha \
12 -trsf ${DIRTR}/ANGLE_0/Cam_left_00110_reg_full.trsf \
13 -template ${DIRIM}/ANGLE_0/Cam_left_00110_reg_final.mha -linear
14 seuillage ${DIRIM}/ANGLE_0/Cam_left_00110_mask.mha \
15 ${DIRIM}/ANGLE_0/Cam_left_00110_mask.mha -sb 255
16 Logic -mask ${DIRIM}/ANGLE_0/Cam_left_00110_mask.mha \
17 ${DIRIM}/ANGLE_0/Cam_left_00110_weight.mha \
18 ${DIRIM}/ANGLE_0/Cam_left_00110_weight.mha
19
20 createImage ${DIRIM}/ANGLE_1/Cam_right_00110_init_mask.mha \
21 -template ${DIRIM}/ANGLE_1/Cam_right_00110_init_weight.mha \
22 -value 255 -type u8
23 drawShapes ${DIRIM}/ANGLE_1/Cam_right_00110_init_mask.mha \
24 ${DIRIM}/ANGLE_1/Cam_right_00110_init_mask.mha \
25 -shape rectangle -origin 0 0 200 -end 330 331 214 -value 0
26 applyTrsf ${DIRIM}/ANGLE_1/Cam_right_00110_init_mask.mha \
27 ${DIRIM}/ANGLE_1/Cam_right_00110_mask.mha \
28 -trsf ${DIRTR}/ANGLE_1/Cam_right_00110_reg_full.trsf \
29 -template ${DIRIM}/ANGLE_1/Cam_right_00110_reg_final.mha -linear
30 seuillage ${DIRIM}/ANGLE_1/Cam_right_00110_mask.mha \
31 ${DIRIM}/ANGLE_1/Cam_right_00110_mask.mha -sb 255
32 Logic -mask ${DIRIM}/ANGLE_1/Cam_right_00110_mask.mha \
33 ${DIRIM}/ANGLE_1/Cam_right_00110_weight.mha \
34 ${DIRIM}/ANGLE_1/Cam_right_00110_weight.mha
35
36 mc-linearCombination -weights ${DIRIM}/ANGLE_0/Cam_left_00110_weight.mha \
37 ${DIRIM}/ANGLE_1/Cam_right_00110_weight.mha \
38 ${DIRIM}/ANGLE_2/Cam_left_00110_weight.mha \
39 ${DIRIM}/ANGLE_3/Cam_right_00110_weight.mha \
40 -images ${DIRIM}/ANGLE_0/Cam_left_00110_tobefused.mha \
41 ${DIRIM}/ANGLE_1/Cam_right_00110_tobefused.mha \
42 ${DIRIM}/ANGLE_2/Cam_left_00110_tobefused.mha \
43 ${DIRIM}/ANGLE_3/Cam_right_00110_tobefused.mha \
44 -res ${DIRIM}/250416-Joseph_fuse_t110_uncropped_fusion.mha
45
46 extImage ${DIRIM}/250416-Joseph_fuse_t110_uncropped_fusion.mha \
47 ./FUSE/FUSE_direct-110/250416-Joseph_fuse_t110.mha \
48 -origin 0 0 174 -x 331 -y 332 -z 392
Fig. 5.2 XZ section of the fused image of Joseph at time 110 after reweighting to deal with laser-off sections.¶
5.2. Drift compensated fusion with missing slices (DCFMS)¶
The point is to pay attention to the images with missing slices (as in section 5.1).
5.2.1. Stacks fusion (DCFMS)¶
The default cropping method
(see section 6.6)
will fail, when cropping along the Z direction, in case of missing slices.
Both z_cropping parameters are therefore set to 'border'.
Here is a parameter file example for the first stack (aka stack #0):
PATH_EMBRYO = "."
EN = "250416-Joseph"
DIR_RAWDATA = 'RAWDATA'
DIR_LEFTCAM_STACKZERO = 'stack_0_channel_0_obj_left'
DIR_RIGHTCAM_STACKZERO = 'stack_0_channel_0_obj_right'
DIR_LEFTCAM_STACKONE = ''
DIR_RIGHTCAM_STACKONE = ''
acquisition_leftcam_image_prefix = 'Cam_left_00'
acquisition_rightcam_image_prefix = 'Cam_right_00'
acquisition_orientation = 'right'
acquisition_mirrors = False
acquisition_leftcamera_z_stacking = 'direct'
acquisition_resolution = (0.195, 0.195, 1.0)
begin = 0
end = 149
target_resolution = 0.6
fusion_xzsection_extraction = False
save_transformation = False
fusion_weighting = 'ramp'
fusion_strategy = 'direct-fusion'
EXP_FUSE = 'stack0-000-149'
fusion_acquisition_z_cropping = 'border'
fusion_z_cropping = 'border'
and the parameter file example for the second stack (aka stack #1):
PATH_EMBRYO = "."
EN = "250416-Joseph"
DIR_RAWDATA = 'RAWDATA'
DIR_LEFTCAM_STACKZERO = ''
DIR_RIGHTCAM_STACKZERO = ''
DIR_LEFTCAM_STACKONE = 'stack_1_channel_0_obj_left'
DIR_RIGHTCAM_STACKONE = 'stack_1_channel_0_obj_right'
acquisition_leftcam_image_prefix = 'Cam_left_00'
acquisition_rightcam_image_prefix = 'Cam_right_00'
acquisition_orientation = 'right'
acquisition_mirrors = False
acquisition_leftcamera_z_stacking = 'direct'
acquisition_resolution = (0.195, 0.195, 1.0)
begin = 0
end = 149
target_resolution = 0.6
fusion_xzsection_extraction = False
save_transformation = False
fusion_weighting = 'ramp'
fusion_strategy = 'direct-fusion'
EXP_FUSE = 'stack1-000-149'
fusion_acquisition_z_cropping = 'border'
fusion_z_cropping = 'border'
5.2.2. Drift estimation within stacks (DCFMS)¶
When estimating the drift, missing slices in the image at time \(t\) will impair the registration when comparing the images at times \(t-1\) and \(t\) (see section 7.1), and yield high score values.
It is then mandatory to
set only_initialisation to True
(see section 17.11)
for the first run of astec_drift. If not, the score threshold that is automatically
calculated will surely be below the scores due to missing slices: as a consequence,
a considerable amount of time would be spent trying to improve these scores, although
this is not possible.
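The effect described above can be illustrated with a small sketch. How astec_drift actually computes its threshold is not specified here, so the median + MAD rule below is purely an illustrative assumption; the point is that any threshold derived from the bulk of the score distribution necessarily lies far below the outlier scores, including the one caused by missing slices that no re-registration can reduce.

```python
import statistics

def automatic_threshold(scores, k=3.0):
    # Illustrative robust threshold (median + k * MAD); the actual
    # computation inside astec_drift may differ.
    med = statistics.median(scores)
    mad = statistics.median(abs(s - med) for s in scores)
    return med + k * mad

# Synthetic co-registration scores for 148 image couples, with two
# outliers: one caused by a real drift (couple (29, 30)), one caused
# by missing slices (couple (109, 110)).
scores = [1.0 + 0.1 * (i % 5) for i in range(148)]
scores[29] = 6.0    # drift: can be improved by re-registration
scores[109] = 8.0   # missing slices: cannot be improved

threshold = automatic_threshold(scores)
flagged = [i for i, s in enumerate(scores) if s > threshold]
print(threshold, flagged)
```

Both peaks are flagged; without only_initialisation = True the pipeline would then also try, in vain, to improve the score of the missing-slice couple.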
Here is the parameter file for the first run of astec_drift for the first stack.
PATH_EMBRYO = "."
EN = "250416-Joseph"
begin = 0
end = 149
EXP_FUSE = 'stack0-000-149'
xy_movie_fusion_images = True
xz_movie_fusion_images = True
yz_movie_fusion_images = False
resolution = 0.60
template_type = 'FUSION'
template_threshold = 140
only_initialisation = True
EXP_DRIFT = 'stack0-000-149'
After computation, we look at the figure (figure 5.3)
generated in ITER0-CO-SCORE/
(see section 7.3
and figure 7.2).
Fig. 5.3 Top: scores with respect to time; bottom: rotation angle with respect to time.¶
Two peaks can be distinguished in figure 5.3. Each of them can be due either to a poor co-registration between consecutive images (because the drift may cause an excessive motion), or to missing slices in one image.
To decide between these two hypotheses, one has to look at the movies generated in
ITER0-MOVIES_t000-149 (see section 7.3).
It appears that the peak for the image couple
\((29,30)\) is due to an uncorrected drift, while the one for the couple
\((109,110)\) is due to missing slices in the image at time point
\(110\).
Here is the parameter file for the second run of astec_drift for the first stack:
PATH_EMBRYO = "."
EN = "250416-Joseph"
begin = 0
end = 149
EXP_FUSE = 'stack0-000-149'
xy_movie_fusion_images = True
xz_movie_fusion_images = True
yz_movie_fusion_images = False
resolution = 0.60
template_type = 'FUSION'
template_threshold = 140
only_initialisation = False
score_threshold = 4
corrections_to_be_skipped = [109]
EXP_DRIFT = 'stack0-000-149'
The correction for the image of time point \(109\) is skipped
thanks to corrections_to_be_skipped, while a threshold
of \(4.0\) is sufficient to detect mis-registrations.
Visual inspection of the results (figure and movies) obtained after the second run demonstrates that the drift is successfully corrected.
The drift of the second stack can be computed accordingly, with the parameter file:
PATH_EMBRYO = "."
EN = "250416-Joseph"
begin = 0
end = 149
EXP_FUSE = 'stack1-000-149'
xy_movie_fusion_images = True
xz_movie_fusion_images = True
yz_movie_fusion_images = False
resolution = 0.60
template_type = 'FUSION'
template_threshold = 140
only_initialisation = True
EXP_DRIFT = 'stack1-000-149'
The score figure (figure 5.4) shows that there is no abnormally high registration score.
Fig. 5.4 Top: scores with respect to time; bottom: rotation angle with respect to time.¶
5.2.3. Drift estimation between stacks (DCFMS)¶
The drift between the two stacks is then estimated (see section 7.7) with the following parameter file:
PATH_EMBRYO = "."
EN = "250416-Joseph"
begin = 0
end = 0
EXP_FUSE = ['stack1-000-149', 'stack0-000-149']
score_threshold = 5.0
EXP_DRIFT = 'stack1-000-149'
5.2.4. Drift compensated fusion (DCFMS)¶
First, acquisitions have to be fused with the computed drift. Basically, it is the same parameter file as for the drift compensated fusion without missing slices (section 4.3.6).
As explained in section 5.1,
we have to keep the intermediate results of the fusion to be corrected (the ones
from acquisitions with missing slices). The purpose of the variable
keep_temporary_files (section 17.2.1)
is to give a list of the time points for which we want to keep the
auxiliary results (the -k option keeps the auxiliary results for all
processed time points).
Note that the final cropping (the one of the fused image) can be done with the historical method, hence fusion_z_cropping is set back to 'component'.
PATH_EMBRYO = "."
EN = "250416-Joseph"
DIR_RAWDATA = 'RAWDATA'
DIR_LEFTCAM_STACKZERO = 'stack_0_channel_0_obj_left'
DIR_RIGHTCAM_STACKZERO = 'stack_0_channel_0_obj_right'
DIR_LEFTCAM_STACKONE = 'stack_1_channel_0_obj_left'
DIR_RIGHTCAM_STACKONE = 'stack_1_channel_0_obj_right'
acquisition_leftcam_image_prefix = 'Cam_left_00'
acquisition_rightcam_image_prefix = 'Cam_right_00'
acquisition_orientation = 'right'
acquisition_mirrors = False
acquisition_leftcamera_z_stacking = 'direct'
acquisition_resolution = (0.195, 0.195, 1.0)
begin = 0
end = 149
target_resolution = 0.3
fusion_weighting = 'ramp'
fusion_strategy = 'direct-fusion'
EXP_FUSE = 'drift-direct-000-149'
EXP_DRIFT = ['stack0-000-149', 'stack1-000-149']
keep_temporary_files = [110]
fusion_acquisition_z_cropping = 'border'
fusion_z_cropping = 'component'
Second, we recompute a fused image by masking the missing slices, as described in section 5.1:
DIRIM=./FUSE/FUSE_drift-direct-000-149/TEMP_110/
DIRTR=./FUSE/FUSE_drift-direct-000-149/TRSF_110/
DIRRES=./FUSE/FUSE_drift-direct-000-149/
createImage ${DIRIM}/ANGLE_0/Cam_left_00110_init_mask.mha \
-template ${DIRIM}/ANGLE_0/Cam_left_00110_init_weight.mha \
-value 255 -type u8
drawShapes ${DIRIM}/ANGLE_0/Cam_left_00110_init_mask.mha \
${DIRIM}/ANGLE_0/Cam_left_00110_init_mask.mha \
-shape rectangle -origin 0 0 113 -end 581 581 127 -value 0
applyTrsf ${DIRIM}/ANGLE_0/Cam_left_00110_init_mask.mha \
${DIRIM}/ANGLE_0/Cam_left_00110_mask.mha \
-trsf ${DIRTR}/ANGLE_0/Cam_left_00110_reg_full.trsf \
-template ${DIRIM}/ANGLE_0/Cam_left_00110_reg_final.mha -linear
seuillage ${DIRIM}/ANGLE_0/Cam_left_00110_mask.mha \
${DIRIM}/ANGLE_0/Cam_left_00110_mask.mha -sb 255
Logic -mask ${DIRIM}/ANGLE_0/Cam_left_00110_mask.mha \
${DIRIM}/ANGLE_0/Cam_left_00110_weight.mha \
${DIRIM}/ANGLE_0/Cam_left_00110_weight.mha
createImage ${DIRIM}/ANGLE_1/Cam_right_00110_init_mask.mha \
-template ${DIRIM}/ANGLE_1/Cam_right_00110_init_weight.mha \
-value 255 -type u8
drawShapes ${DIRIM}/ANGLE_1/Cam_right_00110_init_mask.mha \
${DIRIM}/ANGLE_1/Cam_right_00110_init_mask.mha \
-shape rectangle -origin 0 0 112 -end 582 582 126 -value 0
applyTrsf ${DIRIM}/ANGLE_1/Cam_right_00110_init_mask.mha \
${DIRIM}/ANGLE_1/Cam_right_00110_mask.mha \
-trsf ${DIRTR}/ANGLE_1/Cam_right_00110_reg_full.trsf \
-template ${DIRIM}/ANGLE_1/Cam_right_00110_reg_final.mha -linear
seuillage ${DIRIM}/ANGLE_1/Cam_right_00110_mask.mha \
${DIRIM}/ANGLE_1/Cam_right_00110_mask.mha -sb 255
Logic -mask ${DIRIM}/ANGLE_1/Cam_right_00110_mask.mha \
${DIRIM}/ANGLE_1/Cam_right_00110_weight.mha \
${DIRIM}/ANGLE_1/Cam_right_00110_weight.mha
mc-linearCombination -weights ${DIRIM}/ANGLE_0/Cam_left_00110_weight.mha \
${DIRIM}/ANGLE_1/Cam_right_00110_weight.mha \
${DIRIM}/ANGLE_2/Cam_left_00110_weight.mha \
${DIRIM}/ANGLE_3/Cam_right_00110_weight.mha \
-images ${DIRIM}/ANGLE_0/Cam_left_00110_tobefused.mha \
${DIRIM}/ANGLE_1/Cam_right_00110_tobefused.mha \
${DIRIM}/ANGLE_2/Cam_left_00110_tobefused.mha \
${DIRIM}/ANGLE_3/Cam_right_00110_tobefused.mha \
-res ${DIRIM}/250416-Joseph_fuse_t110_uncropped_fusion.mha
extImage ${DIRIM}/250416-Joseph_fuse_t110_uncropped_fusion.mha \
${DIRRES}/250416-Joseph_fuse_t110.mha \
-origin 20 11 19 -x 729 -y 728 -z 567
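The shell recipe above boils down to a weighted linear combination: each angle's weight image is zeroed on its missing slices, then the fused image is the weighted average of the co-registered acquisitions, which is what mc-linearCombination computes. A minimal numpy sketch of this principle follows; the array shapes and slice ranges are illustrative, not the actual image geometry.

```python
import numpy as np

# Four co-registered "angle" images and their weight images,
# as (z, y, x) arrays (illustrative sizes).
shape = (40, 8, 8)
rng = np.random.default_rng(0)
images = [rng.uniform(100.0, 200.0, shape) for _ in range(4)]
weights = [np.ones(shape) for _ in range(4)]

# Zero the weights on the missing ("laser-off") slices of two angles,
# which is what the createImage/drawShapes/Logic commands achieve.
weights[0][20:25] = 0.0
weights[1][19:24] = 0.0

# Weighted linear combination; here the denominator never vanishes,
# since angles 2 and 3 keep full weight everywhere.
num = sum(w * im for w, im in zip(weights, images))
den = sum(weights)
fused = num / den
```

On the masked slices, the fused intensities come only from the angles whose weights were left untouched, so the laser-off sections no longer darken the result.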