Friday 11 November 2011

Atlas authoring

The construction of the animations and annotated images involves several image-processing steps implemented by the authoring client programs. Ideally these steps would all be automatic once a 3-D anatomic model could be deformed to fit the data. Since we have not yet solved that problem, we use a more traditional approach, but with an eye toward integrating advances in knowledge-based segmentation from our own lab and from others.
The 3-D models are generated by 3-D reconstruction from serial sections. The input is an image volume consisting of a set of serial sections; two well-known examples of this kind of input are the Visible Human male and female, but clinical image volumes can also be used. The cross-section of each structure on each of these images is segmented using a manual segmentation tool called Morpho. The resulting stack of contours is input to our locally developed Skandha program, which is used to reconstruct the contours into a 3-D surface, to combine surfaces into 3-D models, and to render the models either as static 2-D images or as QuickTime animations. In our production system most of these tasks are done manually (without the aid of shape knowledge), which is not a major burden since only one or two canonical models are being developed.
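
To make the data flow concrete, the following is a minimal Java sketch of how a stack of traced contours for one structure might be represented before surface reconstruction. The class and field names (Contour, sectionIndex, tileContours) are illustrative assumptions; they are not part of Morpho or Skandha, whose actual data formats are not described here.

    import java.util.ArrayList;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    // Hypothetical sketch of the contour data that manual segmentation
    // might produce for one anatomic structure on one serial section.
    class Contour {
        final int sectionIndex;          // which serial section the outline was traced on
        final List<double[]> points;     // (x, y) vertices of the traced cross-section
        Contour(int sectionIndex, List<double[]> points) {
            this.sectionIndex = sectionIndex;
            this.points = points;
        }
    }

    public class ReconstructionSketch {
        public static void main(String[] args) {
            // One stack of contours per named structure, keyed by structure name.
            Map<String, List<Contour>> contourStacks = new HashMap<>();
            contourStacks.put("left_thalamus", new ArrayList<>());

            // A single traced outline on section 120 (coordinates are placeholders).
            contourStacks.get("left_thalamus").add(
                new Contour(120, List.of(new double[]{10.0, 12.5},
                                         new double[]{11.2, 13.0},
                                         new double[]{10.8, 14.1})));

            // The surface tiling, model assembly, and rendering are done by
            // Skandha; the comments below only indicate the order of the steps.
            for (Map.Entry<String, List<Contour>> stack : contourStacks.entrySet()) {
                System.out.printf("Reconstructing %s from %d contours%n",
                                  stack.getKey(), stack.getValue().size());
                // surface = tileContours(stack.getValue());   // contour stack -> 3-D surface
                // model.add(surface);                         // surfaces -> 3-D model
            }
            // render(model, "static");     // static 2-D image
            // render(model, "quicktime");  // QuickTime animation
        }
    }

The point of the sketch is simply that each structure is carried through the pipeline as an ordered stack of per-section outlines until Skandha turns the stack into a surface.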

The 2-D images are annotated with a Java-based software tool we call Frame Builder, which allows the author to delineate regions on the images and to label each region either with a structure name or with a command to open another image. The annotations are saved in a separate file we call a frame. The animations and image-frame pairs are then saved together in a directory, one for each atlas.
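
As an illustration only, the sketch below shows one plausible way a frame's contents could be organized: each delineated region carries either a structure name or an open-image command. The Region class, its fields, and the file names are assumptions made for the example; the actual Frame Builder file format is not specified here.

    import java.util.ArrayList;
    import java.util.List;

    // Hypothetical sketch of what one "frame" might record for an annotated image.
    public class FrameSketch {
        // A delineated region on the image, labeled either with a structure name
        // or with a command to open another image (but not both).
        static class Region {
            final int[] outline;        // polygon vertices as x0, y0, x1, y1, ...
            final String structureName; // e.g. "caudate nucleus", or null
            final String openCommand;   // e.g. "open lateral_view.jpg", or null
            Region(int[] outline, String structureName, String openCommand) {
                this.outline = outline;
                this.structureName = structureName;
                this.openCommand = openCommand;
            }
        }

        public static void main(String[] args) {
            String imageFile = "brain_superior_view.jpg";   // the image this frame annotates
            List<Region> regions = new ArrayList<>();
            regions.add(new Region(new int[]{40, 60, 85, 60, 85, 110, 40, 110},
                                   "caudate nucleus", null));
            regions.add(new Region(new int[]{200, 30, 260, 30, 260, 90, 200, 90},
                                   null, "open lateral_view.jpg"));
            System.out.println("Frame for " + imageFile + " with "
                               + regions.size() + " labeled regions");
        }
    }

Keeping the frame in its own file, as described above, lets the same image be reused with different annotation sets without modifying the image itself.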
