THREE DIMENSIONAL MEDICAL IMAGING
Michael W. Vannier
Daniel Geist
Donald E. Gayou
Mallinckrodt Institute of Radiology
Washington University School of Medicine
510 S. Kingshighway Blvd.
St. Louis, Mo. 63110
ABSTRACT
Surface and volumetric three-dimensional imaging methods have found application in diagnostic medical imaging and research. The acquisition, modeling, classification, and computer graphics rendering of discrete image volumes will be introduced. Applications in diagnosis (craniofacial, orthopedic, cardiovascular, and others), as well as reconstruction methods for generic serial sections, will be described. C software for 3-D reconstruction that operates on an IBM PC/AT clone is presented.
INTRODUCTION
The substantial quantity of information generated by CT and MRI scanners has presented both a problem and an opportunity for clinicians. Vast quantities of data must be processed before diagnostically relevant information can be extracted. On the other hand, steady progress in the development of data reduction and visualization software has created a class of clinician-users who desire to manipulate imaging data to provide views of the data compatible with pertinent diagnostic and therapeutic needs in the management of a particular patient. For example, a radiologist may wish to apply various image enhancement techniques to better visualize small details in a difficult case, while a surgeon may wish to interactively manipulate the 3-D data on screen for surgical planning. The radiation oncologist may utilize CT or MRI image data in a slightly different way to facilitate true 3-D radiation therapy planning.
In each of these cases, physicians with differing requirements wish to manipulate the same 3-D data. In such situations, a workstation architecture offers an ideal solution to the problem of distributed medical imaging. Networking permits multiple users to efficiently access imaging data stored at a central location. The price/performance ratio of workstations is steadily improving, while cost per seat continues to decline. Increasingly powerful chipsets built from ASICs (application-specific integrated circuits) provide graphics engines for complex image processing and graphical manipulation of large quantities of 3-D data. Recently, the distinction between engineering workstations and high-end PCs has blurred, and medical imaging software packages have become available for PCs at reasonable cost.
There are a number of potential advantages inherent in a workstation approach to medical imaging. The user may display and manipulate 3-D data in whatever manner is most appropriate for the task at hand. Direct interaction with the data allows the clinician to comprehend needed information in minimum time. The flexibility and power of this approach is useful not only in clinical settings but also for research and teaching. Finally, trends in both hardware and software technology ensure that the cost of workstations will decline as functionality and "user-friendliness" continue to improve, making these systems available to an ever-broadening group of users.
Image processing and computer graphics have become important software tools in the day-to-day practice of medicine. The use of computers in the diagnosis of internal morphological abnormalities is commonplace in radiology departments. X-rays, ultrasound, radioactive drugs (inhaled or injected), and proton magnetic resonance are the physical basis of computerized medical imaging instruments, often called "scanners". Various types of scanners now exist, enabling the extraction of data in the form of images from the human body. Image processing techniques are used to acquire this raw data, reconstruct and enhance it to give the physician maximum information, and to aid in diagnosis, treatment planning, and evaluation.
Three-dimensional (3-D) reconstruction of images has emerged as an important new tool for medical imaging(1). The original data obtained from a scanner is often a set of 2-dimensional contiguous slices (Figure 1) of the body (Figure 2). The reconstruction program takes this data set and renders a 3-D image of internal or external body surfaces. This helps physicians understand the complex 3-D anatomy present in those slices. The objective is to make the reconstruction as useful as possible for the physician. This requires the extraction of as much surface detail as possible in the 3-D image. 3-D reconstructions are used in a variety of medical applications, such as planning of craniofacial surgery(2), neurosurgery(3), and orthopedics(4).
Presently, 3-D reconstruction programs run on computed-tomographic (CT) scanner control mini-computers (DEC PDP-11 or Data General Eclipse) or on specialized computer graphics workstations. These units are more costly and not as widely available as personal computers. We have developed 3-D surface reconstruction software for CT scans in the C language on a DEC Vaxmate personal computer, an IBM PC/AT clone. We believe that an implementation of 3-D reconstruction software on personal computers will popularize the use of these methods for CT scan data and encourage the use of these reconstructions as a standard tool for physicians who need information on a patient's internal anatomy.
Many approaches to the computerized reconstruction of serial slice data sets have been described and implemented(1). These reports typically describe the implementation in an obscure or incomplete fashion and lack sufficient detail to allow the user to reproduce the work without inordinate effort. This paper presents a limited PC implementation with a detailed description of the methods used, together with availability of the source code (in the C language).
COMPUTED TOMOGRAPHY
A CT scan is a set of cross-sectional X-ray images (slices) of the body formed by computerized reconstruction. These slices can vary in thickness from 1 to 15 mm. For high-resolution studies the slices are typically 2 mm thick. The pixel values of the images typically represent the average X-ray attenuation of body matter in a corresponding 1 mm x 1 mm x 2 mm cube of the body (called a volume element, or voxel). This value is obtained by exposing the body to X-ray radiation from several directions while measuring the attenuation of the X-rays passing through the body(4). A projection in one direction is obtained with a moving X-ray source and a set of detectors. By taking all the rays that pass through one voxel in the body and using a method called filtered back-projection(5), one can accurately compute the average X-ray attenuation of that voxel. The X-ray attenuation values computed for each voxel are expressed in a scale relative to water called the Hounsfield scale(6), in honor of G. Hounsfield, who (along with A. Cormack) was awarded the Nobel prize in 1979 for inventing the CT scanner. A CT scan image data set consisting of many contiguous slices is a three-dimensional array of Hounsfield values (Figure 2). Each CT scan slice is a two-dimensional array (Figure 1).
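As a concrete illustration of the Hounsfield scale, the following short C fragment (not part of the reconstruction software described here; the attenuation value used for water is only approximate) converts a linear X-ray attenuation coefficient to Hounsfield units, so that water maps to 0 and air to about -1000:

#include <stdio.h>

/* Convert a linear attenuation coefficient (mu) to Hounsfield units,
   given the attenuation of water (mu_water) at the same beam energy. */
double to_hounsfield(double mu, double mu_water)
{
    return 1000.0 * (mu - mu_water) / mu_water;
}

int main(void)
{
    double mu_water = 0.19;   /* cm^-1, approximate value near 70 keV */
    printf("water: %6.1f HU\n", to_hounsfield(mu_water, mu_water)); /* ~0     */
    printf("air:   %6.1f HU\n", to_hounsfield(0.0, mu_water));      /* ~-1000 */
    return 0;
}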
The slices we typically use contain 256 x 256 voxels, but slice data can be 512 x 512 in size (Figure 2). The number of slices in one patient's examination typically varies between fifty and one hundred. This results in more than 5 million voxels, or 10 MB of data (each voxel occupies two bytes), for each examination. The manipulation and interpretation of such large amounts of data is a significant problem in medical imaging.
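The storage figure quoted above can be checked with a few lines of C; the slice count of 80 below is simply an assumed value within the typical range:

#include <stdio.h>

int main(void)
{
    long voxels = 256L * 256L * 80L;   /* voxels per slice times number of slices */
    long bytes  = voxels * 2L;         /* two bytes per voxel                     */
    printf("%ld voxels, %.1f MB\n", voxels, bytes / (1024.0 * 1024.0));
    return 0;
}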
THREE DIMENSIONAL RECONSTRUCTION
Interpretation of CT slice images requires special training and expertise and is typically performed by a diagnostic radiologist (an M.D. with special training in diagnostic medical imaging). Non-radiologist physicians and surgeons may have difficulty understanding the complex anatomy represented by these two-dimensional (2-D) images. It is often desirable to synthesize a more familiar 3-D image from a sequence of 2-D slices to aid in the interpretation and utilization of CT scan examinations. Methodologies for performing this synthesis have been developed over the past 10 years and are generally referred to as "3-D reconstruction from serial slices". Similar approaches have been used in electron microscopy, ultrasound, and magnetic resonance imaging.
Three-dimensional (3-D) CT scan surface reconstructions were originally developed to simplify the interpretation and improve the utility of CT scans of the face and skull. These CT scans contain morphologic information regarding the skull and soft tissue structures of the head. Unfortunately, the number, complexity and redundancy of CT slice images limits their value in patients with craniofacial disorders. The skull itself is intrinsically three dimensional, and the multiplicity of two-dimensional slice images that result from a CT scan examination is a significant obstacle for the interpreter who must form a complete understanding of complex abnormalities.
In the initial development of three-dimensional surface reconstruction methods, the emulation of a dry skull appearance from a set of CT scans was sought. Among physicians and surgeons, the dry skull is a common reference for describing and communicating the results of imaging studies. The similarity between three-dimensional CT scan surface reconstructions and a dry skull is not coincidental. This simulated dry skull appearance for 3-D CT scan reconstructions is a familiar and consistent format that enhances their utility in a clinical setting.
These methods have been applied to CT scan surface reconstruction for craniofacial disorders. Three-dimensional surface views are computed from the desired perspective, based on the original sequence of thin CT scan slices. This step typically involves prior knowledge of the orientation for the desired views. Real-time interaction and display have been impractical to date; however, because we use reconstructed images only hours or days after processing, this has not been a significant limitation. The advantages of simplicity and efficiency make processing on modest computer hardware, such as that found in CT scanner control computers, practical.
Three-dimensional reconstruction from serial CT scan slices can be performed by thresholding the data, projecting the contours, and then shading the results. Bone has a much higher CT density (X-ray attenuation) than soft tissue or air; therefore, the Hounsfield numbers of voxels containing bone are significantly higher than those of soft tissue. By discarding all image data below some predetermined value, we are left only with voxels that contain bone. In this way we can obtain an image of the bone surface. This technique is called thresholding, or level slicing(2). By using an even lower threshold value we can discard only voxels containing air, thus obtaining an image of soft tissue, or the outer surface of the patient. Thresholding between various types of soft tissue is not as effective, since CT data does not have such distinct values separating the various types of soft tissue. Other types of scanners, such as magnetic resonance imaging (MRI) scanners, allow better differentiation between soft tissues, which facilitates 3-D display of their surfaces.
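The thresholding step can be sketched in a few lines of C. The fragment below is illustrative only (the array layout, data type, and function name are assumptions rather than the actual software described here); it marks every voxel of one slice that lies at or above a chosen Hounsfield cutoff:

#define NROWS 256
#define NCOLS 256

/* Produce a binary mask for one slice: 1 where the voxel value is at or
   above the threshold (kept, e.g. bone), 0 where it falls below (discarded). */
void threshold_slice(const short slice[NROWS][NCOLS],
                     unsigned char mask[NROWS][NCOLS],
                     int threshold)
{
    for (int i = 0; i < NROWS; i++)
        for (int j = 0; j < NCOLS; j++)
            mask[i][j] = (slice[i][j] >= threshold) ? 1 : 0;
}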
After thresholding has been completed, rendering of the bone surface is achieved by shading(7). First, a viewing direction is chosen. The 3-D view in that direction is projected onto a 2-D plane called the image plane for eventual display on a CRT screen (Figure 3).
Each pixel in the reconstruction image is given a gray-scale value according to a suitable mathematical model. The surface points that are visible in the view direction are projected onto the plane of the CRT display screen. Many different shading techniques have been proposed in recent years(8-13). Some of them involve initially transforming the voxel data into a 3-D geometric database(11, 12), such as surface contours or polygons, and then creating images from that database. This transformation is an expensive procedure because of the high dimensionality of CT data sets and the complexity of the surfaces they contain. Other methods create the image directly from the voxel data(8-10, 13). Here, we compare two such methods: distance shading and gradient shading.
DISTANCE SHADING
To create distance shades, the program passes through the CT data, computing a visible surface image from a specified direction(14). Each voxel is examined individually. If the voxel lies above a given threshold, its distance is saved. The distance is the number of voxels traversed from a fixed viewing site in the chosen direction. Typically it is just the array index (in the specified direction) of the voxel where the transition occurred, measured from the viewer. The other two indices of the voxel are the indices of the corresponding pixel on the image plane. Only the first instance of a threshold transition is saved for each pixel in the image plane. In this way we create a Z-buffer(7) of the view in the desired direction. After scaling these distances for some frame buffer and projecting them onto a two-dimensional display, the image obtained is a view of the object from the chosen direction. The intensity of the projected pixel on the screen is given by the formula:
I(i,j) = Imax * (D - d) / D
where d is the distance stored for pixel (i,j), D is the maximum distance along the view direction, and Imax is the maximum display intensity.
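A minimal sketch of this procedure in C appears below. The volume layout, dimensions, and names are assumptions chosen for illustration rather than the authors' program, but the inner loop implements the Z-buffer search and the intensity formula just given:

#define NX 256     /* image plane rows                        */
#define NY 256     /* image plane columns                     */
#define NZ 80      /* voxels along the chosen view direction  */
#define IMAX 255   /* maximum display intensity               */

/* For each image-plane pixel, march along the view direction, record the
   depth of the first voxel at or above the threshold (a simple Z-buffer),
   and map that depth d to intensity with I = IMAX * (NZ - d) / NZ, so that
   nearer surfaces appear brighter. */
void distance_shade(const short vol[NX][NY][NZ],
                    unsigned char image[NX][NY],
                    int threshold)
{
    for (int i = 0; i < NX; i++) {
        for (int j = 0; j < NY; j++) {
            image[i][j] = 0;                      /* background stays dark   */
            for (int d = 0; d < NZ; d++) {
                if (vol[i][j][d] >= threshold) {
                    image[i][j] = (unsigned char)((IMAX * (NZ - d)) / NZ);
                    break;                        /* keep only the first hit */
                }
            }
        }
    }
}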