
Showing papers by "Sabry F. El-Hakim published in 1994"


Proceedings ArticleDOI
06 Oct 1994
TL;DR: In this article, the authors use a range sensor that simultaneously acquires perfectly registered range and intensity images: the range image determines the shape of the object's surfaces, while the intensity image is used to extract edges and surface features.
Abstract: Conventional vision techniques based on intensity data, such as the data produced by CCD cameras, cannot produce complete 3D measurements for object surfaces. Range sensors, such as laser scanners, do provide complete range data for visible surfaces; however, they may produce erroneous results on surface discontinuities such as edges. In most applications, measurements on all surfaces and edges are required to completely describe the geometric properties of the object, which means that intensity data alone or range data alone will not provide sufficiently complete or accurate information for these applications. The technique described in this paper uses a range sensor that simultaneously acquires perfectly registered range and intensity images. It can also integrate the range data with intensity data produced by a separate sensor. The range image is used to determine the shape of the object (surfaces), while the intensity image is used to extract edges and surface features such as targets. The two types of data are then integrated to utilize the best characteristics of each. Specifically, the objective of the integration is to provide highly accurate dimensional measurements on the edges and features. The sensor, its geometric model, the calibration procedure, the combined data approach, and some results of measurements on straight and circular edges (holes) are presented in the paper. © (1994) SPIE--The International Society for Optical Engineering.
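The abstract does not give the integration formulas, but one common way to realize this idea is to fit a surface to the (accurate) range samples near an edge, localize the edge in the (sharp) intensity image, and intersect the back-projected edge pixels with the fitted surface. A minimal sketch of that scheme, assuming a pinhole camera with intrinsics `K` and a locally planar surface (both assumptions, not the paper's actual sensor model):

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane through 3D range samples.
    Returns (unit normal n, offset d) such that n . p = d on the plane."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    n = vt[-1]                      # direction of least variance
    return n, n @ centroid

def edge_point_on_surface(pixel, K, n, d):
    """Back-project an intensity edge pixel through intrinsics K and
    intersect the viewing ray with the range-derived plane n . p = d."""
    ray = np.linalg.inv(K) @ np.array([pixel[0], pixel[1], 1.0])
    t = d / (n @ ray)               # ray starts at the camera centre (origin)
    return t * ray

# Toy example: plane z = 2 seen by a unit-focal-length camera.
K = np.eye(3)
range_samples = np.array([[0, 0, 2.0], [1, 0, 2.0], [0, 1, 2.0], [1, 1, 2.0]])
n, d = fit_plane(range_samples)
p = edge_point_on_surface((0.5, 0.5), K, n, d)
print(np.round(p, 3))  # [1. 1. 2.] -- the edge pixel lands on the z = 2 plane
```

This captures the division of labour the abstract describes: the range data supplies the surface geometry, the intensity data supplies the precise 2D edge location, and the intersection yields an accurate 3D edge point.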

27 citations


Proceedings ArticleDOI
06 Oct 1994
TL;DR: Clinical testing of a computer vision system capable of real time monitoring of the position of an oncology (cancer) patient undergoing radiation therapy has found that the system can easily and accurately detect patient motion during treatment as well as variations in patient setup from day to day.
Abstract: We have developed and clinically tested a computer vision system capable of real-time monitoring of the position of an oncology (cancer) patient undergoing radiation therapy. The system is able to report variations in patient setup from day to day, as well as patient motion during an individual treatment. The system consists of two CCD cameras mounted in the treatment room and focused on the treatment unit isocenter. The cameras are interfaced to a PC via a two-channel video board. Special targets placed on the patient surface are automatically recognized and extracted by our 3D vision software. The three coordinates of each target are determined using a triangulation algorithm. System accuracy, stability, and reproducibility were tested in the laboratory as well as in the radiation therapy room. Besides accuracy, the system must ensure the highest reliability and safety in the actual application environment. In this paper we also report on the results of clinical testing performed on a total of 23 patients with various treatment sites and techniques. The system in its present configuration is capable of measuring multiple targets placed on the patient surface during radiation therapy. In the clinical environment the system has an accuracy and repeatability of better than 0.5 mm in Cartesian space over extended periods (> 1 month). The system can measure and report patient position in less than 5 seconds. Clinically, we have found that the system can easily and accurately detect patient motion during treatment as well as variations in patient setup from day to day. A brief description of the system and a detailed analysis of its performance in the laboratory and in the clinic are presented.
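The abstract names a triangulation algorithm for recovering the 3D coordinates of each target from the two CCD cameras but does not specify which one. A standard choice for this two-view setup is linear (DLT) triangulation from calibrated projection matrices; the sketch below uses that method with made-up camera matrices and target position, purely to illustrate the geometry (the paper's actual calibration and algorithm may differ):

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one target seen by two cameras.
    P1, P2: 3x4 projection matrices; x1, x2: pixel coordinates (u, v).
    Each view contributes two rows of the homogeneous system A X = 0."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]                      # null vector = homogeneous 3D point
    return X[:3] / X[3]

# Toy setup: two unit-focal cameras, the second offset 1 unit along x.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
target = np.array([0.2, 0.3, 4.0])
x1 = target[:2] / target[2]                   # projection into camera 1
x2 = (target - [1, 0, 0])[:2] / target[2]     # projection into camera 2
recovered = triangulate(P1, P2, x1, x2)
print(np.round(recovered, 6))  # [0.2 0.3 4. ] -- the original target position
```

With noisy pixel measurements the same linear system is solved in a least-squares sense, which is consistent with the sub-0.5 mm repeatability figure the authors report for a fixed, well-calibrated two-camera rig.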

13 citations