
Showing papers on "Object detection published in 1994"


Proceedings ArticleDOI
Haitao Guo1, J.E. Odegard1, M. Lang1, Ramesh A. Gopinath1, Ivan Selesnick1, C.S. Burrus1 
13 Nov 1994
TL;DR: Wavelet processed imagery is shown to provide better detection performance for the synthetic-aperture radar (SAR) based automatic target detection/recognition (ATD/R) problem and several approaches are proposed to combine the data from different polarizations to achieve even better performance.
Abstract: The paper introduces a novel speckle reduction method based on thresholding the wavelet coefficients of the logarithmically transformed image. The method is computationally efficient and can significantly reduce the speckle while preserving the resolution of the original image. Both soft and hard thresholding schemes are studied and the results are compared. When fully polarimetric SAR images are available, the authors propose several approaches to combine the data from different polarizations to achieve even better performance. Wavelet processed imagery is shown to provide better detection performance for the synthetic-aperture radar (SAR) based automatic target detection/recognition (ATD/R) problem.
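Below is a minimal sketch of the core recipe described above (log transform, wavelet decomposition, soft thresholding of the detail coefficients, exponentiation back). The wavelet, decomposition level, and threshold rule are illustrative assumptions, not the authors' settings, and PyWavelets simply stands in for whatever implementation was used.

```python
# Hedged sketch: log-domain wavelet soft-thresholding for speckle reduction.
# Wavelet choice, level, and the universal threshold are assumptions.
import numpy as np
import pywt

def despeckle(image, wavelet="db4", level=3):
    log_img = np.log(image + 1e-6)                        # multiplicative speckle becomes additive
    coeffs = pywt.wavedec2(log_img, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745    # robust noise estimate from finest diagonal band
    thr = sigma * np.sqrt(2 * np.log(log_img.size))       # universal threshold
    denoised = [coeffs[0]] + [
        tuple(pywt.threshold(band, thr, mode="soft") for band in detail)
        for detail in coeffs[1:]
    ]
    return np.exp(pywt.waverec2(denoised, wavelet))
```

Switching mode="soft" to mode="hard" gives the hard-thresholding variant that the paper compares against.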

215 citations


Journal ArticleDOI
TL;DR: The results show that for the case considered, the binary Hough integrator improves the power budget of the radar by about 3 dB for a nonfluctuating target and 1 dB for a highly fluctuating target.
Abstract: For pt. II see ibid., vol. 30, no. 1 (Jan. 1994). This paper considers how well a Hough transform detector with binary integration improves the performance of a typical surveillance radar. For Hough transform detection, binary integration offers some advantages over noncoherent integration when multiple targets appear in range-time space or when the detector receives signals with a wide range of power. We derive expressions for P_F and P_D for a Hough transform binary integrator and apply the expressions to a typical surveillance radar. The results show that for the case considered, the binary Hough integrator improves the power budget of the radar by about 3 dB for a nonfluctuating target and 1 dB for a highly fluctuating target.
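For orientation, the standard M-of-N binary integration formulas have the form below (an illustrative reminder of what such a detector builds on, not the paper's Hough-domain derivation): with per-scan detection and false-alarm probabilities p_d and p_fa, and a detection declared when at least M of N binary decisions are positive,

```latex
P_D = \sum_{k=M}^{N} \binom{N}{k}\, p_d^{\,k}\, (1 - p_d)^{N-k},
\qquad
P_F = \sum_{k=M}^{N} \binom{N}{k}\, p_{fa}^{\,k}\, (1 - p_{fa})^{N-k}.
```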

160 citations


Proceedings ArticleDOI
B. Ulmer1
24 Oct 1994
TL;DR: An autonomous road vehicle is presented which will prevent collisions automatically and is part of PROMETHEUS (program for a European traffic with highest efficiency and unprecedented safety).
Abstract: An autonomous road vehicle is presented which will prevent collisions automatically. This safety-relevant project is part of PROMETHEUS (program for a European traffic with highest efficiency and unprecedented safety). The vehicle demonstrator VITA II (vision technology application) consists of a passenger car which demonstrates its capabilities of collision avoidance on motorways. The video cameras installed in the vehicle acquire information about the environment. The hardware consists of two clusters of parallel processors. The application cluster hosts the computer vision, planning, decision and control modules to perform driving tasks such as lane keeping at a desired speed, reduction of speed in narrow curves obeying the restrictions given by traffic signs, following the vehicles in front with adaptive distance control, computer vision based traffic sign recognition, object detection and recognition around the vehicle, and autonomous immediate collision avoidance maneuvers including overtaking. The vehicle cluster provides the basic structure to control the vehicle by computer.

114 citations


Journal ArticleDOI
TL;DR: The degree of feature co-alignment in the output of oriented filters is the cue used by human vision to perform these tasks, particularly for object detection and image segmentation.
Abstract: When bilaterally symmetric images are spatially filtered and thresholded, a subset of the resultant 'blobs' cluster around the axis of symmetry. Consequently, a quantitative measure of blob alignment can be used to code the degree of symmetry and to locate the axis of symmetry. Four alternative models were tested to examine which components of this scheme might be involved in human detection of symmetry. Two used a blob-alignment measure, operating on the output of either isotropic or oriented filters. The other two used similar filtering schemes, but measured symmetry by calculating the correlation of one half of the pattern with a reflection of the other. Simulations compared the effects of spatial jitter, the proportion of matched to unmatched dots, and the width or location of embedded symmetrical regions on the models' detection of symmetry. Only the performance of the oriented filter + blob-alignment model was consistent with human performance in all conditions. It is concluded that the degree of feature co-alignment in the output of oriented filters is the cue used by human vision to perform these tasks. The broader computational role that feature alignment detection could play in early vision is discussed, particularly for object detection and image segmentation. In this framework, symmetry detection is a consequence of a more general-purpose grouping scheme.

110 citations


Proceedings ArticleDOI
09 Nov 1994
TL;DR: An object state test model is described and a reverse engineering method for extracting object state behaviors from C++ source code is outlined that resembles the concepts of inheritance and aggregation in the object-oriented paradigm rather than the concept of state decomposition as in some existing models.
Abstract: The importance of object state testing is illustrated through a simple example. We show that certain errors in the implementation of object state behavior cannot be readily detected by conventional structural testing, functional testing, and state testing. We describe an object state test model and outline a reverse engineering method for extracting object state behaviors from C++ source code. The object state test model is a hierarchical, concurrent, communicating state machine. It resembles the concepts of inheritance and aggregation in the object-oriented paradigm rather than the concept of state decomposition as in some existing models. The reverse engineering method is based on symbolic execution to extract the states and effects of the member functions. The symbolic execution results are used to construct the state machines. The usefulness of the model and of the method is discussed in the context of object state testing in the detection of a state behavior error.

105 citations


Proceedings ArticleDOI
09 Oct 1994
TL;DR: A new approach to track complex primitives along image sequences - integrating snake-based contour tracking and region-based motion analysis, using spatio-temporal image gradients.
Abstract: This paper describes a new approach to track complex primitives along image sequences - integrating snake-based contour tracking and region-based motion analysis. First, a snake tracks the region outline and performs segmentation. Then the motion of the extracted region is estimated by a dense analysis of the apparent motion over the region, using spatio-temporal image gradients. Finally, this motion measurement is filtered to predict the region location in the next frame, and thus to guide (i.e. to initialize) the tracking snake in the next frame. Therefore, these two approaches collaborate and exchange information to overcome the limitations of each of them. The method is illustrated by experimental results on real images.

88 citations


Proceedings ArticleDOI
12 Sep 1994
TL;DR: A fast computation method of the normalized correlation for multiple rotated templates by using multiresolution eigenimages that allows the authors to accurately detect both the location and orientation of an object in a scene at a faster rate than applying conventional template matching to the rotated object.
Abstract: Presents a fast computation method of the normalized correlation for multiple rotated templates by using multiresolution eigenimages. This method allows the authors to accurately detect both the location and orientation of an object in a scene at a faster rate than applying conventional template matching to the rotated object. Since the correlation among slightly rotated templates is high, the authors first apply the Karhunen-Loeve expansion to a set of rotated templates and extract "eigenimages" from them. Each template in this set can be approximated by a linear combination of these eigenimages, which substitutes for the template in computing the normalized correlation. The number of eigenimages is smaller than the number of original templates, so the computation cost becomes small. Second, the authors employ a multiresolution image structure to reduce the number of rotated templates and the location search area. For the lower resolution image, the position and angle are coarsely obtained over a wide region. Then not only the search area for the position but also the range of rotation angles of the templates at the next layer can be limited to the neighborhood of the prior results. The authors implemented the proposed algorithm on a vision system, achieving a computation time of around 600 ms, subpixel resolution for translation, and a maximum error of 0.3 degrees over 360 degrees of rotation on a 512 by 480 gray-scale image. Experimental results are shown to demonstrate the accuracy, efficiency and feasibility of the proposed method.
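As a rough illustration of the eigenimage step, the sketch below builds rotated copies of a template, extracts principal components with an SVD, and notes how the normalized correlation can then be computed against a handful of eigenimages instead of every rotation. The rotation set, number of components, and function names are assumptions for illustration only.

```python
# Hedged sketch: eigenimage extraction from rotated templates (KL expansion via SVD).
import numpy as np
from scipy.ndimage import rotate

def eigen_templates(template, angles_deg, n_keep=8):
    stack = np.array([rotate(template, a, reshape=False) for a in angles_deg])
    flat = stack.reshape(len(angles_deg), -1)
    mean = flat.mean(axis=0)
    _, _, vt = np.linalg.svd(flat - mean, full_matrices=False)
    eig = vt[:n_keep]                    # eigenimages as row vectors
    coeffs = (flat - mean) @ eig.T       # each rotated template ~ mean + coeffs[i] @ eig
    return mean, eig, coeffs

# Correlating the scene once with the mean image and once with each eigenimage,
# then recombining with coeffs[i], approximates the correlation with every
# rotated template at a fraction of the cost.
```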

70 citations


Proceedings ArticleDOI
13 Nov 1994
TL;DR: The apparent flow field induced by the camera motion is modeled by a 2D parametric motion model and compensated for using the values of the parameters estimated by a multiresolution robust method.
Abstract: We address the problem of detecting moving objects from a moving camera. The apparent flow field induced by the camera motion is modeled by a 2D parametric motion model and compensated for using the values of the parameters estimated by a multiresolution robust method. Motion detection is achieved through a statistical regularization approach based on multiscale Markov random field (MRF) models. Particular attention has been paid to the definition of the energy function involved and to the considered observations. This method has been validated by experiments carried out on different real image sequences.
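A heavily simplified, single-scale stand-in for the pipeline described above is sketched below: a RANSAC affine fit to sparse optical flow replaces the robust multiresolution estimation of the 2D parametric model, and a fixed residual threshold replaces the multiscale MRF labeling. OpenCV is an assumed tool here, not the authors'.

```python
# Hedged sketch: compensate dominant (camera) motion with an affine model, then
# flag pixels that still move. Thresholding stands in for the MRF regularization.
import numpy as np
import cv2

def detect_moving(prev_gray, curr_gray, thresh=25):
    pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=400, qualityLevel=0.01, minDistance=8)
    nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, pts, None)
    good = status.ravel() == 1
    A, _ = cv2.estimateAffine2D(pts[good], nxt[good], method=cv2.RANSAC)  # 2D parametric motion
    h, w = prev_gray.shape
    compensated = cv2.warpAffine(prev_gray, A, (w, h))
    return cv2.absdiff(curr_gray, compensated) > thresh
```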

53 citations


Journal ArticleDOI
TL;DR: This work considers the detection of multiple classes of objects in clutter with 3-D object distortions and contrast differences present and uses a correlator to locate and recognize one object whose position is not known and to handle multiple objects in the same scene.
Abstract: We consider the detection of multiple classes of objects in clutter with 3-D object distortions and contrast differences present. We use a correlator because shift invariance is necessary to locate and recognize one object whose position is not known and to handle multiple objects in the same scene. The detection filter used is a linear combination of the real part of different Gabor filters, which we refer to as a macro Gabor filter (MGF). A new analysis of the parameters for the initial set of Gabor functions in the MGF is given, and a new neural network algorithm to refine these initial filter parameters and to determine the combination coefficients to produce the final MGF detection filter is detailed. Initial detection results are given. Use of this general neural network technique to design correlation filters for other applications seems very attractive.

52 citations


Proceedings ArticleDOI
24 Oct 1994
TL;DR: In this paper, an integrated prototype system that provides "all around" automatic visual obstacle sensing for a Daimler-Benz test car is presented. But the implementation of this system is still in its infancy.
Abstract: We currently work on the implementation of an integrated prototype system that provides "all around" automatic visual obstacle sensing for a Daimler-Benz test car. Most of the machine vision techniques being used have been developed within the European PROMETHEUS programme and a number of other research projects carried out by the authors and other affiliates of their institutions. This includes robust symmetry measuring, neural net-based adaptive object detection and tracking, and inverse-perspective stereo image matching and robust scale estimation in time.

46 citations


Proceedings ArticleDOI
19 Jul 1994
TL;DR: In this paper, an approach to real-time detection and tracking of underwater objects, using image sequences from an electrically scanned high-resolution sonar, is described that maintains a wide area of detection without significant loss of precision or speed.
Abstract: The paper describes an approach to real-time detection and tracking of underwater objects, using image sequences from an electrically scanned high-resolution sonar. The use of a high-resolution sonar provides a good estimate of the location of the objects, but strains the computers on board because of the high rate of raw data. The amount of data can be cut down by decreasing the scanned area, but this reduces the possibility of planning an optimal path. In the paper, methods are described that maintain the wide area of detection without significant loss of precision or speed. This is done by using different scanning patterns for each sample. The detection is based on a two-level threshold, making processing fast. Once detected, the objects are followed through consecutive sonar images, and by use of an observer the estimation errors on position and velocity are reduced. Intensive use of different on-board sensors also makes it possible to scan a map of a larger area of the seabed in world coordinates. The work is in collaboration with partners under MAST-C-T90-0059.
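One common reading of a fast "two-level threshold" is hysteresis thresholding: strong echoes seed detections, and weaker echoes are kept only when connected to a strong one. That interpretation, the thresholds, and the connectivity rule in the sketch below are assumptions, not necessarily the authors' exact scheme.

```python
# Hedged sketch: two-level (hysteresis) detection on a sonar intensity image.
import numpy as np
from scipy import ndimage

def two_level_detect(sonar_image, low, high):
    strong = sonar_image >= high
    weak = sonar_image >= low
    labels, _ = ndimage.label(weak)              # connected regions of weak hits
    keep = np.unique(labels[strong])             # region ids containing at least one strong hit
    return np.isin(labels, keep[keep > 0])
```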

Journal ArticleDOI
TL;DR: This work considers the detection of candidate objects in a scene containing high clutter, multiple objects in different classes, independent of aspect view, with hot, cold, bimodal, and partial object variations and with high and low contrast targets.
Abstract: We consider the detection of candidate objects (regions of interest) in a scene containing high clutter, multiple objects in different classes, independent of aspect view, with hot, cold, bimodal, and partial object variations and with high and low contrast targets. We use three different filters with each designed to produce high probability of detection (PD). We fuse the results from different outputs to reduce the probability of false alarms (PFA). All filters are realizable on an optical correlator.

Patent
10 May 1994
TL;DR: In this paper, a passive type moving object detection system is presented which includes an infrared detector, infrared sensors mounted on the infrared detector, and a detection field including a column of detection regions for monitoring a human intruder and a row of detection regions for detecting a non-human intruder.
Abstract: A passive type moving object detection system which includes an infrared detector; infrared sensors mounted on the infrared detector; a detection field including a column of detection regions for monitoring a human intruder and a row of detection regions for detecting a non-human intruder, wherein the column of detection regions has a height covering a human height; an optical system located between the infrared detector and the detection field; the infrared sensors having infrared accepting areas comprising a first section and a second section, wherein the first section optically corresponds to the column of detection regions and the second section optically corresponds to the row of detection regions, so as to receive infrared rays radiating from a moving object passing through the detection regions; and the detector including an arithmetic circuit which performs subtraction between the peak values of signals generated by the detector, and a decision circuit whereby the result of the subtraction is compared with a reference level.

Proceedings ArticleDOI
13 Nov 1994
TL;DR: A new fusion algorithm based on a non-hierarchical fusion scheme that uses a biologically inspired merging rule to combine multiple arbitrary sized sensor images into a single image without any parameter setting is presented.
Abstract: We present a new fusion algorithm based on a non-hierarchical fusion scheme. This new fusion algorithm uses a biologically inspired merging rule to combine multiple arbitrarily sized sensor images into a single image without any parameter setting. Features from each individual sensor image are not only well retained in the fused image but also enhanced properly. Unlike hierarchical fusion algorithms, the new algorithm does not suffer from frequency aliasing, and no noise is added to the fused image. The new algorithm is applicable to many different sensor images such as infrared (IR), TV, laser radar (LADAR) and synthetic aperture radar (SAR). It has been extensively and successfully tested on a variety of sensor data.

Patent
14 Feb 1994
TL;DR: A situation information display device for vehicles comprises a vehicle ambient object detection means 1 for detecting objects around a vehicle; a vehicle speed detection means 5; a driver's control detection means 6; a vehicle behavior detection means 7; an object recognition means 2 for recognizing the position relative to each object and the relative speed of each object; a vehicle behavior anticipation means 3 for anticipating the behavior of the vehicle; a risk recognition means 8 for recognizing risk based on information from the object recognition means and the vehicle behavior anticipation means; a picture originating means 9 for originating pictures; and a display means 10 for displaying the picture information.
Abstract: PURPOSE: To enable a driver to easily see the circumstances his vehicle is in, using a situation information display device for vehicles. CONSTITUTION: A situation information display device for vehicles comprises a vehicle ambient object detection means 1 for detecting objects around a vehicle; a vehicle speed detection means 5; a driver's control detection means 6; a vehicle behavior detection means 7; an object recognition means 2 for recognizing the position relative to each object and the relative speed of each object; a vehicle behavior anticipation means 3 for anticipating the behavior of the vehicle; a risk recognition means 8 for recognizing risk based on information from the object recognition means 2 and from the vehicle behavior anticipation means 7; a picture originating means 9 for originating pictures by setting the perspective display level of a fundamental picture showing the circumstances the vehicle is in and by setting a typical pictorial display for each object while adding risk accentuating information; and a display means 10 for displaying the picture information transmitted from the picture originating means 9.

Patent
23 Apr 1994
TL;DR: In this paper, an object detection system which detects the facial area of the driver and compares it, in an image analysis, with stored facial-area image information is presented. But this system requires the driver to be in position in the driver's seat, which may be automatically moved to a standard position.
Abstract: A device for protecting a motor vehicle against use by third parties provides an object detection system which detects the facial area of the driver and compares it, in an image analysis 7, with stored facial-area image information. An image recording camera 5 is positioned in the vehicle in such a way that it automatically takes an image of the facial area when the driver is in position in the driver's seat which may be automatically moved 4 to a standard position. The stored image information may come from a key 2. The personalized detection of driving authorization which is achieved in this way is very secure against manipulation and convenient to use.

Proceedings ArticleDOI
13 Nov 1994
TL;DR: Instead of applying the pointwise comparison procedure in terms of the Hausdorff distance, the detection of the interesting points in image regions is introduced to guide the search for the best fit between two image sets.
Abstract: This paper describes an approach to object matching in aerial images using the Hausdorff distance. Instead of applying a pointwise comparison procedure in terms of the Hausdorff distance, the detection of interesting points in image regions is introduced to guide the search for the best fit between two image sets. Such a guided matching scheme based on the Hausdorff distance improves on the conventional blind comparison procedure and speeds up the operation. It can be further implemented in parallel.
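For reference, a brute-force version of the quantity this matching is built on is sketched below; restricting the point sets to detected interest points, as the paper proposes, simply shrinks A and B before this computation. The function names are illustrative.

```python
# Hedged sketch: directed and symmetric Hausdorff distance between point sets.
import numpy as np

def directed_hausdorff(A, B):
    # For each point in A, find its nearest neighbour in B; return the worst case.
    d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2)   # |A| x |B| distance matrix
    return d.min(axis=1).max()

def hausdorff(A, B):
    return max(directed_hausdorff(A, B), directed_hausdorff(B, A))
```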

Proceedings ArticleDOI
27 Jun 1994
TL;DR: A system that incorporates color image processing and neural networks to detect and locate highway warning signs in natural roadway images could reduce the need for redundant or oversized signs by assisting drivers in acquiring roadway information.
Abstract: This study reports on the development of a system that incorporates color image processing and neural networks to detect and locate highway warning signs in natural roadway images. Such a system could reduce the need for redundant or oversized signs by assisting drivers in acquiring roadway information. Transportation agencies could use such a system as the first step in an automated highway sign inventory system. Currently, a human operator must watch hours of highway videos to complete this inventory. While only warning signs were considered in this study, the procedure was designed to be easily adapted to all highway signs. The basic approach is to digitize a roadway image and segment this image, using a back-propagation neural network, into eight colors that are important to highway sign detection. Next, the system scans the image for color regions that may possibly represent highway warning signs. Upon finding possible warning sign regions, these regions are further analyzed by a second back-propagation neural network to determine if their shape corresponds to that of a highway warning sign.

Proceedings ArticleDOI
05 Sep 1994
TL;DR: In this article, a real-time change detection method for multiple object localization from real-world image sequences is presented, where limits, quality and time performances of the described pixel-oriented method are compared with other existing techniques.
Abstract: The aim of this paper is to show a real-time change detection method for multiple object localization from real-world image sequences. The limits, quality and time performance of the described pixel-oriented method are outlined by comparing it with other existing techniques. Results are presented by applying the described technique in the architecture of a real-time surveillance system for visual control of an unattended level-crossing. The localization of detected objects is also addressed and tested on real scenes where illumination is not assumed to be constant.
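The essence of a pixel-oriented change detector of this kind can be sketched in a few lines: difference the current frame against a slowly adapted background and threshold. The adaptation rate and threshold below are illustrative, and the paper's handling of varying illumination is not reproduced.

```python
# Hedged sketch: running-average background subtraction for change detection.
import numpy as np

def update_and_detect(frame, background, alpha=0.05, thresh=20):
    change = np.abs(frame.astype(np.int16) - background.astype(np.int16)) > thresh
    background = (1 - alpha) * background + alpha * frame     # slow background adaptation
    return change, background
```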

Proceedings ArticleDOI
05 Dec 1994
TL;DR: The original contribution resides in the use of this new geometrical-topological technique, size theory, confirming its suitability for the recognition of natural objects.
Abstract: Leukocytes are divided into classes. Their automatic classification is accomplished by means of size functions, based on two measuring functions defined expressly to take into account the specific morphological features of the cell classes. A successful experiment on 45 cells is reported. The original contribution resides in the use of this new geometrical-topological technique, size theory, confirming its suitability for the recognition of natural objects.

Proceedings ArticleDOI
13 Nov 1994
TL;DR: A two stage active vision system for tracking of a moving object which is detected in an overview image of the scene; a close-up view is then taken by changing the frame grabber's parameters and by a positional change of the camera mounted on a robot's hand.
Abstract: In this paper we describe a two stage active vision system for tracking of a moving object which is detected in an overview image of the scene; a close-up view is then taken by changing the frame grabber's parameters and by a positional change of the camera mounted on a robot's hand. With a combination of several simple and fast working vision modules, a robust system for object tracking is constructed. The main principle is the use of two stages for object tracking: one for the detection of motion and one for the tracking itself. Errors in both stages can be detected in real time; then, the system switches back from the tracking to the motion detection stage. Standard UNIX interprocess communication mechanisms are used for the communication between control and vision modules. Object-oriented programming hides hardware details.

Patent
13 Jan 1994
TL;DR: In this article, a radar system which detects the presence of objects in the proximity of a movable vehicle includes a signal source which generates object detection signals, a first antenna which transmits the object detection signal and receives the signal as reflected signals reflected from an object in the vicinity of the moving vehicle, and a control unit is responsive to the reception of the reflected signals for providing an indication of the detection of the object.
Abstract: A radar system which detects the presence of objects in the proximity of a movable vehicle includes a signal source which generates object detection signals, a first antenna which transmits the object detection signals and receives the object detection signals as reflected signals reflected from an object in the proximity of the movable vehicle. The first antenna is further operable for receiving non-reflected test signals. A second antenna is provided for transmitting test signals which correspond to a delayed portion of the object detection signal generated by the signal source. A control unit is responsive to the reception of the reflected signals for providing an indication of the detection of the object, and is responsive to the reception of the test signals for providing an indication of the operability of the system.

Proceedings ArticleDOI
21 Jun 1994
TL;DR: This paper presents a new representation called "hierarchical Gabor filters" and associated novel local measures which are used to detect potential objects of interest in images and preserves the computational efficiency of separable filters while providing the distinctiveness required to discriminate objects from clutter.
Abstract: This paper presents a new representation called "hierarchical Gabor filters" and associated novel local measures which are used to detect potential objects of interest in images. The "first stage" of the approach uses a wavelet set of wide-bandwidth separable Gabor filters to extract local measures from an image. The "second stage" makes certain spatial groupings explicit by creating small-bandwidth, non-separable Gabor filters that are tuned to elongated contours or periodic patterns. The non-separable filter responses are obtained from a weighted combination of the separable basis filters, which preserves the computational efficiency of separable filters while providing the distinctiveness required to discriminate objects from clutter. This technique is demonstrated on images obtained from a forward looking infrared (FLIR) sensor.
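For concreteness, the sketch below constructs a single oriented Gabor kernel of the kind a first-stage filter bank convolves with the image; the parameters are illustrative, and the paper's recombination of separable basis responses into non-separable, contour-tuned filters is not shown.

```python
# Hedged sketch: one Gabor kernel (Gaussian envelope times an oriented cosine carrier).
import numpy as np

def gabor_kernel(size, wavelength, theta, sigma):
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)        # rotate coordinates by theta
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr**2 + yr**2) / (2 * sigma**2))
    return envelope * np.cos(2 * np.pi * xr / wavelength)
```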

Book ChapterDOI
01 Jan 1994
TL;DR: The Spacewatch program at the University of Arizona has pioneered automatic methods of detecting Near Earth Objects and the automatic streak detection is able to locate streaks whose peak signal is above ~4σ and whose length is longer than about 10 pixels.
Abstract: The Spacewatch program at the University of Arizona has pioneered automatic methods of detecting Near Earth Objects. Our software presently includes three modes of object detection: automatic motion identification; automatic streak identification; and visual streak identification. For automatic motion detection at sidereal drift rates, the 4σ detection threshold is near magnitude V = 20.9 for nearly stellar asteroid images. The automatic streak detection is able to locate streaks whose peak signal is above ~4σ and whose length is longer than about 10 pixels. Some visually detected streaks have had peak signals near ~1σ.
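A toy version of a k-sigma detection criterion of the kind behind the quoted thresholds is sketched below; the Spacewatch pipeline's actual motion matching and streak fitting are not reproduced, and the robust noise estimate is an assumption.

```python
# Hedged sketch: flag pixels more than k sigma above a robust sky estimate.
import numpy as np
from scipy import ndimage

def detect_above_sigma(frame, k=4.0):
    sky = np.median(frame)
    sigma = 1.4826 * np.median(np.abs(frame - sky))   # MAD-based noise estimate
    labels, _ = ndimage.label(frame > sky + k * sigma)
    return ndimage.find_objects(labels)               # bounding slices of candidate detections
```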

Proceedings ArticleDOI
12 Sep 1994
TL;DR: A simple pulse-echo ranging system whose receiver signal processing is completely digital allows one to benefit from the advantages of correlation-based detection methods for accurate ranging of multiple objects, without compromising the most prominent features of pulse-echo sonar systems, namely their relatively low cost and simplicity of operation.
Abstract: Ultrasonic pulse-echo ranging systems based on threshold detection methods are popular devices in the robotic field, as a means for determining the proximity of objects in a cost-effective manner. Despite their widespread use, serious concerns are often raised regarding the accuracy of the sensed data, particularly when the return signals are received at low signal-to-noise ratios. In principle, correlation-based detection methods provide better performance for their outstanding capability of detecting and recovering weak signals buried in noise, so as to permit ranging at longer distances, or, the distance being the same, at higher frequencies, with resulting improvements in spatial resolution in spite of the increased attenuation. In this paper the authors describe a simple pulse-echo ranging system whose receiver signal processing is completely digital: the use of appropriate sampling techniques and signal processing algorithms allows one to benefit from the advantages of correlation-based detection methods for accurate ranging of multiple objects, without compromising the most prominent features of pulse-echo sonar systems, namely their relatively low cost and simplicity of operation. The experimental results presented concern the use of relatively high-frequency ultrasonic transducers.
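The core of the correlation-based alternative can be sketched as follows: cross-correlate the digitized receiver signal with a replica of the transmitted pulse and convert the lag of the correlation peak into a range. The sampling rate, the in-air sound speed, and the function name are assumptions for illustration.

```python
# Hedged sketch: matched-filter (cross-correlation) time-of-flight ranging.
import numpy as np

def estimate_range(received, pulse, fs=1_000_000, c=343.0):
    corr = np.correlate(received, pulse, mode="valid")
    lag = int(np.argmax(np.abs(corr)))      # sample offset of the best match
    tof = lag / fs                          # round-trip time of flight in seconds
    return c * tof / 2.0                    # halve for the out-and-back path
```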

Journal ArticleDOI
TL;DR: Several new optical morphological operations for use in the above detection problem and in other general low-level image-processing applications are described, and several examples of their use are provided.
Abstract: We consider the problem of detecting multiple distorted objects in an input scene with clutter. The input scenes contain different types of background clutter and multiple objects in different classes, with different object aspect views, different object representations, hot/cold/bimodal/partial object variations, and high/low contrast object variations. Several new optical morphological operations for use in the above detection problem and in other general low-level image-processing applications are described, and several examples of their use are provided. For difficult detection problems in which high detection rates and low false-alarm rates are required we combine morphological operations and optical wavelet transforms to reduce clutter and improve object detection. The details of this set of filters and initial test results are given. The most computationally demanding operations required in all cases are realizable on an optical correlator.
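As a digital stand-in for one of the low-level morphological operations discussed, the sketch below uses a grayscale top-hat (image minus its opening) to suppress slowly varying clutter while keeping compact bright objects; the structuring-element size and threshold are illustrative, and the optical realization is of course not captured here.

```python
# Hedged sketch: top-hat clutter suppression followed by thresholding.
import numpy as np
from scipy import ndimage

def tophat_detect(image, se_size=15, thresh=30):
    opened = ndimage.grey_opening(image, size=(se_size, se_size))
    tophat = image.astype(np.int16) - opened.astype(np.int16)   # keep small bright structures
    return tophat > thresh
```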

Proceedings ArticleDOI
12 Sep 1994
TL;DR: A new approach is presented for online initialization of the snake on the first images of the given sequence, and it is shown that the method of snakes is suited to real-time motion tracking.
Abstract: In this contribution we describe steps towards the implementation of an active robot vision system. In a sequence of images taken by a camera mounted on the hand of a robot, we detect, track, and estimate the position and orientation (pose) of a three-dimensional moving object. The extraction of the region of interest is done automatically by a motion tracking step. For learning 3-D objects from two-dimensional views and estimating the object's pose, a uniform statistical method is presented which is based on the expectation-maximization (EM) algorithm. An explicit matching between features of several views is not necessary. The acquisition of the training sequence required for the statistical learning process needs the correlation between the image of an object and its pose; this is performed automatically by the robot. The robot's camera parameters are determined by a hand/eye calibration and a subsequent computation of the camera position using the robot position. During the motion estimation stage the moving object is tracked using active, elastic contours (snakes). We introduce a new approach for online initialization of the snake on the first images of the given sequence, and show that the method of snakes is suited to real-time motion tracking.

Proceedings ArticleDOI
Lin1, Zheng1, Chellappa1, Davis1, Zhang1 
21 Jun 1994
TL;DR: A site model supported image monitoring system which utilizes image understanding techniques driven by an underlying site model is presented and the results of object detection are used for monitoring changes.
Abstract: Image monitoring, the process of locating and identifying significant changes or new activities, is one of the most important imagery exploitation tasks. A site model supported image monitoring system which utilizes image understanding techniques driven by an underlying site model is presented. In our approach, we first register the image to be monitored to an existing site model, which is constructed using the RADIUS Common Development Environment; the regions of interest are then delineated based on site information, camera acquisition parameters, and goals of the image analyst; object extraction is then done using constraints on size, shape, orientation, and shadow of the target object derived from known information about image resolution, 3-D shape of the object, camera viewing and illuminant directions. The results of object detection are used for monitoring changes.

Journal ArticleDOI
TL;DR: In this article, the authors describe weighting techniques used for the optimal coaddition of charge coupled device (CCD) frames with differing characteristics, and derive formulas for object detection via matched filter, object detection identical to DAOFIND, aperture photometry, and ALLSTAR profile-fitting photometry.
Abstract: In this paper we describe weighting techniques used for the optimal coaddition of charge coupled device (CCD) frames with differing characteristics. Optimal means maximum signal to noise (S/N) for stellar objects. We derive formulas for four applications: (1) object detection via matched filter, (2) object detection identical to DAOFIND, (3) aperture photometry, and (4) ALLSTAR profile-fitting photometry. We have included examples involving 21 frames for which either the sky brightness or image resolution varied by a factor of 3. The gains in S/N were modest for most of the examples, except for DAOFIND detection with varying image resolution, which exhibited a substantial S/N increase. Even though the only consideration was maximizing S/N, the image resolution was seen to improve for most of the variable resolution examples. Also discussed are empirical fits for the weighting and the availability of the program, WEIGHT, used to generate the weighting for the individual frames. Finally, we include appendices describing the effects of clipping algorithms and a scheme for star/galaxy and cosmic-ray/star discrimination.
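A minimal sketch of the simplest case (background-limited frames with comparable seeing) is given below: each frame is brought to a common flux level and combined with inverse-variance weights. The paper's fuller treatment, which also folds in image resolution and the DAOFIND/ALLSTAR cases, is not reproduced, and the variable names are illustrative.

```python
# Hedged sketch: S/N-weighted coaddition of CCD frames (equal-seeing, sky-limited case).
import numpy as np

def coadd(frames, flux_scales, sky_variances):
    frames = np.asarray(frames, dtype=float)          # N x H x W stack
    s = np.asarray(flux_scales, dtype=float)          # relative throughput of each frame
    var = np.asarray(sky_variances, dtype=float)      # per-pixel sky variance of each frame
    scaled = frames / s[:, None, None]                # scale every frame to a common flux level
    weights = s**2 / var                              # inverse variance of the scaled frames
    return np.average(scaled, axis=0, weights=weights)
```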

Patent
30 Nov 1994
TL;DR: In this article, a position detection sensor using triangulation is proposed, with the aim of constructing the optical system on the photodetecting side simply.
Abstract: PURPOSE: To construct the optical system on the photodetecting side simply in a position detection sensor using triangulation. CONSTITUTION: Light is radiated from a projection element 2 to irradiate an object detection area with a projection beam 4. The light reflected from an object to be detected is received by a position detection sensor 7. When a mask 20 is arranged directly in front of the position detection sensor 7, a shadow of the mask 20 is formed on the position detector 7, and a position signal for the object to be detected is output based on the position of the center of the shadow.