Showing papers on "Object detection published in 1990"


Patent
10 Apr 1990
TL;DR: In this paper, an object detection module (50) is mounted on a vehicle for detecting the presence of an object (68) within a monitored zone of space (56) adjacent the vehicle (54).
Abstract: A collision avoidance system includes an object detection module (50) mounted on a vehicle (54) for detecting the presence of an object (68) within a monitored zone of space (56) adjacent the vehicle (54). The detection module (50) emits a plurality of beams (66) of infrared energy and detects the reflection of such energy from objects (68) within the zone. The detection module (50) is typically activated by the host vehicle's electrical turn signal. The detection module (50) includes a plurality of associated pairs of light emitting diodes (134) and photosensitive detectors (128) for sensing the reflected light.

110 citations


Book ChapterDOI
23 Apr 1990
TL;DR: In this article, the displacement field is defined as the transformation of the image points between successive time points for an observer translating relative to a planar road; this planar-road constraint can be exploited to give direct solutions to obstacle detection.
Abstract: When a visual observer moves forward, the projections of the objects in the scene will move over the visual image. If an object extends vertically from the ground, its image will move differently from the immediate background. This difference is called motion parallax [1, 2]. Much work in automatic visual navigation and obstacle detection has been concerned with computing motion fields or more or less complete 3-D information about the scene [3–5]. These approaches, in general, assume a very unconstrained environment and motion. If the environment is constrained, for example, motion occurs on a planar road, then this information can be exploited to give more direct solutions to, for example, obstacle detection [6]. Figure 6.1 shows superposed the images from two successive times for an observer translating relative to a planar road. The arrows show the displacement field, that is, the transformation of the image points between the successive time points.
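
As a rough illustration of the planar-road constraint discussed above, the Python sketch below predicts where each ground-plane point should reappear under a known camera translation (via a plane-induced homography) and flags points whose measured displacement disagrees, i.e. candidate obstacles that stick out of the road. The calibration matrix, plane parameters, tolerance, and function names are illustrative assumptions, not details taken from the chapter.

```python
# Hypothetical sketch: flag image points whose measured displacement deviates
# from the displacement predicted for the ground plane (motion parallax).
# Assumes a calibrated camera K, pure translation t, and a known ground plane
# (normal n, distance d); all names and numbers are illustrative.
import numpy as np

def ground_plane_homography(K, t, n, d):
    """Homography mapping image points at time 1 to time 2 for points on the plane."""
    return K @ (np.eye(3) + np.outer(t, n) / d) @ np.linalg.inv(K)

def flag_obstacles(pts1, pts2, K, t, n, d, tol=2.0):
    """True where the observed displacement is inconsistent with the road plane."""
    H = ground_plane_homography(K, t, n, d)
    p1h = np.hstack([pts1, np.ones((len(pts1), 1))])   # homogeneous image points
    pred = (H @ p1h.T).T
    pred = pred[:, :2] / pred[:, 2:3]                  # where road points should reappear
    residual = np.linalg.norm(pts2 - pred, axis=1)
    return residual > tol                              # large parallax: sticks out of the road

# Synthetic example (numbers are made up): the second point moves inconsistently
K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
t = np.array([0.0, 0.0, 1.0])                          # forward translation
n, d = np.array([0.0, -1.0, 0.0]), 1.5                 # road plane parameters
pts1 = np.array([[300.0, 300.0], [350.0, 260.0]])
pts2 = np.array([[299.0, 303.0], [380.0, 250.0]])
print(flag_obstacles(pts1, pts2, K, t, n, d))          # [False  True]
```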

95 citations


Journal ArticleDOI
13 May 1990
TL;DR: An algorithm is developed for identifying the contact points of a multi-fingered robot hand using joint compliance, and sufficient conditions are derived to demonstrate the robustness of the SPC technique to variations in the coefficient of friction.
Abstract: Determining where the fingers of a multi-fingered robot hand touch an object of unknown shape plays an important role in achieving a stable grasp. This paper focuses on a scheme for identifying such contact points. Instead of mounting a distributed tactile sensor all over the finger links, we propose an active sensing approach using joint compliance. The proposed scheme is composed of two phases. In the first phase, the approach phase, each finger is extended to its most distal position as it approaches the object. This phase continues until any part of a finger link contacts the object. During the second phase, the detection phase, each finger's posture is strategically changed by sliding the finger over the object while maintaining contact between the object and the finger. Using two selected postures during the detection phase, we can compute an intersecting point that gives an approximate contact point. This paper develops the algorithm and provides results from an experimental implementation of the scheme on a two-fingered robot hand running a joint level compliance controller. The process of changing the posture of the finger while maintaining object-finger contact is called self posture changeability (SPC). The paper also develops sufficient conditions to demonstrate the robustness of the SPC technique to variations in the coefficient of friction.
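
A minimal geometric sketch of the contact-point computation described in the abstract, reduced to 2-D: the line carried by the finger link is recorded in two postures that both maintain contact with the object, and the intersection of the two lines approximates the contact point. The line representation and all names are assumptions for illustration, not the paper's formulation.

```python
# Illustrative 2-D version of the SPC contact-point estimate: the finger link is
# modelled as a line (a point p on the link and a direction u) in two postures
# that both touch the object; the intersection of the two lines approximates
# the contact point. Names and the planar simplification are assumptions.
import numpy as np

def link_line(p, u):
    """A finger-link line given by a point on the link and its direction."""
    u = np.asarray(u, dtype=float)
    return np.asarray(p, dtype=float), u / np.linalg.norm(u)

def intersect(line_a, line_b):
    """Intersect two lines p + s*u and q + t*v; returns their common point."""
    (p, u), (q, v) = line_a, line_b
    s, _ = np.linalg.solve(np.column_stack([u, -v]), q - p)
    return p + s * u

# Two postures of the same link, both maintaining contact with the object
posture_1 = link_line(p=[0.0, 0.0], u=[1.0, 0.5])
posture_2 = link_line(p=[0.0, 1.0], u=[1.0, -0.5])
print(intersect(posture_1, posture_2))   # approximate contact point, here (1.0, 0.5)
```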

89 citations


Journal ArticleDOI
TL;DR: Saturation and blooming problems associated with present consumer-use video cameras, caused by the narrow dynamic range of the single-CCD camera system and the lack of signal processing capability, are solved.
Abstract: A new video camera control system developed for extending the dynamic range of present single-chip CCD (charge coupled device) cameras whose dynamic range is inherently limited is discussed. This is accomplished by controlling the dynamic range using a signal which discriminates the contrast of the object, by compensating the white balance by detecting achromatic parts of the object out of video signals, and by reducing pseudocolor effects produced at regions where high-frequency components are abundant. Saturation and blooming problems associated with present consumer-use video cameras, caused by the narrow dynamic range of the single CCD camera system and the lack of signal processing capability, are solved. The approach adopted prevents excessive iris closing under back lighting and saturation of a locally bright area of images produced by excessive forward lighting.
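
The snippet below sketches only the white-balance step mentioned in the abstract, using a generic software stand-in rather than the camera's actual control circuitry: pixels whose color channels are nearly equal are treated as achromatic references, and per-channel gains are derived so that those pixels come out gray. Thresholds and names are assumptions.

```python
# Illustrative sketch (not the camera's actual circuitry): estimate white-balance
# gains from the near-achromatic pixels of an RGB frame and rescale the image so
# that those pixels come out gray. Thresholds and names are assumptions.
import numpy as np

def white_balance_from_achromatic(img, sat_thresh=0.08, min_pixels=100):
    """img: float RGB array in [0, 1]. Returns a gain-corrected copy."""
    maxc, minc = img.max(axis=2), img.min(axis=2)
    saturation = (maxc - minc) / (maxc + 1e-6)
    achromatic = saturation < sat_thresh              # candidate gray reference pixels
    if achromatic.sum() < min_pixels:                 # too few references: leave image alone
        return img
    ref = img[achromatic].mean(axis=0)                # average colour of the references
    gains = ref.mean() / (ref + 1e-6)                 # push that colour toward neutral gray
    return np.clip(img * gains, 0.0, 1.0)

# Example: a synthetic frame with a slight colour cast
rng = np.random.default_rng(0)
frame = np.clip(rng.random((120, 160, 3)) * np.array([0.90, 0.95, 1.05]), 0.0, 1.0)
balanced = white_balance_from_achromatic(frame)
print(frame.reshape(-1, 3).mean(axis=0), balanced.reshape(-1, 3).mean(axis=0))
```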

26 citations


Journal ArticleDOI
TL;DR: The computer-vision problem of determining object orientation from the consensus of orientations of individual symbols or marks is examined, and the optimal Bayesian detector is derived and found to have the highly parallel structure of a feedforward network.
Abstract: The computer-vision problem of determining object orientation from the consensus of orientations of individual symbols or marks is examined. The problem arises in automatic inspection where orientation can be detected from printed text but there is no knowledge of the content of the text. This is a high-dimensional classification problem, and there is a requirement for highly accurate detection and rapid processing. The typical multilayer threshold networks are seen as unsuitable, and the optimal Bayesian detector is derived and found to have the highly parallel structure of a feedforward network. The learning vector quantization neural network method of T. Kohonen (1988) is also applied. Experimental results, comparisons, and a complete implementation are described.
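
To make the consensus idea concrete, the sketch below takes the orientations reported by individual marks and combines them with a circular mean over the 180-degree ambiguity of printed text. This is a simple stand-in for illustration; it is not the optimal Bayesian detector or the learning vector quantization network evaluated in the paper.

```python
# Simple consensus-of-orientations stand-in: each detected mark contributes an
# orientation estimate, and the object orientation is taken as the circular mean
# over the 180-degree ambiguity of printed text. (The paper derives an optimal
# Bayesian detector instead; this only illustrates the consensus idea.)
import numpy as np

def consensus_orientation(mark_angles_deg, period=180.0):
    """Circular mean of angles that are only defined modulo `period` degrees."""
    theta = np.deg2rad(np.asarray(mark_angles_deg, dtype=float) * (360.0 / period))
    mean = np.arctan2(np.sin(theta).mean(), np.cos(theta).mean())
    return (np.rad2deg(mean) * (period / 360.0)) % period

print(consensus_orientation([88, 91, 90, 87, 2]))   # close to 88: the outlier mark barely shifts the vote
```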

24 citations


Proceedings ArticleDOI
03 Jul 1990
TL;DR: Dynamic vision may be utilized for detecting and classifying objects that could be obstacles for a mobile robot; if conditions are fairly favorable, obstacles are reliably recognized and false alarms are rejected.
Abstract: Dynamic vision may be utilized for detecting and classifying objects that could be obstacles for a mobile robot. Methods for accomplishing this are introduced. They have been implemented on a multiprocessor vision system and tested in outdoor environments. If conditions are fairly favorable, obstacles are reliably recognized and false alarms (e.g. caused by shadows) are rejected. Among the main problems which have not yet been completely solved are the tracking of the road at a great distance, the recognition of the contours of an object when many features are visible on the object's surface, and the separation of the object from the background.

24 citations


Proceedings ArticleDOI
16 Jun 1990
TL;DR: The architecture of an image analysis system called MESSIE (Multi Expert System for Scene Interpretation and Evaluation) is presented, which reasons from geometric models which are represented by four concepts (geometry, radiometry, context, and function).
Abstract: The architecture of an image analysis system called MESSIE (Multi Expert System for Scene Interpretation and Evaluation) is presented. This system reasons from geometric models which are represented by four concepts (geometry, radiometry, context, and function). The aim is to find the class instances from the generic models which are present in the scene. The necessity of having a hierarchic and opportunistic method of solving the problem of interpretation is shown. In a first step, MESSIE tries to detect salient objects. Then, using characteristics of salient objects and knowledge about the context of objects, MESSIE tries to confirm the object hypothesis and to infer new objects in the scene. The domain used to develop MESSIE is aerial imagery interpretation. Results on the detection of roads and buildings in suburban images are given.

14 citations


Journal ArticleDOI
TL;DR: An unmanned intrusion detection system for power stations or substations that detects trespassers in real time, both indoors and outdoors, and is based on image processing is given.
Abstract: A description is given of an unmanned intrusion detection system for power stations or substations that detects trespassers in real time, both indoors and outdoors, and is based on image processing. The main part of the system consists of a video camera, an image processor, and a microprocessor. Images are input from the video camera to the image processor every 1/60 s, and objects that enter the field of the image are detected by measuring the changes of the intensity level in selected sensor areas. The shapes and locations of active sensor areas can be determined based on the detection application, using techniques tailored to the application. Noise removal filters prevent spurious detections. High detection sensitivity is guaranteed under any environmental condition. The system configuration and the detection method are described. Experimental results under a range of environmental conditions are given.
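
A compact sketch of the sensor-area idea described above: the current frame is compared with the previous one only inside selected rectangular sensor areas, and an area triggers when its mean absolute intensity change exceeds a threshold. The rectangle representation, threshold, and names are assumptions; the noise-removal filtering of the real system is omitted.

```python
# Illustrative sketch of sensor-area intrusion detection: compare the current
# frame with the previous one only inside selected rectangular sensor areas and
# report the areas whose mean absolute intensity change exceeds a threshold.
# The rectangles, threshold, and names are assumptions for illustration.
import numpy as np

def detect_in_sensor_areas(prev_frame, frame, areas, thresh=12.0):
    """areas: list of (row0, row1, col0, col1). Returns indices of triggered areas."""
    diff = np.abs(frame.astype(np.int16) - prev_frame.astype(np.int16))
    triggered = []
    for i, (r0, r1, c0, c1) in enumerate(areas):
        if diff[r0:r1, c0:c1].mean() > thresh:        # simple per-area change measure
            triggered.append(i)
    return triggered

# Example: an "intruder" brightens part of the second sensor area only
prev = np.full((240, 320), 60, dtype=np.uint8)
cur = prev.copy()
cur[100:140, 200:240] = 180
areas = [(10, 60, 10, 60), (90, 150, 190, 250)]
print(detect_in_sensor_areas(prev, cur, areas))       # [1]
```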

13 citations


Proceedings ArticleDOI
04 Dec 1990
TL;DR: A simple and reliable method is presented for detecting concave and convex discontinuous points in plane curves that can be applied to detect feature points in 2-D binary images or used in detecting surface boundaries in range images.
Abstract: A simple and reliable method is presented for detecting concave and convex discontinuous points in plane curves. The method can be applied to detect feature points in 2-D binary images or it can be used in detecting surface boundaries in range images. The author also shows that the impulse response of the detection method is similar to that of the second derivative of a Gaussian operator. Some experimental results are given with 2-D images and range images.
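
The sketch below is a generic stand-in for the detector described above (whose exact operator is characterized only by its impulse response): signed curvature is estimated along a closed plane curve with Gaussian-derivative filtering, and strong positive responses are reported as convex points, strong negative ones as concave points. Kernel size, threshold, and the test curve are assumptions.

```python
# Generic stand-in for the discontinuity detector: estimate signed curvature
# along a closed plane curve with Gaussian-derivative filtering and report
# strong positive responses as convex points and strong negative responses as
# concave points. Kernel size, threshold, and the test curve are assumptions.
import numpy as np

def gaussian_derivative_kernels(sigma):
    r = int(4 * sigma)
    x = np.arange(-r, r + 1, dtype=float)
    g = np.exp(-x**2 / (2 * sigma**2))
    g /= g.sum()
    return -x / sigma**2 * g, (x**2 - sigma**2) / sigma**4 * g   # G', G''

def circular_filter(signal, kernel):
    r = len(kernel) // 2
    padded = np.r_[signal[-r:], signal, signal[:r]]              # wrap: the curve is closed
    return np.convolve(padded, kernel, mode="same")[r:-r]

def classify_discontinuities(xs, ys, sigma=2.0, thresh=0.3):
    g1, g2 = gaussian_derivative_kernels(sigma)
    dx, dy = circular_filter(xs, g1), circular_filter(ys, g1)
    ddx, ddy = circular_filter(xs, g2), circular_filter(ys, g2)
    kappa = (dx * ddy - dy * ddx) / (dx**2 + dy**2 + 1e-9) ** 1.5   # signed curvature
    return np.where(kappa > thresh)[0], np.where(kappa < -thresh)[0]

# Example: a counter-clockwise unit square, sampled 50 points per side
t = np.linspace(0.0, 1.0, 50, endpoint=False)
zeros, ones = np.zeros_like(t), np.ones_like(t)
square = np.concatenate([np.stack([t, zeros], axis=1),            # bottom, left to right
                         np.stack([ones, t], axis=1),             # right, upward
                         np.stack([1.0 - t, ones], axis=1),       # top, right to left
                         np.stack([zeros, 1.0 - t], axis=1)])     # left, downward
convex, concave = classify_discontinuities(square[:, 0], square[:, 1])
print(len(convex), len(concave))   # clusters of convex samples at the four corners
```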

13 citations


Proceedings ArticleDOI
03 Apr 1990
TL;DR: An algorithm which approximates the maximum likelihood estimator (MLE) for the locations of constant-density discs in integral projection data is presented and requires only approximately 25% more computation than the MLE of the single-location problem.
Abstract: An algorithm which approximates the maximum likelihood estimator (MLE) for the locations of constant-density discs in integral projection data is presented. The algorithm is of linear complexity in the object count and requires only approximately 25% more computation than the MLE of the single-location problem. Approximation error analysis, in terms of error probability, is provided through empirical matching of densities for the extremal values of the random-noise field.

12 citations


Proceedings ArticleDOI
04 Dec 1990
TL;DR: A vision system is presented which automatically generates an object recognition strategy from a 3D model and recognizes the object using this strategy, comparing the line representation generated from the 3D model with the image features to localize the object.
Abstract: A vision system is presented which automatically generates an object recognition strategy from a 3D model, and recognizes the object using this strategy. In this system, the appearances of an object from various viewpoints are described with visible 2D features, such as parallel lines and ellipses. Then, the features in the appearances are ranked according to the number of viewpoints from which they are visible. The rank and the feature extraction cost for each feature are considered to generate a tree-like strategy graph. It shows an efficient feature search order when the viewpoint is unknown, starting with commonly occurring features and ending with features specific to a certain viewpoint. The system searches for features in the order indicated by the graph. After detection, the system compares the line representation generated from the 3D model and the image features to localize the object.

Journal ArticleDOI
TL;DR: This paper addresses the problem of object detection in analyzing high resolution multispectral aerial images; successful detection with high accuracy and low false alarm rates demonstrates the robustness of the step-wise analysis approach.
Abstract: In many computer vision systems accurate identification of various objects appearing in a scene is required. In this paper we address the problem of object detection in analyzing high resolution multispectral aerial images. Development of a practical object detection approach should consider issues of speed, accuracy, robustness, and amount of supervision allowed. The approach is based upon extraction of information from images and their systematic analysis utilizing available prior knowledge of various physical attributes of the objects. The step-wise approach examines spectral, spatial, and topographic features in making the object vs background decision. Techniques for the analysis of the spectral, spatial, and topographic features tend to be of increasing levels of computational complexity. The computationally simpler spectral feature analysis is performed for the entire image to detect candidate object regions. Only these regions are considered in the spatial feature analysis step to further reduce the number of candidate regions which need to be analyzed in the topographic feature analysis step. Such step-wise analysis makes the entire object detection process efficient by incorporating the process of “focus of attention” to identify regions of interest, thus eliminating a relatively large portion of the image from further detailed examination at every stage. Results of the experiments performed using several high resolution multispectral images have demonstrated the basic feasibility of the approach. The images utilized in the experiments are acquired from geographically different locations, at different times, with different types of background, and are of different resolution. Successful object detection with high accuracy and low false alarm rates indicates the robustness of this approach.
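
The cascade structure described above can be sketched very simply: a cheap test runs over all candidate regions, and only its survivors reach the progressively more expensive tests. In the sketch below, the three predicates and the region fields are placeholders for the paper's spectral, spatial, and topographic analyses, not their actual implementations.

```python
# Sketch of the step-wise "focus of attention" cascade: cheap tests run on all
# candidate regions, expensive tests only on the survivors. The three predicates
# and the Region fields are placeholders for the paper's spectral, spatial, and
# topographic analyses, not their actual implementations.
from dataclasses import dataclass

@dataclass
class Region:
    mean_spectrum: float    # stand-in for the multispectral measurements
    area: float             # stand-in for the spatial/shape measurements
    elevation: float        # stand-in for the topographic measurements

def spectral_test(r):    return 0.4 < r.mean_spectrum < 0.7   # cheapest, run on the whole image
def spatial_test(r):     return 50.0 <= r.area <= 500.0       # run only on spectral survivors
def topographic_test(r): return r.elevation > 2.0             # most expensive, run last

def detect_objects(regions):
    candidates = [r for r in regions if spectral_test(r)]     # focus of attention
    candidates = [r for r in candidates if spatial_test(r)]
    return [r for r in candidates if topographic_test(r)]

regions = [Region(0.5, 120.0, 3.0), Region(0.9, 120.0, 3.0), Region(0.6, 10.0, 3.0)]
print(detect_objects(regions))   # only the first region survives all three stages
```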

Proceedings ArticleDOI
03 Apr 1990
TL;DR: This two-stage algorithm is fast because locating the bisecting points involves only very simple addition and shifting operations, and the second stage involves very few computations for each object.
Abstract: The horizontal and vertical chord bisectors of objects are used to detect elliptic objects in a binary image. In 3-D applications, a circular plane viewed from different viewing angles will always be projected as a pseudoellipse. Hence, the detection of ellipses, which are parameterized with five parameters (namely x0, y0, a, b and theta), is much more general and useful for both 2-D and 3-D image recognition tasks. For any general ellipse, the loci of its chord bisectors and chord lengths possess various properties which can be utilized to extract the center (x0, y0) and to facilitate discrimination of elliptic objects from others. Different pairs of parallel strips are used to calculate the hypothesized values of two newly defined parameters of an ellipse. The statistical modes for these two parameters are extracted for the computation of the remaining parameters theta, a, and b. This two-stage algorithm is fast due to the fact that locating the bisecting points involves only very simple addition and shifting operations, and the second stage involves very few computations for each object.
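
A small numerical sketch of the chord-bisector property the method relies on: for any ellipse, the midpoints of parallel chords lie on a straight line through the center, so intersecting the line fitted to the horizontal-chord midpoints with the line fitted to the vertical-chord midpoints recovers (x0, y0). Only the center is estimated here; the paper's second stage for theta, a, and b is not reproduced, and all names are illustrative.

```python
# Numerical sketch of the chord-bisector property: midpoints of parallel chords
# of an ellipse lie on a straight line through its centre, so the centre can be
# found by intersecting the horizontal-chord and vertical-chord midpoint lines.
import numpy as np

def rasterize_ellipse(x0, y0, a, b, theta, size=200):
    yy, xx = np.mgrid[0:size, 0:size]
    c, s = np.cos(theta), np.sin(theta)
    u = (xx - x0) * c + (yy - y0) * s
    v = -(xx - x0) * s + (yy - y0) * c
    return (u / a) ** 2 + (v / b) ** 2 <= 1.0

def chord_midpoints(mask, axis):
    """Midpoints of the chords cut by every row (axis=1) or column (axis=0)."""
    mids = []
    for i in range(mask.shape[1 - axis]):
        line = mask[i, :] if axis == 1 else mask[:, i]
        idx = np.flatnonzero(line)
        if idx.size:
            mids.append((i, 0.5 * (idx[0] + idx[-1])))
    return np.array(mids)                                # (fixed coordinate, midpoint) pairs

def ellipse_center(mask):
    rows = chord_midpoints(mask, axis=1)                 # (y, x_mid) for horizontal chords
    cols = chord_midpoints(mask, axis=0)                 # (x, y_mid) for vertical chords
    a1, b1 = np.polyfit(rows[:, 0], rows[:, 1], 1)       # x_mid = a1 * y + b1
    a2, b2 = np.polyfit(cols[:, 0], cols[:, 1], 1)       # y_mid = a2 * x + b2
    x0 = (a1 * b2 + b1) / (1.0 - a1 * a2)                # intersect the two bisector lines
    return x0, a2 * x0 + b2

mask = rasterize_ellipse(x0=95.0, y0=110.0, a=60.0, b=30.0, theta=0.4)
print(ellipse_center(mask))   # approximately (95, 110)
```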

Proceedings ArticleDOI
03 Jul 1990
TL;DR: The authors propose an active touch approach using joint compliance to find a contact point between a multifingered hand and an unknown object using a suitable combination of compliant joints and position-controlled joints.
Abstract: Focuses on a scheme for searching for a contact point between a multifingered hand and an unknown object. Instead of mounting a distributed tactile sensor all over the finger links, the authors propose an active touch approach using joint compliance. The algorithm is composed of two phases. One is the approach phase, in which each finger is opened widely and approaches an object until a part of a finger link is in contact with the object. The other is the detection phase, in which each finger posture is changed with slip while maintaining contact between object and finger. Using two selected postures during the detection phase, one can compute the intersecting point which leads to an approximate contact point. With a suitable combination of compliant joints and position-controlled joints, a finger link has the capability of changing its posture while maintaining contact with an object over a small angular displacement at a particular joint. This motion is essential in the contact point detection phase. The proposed algorithm is confirmed through simple experiments using a two-fingered robot hand.

Proceedings Article
01 Aug 1990
TL;DR: The location and identification of compact ferrous objects is of major concern for detection of vehicles, submarines, archaeological objects, buried or hidden explosive objects, geological ore inclusions and inclusions in the human body, as mentioned in this paper.
Abstract: Location and identification of compact ferrous objects is of major concern for detection of vehicles, submarines, archaeological objects, buried or hidden explosive objects, geological ore inclusions and inclusions in the human body. In principle, measurement of the spatial variation of the magnetostatic field associated with a compact ferrous object may be used to locate and identify the object and considerable success has been achieved in this area [1].

Proceedings ArticleDOI
05 Nov 1990
TL;DR: Combining spectral and parametric techniques on CTFM active sonar data as a preprocessor for neural network classifiers is considered, in order to detect and discriminate between mine-like objects (spheres) and naturally occurring objects.
Abstract: The ability to accurately detect and classify underwater objects of interest in an automated system is of great importance. It provides human operators with enhanced capability and can greatly reduce information overload. As is often the case, human classification of signals of interest is often impractical due to situational or operational constraints. Two benefits of automated classifier systems are immediately apparent. The first is the tireless nature of computers to perform tedious tasks such as data analysis. The second and equally important benefit of machine pattern recognition is the ability to perform classification in areas where the presence of humans may not be desirable or possible. Highly accurate classification and fast parallel processing speeds can be obtained by using neural network classifiers for pattern recognition problems. Problems associated with conventional training methods can be either alleviated or bypassed by using Evolutionary Programming as a training mechanism. Combining these efficient classifiers with effective signal pre-processing techniques provides the basis for robust automated detection of undersea objects of interest. This paper considers combined spectral and parametric techniques on CTFM active sonar data as a preprocessor for neural network classifiers to detect and discriminate between mine-like objects (spheres) and naturally occurring objects.

Journal ArticleDOI
TL;DR: A new method for circular object detection and location is proposed that can find the center point and the radius of each circular object in an input image and can locate circular objects which are defective or partially occluded.


Proceedings ArticleDOI
13 May 1990
TL;DR: A method is presented for using high-level descriptions of objects (i.e. their models) to recognize them in an image, including the detection and recognition of partially occluded and camouflaged objects.
Abstract: A method is presented for using the high-level descriptions of objects (i.e. their models) to recognize them in an image. A complex object is viewed as a congregation of a set of component parts with simple shapes. The model of an object, therefore, describes the shapes of its component parts and states the geometrical relationships among those parts. This method also includes a recognition strategy which is a simple high-level description of how that object must be recognized. The shape descriptions of the parts are first used to extract a set of candidates for each part from the image. An object candidate is formed whenever a group of part candidates satisfy the model's geometrical relationships. A model-based prediction and verification scheme is used to verify (or refute) the existence of the object candidates with low certainty. The scheme not only substantially increases the accuracy of recognition, but also makes it possible to detect and recognize partially occluded and camouflaged objects. Another advantage of the approach is that to recognize a new object, one only needs to define its model, and thus no programming is required. The user's task is further simplified by the fact that each newly defined model is sufficient for recognizing a new category of objects.

Proceedings ArticleDOI
13 May 1990
TL;DR: An edge detection and classification scheme for range images is presented which produces a multiscale representation in terms of well-localized depth and orientation edges; comparisons with recently published techniques point out the improved performance of the approach, especially when the images contain substantially overlapping objects.
Abstract: An edge detection and classification scheme for range images which produces a multiscale representation in terms of well-localized depth and orientation edges is presented. The extraction is accomplished by detecting the presence of significant edges at a coarse scale and then determining their precise location by tracking them over decreasing scale. An adaptive multiscale thresholding is applied during this focusing process to inhibit the attraction of insignificant details. Once focused, the edges are classified into the categories of true edge and diffuse edge by invoking classification rules derived from a mathematical analysis of edge displacement and branching over scale-space. Experimental results illustrate the robustness of the approach in the presence of noise and its performance with synthetic and real images of varying complexity. Comparisons with recently published techniques point out the improved performance of the approach, especially when the images contain substantially overlapping objects.

Patent
21 Apr 1990
TL;DR: In this paper, an object detection apparatus of the photoelectric reflection type is comprised of an optical source, an optical collecting system, and an output circuit connected to output an object detection signal only when the photosensor detects the spot, the location of which depends on the axial distance of the object.
Abstract: The object detection apparatus of the photoelectric reflection type is comprised of an optical source (2) for generating an optical ray, an optical incident system (3) having an optical axis extending through an observation zone in the direction of its definite observation range for directing the optical ray along the optical axis, and an optical collecting system (5) for collecting the optical ray reflected by the object (4) which traverses the optical axis, so as to focus on a receiving area (6) an optical spot, the location of which depends on the axial distance of the object. Further, a definite photosensor is disposed to cover a part of the receiving area and operates only when an object enters the observation zone within the observation range, detecting an optical spot within the covered part of the receiving area, and an output circuit is connected to output an object detection signal only when the photosensor detects the spot. Preferably, the definite photosensor has a sensing face extending between one edge registered with a spot which corresponds to the farthest end of the observation range and another edge registered with another spot which corresponds to the closest end of the observation range. Further preferably, the optical source includes a photoemitter for intermittently emitting a pulsed optical ray in response to a sampling signal, and the output circuit operates to produce successively sampled data according to the intensity of the detected optical ray and to calculate a relative change between preceding and succeeding sampled data so as to output the object detection signal, for example when the object passes between the detection apparatus and a reflective background surface.
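
A brief numeric sketch of the triangulation relation implied by the abstract: the offset of the reflected spot on the receiving area encodes the object's axial distance, so a photosensor that covers only the spot positions corresponding to the observation range responds only to objects inside that range. The baseline/focal-length model and all numbers are assumptions, not values from the patent.

```python
# Hypothetical triangulation sketch: with an emitter/receiver baseline B and a
# receiving lens of focal length f, a reflector at axial distance z images as a
# spot at offset u = f * B / z on the receiving area, so a photosensor covering
# only the offsets for [z_near, z_far] responds only to objects in that range.
# All symbols and numbers are illustrative, not values from the patent.
def spot_offset(z, baseline=0.03, focal=0.02):
    return focal * baseline / z                        # offset on the receiving area (m)

def in_observation_range(z, z_near=0.2, z_far=1.0):
    u_far, u_near = spot_offset(z_far), spot_offset(z_near)
    return u_far <= spot_offset(z) <= u_near           # the sensing face spans these offsets

for z in (0.1, 0.5, 3.0):
    print(z, in_observation_range(z))                  # only 0.5 m lies inside 0.2-1.0 m
```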

Proceedings ArticleDOI
01 Jan 1990
TL;DR: In this article, a methodology for developing an object detection system which examines the spectral, spatial and topographic features in a step-wise manner is presented, and the algorithms developed are tested using high resolution thermal infrared images.
Abstract: Accurate detection of unique objects with minimal false alarm rates is an important requirement for most reconnaissance tasks. This paper presents a methodology for developing an object detection system which examines the spectral, spatial and topographic features in a step-wise manner. The algorithms developed are tested using high resolution thermal infrared images. Multiresolution analysis to improve the performance of this approach is also discussed. Results of these experiments show a definite promise for the approach.

Journal ArticleDOI
01 Feb 1990
TL;DR: A motion vision system is developed in which a moving object can be detected and image displacement can be estimated based on human visual characteristics and use of a multiresolution image.
Abstract: A motion vision system is developed in which a moving object can be detected and image displacement can be estimated based on human visual characteristics and use of a multiresolution image. The system consists of four parts: (1) Temporal gradient, logic AND, and dynamic thresholding operations are used to obtain the primary mask. (2) A region growing algorithm is applied. (3) A hierarchical object detection algorithm is used to identify image patterns. (4) Displacement of the image is estimated by breaking each frame of the motion sequence into local regions (edges). A search is undertaken to discover how the image pattern within a given region appears displaced. This search takes the form of motion channels, the outputs of which are used to obtain the estimate of displacement. A correlative measure is proposed to match the patterns.
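
Part (1) of the system, the primary mask, can be sketched as follows under assumed names: absolute temporal differences against the previous and the next frame are each thresholded dynamically and combined with a logical AND, which keeps the object at its current position and suppresses the ghost left at its old position. The mean-plus-k-sigma threshold is an assumption standing in for the paper's dynamic thresholding.

```python
# Sketch of the "primary mask" step under assumed names: absolute temporal
# differences against the previous and the next frame are each thresholded
# dynamically (mean + k * std of the difference image) and combined with a
# logical AND, which keeps the object at its current position and suppresses
# the ghost it leaves at its old position.
import numpy as np

def dynamic_threshold(diff, k=2.5):
    return diff > diff.mean() + k * diff.std()

def primary_mask(prev_f, cur_f, next_f):
    d1 = np.abs(cur_f.astype(float) - prev_f.astype(float))
    d2 = np.abs(cur_f.astype(float) - next_f.astype(float))
    return dynamic_threshold(d1) & dynamic_threshold(d2)   # logical AND of the two masks

# Example: a bright 10x10 patch moving to the right across three frames
frames = [np.zeros((64, 64)) for _ in range(3)]
for i, x in enumerate((10, 20, 30)):
    frames[i][20:30, x:x + 10] = 200.0
mask = primary_mask(*frames)
print(np.argwhere(mask).min(axis=0), np.argwhere(mask).max(axis=0))   # patch at rows/cols 20-29
```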

Proceedings ArticleDOI
16 Jun 1990
TL;DR: A segmentation method based on the gradient method is proposed for multiple moving objects, which may include an object for which a unique interpretation of the motion is difficult; each object is extracted by iterating segmentation and merging on the image.
Abstract: The author proposes a segmentation method based on the gradient method for multiple moving objects which may include an object for which a unique interpretation of the motion is difficult. By the gradient method, the 3-D motion parameters of a rigid object can be estimated without determining correspondence, as a pseudoinverse solution of a system of linear equations, if the 3-D structure of the object is already given. Based on the residual square-error in the motion estimation, the image is segmented if it contains regions with different motions. If the motions are recognized as the same, the regions are merged. Thus, the object is extracted by iterating the segmentation and merging on the image.
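
A heavily simplified sketch of the estimation step: the brightness-constancy (gradient) constraint gives one linear equation per pixel, solved here as a least-squares/pseudoinverse problem for a single 2-D translation rather than the paper's 3-D rigid motion, and the per-pixel residual indicates pixels whose motion disagrees with the estimate (candidates for splitting off another region). All names are illustrative.

```python
# Reduced sketch of the gradient-method estimation: the brightness-constancy
# constraint Ix*u + Iy*v + It = 0 gives one linear equation per pixel; here a
# single 2-D translation (u, v) is estimated for the whole region as a
# least-squares (pseudoinverse) solution, and the per-pixel residual flags
# pixels whose motion disagrees with that estimate. The paper estimates full
# 3-D rigid motion; this only illustrates the same structure.
import numpy as np

def estimate_translation(frame0, frame1):
    Iy, Ix = np.gradient(frame0.astype(float))          # spatial gradients (rows, cols)
    It = frame1.astype(float) - frame0.astype(float)    # temporal gradient
    A = np.column_stack([Ix.ravel(), Iy.ravel()])
    b = -It.ravel()
    flow, *_ = np.linalg.lstsq(A, b, rcond=None)        # pseudoinverse solution
    residual = np.abs(A @ flow - b).reshape(frame0.shape)
    return flow, residual

# Example: a smooth pattern shifted one pixel to the right
x = np.linspace(0.0, 4.0 * np.pi, 64)
frame0 = np.outer(np.sin(x), np.cos(x))
frame1 = np.roll(frame0, 1, axis=1)
flow, residual = estimate_translation(frame0, frame1)
print(flow)              # approximately (u, v) = (1, 0) pixels
print(residual.mean())   # small residual: one motion explains the whole image
```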

03 Jan 1990
TL;DR: A parallel distributive algorithm is developed which consists of multiple processing stages--mainly anisotropic edge filtering, corner detection, and a spatial coherence check--and which gives good segmentation and behaves reasonably well against random noise.
Abstract: An edge-based segmentation algorithm based on the knowledge in human vision was developed. The research followed Grossberg's boundary contour system and developed a parallel distributive algorithm which consists of multiple processing stages--mainly anisotropic edge filtering, corner detection, and spatial coherence check. The two-dimensional input information is processed in parallel within each stage and pipelined among stages. Within each stage, local operations are performed at each pixel. The application of this algorithm to many test patterns shows that the algorithm gives good segmentation and behaves reasonably well against random noise. A multiscale mechanism in the algorithm can segment an object into contours at different levels of detail. The algorithm was compared with an approximation of Grossberg's boundary contour system. Both algorithms gave reasonable performance for segmentation. The differences lie in the level of image dependency of the configuration parameters of the algorithm. Also, the way random noise affects the algorithm was compared with the way it affects human object detection. Data obtained from psychophysical experiments and from application of the algorithm show a similar trend.

Patent
01 Jun 1990
TL;DR: In this paper, a hardware system is used both to initiate processing information about an object for display when it is visible in the field of view of a display and to stop processing information when that object is no longer visible in a display.
Abstract: A hardware system (70) is used both to initiate processing information about an object for display when it is visible in the field of view of a display and to stop processing information about an object for display when that object is no longer visible in the field of view of the display. In operation, the system (70) monitors the activity of the actual display drawing circuits to notice if any pixel of a particular object is ever painted into the current display buffer. At the end of processing of this object, the flip-flop (90) will be ON if the object was NEVER painted into the display buffer. This NEVERON bit can later be examined by the software to decide whether or not this particular object needs further processing.

Proceedings ArticleDOI
03 Apr 1990
TL;DR: It is shown that the adaptive decision threshold derived from the change measure minimizes the total probability of error as measured in false alarms and missed detections.
Abstract: A statistical analysis of a change detector for motion detection based on image modeling of a difference picture is presented. The approach is founded upon the fact that complete silhouettes of moving objects can be segmented at each frame time using a change detector based on image modeling of the difference picture. The change detection problem can be treated as a signal detection problem in which very little is known about the signal to be detected. The exact modeling of the background event is necessary for a robust change detector which can adapt to changing environments. The governing statistics for change measures, which have the form of a sum of squares of the difference picture, can be inferred from the assumed Gaussian distribution of the stationary background in the difference picture and are used for calculation of the adaptive decision threshold for the change detector. It is shown that the adaptive decision threshold derived from the change measure minimizes the total probability of error as measured in false alarms and missed detections. Experimental results that support the underlying statistical analysis are presented.
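
A sketch of the statistical flavor of the method, under assumed notation and with scipy available: if the difference picture over a stationary background is zero-mean Gaussian with variance sigma^2, the sum-of-squares change measure over an n-pixel window follows sigma^2 times a chi-square distribution with n degrees of freedom, and the threshold can be set from that null distribution. Fixing the false-alarm rate, as done below, is a simplification of the paper's minimum-total-error threshold.

```python
# Sketch of the statistical idea under assumed notation (scipy assumed available):
# over a stationary background the difference picture is zero-mean Gaussian with
# variance sigma^2, so the sum-of-squares change measure over an n-pixel window
# is sigma^2 * chi-square(n) when no object is present. The threshold below only
# bounds the false-alarm rate; the paper's threshold also balances missed
# detections to minimise the total probability of error.
import numpy as np
from scipy.stats import chi2

def change_threshold(sigma, n_pixels, false_alarm=1e-3):
    """Threshold on the sum-of-squares change measure for a given false-alarm rate."""
    return sigma**2 * chi2.ppf(1.0 - false_alarm, df=n_pixels)

def detect_change(diff_window, sigma, false_alarm=1e-3):
    measure = float(np.sum(diff_window.astype(float) ** 2))
    return measure > change_threshold(sigma, diff_window.size, false_alarm)

rng = np.random.default_rng(1)
sigma = 4.0
background_diff = rng.normal(0.0, sigma, size=(8, 8))        # difference picture: noise only
object_diff = background_diff + 20.0                         # a moving object changes intensities
print(detect_change(background_diff, sigma), detect_change(object_diff, sigma))   # expected: False True
```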

Proceedings ArticleDOI
Y. Okamoto, Yoshinori Kuno, S. Okada
27 Nov 1990
TL;DR: A vision system that automatically generates an object recognition strategy from a 3D model and recognizes the object by this strategy is presented and shows an efficient feature search order when the viewer direction is unknown.
Abstract: A vision system that automatically generates an object recognition strategy from a 3D model and recognizes the object by this strategy is presented. In this system, the appearances of an object from various view directions are described with 2D features, such as parallel lines and ellipses. These appearances are then ranked, and a tree-like strategy graph is generated. It shows an efficient feature search order when the viewer direction is unknown. The object is recognized by feature detection guided by the strategy. After the features are detected, the system compares the line representation generated from a 3D model and the image features to localize the object. Perspective projection is used in the localization process to obtain the precise position and attitude of the object, while orthographic projection is used in the strategy generation process to allow symbolic manipulation.

Proceedings ArticleDOI
03 Jul 1990
TL;DR: A two-arm telerobotic system, which includes a vision arm and a working arm, is proposed; it can accomplish tasks such as antenna assembly without human extravehicular activities.
Abstract: Describes an antenna assembly robot system. The system is composed of a teleoperating console, a real-time simulator, a robot arm, a robot controller and an assembly model space antenna. The console is equipped with a vision sensor using target marks to guide the antenna segment. The target mark image is superimposed with the reference target mark figure, so relative position differences from a desired object position can be detected visually. The real-time robot simulator is used to check the robot motion during the teleoperation. Antenna assembly experiments were carried out to investigate the effectiveness of using this system. Also, the vision sensor position detection accuracy was evaluated experimentally. To reduce the operator's burden, another target mark for visual feedback was designed and a two-arm telerobotic system, which includes a vision arm and a working arm, was proposed. Measurement accuracy of the vision sensor using the mark was also evaluated by numerical simulation. Using these space telerobotic systems, an astronaut can accomplish tasks such as antenna assembly without human extravehicular activities.

Patent
27 Mar 1990
TL;DR: In this article, first and second detection coils and oscillation circuits are provided in a proximity switch so that a normal object detection signal is output even if either of the detection coils is broken or one oscillation circuit stops.
Abstract: PURPOSE: To output a normal object detection signal even if either of the detection coils is broken or an oscillation circuit is stopped, by providing first and second detection coils and oscillation circuits in the proximity switch. CONSTITUTION: When an object to be sensed approaches, the oscillation of the oscillation circuits 1, 2 is stopped during normal operation. The terminal voltages of the capacitors C3, C4 are thereby decreased, and the OR of the waveform-shaped outputs of the oscillation circuits goes to the L level, so an object sensing signal is output. If the detection coil L2 is opened or the oscillation of the oscillation circuit 2 is stopped due to a fault, the oscillation circuit 2 always stops oscillating regardless of the presence or absence of the sensed object. If the oscillation is interrupted in response to the proximity of the object, the object sensing signal is still obtained from the output circuit 8 by using the OR output. Because the terminal voltages of the capacitors C3, C4 then disagree, a fault discrimination output is obtained from a mismatch discrimination circuit 5.