
Showing papers in "Systems and Computers in Japan in 1985"


Journal ArticleDOI
TL;DR: A highly accurate approximation is realized, which has not been achieved by the linear approximation using the traditional AZTEC method, and is suited to the long-term monitoring of the electrocardiogram to record abnormal waveforms.
Abstract: This paper proposes a highly efficient encoding method for the electrocardiogram using spline functions. The method consists of an extraction algorithm, which extracts feature samples from the A/D-converted electrocardiogram, and a restoration algorithm, which restores the original waveform from the extracted samples using spline functions. The extraction algorithm extracts from the original waveform the maxima and minima that are not due to noise. The spline function is then applied to perform a smooth interpolation. By this scheme, a highly accurate approximation is realized, which has not been achieved by the linear approximation of the traditional AZTEC method. According to experimental results, the root-mean-square error of the proposed method is approximately half that of the AZTEC method at the same compression ratio. The proposed method uses a simple extraction algorithm and can be implemented on a microprocessor. Thus, it is well suited to long-term monitoring of the electrocardiogram to record abnormal waveforms.
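The extract-and-restore idea can be sketched in Python. This is an illustration only: the paper's spline is stood in for by a piecewise cubic Hermite interpolant with finite-difference tangents, and the noise test and function names are assumptions, not the authors' algorithm.

```python
import math

def extract_extrema(signal, noise_eps):
    """Keep the endpoints plus local maxima/minima whose amplitude differs
    from the previously kept sample by more than a noise threshold."""
    kept = [0]
    for i in range(1, len(signal) - 1):
        turning = (signal[i] - signal[i - 1]) * (signal[i + 1] - signal[i]) < 0
        if turning and abs(signal[i] - signal[kept[-1]]) > noise_eps:
            kept.append(i)
    kept.append(len(signal) - 1)
    return kept

def restore(xs, ys, n):
    """Piecewise cubic Hermite interpolation through the kept samples
    (a stand-in for the paper's spline), evaluated at 0..n-1."""
    # finite-difference tangents at the kept samples
    m = []
    for j in range(len(xs)):
        lo, hi = max(j - 1, 0), min(j + 1, len(xs) - 1)
        m.append((ys[hi] - ys[lo]) / (xs[hi] - xs[lo]))
    out, seg = [], 0
    for x in range(n):
        while seg < len(xs) - 2 and x > xs[seg + 1]:
            seg += 1
        h = xs[seg + 1] - xs[seg]
        t = (x - xs[seg]) / h
        h00 = 2*t**3 - 3*t**2 + 1          # Hermite basis functions
        h10 = t**3 - 2*t**2 + t
        h01 = -2*t**3 + 3*t**2
        h11 = t**3 - t**2
        out.append(h00*ys[seg] + h10*h*m[seg] + h01*ys[seg+1] + h11*h*m[seg+1])
    return out
```

On a smooth test waveform, keeping only the extrema gives a large compression ratio while the cubic reconstruction stays close to the original, which is the effect the abstract claims over linear (AZTEC-style) approximation.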

38 citations


Journal ArticleDOI
TL;DR: A heuristic algorithm CP/MISF (Critical Path/Most Immediate Successors First) and an optimization/approximation algorithm DF/IHS (Depth First/ Implicit Heuristic Search) are proposed which can reduce markedly the space complexity and average computation time.
Abstract: This paper describes practical optimization/approximation algorithms for scheduling a set of partially ordered computational tasks with different processing times onto a multiprocessor system so that the schedule length is minimized. Since this problem belongs to the class of "strong" NP-hard problems, we must rule out not only pseudopolynomial-time optimization algorithms but also fully polynomial-time approximation schemes unless P = NP. This paper proposes a heuristic algorithm CP/MISF (Critical Path/Most Immediate Successors First) and an optimization/approximation algorithm DF/IHS (Depth First/Implicit Heuristic Search). DF/IHS is an excellent scheduling method which can markedly reduce the space complexity and average computation time by combining the branch-and-bound method with CP/MISF; it allows us to solve very large-scale problems with a few hundred tasks.
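The CP/MISF priority rule can be sketched as list scheduling in Python: rank each task by its critical-path level (longest path to the exit, including its own time), break ties by the number of immediate successors, then greedily assign ready tasks to the earliest-free processor. A minimal sketch under assumed data structures, not the paper's implementation (and without the DF/IHS branch-and-bound layer):

```python
def cpmisf_schedule(times, succs, m):
    """List scheduling with CP/MISF-style priorities.
    times: list of task processing times; succs: dict task -> successor list;
    m: number of processors. Returns the schedule length."""
    n = len(times)
    preds = [set() for _ in range(n)]
    for u, vs in succs.items():
        for v in vs:
            preds[v].add(u)
    level = {}
    def lv(u):  # critical-path level, memoised
        if u not in level:
            level[u] = times[u] + max((lv(v) for v in succs.get(u, ())), default=0)
        return level[u]
    # priority: higher level first, then more immediate successors
    prio = sorted(range(n), key=lambda u: (-lv(u), -len(succs.get(u, ()))))
    finish = [0.0] * n
    free_at = [0.0] * m
    done = set()
    while len(done) < n:
        # highest-priority task whose predecessors are all finished
        u = next(t for t in prio if t not in done and preds[t] <= done)
        p = min(range(m), key=lambda q: free_at[q])   # earliest-free processor
        start = max([free_at[p]] + [finish[q] for q in preds[u]])
        finish[u] = start + times[u]
        free_at[p] = finish[u]
        done.add(u)
    return max(finish)
```

On a chain-plus-branch DAG the heuristic schedules the longer critical path first, which is the intuition behind using it as the initial solution for DF/IHS.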

34 citations


Journal ArticleDOI
TL;DR: It is shown by experiment that the angle of the line can be detected with accuracy within 2 degrees between -45 and +45 degrees.
Abstract: A processing method is described for the document image composed of characters, graphics and photographs, extracting those regions by the following procedures. (1) Two-dimensional Fourier transformation is performed for the entire image, the angle of the character lines is corrected utilizing the peak coordinate of the transform result, and the period of the character lines is calculated. (2) Based on the real and imaginary parts of the transform result, the coordinates for the characters are calculated. (3) The two-dimensional Fourier transformation is performed for the subpictures determined by the character period. The entire document is scanned and the regions are detected by determining the existence of the peak corresponding to the character-line period. It is shown by experiment that the angle of the line can be detected with accuracy within 2 degrees between -45 and +45 degrees. An experiment was performed for the image (journal article and patent document) composed of graphics and photographs, and it is verified that the method can extract the regions with sufficient accuracy.
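The character-line-period step can be illustrated with a small Fourier sketch. This is a simplified 1-D version of the paper's 2-D transform (an assumption for brevity): project the page onto the vertical axis and find the dominant DFT peak, whose frequency gives the line period.

```python
import cmath

def line_period(image):
    """Estimate the character-line period (in pixels) from the dominant
    DFT peak of the vertical projection profile of a binary page image."""
    h = len(image)
    profile = [sum(row) for row in image]      # project onto vertical axis
    mean = sum(profile) / h
    profile = [v - mean for v in profile]      # remove the DC component
    best_k, best_mag = 1, 0.0
    for k in range(1, h // 2):                 # scan positive frequencies
        coeff = sum(profile[y] * cmath.exp(-2j * cmath.pi * k * y / h)
                    for y in range(h))
        if abs(coeff) > best_mag:
            best_k, best_mag = k, abs(coeff)
    return h / best_k                          # period = image height / peak bin
```

In the paper's 2-D setting the same peak additionally encodes the line angle through its coordinate direction; here only the period is recovered.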

25 citations


Journal ArticleDOI
TL;DR: It is clarified that this architecture makes performance 1.8 times faster than without it, and that the additional hardware increase for this architecture is 33%.

Abstract: This paper considers a high-performance architecture for Variable Length Instructions (VLI) and its evaluation. The VLI studied here are characterized by a multioperand format, which can treat several independent operands, and orthogonality, which can specify the operand address independently of the operation code. This architecture consists of a pipeline processing method suitable for the multioperand instruction format, an instruction/data separate-type dual cache memory configuration to obtain the memory throughput necessary for efficient operation of the pipeline, and an operand location-free microprogramming method with a virtualized operand interface. The basic ideas of the pipeline processing method are operand specifier (OSP) based pipeline processing, realized by inter-OSP pipeline processing and two simultaneously processed OSPs. From the mixed value analysis and the simulation result, it is clarified that this architecture makes performance 1.8 times faster than without it. The additional hardware increase for this architecture is 33%.

13 citations


Journal ArticleDOI
TL;DR: It was verified that the proposed method is useful in extracting the character region and the correct extraction rate was 100 percent for individual characters, and 94.6 percent for a letterhead.
Abstract: In the realization of mixed-mode communication, it is necessary, before recognizing individual characters, to separate text from black-and-white figure regions and to extract the character region efficiently. Problems in such a procedure are the detection and correction of the inclination of a document, the separation of contact characters, the merging of disconnected characters, and the extraction of a letterhead. This paper describes the results of studies on such problems. The document considered is an English text image containing binary figures and a letterhead. The basic idea is as follows. Connected regions are obtained by directional propagation and shrinking to merge figures. Then: (1) a thinning process is performed to detect the inclination angle of the input text; (2) the sizes of the connected regions and their relative locations are examined to extract the letterhead. Estimation of pitch is performed, and statistical data about individual characters are used to separate contact characters or merge disconnected characters. The experiment was made for 10- and 12-pitch printed characters, and the correct extraction rate was 100 percent for individual characters and 94.6 percent for the letterhead. Thus it was verified that the proposed method is useful in extracting the character region.

10 citations


Journal ArticleDOI
TL;DR: A speaker-independent word recognition system based on multiple word templates using the SPLIT method is described, and its effectiveness was verified in several experiments over a telephone switching system.

Abstract: A speaker-independent word recognition system based on multiple word templates, using the SPLIT method, is described. Since the amount of spectral distance calculation in the SPLIT method is independent of the number of word templates, the method is especially well suited to speaker-independent word recognition based on multiple word templates. Phoneme-like templates and multiple word templates are selected automatically by a clustering technique. The effectiveness of this system was verified in several experiments over a telephone switching system. Recognition accuracy of 96.9 and 96.2 percent was obtained for training and test speech samples, respectively.
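The key SPLIT property, that spectral distance cost does not grow with the number of word templates, can be sketched as follows. Frame-to-phoneme-template distances are computed once into a table; each word template is just a sequence of phoneme-template indices, so scoring an extra word costs only cheap DTW table lookups. The feature representation, distance, and DTW step pattern below are simplifying assumptions, not the paper's exact formulation.

```python
def split_recognize(frames, phoneme_templates, word_templates):
    """SPLIT-style matching sketch.
    frames: input feature vectors; phoneme_templates: reference vectors;
    word_templates: dict name -> sequence of phoneme-template indices."""
    def d(a, b):  # squared Euclidean spectral distance (an assumption)
        return sum((x - y) ** 2 for x, y in zip(a, b))
    # distance table: computed ONCE, independent of the number of words
    table = [[d(f, p) for p in phoneme_templates] for f in frames]
    def dtw(word):
        INF = float("inf")
        T, W = len(frames), len(word)
        D = [[INF] * W for _ in range(T)]
        D[0][0] = table[0][word[0]]
        for t in range(1, T):
            for w in range(W):
                best = min(D[t - 1][w],
                           D[t - 1][w - 1] if w > 0 else INF)
                D[t][w] = table[t][word[w]] + best   # lookup only, no d()
        return D[T - 1][W - 1]
    scores = {name: dtw(seq) for name, seq in word_templates.items()}
    return min(scores, key=scores.get)
```

Adding more word templates (e.g., several pronunciation variants per word) adds only DTW passes over the precomputed table, which is the economy the abstract points to.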

7 citations


Journal ArticleDOI
TL;DR: A new coding method of gray-valued pictures which facilitates the coding of the high information content of original pictures by only about 20% of the contour lines is illustrated.
Abstract: First, the concept of density contour lines is introduced for gray-valued pictures. Its application to pattern recognition has already been reported. However, in this paper it is considered for the description and restoration of gray-valued pictures via the extraction of density contour lines, as well as for coding gray-valued pictures by assigning priority to the density contour lines. Furthermore, a method for sequential restoration of pictures from general information is described. In this paper, sequential restoration from the main contour lines and experimental restoration of gray-valued pictures using only a part of the density contour lines are demonstrated. These results illustrate a new coding method for gray-valued pictures which facilitates the coding of the high information content of original pictures by only about 20% of the contour lines. Finally, the effectiveness of the present method is shown for the fast search of pictures.

7 citations


Journal ArticleDOI
Takafumi Miyatake1
TL;DR: This paper considers the map, which is the most complex among images, and describes an automatic method to extract roads; the method is applied to a map issued by the National Geographical Institute and is verified to be effective for road extraction.

Abstract: One of the most important techniques in drawing recognition and understanding is to extract the effective components from a complex image according to the purpose of analysis. This paper considers the map, which is the most complex among images, and describes an automatic method to extract roads. In general, roads are represented in the map by parallel lines. Consequently, the extraction of roads from a map is formulated as a problem of extracting parallel lines. A method is proposed which directly extracts parallel lines from the image and describes the parallel lines by the spacing and the path of the central line. Two technical problems are contained in this approach. One is the parallel-line extraction, which extracts parallel-line areas (inside the parallel lines) from the image, and the other is the line vectorization, which determines the coordinates of the path line of the extracted area. For the latter, numerous proposals have been made, such as line-image thinning and line tracing, and the problem is almost solved. From such a viewpoint, this paper discusses primarily the former, i.e., the parallel-line extraction. One of the difficult problems in the extraction of parallel lines from the image is the processing of the area where parallel lines intersect. A generalized expansion-contraction of the image is proposed to solve this problem. The proposed method is applied to a map issued by the National Geographical Institute, and is verified to be effective as a method of road extraction.

6 citations


Journal ArticleDOI
TL;DR: It is shown that the bottleneck effect is not so large in memory-shared multiprocessor systems of fast access times, and a method of removing the bottleneck of the H-R bus by using 4-port bus switches (called TT switches) is presented.
Abstract: In MIMD highly parallel processing systems, such as parallel computers and data flow computers, their capability, cost, and implementability are dependent largely on the implementation of their access mechanism. This paper proposes a hierarchical routing bus (H-R bus) which is useful as an access mechanism for such systems. In the H-R bus, 3-port bus switches, called T switches, are connected in a hierarchical fashion. This bus has the following characteristics: (1) a modular structure, thus yielding high expandability, and when many modules are added, its access multiplicity increases; (2) when a system becomes large, its average access distance does not increase greatly; (3) in case of an access contention and collision, distributed arbitration is possible without a deadlock; (4) it requires a small amount of hardware, and a simple interface. It is known that any hierarchy-structured multiprocessor system contains a bottleneck, which causes a problem for highly parallel processing. This paper considers this issue and shows that the bottleneck effect is not so large in memory-shared multiprocessor systems of fast access time. Finally, a method of removing the bottleneck of the H-R bus by using 4-port bus switches (called TT switches) is presented.
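Characteristic (2), that the average access distance grows slowly with system size, can be checked with a toy model. Assuming the T switches form a complete tree with the modules at the leaves (a modelling assumption, not the paper's exact topology), the hop count between two modules is the climb to their lowest common switch and back down:

```python
def access_distance(a, b, fanout=2):
    """Switch hops between leaf modules a and b of a complete tree:
    walk both up one level at a time until they meet."""
    up = 0
    while a != b:
        a //= fanout
        b //= fanout
        up += 1
    return 2 * up

def average_distance(n_leaves, fanout=2):
    """Average hop count over all distinct module pairs."""
    total = pairs = 0
    for i in range(n_leaves):
        for j in range(i + 1, n_leaves):
            total += access_distance(i, j, fanout)
            pairs += 1
    return total / pairs
```

Doubling the number of modules adds one tree level, so the worst-case distance grows only by 2 hops, which is the logarithmic behaviour the abstract appeals to.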

6 citations


Journal ArticleDOI
TL;DR: An algorithm for extracting road information using parallel vector tracers as a second step is proposed and it became clear that a search could be done correctly even for an intersection with eight edges.
Abstract: We have been studying computer processing of urban maps. As a first step of data processing of urban maps, we proposed an automatic extraction algorithm for built-up areas using topological variances by pyramid structures. In this paper we propose an algorithm for extracting road information using parallel vector tracers as a second step. A graph structure extracting method for urban maps is introduced using parallel vector tracers and an analysis method for intersection structures. Results obtained from experiments applying the proposed method to real urban maps are given. From the experimental results it became clear that a search could be done correctly even for an intersection with eight edges. This method is efficient for extracting graph structures from various figures.

6 citations


Journal ArticleDOI
TL;DR: A new data tablet using electromagnetic induction is described, in which the phase change factor of the magnetic field is made large so that the reading accuracy is improved; an electromagnetic whiteboard and a tablet unified with a plasma display panel for teleconferencing equipment are also described.
Abstract: A new data tablet using electromagnetic induction is described. The position on the tablet is detected by the sensor in the pen holder, which is coupled with the rotating magnetic field whose phase varies linearly with the position of the pen on the tablet. In this method, the sensor height "h" and the phase change factor "a" of the generated magnetic field are selected so that ah ≈ 1 and the positional error caused by the pen inclination is readily compensated. The phase change factor of the magnetic field, in addition, is made so large that the reading accuracy is improved. As practical examples, an electromagnetic whiteboard (size: 1.4 m x 1.06 m) and a tablet unified with a plasma display panel for teleconferencing equipment are described.

Journal ArticleDOI
TL;DR: A multitask scheduler is developed which provides a variable representing the number of accesses to the global area, a flag indicating the interrupt-inhibit area, and a variable keeping a record of the accepted levels, and which controls interrupt processing by mutually referring to those variables and flags.
Abstract: This paper discusses problems in the implementation of interrupt handling in the loosely synchronized Triple Modular Redundancy (TMR) system, where synchronization, majority decision and fault diagnosis are performed by software. Methods of solving these problems are also discussed. The major problems are as follows: (1) For an interrupt arriving at the three processors with a certain time difference in task progress, the consistency among the global areas of the processors must be maintained; (2) synchronization must be maintained between processors which do not recognize interrupts, due to the interrupt-inhibit period containing synchronization, and the processors which recognized the interrupts; (3) when interrupts at different levels arrive at a processor, each processor must identify the interrupt with the highest priority consistently. To solve these problems, we have developed a multitask scheduler, which provides a variable representing the number of accesses to the global area, a flag indicating the interrupt-inhibit area, and a variable keeping a record of the accepted levels. This scheduler also controls interrupt processing by mutually referring to those variables and flags. The system is implemented in the SAFE system, and the overhead in the interrupt handling was measured.

Journal ArticleDOI
TL;DR: In this paper, a fan-beam reconstruction using the two-dimensional Fourier transformation method is proposed and is proved to be so effective that the image quality is equal to that of filtered back projection while the computation time is about one-fifteenth.
Abstract: Since the X-ray CT was introduced, various efforts have been made to obtain better quality and higher speed [1-5]. Development of a high-resolution CT and a superhigh-speed CT for heart diagnosis is still required. For this purpose, a high-speed fan-beam image reconstruction method must be devised. While the two-dimensional Fourier transformation method, much faster than the filtered back projection method, has been proposed, it has not given a satisfactory result for fan-beam projection data [4, 6]. In this paper, a fan-beam reconstruction using the two-dimensional Fourier transformation method is proposed. The fan-beam projection data are transformed to parallel-beam projection data by a rebinning algorithm. Then a two-dimensional Fourier transformation method is applied. While high-speed computation is available in this method by the use of the two-dimensional Fourier transformation [6], the rebinning algorithm is enhanced by various interpolation methods; thus high quality and high speed are obtained [7]. Compared with the filtered back projection algorithm for fan-beam projection data, the algorithm is proved to be so effective that the image quality is equal while the computation time is about one-fifteenth.

Journal ArticleDOI
TL;DR: A design method for the unordered codes TSCC with a reduced number of gates is presented, utilizing a new condition defined for the codeword sets obtained by bi-partitioning the code.
Abstract: Self-checking logic has been proposed as a means of highly efficient fault detection in a logical circuit. As the checking circuit (checker), the design of a totally self-checking checker (TSCC) has been investigated for various kinds of codes. The author has considered the unordered codes, which include M-out-of-N codes and Berger codes, and proposed a design method by transformation from the logical function with the code-disjoint (CD) property, derived from the property of the unordered codes, to the logical function with the self-testing (ST) property, based on the design conditions over the entire set of codewords. This paper builds on the above transformation of a logical function and presents a design method for unordered-code TSCCs with a reduced number of gates. In the proposed method, the absorption law is applied to eliminate the redundant terms in the logical function with CD, and the function is transformed into the logical function with ST, utilizing a new condition defined for the codeword sets obtained by bi-partitioning the code. It is shown that by this method the number of gates is reduced in correspondence with the weaker strictness of the condition for ST. By extending the method, the number of gate levels is also reduced, leading to the design of 2- and 3-level TSCCs, which is conjectured to be the structure with the minimum possible number of levels.
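The unordered-code property the checker relies on is easy to demonstrate concretely. A sketch for the Berger code (one of the unordered codes named above): append to the information bits the binary count of their zeros; then no codeword can "cover" another bitwise, so no unidirectional error can turn one codeword into another. The encoding below is the standard Berger construction; the helper names are illustrative.

```python
def berger_encode(bits):
    """Berger code: information bits followed by the binary count of 0s
    (MSB first). The resulting code is unordered."""
    k = len(bits)
    check_len = max(1, k.bit_length())      # enough bits to count 0..k zeros
    zeros = bits.count(0)
    check = [(zeros >> i) & 1 for i in range(check_len - 1, -1, -1)]
    return bits + check

def covers(a, b):
    """True if codeword a has a 1 in every position where b does."""
    return all(x >= y for x, y in zip(a, b))
```

If one codeword covered another in the information part, it would have fewer zeros, hence a numerically smaller check part, which then cannot cover the other's check part; this is why all-0-to-1 (or all-1-to-0) error patterns are always detected.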

Journal ArticleDOI
TL;DR: The rotational delay of a disk caused by RPS misses, which is larger than usually assumed, is nonnegligible in systems under heavy load; a buffer-contained disk unit is proposed to improve the response time and throughput.
Abstract: The rotational delay of a disk caused by RPS misses, which is larger than what is usually assumed, is nonnegligible in systems under heavy load. A method is proposed to cope with this delay and improve the response time and throughput: a buffer-contained disk unit. Its performance evaluation is shown by simulation. The buffer-contained disk unit has a small amount of buffer (some tens of kB). Data transmission between the buffer and the disk control unit, and disk rotation, occur asynchronously, which considerably reduces the delay time. In write operations, the seek time is overlapped with the data transmission time, and the seek and rotational delay times appear to be zero. In the simulation, an access pattern of a real system is used, obtained by tracing the disk input/output operations. The response and throughput are clearly improved, by 1.7 to 3.8 times and by 2.0 to 3.3 times, respectively. The method can be applied to a large variety of disks with great performance improvement and little add-on hardware.

Journal ArticleDOI
TL;DR: A new type of list-processing oriented data flow machine is presented which achieves pipelined processing among activated functions and can utilize a high degree of parallelism even for simple programs due to the partial function-body execution and lenient cons effects.
Abstract: This paper describes some issues of parallel list processing under data flow control. Also a new type of list-processing oriented data flow machine is presented which achieves pipelined processing among activated functions. Performance evaluation through software simulation gave the following conclusions. (1) This machine can utilize a high degree of parallelism even for simple programs due to the partial function-body execution and lenient cons effects. (2) Parallel processing overhead does not affect the processing time. (3) Memory contention is reduced by dividing the structure memory into many banks and by uniformly distributing cons operations in each bank.

Journal ArticleDOI
TL;DR: The paper includes the border-following algorithm applicable to any of 6-, 18-, and 26-connected figures, together with the algorithm to reconstruct the original picture from the list of border voxels, which is compared with the algorithms by Herman et al.
Abstract: This paper proposes a border-following method for three-dimensional digitized pictures. The method is applied to actual data and the results are discussed. Then the proposed algorithm is applied to the analysis of pictures. Among the few studies concerning border-following algorithms for three-dimensional binary pictures, that by Herman et al. has as its goal the display of a three-dimensional picture. There, the pair of 0- and 1-voxels is traced in a way which is an extension of the edge-tracing in the border-following of two-dimensional pictures. However, in their method, the number of voxels to be traced becomes larger compared with a method which traces only 1-voxels. The connective relations among voxels are also difficult to extract in a direct way. The method proposed here adopts the procedure corresponding to the pixel-tracing in the border-following of two-dimensional pictures, i.e., tracing of border voxels of value 1. The paper includes the border-following algorithm applicable to any of 6-, 18-, and 26-connected figures, together with the algorithm to reconstruct the original picture from the list of border voxels. These algorithms are compared with the algorithm by Herman et al. A method is also shown which can extract the surrounding relations among three-dimensional figures during execution of the proposed algorithm.
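The voxel set the proposed algorithm traces, border voxels of value 1, can be characterised directly. A sketch for the 6-connected case (this only extracts the set; the paper's contribution is the ordered tracing over it):

```python
def border_voxels(volume):
    """Extract border voxels of a 3-D binary picture: 1-voxels with at
    least one 0-valued (or out-of-volume) 6-neighbour.
    volume: nested lists indexed [z][y][x] of 0/1."""
    zs, ys, xs = len(volume), len(volume[0]), len(volume[0][0])
    nbrs = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    border = []
    for z in range(zs):
        for y in range(ys):
            for x in range(xs):
                if not volume[z][y][x]:
                    continue
                for dz, dy, dx in nbrs:
                    nz, ny, nx = z + dz, y + dy, x + dx
                    outside = not (0 <= nz < zs and 0 <= ny < ys and 0 <= nx < xs)
                    if outside or not volume[nz][ny][nx]:
                        border.append((z, y, x))
                        break
    return border
```

Tracing only these 1-voxels, rather than 0/1-voxel pairs as in Herman et al.'s method, is what keeps the traced set small, per the comparison in the abstract.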

Journal ArticleDOI
TL;DR: This paper uses two-stage ternary up-, down- and up-down-type JK flip-flops, considers the construction of divide-by-2 to divide-by-9 counters, and determines the input equations for the ternary up- and down-type JK flip-flops.
Abstract: Construction methods for ternary logic circuits and ternary tri-stable flip-flops, using ternary basic operational circuits composed only of CMOS ICs, have been reported. This paper discusses construction methods for ternary sequential circuits, such as ternary counters, using those ternary logic circuits and ternary tri-stable flip-flops. Among the various types of ternary tri-stable flip-flops, this paper uses two-stage ternary up-, down- and up-down-type JK flip-flops. As the logical-type ternary counters, up-, down- and up-down-type divide-by-10 counters are realized. Then, considering the construction of divide-by-2 to divide-by-9 counters, the input equations for the ternary up- and down-type JK flip-flops are determined. The feedback-type ternary divide-by-8 counter is also discussed. As the ternary shift-register-type counters, a ternary ring counter, a ternary Johnson counter, and clockwise and counterclockwise cycling counters are realized.

Journal ArticleDOI
TL;DR: A method of network analysis is proposed which provides a nonsingular hierarchical tearing of subnetworks so that the decomposed sub-matrix is always of admittance type and the sparsity technique of the submatrices is unnecessary.
Abstract: Diakoptics is a kind of hybrid analysis. It has a problem wherein the calculation of the inverse of the sparse admittance submatrix for a subnetwork produces a non-sparse impedance submatrix, so the sparsity of the submatrix is not utilized effectively for a large-scale network. From such a viewpoint, tearing methods have recently been proposed which consider the sparsity of submatrices. In the methods proposed until now, however, a complex sparsity technique is required for the processing of submatrices, which is separate from the tearing algorithm. For these kinds of algorithms, the computational complexity is difficult to evaluate, and optimal tearing based on such evaluation has not been considered. This paper avoids the hybrid analysis and proposes a method of network analysis which provides a nonsingular hierarchical tearing of subnetworks so that the decomposed submatrix is always of admittance type. As a result, the node conductance matrix is of a block diagonal structure with bordered block-diagonal matrices as the blocks. By iterating the hierarchical tearing, each block is reduced further to the bordered block-diagonal structure with denser submatrices as blocks. By this method, the preprocessing of tearing and LU decomposition can be separated, making the sparsity technique for the submatrices unnecessary. Based on the evaluation of the computational complexity, almost optimal tearing can be performed.

Journal ArticleDOI
TL;DR: A new design method for the hardware algorithm using recurrence relations is proposed, and it is shown that its algorithm can be applied to a class of recurrence relations which is wider than the class C.
Abstract: With the progress in VLSI technology, a remarkable approach, called the hardware algorithm, to realize various kinds of algorithms directly as hardware on a VLSI chip has become practical. No significant result, however, has been obtained on the systematic formulation of the hardware algorithm or its design method. This paper proposes a new design method for the hardware algorithm using recurrence relations. As the first step, a class C of problems is introduced which is defined by recurrence relations with two variables. Several practically important problems, such as string matching, are contained in this class. It is shown that, letting the size of the problem be m, one can always construct a hardware algorithm of O(m) steps to solve any problem in the class C. Then, by imposing a certain restriction on the recurrence relation, a subclass C1 of the class C is defined. It is shown that a hardware algorithm of O(log m + m/log m) steps can be constructed for the subclass C1. Finally, the extension of the class C is discussed, and it is shown that its algorithm can be applied, with some modifications, to a class of recurrence relations which is wider than the class C.
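Why a two-variable recurrence admits an O(m) hardware schedule can be seen on a concrete instance. Taking edit distance as a stand-in example of a class-C-style recurrence (string matching is the abstract's example; the choice here is an assumption), every cell on an antidiagonal depends only on the two previous antidiagonals, so all cells of one antidiagonal could update simultaneously in hardware:

```python
def edit_distance_antidiagonal(a, b):
    """Evaluate the edit-distance recurrence one antidiagonal (i + j = d)
    at a time. Each antidiagonal reads only the previous two, so a
    hardware array could compute it in one step: O(m + n) steps total."""
    m, n = len(a), len(b)
    D = [[0] * (n + 1) for _ in range(m + 1)]
    steps = 0
    for d in range(m + n + 1):                 # one "hardware step" per diagonal
        for i in range(max(0, d - n), min(m, d) + 1):
            j = d - i
            if i == 0:
                D[i][j] = j
            elif j == 0:
                D[i][j] = i
            else:
                cost = 0 if a[i - 1] == b[j - 1] else 1
                D[i][j] = min(D[i - 1][j] + 1,     # all three inputs lie on
                              D[i][j - 1] + 1,     # diagonals d-1 and d-2
                              D[i - 1][j - 1] + cost)
        steps += 1
    return D[m][n], steps
```

Sequentially this is the usual O(mn) table fill; the point is that the `steps` counter, m + n + 1, is the depth a parallel realization needs, matching the O(m) claim for class C.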

Journal ArticleDOI
TL;DR: This paper presents a proposal for infrared ray emission computed tomography (IRECT), a method for visualizing the two-dimensional physical quantity distributions on an arbitrary transaxial layer of an object by taking the infrared radiation intensity emitted from the object as the projection data.
Abstract: This paper presents a proposal for infrared ray emission computed tomography (IRECT). This is a method for visualizing the two-dimensional physical quantity distributions (e.g., temperature, concentration) on an arbitrary transaxial layer of an object by taking the infrared radiation intensity emitted from the object as the projection data. To verify the usefulness of this method, an experimental system with a pyroelectric detector is developed for measuring flame temperature distribution, and this system is applied to a Bunsen flame. The experiments yield good images of the thermal intensity distribution in the cross section of the flame. In addition, the usefulness of this method in measuring temperature distribution is confirmed by comparing the results with thermocouple probe measurement.

Journal ArticleDOI
TL;DR: The proposed class of codes is characterized in that, with byte length b, it is constructed from conventional SEC-DED-SbED codes of byte length b/2, and the weight of each column vector of its parity check matrix H (the number of 1's) is odd.
Abstract: The SEC-DED-SbED code is capable of correcting a single-bit error, detecting a random double-bit error, and detecting a b-bit block error. It is expected to be widely used for highly reliable semiconductor memory systems composed of memory devices with b-bit output. In this paper a new theoretical construction method for SEC-DED-SbED codes is proposed. The proposed class of codes is characterized in that, with byte length b, it is constructed from conventional SEC-DED-SbED codes of byte length b/2, and the weight of each column vector of its parity check matrix H (the number of 1's) is odd. The bit length n of the code is given as n ≤ b · 2^(r-b) · C(r/2, b/2) bits, when the check-bit length r and the byte length b are even. It has the longest bit length among the known code classes over a large range of byte lengths.

Journal ArticleDOI
TL;DR: An edge detection algorithm is described which is the most basic operation of color image processing and attention is given to effective gray scale intensity analysis of the gray level image.
Abstract: In the present layout scanner, the operations for changing color partially, or extracting a portion of it, have been done interactively by specifying the location through a digitizer. Usually, image analysis techniques and artificial intelligence techniques are applied to region extraction and region matching operations. This paper describes an edge detection algorithm which is the most basic operation of color image processing. First, attention is given to effective gray scale intensity analysis of the gray level image. Then the concept of the reduction of brightness and chroma is considered as an addition of the differences in terms of hue and chroma to the differences in terms of brightness. Based on that concept, the difference equation is derived and the edge detection experiments are reported.

Journal ArticleDOI
TL;DR: A systematic control scheme for load and function distribution of the data-driven execution function through the construction of clusters adapted to the hierarchical diagrammatical program structure is described and a semiquantitative guideline is presented for the distributed assignment of fundamental functions in the hierarchical cluster structure.
Abstract: The authors have proposed a high-level parallel processing system based on the data-driven principle (including history-sensitive processing by diagrammatical language) and have been investigating its realization. Realization of high-level processing by the data-driven principle requires a load and function distribution scheme to secure the smooth data flow corresponding to the input stream into the system. Together with the results of experiment, this paper describes a systematic control scheme for load and function distribution of the data-driven execution function through the construction of clusters adapted to the hierarchical diagrammatical program structure. In other words, the proposed system realizes the data-driven execution functions of the hierarchical system by an iterative structure composed of fundamental functions, namely input-output, function processing, history-sensitive processing and firing control. This paper first describes the load and function distribution scheme for the fundamental functions in the data-driven execution control. Then, using an experimental system with a common-bus multiprocessor structure, it is shown that the proposed system is useful in realizing high-level parallel processing. Lastly, based on the geometrical connection structure of the diagrammatical program, a semiquantitative guideline is presented for the distributed assignment of fundamental functions in the hierarchical cluster structure.

Journal ArticleDOI
TL;DR: A new high-speed pattern matching method is described for the binary image, based on the sequential similarity detection algorithm (SSDA), which is known as a means of high-speed realization.
Abstract: The template matching method is utilized in document information processing and industrial vision. It is a method to extract from the given image patterns similar to the specified pattern. With the increase of the size of the image and the number of template patterns, however, the high-speed realization of the method becomes a problem. In this paper, a new high-speed pattern matching method is described for the binary image, based on the sequential similarity detection algorithm (SSDA), which is known as a means of high-speed realization. In the first part, the expected computation time of SSDA is calculated using the occurrence probability of the subpattern in the template. Then a method is proposed in which the order of pixel comparison is determined based on the occurrence probability of the blockwise subpatterns in the given image. A detailed discussion is made for the case where no matching error is permitted in the method. The method is applied to the case of a two-dimensional IC chip pattern, indicating the effectiveness of the method in the high-speed realization of the pattern matching.
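With no matching error permitted, SSDA reduces to abandoning a candidate position at the first mismatching pixel, so the comparison order decides how quickly bad positions are rejected. The sketch below uses a simplified rarity heuristic (test the rarer pixel value first) in place of the paper's blockwise subpattern statistics:

```python
def match_positions(image, template, order=None):
    """Exact binary template matching with SSDA-style early abandonment.
    `order` is the sequence in which template pixels are compared; since no
    error is permitted, a candidate position is abandoned on the first
    mismatch, so testing unlikely-to-match pixels first rejects sooner."""
    H, W = len(image), len(image[0])
    h, w = len(template), len(template[0])
    if order is None:
        # Simplified stand-in for the paper's occurrence-probability ordering:
        # compare pixels whose template value is rarer in the image first.
        ones = sum(v for row in image for v in row)
        rare = 1 if ones * 2 < H * W else 0
        coords = [(y, x) for y in range(h) for x in range(w)]
        order = sorted(coords, key=lambda p: template[p[0]][p[1]] != rare)
    hits = []
    for y in range(H - h + 1):
        for x in range(W - w + 1):
            for ty, tx in order:
                if image[y + ty][x + tx] != template[ty][tx]:
                    break                     # SSDA: abandon this position early
            else:
                hits.append((y, x))           # every pixel matched
    return hits
```

The result is identical for any comparison order; only the expected number of pixel tests per rejected position changes, which is exactly the quantity the abstract says is analyzed.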

Journal ArticleDOI
TL;DR: This paper proposes the control scheme for the data flow in the bucket collection in GRACE, as well as a method of realizing the Bucket collection network, and the partitioning of the interconnection network for parallel operations is discussed.
Abstract: A database machine is required at present which can process complex queries against a large-scale database at high speed. We are now developing a high-performance database machine which can cope with such a requirement. The part of the machine for relational algebraic processing is called GRACE. GRACE is a parallel machine with a multimodule structure, and the interconnection networks among its components are important. From a functional viewpoint, the interconnection networks of GRACE can be divided into the bucket distribution network, which sends data into the staging space, and the bucket collection network, which sends data into the set of processors. This paper proposes the control scheme for the data flow in the bucket collection in GRACE, as well as a method of realizing the bucket collection network. The validity of the proposal was verified by evaluation and examination using simulation. To utilize the features of the processing scheme of GRACE, the indirect binary n-cube network is used as the interconnection network, which is a kind of multistage interconnection network. A very low transfer overhead was realized. Furthermore, the partitioning of the interconnection network for parallel operations is discussed.
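A defining property of the indirect binary n-cube class of multistage networks is self-routing: at stage k a 2x2 switch either passes the packet straight or exchanges it, so that bit k of the current address is forced to bit k of the destination. The following is a textbook model of that routing discipline, not GRACE's specific design:

```python
def route(src, dst, n):
    """Addresses visited by a packet crossing an n-stage indirect binary
    n-cube network from src to dst. Stage k fixes bit k of the address to
    the destination's bit k; the decision needs only the destination tag
    (self-routing), so no central controller is required."""
    path = [src]
    cur = src
    for k in range(n):
        bit = (dst >> k) & 1
        cur = (cur & ~(1 << k)) | (bit << k)  # set bit k to destination's bit k
        path.append(cur)
    return path
```

After n stages every bit agrees with the destination, so any source reaches any destination in exactly n switch traversals, which is what keeps the transfer overhead low and makes the network easy to partition along address bits.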

Journal ArticleDOI
TL;DR: A word spotting method using endpoint-free matching for the input pattern with an asymmetric DP path is presented and a similar method proposed by Nakatsu et al.
Abstract: Because utterance duration differs even for the same words spoken by the same person, normalization for time variances is essential in any pattern matching method of speech recognition. Several methods have been proposed to date but they have had drawbacks. We present herein a word spotting method using endpoint-free matching for the input pattern with an asymmetric DP path. A similar method proposed by Nakatsu et al. [8] is also discussed.
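The two ingredients named in the abstract can be sketched together: the template may begin at any input frame (endpoint-free start), and each input frame advances the path by exactly one step along the input axis with slope-constrained predecessors (asymmetric DP path). The local path set and normalization below are common textbook choices and are not necessarily those of the paper:

```python
def spot(template, input_seq, dist=lambda a, b: abs(a - b)):
    """Endpoint-free word spotting by asymmetric DP matching (a sketch).
    g[i][j] is the best accumulated distance ending at input frame i,
    template frame j, with predecessors (i-1, j), (i-1, j-1), (i-1, j-2).
    Returns (best distance normalized by template length, end frame)."""
    J, I = len(template), len(input_seq)
    INF = float("inf")
    g = [[INF] * J for _ in range(I)]
    for i in range(I):
        # endpoint-free start: the template may begin fresh at any frame
        g[i][0] = dist(input_seq[i], template[0])
        for j in range(1, J):
            if i == 0:
                continue
            prev = min(g[i - 1][j], g[i - 1][j - 1],
                       g[i - 1][j - 2] if j >= 2 else INF)
            if prev < INF:
                g[i][j] = prev + dist(input_seq[i], template[j])
    return min((g[i][J - 1] / J, i) for i in range(I))
```

Scanning the final template row over all input frames is what turns whole-word matching into spotting: a low normalized score at frame i marks a word ending there, with no prior segmentation of the input.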

Journal ArticleDOI
Shozo Tokinaga1
TL;DR: A method is presented which describes the features of electroencephalographic time-series in a simple linguistic form and performs data management based on that description, which indicates that the proposed method is a simple and effective means of representing EEG.
Abstract: In the retrieval of stored time-series data, as a means of improving the retrieval efficiency and of pre-processing for classification, it is effective to utilize the time-domain characteristics of the time-series data as the key in the retrieval. This paper presents a method which describes the features of electroencephalographic time-series in a simple linguistic form and performs data management based on that description. First, the ARMA model is applied to the general non-stationary time-series, which is partitioned into small intervals based on the spectrum estimation error. At the same time, for the cases where a sporadic (transient) waveform is included, the transient wave is detected effectively by adaptively modifying the input to the model. Then, identifying the class of the transient wave using recognition procedures such as smoothing, the features of the EEG are determined for the background and the transient waves. A linguistic expression is derived by applying predetermined production rules to those features. The properties of the linguistic expression are discussed, together with the reproducibility of the original time-series and the compression of the original data. This indicates that the proposed method is a simple and effective means of representing EEG. Finally, a method is described for retrieval of time-series data based on the linguistic expression, which can cope with the ambiguity of expressions.
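The final step, applying production rules to per-interval features to obtain a phrase, can be illustrated as below. The feature names, thresholds, and vocabulary are invented for illustration; the paper derives its rules from ARMA-model features, and only the standard EEG band boundaries are taken as given:

```python
def describe(features):
    """Map interval features to a linguistic label via simple production rules.
    Expects a dict with 'dominant_hz' and 'amplitude_uv' (hypothetical names);
    band boundaries follow the usual EEG convention, the amplitude threshold
    is an illustrative assumption."""
    freq, amp = features["dominant_hz"], features["amplitude_uv"]
    if freq > 13:
        band = "beta"
    elif freq >= 8:
        band = "alpha"
    elif freq >= 4:
        band = "theta"
    else:
        band = "delta"
    level = "high-amplitude" if amp > 50 else "low-amplitude"
    return f"{level} {band} activity"
```

A sequence of such labels, one per interval, is far smaller than the raw samples yet remains usable as a retrieval key, which is the compression-plus-retrieval trade-off the abstract discusses.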

Journal ArticleDOI
TL;DR: Algorithms not affected by the picture frame are presented for the general-purpose sequential computers and for the dedicated parallel-processing hardwares and shown to be suited to parallel processing with a small number of processors.
Abstract: This paper discusses the properties and the algorithms of the fusion operations, which are one of the fundamental techniques of binary image processing. First, a theoretical discussion is made for infinitely spread images, deriving several properties of the fusion operations. Those properties indicate the equivalence among the fusion operations and their iterations and combinations. Since actual input images are finite, those properties are not satisfied by a simple algorithm, due to the effect of the picture frame. This paper presents algorithms not affected by the picture frame, for general-purpose sequential computers and for dedicated parallel-processing hardware. Those algorithms perform the fusion operation by executing the first operation in the fusion (that is, the expansion or the contraction) together with the distance transformation, and by performing the second operation using the distance information obtained in the first. The proposed algorithms have the feature that most of the properties for infinite images also apply to images of finite size. With this feature, the proposed algorithms are shown to be suited to parallel processing with a small number of processors.
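The core idea, doing the second operation of a fusion from distance information rather than by iterated neighborhood passes, can be sketched for expansion-then-contraction (closing). The city-block metric and the simplified frame handling here are illustrative choices; the paper's algorithms compensate for the frame more carefully:

```python
from collections import deque

def distance_to(img, value):
    """City-block distance from every pixel to the nearest pixel equal to `value`
    (multi-source BFS; pixels outside the frame are ignored)."""
    h, w = len(img), len(img[0])
    d = [[h + w] * w for _ in range(h)]
    q = deque()
    for y in range(h):
        for x in range(w):
            if img[y][x] == value:
                d[y][x] = 0
                q.append((y, x))
    while q:
        y, x = q.popleft()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if 0 <= ny < h and 0 <= nx < w and d[ny][nx] > d[y][x] + 1:
                d[ny][nx] = d[y][x] + 1
                q.append((ny, nx))
    return d

def closing(img, r):
    """Expansion by r followed by contraction by r, each derived from one
    distance transform: expand = {distance to foreground <= r},
    contract = {distance to background > r}."""
    h, w = len(img), len(img[0])
    d1 = distance_to(img, 1)
    grown = [[1 if d1[y][x] <= r else 0 for x in range(w)] for y in range(h)]
    d0 = distance_to(grown, 0)
    return [[1 if d0[y][x] > r else 0 for x in range(w)] for y in range(h)]
```

Thresholding a distance map at r gives the same set as r iterated one-pixel expansions or contractions, so the cost of a fusion becomes independent of r, which is part of what makes the scheme attractive for parallel hardware.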

Journal ArticleDOI
TL;DR: Experimental results show that execution time and dynamic memory space requirement of the object program are reduced by these optimizations and the program can be executed in almost the same time as an equivalent program written in a procedural language such as PASCAL.
Abstract: Functional languages are known to have such advantages as simply defined semantics. However, little study has been made of compilation or optimization methods for efficient execution on a conventional machine. This paper formulates some optimization methods for compiling a functional-language ASL/F program into an object program in a procedural language, and further gives some sufficient conditions under which these optimizations may be performed. It introduces "needed-argument-first computation" and "globalization of sorts (data types)," which are new optimization methods that have not been discussed before. Sample programs compiled and executed by an experimental compiler exercise these optimization methods, as well as elimination of duplicated computation for common subterms, elimination of tail recursion, and rewriting at compile time. The experimental results show that the execution time and dynamic memory space requirement of the object program are reduced by these optimizations, and that the program can be executed in almost the same time as an equivalent program written in a procedural language such as PASCAL.
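Of the optimizations listed, tail-recursion elimination is the easiest to show concretely. The example below is in Python rather than ASL/F, purely to illustrate the transformation a compiler performs: a self tail call becomes an assignment to the parameters plus a jump, so no stack frame is consumed per step.

```python
def length_rec(xs, acc=0):
    """Tail-recursive list length, as a functional program would express it."""
    if not xs:
        return acc
    return length_rec(xs[1:], acc + 1)   # tail call: nothing remains to do after it

def length_loop(xs):
    """The same function after tail-recursion elimination: the recursive call
    is rewritten as parameter reassignment inside a loop."""
    acc = 0
    while xs:
        xs, acc = xs[1:], acc + 1
    return acc
```

The looped form needs constant stack space, which is one source of the reduced dynamic memory requirement the experiments report; the two forms are observably equivalent precisely because the recursive call is in tail position.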