Dissertation

Cellular Automata-based Algorithm for Liquid Diffusion Phenomenon Modeling using imaging technique

01 Jul 2013
TL;DR: The proposed LDP model achieves higher accuracy and lower computation time than other competitive LDP models, and the results show a direct relationship between temperature and diffusion speed.
Abstract: Recently, prediction of the dynamical behavior of the Liquid Diffusion Phenomenon (LDP) has been used in many applications, especially in the physical and biological fields. Many models have been proposed to predict LDP behavior, but most of them require complex mathematical calculations that consume computation time. This thesis proposes a dynamical behavior prediction algorithm that uses Cellular Automata (CA) to model the LDP. A real liquid diffusion phenomenon is recorded, and the observed images are extracted for later comparison with the predicted phenomenon. First, a mathematical method is proposed to track and then analyze the real diffusion behavior; this method uses thousands of original images. Then the CA-based algorithm creates the same number of predicted images. In this study, the diffusion speed of the predicted LDP is also computed using a proposed mathematical algorithm, the Diffusion Speed Algorithm (DSA). Finally, three benchmark strategies are used to compare the predicted images to the original ones: pixel intensity, Region-of-Diffusion (ROD) area, and ROD shape. The experiments of this thesis are divided into original and predicted images. The original images are classified into three groups based on the temperature used: ±18 °C, ±24 °C, and ±30 °C. Each temperature-based experiment contains five levels of droplet-source height. The diffusion time is 32 seconds at 15 fps, comprising 480 images per experiment. The predicted images are classified in the same way, with 15 predicted experiments created by the proposed CA algorithm. All predicted images are compared to their corresponding original ones. In total, there are 30 processed experiments comprising 14,400 original and predicted images.
The obtained results show that the average similarity percentage is 94.4%. Additionally, the average computation time needed to process a single experiment is 1.3 seconds. Compared to other competitive LDP models, the proposed LDP model has higher accuracy and lower computation time; it is about 15 times faster than a neural-network-based model. A detailed study of the effects of and relationships between the model's parameters, such as temperature and liquid viscosity, has been performed. The results show a direct relationship between temperature and diffusion speed.
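The CA approach summarized above can be illustrated with a minimal sketch. This is not the thesis's actual model: the lattice size, update rate, boundary handling, and initial droplet are all illustrative assumptions. Each cell holds a concentration value and exchanges a fraction of its local gradient with its four von Neumann neighbours at every synchronous step.

```python
import numpy as np

def ca_diffusion_step(grid, rate=0.2):
    """One synchronous CA update: each cell exchanges concentration with
    its four von Neumann neighbours (zero-flux boundaries via edge padding)."""
    padded = np.pad(grid, 1, mode="edge")
    neighbours = (padded[:-2, 1:-1] + padded[2:, 1:-1] +
                  padded[1:-1, :-2] + padded[1:-1, 2:])
    # discrete Laplacian: move a fraction of the local gradient per step;
    # rate <= 0.25 keeps the explicit update stable and non-negative
    return grid + rate * (neighbours - 4 * grid)

# a single droplet in the centre of a 64x64 lattice
grid = np.zeros((64, 64))
grid[32, 32] = 1.0
for _ in range(100):
    grid = ca_diffusion_step(grid)
# total mass is conserved while the droplet spreads outward
```

With this update rule the total concentration is conserved exactly (each pairwise exchange is symmetric), which is a useful sanity check before comparing predicted frames against observed ones.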
Citations
01 Jan 2002
TL;DR: This work proposes using a genetic algorithm (GA) as an inverse method to model fluid flow in a pore network cellular automaton (CA) to produce specified flow dynamic responses.
Abstract: Fluid flow in porous media is a dynamic process that is traditionally modeled using PDE (partial differential equations). In this approach, physical properties related to fluid flow are inferred from rock sample data. However, due to the limitations posed in the sample data (sparseness and noise), this method often yields inaccurate results. Consequently, production information is normally used to improve the accuracy of property estimation. This style of modeling is equivalent to solving inverse problems. We propose using a genetic algorithm (GA) as an inverse method to model fluid flow in a pore network cellular automaton (CA). This GA evolves the CA to produce specified flow dynamic responses. We apply this method to a rock sample data set. The results are presented and discussed. Additionally, the prospect of building the pore network CA machine is discussed.

2 citations

References
Journal ArticleDOI
TL;DR: Experimental results show that the proposed diamond search (DS) algorithm is better than the four-step search (4SS) and block-based gradient descent search (BBGDS), in terms of mean-square error performance and required number of search points.
Abstract: Based on the study of motion vector distribution from several commonly used test image sequences, a new diamond search (DS) algorithm for fast block-matching motion estimation (BMME) is proposed in this paper. Simulation results demonstrate that the proposed DS algorithm greatly outperforms the well-known three-step search (TSS) algorithm. Compared with the new three-step search (NTSS) algorithm, the DS algorithm achieves close performance but requires less computation by up to 22% on average. Experimental results also show that the DS algorithm is better than the four-step search (4SS) and block-based gradient descent search (BBGDS), in terms of mean-square error performance and required number of search points.
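As a rough illustration of the diamond search idea (not the paper's exact implementation), the sketch below repeats a 9-point large-diamond step until the minimum cost stays at the centre, then performs one 5-point small-diamond refinement. The block size and the SAD cost function are illustrative choices.

```python
import numpy as np

def sad(ref, cur, bx, by, dx, dy, B=8):
    """Sum of absolute differences between the current block at (bx, by)
    and the reference block displaced by (dx, dy)."""
    x, y = bx + dx, by + dy
    if x < 0 or y < 0 or x + B > ref.shape[1] or y + B > ref.shape[0]:
        return np.inf  # candidate falls outside the frame
    return np.abs(cur[by:by+B, bx:bx+B].astype(float) -
                  ref[y:y+B, x:x+B].astype(float)).sum()

def diamond_search(ref, cur, bx, by, B=8):
    """Minimal diamond search: large-diamond steps until the minimum
    stays at the centre, then one small-diamond refinement."""
    ldsp = [(0,0),(2,0),(-2,0),(0,2),(0,-2),(1,1),(1,-1),(-1,1),(-1,-1)]
    sdsp = [(0,0),(1,0),(-1,0),(0,1),(0,-1)]
    mv = (0, 0)
    while True:
        costs = [(sad(ref, cur, bx, by, mv[0]+dx, mv[1]+dy, B),
                  (mv[0]+dx, mv[1]+dy)) for dx, dy in ldsp]
        best = min(costs)[1]
        if best == mv:          # minimum at centre: switch to small diamond
            break
        mv = best
    costs = [(sad(ref, cur, bx, by, mv[0]+dx, mv[1]+dy, B),
              (mv[0]+dx, mv[1]+dy)) for dx, dy in sdsp]
    return min(costs)[1]
```

The unrestricted large-diamond walk is what lets the search track long motion vectors cheaply; the final small diamond pins down the one-pixel refinement.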

1,949 citations

Journal ArticleDOI
TL;DR: Simulation results show that the proposed 4SS performs better than the well-known three-step search and has similar performance to the new three-step search (N3SS) in terms of motion compensation errors.
Abstract: Based on the real world image sequence's characteristic of center-biased motion vector distribution, a new four-step search (4SS) algorithm with center-biased checking point pattern for fast block motion estimation is proposed in this paper. A halfway-stop technique is employed in the new algorithm with searching steps of 2 to 4 and the total number of checking points is varied from 17 to 27. Simulation results show that the proposed 4SS performs better than the well-known three-step search and has similar performance to the new three-step search (N3SS) in terms of motion compensation errors. In addition, the 4SS also reduces the worst-case computational requirement from 33 to 27 search points and the average computational requirement from 21 to 19 search points, as compared with N3SS.

1,619 citations

Book ChapterDOI
19 May 1992
TL;DR: In this paper, a hierarchical estimation framework for the computation of diverse representations of motion information is described, which includes a global model that constrains the overall structure of the motion estimated, a local model that is used in the estimation process, and a coarse-fine refinement strategy.
Abstract: This paper describes a hierarchical estimation framework for the computation of diverse representations of motion information. The key features of the resulting framework (or family of algorithms) are a global model that constrains the overall structure of the motion estimated, a local model that is used in the estimation process, and a coarse-fine refinement strategy. Four specific motion models: affine flow, planar surface flow, rigid body motion, and general optical flow, are described along with their application to specific examples.
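A toy estimator in the spirit of this coarse-to-fine framework can be sketched as follows. It is restricted to a single global translation (an illustrative simplification of the paper's motion models): a cheap exhaustive search at the coarsest pyramid level produces an estimate that is doubled and refined in a small window at each finer level.

```python
import numpy as np

def downsample(img):
    """Halve resolution by 2x2 block averaging (one pyramid level)."""
    h, w = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2
    return img[:h, :w].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def full_search(a, b, search):
    """Global translation between frames a and b by exhaustive SSD,
    treating the frames as circularly shifted copies."""
    best, best_d = np.inf, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            cost = ((a - np.roll(b, shift=(-dy, -dx), axis=(0, 1))) ** 2).sum()
            if cost < best:
                best, best_d = cost, (dx, dy)
    return best_d

def coarse_to_fine(a, b, levels=2, search=2):
    """Coarse-to-fine translation estimate: search at the coarsest level,
    then double and refine the estimate at each finer level."""
    if levels == 0:
        return full_search(a, b, search)
    dx, dy = coarse_to_fine(downsample(a), downsample(b), levels - 1, search)
    dx, dy = 2 * dx, 2 * dy            # propagate estimate to the finer level
    b_shift = np.roll(b, shift=(-dy, -dx), axis=(0, 1))
    rx, ry = full_search(a, b_shift, search)
    return dx + rx, dy + ry
```

The point of the hierarchy is that the search window stays tiny (here ±2 pixels per level) while the recoverable displacement grows geometrically with the number of levels.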

1,501 citations

Journal ArticleDOI
TL;DR: This paper describes a hierarchical computational framework for the determination of dense displacement fields from a pair of images, and an algorithm consistent with that framework, based on a scale-based separation of the image intensity information and the process of measuring motion.
Abstract: The robust measurement of visual motion from digitized image sequences has been an important but difficult problem in computer vision. This paper describes a hierarchical computational framework for the determination of dense displacement fields from a pair of images, and an algorithm consistent with that framework. Our framework is based on separating both the image intensity information and the process of measuring motion according to scale. The large-scale intensity information is first used to obtain rough estimates of image motion, which are then refined using intensity information at smaller scales. The estimates take the form of displacement (or velocity) vectors for pixels and are accompanied by a direction-dependent confidence measure. A smoothness constraint is employed to propagate measurements with high confidence to neighboring areas where the confidences are low. At all levels, the computations are pixel-parallel, uniform across the image, and based on information from a small neighborhood of a pixel. In our algorithm, the local displacement vectors are determined by minimizing the sum-of-squared differences (SSD) of intensities, the confidence measures are derived from the shape of the SSD surface, and the smoothness constraint is cast in the form of energy minimization. Results of applying our algorithm to pairs of real images are included.
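The local measurement step described in this abstract, SSD minimisation with a confidence derived from the SSD surface, can be sketched for a single pixel as follows. The patch size, search range, and one-dimensional curvature used as the confidence are illustrative simplifications, not the paper's exact formulation.

```python
import numpy as np

def ssd_displacement(img1, img2, px, py, half=3, search=5):
    """Estimate the displacement of pixel (px, py) from img1 to img2 by
    minimising the sum-of-squared differences of a small patch over a
    search window; return the displacement and a curvature-based
    confidence (sharper SSD minimum -> higher confidence)."""
    patch = img1[py-half:py+half+1, px-half:px+half+1].astype(float)
    best, best_d = np.inf, (0, 0)
    ssd_surface = {}
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            cand = img2[py+dy-half:py+dy+half+1,
                        px+dx-half:px+dx+half+1].astype(float)
            cost = ((patch - cand) ** 2).sum()
            ssd_surface[(dx, dy)] = cost
            if cost < best:
                best, best_d = cost, (dx, dy)
    # horizontal curvature of the SSD surface at the minimum as a
    # (simplified, direction-dependent) confidence measure
    dx, dy = best_d
    curv = (ssd_surface.get((dx + 1, dy), best) +
            ssd_surface.get((dx - 1, dy), best) - 2 * best)
    return best_d, curv
```

In a dense-field version this confidence is exactly what lets a smoothness term propagate reliable vectors into flat, low-confidence regions, as the abstract describes.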

1,175 citations

Journal ArticleDOI
TL;DR: The resulting technique is predominantly linear, efficient, and suitable for parallel processing, and is local in space-time, robust with respect to noise, and permits multiple estimates within a single neighborhood.
Abstract: We present a technique for the computation of 2D component velocity from image sequences. Initially, the image sequence is represented by a family of spatiotemporal velocity-tuned linear filters. Component velocity, computed from spatiotemporal responses of identically tuned filters, is expressed in terms of the local first-order behavior of surfaces of constant phase. Justification for this definition is discussed from the perspectives of both 2D image translation and deviations from translation that are typical in perspective projections of 3D scenes. The resulting technique is predominantly linear, efficient, and suitable for parallel processing. Moreover, it is local in space-time, robust with respect to noise, and permits multiple estimates within a single neighborhood. Promising quantitative results are reported from experiments with realistic image sequences, including cases with sizeable perspective deformation.

1,113 citations