
Showing papers by "French Institute for Research in Computer Science and Automation" published in 2005


Proceedings ArticleDOI
20 Jun 2005
TL;DR: It is shown experimentally that grids of histograms of oriented gradient (HOG) descriptors significantly outperform existing feature sets for human detection, and the influence of each stage of the computation on performance is studied.
Abstract: We study the question of feature sets for robust visual object recognition; adopting linear SVM based human detection as a test case. After reviewing existing edge and gradient based descriptors, we show experimentally that grids of histograms of oriented gradient (HOG) descriptors significantly outperform existing feature sets for human detection. We study the influence of each stage of the computation on performance, concluding that fine-scale gradients, fine orientation binning, relatively coarse spatial binning, and high-quality local contrast normalization in overlapping descriptor blocks are all important for good results. The new approach gives near-perfect separation on the original MIT pedestrian database, so we introduce a more challenging dataset containing over 1800 annotated human images with a large range of pose variations and backgrounds.
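
A minimal NumPy sketch of the pipeline described above: fine-scale gradients, orientation binning into cells, and contrast normalization over overlapping blocks. The cell size, block size, and 9 unsigned orientation bins are common choices assumed here for illustration, not necessarily the paper's exact configuration, and the linear SVM stage is omitted.

```python
import numpy as np

def hog_descriptor(image, cell=8, block=2, bins=9):
    """Compute a simple HOG descriptor for a 2-D grayscale image (float array)."""
    # Fine-scale gradients via central differences
    gy, gx = np.gradient(image.astype(np.float64))
    mag = np.hypot(gx, gy)
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180.0  # unsigned orientation in [0, 180)

    n_cy, n_cx = image.shape[0] // cell, image.shape[1] // cell
    hist = np.zeros((n_cy, n_cx, bins))
    bin_width = 180.0 / bins
    for cy in range(n_cy):
        for cx in range(n_cx):
            m = mag[cy*cell:(cy+1)*cell, cx*cell:(cx+1)*cell].ravel()
            a = ang[cy*cell:(cy+1)*cell, cx*cell:(cx+1)*cell].ravel()
            idx = np.minimum((a // bin_width).astype(int), bins - 1)
            # Magnitude-weighted orientation histogram for this cell
            hist[cy, cx] = np.bincount(idx, weights=m, minlength=bins)

    # Overlapping blocks of cells, each L2-normalized (local contrast normalization)
    blocks = []
    for by in range(n_cy - block + 1):
        for bx in range(n_cx - block + 1):
            v = hist[by:by+block, bx:bx+block].ravel()
            blocks.append(v / (np.linalg.norm(v) + 1e-6))
    return np.concatenate(blocks)

# Example: descriptor for a 64x128 detection window filled with random values
print(hog_descriptor(np.random.rand(128, 64)).shape)
```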

31,952 citations


Journal ArticleDOI
TL;DR: It is observed that the ranking of the descriptors is mostly independent of the interest region detector and that the SIFT-based descriptors perform best; moments and steerable filters show the best performance among the low-dimensional descriptors.
Abstract: In this paper, we compare the performance of descriptors computed for local interest regions, as, for example, extracted by the Harris-Affine detector [Mikolajczyk, K and Schmid, C, 2004]. Many different descriptors have been proposed in the literature. It is unclear which descriptors are more appropriate and how their performance depends on the interest region detector. The descriptors should be distinctive and at the same time robust to changes in viewing conditions as well as to errors of the detector. Our evaluation uses as criterion recall with respect to precision and is carried out for different image transformations. We compare shape context [Belongie, S, et al., April 2002], steerable filters [Freeman, W and Adelson, E, Sept. 1991], PCA-SIFT [Ke, Y and Sukthankar, R, 2004], differential invariants [Koenderink, J and van Doorn, A, 1987], spin images [Lazebnik, S, et al., 2003], SIFT [Lowe, D. G., 1999], complex filters [Schaffalitzky, F and Zisserman, A, 2002], moment invariants [Van Gool, L, et al., 1996], and cross-correlation for different types of interest regions. We also propose an extension of the SIFT descriptor and show that it outperforms the original method. Furthermore, we observe that the ranking of the descriptors is mostly independent of the interest region detector and that the SIFT-based descriptors perform best. Moments and steerable filters show the best performance among the low dimensional descriptors.
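
For reference, the recall versus 1-precision criterion used in this evaluation is commonly computed from match counts as follows (standard definitions, assumed here to correspond to the paper's usage):

```latex
\mathrm{recall} = \frac{\#\text{correct matches}}{\#\text{correspondences}},
\qquad
1 - \mathrm{precision} = \frac{\#\text{false matches}}{\#\text{correct matches} + \#\text{false matches}}
```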

7,057 citations


Journal ArticleDOI
TL;DR: This paper gives a snapshot of the state of the art in affine covariant region detectors, compares their performance on a set of test images under varying imaging conditions, and establishes a reference test set of images and performance software so that future detectors can be evaluated in the same framework.
Abstract: The paper gives a snapshot of the state of the art in affine covariant region detectors, and compares their performance on a set of test images under varying imaging conditions. Six types of detectors are included: detectors based on affine normalization around Harris (Mikolajczyk and Schmid, 2002; Schaffalitzky and Zisserman, 2002) and Hessian points (Mikolajczyk and Schmid, 2002), a detector of `maximally stable extremal regions', proposed by Matas et al. (2002); an edge-based region detector (Tuytelaars and Van Gool, 1999) and a detector based on intensity extrema (Tuytelaars and Van Gool, 2000), and a detector of `salient regions', proposed by Kadir, Zisserman and Brady (2004). The performance is measured against changes in viewpoint, scale, illumination, defocus and image compression. The objective of this paper is also to establish a reference test set of images and performance software, so that future detectors can be evaluated in the same framework.

3,359 citations


Journal ArticleDOI
01 Sep 2005
TL;DR: This paper builds on the idea of the Harris and Förstner interest point operators to detect local structures in space-time where the image values have significant local variations in both space and time, and illustrates how a video representation in terms of local space-time features allows for the detection of walking people in scenes with occlusions and dynamic, cluttered backgrounds.
Abstract: Local image features or interest points provide compact and abstract representations of patterns in an image. In this paper, we extend the notion of spatial interest points into the spatio-temporal domain and show how the resulting features often reflect interesting events that can be used for a compact representation of video data as well as for interpretation of spatio-temporal events. To detect spatio-temporal events, we build on the idea of the Harris and Förstner interest point operators and detect local structures in space-time where the image values have significant local variations in both space and time. We estimate the spatio-temporal extents of the detected events by maximizing a normalized spatio-temporal Laplacian operator over spatial and temporal scales. To represent the detected events, we then compute local, spatio-temporal, scale-invariant N-jets and classify each event with respect to its jet descriptor. For the problem of human motion analysis, we illustrate how a video representation in terms of local space-time features allows for detection of walking people in scenes with occlusions and dynamic cluttered backgrounds.
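
As an illustration of the detection step (standard notation assumed, not quoted from the paper): with μ a spatio-temporal second-moment matrix obtained by Gaussian smoothing of products of space-time derivatives of the image sequence L, interest points are taken at positive local maxima of an extended Harris function of the form

```latex
H = \det(\mu) - k\,\operatorname{trace}^{3}(\mu), \qquad
\mu = g(\cdot\,;\sigma^{2},\tau^{2}) * \left( \nabla L \, (\nabla L)^{\top} \right),
```

where σ and τ are the spatial and temporal smoothing scales and k is a small constant.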

2,684 citations


Book ChapterDOI
TL;DR: This paper presents a new classification of schema-based matching techniques that builds on top of the state of the art in both schema and ontology matching, distinguishing between approximate and exact techniques at the schema level, and syntactic, semantic, and external techniques at the element and structure levels.
Abstract: Schema and ontology matching is a critical problem in many application domains, such as semantic web, schema/ontology integration, data warehouses, e-commerce, etc. Many different matching solutions have been proposed so far. In this paper we present a new classification of schema-based matching techniques that builds on top of the state of the art in both schema and ontology matching. Among the innovations are new criteria based on (i) general properties of matching techniques, (ii) interpretation of input information, and (iii) the kind of input information. In particular, we distinguish between approximate and exact techniques at schema-level; and syntactic, semantic, and external techniques at element- and structure-level. Based on the proposed classification, we overview some of the recent schema/ontology matching systems, pointing out which part of the solution space they cover. The proposed classification provides a common conceptual basis, and, hence, can be used for comparing different existing schema/ontology matching techniques and systems as well as for designing new ones, taking advantage of state-of-the-art solutions.

1,285 citations


Book ChapterDOI
06 Jul 2005
TL;DR: AVISPA is a push-button tool for the automated validation of Internet security-sensitive protocols and applications that provides a modular and expressive formal language for specifying protocols and their security properties.
Abstract: AVISPA is a push-button tool for the automated validation of Internet security-sensitive protocols and applications. It provides a modular and expressive formal language for specifying protocols and their security properties, and integrates different back-ends that implement a variety of state-of-the-art automatic analysis techniques. To the best of our knowledge, no other tool exhibits the same level of scope and robustness while enjoying the same performance and scalability.

1,278 citations


Proceedings ArticleDOI
17 Oct 2005
TL;DR: It is shown that dense representations outperform equivalent keypoint based ones on these tasks and that SVM or mutual information based feature selection starting from a dense codebook further improves the performance.
Abstract: Visual codebook based quantization of robust appearance descriptors extracted from local image patches is an effective means of capturing image statistics for texture analysis and scene classification. Codebooks are usually constructed by using a method such as k-means to cluster the descriptor vectors of patches sampled either densely ('textons') or sparsely ('bags of features' based on key-points or salience measures) from a set of training images. This works well for texture analysis in homogeneous images, but the images that arise in natural object recognition tasks have far less uniform statistics. We show that for dense sampling, k-means over-adapts to this, clustering centres almost exclusively around the densest few regions in descriptor space and thus failing to code other informative regions. This gives suboptimal codes that are no better than using randomly selected centres. We describe a scalable acceptance-radius based clusterer that generates better codebooks and study its performance on several image classification tasks. We also show that dense representations outperform equivalent keypoint based ones on these tasks and that SVM or mutual information based feature selection starting from a dense codebook further improves the performance.
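
A minimal sketch of an acceptance-radius ("leader"-style) clusterer of the kind discussed above: a descriptor opens a new codebook centre unless it lies within a fixed radius r of an existing centre, which prevents all centres from collapsing onto the densest regions of descriptor space. This is an illustrative simplification, not the authors' exact scalable algorithm.

```python
import numpy as np

def acceptance_radius_codebook(descriptors, r):
    centres = []
    for d in descriptors:
        # Open a new codebook entry only if no existing centre is within radius r
        if not centres or min(np.linalg.norm(d - c) for c in centres) > r:
            centres.append(d)
    return np.array(centres)

# Example: build a codebook from 1000 random 128-D descriptors (values are arbitrary)
codebook = acceptance_radius_codebook(np.random.rand(1000, 128), r=3.0)
print(len(codebook), "codewords")
```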

817 citations


Journal ArticleDOI
01 Jun 2005
TL;DR: An overview of the main ideas behind JML, details about JML’s wide range of tools, and a glimpse into existing applications of JML are given.
Abstract: The Java Modeling Language (JML) can be used to specify the detailed design of Java classes and interfaces by adding annotations to Java source files. The aim of JML is to provide a specification language that is easy to use for Java programmers and that is supported by a wide range of tools for specification typechecking, runtime debugging, static analysis, and verification. This paper gives an overview of the main ideas behind JML, details about JML’s wide range of tools, and a glimpse into existing applications of JML.

789 citations


Journal ArticleDOI
TL;DR: A stochastic model is introduced that accurately captures the message delay in sparsely populated mobile ad hoc networks where nodes relay messages, and that predicts the message delay for both relay strategies under a number of mobility models.

615 citations


Journal ArticleDOI
TL;DR: In this paper, various approaches based on bounding volume hierarchies, distance fields and spatial partitioning are discussed for collision detection of deformable objects in interactive environments for surgery simulation and entertainment technology.
Abstract: Interactive environments for dynamically deforming objects play an important role in surgery simulation and entertainment technology. These environments require fast deformable models and very efficient collision handling techniques. While collision detection for rigid bodies is well investigated, collision detection for deformable objects introduces additional challenging problems. This paper focuses on these aspects and summarizes recent research in the area of deformable collision detection. Various approaches based on bounding volume hierarchies, distance fields and spatial partitioning are discussed. In addition, image-space techniques and stochastic methods are considered. Applications in cloth modeling and surgical simulation are presented.
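
As a small illustration of the bounding-volume-hierarchy approaches surveyed above, the sketch below shows the basic primitive they rely on: axis-aligned bounding boxes around (possibly deforming) triangles, tested for overlap before any exact intersection test. The helper names are illustrative, not taken from a specific library.

```python
import numpy as np

def aabb(vertices):
    """Axis-aligned bounding box (min corner, max corner) of an (n, 3) vertex array."""
    v = np.asarray(vertices, dtype=float)
    return v.min(axis=0), v.max(axis=0)

def aabb_overlap(box_a, box_b):
    """True if two AABBs intersect (a necessary condition for primitive collision)."""
    (min_a, max_a), (min_b, max_b) = box_a, box_b
    return bool(np.all(min_a <= max_b) and np.all(min_b <= max_a))

# Example: two triangles of a deforming mesh; boxes overlap, so an exact test would follow
tri1 = [(0, 0, 0), (1, 0, 0), (0, 1, 0)]
tri2 = [(0.5, 0.5, -0.1), (1.5, 0.5, 0.1), (0.5, 1.5, 0.0)]
print(aabb_overlap(aabb(tri1), aabb(tri2)))  # True
```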

591 citations


Proceedings ArticleDOI
25 May 2005
TL;DR: This paper studies the dynamic aspects of the coverage of a mobile sensor network that depend on the process of sensor movement, and derives optimal mobility strategies for sensors and targets from their own perspectives.
Abstract: Previous work on the coverage of mobile sensor networks focuses on algorithms to reposition sensors in order to achieve a static configuration with an enlarged covered area. In this paper, we study the dynamic aspects of the coverage of a mobile sensor network that depend on the process of sensor movement. As time goes by, a position is more likely to be covered; targets that might never be detected in a stationary sensor network can now be detected by moving sensors. We characterize the area coverage at specific time instants and during time intervals, as well as the time it takes to detect a randomly located stationary target. Our results show that sensor mobility can be exploited to compensate for the lack of sensors and improve network coverage. For mobile targets, we take a game theoretic approach and derive optimal mobility strategies for sensors and targets from their own perspectives.
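
For intuition, a standard Boolean-model calculation (an illustrative assumption, not the paper's specific derivation): if sensors form a Poisson point process of density λ, each covering a disc of radius r, the probability that a given point is covered at a given instant is

```latex
P[\text{covered}] = 1 - e^{-\lambda \pi r^{2}},
```

and sensor mobility increases the fraction of the area covered over a time interval beyond this instantaneous value.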

Proceedings ArticleDOI
17 Jul 2005
TL;DR: This paper proposes a simple and provably secure additively homomorphic stream cipher that allows efficient aggregation of encrypted data and shows that aggregation based on this cipher can be used to efficiently compute statistical values such as mean, variance and standard deviation of sensed data, while achieving significant bandwidth gain.
Abstract: Wireless sensor networks (WSNs) are ad-hoc networks composed of tiny devices with limited computation and energy capacities. For such devices, data transmission is a very energy-consuming operation. It thus becomes essential to the lifetime of a WSN to minimize the number of bits sent by each device. One well-known approach is to aggregate sensor data (e.g., by adding) along the path from sensors to the sink. Aggregation becomes especially challenging if end-to-end privacy between sensors and the sink is required. In this paper, we propose a simple and provably secure additively homomorphic stream cipher that allows efficient aggregation of encrypted data. The new cipher only uses modular additions (with very small moduli) and is therefore very well suited for CPU-constrained devices. We show that aggregation based on this cipher can be used to efficiently compute statistical values such as mean, variance and standard deviation of sensed data, while achieving significant bandwidth gain.
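
A minimal sketch of an additively homomorphic stream cipher in the spirit described above: each sensor adds a keystream value to its reading modulo M, intermediate nodes simply add ciphertexts, and the sink subtracts the keystream sum to recover the aggregate. The keystream derivation (hashing a node key with a message identifier) and the modulus are illustrative assumptions, not the paper's exact construction.

```python
import hashlib

M = 2**32  # modulus large enough to hold the sum of all plaintexts

def keystream(key: bytes, msg_id: int) -> int:
    return int.from_bytes(hashlib.sha256(key + msg_id.to_bytes(8, "big")).digest()[:4], "big")

def encrypt(m: int, key: bytes, msg_id: int) -> int:
    return (m + keystream(key, msg_id)) % M

def aggregate(ciphertexts):
    return sum(ciphertexts) % M  # addition of ciphertexts = addition of plaintexts

def decrypt_sum(agg: int, keys, msg_id: int) -> int:
    return (agg - sum(keystream(k, msg_id) for k in keys)) % M

# Example: three sensors report temperatures; the sink recovers only their sum
keys = [b"node-a", b"node-b", b"node-c"]
readings = [21, 23, 22]
cts = [encrypt(m, k, msg_id=7) for m, k in zip(readings, keys)]
print(decrypt_sum(aggregate(cts), keys, msg_id=7))  # 66
```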

Journal ArticleDOI
TL;DR: Context is not simply the state of a predefined environment with a fixed set of interaction resources; it's part of a process of interacting with an ever-changing environment composed of reconfigurable, migratory, distributed, and multiscale resources.
Abstract: Context is not simply the state of a predefined environment with a fixed set of interaction resources. It's part of a process of interacting with an ever-changing environment composed of reconfigurable, migratory, distributed, and multiscale resources.

Book ChapterDOI
02 Oct 2005
TL;DR: This paper explores the idea of using aspect-oriented modeling to add precise action specifications with static type checking and genericity at the meta level, and believes that such a combination would bring significant benefits to the community, such as the specification, simulation and testing of operational semantics of metamodels.
Abstract: Nowadays, object-oriented meta-languages such as MOF (Meta-Object Facility) are increasingly used to specify domain-specific languages in the model-driven engineering community. However, these meta-languages focus on structural specifications and have no built-in support for specifications of operational semantics. In this paper we explore the idea of using aspect-oriented modeling to add precise action specifications with static type checking and genericity at the meta level, and examine related issues and possible solutions. We believe that such a combination would bring significant benefits to the community, such as the specification, simulation and testing of operational semantics of metamodels. We present requirements for such statically-typed meta-languages and rationales for the aforementioned benefits.

Journal ArticleDOI
TL;DR: The methods are applicable to general nonlinear and non-Gaussian models for the target dynamics and measurement likelihood, and provide efficient solutions to two very pertinent problems: the data association problem that arises due to unlabelled measurements in the presence of clutter, and the curse of dimensionality that arises due to the increased size of the state-space associated with multiple targets.
Abstract: We present Monte Carlo methods for multi-target tracking and data association. The methods are applicable to general nonlinear and non-Gaussian models for the target dynamics and measurement likelihood. We provide efficient solutions to two very pertinent problems: the data association problem that arises due to unlabelled measurements in the presence of clutter, and the curse of dimensionality that arises due to the increased size of the state-space associated with multiple targets. We develop a number of algorithms to achieve this. The first, which we refer to as the Monte Carlo joint probabilistic data association filter (MC-JPDAF), is a generalisation of the strategy proposed by Schulz et al. (2001) and Schulz et al. (2003). As is the case for the JPDAF, the distributions of interest are the marginal filtering distributions for each of the targets, but these are approximated with particles rather than Gaussians. We also develop two extensions to the standard particle filtering methodology for tracking multiple targets. The first, which we refer to as the sequential sampling particle filter (SSPF), samples the individual targets sequentially by utilising a factorisation of the importance weights. The second, which we refer to as the independent partition particle filter (IPPF), assumes the associations to be independent over the individual targets, leading to an efficient component-wise sampling strategy to construct new particles. We evaluate and compare the proposed methods on a challenging synthetic tracking problem.
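
For context, the sketch below is a generic single-target bootstrap (sampling-importance-resampling) particle filter of the kind these methods build on; it does not implement MC-JPDAF, the data-association step, or the sequential/independent-partition sampling schemes. The model choices (random-walk dynamics, Gaussian likelihood) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def particle_filter(measurements, n_particles=500, proc_std=1.0, meas_std=2.0):
    particles = rng.normal(0.0, 10.0, n_particles)  # initial position hypotheses
    estimates = []
    for z in measurements:
        particles += rng.normal(0.0, proc_std, n_particles)        # predict
        w = np.exp(-0.5 * ((z - particles) / meas_std) ** 2)         # weight by likelihood
        w /= w.sum()
        estimates.append(np.sum(w * particles))                      # posterior mean
        idx = rng.choice(n_particles, n_particles, p=w)              # resample
        particles = particles[idx]
    return estimates

# Example: track a 1-D target drifting at +0.5 per step from noisy measurements
truth = np.cumsum(np.full(50, 0.5))
meas = truth + rng.normal(0.0, 2.0, 50)
print(particle_filter(meas)[-1], truth[-1])
```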

Journal ArticleDOI
TL;DR: It is proved that if γ is nonzero but small enough, there exist node spatial densities for which the network contains a large (theoretically infinite) cluster of nodes, enabling distant nodes to communicate in multiple hops.
Abstract: We study the impact of interferences on the connectivity of large-scale ad hoc networks, using percolation theory. We assume that a bi-directional connection can be set up between two nodes if the signal to noise ratio at the receiver is larger than some threshold. The noise is the sum of the contribution of interferences from all other nodes, weighted by a coefficient γ, and of a background noise. We find that there is a critical value of γ above which the network is made of disconnected clusters of nodes. We also prove that if γ is nonzero but small enough, there exist node spatial densities for which the network contains a large (theoretically infinite) cluster of nodes, enabling distant nodes to communicate in multiple hops. Since small values of γ cannot be achieved without efficient CDMA codes, we investigate the use of a very simple TDMA scheme, where nodes can emit only every nth time slot. We show that it achieves connectivity similar to the previous system with a parameter γ/n.
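
In this class of models, the bi-directional link condition described above is typically written as a signal-to-interference-plus-noise constraint of the following form (notation assumed for illustration: P_k transmit powers, L(·,·) attenuation, N_0 background noise, β the threshold):

```latex
\frac{P_i \, L(x_i, x_j)}{N_0 + \gamma \sum_{k \neq i, j} P_k \, L(x_k, x_j)} \;\ge\; \beta .
```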

Journal ArticleDOI
TL;DR: In this article, an Eulerian diffuse interface model for the simulation of compressible multifluid and two-phase flow problems is presented. The model is derived from a seven-equation, two-pressure, two-velocity model of Baer-Nunziato type using an asymptotic analysis in the limit of zero relaxation time.

Journal ArticleDOI
TL;DR: A symmetric formulation is proposed, which combines single- and double-layer potentials, and which is new to the field of EEG, although it has been applied to other problems in electromagnetism.
Abstract: The forward electroencephalography (EEG) problem involves finding a potential V from the Poisson equation ∇·(σ∇V) = f, in which f represents electrical sources in the brain, and σ the conductivity of the head tissues. In the piecewise constant conductivity head model, this can be accomplished by the boundary element method (BEM) using a suitable integral formulation. Most previous work uses the same integral formulation, corresponding to a double-layer potential. We present a conceptual framework based on a well-known theorem (Theorem 1) that characterizes harmonic functions defined on the complement of a bounded smooth surface. This theorem says that such harmonic functions are completely defined by their values and those of their normal derivatives on this surface. It allows us to cast the previous BEM approaches in a unified setting and to develop two new approaches corresponding to different ways of exploiting the same theorem. Specifically, we first present a dual approach which involves a single-layer potential. Then, we propose a symmetric formulation, which combines single- and double-layer potentials, and which is new to the field of EEG, although it has been applied to other problems in electromagnetism. The three methods have been evaluated numerically using a spherical geometry with known analytical solution, and the symmetric formulation achieves a significantly higher accuracy than the alternative methods. Additionally, we present results with realistically shaped meshes. Besides providing a better understanding of the foundations of BEM methods, our approach appears to lead also to more efficient algorithms.

Journal ArticleDOI
TL;DR: The TGV tool is presented, which allows for the automatic synthesis of conformance test cases from a formal specification of a (non-deterministic) reactive system and some ongoing work on test synthesis is described.
Abstract: This paper presents the TGV tool, which allows for the automatic synthesis of conformance test cases from a formal specification of a (non-deterministic) reactive system. TGV was developed by Irisa Rennes and Verimag Grenoble, with the support of the Vasy team of Inria Rhône-Alpes. The paper describes the main elements of the underlying testing theory, which is based on a model of transition systems that distinguishes inputs, outputs and internal actions, and on the concept of conformance relation. The principles of the test synthesis process, as well as the main algorithms, are explained. We then describe the main characteristics of the TGV tool and refer to some industrial experiments that have been conducted to validate the approach. As a conclusion, we describe some ongoing work on test synthesis.

Book ChapterDOI
11 Apr 2005
TL;DR: The PASCAL Visual Object Classes (VOC) Challenge ran from February to March 2005, with the goal of recognizing objects from a number of visual object classes in realistic scenes (i.e., not pre-segmented objects).
Abstract: The PASCAL Visual Object Classes Challenge ran from February to March 2005. The goal of the challenge was to recognize objects from a number of visual object classes in realistic scenes (i.e. not pre-segmented objects). Four object classes were selected: motorbikes, bicycles, cars and people. Twelve teams entered the challenge. In this chapter we provide details of the datasets, algorithms used by the teams, evaluation criteria, and results achieved.

Journal ArticleDOI
TL;DR: An algorithm to split an image into a sum u + v of a bounded variation component and a component containing the textures and the noise is constructed, inspired by a recent work of Y. Meyer.
Abstract: We construct an algorithm to split an image into a sum u + v of a bounded variation component and a component containing the textures and the noise. This decomposition is inspired by a recent work of Y. Meyer. We find this decomposition by minimizing a convex functional which depends on the two variables u and v, alternately in each variable. Each minimization is based on a projection algorithm to minimize the total variation. We carry out the mathematical study of our method. We present some numerical results. In particular, we show how the u component can be used in nontextured SAR image restoration.
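
A representative convex functional for this kind of u + v splitting, written here only to illustrate the alternating-minimization idea (the exact energy and constraint are assumptions, not quoted from the paper): u is the bounded-variation part, v the oscillating part measured in Meyer's G-norm, and f the input image:

```latex
\inf_{u,\,v} \; \int |Du| \;+\; \frac{1}{2\lambda}\,\| f - u - v \|_{L^2}^{2}
\quad \text{subject to} \quad \| v \|_{G} \le \mu ,
```

each partial minimization being carried out with a total-variation projection algorithm of the kind mentioned above.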

Journal ArticleDOI
TL;DR: A new model to simulate the three-dimensional (3-D) growth of glioblastoma multiforme (GBM), the most aggressive glial tumor, and a new coupling equation taking into account the mechanical influence of the tumor cells on the invaded tissues are proposed.
Abstract: We propose a new model to simulate the three-dimensional (3-D) growth of glioblastoma multiforme (GBM), the most aggressive glial tumor. The GBM speed of growth depends on the invaded tissue: faster in white than in gray matter, it is stopped by the dura or the ventricles. These different structures are introduced into the model using an atlas matching technique. The atlas includes both the segmentations of anatomical structures and diffusion information in white matter fibers. We use the finite element method (FEM) to simulate the invasion of the GBM in the brain parenchyma and its mechanical interaction with the invaded structures (mass effect). Depending on the considered tissue, the former effect is modeled with a reaction-diffusion or a Gompertz equation, while the latter is based on a linear elastic brain constitutive equation. In addition, we propose a new coupling equation taking into account the mechanical influence of the tumor cells on the invaded tissues. The tumor growth simulation is assessed by comparing the in-silico GBM growth with the real growth observed on two magnetic resonance images (MRIs) of a patient acquired 6 months apart. Results show the feasibility of this new conceptual approach and justify its further evaluation.
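
The reaction-diffusion part of such a growth model typically takes the following form (a standard formulation given for illustration; the paper's exact equations, coefficients, and its Gompertz and mechanical coupling terms differ), where c is the normalized tumor cell density, D(x) a diffusion tensor that is larger along white-matter fibers, and ρ the proliferation rate:

```latex
\frac{\partial c}{\partial t} \;=\; \nabla \cdot \big( D(\mathbf{x}) \, \nabla c \big) \;+\; \rho \, c \,(1 - c),
```

with no-flux boundary conditions on barriers such as the dura and the ventricles.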

Journal ArticleDOI
TL;DR: In this paper, error-correcting codes from perfect nonlinear mappings are constructed, and then employed to construct secret sharing schemes, and many of them are optimal or almost optimal.
Abstract: In this paper, error-correcting codes from perfect nonlinear mappings are constructed, and then employed to construct secret sharing schemes. The error-correcting codes obtained in this paper are very good in general, and many of them are optimal or almost optimal. The secret sharing schemes obtained in this paper have two types of access structures. The first type is democratic in the sense that every participant is involved in the same number of minimal-access sets. In the second type of access structures, there are a few dictators who are in every minimal access set, while each of the remaining participants is in the same number of minimal-access sets.

Journal ArticleDOI
TL;DR: This paper compares the properties of various norms that are dual of Sobolev or Besov norms, and proposes a decomposition model which splits an image into three components: a first one containing the structure of the image, a second one the texture of theimage, and a third one the noise.
Abstract: Following a recent work by Y. Meyer, decomposition models into a geometrical component and a textured component have been proposed in image processing. In such approaches, negative Sobolev norms have seemed to be useful to model oscillating patterns. In this paper, we compare the properties of various norms that are dual of Sobolev or Besov norms. We then propose a decomposition model which splits an image into three components: a first one containing the structure of the image, a second one the texture of the image, and a third one the noise. Our decomposition model relies on the use of three different semi-norms: the total variation for the geometrical component, a negative Sobolev norm for the texture, and a negative Besov norm for the noise. We illustrate our study with numerical examples.

Journal ArticleDOI
01 Sep 2005
TL;DR: The notion of loose ε-sample is introduced, and it is shown that the set of loose ε-samples contains and is asymptotically identical to the set of ε-samples; loose ε-samples are easier to check and to construct.
Abstract: The notion of ε-sample, introduced by Amenta and Bern, has proven to be a key concept in the theory of sampled surfaces. Of particular interest is the fact that, if E is an ε-sample of a C2-continuous surface S for a sufficiently small ε, then the Delaunay triangulation of E restricted to S is a good approximation of S, both in a topological and in a geometric sense. Hence, if one can construct an ε-sample, one also gets a good approximation of the surface. Moreover, correct reconstruction is ensured by various algorithms. In this paper, we introduce the notion of loose ε-sample. We show that the set of loose ε-samples contains and is asymptotically identical to the set of ε-samples. The main advantage of loose ε-samples over ε-samples is that they are easier to check and to construct. We also present a simple algorithm that constructs provably good surface samples and meshes. Given a C2-continuous surface S without boundary, the algorithm generates a sparse ε-sample E and at the same time a triangulated surface Dels(E). The triangulated surface has the same topological type as S, is close to S for the Hausdorff distance and can provide good approximations of normals, areas and curvatures. A notable feature of the algorithm is that the surface needs only to be known through an oracle that, given a line segment, detects whether the segment intersects the surface and, in the affirmative, returns the intersection points. This makes the algorithm useful in a wide variety of contexts and for a large class of surfaces.

Journal ArticleDOI
TL;DR: ABF++ robustly parameterizes mesh models of hundreds of thousands and millions of triangles within minutes, and is extremely suitable for efficiently parameterizing models for geometry-processing applications.
Abstract: Conformal parameterization of mesh models has numerous applications in geometry processing. Conformality is desirable for remeshing, surface reconstruction, and many other mesh processing applications. Subject to the conformality requirement, these applications typically benefit from parameterizations with smaller stretch. The Angle Based Flattening (ABF) method, presented a few years ago, generates provably valid conformal parameterizations with low stretch. However, it is quite time-consuming and becomes error prone for large meshes due to numerical error accumulation. This work presents ABF++, a highly efficient extension of the ABF method, that overcomes these drawbacks while maintaining all the advantages of ABF. ABF++ robustly parameterizes meshes of hundreds of thousands and millions of triangles within minutes. It is based on three main components: (1) a new numerical solution technique that dramatically reduces the dimension of the linear systems solved at each iteration, speeding up the solution; (2) a new robust scheme for reconstructing the 2D coordinates from the angle space solution that avoids the numerical instabilities which hindered the ABF reconstruction scheme; and (3) an efficient hierarchical solution technique. The speedup with (1) does not come at the expense of greater distortion. The hierarchical technique (3) enables parameterization of models with millions of faces in seconds at the expense of a minor increase in parametric distortion. The parameterizations computed by ABF++ are provably valid; that is, they contain no flipped triangles. As a result of these extensions, the ABF++ method is extremely suitable for robustly and efficiently parameterizing models for geometry-processing applications.

Journal ArticleDOI
TL;DR: YASS applies transition-constrained seeds to specify the most probable conserved motifs between homologous sequences, combined with a flexible hit criterion used to identify groups of seeds that are likely to exhibit significant alignments.
Abstract: YASS is a DNA local alignment tool based on an efficient and sensitive filtering algorithm. It applies transition-constrained seeds to specify the most probable conserved motifs between homologous sequences, combined with a flexible hit criterion used to identify groups of seeds that are likely to exhibit significant alignments. A web interface (http://www.loria.fr/projects/YASS/) is available to upload input sequences in fasta format, query the program and visualize the results obtained in several forms (dot-plot, tabular output and others). A standalone version is available for download from the web page.
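
A minimal sketch of how a transition-constrained seed can be checked against two DNA words: '#' positions require an exact match, '@' positions also accept a transition (A↔G or C↔T), and '-' positions are ignored. The specific pattern and helper names are illustrative assumptions, not YASS's actual seeds or code.

```python
TRANSITIONS = {("A", "G"), ("G", "A"), ("C", "T"), ("T", "C")}

def seed_hit(s1: str, s2: str, pattern: str = "##@#-##@#") -> bool:
    """True if two equal-length DNA words match under the given seed pattern."""
    assert len(s1) == len(s2) == len(pattern)
    for a, b, p in zip(s1.upper(), s2.upper(), pattern):
        if p == "#" and a != b:
            return False
        if p == "@" and a != b and (a, b) not in TRANSITIONS:
            return False
    return True

# Example: the two words differ only by a G<->A transition at an '@' position
print(seed_hit("ACGTTACGT", "ACATTACGT"))  # True
```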

Proceedings ArticleDOI
13 Mar 2005
TL;DR: It is established that any piecewise linear movement applied to a user preserves the uniform distribution of position and direction, provided that users were initially distributed uniformly throughout the space with equal likelihood of being pointed in any direction.
Abstract: A number of mobility models have been proposed for the purpose of either analyzing or simulating the movement of users in a mobile wireless network. Two of the more popular are the random waypoint and the random direction models. The random waypoint model is physically appealing but difficult to understand. Although the random direction model is less appealing physically, it is much easier to understand. User speeds are easily calculated, unlike for the waypoint model, and, as we observe, user positions and directions are uniformly distributed. The contribution of this paper is to establish this last property for a rich class of random direction models that allow future movements to depend on past movements. To this end, we consider finite one- and two-dimensional spaces. We consider two variations, the random direction model with wrap around and with reflection. We establish a simple relationship between these two models and, for both, show that positions and directions are uniformly distributed for a class of Markov movement models regardless of initial position. In addition, we establish a sample path property for both models, namely that any piecewise linear movement applied to a user preserves the uniform distribution of position and direction provided that users were initially distributed uniformly throughout the space with equal likelihood of being pointed in any direction.
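
A minimal simulation sketch of the random direction model with wrap-around on the unit square, of the kind analyzed above; the specific direction, speed and travel-time distributions are illustrative assumptions. Over many users and long runs, the sampled positions remain (approximately) uniformly distributed.

```python
import math
import random

def random_direction_trajectory(steps=1000, dt=0.01):
    x, y = random.random(), random.random()          # initial position, uniform on [0,1)^2
    positions = []
    while len(positions) < steps:
        theta = random.uniform(0.0, 2.0 * math.pi)   # new direction
        speed = random.uniform(0.5, 1.5)
        travel = random.expovariate(1.0)             # travel time before the next turn
        t = 0.0
        while t < travel and len(positions) < steps:
            x = (x + speed * math.cos(theta) * dt) % 1.0   # wrap around
            y = (y + speed * math.sin(theta) * dt) % 1.0
            positions.append((x, y))
            t += dt
    return positions

# Example: first few positions of one simulated user
print(random_direction_trajectory()[:3])
```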

Journal ArticleDOI
TL;DR: A partitioned Newton-based method is provided for solving the nonlinear coupled systems arising in the numerical approximation of fluid-structure interaction problems; it relies on exact evaluation of the cross-Jacobians, involving the shape derivative of the fluid state with respect to solid motion perturbations.

Journal ArticleDOI
TL;DR: A general-purpose error estimate based on the interpolation error is proposed; it produces an anisotropic metric map that is used to govern mesh element creation and improve the accuracy of the solutions.