
Showing papers by "University of Marne-la-Vallée published in 2014"


Journal ArticleDOI
TL;DR: The theory of Gaussian multiplicative chaos, introduced in Kahane's seminal work of 1985, is reviewed by the authors, together with its many applications, ranging from finance to quantum gravity.
Abstract: In this article, we review the theory of Gaussian multiplicative chaos initially introduced by Kahane’s seminal work in 1985. Though this beautiful paper faded from memory until recently, it already contains ideas and results that are nowadays under active investigation, like the construction of the Liouville measure in $2d$-Liouville quantum gravity or thick points of the Gaussian Free Field. Also, we mention important extensions and generalizations of this theory that have emerged ever since and discuss a whole family of applications, ranging from finance, through the Kolmogorov-Obukhov model of turbulence to $2d$-Liouville quantum gravity. This review also includes new results like the convergence of discretized Liouville measures on isoradial graphs (thus including the triangle and square lattices) towards the continuous Liouville measures (in the subcritical and critical case) or multifractal analysis of the measures in all dimensions.
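For orientation, the object under review can be written schematically. In the subcritical case, given a log-correlated Gaussian field $X$ in dimension $d$ with regularizations $X_\varepsilon$, the Gaussian multiplicative chaos measure is the renormalized limit (this is the standard formulation, not a formula specific to the new results of the paper):

$$M_\gamma(dx) \;=\; \lim_{\varepsilon \to 0}\, e^{\gamma X_\varepsilon(x) - \frac{\gamma^2}{2}\,\mathbb{E}\left[X_\varepsilon(x)^2\right]}\, dx, \qquad 0 \le \gamma < \sqrt{2d}.$$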

469 citations


Journal ArticleDOI
TL;DR: A short overview of recent results on a specific class of Markov processes, the Piecewise Deterministic Markov Processes (PDMPs), together with a brief survey of numerical methods used for simulating them.
Abstract: We give a short overview of recent results on a specific class of Markov process: the Piecewise Deterministic Markov Processes (PDMPs). We first recall the definition of these processes and give some general results. On more specific cases such as the TCP model or a model of switched vector fields, better results can be proved, especially as regards long time behaviour. We continue our review with an infinite dimensional example of neuronal activity. From the statistical point of view, these models provide specific challenges: we illustrate this point with the example of the estimation of the distribution of the inter-jumping times. We conclude with a short overview on numerical methods used for simulating PDMPs.
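As an illustration of the TCP model mentioned above, here is a minimal Python sketch of one standard variant of that PDMP: the congestion window grows linearly at unit speed and is halved at random times, with jump rate equal to the current window size. The exact inter-jump sampling follows from inverting the integrated rate along the deterministic flow; the parameter choices are illustrative, not taken from the paper.

```python
import random
import math

def simulate_tcp_pdmp(x0=1.0, horizon=50.0, seed=0):
    """Simulate the TCP window-size PDMP on [0, horizon].

    Between jumps the window grows at unit speed, x(t) = x + t.
    Jumps occur with state-dependent rate lambda(x) = x and halve the window.
    Returns a list of (jump_time, window_just_before, window_just_after).
    """
    rng = random.Random(seed)
    t, x = 0.0, x0
    jumps = []
    while True:
        # Inter-jump time: solve x*s + s^2/2 = E with E ~ Exp(1),
        # i.e. invert the integrated jump rate along the deterministic flow.
        e = rng.expovariate(1.0)
        s = -x + math.sqrt(x * x + 2.0 * e)
        if t + s > horizon:
            break
        t += s
        x_before = x + s          # deterministic growth until the jump
        x = x_before / 2.0        # multiplicative jump (window halved)
        jumps.append((t, x_before, x))
    return jumps

if __name__ == "__main__":
    traj = simulate_tcp_pdmp()
    print(f"{len(traj)} jumps; last few: {traj[-3:]}")
```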

68 citations


Journal ArticleDOI
TL;DR: A primal-dual interior-point optimization algorithm allowing a constrained least squares estimation approach; compared with existing methods it is more flexible, since it can handle any linear equality and/or inequality constraint, and has the advantage of a reduced computational cost.
Abstract: Hyperspectral data unmixing aims at identifying the components (endmembers) of an observed surface and at determining their fractional abundances inside each pixel area. Assuming that the spectral signatures of the surface components have been previously determined by an endmember extraction algorithm, or are part of an available spectral library, the main problem is reduced to the estimation of the fractional abundances. For large hyperspectral image data sets, the estimation of the abundance maps requires the resolution of a large-scale optimization problem subject to linear constraints such as non-negativity and a sum less than or equal to one. This paper proposes a primal-dual interior-point optimization algorithm allowing a constrained least squares estimation approach. In comparison with existing methods, the proposed algorithm is more flexible since it can handle any linear equality and/or inequality constraint and has the advantage of a reduced computational cost. It also presents an algorithmic structure suitable for a parallel implementation on modern intensive computing devices such as Graphics Processing Units (GPUs). The implementation issues are discussed and the applicability of the proposed approach is illustrated with the help of examples on synthetic and real hyperspectral data.
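The abstract does not spell out the interior-point iterations, but the underlying per-pixel problem is easy to state: minimize ||S a - y||^2 over abundance vectors a that are non-negative and sum to at most one. The sketch below solves it with a simple projected-gradient baseline (not the paper's primal-dual interior-point method); the endmember matrix and pixel spectrum are synthetic placeholders.

```python
import numpy as np

def project_capped_simplex(v):
    """Euclidean projection onto {a : a >= 0, sum(a) <= 1}."""
    w = np.maximum(v, 0.0)
    if w.sum() <= 1.0:
        return w
    # Otherwise project onto the probability simplex (sum(a) = 1).
    u = np.sort(w)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u * np.arange(1, len(u) + 1) > (css - 1.0))[0][-1]
    theta = (css[rho] - 1.0) / (rho + 1.0)
    return np.maximum(w - theta, 0.0)

def estimate_abundances(S, y, n_iter=500):
    """Constrained least squares  min_a ||S a - y||^2  s.t. a >= 0, sum(a) <= 1,
    solved by projected gradient (illustrative baseline, not the paper's algorithm)."""
    lip = np.linalg.norm(S, 2) ** 2          # Lipschitz constant of the gradient
    a = np.full(S.shape[1], 1.0 / S.shape[1])
    for _ in range(n_iter):
        grad = S.T @ (S @ a - y)
        a = project_capped_simplex(a - grad / lip)
    return a

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    S = rng.random((100, 4))                  # 4 synthetic endmember spectra
    a_true = np.array([0.5, 0.3, 0.2, 0.0])
    y = S @ a_true + 0.01 * rng.standard_normal(100)
    print(np.round(estimate_abundances(S, y), 3))
```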

58 citations


Journal ArticleDOI
21 Mar 2014
TL;DR: The application of a quantitative approach to publicly available data collected from pornographic websites shows that the accumulation of categories does not separate scripts from each other, but instead draws a multitude of significant paths between fuzzy categories.
Abstract: The development of the web has increased the diversity of pornographic content, and at the same time the rise of online platforms has initiated a new trend of quantitative research that makes possible the analysis of data on an unprecedented scale. This paper explores the application of a quantitative approach to publicly available data collected from pornographic websites. Several analyses are applied to these digital traces with a focus on keywords describing videos and their underlying categorization systems. The analysis of a large network of tags shows that the accumulation of categories does not separate scripts from each other, but instead draws a multitude of significant paths between fuzzy categories. The datasets and tools we describe have been made publicly available for further study.

42 citations


Journal ArticleDOI
TL;DR: In this paper, the authors consider a general supervised learning problem with strongly convex and Lipschitz loss and study the problem of model selection aggregation, given a finite dictionary of functions (learners) together with a prior.
Abstract: We consider a general supervised learning problem with strongly convex and Lipschitz loss and study the problem of model selection aggregation. In particular, given a finite dictionary of functions (learners) together with a prior, we generalize the results obtained by Dai, Rigollet and Zhang (2012) for Gaussian regression with squared loss and fixed design to this learning setup. Specifically, we prove that the Q-aggregation procedure outputs an estimator that satisfies optimal oracle inequalities both in expectation and with high probability. Our proof techniques somewhat depart from traditional proofs by basing most of the standard arguments on the Laplace transform of the empirical process to be controlled. AMS 2000 subject classifications: Primary 62H25; secondary 62F04, 90C22.
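For orientation, the optimal oracle inequality referred to here has the following schematic form, stated from the general model-selection-aggregation literature rather than copied from the paper: given a dictionary $f_1, \dots, f_M$ with prior weights $\pi_1, \dots, \pi_M$ and risk $R$, the aggregate $\hat f$ satisfies

$$\mathbb{E}\, R(\hat f) \;\le\; \min_{1 \le j \le M} \left\{ R(f_j) + \frac{c \, \log(1/\pi_j)}{n} \right\},$$

for a constant $c$ depending on the strong convexity and Lipschitz parameters of the loss, with an analogous bound holding with high probability.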

30 citations


Journal ArticleDOI
TL;DR: The algorithm is shown to provide reliable estimates of the mean/variance of the Gaussian noise and of the scale parameter of the Poisson component, as well as of its exponential decay rate; the mean estimate of the Poisson component can be interpreted as a good-quality denoised version of the data.
Abstract: The problem of estimating the parameters of a Poisson-Gaussian model from experimental data has recently raised much interest in various applications, for instance in confocal fluorescence microscopy. In this context, a field of independent random variables is observed, which is varying both in time and space. Each variable is a sum of two components, one following a Poisson and the other a Gaussian distribution. In this paper, a general formulation is considered where the associated Poisson process is nonstationary in space and also exhibits an exponential decay in time, whereas the Gaussian component corresponds to a stationary white noise with arbitrary mean. To solve the considered parametric estimation problem, we follow an iterative Expectation-Maximization (EM) approach. The parameter update equations involve deriving finite approximations of infinite sums. Expressions for the maximum error incurred in the process are also given. Since the problem is non-convex, we pay attention to the EM initialization, using a moment-based method where recent optimization tools come into play. We carry out a performance analysis by computing the Cramér-Rao bounds on the estimated variables. The practical performance of the proposed estimation procedure is illustrated on both synthetic data and real fluorescence macroscopy image sequences. The algorithm is shown to provide reliable estimates of the mean/variance of the Gaussian noise and of the scale parameter of the Poisson component, as well as of its exponential decay rate. In particular, the mean estimate of the Poisson component can be interpreted as a good-quality denoised version of the data.
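To make the mixed Poisson-Gaussian model concrete, the sketch below simulates observations y = q*p + w with p ~ Poisson(lam) and w ~ N(0, sigma^2), and recovers (q, lam, sigma^2) from the first three empirical moments. This is only a toy, stationary, zero-mean version of the moment-based initialization idea; the paper's model additionally handles a nonzero Gaussian mean, spatial nonstationarity and temporal decay, and refines the estimate with EM.

```python
import numpy as np

def simulate_poisson_gaussian(q=2.0, lam=5.0, sigma=1.5, n=200_000, seed=0):
    """Observations y = q * p + w with p ~ Poisson(lam), w ~ N(0, sigma^2)."""
    rng = np.random.default_rng(seed)
    return q * rng.poisson(lam, size=n) + sigma * rng.standard_normal(n)

def moment_init(y):
    """Moment-based initial estimates of (q, lam, sigma^2).

    Uses  E[y] = q*lam,  Var[y] = q^2*lam + sigma^2,
    and the third central moment  E[(y - E[y])^3] = q^3 * lam.
    """
    m1 = y.mean()
    m2c = y.var()
    m3c = np.mean((y - m1) ** 3)
    q = np.sqrt(m3c / m1)
    lam = m1 / q
    sigma2 = m2c - q * m1
    return q, lam, sigma2

if __name__ == "__main__":
    y = simulate_poisson_gaussian()
    q, lam, sigma2 = moment_init(y)
    print(f"q={q:.3f}, lambda={lam:.3f}, sigma2={sigma2:.3f}")
```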

29 citations


Posted Content
TL;DR: The Cambrian Hopf algebra, as discussed by the authors, generalizes J.-L. Loday and M. Ronco's Hopf algebra on binary trees; its products and coproducts are described combinatorially in terms of operations on Cambrian trees.
Abstract: Cambrian trees are oriented and labeled trees which fulfill local conditions around each node generalizing the conditions for classical binary search trees. Based on the bijective correspondence between signed permutations and leveled Cambrian trees, we define the Cambrian Hopf algebra generalizing J.-L. Loday and M. Ronco's algebra on binary trees. We describe combinatorially the products and coproducts of both the Cambrian algebra and its dual in terms of operations on Cambrian trees. We also define multiplicative bases of the Cambrian algebra and study structural and combinatorial properties of their indecomposable elements. Finally, we extend to the Cambrian setting different algebras connected to binary trees, in particular S. Law and N. Reading's Baxter Hopf algebra on quadrangulations and S. Giraudo's equivalent Hopf algebra on twin binary trees, and F. Chapoton's Hopf algebra on all faces of the associahedron.

28 citations


Journal ArticleDOI
TL;DR: In this paper, the authors focus on the neotectonics of the southern Atlas of Tunisia based on morphometric analyses and find that two structures in the study area (Jebels Ben Younes and Bou Ramli) are distinguished by a specific morphometric footprint: high residual topography, high hypsometric integral values for some watersheds (HI > 0.6), a high frequency of drainage anomalies, high maximum vertical curvature, and a high roughness index.

28 citations


Proceedings ArticleDOI
TL;DR: In this article, a Si-Ge bipolar process was used to realize on-chip optical links of 50 micron length, utilizing 650-850 nm propagation wavelength, with a power consumption of only 0.1 mW per GHz bandwidth.
Abstract: Micron-dimensioned on-chip optical links of 50 micron length, utilizing 650–850 nm propagation wavelengths, have been realized in a Si-Ge bipolar process. A key design strategy is the utilization of high-speed avalanche-based Si light emitting devices (Si Av LEDs) in combination with silicon nitride based waveguides and high-speed Si-Ge based optical detectors. The optical source, waveguide and detector were all integrated on the same chip. TEOS densification strategies and state-of-the-art Si-Ge bipolar technology were further used as key design strategies. Best performances show up to 25 GHz RF carrier modulation and -40 dBm total optical link budget loss with a power consumption of only 0.1 mW per GHz bandwidth. Improvement possibilities still exist. The process used is in regular production. The technology is particularly suitable for application as optical interconnects utilizing low-loss, side-surface, waveguide to optical fibre coupling.

21 citations


Proceedings ArticleDOI
06 Mar 2014
TL;DR: A solution that re-introduces coherence in the application, in order to keep the advantages of choreography while dealing with this main issue, together with a comparison between the checking cost and the reliability improvement.
Abstract: Facilitating the creation of Internet of Things (IoT) applications is a major concern to increase its development. D-LITe, our previous work, is a framework for that purpose. In D-LITe, Objects are considered as part of a whole application. They offer a REST web service that describes Object capabilities, receives the logic to be executed, and interacts with other stakeholders. Then, the complete application is seen as a choreography dynamically deployed on various objects. But the main issue of choreographies is the loss of coherence. Because of their unreliability, some networks used in IoT may introduce de-synchronization between Objects, leading to errors and failures. In this paper, we propose a solution to re-introduce coherence in the application, in order to keep the advantages of choreography while dealing with this main issue. An overlay of logical check-points at the application layer defines links between the coherent states of a set of objects and triggers re-synchronization messages. Correcting statements are thus spread through the network, which enables fault recovery in choreographies. This paper ends with a comparison between the checking cost and the reliability improvement.

20 citations


Journal ArticleDOI
TL;DR: In this paper, the authors studied the problem of the efficient estimation of the jumps for stochastic processes, and showed that a threshold estimator of the sequence of jumps, based on the discrete observations, reaches the minimal variance of the previous convolution theorem.
Abstract: We study the problem of the efficient estimation of the jumps for stochastic processes. We assume that the stochastic jump process $(X_t)_{t\in[0,1]}$ is observed discretely, with a sampling step of size $1/n$. In the spirit of Hajek's convolution theorem, we show some lower bounds for the estimation error of the sequence of the jumps $(\Delta X_{T_k})_k$. As an intermediate result, we prove a LAMN property, with rate $\sqrt{n}$, when the marks of the underlying jump component are deterministic. We deduce then a convolution theorem, with an explicit asymptotic minimal variance, in the case where the marks of the jump component are random. To prove that this lower bound is optimal, we show that a threshold estimator of the sequence of jumps $(\Delta X_{T_k})_k$ based on the discrete observations, reaches the minimal variance of the previous convolution theorem.
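The threshold estimator discussed above is simple to illustrate: observe increments on a grid of mesh 1/n and declare a jump wherever an increment exceeds a threshold u_n that dominates the typical size of the continuous part (of order sqrt(log n / n)). The sketch below applies it to a Brownian motion plus compound Poisson path; the threshold constant is an illustrative choice, not one advocated in the paper.

```python
import numpy as np

def sample_path(n=10_000, sigma=1.0, jump_rate=5.0, jump_scale=1.0, seed=1):
    """Discretized increments of X = sigma*Brownian motion + compound Poisson on [0, 1]."""
    rng = np.random.default_rng(seed)
    dt = 1.0 / n
    increments = sigma * np.sqrt(dt) * rng.standard_normal(n)
    n_jumps = rng.poisson(jump_rate)
    jump_times = rng.random(n_jumps)
    jump_sizes = jump_scale * rng.standard_normal(n_jumps)
    bins = np.minimum((jump_times * n).astype(int), n - 1)
    np.add.at(increments, bins, jump_sizes)   # add each jump to its increment
    return increments, jump_times, jump_sizes

def threshold_jump_estimates(increments, sigma=1.0):
    """Keep the increments exceeding u_n ~ sqrt(log n / n) as jump estimates."""
    n = len(increments)
    u_n = 4.0 * sigma * np.sqrt(np.log(n) / n)    # illustrative threshold constant
    idx = np.nonzero(np.abs(increments) > u_n)[0]
    return (idx + 1) / n, increments[idx]          # estimated jump times and sizes

if __name__ == "__main__":
    incs, times, sizes = sample_path()
    order = np.argsort(times)
    t_hat, s_hat = threshold_jump_estimates(incs)
    print("true jumps:", np.round(times[order], 3), np.round(sizes[order], 2))
    print("estimated :", np.round(t_hat, 3), np.round(s_hat, 2))
```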

Journal ArticleDOI
TL;DR: In this paper, a compress-and-forward relaying strategy using nested lattice codes is proposed and the proposed scheme is improved by performing a layered coding: a common layer is decoded by both receivers and a refinement layer is recovered only by the receiver which has the best channel conditions.
Abstract: This paper studies the Gaussian Two-Way Relay Channel, where two communication nodes exchange messages with each other via a relay. It is assumed that all nodes operate in half-duplex mode without any direct link between the communication nodes. A compress-and-forward relaying strategy using nested lattice codes is first proposed. Then, the proposed scheme is improved by performing layered coding: a common layer is decoded by both receivers and a refinement layer is recovered only by the receiver which has the best channel conditions. The achievable rates of the new scheme are characterized and are shown to be higher than those provided by the decode-and-forward strategy in some regions.

Journal ArticleDOI
21 May 2014
TL;DR: The saturation technique, adapted to the model checking of pushdown systems by Bouajjani et al., is surveyed; it enables global reachability analysis of a pushdown system, constructing a regular representation of all configurations reachable from a given set of initial configurations.
Abstract: Pushdown systems have, over the past 15 years, been popular with the software verification community. Their stack can be used to model the call stack of a first-order recursive program, with the control state holding valuations of the program's global variables, and stack characters encoding the local variable valuations. As such, the control flow of first-order recursive programs (such as C and Java programs) can be accurately modelled [29]. Pushdown systems have played a key role in the automata-theoretic approach to software model checking, and considerable progress has been made in the implementation of scalable model checkers of pushdown systems. These tools (e.g. Bebop [3] and Moped [21, 39, 52, 50]) are essential back-end components of high-profile model checkers such as SLAM [2]. A fundamental result for the model checking of pushdown systems was established by Büchi in [12]. He showed that the set of stack contents reachable from the initial configuration of a pushdown system forms a regular language and hence can be represented by a finite state automaton. The procedure provided by Büchi to compute this automaton from the pushdown system is exponential. In [15], Caucal gave the first polynomial time algorithm to solve this problem. This efficient computation is obtained by a saturation process where transitions are incrementally added to the finite automaton. This technique, which is the topic of this survey, was simplified and adapted to the model-checking setting by Bouajjani et al. in [7] and independently by Finkel et al. in [22]. The saturation technique allows global model checking of pushdown systems. For example, one may construct a regular representation of all configurations reachable from a given set of initial configurations, or, dually, the set of all configurations that may reach a given set of target configurations. As well as providing direct solutions to simple reachability properties (e.g. can an error state be reached from a designated initial configuration), the representations constructed by global analyses may be reused in a variety of settings. For example, one may perform multiple (and dynamic) queries on the set of reachable states without having to re-run the model checking routine. Additionally, these representations may be combined as part of a larger algorithm or proof. For example, Bouajjani et al. provided solutions to the model checking problem for the alternation-free μ-calculus by combining the results obtained through multiple global reachability analyses [7]. In this survey, we present the saturation method under its different forms for reachability problems in Section 3. The saturation technique also generalises to the analysis of two-player games played over the configuration graph of a pushdown system. This extension, based on the work of Cachat [13] and Hague and Ong [28], is presented in Section 4. In Section 5, we review the various model-checking tools that
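The saturation procedure itself fits in a few lines. The sketch below computes pre*(C) for a pushdown system: starting from a finite automaton A accepting a regular set of target configurations C, it repeatedly applies the rule "if ⟨p, γ⟩ ↪ ⟨q, w⟩ and the current automaton has a path from q reading w to s, add the transition p --γ--> s" until no more transitions can be added. This is the textbook formulation of the technique surveyed above, written for rules with |w| ≤ 2; the example system is made up for illustration.

```python
def pre_star(rules, transitions):
    """Saturation-based computation of pre* for a pushdown system.

    rules:       iterable of (p, gamma, q, w), meaning <p, gamma> -> <q, w>, len(w) <= 2
    transitions: initial transitions {(state, stack_symbol, state)} of an automaton A
                 accepting the target configurations; control states are automaton states.
    Returns the saturated transition relation: <p, gamma.w'> is in pre*(L(A)) iff the
    saturated automaton accepts gamma.w' when started in state p.
    """
    trans = set(transitions)
    changed = True
    while changed:
        changed = False
        for (p, gamma, q, w) in rules:
            # All states reachable from q by reading the word w in the current automaton.
            if len(w) == 0:
                targets = {q}
            elif len(w) == 1:
                targets = {s for (r, a, s) in trans if r == q and a == w[0]}
            else:  # len(w) == 2
                mids = {s for (r, a, s) in trans if r == q and a == w[0]}
                targets = {s for (r, a, s) in trans if r in mids and a == w[1]}
            for s in targets:
                if (p, gamma, s) not in trans:
                    trans.add((p, gamma, s))   # saturation step: add p --gamma--> s
                    changed = True
    return trans

if __name__ == "__main__":
    # Toy system: <p, a> -> <p, aa> and <p, a> -> <q, eps>; target configuration <q, eps>,
    # i.e. the initial automaton has final state q and no transitions.
    rules = [("p", "a", "p", ("a", "a")), ("p", "a", "q", ())]
    print(sorted(pre_star(rules, set())))     # [('p', 'a', 'q')]: pre* = {<q,eps>, <p,a>}
```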

Proceedings ArticleDOI
01 Jun 2014
TL;DR: This result shows the synergistic SDoF gains of alternating CSIT and illustrates that, as opposed to encoding separately over different states, an improved SDoF region is achievable by joint encoding across these states.
Abstract: We study the problem of secure transmission over a two-user Gaussian multi-input single-output (MISO) broadcast channel under the assumption that the channel to each receiver is conveyed either perfectly (P) or with delay (D) to the transmitter. Denoting by S1 and S2 the channel state information at the transmitter (CSIT) of user 1 and user 2, respectively, the overall CSIT can then alternate between four states, i.e., (S1, S2) ∈ {P, D}^2. We denote by λS1S2 the fraction of time the state S1S2 occurs, and focus on the symmetric case such that λS1S2 = λS2S1. Under these assumptions, we first consider the Gaussian MISO wiretap channel and characterize the secure degrees of freedom (SDoF). Next, we generalize this model to the two-user Gaussian MISO broadcast channel and establish an inner bound on the SDoF region. This result shows the synergistic SDoF gains of alternating CSIT and illustrates that, as opposed to encoding separately over different states, a higher SDoF region is achievable by joint encoding across these states.

Journal ArticleDOI
TL;DR: In this article, the authors developed an efficient bit allocation strategy for subband-based image coding systems based on a rate-distortion optimality criterion and formulated the problem as a convex optimization problem.
Abstract: In this paper, we develop an efficient bit allocation strategy for subband-based image coding systems. More specifically, our objective is to design a new optimization algorithm based on a rate-distortion optimality criterion. To this end, we consider the uniform scalar quantization of a class of mixed distributed sources following a Bernoulli-generalized Gaussian distribution. This model appears to be particularly well-adapted for image data, which have a sparse representation in a wavelet basis. In this paper, we propose new approximations of the entropy and the distortion functions using piecewise affine and exponential forms, respectively. Because of these approximations, bit allocation is reformulated as a convex optimization problem. Solving the resulting problem allows us to derive the optimal quantization step for each subband. Experimental results show the benefits that can be drawn from the proposed bit allocation method in a typical transform-based coding application.
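The paper's contribution is the convex reformulation built on the entropy/distortion approximations, but the basic structure of subband bit allocation can be illustrated with the classical Lagrangian approach: each subband exposes a discrete set of (rate, distortion) operating points (one per candidate quantization step), and for a multiplier λ one picks, independently in each subband, the point minimizing D + λR, then adjusts λ by bisection to meet the rate budget. The operating points below are synthetic toy models; this is a generic baseline, not the algorithm proposed in the paper.

```python
import numpy as np

def allocate(points_per_band, rate_budget, iters=60):
    """Lagrangian bit allocation over subbands.

    points_per_band: list of arrays of shape (K, 2); each row is an operating
                     point (rate in bits, distortion) for one candidate quantizer.
    Picks one point per subband minimizing D + lambda * R, with lambda tuned by
    bisection so that the total rate approaches the budget.
    """
    def pick(lam):
        choices = [pts[np.argmin(pts[:, 1] + lam * pts[:, 0])] for pts in points_per_band]
        return choices, sum(c[0] for c in choices), sum(c[1] for c in choices)

    lo, hi = 0.0, 1e6                      # bracket for the Lagrange multiplier
    for _ in range(iters):
        lam = 0.5 * (lo + hi)
        _, total_rate, _ = pick(lam)
        if total_rate > rate_budget:
            lo = lam                       # too many bits spent: increase the rate penalty
        else:
            hi = lam
    return pick(hi)                        # hi always corresponds to a feasible rate

if __name__ == "__main__":
    bands = []
    for variance in (4.0, 1.0, 0.25):      # synthetic subbands with decreasing energy
        steps = np.geomspace(0.05, 2.0, 16)
        rate = np.maximum(0.5 * np.log2(variance / steps**2), 0.0)   # toy R(q) model
        dist = np.minimum(steps**2 / 12.0, variance)                 # toy D(q) model
        bands.append(np.column_stack([rate, dist]))
    choices, R, D = allocate(bands, rate_budget=6.0)
    print(f"total rate = {R:.2f} bits, total distortion = {D:.4f}")
```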

Proceedings ArticleDOI
04 May 2014
TL;DR: This work combines the Forward-Backward algorithm with an alternating minimization strategy to address a broad class of optimization problems involving large-size signals; an application example to a nonconvex spectral unmixing problem is presented.
Abstract: Many inverse problems require minimizing a criterion that is the sum of a not necessarily smooth function and a Lipschitz differentiable function. Such an optimization problem can be solved with the Forward-Backward algorithm, which can be accelerated thanks to the use of variable metrics derived from the Majorize-Minimize principle. The convergence of this approach is guaranteed provided that the criterion satisfies some additional technical conditions. Combining this method with an alternating minimization strategy will be shown to allow us to address a broad class of optimization problems involving large-size signals. An application example to a nonconvex spectral unmixing problem will be presented.
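The Forward-Backward iteration mentioned above alternates a gradient step on the smooth term with a proximal step on the nonsmooth one. The sketch below is the plainest instance, for a least-squares data term plus an ℓ1 penalty (whose proximity operator is soft-thresholding) and a constant metric; the variable-metric Majorize-Minimize acceleration and the alternating block strategy of the paper are not reproduced here.

```python
import numpy as np

def soft_threshold(x, tau):
    """Proximity operator of tau * ||.||_1 (component-wise soft-thresholding)."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def forward_backward(A, y, reg, n_iter=300):
    """Minimize 0.5*||A x - y||^2 + reg*||x||_1 by forward-backward splitting."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2            # 1 / Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)                       # forward (explicit gradient) step
        x = soft_threshold(x - step * grad, step * reg)  # backward (proximal) step
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.standard_normal((60, 120))
    x_true = np.zeros(120)
    x_true[[5, 40, 77]] = [2.0, -1.5, 3.0]
    y = A @ x_true + 0.05 * rng.standard_normal(60)
    x_hat = forward_backward(A, y, reg=0.5)
    print("recovered support:", np.nonzero(np.abs(x_hat) > 0.1)[0])
```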


Posted Content
TL;DR: It is proved that with high probability an automaton admits a synchronizing word of length smaller than n^(1+\epsilon), and therefore that the Cerny conjecture holds with high probability.
Abstract: A synchronizing word for an automaton is a word that brings that automaton into one and the same state, regardless of the starting position. Cerny conjectured in 1964 that if an n-state deterministic automaton has a synchronizing word, then it has a synchronizing word of size at most (n-1)^2. Berlinkov recently made a breakthrough in the probabilistic analysis of synchronization by proving that, with high probability, an automaton has a synchronizing word. In this article, we prove that with high probability an automaton admits a synchronizing word of length smaller than n^(1+\epsilon), and therefore that the Cerny conjecture holds with high probability.
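A synchronizing word, when it exists, can be found by the classical greedy procedure: as long as the current set of possible states has more than one element, pick two of its states, find (by breadth-first search over pairs) a word merging them, apply it, and repeat. The sketch below implements this on a small deterministic automaton; it finds a synchronizing word but, unlike the bounds discussed above, makes no promise about its length.

```python
from collections import deque

def merging_word(delta, p, q):
    """Shortest word sending both p and q to the same state (BFS over state pairs)."""
    seen = {(p, q): ""}
    queue = deque([(p, q)])
    while queue:
        a, b = queue.popleft()
        if a == b:
            return seen[(a, b)]
        for letter in delta[a]:
            nxt = (delta[a][letter], delta[b][letter])
            if nxt not in seen:
                seen[nxt] = seen[(a, b)] + letter
                queue.append(nxt)
    return None

def run(delta, state, word):
    for letter in word:
        state = delta[state][letter]
    return state

def synchronizing_word(delta):
    """Greedy synchronizing word for a complete deterministic automaton.

    delta: dict state -> dict letter -> state. Returns a word mapping every
    state to a single state, or None if the automaton is not synchronizing.
    """
    current, word = set(delta), ""
    while len(current) > 1:
        p, q = list(current)[:2]            # merge any two of the remaining states
        w = merging_word(delta, p, q)
        if w is None:
            return None
        word += w
        current = {run(delta, s, w) for s in current}
    return word

if __name__ == "__main__":
    # Cerny's automaton C_4: 'b' is a cyclic shift, 'a' maps state 3 to 0 and fixes the rest.
    delta = {s: {"a": 0 if s == 3 else s, "b": (s + 1) % 4} for s in range(4)}
    w = synchronizing_word(delta)
    print(w, {run(delta, s, w) for s in range(4)})
```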

Proceedings ArticleDOI
TL;DR: In this paper, the carrier momentum and energy distributions were modeled in graded-junction silicon p+-i-n structures and utilized to increase optical yield, achieving up to 750 nW emission in a 7 micron square active area at 10 V and 1 mA.
Abstract: Graded-junction, carrier-energy and carrier-momentum engineering concepts have been utilized to realize a high intensity 100 nW, 5 GHz Silicon Avalanche based LED (Si Av LED). A silicon 0.35 micron RF bipolar process was used as the design and processing technology. In particular, the carrier momentum and energy distributions were modeled in graded-junction silicon p+-i-n structures and utilized to increase optical yield. Best performance is up to 750 nW emission in a 7 micron square active area at 10 V and 1 mA. The device shows up to 5 GHz modulation bandwidth. The spectral range is from 450 nm to 850 nm, with emphasized components in the white spectral region. The process is largely CMOS compatible. The technology is particularly suitable for application in futuristic on-chip micro-photonic systems, lab-on-chip systems, silicon-based micro display systems, on-chip optical links, and optical interconnect systems.

01 Jan 2014
TL;DR: Pattern matching is the problem of locating a specific pattern inside raw data as discussed by the authors, where the pattern is usually a collection of strings described in some formal language, and the aim of data compression is to provide representation of data in a reduced form in order to save both storage place and transmission time.
Abstract: Pattern matching is the problem of locating a specific pattern inside raw data. The pattern is usually a collection of strings described in some formal language. Applications require two kinds of solution depending upon which string, the pattern or the text, is given first. Solutions based on the use of automata or combinatorial properties of strings are commonly implemented to preprocess the pattern. The notion of indices, realized by trees or automata, is used in the second kind of solution. The aim of data compression is to provide a representation of data in a reduced form in order to save both storage space and transmission time. There is no loss of information; the compression processes are reversible. Pattern-matching and text-compression algorithms are two important subjects in the wider domain of text processing. They apply to the manipulation of texts (word editors), to the storage of textual data (text compression), and to data retrieval systems (full text search). They are basic components used in implementations of practical software existing under most operating systems. Moreover, they emphasize programming methods that serve as paradigms in other fields of computer science (system or software design). Finally, they also play an important role in theoretical computer science by providing challenging problems. Although data are recorded in various ways, text remains the main way to exchange information. This is particularly evident in literature or linguistics where data are composed of huge corpora and dictionaries, but applies as well to computer science where a large amount of data is stored in linear files. And it is also the case, for instance, in molecular biology because biological molecules can often be approximated as sequences of nucleotides or amino acids. Furthermore, the quantity of available data in these fields tends to double every 18 months. This is the reason why algorithms must be efficient even if the speed and storage capacity of computers increase continuously.
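A canonical example of the "preprocess the pattern" family mentioned above is the Knuth-Morris-Pratt border table, which turns the pattern into a small automaton-like failure function so that the text is scanned in linear time. The sketch below is a standard textbook implementation, included only to make the discussion concrete.

```python
def border_table(pattern):
    """border[i] = length of the longest proper border of pattern[:i+1]."""
    border = [0] * len(pattern)
    k = 0
    for i in range(1, len(pattern)):
        while k > 0 and pattern[i] != pattern[k]:
            k = border[k - 1]
        if pattern[i] == pattern[k]:
            k += 1
        border[i] = k
    return border

def kmp_search(text, pattern):
    """All starting positions of pattern in text, in O(|text| + |pattern|) time."""
    if not pattern:
        return list(range(len(text) + 1))
    border = border_table(pattern)
    hits, k = [], 0
    for i, c in enumerate(text):
        while k > 0 and c != pattern[k]:
            k = border[k - 1]
        if c == pattern[k]:
            k += 1
        if k == len(pattern):
            hits.append(i - k + 1)
            k = border[k - 1]
    return hits

if __name__ == "__main__":
    print(kmp_search("abracadabra", "abra"))   # [0, 7]
```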

Proceedings ArticleDOI
24 Apr 2014
TL;DR: It is proved that the zeta function of a finite-type-Dyck shift is a computable N-algebraic series, i.e. is the generating series of some unambiguous context-free language.
Abstract: Constrained coding is a technique for converting unrestricted sequences of symbols into constrained sequences, i.e. sequences with a predefined set of properties. Regular constraints are described by finite-state automata, and the corresponding sets of bi-infinite constrained sequences are finite-type or sofic shifts. A larger class of constraints, described by sofic-Dyck automata, are the visibly pushdown constraints, whose corresponding sets of bi-infinite sequences are the sofic-Dyck shifts. An algebraic formula for the zeta function, which counts the periodic sequences of these shifts, can be obtained for sofic-Dyck shifts having a right-resolving presentation. We extend the formula to all sofic-Dyck shifts. This proves that the zeta function of any sofic-Dyck shift is a computable Z-algebraic series. We prove that the zeta function of a finite-type-Dyck shift is a computable N-algebraic series, i.e. is the generating series of some unambiguous context-free language. We conjecture that the result holds for all sofic-Dyck shifts.
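For reference, the zeta function in question is the usual Artin-Mazur zeta function of a shift space $X$, which packages the counts $p_n(X)$ of points of period $n$ (this is the standard definition, not a formula specific to the paper):

$$\zeta_X(z) \;=\; \exp\left( \sum_{n \ge 1} \frac{p_n(X)}{n}\, z^n \right).$$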

Proceedings ArticleDOI
04 May 2014
TL;DR: A learning algorithm for multiclass support vector machines is designed that allows sparsity to be enforced through various nonsmooth regularizations, such as the mixed ℓ1,p-norm with p ≥ 1; the proposed constrained convex optimization approach involves an epigraphical constraint.
Abstract: Sparsity inducing penalizations are useful tools in variational methods for machine learning. In this paper, we design a learning algorithm for multiclass support vector machines that allows us to enforce sparsity through various nonsmooth regularizations, such as the mixed ℓ1,p-norm with p ≥ 1. The proposed constrained convex optimization approach involves an epigraphical constraint for which we derive the closed-form expression of the associated projection. This sparse multiclass SVM problem can be efficiently implemented thanks to the flexibility offered by recent primal-dual proximal algorithms. Experiments carried out for handwritten digits demonstrate the interest of considering nonsmooth sparsity-inducing regularizations and the efficiency of the proposed epigraphical projection method.

Proceedings ArticleDOI
01 Jun 2014
TL;DR: The results in this paper shed more light on the utility of delayed CSI for increasing the capacity region of multiaccess channels; and tie with some recent progress in this framework.
Abstract: We consider a two-user multiaccess channel with degraded messages sets in which the channel state information (CSI) is revealed, strictly causally or with one-unit delay, to only the encoder that sends the common message. We study the capacity region of this model. We establish inner and outer bounds on the capacity region. We also identify some special cases in which the bounds meet, thereby characterizing the capacity region in these cases. The outer bound is non-trivial and has a relatively simple convenient expression (it incorporates only one auxiliary random variable). The coding scheme that we use for the inner bound utilizes rate-splitting to resolve a tension at the informed encoder among exploiting the knowledge of the (delayed) CSI (through a noisy network coding or quantize-map-and-forward state compression) and sending information cooperatively with the other encoder. Together with some previous results on closely related models, the results in this paper shed more light on the utility of delayed CSI for increasing the capacity region of multiaccess channels; and tie with some recent progress in this framework.

Proceedings ArticleDOI
13 Oct 2014
TL;DR: This comparison shows that the implementation of the new instructions is faster than reflection, although the Java interpreter of combiners is slowed down by frequent object creations.
Abstract: Android OS uses for its applicative part a Java-based virtual machine called Dalvik. Java 7 introduces a new instruction set as well as a new API specified by the JSR 292 [11] in order to improve and simplify the implementation of non-Java languages on top of the Java Virtual Machine. As far as we know, neither the JVMs that are compatible with Java ME 8 nor Android, while compatible at the syntactical level with Java 7, implement this JSR. We believe this is due to the fact that current implementations for desktop and server rely on code generation at runtime, which requires too much computing power and energy. This paper describes our implementation of the JSR 292 in Dalvik. First, we adapt the class format specification of the JSR 292, which is stack based and implicitly relies on the fact that the bytecode can be rewritten at runtime, to the opcode specification of Dalvik, which is register based and read only. Then we explain how we represent the different kinds of method handles in Java, and how method handles are interpreted, using either the C interpreter for the low-level method handles or a flexible but slow Java interpreter for the "combiner" method handles. At the end, we present a comparison of the execution time of the different kinds of calls. This comparison shows that the implementation of the new instructions is faster than reflection, although the Java interpreter of combiners is slowed down by frequent object creations.

Proceedings ArticleDOI
04 May 2014
TL;DR: In this article, a quantized Bernoulli-generalized Gaussian source with a sparse representation in a transformed domain is considered and the authors provide accurate approximations of the entropy and the distortion functions evaluated through a p-th order error measure.
Abstract: The objective of this paper is to study the rate-distortion properties of a quantized Bernoulli-Generalized Gaussian source. Such a source model has been found to be well-adapted for signals having a sparse representation in a transformed domain. We provide here accurate approximations of the entropy and the distortion functions evaluated through a p-th order error measure. These theoretical results are then validated experimentally. Finally, the benefit that can be drawn from the proposed approximations in bit allocation problems is illustrated for a wavelet-based compression scheme.
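The Bernoulli-Generalized Gaussian source mentioned above mixes a point mass at zero (accounting for the many negligible coefficients of a sparse representation) with a generalized Gaussian density. In the usual parameterization with scale $\alpha > 0$, shape $\beta > 0$ and mixing weight $\varepsilon \in [0,1]$, its distribution reads (standard form, assumed rather than quoted from the paper):

$$f(x) \;=\; (1-\varepsilon)\,\delta(x) \;+\; \varepsilon\, \frac{\beta}{2\alpha\,\Gamma(1/\beta)}\, \exp\!\left( -\left( \frac{|x|}{\alpha} \right)^{\beta} \right).$$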

Posted Content
TL;DR: The first component of the solution is shown to be Lipschitz in space and 1/2-Hölder in time with respect to the initial data of the forward process, where the terminal condition is given by a function of the terminal value of a forward stochastic differential equation.
Abstract: We consider the minimal super-solution of a backward stochastic differential equation with constraint on the gains-process. The terminal condition is given by a function of the terminal value of a forward stochastic differential equation. Under boundedness assumptions on the coefficients, we show that the first component of the solution is Lipschitz in space and 1/2-Hölder in time with respect to the initial data of the forward process. Its path is continuous before the time horizon, at which its left-limit is given by a face-lifted version of its natural boundary condition. This first component is actually equal to its own face-lift. We only use probabilistic arguments. In particular, our results can be extended to certain non-Markovian settings.

Proceedings ArticleDOI
01 Dec 2014
TL;DR: Results from scalable simulations using realistic vehicular mobility traces show that GRank is an efficient ranking algorithm to find important vehicles in comparison to other ranking metrics in the literature.
Abstract: Modern cars are transforming into autonomous cars capable of making intelligent decisions to facilitate our travel comfort and safety. Such "Smart Vehicles" are equipped with various sensor platforms and cameras that can constantly sense a tremendous amount of heterogeneous data from urban streets. This paper aims to identify the appropriate vehicles to be selected as information hubs for the efficient collection, storage and distribution of such massive data. Therefore, we propose an Information-Centric algorithm, "GRank", for vehicles to autonomously find their importance based on their reachability for different location-aware information in a collaborative manner, without relying on any infrastructure network. GRank is the first step to identify socially important information hubs to be used in the network. Results from scalable simulations using realistic vehicular mobility traces show that GRank is an efficient ranking algorithm to find important vehicles in comparison to other ranking metrics in the literature.

Proceedings ArticleDOI
TL;DR: This work proposes a fully automatic approach for accurate detection of severe cardiac stenoses, using Random Forest classification to detect stenotic areas; it can reliably detect severe stenoses with a sensitivity of 64%.
Abstract: 3D cardiac computed tomography angiography (CCTA) is becoming a standard routine for non-invasive heart disease diagnosis. Thanks to its high negative predictive value, CCTA is increasingly used to decide whether or not the patient should be considered for invasive angiography. However, an accurate assessment of cardiac lesions using this modality is still a time consuming task and needs a high degree of clinical expertise. Thus, providing an automatic tool to assist clinicians during the diagnosis task is highly desirable. In this work, we propose a fully automatic approach for accurate severe cardiac stenoses detection. Our algorithm uses Random Forest classification to detect stenotic areas. First, the classifier is trained on 18 CT cardiac exams with a CTA reference standard. Then, the classification result is used to detect severe stenoses (with a narrowing degree higher than 50%) in a database of 30 cardiac CT exams. Features that best capture the different stenosis configurations are extracted along the vessel centerlines at different scales. To ensure robustness against vessel direction and scale changes, we extract features inside cylindrical patterns with variable directions and radii. Thus, we make sure that the ROIs contain only the vessel walls. The algorithm is evaluated using the Rotterdam Coronary Artery Stenoses Detection and Quantification Evaluation Framework. The evaluation is performed using reference standard quantifications obtained from quantitative coronary angiography (QCA) and consensus reading of CTA. The obtained results show that we can reliably detect severe stenoses with a sensitivity of 64%.
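Once the cylindrical features have been extracted along the centerlines, the classification stage described above reduces to training a standard Random Forest and thresholding its stenosis probability. The sketch below shows that stage only, with a placeholder feature matrix and labels; the feature extraction from CCTA volumes, which is the delicate part of the pipeline, is not reproduced.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Placeholder data: one row per centerline sample, columns = multi-scale cylindrical
# features; labels mark samples lying inside a severe (>50%) stenosis.
rng = np.random.default_rng(0)
X_train = rng.standard_normal((5000, 24))
y_train = (X_train[:, 0] + 0.5 * X_train[:, 3] + 0.3 * rng.standard_normal(5000)) > 1.2

clf = RandomForestClassifier(n_estimators=200, class_weight="balanced", random_state=0)
clf.fit(X_train, y_train)

# At test time, flag centerline samples whose predicted stenosis probability is high.
X_test = rng.standard_normal((100, 24))
proba = clf.predict_proba(X_test)[:, 1]
print("flagged samples:", np.nonzero(proba > 0.5)[0])
```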

01 Jan 2014
TL;DR: The relationship between transport and tourism has been a subject of discussion in much of the literature; the role of transportation in tourism industries is becoming important, and information and analysis of traffic problems are needed to increase the efficiency of complex interactions in tourism areas.
Abstract: The relationship between transport and tourism has been a subject of discussion in much of the literature. The role of transportation in tourism industries is becoming important. For instance, information and analysis of traffic problems are needed to increase the efficiency of complex interactions in tourism areas. Some traffic problems, such as how traffic flows in a street or how to develop the traffic light system in a street, particularly at a cross-section, are very challenging problems for researchers. Two methods have been proposed to describe a traffic flow model; for instance, see Haberman [1]. The first model is a microscopic model, which describes the movement of single vehicle-driver units. Here, the position and velocity of the vehicle are the dynamic variables describing the properties of the model. The second model is a macroscopic model, which describes the cars/vehicles as a density. In this model, the flow is introduced as