
Showing papers on "Entropy (information theory)" published in 2007


Journal ArticleDOI
TL;DR: The aim of this paper is to provide a detailed overview of information-theoretic approaches for measuring causal influence in multivariate time series, with a focus on diverse approaches to entropy and mutual information estimation.

727 citations


Journal ArticleDOI
18 Jun 2007
TL;DR: Its performance on characterizing surface EMG signals, as well as independent, identically distributed (i.i.d.) random numbers and periodic sinusoidal signals, shows that FuzzyEn measures the regularity of time series more efficiently.

Abstract: Fuzzy entropy (FuzzyEn), a new measure of time series regularity, was proposed and applied to the characterization of surface electromyography (EMG) signals. Like the two existing related measures, ApEn and SampEn, FuzzyEn is the negative natural logarithm of the conditional probability that two vectors similar for m points remain similar when extended to m+1 points. Importing the concept of fuzzy sets, the similarity of vectors is defined in FuzzyEn by a fuzzy membership function based on an exponential function and the vectors' shapes. Besides retaining the properties that make SampEn superior to ApEn, FuzzyEn also yields a well-defined entropy value for small parameter values. Its performance on characterizing surface EMG signals, as well as independent, identically distributed (i.i.d.) random numbers and periodic sinusoidal signals, shows that FuzzyEn measures the regularity of time series more efficiently. The method introduced here can also be applied to other noisy physiological signals with relatively short data sets.
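
As a rough illustration of the recipe described above, the sketch below computes a FuzzyEn-style value for a 1-D series using an exponential membership function exp(-(d^n)/r); the function name, the tolerance scaling by the signal's standard deviation, and the parameter defaults are illustrative choices, not taken from the paper.

```python
import numpy as np

def fuzzyen(x, m=2, r=0.2, n=2):
    """FuzzyEn-style regularity measure of a 1-D time series (a sketch).

    Compares baseline-removed template vectors of length m and m+1 with a
    fuzzy (exponential) similarity function, then returns the negative log
    of the ratio of the average similarities.
    """
    x = np.asarray(x, dtype=float)
    r = r * x.std()                      # tolerance scaled by signal std (common convention)
    N = len(x)

    def phi(k):
        # Build all length-k templates and remove each template's own mean.
        templates = np.array([x[i:i + k] for i in range(N - k)])
        templates -= templates.mean(axis=1, keepdims=True)
        # Chebyshev distance between every pair of templates.
        d = np.abs(templates[:, None, :] - templates[None, :, :]).max(axis=2)
        # Fuzzy similarity via an exponential membership function.
        sim = np.exp(-(d ** n) / r)
        np.fill_diagonal(sim, 0.0)       # exclude self-matches
        return sim.sum() / ((N - k) * (N - k - 1))

    return np.log(phi(m)) - np.log(phi(m + 1))

# Example: white noise should score higher (less regular) than a sinusoid.
rng = np.random.default_rng(0)
print(fuzzyen(np.sin(np.linspace(0, 20 * np.pi, 500))))
print(fuzzyen(rng.standard_normal(500)))
```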

710 citations


Journal ArticleDOI
TL;DR: This work introduces the concepts of discrimination information and cross-entropy in the intuitionistic fuzzy setting, derives an extension of the De Luca-Termini nonprobabilistic entropy for intuitionistic fuzzy sets (IFSs), and reveals an intuitive and mathematical connection between the notions of entropy for fuzzy sets (FSs) and IFSs in terms of fuzziness and intuitionism.

681 citations


Journal ArticleDOI
TL;DR: This paper presents a new k-means-type algorithm for clustering high-dimensional objects in subspaces that generates better clustering results than other subspace clustering algorithms and scales to large data sets.
Abstract: This paper presents a new k-means type algorithm for clustering high-dimensional objects in sub-spaces. In high-dimensional data, clusters of objects often exist in subspaces rather than in the entire space. For example, in text clustering, clusters of documents of different topics are categorized by different subsets of terms or keywords. The keywords for one cluster may not occur in the documents of other clusters. This is a data sparsity problem faced in clustering high-dimensional data. In the new algorithm, we extend the k-means clustering process to calculate a weight for each dimension in each cluster and use the weight values to identify the subsets of important dimensions that categorize different clusters. This is achieved by including the weight entropy in the objective function that is minimized in the k-means clustering process. An additional step is added to the k-means clustering process to automatically compute the weights of all dimensions in each cluster. The experiments on both synthetic and real data have shown that the new algorithm can generate better clustering results than other subspace clustering algorithms. The new algorithm is also scalable to large data sets.
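
The closed-form weight update sketched below (dimension weights proportional to exp(-D/γ), where D is the per-dimension within-cluster dispersion and γ controls the strength of the entropy term) is one common way such an entropy-regularized objective is minimized; the variable names, γ default, and initialization are illustrative assumptions rather than the authors' exact formulation.

```python
import numpy as np

def entropy_weighted_kmeans(X, k, gamma=1.0, n_iter=20, seed=0):
    """Sketch of an entropy-regularized, dimension-weighted k-means.

    Each cluster l keeps a weight vector weights[l] over dimensions; weights
    follow a softmax-like closed form driven by the per-dimension
    within-cluster dispersion, with gamma set by the entropy term.
    """
    rng = np.random.default_rng(seed)
    X = np.asarray(X, dtype=float)
    n, d = X.shape
    centers = X[rng.choice(n, k, replace=False)]
    weights = np.full((k, d), 1.0 / d)

    for _ in range(n_iter):
        # Assign points using weighted squared distances.
        dist = ((X[:, None, :] - centers[None, :, :]) ** 2 * weights[None, :, :]).sum(axis=2)
        labels = dist.argmin(axis=1)
        for l in range(k):
            members = X[labels == l]
            if len(members) == 0:
                continue
            centers[l] = members.mean(axis=0)
            # Per-dimension dispersion inside cluster l.
            D = ((members - centers[l]) ** 2).sum(axis=0)
            # Closed-form weight update: smaller dispersion -> larger weight,
            # smoothed by the entropy term (gamma); shift D for numerical stability.
            w = np.exp(-(D - D.min()) / gamma)
            weights[l] = w / w.sum()
    return labels, centers, weights
```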

591 citations


Journal ArticleDOI
TL;DR: A representation space, to be called the complexity-entropy causality plane, is introduced; its coordinates are suitable functionals of the pertinent probability distribution, namely the entropy of the system and an appropriate statistical complexity measure.
Abstract: Chaotic systems share with stochastic processes several properties that make them almost indistinguishable. In this communication we introduce a representation space, to be called the complexity-entropy causality plane. Its horizontal and vertical axes are suitable functionals of the pertinent probability distribution, namely, the entropy of the system and an appropriate statistical complexity measure, respectively. These two functionals are evaluated using the Bandt-Pompe recipe to assign a probability distribution function to the time series generated by the system. Several well-known model-generated time series, usually regarded as being of either stochastic or chaotic nature, are analyzed so as to illustrate the approach. The main achievement of this communication is the possibility of clearly distinguishing between them in our representation space, something that is rather difficult otherwise.
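
The horizontal coordinate of the plane rests on the Bandt-Pompe ordinal-pattern distribution; a minimal sketch of that distribution and the normalized permutation entropy follows (the statistical complexity measure on the vertical axis, which additionally involves a disequilibrium term, is omitted; names and defaults are illustrative).

```python
import numpy as np
from itertools import permutations
from math import log

def bandt_pompe_distribution(x, d=4, tau=1):
    """Probability distribution over ordinal patterns of embedding dimension d."""
    patterns = {p: 0 for p in permutations(range(d))}   # include zero-count patterns
    for i in range(len(x) - (d - 1) * tau):
        window = x[i:i + d * tau:tau]
        patterns[tuple(np.argsort(window))] += 1
    counts = np.array(list(patterns.values()), dtype=float)
    return counts / counts.sum()

def normalized_permutation_entropy(p):
    """Shannon entropy of the ordinal distribution, normalized to [0, 1]."""
    nz = p[p > 0]
    return float(-(nz * np.log(nz)).sum() / log(len(p)))

x = np.random.default_rng(1).standard_normal(5000)
p = bandt_pompe_distribution(x, d=4)
print(normalized_permutation_entropy(p))   # close to 1 for white noise
```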

516 citations


Journal ArticleDOI
TL;DR: Experimental results show that the index exhibits desirable features of an ideal image quality function, making it a suitable quality index for natural images, and that the new measure correlates well with classical reference metrics such as the peak signal-to-noise ratio.
Abstract: We describe an innovative methodology for determining the quality of digital images. The method is based on measuring the variance of the expected entropy of a given image upon a set of predefined directions. Entropy can be calculated on a local basis by using a spatial/spatial-frequency distribution as an approximation for a probability density function. The generalized Renyi entropy and the normalized pseudo-Wigner distribution (PWD) have been selected for this purpose. As a consequence, a pixel-by-pixel entropy value can be calculated, and therefore entropy histograms can be generated as well. The variance of the expected entropy is measured as a function of the directionality, and it has been taken as an anisotropy indicator. For this purpose, directional selectivity can be attained by using an oriented 1-D PWD implementation. Our main purpose is to show how such an anisotropy measure can be used as a metric to assess both the fidelity and quality of images. Experimental results show that an index such as this presents some desirable features that resemble those from an ideal image quality function, constituting a suitable quality index for natural images. Namely, in-focus, noise-free natural images have shown a maximum of this metric in comparison with other degraded, blurred, or noisy versions. This result provides a way of identifying in-focus, noise-free images from other degraded versions, allowing an automatic and nonreference classification of images according to their relative quality. It is also shown that the new measure is well correlated with classical reference metrics such as the peak signal-to-noise ratio.
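
As a toy illustration of the underlying idea only — the variance of directional entropies as an anisotropy indicator — the sketch below uses Rényi entropies of simple directional difference histograms rather than the authors' oriented pseudo-Wigner distribution; all names and parameters are assumptions for illustration.

```python
import numpy as np

def renyi_entropy(p, alpha=3):
    """Rényi entropy of order alpha for a discrete distribution p."""
    p = p[p > 0]
    return np.log((p ** alpha).sum()) / (1 - alpha)

def directional_anisotropy(img, bins=64):
    """Toy anisotropy index: variance of Rényi entropies of directional
    difference histograms (a stand-in for the oriented PWD of the paper)."""
    img = np.asarray(img, dtype=float)
    shifts = {"horizontal": (0, 1), "vertical": (1, 0),
              "diag_down": (1, 1), "diag_up": (-1, 1)}
    entropies = []
    for dy, dx in shifts.values():
        diff = img - np.roll(np.roll(img, dy, axis=0), dx, axis=1)
        hist, _ = np.histogram(diff, bins=bins)
        entropies.append(renyi_entropy(hist / hist.sum()))
    return float(np.var(entropies))
```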

303 citations


Patent
25 Jan 2007
TL;DR: In this article, the authors present a method for malware detection based on data entropy, which includes acquiring a block of data, calculating an entropy value for the block, and comparing the entropy value to a threshold value.
Abstract: Systems and methods for performing malware detection for determining suspicious data based on data entropy are provided. The method includes acquiring a block of data, calculating an entropy value for the block of data, comparing the entropy value to a threshold value, and recording the block of data as suspicious when the entropy value exceeds the threshold value. An administrator may then investigate suspicious data.
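
A minimal sketch of the core heuristic — compute the Shannon entropy of a byte block and flag it when the entropy exceeds a threshold — is shown below; the threshold value and function names are illustrative, not taken from the patent.

```python
import math
from collections import Counter

def shannon_entropy(block: bytes) -> float:
    """Shannon entropy of a byte block, in bits per byte (0..8)."""
    counts = Counter(block)
    n = len(block)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def flag_suspicious(block: bytes, threshold: float = 7.0) -> bool:
    """Record a block as suspicious when its entropy exceeds the threshold.
    The 7.0 bits/byte threshold is an illustrative value, not from the patent."""
    return shannon_entropy(block) > threshold
```

High entropy is used here as a proxy for packed or encrypted content, which is why blocks exceeding the threshold are set aside for an administrator to investigate.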

248 citations


Proceedings ArticleDOI
26 Dec 2007
TL;DR: A novel extraction method is presented that utilises global information from each video input so that moving parts, such as a moving hand, can be identified and used to select relevant interest points for a condensed representation.
Abstract: Local spatiotemporal features or interest points provide compact but descriptive representations for efficient video analysis and motion recognition. Current local feature extraction approaches involve either local filtering or entropy computation which ignore global information (e.g. large blobs of moving pixels) in video inputs. This paper presents a novel extraction method which utilises global information from each video input so that moving parts such as a moving hand can be identified and are used to select relevant interest points for a condensed representation. The proposed method involves obtaining a small set of subspace images, which can synthesise frames in the video input from their corresponding coefficient vectors, and then detecting interest points from the subspaces and the coefficient vectors. Experimental results indicate that the proposed method can yield a sparser set of interest points for motion recognition than existing methods.

245 citations


Journal ArticleDOI
TL;DR: This paper proposes a dynamic form of the cumulative residual entropy (CRE), derives some of its properties, and shows how CRE (and its dynamic version) is connected with well-known reliability measures such as the mean residual lifetime.

208 citations


Journal ArticleDOI
TL;DR: The experimental results show that the proposed fuzzy entropy method, combined with ant colony optimization (ACO), provides improved search performance and requires significantly less computation, making it suitable for real-time vision applications such as automatic target recognition (ATR).

196 citations


Journal ArticleDOI
TL;DR: A method is presented for extracting the configurational entropy of solute molecules from molecular dynamics simulations, in which the entropy is computed as an expansion of multidimensional mutual information terms, which account for correlated motions among the various internal degrees of freedom of the molecule.
Abstract: A method is presented for extracting the configurational entropy of solute molecules from molecular dynamics simulations, in which the entropy is computed as an expansion of multidimensional mutual information terms, which account for correlated motions among the various internal degrees of freedom of the molecule. The mutual information expansion is demonstrated to be equivalent to estimating the full-dimensional configurational probability density function (PDF) using the generalized Kirkwood superposition approximation (GKSA). While the mutual information expansion is derived to the full dimensionality of the molecule, the current application uses a truncated form of the expansion in which all fourth- and higher-order mutual information terms are neglected. Truncation of the mutual information expansion at the nth order is shown to be equivalent to approximating the full-dimensional PDF using joint PDFs with dimensionality of n or smaller by successive application of the GKSA. The expansion method is u...
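
For concreteness, the lowest nontrivial (second-order) truncation of such a mutual information expansion can be written as follows; the paper retains terms through third order, and the notation here is mine.

```latex
% Second-order truncation of a mutual information expansion of the
% configurational entropy over internal coordinates x_1, ..., x_K
% (shown at second order for brevity; the paper keeps terms up to
% third order):
S \;\approx\; \sum_{i=1}^{K} S_1(x_i) \;-\; \sum_{i<j} I_2(x_i; x_j),
\qquad
I_2(x_i; x_j) \;=\; S_1(x_i) + S_1(x_j) - S_2(x_i, x_j),
```

Here S_1 and S_2 denote marginal and pairwise joint entropies of the internal coordinates, so the mutual information terms subtract the entropy overcounted by treating correlated degrees of freedom as independent.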

Journal ArticleDOI
TL;DR: In this paper, new families of Fisher information and entropy power inequalities for sums of independent random variables are presented; they relate the information in the sum of n independent variables to the information contained in sums over subsets of the random variables, for an arbitrary collection of subsets.
Abstract: New families of Fisher information and entropy power inequalities for sums of independent random variables are presented. These inequalities relate the information in the sum of n independent random variables to the information contained in sums over subsets of the random variables, for an arbitrary collection of subsets. As a consequence, a simple proof of the monotonicity of information in central limit theorems is obtained, both in the setting of independent and identically distributed (i.i.d.) summands as well as in the more general setting of independent summands with variance-standardized sums.
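
For reference, the classical entropy power inequality and the central-limit-theorem monotonicity statement that such subset-sum inequalities generalize can be stated as follows (the paper's subset-indexed inequalities are strictly more general):

```latex
% Classical entropy power inequality for independent X, Y
% (h is differential entropy, N the entropy power):
N(X+Y) \;\ge\; N(X) + N(Y),
\qquad
N(X) \;=\; \frac{1}{2\pi e}\, e^{2h(X)} .

% Monotonicity of entropy along the central limit theorem
% (i.i.d. X_1, X_2, ... with finite variance):
h\!\left(\frac{X_1 + \cdots + X_{n+1}}{\sqrt{n+1}}\right)
\;\ge\;
h\!\left(\frac{X_1 + \cdots + X_n}{\sqrt{n}}\right).
```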

Journal ArticleDOI
TL;DR: This paper presents an approach to single-product dynamic revenue management that accounts for errors in the underlying model at the optimization stage and obtains an optimal pricing policy through a version of the so-called Isaacs' equation for stochastic differential games.
Abstract: In the area of dynamic revenue management, optimal pricing policies are typically computed on the basis of an underlying demand rate model. From the perspective of applications, this approach implicitly assumes that the model is an accurate representation of the real-world demand process and that the parameters characterizing this model can be accurately calibrated using data. In many situations, neither of these conditions is satisfied. Indeed, models are usually simplified for the purpose of tractability and may be difficult to calibrate because of a lack of data. Moreover, pricing policies that are computed under the assumption that the model is correct may perform badly when this is not the case. This paper presents an approach to single-product dynamic revenue management that accounts for errors in the underlying model at the optimization stage. Uncertainty in the demand rate model is represented using the notion of relative entropy, and a tractable reformulation of the “robust pricing problem” is obtained using results concerning the change of probability measure for point processes. The optimal pricing policy is obtained through a version of the so-called Isaacs' equation for stochastic differential games, and the structural properties of the optimal solution are obtained through an analysis of this equation. In particular, (i) closed-form solutions for the special case of an exponential nominal demand rate model, (ii) general conditions for the exchange of the “max” and the “min” in the differential game, and (iii) the equivalence between the robust pricing problem and that of single-product revenue management with an exponential utility function without model uncertainty, are established through the analysis of this equation.

Book ChapterDOI
02 Jul 2007
TL;DR: A new method is presented for constructing compact statistical point-based models of ensembles of similar shapes that does not rely on any specific surface parameterization; it is applicable to a wider range of problems than existing methods, including nonmanifold surfaces and objects of arbitrary topology.
Abstract: This paper presents a new method for constructing compact statistical point-based models of ensembles of similar shapes that does not rely on any specific surface parameterization. The method requires very little preprocessing or parameter tuning, and is applicable to a wider range of problems than existing methods, including nonmanifold surfaces and objects of arbitrary topology. The proposed method is to construct a point-based sampling of the shape ensemble that simultaneously maximizes both the geometric accuracy and the statistical simplicity of the model. Surface point samples, which also define the shape-to-shape correspondences, are modeled as sets of dynamic particles that are constrained to lie on a set of implicit surfaces. Sample positions are optimized by gradient descent on an energy function that balances the negative entropy of the distribution on each shape with the positive entropy of the ensemble of shapes. We also extend the method with a curvature-adaptive sampling strategy in order to better approximate the geometry of the objects. This paper presents the formulation; several synthetic examples in two and three dimensions; and an application to the statistical shape analysis of the caudate and hippocampus brain structures from two clinical studies.

Journal ArticleDOI
TL;DR: This paper extends Bode's integral equation to the case where the preview is made available to the controller via a general, finite-capacity communication system, and derives a universal lower bound that uses Shannon's entropy rate as a measure of performance.
Abstract: In this paper, we study fundamental limitations of disturbance attenuation of feedback systems, under the assumption that the controller has a finite-horizon preview of the disturbance. In contrast with prior work, we extend Bode's integral equation to the case where the preview is made available to the controller via a general, finite-capacity communication system. Under asymptotic stationarity assumptions, our results show that the new fundamental limitation differs from Bode's only by a constant, which quantifies the information rate through the communication system. In the absence of asymptotic stationarity, we derive a universal lower bound which uses Shannon's entropy rate as a measure of performance. By means of a case study, we show that our main bounds may be achieved.
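
For context, the classical Bode sensitivity integral that this work extends can be stated as below (under standard assumptions, e.g. open-loop relative degree at least two); the paper's result shifts this limitation by a constant tied to the information rate through the preview channel.

```latex
% Classical Bode sensitivity integral: S is the closed-loop sensitivity
% function and p_k the unstable open-loop poles.
\int_{0}^{\infty} \ln \lvert S(j\omega)\rvert \, d\omega
\;=\; \pi \sum_{k} \operatorname{Re}(p_k).
```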

Journal ArticleDOI
TL;DR: A new approach to non-rigid registration is presented, based on an information-theoretic measure called the cumulative residual entropy (CRE), an entropy defined using cumulative distributions; it accommodates images to be registered with varying contrast and brightness, and is well suited for situations where the source and target images have fields of view with large non-overlapping regions.
Abstract: In this paper we present a new approach for the non-rigid registration of multi-modality images. Our approach is based on an information-theoretic measure called the cumulative residual entropy (CRE), which is a measure of entropy defined using cumulative distributions. Cross-CRE between two images to be registered is defined and maximized over the space of smooth and unknown non-rigid transformations. For efficient and robust computation of the non-rigid deformations, a tri-cubic B-spline based representation of the deformation function is used. The key strength of combining CCRE with the tri-cubic B-spline representation in addressing the non-rigid registration problem is that we achieve not only robustness, owing to the nature of the CCRE measure, but also computational efficiency in estimating the non-rigid registration. The salient features of our algorithm are: (i) it accommodates images to be registered with varying contrast and brightness, (ii) it converges faster than other information-theory-based measures used for non-rigid registration in the literature, (iii) it allows analytic computation of the gradient of CCRE with respect to the non-rigid registration parameters, enabling efficient and accurate registration, (iv) it is well suited for situations where the source and the target images have fields of view with large non-overlapping regions. We demonstrate these strengths via experiments on synthesized and real image data.
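
The cumulative residual entropy underlying the CCRE criterion is commonly defined from the survival function rather than the density; a standard form (following Rao et al., stated here for a nonnegative scalar random variable) is:

```latex
% Cumulative residual entropy of a nonnegative random variable X,
% built from the survival function P(X > lambda) instead of the density:
\mathcal{E}(X) \;=\; -\int_{0}^{\infty}
P(X > \lambda)\,\log P(X > \lambda)\; d\lambda .
```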

Proceedings ArticleDOI
13 Jun 2007
TL;DR: A direct-sum theorem in communication complexity is derived by employing a rejection sampling procedure that relates the relative entropy between two distributions to the communication complexity of generating one distribution from the other.
Abstract: We examine the communication required for generating random variables remotely. One party Alice is given a distribution D, and she has to send a message to Bob, who is then required to generate a value with distribution exactly D. Alice and Bob are allowed to share random bits generated without the knowledge of D. There are two settings based on how the distribution D provided to Alice is chosen. If D is itself chosen randomly from some set (the set and distribution are known in advance) and we wish to minimize the expected communication in order for Alice to generate a value y, with distribution D, then we characterize the communication required in terms of the mutual information between the input to Alice and the output Bob is required to generate. If D is chosen from a set of distributions D, and we wish to devise a protocol so that the expected communication (the randomness comes from the shared random string and Alice's coin tosses) is small for each D ∈ D, then we characterize the communication required in this case in terms of the channel capacity associated with the set D. Our proofs are based on an improved rejection sampling procedure that relates the relative entropy between two distributions to the communication complexity of generating one distribution from the other. As an application of these results, we derive a direct sum theorem in communication complexity that substantially improves the previous such result shown by Jain et al. (2003).

Proceedings ArticleDOI
07 Jan 2007
TL;DR: A simple single-pass algorithm approximates the empirical entropy of a stream of m values using O(ε^-2 log(δ^-1) log m) words of space, based on a novel extension of a method introduced by Alon, Matias, and Szegedy; a space lower bound of Ω(ε^-2 / log(ε^-1)) shows that the algorithm is near-optimal in terms of its dependency on ε.
Abstract: We describe a simple algorithm for approximating the empirical entropy of a stream of m values in a single pass, using O(ε^-2 log(δ^-1) log m) words of space. Our algorithm is based upon a novel extension of a method introduced by Alon, Matias, and Szegedy [1]. We show a space lower bound of Ω(ε^-2 / log(ε^-1)), meaning that our algorithm is near-optimal in terms of its dependency on ε. This improves over previous work on this problem [8, 13, 17, 5]. We show that generalizing to k-th order entropy requires close to linear space for all k ≥ 1, and give additive approximations using our algorithm. Lastly, we show how to compute a multiplicative approximation to the entropy of a random walk on an undirected graph.

Journal ArticleDOI
TL;DR: A general information-theoretical inequality is developed that measures the statistical complexity of some deterministic and randomized density estimators and can lead to improvements of some classical results concerning the convergence of minimum description length and Bayesian posterior distributions.
Abstract: We consider an extension of ε-entropy to a KL-divergence based complexity measure for randomized density estimation methods. Based on this extension, we develop a general information-theoretical inequality that measures the statistical complexity of some deterministic and randomized density estimators. Consequences of the new inequality will be presented. In particular, we show that this technique can lead to improvements of some classical results concerning the convergence of minimum description length and Bayesian posterior distributions. Moreover, we are able to derive clean finite-sample convergence bounds that are not obtainable using previous approaches.

Journal ArticleDOI
TL;DR: The newly introduced multiple-point simulation (mps) algorithms borrow the high order statistics from a visually and statistically explicit model, a training image, and it is shown that mps can simulate realizations with high entropy character as well as traditional Gaussian-based algorithms, while offering the flexibility of considering alternative training images with various levels of low entropy structures.
Abstract: Any interpolation, any hand contouring or digital drawing of a map or a numerical model necessarily calls for a prior model of the multiple-point statistics that link together the data to the unsampled nodes, then these unsampled nodes together. That prior model can be implicit, poorly defined as in hand contouring; it can be explicit through an algorithm as in digital mapping. The multiple-point statistics involved go well beyond single-point histogram and two-point covariance models; the challenge is to define algorithms that can control more of such statistics, particularly those that impact most the utilization of the resulting maps beyond their visual appearance. The newly introduced multiple-point simulation (mps) algorithms borrow the high order statistics from a visually and statistically explicit model, a training image. It is shown that mps can simulate realizations with high entropy character as well as traditional Gaussian-based algorithms, while offering the flexibility of considering alternative training images with various levels of low entropy (organized) structures. The impact on flow performance (spatial connectivity) of choosing a wrong training image among many sharing the same histogram and variogram is demonstrated.

Journal ArticleDOI
TL;DR: In this paper, the authors propose the concept of time-frequency entropy based on the Hilbert-Huang transform, together with a gear fault diagnosis method built on it, which can accurately and effectively identify whether a gear is faulty.

Proceedings ArticleDOI
07 Jan 2007
TL;DR: In this article, the authors propose a storage scheme for a string S[1, n], drawn from an alphabet Σ, that requires space close to the k-th order empirical entropy of S and allows any l-long substring of S to be retrieved in optimal O(1 + l/log_|Σ| n) time.
Abstract: We propose a storage scheme for a string S[1, n], drawn from an alphabet Σ, that requires space close to the k-th order empirical entropy of S, and allows any l-long substring of S to be retrieved in optimal O(1 + l/log_|Σ| n) time. This matches the best known bounds [14, 7], via the use of binary encodings and tables only. We also apply this storage scheme to prove new time vs. space trade-offs for compressed self-indexes [5, 12] and the Burrows-Wheeler Transform [2].
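
For reference, the k-th order empirical entropy H_k(S) against which the space bound is measured can be computed directly; the short sketch below implements only this entropy measure, not the storage scheme itself.

```python
from collections import Counter, defaultdict
from math import log2

def h0(s: str) -> float:
    """Zeroth-order empirical entropy of s, in bits per symbol."""
    n = len(s)
    if n == 0:
        return 0.0
    return sum((c / n) * log2(n / c) for c in Counter(s).values())

def hk(s: str, k: int) -> float:
    """k-th order empirical entropy: average H0 of the symbols that follow
    each length-k context, weighted by how often the context occurs."""
    if k == 0:
        return h0(s)
    followers = defaultdict(list)
    for i in range(len(s) - k):
        followers[s[i:i + k]].append(s[i + k])
    return sum(len(f) * h0("".join(f)) for f in followers.values()) / len(s)

print(hk("mississippi", 0), hk("mississippi", 1))
```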

Journal ArticleDOI
TL;DR: An overview of the construction of meshfree basis functions is presented, with particular emphasis on moving least-squares approximants, natural neighbour-based polygonal interpolants, and entropy approximants.
Abstract: In this paper, an overview of the construction of meshfree basis functions is presented, with particular emphasis on moving least-squares approximants, natural neighbour-based polygonal interpolants, and entropy approximants. The use of information-theoretic variational principles to derive approximation schemes is a recent development. In this setting, data approximation is viewed as an inductive inference problem, with the basis functions being synonymous with a discrete probability distribution and the polynomial reproducing conditions acting as the linear constraints. The maximization (minimization) of the Shannon–Jaynes entropy functional (relative entropy functional) is used to unify the construction of globally and locally supported convex approximation schemes. A JAVA applet is used to visualize the meshfree basis functions, and comparisons and links between different meshfree approximation schemes are presented. Copyright © 2006 John Wiley & Sons, Ltd.
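
As a concrete instance of the variational viewpoint described above, the globally supported (Shannon-Jaynes) maximum-entropy approximant solves the following problem at each evaluation point x; the notation is mine, and relative-entropy (locally supported) variants modify the objective accordingly.

```latex
% Maximum-entropy basis functions p_a(x), a = 1..N, at a point x with
% nodes x_a: maximize the Shannon entropy subject to the zeroth- and
% first-order (linear) reproducing conditions.
\max_{p \ge 0} \; -\sum_{a=1}^{N} p_a \ln p_a
\quad \text{s.t.} \quad
\sum_{a=1}^{N} p_a = 1,
\qquad
\sum_{a=1}^{N} p_a \, x_a = x .

% The solution has the exponential (Gibbs) form, with the Lagrange
% multiplier lambda(x) chosen to satisfy the linear constraint:
p_a(x) \;=\; \frac{\exp\!\left(-\boldsymbol{\lambda}(x)\cdot(x_a - x)\right)}{Z(x)},
\qquad
Z(x) \;=\; \sum_{b=1}^{N} \exp\!\left(-\boldsymbol{\lambda}(x)\cdot(x_b - x)\right).
```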

Journal ArticleDOI
TL;DR: New theoretical convergence results on the cross-entropy (CE) method for discrete optimization are presented; it is shown that a popular implementation of the method converges and finds an optimal solution with probability arbitrarily close to 1.
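
For context, a minimal sketch of the standard cross-entropy method for binary (0/1) optimization is given below; the sample size, elite fraction, and smoothing parameter are illustrative defaults, and this is the generic method rather than the specific implementation whose convergence the paper analyzes.

```python
import numpy as np

def cross_entropy_max(score, n_bits, n_samples=100, elite_frac=0.1,
                      alpha=0.7, n_iter=50, seed=0):
    """Minimal cross-entropy method for maximizing score(x) over x in {0,1}^n.

    Maintains independent Bernoulli parameters p, samples candidates, keeps
    the elite fraction, and moves p toward the elites' empirical frequencies.
    """
    rng = np.random.default_rng(seed)
    p = np.full(n_bits, 0.5)
    n_elite = max(1, int(elite_frac * n_samples))
    best = None
    for _ in range(n_iter):
        samples = (rng.random((n_samples, n_bits)) < p).astype(int)
        scores = np.array([score(s) for s in samples])
        elite = samples[np.argsort(scores)[-n_elite:]]
        p = alpha * elite.mean(axis=0) + (1 - alpha) * p   # smoothed update
        cand = samples[scores.argmax()]
        if best is None or score(cand) > score(best):
            best = cand
    return best, p

# Example: recover the all-ones string (ONE-MAX).
best, p = cross_entropy_max(lambda x: x.sum(), n_bits=20)
print(best.sum())
```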

Proceedings ArticleDOI
24 Jun 2007
TL;DR: By carefully bounding the constrained regions in the entropy space, the exact characterization of the capacity region is obtained, thus closing the existing gap between the previously known inner and outer bounds.
Abstract: The capacity problem for general acyclic multi-source multi-sink networks with arbitrary transmission requirements has been studied by L. Song et al. (2003). Specifically, inner and outer bounds of the capacity region were derived in terms of Γ_n* and Γ̄_n*, respectively, the fundamental regions of the entropy function. In this paper, we show that by carefully bounding the constrained regions in the entropy space, we obtain the exact characterization of the capacity region, thus closing the existing gap between the above inner and outer bounds.

Journal ArticleDOI
TL;DR: This application shows that even in the presence of strong correlations, the methods precisely constrain the amount of information encoded by real spike trains recorded in vivo, and they provide data-robust upper and lower bounds on the mutual information.
Abstract: The estimation of the information carried by spike times is crucial for a quantitative understanding of brain function, but it is difficult because of an upward bias due to limited experimental sampling. We present new progress, based on two basic insights, on reducing the bias problem. First, we show that by means of a careful application of data-shuffling techniques, it is possible to cancel almost entirely the bias of the noise entropy, the most biased part of information. This procedure provides a new information estimator that is much less biased than the standard direct one and has similar variance. Second, we use a nonparametric test to determine whether all the information encoded by the spike train can be decoded assuming a low-dimensional response model. If this is the case, the complexity of response space can be fully captured by a small number of easily sampled parameters. Combining these two different procedures, we obtain a new class of precise estimators of information quantities, which can provide data-robust upper and lower bounds to the mutual information. These bounds are tight even when the number of trials per stimulus available is one order of magnitude smaller than the number of possible responses. The effectiveness and the usefulness of the methods are tested through applications to simulated data and recordings from somatosensory cortex. This application shows that even in the presence of strong correlations, our methods constrain precisely the amount of information encoded by real spike trains recorded in vivo.

Journal ArticleDOI
TL;DR: The NN method is illustrated by providing a much closer upper bound on the configurational entropy of internal rotation of a pentapeptide molecule than that obtained by the standard quasi‐harmonic method.
Abstract: A method for estimating the configurational (i.e., non-kinetic) part of the entropy of internal motion in complex molecules is introduced that does not assume any particular parametric form for the underlying probability density function. It is based on the nearest-neighbor (NN) distances of the points of a sample of internal molecular coordinates obtained by a computer simulation of a given molecule. As the method does not make any assumptions about the underlying potential energy function, it accounts fully for any anharmonicity of internal molecular motion. It provides an asymptotically unbiased and consistent estimate of the configurational part of the entropy of the internal degrees of freedom of the molecule. The NN method is illustrated by estimating the configurational entropy of internal rotation of capsaicin and two stereoisomers of tartaric acid, and by providing a much closer upper bound on the configurational entropy of internal rotation of a pentapeptide molecule than that obtained by the standard quasi-harmonic method. As a measure of dependence between any two internal molecular coordinates, a general coefficient of association based on the information-theoretic quantity of mutual information is proposed. Using NN estimates of this measure, statistical clustering procedures can be employed to group the coordinates into clusters of manageable dimensions and characterized by minimal dependence between coordinates belonging to different clusters.
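
A generic nearest-neighbor entropy estimator in the Kozachenko-Leonenko style is sketched below to illustrate the idea; the paper's configurational-entropy estimator builds on the same nearest-neighbor distances but has its own formulation and corrections, so treat this only as a generic illustration.

```python
import numpy as np
from scipy.spatial import cKDTree
from scipy.special import gamma as gamma_fn

EULER_GAMMA = 0.5772156649015329

def kl_entropy(samples):
    """Kozachenko-Leonenko nearest-neighbor estimate of differential entropy (nats).

    samples: (N, d) array of points drawn from the unknown density.
    """
    x = np.asarray(samples, dtype=float)
    if x.ndim == 1:
        x = x[:, None]
    n, d = x.shape
    # Distance from each point to its nearest neighbor (k=2: first hit is the point itself).
    r = cKDTree(x).query(x, k=2)[0][:, 1]
    log_vd = (d / 2) * np.log(np.pi) - np.log(gamma_fn(d / 2 + 1))  # log volume of unit d-ball
    return d * np.mean(np.log(r)) + log_vd + EULER_GAMMA + np.log(n - 1)

# Sanity check: a standard normal in 1-D has entropy 0.5*ln(2*pi*e) ~ 1.419 nats.
rng = np.random.default_rng(0)
print(kl_entropy(rng.standard_normal(5000)))
```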

Journal ArticleDOI
TL;DR: A minimum output entropy conjecture is proposed that, if proved to be correct, will establish that the capacity region of the bosonic broadcast channel equals the inner bound achieved using a coherent-state encoding and optimum detection.
Abstract: Previous work on the classical information capacities of bosonic channels has established the capacity of the single-user pure-loss channel, bounded the capacity of the single-user thermal-noise channel, and bounded the capacity region of the multiple-access channel. The latter is a multiple-user scenario in which several transmitters seek to simultaneously and independently communicate to a single receiver. We study the capacity region of the bosonic broadcast channel, in which a single transmitter seeks to simultaneously and independently communicate to two different receivers. It is known that the tightest available lower bound on the capacity of the single-user thermal-noise channel is that channel's capacity if, as conjectured, the minimum von Neumann entropy at the output of a bosonic channel with additive thermal noise occurs for coherent-state inputs. Evidence in support of this minimum output entropy conjecture has been accumulated, but a rigorous proof has not been obtained. We propose a minimum output entropy conjecture that, if proved to be correct, will establish that the capacity region of the bosonic broadcast channel equals the inner bound achieved using a coherent-state encoding and optimum detection. We provide some evidence that supports this conjecture, but again a full proof is not available.

Journal ArticleDOI
TL;DR: A quantum random-bit generator (QRBG) that harvests entropy by measuring single-photon and entangled two-photon polarization states is reported, and a quantum tomographic method is introduced and implemented to measure a lower bound on the 'min-entropy' of the system.
Abstract: Random-bit generators (RBGs) are key components of a variety of information processing applications ranging from simulations to cryptography. In particular, cryptographic systems require "strong" RBGs that produce high-entropy bit sequences, but traditional software pseudo-RBGs have very low entropy content and therefore are relatively weak for cryptography. Hardware RBGs yield entropy from chaotic or quantum physical systems and therefore are expected to exhibit high entropy, but in current implementations their exact entropy content is unknown. Here we report a quantum random-bit generator (QRBG) that harvests entropy by measuring single-photon and entangled two-photon polarization states. We introduce and implement a quantum tomographic method to measure a lower bound on the "min-entropy" of the system, and we employ this value to distill a truly random-bit sequence. This approach is secure: even if an attacker takes control of the source of optical states, a secure random sequence can be distilled.
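
The min-entropy that the tomographic procedure lower-bounds is the standard quantity below; it never exceeds the Shannon entropy and roughly measures how many nearly uniform random bits can be extracted per measurement.

```latex
% Min-entropy of a discrete random variable X with distribution p:
H_{\min}(X) \;=\; -\log_2 \max_{x} p(x).
```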

Journal ArticleDOI
TL;DR: This work presents a maximum-entropy-like method for determining, if it exists, the "least specific" capacity compatible with the initial preferences of the decision maker.