
Showing papers by "Polytechnic University of Milan published in 1996"


Journal ArticleDOI
01 Feb 1996
TL;DR: It is shown how the ant system (AS) can be applied to other optimization problems such as the asymmetric traveling salesman, the quadratic assignment and the job-shop scheduling problems, and the salient characteristics of the AS are discussed: global data-structure revision, distributed communication and probabilistic transitions.
Abstract: An analogy with the way ant colonies function has suggested the definition of a new computational paradigm, which we call ant system (AS). We propose it as a viable new approach to stochastic combinatorial optimization. The main characteristics of this model are positive feedback, distributed computation, and the use of a constructive greedy heuristic. Positive feedback accounts for rapid discovery of good solutions, distributed computation avoids premature convergence, and the greedy heuristic helps find acceptable solutions in the early stages of the search process. We apply the proposed methodology to the classical traveling salesman problem (TSP), and report simulation results. We also discuss parameter selection and the early setups of the model, and compare it with tabu search and simulated annealing using TSP. To demonstrate the robustness of the approach, we show how the ant system (AS) can be applied to other optimization problems like the asymmetric traveling salesman, the quadratic assignment and the job-shop scheduling. Finally we discuss the salient characteristics of the AS: global data structure revision, distributed communication and probabilistic transitions.
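The loop sketched in the abstract (probabilistic tour construction biased by pheromone trails and heuristic visibility, followed by evaporation and reinforcement) can be illustrated as follows. This is a minimal sketch of the AS for the symmetric TSP, not the authors' implementation; the parameter defaults (alpha, beta, rho, Q) are illustrative only.

```python
import random

def ant_system_tsp(dist, n_ants=10, n_iters=50, alpha=1.0, beta=2.0,
                   rho=0.5, Q=1.0, seed=0):
    """Minimal Ant System for the symmetric TSP: ants build tours with
    probabilistic transitions weighted by pheromone (tau^alpha) and
    visibility ((1/d)^beta); pheromone then evaporates (rho) and is
    reinforced on each tour in proportion to Q / tour_length."""
    rng = random.Random(seed)
    n = len(dist)
    tau = [[1.0] * n for _ in range(n)]  # uniform initial pheromone
    best_tour, best_len = None, float("inf")
    for _ in range(n_iters):
        tours = []
        for _ in range(n_ants):
            start = rng.randrange(n)
            tour, visited = [start], {start}
            while len(tour) < n:
                i = tour[-1]
                cand = [j for j in range(n) if j not in visited]
                w = [tau[i][j] ** alpha * (1.0 / dist[i][j]) ** beta
                     for j in cand]
                j = rng.choices(cand, weights=w)[0]  # probabilistic transition
                tour.append(j)
                visited.add(j)
            length = sum(dist[tour[k]][tour[(k + 1) % n]] for k in range(n))
            tours.append((tour, length))
            if length < best_len:
                best_tour, best_len = tour, length
        # global data-structure revision: evaporation plus positive feedback
        for i in range(n):
            for j in range(n):
                tau[i][j] *= 1.0 - rho
        for tour, length in tours:
            for k in range(n):
                i, j = tour[k], tour[(k + 1) % n]
                tau[i][j] += Q / length
                tau[j][i] += Q / length
    return best_tour, best_len
```

On four cities at the corners of a unit square the colony quickly settles on the perimeter tour of length 4.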

11,224 citations


Journal ArticleDOI
TL;DR: This work uses a mathematical framework to propose definitions of several important measurement concepts (size, length, complexity, cohesion, coupling); the authors believe that the formalisms and properties introduced are convenient and intuitive, and that the framework contributes constructively to a firmer theoretical ground for software measurement.
Abstract: Little theory exists in the field of software system measurement. Concepts such as complexity, coupling, cohesion or even size are very often subject to interpretation and appear to have inconsistent definitions in the literature. As a consequence, there is little guidance provided to the analyst attempting to define proper measures for specific problems. Many controversies in the literature are simply misunderstandings and stem from the fact that some people talk about different measurement concepts under the same label (complexity is the most common case). There is a need to define unambiguously the most important measurement concepts used in the measurement of software products. One way of doing so is to define precisely what mathematical properties characterize these concepts, regardless of the specific software artifacts to which these concepts are applied. Such a mathematical framework could generate a consensus in the software engineering community and provide a means for better communication among researchers, better guidelines for analysts, and better evaluation methods for commercial static analyzers for practitioners. We propose a mathematical framework which is generic, because it is not specific to any particular software artifact, and rigorous, because it is based on precise mathematical concepts. We use this framework to propose definitions of several important measurement concepts (size, length, complexity, cohesion, coupling). It does not intend to be complete or fully objective; other frameworks could have been proposed and different choices could have been made. However, we believe that the formalisms and properties we introduce are convenient and intuitive. This framework contributes constructively to a firmer theoretical ground of software measurement.

771 citations


Proceedings ArticleDOI
15 Oct 1996
TL;DR: This paper proposes an adaptive contention window mechanism, which dynamically selects the optimal backoff window according to the estimate of the number of contending stations, and shows that this technique leads to stable behavior, and it outperforms the standard protocol when the network load and the number of mobile stations are high.
Abstract: The IEEE 802.11 protocol for wireless local area networks adopts a CSMA/CA protocol with exponential backoff as its medium access control technique. As the throughput performance of such a scheme becomes critical when the number of mobile stations increases, in this paper we propose an adaptive contention window mechanism, which dynamically selects the optimal backoff window according to an estimate of the number of contending stations. We show that this technique leads to stable behavior, and that it outperforms the standard protocol when the network load and the number of mobile stations are high. We also investigate CSMA/CA with the optional RTS/CTS technique, and we show that our adaptive technique reaches better performance only when the packet size is short. Finally, the performance of a system environment with hidden terminals shows that the RTS/CTS mechanism, which can also be used in conjunction with the adaptive contention window mechanism, provides significant improvements.
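The intuition for tying the backoff window to the number of contenders can be seen in a toy slotted-contention model; this is an illustration of the principle, not the 802.11 MAC or the paper's estimator. If each of n stations transmits in a slot with probability 1/W, the per-slot success probability is maximized when W equals n, so the window should track the (estimated) number of contending stations.

```python
def success_probability(n, W):
    """Probability that exactly one of n stations transmits in a slot,
    assuming each transmits independently with probability 1/W."""
    p = 1.0 / W
    return n * p * (1.0 - p) ** (n - 1)

def best_window(n, w_max=1024):
    """Contention window maximizing the per-slot success probability."""
    return max(range(2, w_max + 1), key=lambda W: success_probability(n, W))
```

For 20 contenders the toy optimum is a window of 20 slots; a fixed exponential backoff, by contrast, reacts to the load only indirectly, through collisions.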

646 citations


Journal ArticleDOI
TL;DR: In this paper, the principles on which the technique is based and the various protecting effects induced by the cathodic polarization of reinforcement are illustrated; the differences between cathodic protection applied to control corrosion rate of chloride contaminated constructions and that applied to improve the corrosion resistance of the reinforcement of new structures expected to become contaminated are underlined; and the negative consequences of the method and the way to control them are shown.

320 citations


Journal ArticleDOI
TL;DR: In this paper, cylindrical silicon drift detectors have been designed, fabricated and tested, which include an integrated on-chip amplifier system with continuous reset, onchip voltage divider, electron accumulation layer stabilizer, large area, homogeneous radiation entrance window and a drain for surface generated leakage current.
Abstract: New cylindrical silicon drift detectors have been designed, fabricated and tested. They comprise an integrated on-chip amplifier system with continuous reset, on-chip voltage divider, electron accumulation layer stabilizer, large-area homogeneous radiation entrance window and a drain for surface-generated leakage current. Tests of the 3.5 mm² individual devices, which have also been grouped together to form sensitive areas of up to 21 mm², have shown the following spectroscopic results: at room temperature (300 K) the devices have shown a full width at half maximum at the Mn Kα line of a radioactive ⁵⁵Fe source of 225 eV with shaping times of 250 to 500 ns. At −20°C the resolution improves to 152 eV at 2 μs Gaussian shaping. At temperatures below 200 K the energy resolution is below 140 eV. With the implementation of a digital filtering system the resolution approaches 130 eV. The system was operated at count rates of up to 800 000 counts per second per readout node, still conserving the spectroscopic qualities of the detector system.

279 citations


Journal ArticleDOI
TL;DR: The main characteristics of Heuristics ‘derived’ from Nature, a border area between Operations Research and Artificial Intelligence, are described, with applications to graph optimization problems.

278 citations


Journal ArticleDOI
TL;DR: In this article, the scaling properties of temporal rainfall are shown to dictate the form of the depth-duration-frequency (DDF) curves of station precipitation, which are widely used in hydrological practice to predict design storms.

267 citations


Journal ArticleDOI
TL;DR: Harmonic and Cartan decompositions are used in this paper to prove that there are eight symmetry classes of elasticity tensors; recent results in apparent contradiction with this conclusion are discussed in a short history of the problem.
Abstract: Harmonic and Cartan decompositions are used to prove that there are eight symmetry classes of elasticity tensors. Recent results in apparent contradiction with this conclusion are discussed in a short history of the problem.

235 citations


Journal ArticleDOI
TL;DR: The conclusion is reached that only two spectral components of heart rate variability are soundly analysable in a time span of a few hundred cycles, i.e. the time series usually selected to afford an adequate number of events, adequate stationarity and some appraisal of dynamic changes.
Abstract: The activity of sinus node pacemaker cells is under continuous regulation mainly effected by neural mechanisms. Hence the study of the variability of heart period, especially when assessed in the frequency domain with spectral analysis, was proposed about 15 years ago [1] as a probe for the evaluation of its autonomic control. Initially three spectral components of heart rate variability (HRV) were defined, while the factors considered as modulating them included sympathetic and parasympathetic activities, humoral factors such as the renin-angiotensin system, or complex patterns like thermoregulation. On the basis of (i) an attempt to analyse cardiovascular neural regulation in closed-loop conditions, (ii) a different spectral methodology using autoregressive algorithms [2], and (iii) simultaneous spectral analysis of both heart rate and arterial pressure variabilities providing the possibility of calculating cross-spectra and their coherence, we reached the conclusion that only two spectral components were soundly analysable in a time span of a few hundred cycles, i.e., the time series usually selected to afford an adequate number of events, adequate stationarity and some appraisal of dynamic changes [3]. Hence, apart from a very low frequency (VLF) component [4], short-term variability appeared to consist mainly of an oscillation at low frequency (LF), usually around 0.1 Hz and largely related to vasomotor activity, and of a high frequency (HF) oscillation, related to respiratory activity and under control conditions often around 0.25 Hz. Since our initial studies [2], we have observed that the power of the LF component increased in percentage terms during manoeuvres exciting sympathetic activity while HF was

228 citations


Journal ArticleDOI
TL;DR: The potential of the analytical hierarchy process (AHP) for assessing and comparing the overall manufacturing performance of different departments is shown and its assumptions and limitations are pointed out.
Abstract: Many authors have suggested including non‐financial measures, besides traditional cost measures, in manufacturing performance measurement systems, in order to control the correct implementation of the manufacturing strategy with respect to all competitive priorities (quality, timeliness, flexibility, dependability, etc.). But the use of non‐financial performance measures makes it difficult to assess and compare the overall effectiveness of each manufacturing department, in terms of support provided to the achievement of the manufacturing strategy, since to this aim it is necessary to integrate performance measures expressed in heterogeneous measurement units. Aims to show the potential of the analytical hierarchy process (AHP) for assessing and comparing the overall manufacturing performance of different departments. Does not report the detailed analytical description of the AHP but focuses on the practical problems and managerial implications related to its application to performance measurement, pointing out also its assumptions and limitations.
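The aggregation problem the abstract describes (combining measures expressed in heterogeneous units into one overall index) is what the AHP's priority computation addresses. Below is a generic textbook sketch, not the paper's case study: pairwise comparison ratios between criteria form a matrix whose principal eigenvector, approximated here by power iteration, gives the weights used to aggregate departmental scores.

```python
def ahp_priorities(M, iters=200):
    """Approximate the principal eigenvector of a pairwise-comparison
    matrix M (M[i][j] is the judged importance of criterion i relative
    to j) by power iteration; the normalized vector is the AHP weights."""
    n = len(M)
    w = [1.0 / n] * n
    for _ in range(iters):
        v = [sum(M[i][j] * w[j] for j in range(n)) for i in range(n)]
        s = sum(v)
        w = [x / s for x in v]
    return w

def overall_performance(weights, scores):
    """Weighted aggregation of one department's normalized scores
    (quality, timeliness, flexibility, ...) into a single index."""
    return sum(w * s for w, s in zip(weights, scores))
```

For a perfectly consistent comparison matrix the method recovers the underlying weights exactly; real judgment matrices are inconsistent, which is why the AHP also defines a consistency ratio (omitted here).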

203 citations


Journal ArticleDOI
TL;DR: It is demonstrated that following the approach presented may lead to violations of the strict prescriptions and proscriptions of measurement theory, but that in practical terms these violations would have diminished consequences, especially when compared to the advantages afforded to the practicing researcher.
Abstract: Elements of measurement theory have recently been introduced into the software engineering discipline. It has been suggested that these elements should serve as the basis for developing, reasoning about, and applying measures. For example, it has been suggested that software complexity measures should be additive, that measures fall into a number of distinct types (i.e., levels of measurement: nominal, ordinal, interval, and ratio), that certain statistical techniques are not appropriate for certain types of measures (e.g., parametric statistics for less-than-interval measures), and that certain transformations are not permissible for certain types of measures (e.g., non-linear transformations for interval measures). In this paper we argue that, in spite of the importance of measurement theory, and in the context of software engineering, many of these prescriptions and proscriptions are either premature or, if strictly applied, would represent a substantial hindrance to the progress of empirical research in software engineering. This argument is based partially on studies that have been conducted by behavioral scientists and by statisticians over the last five decades. We also present a pragmatic approach to the application of measurement theory in software engineering. While following our approach may lead to violations of the strict prescriptions and proscriptions of measurement theory, we demonstrate that in practical terms these violations would have diminished consequences, especially when compared to the advantages afforded to the practicing researcher.

Journal ArticleDOI
TL;DR: In this article, a numerical scheme to simulate full crack propagation is proposed which makes use of interface laws relating interlaminar stresses to displacement discontinuities along the plane of crack propagation, and the relation between interface laws and mixed-mode failure loci in terms of critical energies is discussed and clarified.
Abstract: A study of mixed-mode crack propagation in bending-based interlaminar fracture specimens is here presented. A numerical scheme to simulate full crack propagation is proposed which makes use of interface laws relating interlaminar stresses to displacement discontinuities along the plane of crack propagation. The relation between interface laws and mixed-mode failure loci in terms of critical energies is discussed and clarified. Numerical simulations are presented and compared with analytical and experimental results.

Journal ArticleDOI
TL;DR: In this paper, an inversion procedure based on the integral equation model (IEM) is developed to assess the retrieval of soil moisture from ERS 1 (European Remote Sensing Satellite) synthetic aperture radar (SAR) data.
Abstract: In order to assess the retrieval of soil moisture from ERS 1 (European Remote Sensing Satellite) synthetic aperture radar (SAR) data, an inversion procedure based on the integral equation model (IEM) [Fung et al., 1992] is developed. First, the IEM is used to analyze the sensitivity of radar echoes (in terms of the backscattering coefficient σ0) to the surface parameters (roughness and dielectric constant) under the ERS 1 SAR configuration. Results obtained for random rough bare soil fields show that the effect of surface roughness is very strong, particularly in the case of smooth surfaces, and that the sensitivity of σ0 to the dielectric constant is independent of the radar configuration and the roughness conditions. This means that the range of variation of backscattering with respect to the dielectric constant variation from dry to wet soil remains the same (about 5 dB) for any roughness condition and radar configuration. The possibility of applying the inversion procedure to retrieve soil moisture is investigated using a set of data collected at a test site situated near Naples, Italy, during the Sele Synthetic Aperture Radar experiment (SESAR) campaign (November 1993). Simultaneously with ERS 1 overpasses, dielectric constant and roughness measurements were taken over two flat bare fields. From this analysis it is found that the inversion of backscattering from ERS 1 SAR into soil moisture is not reliable without accurate information on roughness if the surface is smooth. In this case it is observed that the sensitivity to the roughness parameters is much higher than the sensitivity to the dielectric constant, so that even a small error in the measurement of this parameter can affect the retrieved value of soil moisture significantly. The inversion procedure provides more reliable soil moisture estimates when surfaces rougher than those analyzed in the field experiment are considered.

Journal ArticleDOI
01 Aug 1996
TL;DR: A novel proof for the stability of rigid robot arms controlled by PID algorithms: the proof is based on a model of the robot where the nominal decoupled linear part is emphasized and the theoretical results are verified on a simple two DOF example.
Abstract: PID control is the most widespread technique for the control of industrial robot arms. However, the adoption of PID control is not adequately supported by a theoretical basis, since the results presented in the literature are of dubious interpretation and difficult, if not impossible, to verify. Motivated by this lack of theoretical support, this paper presents a novel proof for the stability of rigid robot arms controlled by PID algorithms: the proof is based on a model of the robot where the nominal decoupled linear part is emphasized. The main result consists in a simple condition between the exponential stability degree of the nominal closed loop system and the parameters of a bound on the nonlinear terms in the dynamic model of the mechanical manipulator. Some considerations are also worked out on the relations between the eigenvalues of the nominal system and the extension of the stability region. The theoretical results are finally verified on a simple two DOF example.
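For concreteness, the control law in question can be written out and simulated on a hypothetical one-degree-of-freedom rigid link (a pure inertia); this is an illustration of joint-space PID, not the paper's stability proof or its two-DOF example, and the gains and inertia below are arbitrary choices.

```python
def pid_track(theta_ref, Kp=100.0, Ki=50.0, Kd=20.0, inertia=1.0,
              dt=1e-3, T=20.0):
    """Simulate a PID-controlled 1-DOF rigid link (pure inertia):
    inertia * theta_ddot = Kp*e + Ki*integral(e) - Kd*theta_dot,
    using semi-implicit Euler integration; returns the final angle."""
    theta, omega, integ = 0.0, 0.0, 0.0
    for _ in range(int(T / dt)):
        e = theta_ref - theta
        integ += e * dt
        tau = Kp * e + Ki * integ - Kd * omega  # derivative on measurement
        omega += (tau / inertia) * dt
        theta += omega * dt
    return theta
```

With these (stable) gains the joint settles on the set-point; the paper's contribution is the condition guaranteeing such stability for the full nonlinear multi-link dynamics.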

Journal ArticleDOI
TL;DR: The main modes of behavior of a food chain model composed of logistic prey and Holling type II predator and superpredator are discussed in this paper, and the appearance of chaos in the model is proved to be related to a Hopf bifurcation and a degenerate homoclinic bifurcation in the prey-predator subsystem.
Abstract: The main modes of behavior of a food chain model composed of logistic prey and Holling type II predator and superpredator are discussed in this paper. The study is carried out through bifurcation analysis, alternating between a normal form approach and numerical continuation. The two-parameter bifurcation diagram of the model contains Hopf, fold, and transcritical bifurcation curves of equilibria as well as flip, fold, and transcritical bifurcation curves of limit cycles. The appearance of chaos in the model is proved to be related to a Hopf bifurcation and a degenerate homoclinic bifurcation in the prey-predator subsystem. The boundary of the chaotic region is shown to have a very peculiar structure.
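The model class described is standard and can be written down directly. The sketch below uses generic functional forms (logistic prey, Holling type II functional responses) with placeholder parameters, not the values on the paper's bifurcation diagrams, together with a plain RK4 integrator.

```python
def food_chain(state, r=1.0, K=1.0, a1=5.0, b1=3.0, e1=1.0, d1=0.4,
               a2=0.1, b2=2.0, e2=1.0, d2=0.01):
    """Tritrophic food chain: logistic prey x, Holling type II
    predator y and superpredator z."""
    x, y, z = state
    f1 = a1 * x / (1.0 + b1 * x)  # type II response of y feeding on x
    f2 = a2 * y / (1.0 + b2 * y)  # type II response of z feeding on y
    return (r * x * (1.0 - x / K) - f1 * y,
            e1 * f1 * y - f2 * z - d1 * y,
            e2 * f2 * z - d2 * z)

def rk4(f, state, dt, steps):
    """Classical fourth-order Runge-Kutta integration of dstate/dt = f(state)."""
    for _ in range(steps):
        k1 = f(state)
        k2 = f(tuple(s + 0.5 * dt * k for s, k in zip(state, k1)))
        k3 = f(tuple(s + 0.5 * dt * k for s, k in zip(state, k2)))
        k4 = f(tuple(s + dt * k for s, k in zip(state, k3)))
        state = tuple(s + dt / 6.0 * (a + 2 * b + 2 * c + d)
                      for s, a, b, c, d in zip(state, k1, k2, k3, k4))
    return state
```

A quick sanity check: with both consumers absent the prey equation reduces to the logistic and converges to the carrying capacity K. The chaotic regimes discussed in the paper appear only for suitable parameter combinations found by bifurcation analysis and continuation.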

Journal ArticleDOI
TL;DR: The InGaAs/InP avalanche photodiodes, designed for optical receivers and range finders, can be operated biased above the breakdown voltage, achieving single-photon sensitivity, and the performance of state-of-the-art detectors in the near IR is compared.
Abstract: Commercially available InGaAs/InP avalanche photodiodes, designed for optical receivers and range finders, can be operated biased above the breakdown voltage, achieving single-photon sensitivity. We describe in detail how to select the device for photon-counting applications among commercial samples. Because of the high dark-count rate the detector must be cooled to below 100 K and operated in a gated mode. We achieved a noise equivalent power of 3 × 10⁻¹⁶ W/√Hz at a 1.55-μm wavelength and a time resolution well below 1 ns, with a best value of 200 ps FWHM. Finally we compare these figures with the performance of state-of-the-art detectors in the near IR, and we highlight the potential of properly designed InGaAs/InP avalanche photodiodes in single-photon detection.

Journal ArticleDOI
TL;DR: In this article, a three-dimensional mode of spatial instability, related to the temporal algebraic growth that determines lift-up in parallel flow, is found to occur in the two-dimensional boundary layer growing over a flat surface.
Abstract: A three-dimensional mode of spatial instability, related to the temporal algebraic growth that determines lift-up in parallel flow, is found to occur in the two-dimensional boundary layer growing over a flat surface. This unstable perturbation can be framed within the limits of Prandtl's standard boundary-layer approximation, and therefore develops at any Reynolds number for which the boundary layer exists, in sharp contrast to all previously known flow instabilities which only occur beyond a sharply defined Reynolds-number threshold. It is thus a good candidate for the initial linear amplification mechanism that leads to bypass transition.

Journal ArticleDOI
TL;DR: In this article, a cyclic spanwise oscillation of the wall with a proper frequency and amplitude is imposed, allowing a reduction of the turbulent drag of up to 40% in turbulent boundary layers and channel flows.
Abstract: In the present work a technique is numerically investigated which is aimed at reducing the friction drag in turbulent boundary layers and channel flows. A cyclic spanwise oscillation of the wall with a proper frequency and amplitude is imposed, allowing a reduction of the turbulent drag of up to 40%. The present work is based on the numerical simulation of the Navier-Stokes equations in the simple geometry of a plane channel flow. The frequency of the oscillations is kept fixed at the most efficient value determined in previous studies, while the choice of the best value for the amplitude of the oscillations is evaluated not only in terms of friction reduction, but also by taking into consideration the overall energy balance and the power spent for the motion of the wall. The analysis of turbulence statistics allows us to shed some light on the way the oscillations interact with wall turbulence, as illustrated by visual inspection of some instantaneous flow fields. Finally, a simple explanation is proposed for this interaction, which leads to a rough estimate of the most efficient value for the frequency of the oscillations.

Journal ArticleDOI
TL;DR: In this article, the most interesting processes, either physico-chemical or biological, tested on pilot scale or in industrial plants, for the treatment of these effluents, are presented.

Journal ArticleDOI
TL;DR: In this paper, the reactivity of ternary V2O5−WO3/TiO2 de-NOx catalysts was investigated under steady-state and transient conditions.
Abstract: The reactivity of ternary V2O5−WO3/TiO2 de-NOx catalysts (V2O5 = 0−1.47% w/w, WO3 = 0−9% w/w) in the selective catalytic reduction (SCR) reaction is investigated under steady-state and transient conditions. The results indicate that over the investigated catalysts the SCR reaction occurs via a redox mechanism and that the rate-determining step of the reaction is the catalyst reoxidation process. The reactivity of the V2O5−WO3/TiO2 catalysts increases on increasing either the V2O5 or the WO3 loading; the reactivity of V and/or W in the ternary catalysts is higher than that of the corresponding binary samples. A synergism between the TiO2-supported V and W surface oxide species in the SCR reaction is suggested, which manifests itself in the enhancement of the redox properties of the samples. Accordingly, tungsta increases the rate of the SCR reaction of V2O5/TiO2 catalysts by favoring the catalyst reoxidation by gas-phase oxygen.

Journal ArticleDOI
TL;DR: In this paper, a mathematical expression is presented which can be used to describe the yield and plastic potential surfaces in any elasto-plastic constitutive model, which is defined completely by a maximum of only four parameters.

Journal ArticleDOI
TL;DR: In this paper, the results about the esterification of acetic acid with ethanol on a highly cross-linked sulphonic ion-exchange resin (Amberlyst 15) in a continuous Simulated Moving Bed Reactor (SMBR) laboratory unit are reported.

Journal ArticleDOI
TL;DR: In this article, the Granger causality relationship between a firm's intramural R&D intensity and technological cooperative agreements is analyzed using vector autoregression for panel data, and it is shown that decisions on interfirm technological collaborations are shown to cause a la Granger decisions on internal R & D investments and vice versa.

Book ChapterDOI
01 Jan 1996
TL;DR: The problem of learning fuzzy rules using Evolutionary Learning techniques, such as Genetic Algorithms and Learning Classifier Systems, are discussed.
Abstract: We discuss the problem of learning fuzzy rules using Evolutionary Learning techniques, such as Genetic Algorithms and Learning Classifier Systems.

Journal ArticleDOI
TL;DR: Findings support the hypothesis that chaotic dynamics underlie HRV, and indicate that non-linear dynamics are likely to be present in HRV control mechanisms, giving rise to complex and qualitatively different behaviours.
Abstract: Objectives : Heart rate variability (HRV) is characterised by a variety of linear, non-linear, periodical and non-periodical oscillations. The aim of the present study was mainly to investigate the role played by neural mechanisms in determining non-linear and non-periodical components. Methods : Analysis was performed in 7 recently heart-transplanted patients and in 7 controls of similar age whose HRV signal was collected during 24 h. Parameters that quantify non-linear dynamic behaviour in a time series were calculated. We first assessed the specific non-linear nature of the time series by a test on surrogate data after Fourier phase randomization. Furthermore, the D2 correlation dimension, K2 Kolmogorov entropy, and H self-similarity exponent of the signal were estimated. From this last parameter, the dimension 1/H can be obtained. In order to assess whether the dynamics of the system are compatible with chaotic characteristics, the entire spectrum of Lyapunov exponents was calculated. We used return maps to graphically represent the non-linear and non-periodical behaviours in patients and controls. Results : Surrogate data suggest that the HRV time courses have unique non-linear characteristics. The D2, K2 and 1/H parameters were significantly lower in transplanted subjects than in controls. Positivity of the first Lyapunov exponent indicates divergence of trajectories in state-space. Furthermore, the display of return maps on projections obtained after Singular Value Decomposition, especially in low-complexity data (as in transplanted patients), shows a structure which is suggestive of a strange attractor. These findings support the hypothesis that chaotic dynamics underlie HRV. Conclusion : These results indicate that non-linear dynamics are likely to be present in HRV control mechanisms, giving rise to complex and qualitatively different behaviours.
System complexity decreases in transplanted patients and this may be related to loss of the neural modulation of heart rate.

Journal ArticleDOI
TL;DR: The coupling between joint kinematics and kinetics during level walking was analysed by plotting joint angles vs. joint moments in nine normal male subjects walking at three different velocities, and consistent portions of the moment-angle loops can be described as a sequence of quasi-constant slope phases.

Journal ArticleDOI
TL;DR: An automatic-repeat-request (ARQ) Go-Back-N (GBN) protocol with unreliable feedback and time-out mechanism is studied, using renewal theory.
Abstract: An automatic-repeat-request (ARQ) Go-Back-N (GBN) protocol with unreliable feedback and a time-out mechanism is studied using renewal theory. Transmissions on both the forward and the reverse channels are assumed to experience Markovian errors. The exact throughput of the protocol is evaluated, and simulation results that confirm the analysis are presented. A detailed comparison of the proposed method and the commonly used transfer function method reveals that the proposed approach is simple and potentially more powerful.
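For orientation, the baseline case that the analysis generalizes (iid packet errors and ideal feedback, rather than the paper's Markovian errors and unreliable feedback) has a closed-form Go-Back-N throughput, since every failed attempt wastes a whole window of N slots. The sketch below checks that formula by simulation; it is not the paper's renewal-theory treatment.

```python
import random

def gbn_efficiency(p, N):
    """Analytic Go-Back-N efficiency with ideal feedback and iid packet
    error rate p: each failed attempt wastes the whole window of N slots."""
    return (1.0 - p) / (1.0 - p + N * p)

def gbn_simulate(p, N, packets=100_000, seed=1):
    """Monte-Carlo check: per delivered packet, every failure costs N
    slots (the lost packet plus the N-1 packets in flight behind it)."""
    rng = random.Random(seed)
    slots = 0
    for _ in range(packets):
        attempts = 1
        while rng.random() < p:  # retry until the packet gets through
            attempts += 1
        slots += 1 + N * (attempts - 1)
    return packets / slots
```

With p = 0.1 and N = 7 the efficiency is 0.9 / 1.6 ≈ 0.56, and the Monte-Carlo estimate agrees; bursty (Markovian) errors and lossy feedback change these numbers, which is what the renewal-theory analysis captures.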

Journal ArticleDOI
TL;DR: In this paper, a model of a channel network is proposed for PTMSP, with large holes (mean radius r = 0.75 nm) connected by channel-like holes (mean radius 0.45 nm); the influence of the external atmosphere on the experimental PAL measurements is investigated in order to determine correctly the size and the number of free volume holes.
Abstract: PTMSP membranes were prepared and characterized. The mean molecular weight of the polymer was found to be 450,000 Da. The time dependence of the density, mechanical properties, IR spectra, DSC, PAL, and permeability data of the polymer and membranes is presented. A detailed analysis of PAL results for PTMSP samples under vacuum and in air, oxygen, and nitrogen atmospheres is presented, with the aim of investigating the influence of the external atmosphere on the experimental PAL measurements and of correctly determining the size and the number of the free volume holes. From all the experimental data a channel-network model is proposed for PTMSP, with large holes (mean radius r = 0.75 nm) connected by channel-like holes (mean radius 0.45 nm). The number of large holes decreases with ageing, but not their size, whereas the number of small holes does not change but their size decreases. According to our model, the decrease in the permeability of PTMSP with time could be caused by the decrease in the size of the channel-like holes. © 1996 John Wiley & Sons, Inc.

Journal ArticleDOI
TL;DR: The authors discuss an efficient phase-preserving technique for ScanSAR focusing; the resulting decorrelation can be significantly reduced by means of an azimuth-varying filter, and SAR-ScanSAR interferometry, in which the decorrelation can always be removed, is proposed.
Abstract: The authors discuss an efficient phase preserving technique for ScanSAR focusing, used to obtain images suitable for ScanSAR interferometry. Given two complex focused ScanSAR images of the same area, an interferogram can be generated as for conventional repeat-pass SAR interferometry. However, due to the nonstationary azimuth spectrum of ScanSAR images, the coherence of the interferometric pair and the interferogram resolution are affected both by the possible scan misregistration between two passes and by the terrain slopes along the azimuth. The resulting decorrelation can be significantly reduced by means of an azimuth varying filter, provided that some conditions on the scan misregistration are met. Finally, SAR-ScanSAR interferometry is proposed: here the decorrelation can always be removed, with no resolution loss, by means of the technique presented.

Journal ArticleDOI
TL;DR: Four different expressions, derived from the diffusion theory or the random walk model, were used to fit time-resolved reflectance data for the evaluation of tissue optical properties, showing different performances depending on the optical properties of the sample and the experimental conditions.
Abstract: Four different expressions, derived from the diffusion theory or the random walk model, were used to fit time-resolved reflectance data for the evaluation of tissue optical properties. The experimental reflectance curves were obtained from phantoms of known optical parameters (absorption and transport scattering coefficients) covering the range of typical values for biological tissues between 600 and 900 nm. The measurements were performed using an instrumentation for time-correlated single-photon counting. The potential of the four methods in the assessment of the absorption and transport scattering coefficients was evaluated in terms of absolute error, linearity error, and dispersion of data. Each method showed different performances depending on the optical properties of the sample and the experimental conditions. We propose some criteria for the optimal choice of the fitting method to be used in different applications.