
Showing papers by "University of Trento" published in 2008


Journal ArticleDOI
TL;DR: In this article, the physics of quantum degenerate atomic Fermi gases in uniform as well as in harmonically trapped configurations is reviewed from a theoretical perspective, focusing on the effect of interactions that bring the gas into a superfluid phase at low temperature.
Abstract: The physics of quantum degenerate atomic Fermi gases in uniform as well as in harmonically trapped configurations is reviewed from a theoretical perspective. Emphasis is given to the effect of interactions that play a crucial role, bringing the gas into a superfluid phase at low temperature. In these dilute systems, interactions are characterized by a single parameter, the $s$-wave scattering length, whose value can be tuned using an external magnetic field near a broad Feshbach resonance. The BCS limit of ordinary Fermi superfluidity, the Bose-Einstein condensation (BEC) of dimers, and the unitary limit of large scattering length are important regimes exhibited by interacting Fermi gases. In particular, the BEC and the unitary regimes are characterized by a high value of the superfluid critical temperature, on the order of the Fermi temperature. Different physical properties are discussed, including the density profiles and the energy of the ground-state configurations, the momentum distribution, the fraction of condensed pairs, collective oscillations and pair-breaking effects, the expansion of the gas, the main thermodynamic properties, the behavior in the presence of optical lattices, and the signatures of superfluidity, such as the existence of quantized vortices, the quenching of the moment of inertia, and the consequences of spin polarization. Various theoretical approaches are considered, ranging from the mean-field description of the BCS-BEC crossover to nonperturbative methods based on quantum Monte Carlo techniques. A major goal of the review is to compare theoretical predictions with available experimental results.

1,753 citations


Journal ArticleDOI
12 Jun 2008-Nature
TL;DR: This work uses a non-interacting Bose–Einstein condensate to study Anderson localization of waves in disordered media and describes the crossover, finding that the critical disorder strength scales with the tunnelling energy of the atoms in the lattice.
Abstract: Anderson localization of waves in disordered media was originally predicted fifty years ago, in the context of transport of electrons in crystals. The phenomenon is much more general and has been observed in a variety of systems, including light waves. However, Anderson localization has not been observed directly for matter waves. Owing to the high degree of control over most of the system parameters (in particular the interaction strength), ultracold atoms offer opportunities for the study of disorder-induced localization. Here we use a non-interacting Bose-Einstein condensate to study Anderson localization. The experiment is performed with a one-dimensional quasi-periodic lattice-a system that features a crossover between extended and exponentially localized states, as in the case of purely random disorder in higher dimensions. Localization is clearly demonstrated through investigations of the transport properties and spatial and momentum distributions. We characterize the crossover, finding that the critical disorder strength scales with the tunnelling energy of the atoms in the lattice. This controllable system may be used to investigate the interplay of disorder and interaction (ref. 7 and references therein), and to explore exotic quantum phases.
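
As a minimal illustration (not the authors' code): the one-dimensional quasi-periodic lattice used in this experiment is commonly modelled by the Aubry-André Hamiltonian, in which all states localize once the quasi-disorder strength exceeds twice the tunnelling energy. The sketch below diagonalizes that model and uses the inverse participation ratio (IPR) of the ground state to expose the crossover; the system size and parameter values are illustrative.

```python
# Minimal sketch: Aubry-Andre model
#   H = -J sum_j (|j><j+1| + h.c.) + D sum_j cos(2*pi*beta*j) |j><j|,
# which localizes all states for D/J > 2.
import numpy as np

def aubry_andre_ipr(n_sites=200, disorder_over_J=3.0, beta=(np.sqrt(5) - 1) / 2):
    """Return the inverse participation ratio (IPR) of the ground state:
    ~1/n_sites for extended states, O(1) for localized ones."""
    J, D = 1.0, disorder_over_J
    H = np.zeros((n_sites, n_sites))
    for j in range(n_sites - 1):
        H[j, j + 1] = H[j + 1, j] = -J                # tunnelling term
    for j in range(n_sites):
        H[j, j] = D * np.cos(2 * np.pi * beta * j)    # quasi-periodic on-site energy
    _, vecs = np.linalg.eigh(H)
    psi0 = vecs[:, 0]
    return np.sum(np.abs(psi0) ** 4)

for d in (0.5, 2.0, 3.0):
    print(f"D/J = {d}: ground-state IPR = {aubry_andre_ipr(disorder_over_J=d):.4f}")
```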

1,379 citations


Journal ArticleDOI
TL;DR: It is argued that weighted, alpha-like coefficients, traditionally less used than kappa-like measures in computational linguistics, may be more appropriate for many corpus annotation tasks—but that their use makes the interpretation of the value of the coefficient even harder.
Abstract: This article is a survey of methods for measuring agreement among corpus annotators. It exposes the mathematics and underlying assumptions of agreement coefficients, covering Krippendorff's alpha as well as Scott's pi and Cohen's kappa; discusses the use of coefficients in several annotation tasks; and argues that weighted, alpha-like coefficients, traditionally less used than kappa-like measures in computational linguistics, may be more appropriate for many corpus annotation tasks—but that their use makes the interpretation of the value of the coefficient even harder.
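
A minimal sketch (not from the article) of the chance-corrected family it surveys: both Cohen's kappa and Scott's pi have the form (Ao - Ae)/(1 - Ae) and differ only in how the expected agreement Ae is estimated.

```python
# Chance-corrected agreement for two coders: kappa uses per-coder label
# distributions for the expected agreement Ae; pi uses a single pooled one.
from collections import Counter

def kappa_and_pi(coder1, coder2):
    n = len(coder1)
    ao = sum(a == b for a, b in zip(coder1, coder2)) / n   # observed agreement
    c1, c2 = Counter(coder1), Counter(coder2)
    labels = set(c1) | set(c2)
    ae_kappa = sum((c1[l] / n) * (c2[l] / n) for l in labels)
    ae_pi = sum(((c1[l] + c2[l]) / (2 * n)) ** 2 for l in labels)
    return (ao - ae_kappa) / (1 - ae_kappa), (ao - ae_pi) / (1 - ae_pi)

k, p = kappa_and_pi(list("AABBBC"), list("AABBCC"))
print(f"kappa = {k:.3f}, pi = {p:.3f}")   # -> kappa = 0.750, pi = 0.745
```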

1,324 citations


Journal ArticleDOI
TL;DR: This work suggests a middle ground between the embodied and disembodied cognition hypotheses – grounding by interaction, which combines the view that concepts are, at some level, 'abstract' and 'symbolic', with the idea that sensory and motor information may 'instantiate' online conceptual processing.
Abstract: Many studies have demonstrated that the sensory and motor systems are activated during conceptual processing. Such results have been interpreted as indicating that concepts, and important aspects of cognition more broadly, are embodied. That conclusion does not follow from the empirical evidence. The reason is that the empirical evidence can equally be accommodated by a ‘disembodied’ view of conceptual representation that makes explicit assumptions about spreading activation between the conceptual and sensory and motor systems. At the same time, the strong form of the embodied cognition hypothesis is at variance with currently available neuropsychological evidence. We suggest a middle ground between the embodied and disembodied cognition hypotheses – grounding by interaction. This hypothesis combines the view that concepts are, at some level, ‘abstract’ and ‘symbolic’, with the idea that sensory and motor information may ‘instantiate’ online conceptual processing.

1,078 citations


Journal ArticleDOI
TL;DR: A conservative least-squares polynomial reconstruction operator is applied to the discontinuous Galerkin method, which yields space–time polynomials for the vector of conserved variables and for the physical fluxes and source terms that can be used in a natural way to construct very efficient fully-discrete and quadrature-free one-step schemes.

555 citations


Journal ArticleDOI
TL;DR: In this paper, the spontaneous formation of pinned quantized vortices in the Bose-condensed phase of a polariton fluid was observed in a solid state system made of exciton polaritons.
Abstract: When a superfluid—such as liquid helium—is set in rotation, vortices appear in which circulation around a closed loop can take only discrete values. Such quantized vortices have now been observed in a solid-state system—a Bose–Einstein condensate made of exciton polaritons. One of the most striking quantum effects in an interacting Bose gas at low temperature is superfluidity. First observed in liquid 4He, this phenomenon has been intensively studied in a variety of systems for its remarkable features such as the persistence of superflows and the proliferation of quantized vortices1. The achievement of Bose–Einstein condensation in dilute atomic gases2 provided the opportunity to observe and study superfluidity in an extremely clean and well-controlled environment. In the solid state, Bose–Einstein condensation of exciton polaritons has been reported recently3,4,5,6. Polaritons are strongly interacting light–matter quasiparticles that occur naturally in semiconductor microcavities in the strong-coupling regime and constitute an interesting example of composite bosons. Here, we report the observation of spontaneous formation of pinned quantized vortices in the Bose-condensed phase of a polariton fluid. Theoretical insight into the possible origin of such vortices is presented in terms of a generalized Gross–Pitaevskii equation. Whereas the observation of quantized vortices is, in itself, not sufficient for establishing the superfluid nature of the non-equilibrium polariton condensate, it suggests parallels between our system and conventional superfluids.
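
For reference, a commonly used generalized Gross–Pitaevskii equation for a driven-dissipative polariton condensate has the schematic form below: the standard kinetic, potential, and interaction terms of the GPE, supplemented by a non-Hermitian pump/decay term. This is a generic form, not necessarily the exact generalization used in the paper.

```latex
% Schematic generalized GPE for a non-equilibrium polariton condensate:
% P(r) is a pump (gain) profile and gamma a decay rate.
i\hbar\,\frac{\partial \psi}{\partial t} =
  \left[ -\frac{\hbar^{2}\nabla^{2}}{2m} + V(\mathbf{r}) + g\,|\psi|^{2}
  + \frac{i\hbar}{2}\bigl(P(\mathbf{r}) - \gamma\bigr) \right]\psi
```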

544 citations


Journal ArticleDOI
TL;DR: This paper proposes SocialCast, a routing framework for publish-subscribe that exploits predictions based on metrics of social interaction to identify the best information carriers, and shows that prediction of colocation and node mobility allows for maintaining a very high and steady event delivery with low overhead and latency.
Abstract: Applications involving the dissemination of information directly relevant to humans (e.g., service advertising, news spreading, environmental alerts) often rely on publish-subscribe, in which the network delivers a published message only to the nodes whose subscribed interests match it. In principle, publish-subscribe is particularly useful in mobile environments, since it minimizes the coupling among communication parties. However, to the best of our knowledge, none of the (few) works that tackled publish-subscribe in mobile environments has yet addressed intermittently-connected human networks. Socially-related people tend to be co-located quite regularly. This characteristic can be exploited to drive forwarding decisions in the interest-based routing layer supporting the publish-subscribe network, yielding not only improved performance but also the ability to overcome high rates of mobility and long-lasting disconnections. In this paper we propose SocialCast, a routing framework for publish-subscribe that exploits predictions based on metrics of social interaction (e.g., patterns of movements among communities) to identify the best information carriers. We highlight the principles underlying our protocol, illustrate its operation, and evaluate its performance using a mobility model based on a social network validated with real human mobility traces. The evaluation shows that prediction of colocation and node mobility allows for maintaining a very high and steady event delivery with low overhead and latency, despite the variation in density, number of replicas per message, or speed.
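
The forwarding decision can be pictured as each node scoring itself, per interest, on social-interaction metrics and handing a message only to a neighbour with a strictly higher score. The sketch below is purely illustrative: the metric names and weights are hypothetical stand-ins, not the paper's actual utility functions.

```python
# Illustrative sketch of utility-based carrier selection in the spirit of
# SocialCast; metric names and weights are hypothetical, not the protocol's.
from dataclasses import dataclass

@dataclass
class Node:
    name: str
    predicted_colocation: dict   # interest -> predicted time spent near subscribers
    connectivity_change: float   # proxy for node mobility

def utility(node: Node, interest: str, w_coloc: float = 0.6, w_move: float = 0.4) -> float:
    """Score a node as a carrier for messages matching a given interest."""
    return (w_coloc * node.predicted_colocation.get(interest, 0.0)
            + w_move * node.connectivity_change)

def next_carrier(current: Node, neighbours: list, interest: str) -> Node:
    """Hand the message over only to a neighbour with strictly higher utility."""
    best = max(neighbours, key=lambda n: utility(n, interest), default=current)
    return best if utility(best, interest) > utility(current, interest) else current

a = Node("a", {"news": 0.2}, 0.1)
b = Node("b", {"news": 0.9}, 0.5)
print(next_carrier(a, [b], "news").name)   # -> b
```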

513 citations


Journal ArticleDOI
TL;DR: The elevation channel of the first LIDAR return was very effective for the separation of species with similar spectral signatures but different mean heights, and the SVM classifier proved to be very robust and accurate in the exploitation of the considered multisource data.
Abstract: In this paper, we propose an analysis on the joint effect of hyperspectral and light detection and ranging (LIDAR) data for the classification of complex forest areas. In greater detail, we present: 1) an advanced system for the joint use of hyperspectral and LIDAR data in complex classification problems; 2) an investigation on the effectiveness of the very promising support vector machines (SVMs) and Gaussian maximum likelihood with leave-one-out-covariance algorithm classifiers for the analysis of complex forest scenarios characterized from a high number of species in a multisource framework; and 3) an analysis on the effectiveness of different LIDAR returns and channels (elevation and intensity) for increasing the classification accuracy obtained with hyperspectral images, particularly in relation to the discrimination of very similar classes. Several experiments carried out on a complex forest area in Italy provide interesting conclusions on the effectiveness and potentialities of the joint use of hyperspectral and LIDAR data and on the accuracy of the different classification techniques analyzed in the proposed system. In particular, the elevation channel of the first LIDAR return was very effective for the separation of species with similar spectral signatures but different mean heights, and the SVM classifier proved to be very robust and accurate in the exploitation of the considered multisource data.
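
A minimal sketch of the multisource idea on synthetic data (not the paper's system): stack the per-pixel hyperspectral bands with the LIDAR elevation and intensity channels into one feature vector and train a single SVM on the joint feature space.

```python
# Synthetic stand-in data; array shapes mimic per-pixel multisource features.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_pixels, n_bands, n_species = 500, 64, 5
hyper = rng.normal(size=(n_pixels, n_bands))      # hyperspectral bands
elev = rng.normal(size=(n_pixels, 1))             # first-return LIDAR elevation
intens = rng.normal(size=(n_pixels, 1))           # LIDAR intensity
labels = rng.integers(0, n_species, size=n_pixels)

X = np.hstack([hyper, elev, intens])              # joint feature space
clf = SVC(kernel="rbf", gamma="scale").fit(X, labels)
print(f"training accuracy: {clf.score(X, labels):.2f}")
```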

506 citations


Journal ArticleDOI
01 Sep 2008
TL;DR: A thorough experimental study shows the superiority of the generalization capability of the support vector machine (SVM) approach in the automatic classification of electrocardiogram (ECG) beats, and the results suggest that further substantial improvements in terms of classification accuracy can be achieved by the proposed PSO-SVM classification system.
Abstract: The aim of this paper is twofold. First, we present a thorough experimental study to show the superiority of the generalization capability of the support vector machine (SVM) approach in the automatic classification of electrocardiogram (ECG) beats. Second, we propose a novel classification system based on particle swarm optimization (PSO) to improve the generalization performance of the SVM classifier. For this purpose, we have optimized the SVM classifier design by searching for the best value of the parameters that tune its discriminant function, and upstream by looking for the best subset of features that feed the classifier. The experiments were conducted on the basis of ECG data from the Massachusetts Institute of Technology-Beth Israel Hospital (MIT-BIH) arrhythmia database to classify five kinds of abnormal waveforms and normal beats. In particular, they were organized so as to test the sensitivity of the SVM classifier and that of two reference classifiers used for comparison, i.e., the k-nearest neighbor (kNN) classifier and the radial basis function (RBF) neural network classifier, with respect to the curse of dimensionality and the number of available training beats. The obtained results clearly confirm the superiority of the SVM approach as compared to traditional classifiers, and suggest that further substantial improvements in terms of classification accuracy can be achieved by the proposed PSO-SVM classification system. On average, over three experiments making use of a different total number of training beats (250, 500, and 750, respectively), the PSO-SVM yielded an overall accuracy of 89.72% on 40438 test beats selected from 20 patient records against 85.98%, 83.70%, and 82.34% for the SVM, the kNN, and the RBF classifiers, respectively.
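
A compact sketch of the PSO-for-model-selection idea on toy data (the paper also searches the feature subset, omitted here): particles fly through (log10 C, log10 gamma) space and fitness is cross-validated accuracy.

```python
# Toy PSO tuning of SVM hyperparameters; bounds and coefficients are
# illustrative choices, not the paper's settings.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=300, n_features=20, n_classes=3,
                           n_informative=6, random_state=0)

def fitness(p):                      # p = (log10 C, log10 gamma)
    clf = SVC(C=10 ** p[0], gamma=10 ** p[1])
    return cross_val_score(clf, X, y, cv=3).mean()

n_particles, n_iters = 10, 15
lo, hi = np.array([-1.0, -4.0]), np.array([3.0, 0.0])
pos = rng.uniform(lo, hi, size=(n_particles, 2))
vel = np.zeros_like(pos)
pbest, pbest_fit = pos.copy(), np.array([fitness(p) for p in pos])
gbest = pbest[pbest_fit.argmax()]

for _ in range(n_iters):
    r1, r2 = rng.random((2, n_particles, 1))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, lo, hi)           # keep particles in bounds
    fit = np.array([fitness(p) for p in pos])
    improved = fit > pbest_fit
    pbest[improved], pbest_fit[improved] = pos[improved], fit[improved]
    gbest = pbest[pbest_fit.argmax()]

print(f"best (C, gamma) = ({10 ** gbest[0]:.3g}, {10 ** gbest[1]:.3g}), "
      f"CV accuracy = {pbest_fit.max():.3f}")
```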

480 citations


Journal ArticleDOI
TL;DR: Current tools, frameworks, and trends that aim to facilitate mashup development are overviewed and a set of characteristic dimensions are used to highlight the strengths and weaknesses of some representative approaches.
Abstract: Web mashups are Web applications developed using contents and services available online. Despite rapidly increasing interest in mashups, comprehensive development tools and frameworks are lacking, and in most cases mashing up a new application implies a significant manual programming effort. This article overviews current tools, frameworks, and trends that aim to facilitate mashup development. The authors use a set of characteristic dimensions to highlight the strengths and weaknesses of some representative approaches.

480 citations


Journal ArticleDOI
TL;DR: An overview of the state of the art of machine learning applications for spam filtering, and of the ways of evaluation and comparison of different filtering methods.
Abstract: Email spam is one of the major problems of today's Internet, bringing financial damage to companies and annoying individual users. Among the approaches developed to stop spam, filtering is an important and popular one. In this paper we give an overview of the state of the art of machine learning applications for spam filtering, and of the ways of evaluation and comparison of different filtering methods. We also provide a brief description of other branches of anti-spam protection and discuss the use of various approaches in commercial and non-commercial anti-spam software solutions.
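
One classic approach from this family of filters is a naive Bayes text classifier over bag-of-words features; a minimal sketch on a toy corpus (illustrative data, not from the survey):

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

train_mail = ["cheap pills buy now", "meeting agenda attached",
              "win money now click", "lunch tomorrow at noon"]
train_label = [1, 0, 1, 0]          # 1 = spam, 0 = ham

vec = CountVectorizer()
clf = MultinomialNB().fit(vec.fit_transform(train_mail), train_label)
test = ["buy cheap pills", "agenda for the meeting"]
print(clf.predict(vec.transform(test)))   # -> [1 0]
```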

Journal ArticleDOI
TL;DR: Based on the results of this study, chemical oxidation proved to be an effective remediation technology, readily applicable to the ex situ remediation of the sediments of concern; the results also indicated that the optimal oxidant dose must be carefully determined under site-specific conditions.

Journal ArticleDOI
TL;DR: In this paper, the authors show that the influence of the boundary propagates into the bulk over increasing length scales on cooling, and that, with the increase of this static correlation length, the influence of the boundary decays non-exponentially.
Abstract: That the dynamical properties of a glass-forming liquid at high temperature are different from behaviour in the supercooled state has already been established. Numerical simulations now suggest that the static length scale over which spatial correlations exist also changes on approaching the glass transition. Supercooled liquids exhibit a pronounced slowdown of their dynamics on cooling1 without showing any obvious structural or thermodynamic changes2. Several theories relate this slowdown to increasing spatial correlations3,4,5,6. However, no sign of this is seen in standard static correlation functions, despite indirect evidence from considering specific heat7 and linear dielectric susceptibility8. Whereas the dynamic correlation function progressively becomes more non-exponential as the temperature is reduced, so far no similar signature has been found in static correlations that can distinguish qualitatively between a high-temperature and a deeply supercooled glass-forming liquid in equilibrium. Here, we show evidence of a qualitative thermodynamic signature that differentiates between the two. We show by numerical simulations with fixed boundary conditions that the influence of the boundary propagates into the bulk over increasing length scales on cooling. With the increase of this static correlation length, the influence of the boundary decays non-exponentially. Such long-range susceptibility to boundary conditions is expected within the random first-order theory4,9,10 (RFOT) of the glass transition. However, a quantitative account of our numerical results requires a generalization of RFOT, taking into account surface tension fluctuations between states.

Journal ArticleDOI
TL;DR: The results suggest that bilinguals do not differ from monolinguals in terms of active inhibition but have acquired a better ability to maintain action goals and to use them to bias goal-related information.
Abstract: It has been claimed that bilingualism enhances inhibitory control, but the available evidence is equivocal. The authors evaluated several possible versions of the inhibition hypothesis by comparing monolinguals and bilinguals with regard to stop signal performance, inhibition of return, and the attentional blink. These three phenomena, it can be argued, tap into different aspects of inhibition. Monolinguals and bilinguals did not differ in stop signal reaction time and thus were comparable in terms of active-inhibitory efficiency. However, bilinguals showed no facilitation from spatial cues, showed a strong inhibition of return effect, and exhibited a more pronounced attentional blink. These results suggest that bilinguals do not differ from monolinguals in terms of active inhibition but have acquired a better ability to maintain action goals and to use them to bias goal-related information. Under some circumstances, this ability may indirectly lead to more pronounced reactive inhibition of irrelevant information.

Journal ArticleDOI
TL;DR: A new class of finite volume schemes of arbitrary accuracy in space and time for systems of hyperbolic balance laws with stiff source terms is proposed, based on a three-stage procedure, together with a new strategy that, compared to the previously mentioned schemes, replaces only the Cauchy-Kovalewski procedure.

Journal ArticleDOI
TL;DR: Five principles of 'trans-saccadic perception' are outlined that could help to explain how it is possible - despite discrete sensory input and limited memory - that conscious perception across saccades seems smooth and predictable.

Journal ArticleDOI
TL;DR: In this paper, the authors synthesize nanocrystalline TiO2 powders by thermohydrolysis of TiCl4 in HCl or NaCl aqueous solutions.

Journal ArticleDOI
TL;DR: The phase sensitivity $\Delta\theta$ of a Mach-Zehnder interferometer illuminated by a coherent state in one input port and a squeezed-vacuum state in the other port is independent of the true value of the phase shift and can reach the Heisenberg limit $\Delta\theta \sim 1/N_T$, where $N_T$ is the average number of input particles.
Abstract: We show that the phase sensitivity $\Delta\theta$ of a Mach-Zehnder interferometer illuminated by a coherent state in one input port and a squeezed-vacuum state in the other port is (i) independent of the true value of the phase shift and (ii) can reach the Heisenberg limit $\Delta\theta \sim 1/N_T$, where $N_T$ is the average number of input particles. We also demonstrate that the Cramér-Rao lower bound of phase sensitivity, $\Delta\theta \sim 1/\sqrt{|\alpha|^2 e^{2r} + \sinh^2 r}$, can be saturated for arbitrary values of the squeezing parameter $r$ and the amplitude of the coherent mode $\alpha$ by using a Bayesian phase inference protocol.
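
The quoted bound is easy to evaluate numerically; the sketch below (with illustrative values of $\alpha$ and $r$) compares it with the shot-noise and Heisenberg scalings for the same total particle number.

```python
# Evaluate dtheta = 1 / sqrt(|alpha|^2 * e^{2r} + sinh^2 r) and compare it
# with the shot-noise (1/sqrt(N)) and Heisenberg (1/N) limits.
import numpy as np

alpha, r = 3.0, 1.0
n_coh = abs(alpha) ** 2                 # mean photons in the coherent mode
n_sq = np.sinh(r) ** 2                  # mean photons in the squeezed vacuum
n_total = n_coh + n_sq

dtheta = 1 / np.sqrt(n_coh * np.exp(2 * r) + n_sq)
print(f"N_T = {n_total:.2f}")
print(f"bound: {dtheta:.4f}, shot noise: {1 / np.sqrt(n_total):.4f}, "
      f"Heisenberg: {1 / n_total:.4f}")
```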

Book
06 Nov 2008
TL;DR: Reactive Search and Intelligent Optimization is an excellent introduction to the main principles of reactive search, as well as an attempt to develop some fresh intuition for the approaches.
Abstract: Reactive Search integrates sub-symbolic machine learning techniques into search heuristics for solving complex optimization problems. By automatically adjusting the working parameters, a reactive search self-tunes and adapts, effectively learning by doing until a solution is found. Intelligent Optimization, a superset of Reactive Search, concerns online and off-line schemes based on the use of memory, adaptation, incremental development of models, experimental algorithms applied to optimization, intelligent tuning and design of heuristics. Reactive Search and Intelligent Optimization is an excellent introduction to the main principles of reactive search, as well as an attempt to develop some fresh intuition for the approaches. The book looks at different optimization possibilities with an emphasis on opportunities for learning and self-tuning strategies. While focusing more on methods than on problems, problems are introduced wherever they help make the discussion more concrete, or when a specific problem has been widely studied by reactive search and intelligent optimization heuristics. Individual chapters cover reacting on the neighborhood; reacting on the annealing schedule; reactive prohibitions; model-based search; reacting on the objective function; relationships between reactive search and reinforcement learning; and much more. Each chapter is structured to show basic issues and algorithms; the parameters critical for the success of the different methods discussed; and opportunities and schemes for the automated tuning of these parameters. Anyone working in decision making in business, engineering, economics or science will find a wealth of information here.
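
To convey the flavor of "reactive prohibitions", here is a toy tabu search over bit strings whose tabu tenure grows when previously visited solutions recur and shrinks otherwise. The reaction rule and parameters are illustrative, not the book's exact algorithms.

```python
import random

def reactive_tabu_search(objective, n_bits=20, iters=500, seed=0):
    rnd = random.Random(seed)
    x = [rnd.randint(0, 1) for _ in range(n_bits)]
    best, best_val = x[:], objective(x)
    tenure, tabu, seen = 3, {}, set()
    for t in range(iters):
        key = tuple(x)
        if key in seen:
            tenure = min(n_bits // 2, tenure + 1)  # repetition -> more prohibition
        else:
            seen.add(key)
            tenure = max(1, tenure - 1)            # new ground -> relax prohibition
        # best non-tabu single-bit flip
        moves = [i for i in range(n_bits) if tabu.get(i, -1) < t]
        i = max(moves, key=lambda j: objective(x[:j] + [1 - x[j]] + x[j + 1:]))
        x[i] = 1 - x[i]
        tabu[i] = t + tenure                       # prohibit reversing this move
        val = objective(x)
        if val > best_val:
            best, best_val = x[:], val
    return best, best_val

# maximize the number of ones (trivial objective, just to exercise the search)
print(reactive_tabu_search(sum)[1])
```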

Journal ArticleDOI
TL;DR: In this article, numerical evidence of the emission of Bogoliubov phonons from a sonic horizon in a flowing one-dimensional atomic Bose-Einstein condensate was reported.
Abstract: We report numerical evidence of Hawking emission of Bogoliubov phonons from a sonic horizon in a flowing one-dimensional atomic Bose–Einstein condensate. The presence of Hawking radiation is revealed from peculiar long-range patterns in the density–density correlation function of the gas. Quantitative agreement between our fully microscopic calculations and the prediction of analog models is obtained in the hydrodynamic limit. New features are predicted and the robustness of the Hawking signal against a finite temperature discussed.

Journal ArticleDOI
TL;DR: This paper proposes the AEGIS experiment at CERN/AD, whose primary goal is the direct measurement of the Earth's gravitational acceleration on antihydrogen with a classical moiré deflectometer.
Abstract: The principle of the equivalence of gravitational and inertial mass is one of the cornerstones of general relativity. Considerable efforts have been made and are still being made to verify its validity. A quantum-mechanical formulation of gravity allows for non-Newtonian contributions to the force which might lead to a difference in the gravitational force on matter and antimatter. While it is widely expected that the gravitational interaction of matter and of antimatter should be identical, this assertion has never been tested experimentally. With the production of large amounts of cold antihydrogen at the CERN Antiproton Decelerator, such a test with neutral antimatter atoms has now become feasible. For this purpose, we have proposed to set up the AEGIS experiment at CERN/AD, whose primary goal will be the direct measurement of the Earth's gravitational acceleration on antihydrogen with a classical moiré deflectometer.

Journal ArticleDOI
TL;DR: A new variant of the k-nearest neighbor (kNN) classifier based on the maximal margin principle is presented, characterized by resulting global decision boundaries of the piecewise linear type.
Abstract: In this paper, we present a new variant of the k-nearest neighbor (kNN) classifier based on the maximal margin principle. The proposed method relies on classifying a given unlabeled sample by first finding its k-nearest training samples. A local partition of the input feature space is then carried out by means of local support vector machine (SVM) decision boundaries determined after training a multiclass SVM classifier on the considered k training samples. The labeling of the unknown sample is done by looking at the local decision region to which it belongs. The method is characterized by resulting global decision boundaries of the piecewise linear type. However, the entire process can be kernelized through the determination of the k-nearest training samples in the transformed feature space by using a distance function simply reformulated on the basis of the adopted kernel. To illustrate the performance of the proposed method, an experimental analysis on three different remote sensing datasets is reported and discussed.
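
A minimal sketch of the decision rule described above (on a standard toy dataset, not the paper's remote sensing data): classify each test sample with a local SVM trained only on its k nearest training samples; the linear kernel yields the piecewise-linear global boundaries mentioned in the abstract.

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neighbors import NearestNeighbors
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=30, random_state=0)
knn = NearestNeighbors(n_neighbors=15).fit(Xtr)

_, idx = knn.kneighbors(Xte)
preds = []
for sample, neigh in zip(Xte, idx):
    if len(set(ytr[neigh])) == 1:      # all neighbours agree: no local SVM needed
        preds.append(ytr[neigh][0])
    else:                              # train a local multiclass SVM on the k samples
        local = SVC(kernel="linear").fit(Xtr[neigh], ytr[neigh])
        preds.append(local.predict(sample.reshape(1, -1))[0])

print(f"accuracy: {np.mean(np.asarray(preds) == yte):.2f}")
```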

Journal ArticleDOI
TL;DR: This paper presents a novel approach to unsupervised change detection in multispectral remote-sensing images by using a selective Bayesian thresholding for deriving a pseudotraining set that is necessary for initializing an adequately defined binary semisupervised support vector machine classifier.
Abstract: This paper presents a novel approach to unsupervised change detection in multispectral remote-sensing images. The proposed approach aims at extracting the change information by jointly analyzing the spectral channels of multitemporal images in the original feature space without any training data. This is accomplished by using a selective Bayesian thresholding for deriving a pseudotraining set that is necessary for initializing an adequately defined binary semisupervised support vector machine classifier. Starting from these initial seeds, the semisupervised SVM performs change detection in the original multitemporal feature space by gradually considering unlabeled patterns in the definition of the decision boundary between changed and unchanged pixels according to a semisupervised learning algorithm. This algorithm models the full complexity of the change-detection problem, which is only partially represented from the seed pixels included in the pseudotraining set. The values of the classifier parameters are then defined according to a novel unsupervised model-selection technique based on a similarity measure between change-detection maps obtained with different settings. Experimental results obtained on different multispectral remote-sensing images confirm the effectiveness of the proposed approach.
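
A sketch of the pipeline's shape on synthetic data: confident changed/unchanged seeds are thresholded from the difference image, then a semisupervised classifier grows the decision boundary into the unlabeled pixels. Here scikit-learn's SelfTrainingClassifier stands in for the paper's semisupervised SVM, and simple percentile thresholds stand in for the selective Bayesian thresholding.

```python
import numpy as np
from sklearn.semi_supervised import SelfTrainingClassifier
from sklearn.svm import SVC

rng = np.random.default_rng(0)
t1 = rng.normal(0.4, 0.1, size=(50, 50, 4))     # multispectral image, time 1
t2 = t1.copy()
t2[15:30, 15:30] += 0.5                          # simulated change
X = np.abs(t2 - t1).reshape(-1, 4)               # per-pixel spectral differences
mag = X.sum(axis=1)

y = np.full(len(X), -1)                          # -1 = unlabeled pixel
y[mag < np.percentile(mag, 50)] = 0              # confident "unchanged" seeds
y[mag > np.percentile(mag, 95)] = 1              # confident "changed" seeds

clf = SelfTrainingClassifier(SVC(probability=True, gamma="scale"))
change_map = clf.fit(X, y).predict(X).reshape(50, 50)
print(f"changed pixels detected: {int(change_map.sum())}")
```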

Journal ArticleDOI
TL;DR: Spoken language understanding and natural language understanding share the goal of obtaining a conceptual representation of natural language sentences and computational semantics performs a conceptualization of the world using computational processes for composing a meaning representation structure from available signs.
Abstract: Semantics deals with the organization of meanings and the relations between sensory signs or symbols and what they denote or mean. Computational semantics performs a conceptualization of the world using computational processes for composing a meaning representation structure from available signs and their features present, for example, in words and sentences. Spoken language understanding (SLU) is the interpretation of signs conveyed by a speech signal. SLU and natural language understanding (NLU) share the goal of obtaining a conceptual representation of natural language sentences. Specific to SLU is the fact that signs to be used for interpretation are coded into signals along with other information such as speaker identity. Furthermore, spoken sentences often do not follow the grammar of a language; they exhibit self-corrections, hesitations, repetitions, and other irregular phenomena. SLU systems contain an automatic speech recognition (ASR) component and must be robust to noise due to the spontaneous nature of spoken language and the errors introduced by ASR. Moreover, ASR components output a stream of words with no structure information like punctuation and sentence boundaries. Therefore, SLU systems cannot rely on such markers and must perform text segmentation and understanding at the same time.

Journal ArticleDOI
TL;DR: This paper applies multicriteria decision analysis techniques in a spatial context to support zoning of the Paneveggio-Pale di S. Martino Natural Park (Italy) by suggesting to the park's management and other stakeholders an approach that is scientifically sound and practical.

Journal ArticleDOI
TL;DR: An equilibrium model (gas-solid), based on the minimization of the Gibbs energy, has been used in order to estimate the theoretical yield and the equilibrium composition of the reaction products of biomass thermochemical conversion processes (pyrolysis and gasification).
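
For an ideal-gas mixture, the underlying optimization is typically written as below: minimize the total Gibbs energy over the species mole numbers subject to conservation of each chemical element. This is the standard textbook formulation, not necessarily the paper's exact model.

```latex
% n_i: moles of species i; A: element-abundance matrix; b: element totals;
% P: pressure; P°: reference pressure.
\min_{n_i \ge 0}\;\frac{G}{RT}
  = \sum_i n_i\left[\frac{\Delta G^{\circ}_{f,i}}{RT}
  + \ln\!\left(\frac{n_i}{\sum_j n_j}\,\frac{P}{P^{\circ}}\right)\right]
\quad\text{subject to}\quad \mathbf{A}\,\mathbf{n} = \mathbf{b}
```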

Journal ArticleDOI
01 Apr 2008
TL;DR: This paper proposes GRAnD, a goal-oriented approach to requirement analysis for data warehouses based on the Tropos methodology, which can be employed within both a demand-driven and a mixed supply/demand-driven design framework.
Abstract: Several surveys indicate that a significant percentage of data warehouses fail to meet business objectives or are outright failures. One of the reasons for this is that requirement analysis is typically overlooked in real projects. In this paper we propose GRAnD, a goal-oriented approach to requirement analysis for data warehouses based on the Tropos methodology. Two different perspectives are integrated for requirement analysis: organizational modeling, centered on stakeholders, and decisional modeling, focused on decision makers. Our approach can be employed within both a demand-driven and a mixed supply/demand-driven design framework.

Journal ArticleDOI
TL;DR: The capability-based view of the firm is based on the assumption that firms know how to do things as discussed by the authors, and it is used to deal with issues like horizontal and vertical boundaries of a firm, innovation and corporate performance.
Abstract: The capability-based view of the firm is based on the assumption that firms know how to do things. Assuming the existence of a thing called 'organizational knowledge', in the first part of the paper we identify its main building blocks and we provide a description of its inner structure. This results in an analysis of the relationships among key concepts like organizational routines, organizational competencies and skills. In the second part, we consider some empirical implications of the adoption of a capability-based view of the firm in dealing with issues like horizontal and vertical boundaries of the firm, innovation and corporate performance. Some implications for strategic management are also discussed.

Journal ArticleDOI
TL;DR: Numerical results show that the proposed method can deal with the GTSP problems fairly well, and the developed mutation process and local search technique are effective.
Abstract: Focused on a variation of the Euclidean traveling salesman problem (TSP), namely, the generalized traveling salesman problem (GTSP), this paper extends the ant colony optimization method from the TSP to this field. By considering the group influence, the method is further improved. To avoid locking into local minima, a mutation process and a local search technique are also introduced into this method. Numerical results show that the proposed method can deal with GTSP problems fairly well, and the developed mutation process and local search technique are effective.
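
The core ant-colony loop the paper builds on, sketched here for the plain TSP (the GTSP group handling, group influence, mutation, and local search are omitted): ants construct tours with probabilities proportional to pheromone^alpha times heuristic^beta, then pheromone evaporates and good tours deposit more.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 12
pts = rng.random((n, 2))
dist = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1) + np.eye(n)
tau = np.ones((n, n))                       # pheromone trails
eta = 1.0 / dist                            # heuristic desirability
alpha, beta, rho, n_ants = 1.0, 2.0, 0.1, 20

def tour_length(t):
    return sum(dist[t[i], t[(i + 1) % n]] for i in range(n))

best_len = np.inf
for _ in range(100):
    tours = []
    for _ant in range(n_ants):
        tour, unvisited = [0], set(range(1, n))
        while unvisited:
            cand = list(unvisited)
            w = (tau[tour[-1], cand] ** alpha) * (eta[tour[-1], cand] ** beta)
            nxt = int(rng.choice(cand, p=w / w.sum()))
            tour.append(nxt); unvisited.remove(nxt)
        tours.append(tour)
    tau *= 1 - rho                          # evaporation
    for t in tours:
        L = tour_length(t)
        best_len = min(best_len, L)
        for i in range(n):                  # deposit proportional to tour quality
            u, v = t[i], t[(i + 1) % n]
            tau[u, v] += 1.0 / L
            tau[v, u] += 1.0 / L

print(f"best tour length: {best_len:.3f}")
```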

Book ChapterDOI
04 Dec 2008
TL;DR: The mathematical foundations and the design methodology of the contract-based model developed in the framework of the SPEEDS project, a design methodology in which distributed designers develop different aspects of the overall system, in a concurrent but controlled way, are presented.
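
A toy sketch of the central algebraic move in assume/guarantee contract frameworks of this kind: a contract pairs a set of accepted environment behaviors with a set of guaranteed behaviors, and composition conjoins the guarantees while weakening each assumption by the partner's guarantees. The finite behavior universe and the simplified saturated-contract rule below are illustrative; the SPEEDS model is considerably richer.

```python
from dataclasses import dataclass

UNIVERSE = frozenset(range(8))              # all possible behaviors, abstractly

@dataclass(frozen=True)
class Contract:
    assumptions: frozenset                  # environments the component accepts
    guarantees: frozenset                   # behaviors it promises under them

def compose(c1: Contract, c2: Contract) -> Contract:
    g = c1.guarantees & c2.guarantees       # both promises must hold
    # each component's assumptions may be discharged by the other's guarantees
    a = (c1.assumptions & c2.assumptions) | (UNIVERSE - g)
    return Contract(frozenset(a), frozenset(g))

c1 = Contract(frozenset({0, 1, 2, 3}), frozenset({0, 1, 4, 5}))
c2 = Contract(frozenset({0, 2, 4, 6}), frozenset({0, 1, 2, 6}))
print(compose(c1, c2))
```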
Abstract: We present the mathematical foundations and the design methodology of the contract-based model developed in the framework of the SPEEDS project. SPEEDS aims at developing methods and tools to support "speculative design", a design methodology in which distributed designers develop different aspects of the overall system, in a concurrent but controlled way. Our generic mathematical model of contract supports this style of development. This is achieved by focusing on behaviors, by supporting the notion of "rich component" where diverse (functional and non-functional) aspects of the system can be considered and combined, by representing rich components via their set of associated contracts, and by formalizing the whole process of component composition.