
Showing papers by "Polytechnic University of Milan" published in 2006


Journal ArticleDOI
30 Nov 2006-Nature
TL;DR: It is reported that intra-arterial delivery of wild-type canine mesoangioblasts (vessel-associated stem cells) results in an extensive recovery of dystrophin expression, normal muscle morphology and function, and a remarkable clinical amelioration and preservation of active motility.
Abstract: Duchenne muscular dystrophy remains an untreatable genetic disease that severely limits motility and life expectancy in affected children. The only animal model specifically reproducing the alterations in the dystrophin gene and the full spectrum of human pathology is the golden retriever dog model. Affected animals present a single mutation in intron 6, resulting in complete absence of the dystrophin protein, and early and severe muscle degeneration with nearly complete loss of motility and walking ability. Death usually occurs at about 1 year of age as a result of failure of respiratory muscles. Here we report that intra-arterial delivery of wild-type canine mesoangioblasts (vessel-associated stem cells) results in an extensive recovery of dystrophin expression, normal muscle morphology and function (confirmed by measurement of contraction force on single fibres). The outcome is a remarkable clinical amelioration and preservation of active motility. These data qualify mesoangioblasts as candidates for future stem cell therapy for Duchenne patients.

761 citations


Journal ArticleDOI
TL;DR: This paper introduces multiple centrality assessment (MCA), a methodology for geographic network analysis, which is defined and implemented on four 1-square-mile urban street systems and shows that, in the MCA primal approach, some centrality indices nicely capture the ‘skeleton’ of the urban structure that impacts so much on spatial cognition and collective behaviours.
Abstract: The network metaphor in the analysis of urban and territorial cases has a long tradition, especially in transportation or land-use planning and economic geography. More recently, urban design has brought its contribution by means of the ‘space syntax’ methodology. All these approaches, though under different terms like ‘accessibility’, ‘proximity’, ‘integration’, ‘connectivity’, ‘cost’, or ‘effort’, focus on the idea that some places (or streets) are more important than others because they are more central. The study of centrality in complex systems, however, originated in other scientific areas, namely in structural sociology, well before its use in urban studies; moreover, as a structural property of the system, centrality has never been extensively investigated metrically in geographic networks as it has been topologically in a wide range of other relational networks such as social, biological, or technological ones. After a previous work on some structural properties of the primal graph representation of...
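As a rough illustration of the kind of metric (distance-weighted) centrality analysis performed on primal street graphs, the sketch below computes closeness and betweenness centrality on a toy street network whose edges carry a metric length. This is not the authors' MCA implementation; the graph, the lengths, and the choice of the two indices are invented for the example.

```python
# Hypothetical toy example of metric (distance-weighted) centrality
# on a primal street graph: nodes are intersections, edges are streets.
import networkx as nx

G = nx.Graph()
# (node_a, node_b, street length in metres) -- invented values
streets = [("A", "B", 120), ("B", "C", 80), ("B", "D", 200),
           ("C", "D", 60), ("D", "E", 150), ("A", "E", 300)]
for u, v, length in streets:
    G.add_edge(u, v, length=length)

# Closeness: inverse of the average metric distance to all other nodes.
closeness = nx.closeness_centrality(G, distance="length")
# Betweenness: fraction of shortest (metric) paths passing through a node.
betweenness = nx.betweenness_centrality(G, weight="length")

for node in G:
    print(f"{node}: closeness={closeness[node]:.3f}, "
          f"betweenness={betweenness[node]:.3f}")
```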

679 citations


Journal ArticleDOI
TL;DR: A novel (according to the authors' knowledge) type of scanning synthetic aperture radar (ScanSAR) that solves the problems of scalloping and azimuth-varying ambiguities is introduced, with the name terrain observation with progressive scan (TOPS).
Abstract: In this paper, a novel (according to the authors' knowledge) type of scanning synthetic aperture radar (ScanSAR) that solves the problems of scalloping and azimuth-varying ambiguities is introduced. The technique employs a very simple counter-rotation of the radar beam in the direction opposite to that of SPOT: hence the name terrain observation with progressive scan (TOPS). After a short summary of the characteristics of the ScanSAR technique and its problems, the TOPS technique, its design, its limits, and a focusing technique are introduced. A synthetic example based on a possible future system follows.

668 citations


Journal ArticleDOI
TL;DR: The main characteristics of a good quality process are discussed, the key testing phases are surveyed and modern functional and model-based testing approaches are presented.

658 citations


Journal ArticleDOI
TL;DR: The results indicate that a spatial analysis based on a set of four centrality indices allows an extended visualization and characterization of the city structure and has a certain capacity to distinguish different classes of cities.
Abstract: We study centrality in urban street patterns of different world cities represented as networks in geographical space. The results indicate that a spatial analysis based on a set of four centrality indices allows an extended visualization and characterization of the city structure. A hierarchical clustering analysis based on the distributions of centrality has a certain capacity to distinguish different classes of cities. In particular, self-organized cities exhibit scale-free properties similar to those found in nonspatial networks, while planned cities do not.

599 citations


Journal ArticleDOI
TL;DR: In this article, a taxonomy of research-based spin-off (RBSO) typologies has been developed to understand the heterogeneity of RBSOs and to identify common themes across these typologies in relation to spin-off creation and development.

473 citations


Journal ArticleDOI
TL;DR: In this paper, the authors derive an empirical model that aims to highlight the inducements and obstacles that new technology-based firms face in alliance formation, according to firm-specific characteristics and the nature of the alliance.

370 citations


Journal ArticleDOI
01 Jun 2006-Nature
TL;DR: A new subsidence map for New Orleans is presented, generated from space-based synthetic-aperture radar measurements, which reveals that parts of New Orleans underwent rapid subsidence in the three years before Hurricane Katrina struck in August 2005.
Abstract: A subsidence map of the city offers insight into the failure of the levees during Hurricane Katrina.

357 citations


Journal ArticleDOI
TL;DR: This work studies the basic properties of twenty 1-square-mile samples of street patterns of different world cities and finds that cities of the same class, e.g., grid-iron or medieval, exhibit roughly similar properties.
Abstract: Recent theoretical and empirical studies have focused on the structural properties of complex relational networks in social, biological, and technological systems. Here we study the basic properties of twenty 1-square-mile samples of street patterns of different world cities. Samples are turned into spatial valued graphs. In such graphs, the nodes are embedded in the two-dimensional plane and represent street intersections, the edges represent streets, and the edge values are equal to the street lengths. We evaluate the local properties of the graphs by measuring the meshedness coefficient and counting short cycles (of three, four, and five edges), and the global properties by measuring global efficiency and cost. We also consider, as extreme cases, minimal spanning trees (MST) and greedy triangulations (GT) induced by the same spatial distribution of nodes. The measures found in the real and the artificial networks are then compared. Surprisingly, cities of the same class, e.g., grid-iron or medieval, exhibit roughly similar properties. The correlation between a priori known classes and statistical properties is illustrated in a plot of relative efficiency vs cost.
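For concreteness, the meshedness coefficient used for planar graphs is commonly defined as M = (e - n + 1)/(2n - 5): the number of independent cycles over the maximum possible in a planar graph with n nodes. The sketch below evaluates this textbook definition on invented node/edge counts; it is an illustration, not the paper's code.

```python
# Meshedness coefficient of a planar street graph: the number of
# independent cycles (e - n + 1) over the maximum possible in a
# planar graph with n nodes (2n - 5). Toy numbers, for illustration.
def meshedness(n_nodes: int, n_edges: int) -> float:
    if n_nodes < 3:
        raise ValueError("meshedness needs at least 3 nodes")
    return (n_edges - n_nodes + 1) / (2 * n_nodes - 5)

# A pure tree has no cycles, so meshedness is 0.
print(meshedness(100, 99))             # 0.0
# A heavily gridded sample sits closer to the planar maximum.
print(round(meshedness(100, 180), 3))  # ~0.415
```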

347 citations


Journal ArticleDOI
TL;DR: In this article, the authors analyze the strategies of software firms that have entered the open source (OS) field and examine the determinants of the degree of openness toward OS and discuss the stability of hybrid models in the evolution of the industry.
Abstract: The paper analyzes the strategies of software firms that have entered the open source (OS) field. The notion of the OS business model is discussed in the light of a substantial body of theoretical literature concerning strategic management and the economics of innovation, as well as specialized literature on OS. Empirical evidence based on a survey of 146 Italian software firms shows that firms have adapted to an environment dominated by incumbent standards by combining the offering of proprietary and OS software under different licensing schemes, thus choosing a hybrid business model. The paper examines the determinants of the degree of openness toward OS and discusses the stability of hybrid models in the evolution of the industry.

346 citations


Journal ArticleDOI
TL;DR: In this article, two strategies for stabilization of discrete time linear switched systems were proposed, one of open loop nature (trajectory independent) and the other of closed loop nature based on the solution of what we call Lyapunov-Metzler inequalities.
Abstract: This paper addresses two strategies for stabilization of discrete time linear switched systems. The first one is of open loop nature (trajectory independent) and is based on the determination of an upper bound of the minimum dwell time by means of a family of quadratic Lyapunov functions. The relevant point on dwell time calculation is that the proposed stability condition does not require the Lyapunov function be uniformly decreasing at every switching time. The second one is of closed loop nature (trajectory dependent) and is designed from the solution of what we call Lyapunov–Metzler inequalities from which the stability condition is expressed. Being non-convex, a more conservative but simpler to solve version of the Lyapunov–Metzler inequalities is provided. The theoretical results are illustrated by means of examples.
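For orientation, the closed-loop condition referred to above is built on coupled matrix inequalities of roughly the following form (notation assumed here, not quoted from the paper):

```latex
% Sketch of the discrete-time Lyapunov-Metzler inequalities
% (assumed notation): find P_i \succ 0 and a Metzler-type matrix
% \Pi = [\pi_{ji}] with \pi_{ji} \ge 0 and \sum_j \pi_{ji} = 1 such that
A_i^{\top}\Big(\sum_{j=1}^{N} \pi_{ji} P_j\Big) A_i - P_i \prec 0,
\qquad i = 1, \dots, N.
% The associated trajectory-dependent (min-type) switching rule is
\sigma(k) = \arg\min_{i}\; x(k)^{\top} P_i\, x(k).
```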

Journal ArticleDOI
TL;DR: In this paper, the authors present a critical appraisal of results related to the problem of finding representative hydraulic conductivities, i.e., a parameter controlling the average behavior of groundwater flow within an aquifer at a given scale.
Abstract: Heterogeneity is the single most salient feature of hydrogeology. An enormous amount of work has been devoted during the last 30 years to addressing this issue. Our objective is to synthesize and to offer a critical appraisal of results related to the problem of finding representative hydraulic conductivities. By representative hydraulic conductivity we mean a parameter controlling the average behavior of groundwater flow within an aquifer at a given scale. Three related concepts are defined: effective hydraulic conductivity, which relates the ensemble averages of flux and head gradient; equivalent conductivity, which relates the spatial averages of flux and head gradient within a given volume of an aquifer; and interpreted conductivity, which is the one derived from interpretation of field data. Most theoretical results are related to effective conductivity, and their application to real world scenarios relies on ergodic assumptions. Fortunately, a number of results are available suggesting that conventional hydraulic test interpretations yield (interpreted) hydraulic conductivity values that can be closely linked to equivalent and/or effective hydraulic conductivities. Complex spatial distributions of geologic hydrofacies and flow conditions have a strong impact upon the existence and the actual values of representative parameters. Therefore it is not surprising that a large body of literature provides particular solutions for simplified boundary conditions and geological settings, which are, nevertheless, useful for many practical applications. Still, frequent observations of scale effects imply that efforts should be directed at characterizing well-connected stochastic random fields and at evaluating the corresponding representative hydraulic conductivities.
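A classical entry point to the "representative conductivity" question is that the equivalent conductivity of a block is bracketed by the harmonic and arithmetic means of the local values, with the geometric mean emerging for 2-D isotropic lognormal fields. The sketch below simply computes these three averages for a synthetic lognormal K field; the field parameters are invented for illustration.

```python
# Harmonic, geometric and arithmetic means of a synthetic lognormal
# hydraulic-conductivity field: classical bounds/estimates for the
# equivalent (block-scale) conductivity. Parameters are illustrative.
import numpy as np

rng = np.random.default_rng(0)
# ln(K) ~ N(mean, variance); K in m/s -- invented values
ln_k = rng.normal(loc=np.log(1e-5), scale=1.0, size=10_000)
k = np.exp(ln_k)

k_harm = 1.0 / np.mean(1.0 / k)      # flow perpendicular to layering
k_geom = np.exp(np.mean(np.log(k)))  # 2-D isotropic lognormal result
k_arit = np.mean(k)                  # flow parallel to layering

print(f"harmonic {k_harm:.2e} <= geometric {k_geom:.2e}"
      f" <= arithmetic {k_arit:.2e}")
```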

Journal ArticleDOI
TL;DR: This work reports the optical analog of dynamic localization (DL) for electrons in periodic potentials subjected to ac electric fields, as originally proposed by Dunlap and Kenkre, and experimentally demonstrates the theoretical condition for DL in a sinusoidal field.
Abstract: We report on a direct experimental observation of dynamic localization (DL) of light in sinusoidally curved lithium-niobate waveguide arrays, which provides the optical analog of DL for electrons in periodic potentials subjected to ac electric fields as originally proposed by Dunlap and Kenkre [Phys. Rev. B 34, 3625 (1986); doi:10.1103/PhysRevB.34.3625]. The theoretical condition for DL in a sinusoidal field is experimentally demonstrated.
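For reference, the Dunlap-Kenkre condition alluded to here is the collapse of the quasi-energy band when the ratio of field amplitude to frequency hits a zero of the zeroth-order Bessel function; in the usual tight-binding notation (symbols assumed here, not taken from the paper):

```latex
% Dunlap-Kenkre dynamic-localization condition for a sinusoidal
% ac field E(t) = E_0 \cos(\omega t) on a lattice of period a
% (tight-binding particle of charge e):
J_0\!\left(\frac{e E_0 a}{\hbar \omega}\right) = 0,
% i.e. eE_0 a/(\hbar\omega) \approx 2.405,\ 5.520,\ \dots (zeros of J_0)
```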

Journal ArticleDOI
TL;DR: The paper reviews the physics underlying PCM operation, the scaling potentials of these devices and some options recently proposed for the cell structure and addresses the main challenges for the PCM to become fully competitive with standard Flash technology.
Abstract: Among the emerging non-volatile technologies, phase change memories (PCM) are the most attractive in terms of both performance and scalability perspectives. The paper reviews the physics underlying PCM operation, the scaling potentials of these devices and some options recently proposed for the cell structure. The paper also addresses the main challenges for the PCM to become fully competitive with standard Flash technology.

Journal ArticleDOI
TL;DR: In this paper, the authors investigated the relationship between two supply chain integration dimensions (the integration of information flows and the integration of physical flows) and two manufacturing improvement programs (lean production and enterprise resource planning) using exploratory factor analysis and hierarchical regression.
Abstract: Purpose – While the attention of most OM scholars has shifted to supply chain management, there is still a need to understand how supply chain strategies are linked with internal manufacturing strategies. The literature shows some studies in this field, but a deep investigation of the linkages between these two areas is still missing. The purpose of this study is to investigate on an empirical basis the relationship between two supply chain integration dimensions – the integration of information flows and the integration of physical flows – and two manufacturing improvement programmes – lean production and enterprise resource planning (ERP) systems.Design/methodology/approach – Evidence is drawn from a sample of 297 European companies from the third edition of the International Manufacturing Strategy Survey. Data are analysed using exploratory factor analysis and hierarchical regression.Findings – Results show that the adoption of the lean production model has a strong influence on the integration of both...
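The "hierarchical regression" named in the methodology is typically an OLS model run in blocks, checking how much R² improves as each block of predictors enters. A minimal sketch with invented variable names (lean, erp, integration) and simulated data follows; it is not the authors' analysis.

```python
# Hierarchical (blockwise) OLS: enter predictor blocks in sequence
# and track the change in R^2. Variable names and data are invented.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 297                      # sample size mirroring the survey
lean = rng.normal(size=n)    # lean-production adoption score
erp = rng.normal(size=n)     # ERP adoption score
integration = 0.5 * lean + 0.2 * erp + rng.normal(scale=0.8, size=n)

# Block 1: lean only; Block 2: add ERP.
x1 = sm.add_constant(np.column_stack([lean]))
x2 = sm.add_constant(np.column_stack([lean, erp]))
m1 = sm.OLS(integration, x1).fit()
m2 = sm.OLS(integration, x2).fit()
print(f"R2 block 1: {m1.rsquared:.3f}")
print(f"R2 block 2: {m2.rsquared:.3f} (delta {m2.rsquared - m1.rsquared:.3f})")
```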

Journal ArticleDOI
TL;DR: The model underlying LIME is illustrated, a formal semantic characterization for the operations it makes available to the application developer is provided, its current design and implementation is presented, and lessons learned are discussed in developing applications that involve physical mobility.
Abstract: LIME (Linda in a mobile environment) is a model and middleware supporting the development of applications that exhibit the physical mobility of hosts, logical mobility of agents, or both. LIME adopts a coordination perspective inspired by work on the Linda model. The context for computation, represented in Linda by a globally accessible persistent tuple space, is refined in LIME to transient sharing of the identically named tuple spaces carried by individual mobile units. Tuple spaces are also extended with a notion of location and programs are given the ability to react to specified states. The resulting model provides a minimalist set of abstractions that facilitates the rapid and dependable development of mobile applications. In this article we illustrate the model underlying LIME, provide a formal semantic characterization for the operations it makes available to the application developer, present its current design and implementation, and discuss lessons learned in developing applications that involve physical mobility.
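To make the Linda-style coordination concrete, here is a minimal, single-process sketch of a tuple space with the classic out/rd/in operations and None-as-wildcard matching. It illustrates the coordination model only; it is not LIME's middleware API, and it omits transient sharing, reactions, and location.

```python
# Minimal Linda-style tuple space: out() writes a tuple, rd() reads a
# matching tuple without removing it, in_() reads and removes it.
# None in a template acts as a wildcard. Illustrative only.
class TupleSpace:
    def __init__(self):
        self._tuples = []

    def out(self, tup):
        self._tuples.append(tup)

    def _match(self, template, tup):
        return (len(template) == len(tup) and
                all(t is None or t == v for t, v in zip(template, tup)))

    def rd(self, template):
        for tup in self._tuples:
            if self._match(template, tup):
                return tup
        return None

    def in_(self, template):
        for i, tup in enumerate(self._tuples):
            if self._match(template, tup):
                return self._tuples.pop(i)
        return None

ts = TupleSpace()
ts.out(("temperature", "room-1", 21.5))
print(ts.rd(("temperature", "room-1", None)))  # non-destructive read
print(ts.in_(("temperature", None, None)))     # destructive read
```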

Journal ArticleDOI
20 Dec 2006-Wear
TL;DR: In this article, a wheel wear prediction model is developed to describe the evolution of the wheel profile due to the wear process; it can be used to effectively evaluate maintenance intervals, to optimize wheel and rail profiles with respect to wear, and to optimize the railway vehicle's suspensions for both new and worn wheel profiles.

Journal ArticleDOI
TL;DR: In this article, a two-source random model (2SR) was proposed for estimating evapotranspiration in heterogeneous ecosystems as the residual term of the energy balance using Ts observations and Quickbird images.
Abstract: Micrometeorological measurements of evapotranspiration (ET) can be difficult to interpret and use for validating model calculations in the presence of land cover heterogeneity. Land surface fluxes, soil moisture (θ), and surface temperature (Ts) data were collected by an eddy correlation-based tower located at the Orroli (Sardinia) experimental field (covered by woody vegetation, grass, and bare soil) from April 2003 to July 2004. Two Quickbird high-resolution images (summer 2003 and spring 2004) were acquired for depicting the contrasting land cover components. A procedure is presented for estimating ET in heterogeneous ecosystems as the residual term of the energy balance using Ts observations, a two-dimensional footprint model, and the Quickbird images. Two variations on the procedure are successfully implemented: a proposed two-source random model (2SR), which treats the heat sources of each land cover component separately but computes the bulk heat transfer coefficient as spatially homogeneous, and a common two-source tile model. For 2SR, new relationships between the interfacial transfer coefficient and the roughness Reynolds number are estimated for the two bare soil–woody vegetation and grass–woody vegetation composite surfaces. The ET versus θ relationships for each land cover component were also estimated, showing that the woody vegetation has a strong tolerance to long droughts, transpiring at rates close to potential for even the driest conditions. Instead, the grass is much less tolerant to θ deficits, and the switch from grass to bare soil following the rainy season had a significant impact on ET.
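The residual approach in the abstract amounts to closing the surface energy balance for the latent-heat flux, LE = Rn - G - H, with the sensible heat H from a bulk transfer formula. A toy calculation follows; all numbers, including the aerodynamic resistance, are invented for illustration.

```python
# Latent heat flux (evapotranspiration) as the residual of the surface
# energy balance: LE = Rn - G - H, with sensible heat H from a bulk
# transfer formula. All input values are illustrative.
RHO_AIR = 1.2     # air density, kg/m^3
CP_AIR = 1005.0   # specific heat of air, J/(kg K)

def sensible_heat(ts_c: float, ta_c: float, r_ah: float) -> float:
    """H = rho * cp * (Ts - Ta) / r_ah, in W/m^2."""
    return RHO_AIR * CP_AIR * (ts_c - ta_c) / r_ah

def latent_heat_residual(rn: float, g: float, h: float) -> float:
    """LE = Rn - G - H, in W/m^2."""
    return rn - g - h

h = sensible_heat(ts_c=32.0, ta_c=26.0, r_ah=60.0)  # ~120 W/m^2
le = latent_heat_residual(rn=550.0, g=80.0, h=h)
print(f"H = {h:.0f} W/m^2, LE = {le:.0f} W/m^2")
```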

Journal Article
TL;DR: A short review of the present practice for expressing and estimating uncertainty in measurement, based on the definitions and prescriptions given by the GUM, can be found in this paper, where the authors also discuss the importance of the uncertainty concept in quantifying the incompleteness of the knowledge provided by the measurement result.
Abstract: The following fundamental concepts of measurement science have been briefly reported in this tutorial article: 1) The result of a measurement provides only incomplete knowledge of the measurand, whose true value remains unknown and unknowable; 2) The uncertainty concept plays a key role in quantifying the incompleteness of the knowledge provided by the measurement result; and 3) A measurement result can be usefully employed only if the associated uncertainty is estimated and if it can be traced back to the appertaining standard; otherwise, it is a meaningless value. This tutorial has given a short review of the present practice for expressing and estimating uncertainty in measurement, based on the definitions and prescriptions given by the GUM.
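A standard worked instance of the GUM practice reviewed here is the law of propagation of uncertainty: for y = f(x1, ..., xn) with uncorrelated inputs, u_c(y)² = Σ (∂f/∂x_i)² u(x_i)². The sketch below applies it to an invented measurand, the electrical power P = V²/R.

```python
# Combined standard uncertainty via the GUM law of propagation,
# for uncorrelated inputs: u_c(y)^2 = sum (df/dx_i)^2 * u(x_i)^2.
# Example measurand (invented): electrical power P = V^2 / R.
import math

V, u_V = 10.0, 0.05   # volts, standard uncertainty
R, u_R = 50.0, 0.10   # ohms, standard uncertainty

P = V**2 / R
dP_dV = 2 * V / R          # sensitivity coefficient w.r.t. V
dP_dR = -(V**2) / R**2     # sensitivity coefficient w.r.t. R

u_P = math.sqrt((dP_dV * u_V)**2 + (dP_dR * u_R)**2)
print(f"P = {P:.3f} W, u_c(P) = {u_P:.4f} W")
```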

Journal ArticleDOI
TL;DR: In this article, transient response data collected at low temperature over a commercial V2O5-WO3/TiO2 catalyst were used to study the reactivity of NH3-NO/NO2 mixtures with different NO/NOx feed ratios (from 0 to 1).

Journal ArticleDOI
TL;DR: VRFT is a data-based method that permits the controller to be selected directly from data, with no need for a model of the plant; it is based on a global model-reference optimization procedure and does not require repeated experiments on the plant to estimate the gradient of the control cost.
Abstract: This paper introduces the virtual reference feedback tuning (VRFT) approach for controller tuning in a nonlinear setup. VRFT is a data-based method that permits the controller to be selected directly from data, with no need for a model of the plant. It is based on a global model reference optimization procedure and, therefore, does not require repeated experiments on the plant to estimate the gradient of the control cost. For this reason, it represents a very appealing controller design methodology for many control applications.
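In its basic linear form, shown here only to fix ideas (the paper's contribution is the extension to a nonlinear setup), VRFT filters the measured output through the inverse of a chosen reference model to obtain a virtual reference, forms the virtual error, and fits a linearly parametrized controller by least squares. The plant, reference model, and PI controller class below are all invented for the sketch.

```python
# Minimal linear VRFT sketch (illustrative only). Given one
# input/output record (u, y) from the plant, choose controller
# parameters so that C(theta) maps the virtual error
# e = M^{-1} y - y onto the recorded input u.
import numpy as np

rng = np.random.default_rng(2)

# "Unknown" plant, used only to generate data: y(t+1) = 0.9 y(t) + 0.1 u(t)
N = 500
u = rng.normal(size=N)
y = np.zeros(N)
for t in range(N - 1):
    y[t + 1] = 0.9 * y[t] + 0.1 * u[t]

# Reference model M(q) = (1 - a) q^{-1} / (1 - a q^{-1}), a = 0.6 (invented).
a = 0.6
# Virtual reference r from y = M r  =>  r(t) = (y(t+1) - a y(t)) / (1 - a)
r = np.zeros(N)
r[:-1] = (y[1:] - a * y[:-1]) / (1 - a)
e = r - y                      # virtual error

# PI controller class: u_hat = kp * e + ki * cumsum(e); fit kp, ki.
phi = np.column_stack([e, np.cumsum(e)])
theta, *_ = np.linalg.lstsq(phi[:-1], u[:-1], rcond=None)
print(f"fitted kp = {theta[0]:.3f}, ki = {theta[1]:.3f}")
```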

Journal ArticleDOI
TL;DR: In this article, Permanent Scatterers (PS) analysis was applied at the regional scale in support of landslide inventory mapping and at the local scale for the monitoring of single, well-known slope movements.

Journal ArticleDOI
TL;DR: This novel method could offer a strong step forward in bringing the “unseen proteome” (hidden due to low abundance and/or the presence of interferences) within the detection capabilities of current proteomics detection methods.
Abstract: No proteome can be considered "democratic", but rather "oligarchic", since a few proteins dominate the landscape and often obliterate the signal of the rare ones. This is the reason why most scientists lament that, in proteome analysis, the same set of abundant proteins is seen again and again. A host of pre-fractionation techniques have been described, but all of them, one way or another, are besieged by problems, in that they are based on a "depletion principle", i.e. getting rid of the unwanted species. Yet "democracy" calls not for killing the enemy, but for giving "equal rights" to all people. One way to achieve that would be the use of "Protein Equalizer Technology" for reducing protein concentration differences. This comprises a diverse library of combinatorial ligands coupled to spherical porous beads. When these beads come into contact with complex proteomes (e.g. human urine and serum, egg white, and any cell lysate, for that matter) of widely differing protein composition and relative abundances, they are able to "equalize" the protein population, by sharply reducing the concentration of the most abundant components, while simultaneously enhancing the concentration of the most dilute species. It is felt that this novel method could offer a strong step forward in bringing the "unseen proteome" (due to low abundance and/or the presence of interferences) within the detection capabilities of current proteomics detection methods. Examples are given of equalization of human urine and serum samples, resulting in the discovery of a host of proteins never reported before. Additionally, these beads can be used to remove host cell proteins from purified recombinant proteins or protein purified from natural sources that are intended for human consumption. These proteins typically reach purities of the order of 98%: higher purities often become prohibitively expensive. Yet, if incubated with "equalizer beads", these last impurities can be effectively removed at a small cost and with minute losses of the main, valuable product.

Journal ArticleDOI
TL;DR: In this paper, the design and optimization of thermal actuators employed in a novel MEMS-based material testing system is addressed and analytical expressions of the actuator thermomechanical response are derived and discussed.
Abstract: This paper addresses the design and optimization of thermal actuators employed in a novel MEMS-based material testing system. The testing system is designed to measure the mechanical properties of a variety of materials/structures from thin films to one-dimensional structures, e.g. carbon nanotubes (CNTs) and nanowires (NWs). It includes a thermal actuator and a capacitive load sensor with a specimen in-between. The thermal actuator consists of a number of V-shaped beams anchored at both ends. It is capable of generating tens of milli-Newton force and a few micrometers displacement depending on the beams' angle and their number. Analytical expressions of the actuator thermomechanical response are derived and discussed. From these expressions, a number of design criteria are drawn and used to optimize the device response. The analytical predictions are compared with both finite element multiphysics analysis (FEA) and experiments. To demonstrate the actuator performance, polysilicon freestanding specimens cofabricated with the testing system are tested.
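As a back-of-the-envelope companion to the analytical expressions mentioned (not the paper's actual formulas), the free apex displacement of a V-shaped beam pair can be estimated kinematically: each half-beam of length L at pre-bend angle theta elongates by alpha*dT*L, pushing the apex by roughly alpha*dT*L/sin(theta). The numbers below are invented, and a real design must also account for beam bending stiffness and the external load.

```python
# Kinematic estimate of the free displacement of a V-shaped (chevron)
# thermal actuator beam pair: apex motion ~ alpha*dT*L / sin(theta).
# Invented values; bending stiffness and load are ignored here.
import math

ALPHA_SI = 2.6e-6          # thermal expansion of silicon, 1/K
L = 200e-6                 # half-beam length, m
theta = math.radians(2.0)  # pre-bend angle of the beams
dT = 300.0                 # average temperature rise, K

elongation = ALPHA_SI * dT * L            # axial expansion of each beam
apex_disp = elongation / math.sin(theta)  # small-angle amplification
print(f"elongation ~ {elongation*1e9:.0f} nm, "
      f"apex displacement ~ {apex_disp*1e6:.2f} um")
```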

Journal ArticleDOI
TL;DR: In this paper, the effectiveness of three commercial organic inhibitors in preventing chloride-induced corrosion of carbon steel in concrete is investigated, since the real effectiveness of these products is not yet clearly established.

Journal ArticleDOI
01 Nov 2006-Europace
TL;DR: Technical aspects of novel electrocardiogram (ECG) analysis techniques are described and research and clinical applications of these methods for characterization of both the fibrillatory process and the ventricular response during AF are presented.
Abstract: Atrial fibrillation (AF) is the most common arrhythmia encountered in clinical practice. Neither the natural history of AF nor its response to therapy is sufficiently predictable by clinical and echocardiographic parameters. The purpose of this article is to describe technical aspects of novel electrocardiogram (ECG) analysis techniques and to present research and clinical applications of these methods for characterization of both the fibrillatory process and the ventricular response during AF. Atrial fibrillatory frequency (or rate) can reliably be assessed from the surface ECG using digital signal processing (extraction of atrial signals and spectral analysis). This measurement shows large inter-individual variability and correlates well with intra-atrial cycle length, a parameter which appears to have primary importance in AF maintenance and response to therapy. AF with a low fibrillatory rate is more likely to terminate spontaneously and responds better to antiarrhythmic drugs or cardioversion, whereas high-rate AF is more often persistent and refractory to therapy. Ventricular responses during AF can be characterized by a variety of methods, which include analysis of heart rate variability, RR-interval histograms, Lorenz plots, and non-linear dynamics. These methods have all shown a certain degree of usefulness, either in scientific explorations of atrioventricular (AV) nodal function or in selected clinical questions such as predicting response to drugs, cardioversion, or AV nodal modification. The role of the autonomic nervous system for AF sustenance and termination, as well as for ventricular rate responses, can be explored by different ECG analysis methods. In conclusion, non-invasive characterization of atrial fibrillatory activity and ventricular response can be performed from the surface ECG in AF patients. Different signal processing techniques have been suggested for identification of underlying AF pathomechanisms and prediction of therapy efficacy.
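The "spectral analysis" step for the atrial fibrillatory frequency can be illustrated in a few lines: given an atrial signal already extracted from the ECG (e.g. after QRST cancellation, which is not shown here), the dominant frequency is the PSD peak, typically sought in roughly the 4-9 Hz band. The synthetic signal below stands in for real data.

```python
# Estimate the atrial fibrillatory frequency as the spectral peak of
# an (already extracted) atrial signal. A synthetic 6 Hz sawtooth-like
# waveform with noise stands in for a real residual ECG signal.
import numpy as np
from scipy.signal import welch

fs = 250.0                       # sampling rate, Hz
t = np.arange(0, 30, 1 / fs)     # 30 s record
f_af = 6.0                       # "true" fibrillatory frequency, Hz
atrial = ((t * f_af) % 1.0) - 0.5          # sawtooth fibrillatory wave
atrial += 0.2 * np.random.default_rng(3).normal(size=t.size)

freqs, psd = welch(atrial, fs=fs, nperseg=2048)
band = (freqs >= 4.0) & (freqs <= 9.0)     # usual AF frequency band
dominant = freqs[band][np.argmax(psd[band])]
print(f"dominant atrial frequency ~ {dominant:.2f} Hz")
```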

Journal ArticleDOI
TL;DR: A fiber-matrix constitutive model is employed and proposed that is able to account for the human cornea's mechanical behavior in healthy conditions or in the presence of keratoconus under increasing values of the intraocular pressure.
Abstract: The human cornea (the external lens of the eye) has the macroscopic structure of a thin shell, originated by the organization of collagen lamellae parallel to the middle surface of the shell. The lamellae, composed of bundles of collagen fibrils, are responsible for the experimentally observed anisotropy of the cornea. Anomalies in the fibril structure may explain the changes in the mechanical behavior of the tissue observed in pathologies such as keratoconus. We employ a fiber-matrix constitutive model and propose a numerical model for the human cornea that is able to account for its mechanical behavior in healthy conditions or in the presence of keratoconus under increasing values of the intraocular pressure. The ability of our model to reproduce the behavior of the human cornea opens a promising perspective for the numerical simulation of refractive surgery.
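A common way to write a fiber-matrix constitutive model of this kind (a generic form from the anisotropic-hyperelasticity literature, not necessarily the exact energy used in the paper) splits the strain energy into an isotropic matrix term and exponentially stiffening fiber terms:

```latex
% Generic fiber-matrix strain energy of Holzapfel type (an assumed,
% representative form): isotropic matrix term plus fiber terms along
% the collagen-lamella directions a_0^{(i)}.
\Psi = \Psi_{\mathrm{matrix}}(I_1)
 + \sum_{i} \frac{k_1}{2k_2}\Big[\exp\!\big(k_2\,(I_{4i}-1)^2\big)-1\Big],
\qquad
I_{4i} = \mathbf{a}_0^{(i)} \cdot \mathbf{C}\,\mathbf{a}_0^{(i)}.
```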

Journal ArticleDOI
TL;DR: In this paper, a class of damped wave equations with superlinear source term is considered and it is shown that every global solution is uniformly bounded in the natural phase space, and not only finite time blow up for solutions starting in the unstable set but also high energy initial data for which the solution blows up are constructed.
Abstract: A class of damped wave equations with superlinear source term is considered. It is shown that every global solution is uniformly bounded in the natural phase space. Global existence of solutions with initial data in the potential well is obtained. Finally, not only finite time blow up for solutions starting in the unstable set is proved, but also high energy initial data for which the solution blows up are constructed.
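To fix ideas, a representative member of this class of problems has the following form (an assumed model problem; the precise assumptions on the damping and source are those of the paper):

```latex
% Damped wave equation with linear damping and a superlinear
% source term (assumed model, exponent p > 2):
u_{tt} - \Delta u + u_t = |u|^{p-2}u
\quad \text{in } \Omega \times (0, T),
\qquad u = 0 \ \text{on } \partial\Omega \times (0, T).
```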

Journal ArticleDOI
TL;DR: This work proposes a simple lumped parameter model for the heart and shows how it can be coupled numerically with a 1D model for the arteries; numerical results confirm the relevant impact of the heart-arteries coupling in realistic simulations.
Abstract: The investigations on the pressure wave propagation along the arterial network and its relationships with vascular physiopathologies can be supported nowadays by numerical simulations. One dimensional (1D) mathematical models, based on systems of two partial differential equations for each arterial segment suitably matched at bifurcations, can be simulated with low computational costs and provide useful insights into the role of wave reflections. Some recent works have indeed moved in this direction. The specific contribution of the present paper is to illustrate a 1D numerical model numerically coupled with a model for the heart action. Typically, the action of the heart on the arterial system is modelled as a boundary condition at the entrance of the aorta. However, the left ventricle (LV) and the vascular network are a strongly coupled single mechanical system. This coupling can be relevant in the numerical description of pressure waves propagation, particularly when dealing with pathological situations. In this work, we propose a simple lumped parameter model for the heart and show how it can be coupled numerically with a 1D model for the arteries. Numerical results actually confirm the relevant impact of the heart-arteries coupling in realistic simulations.
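For orientation, the 1D arterial segment models referred to here are usually written as a hyperbolic system for the cross-sectional area and the flow rate, closed by an algebraic tube law (a standard form from the 1D haemodynamics literature; the notation is assumed here, not quoted from the paper):

```latex
% Standard 1D blood-flow system for one arterial segment (assumed
% notation): area A(x,t), flow rate Q(x,t), pressure p(x,t),
% momentum correction \alpha, friction parameter K_R.
\frac{\partial A}{\partial t} + \frac{\partial Q}{\partial x} = 0,
\qquad
\frac{\partial Q}{\partial t}
 + \frac{\partial}{\partial x}\!\left(\alpha\,\frac{Q^2}{A}\right)
 + \frac{A}{\rho}\,\frac{\partial p}{\partial x} = -K_R\,\frac{Q}{A},
% closed by an algebraic tube law such as
\qquad
p = p_{\mathrm{ext}} + \beta\,\frac{\sqrt{A}-\sqrt{A_0}}{A_0}.
```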

Journal ArticleDOI
TL;DR: In this article, the authors developed a modeling framework that accounts for the variability of extreme rainfall rates with the duration of rainfall events, which makes it possible to predict the temporal scale of hillslope evolution associated with the occurrence of shallow landslides.
Abstract: Both rainfall intensity and duration take part in determining the hydrologic conditions favorable to the occurrence of shallow landslides. Hydrogeomorphic models of slope stability generally account for the dependence of landsliding on soil mechanical and topographic factors, while the role of rainfall duration is seldom considered within a process-based approach. To investigate the effect of different climate drivers on slope stability, we developed a modeling framework that accounts for the variability of extreme rainfall rate with the duration of rainfall events. The slope stability component includes the key characteristics of the soil mantle, i.e., angle of shearing resistance, void ratio, and specific gravity of solids. Hillslope hydrology is modeled by coupling the conservation of mass of soil water with Darcy's law used to describe seepage flow. This yields a simple analytical model capable of describing the combined effect of duration and intensity of a precipitation episode in triggering shallow landslides. Dimensionless variables are introduced to investigate model sensitivity. Finally, coupling this model with the simple scaling model for the frequency of storm precipitation can help in understanding the climate control on landscape evolution. This allows prediction of the temporal scale of hillslope evolution associated with the occurrence of shallow landslides. Model application is shown for the Mettman Ridge study area in Oregon, United States.
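The slope-stability ingredient in models of this family is often the infinite-slope factor of safety with a wetness term, FS = c'/(gamma_s z sin(beta) cos(beta)) + (1 - w gamma_w/gamma_s) tan(phi')/tan(beta), where w = h/z is the saturated fraction of the soil column. The sketch below evaluates this textbook form (not necessarily the paper's exact formulation) with invented parameters.

```python
# Infinite-slope factor of safety with a wetness term, a textbook
# form used in shallow-landslide modelling. Parameters are invented.
import math

def factor_of_safety(c: float, phi_deg: float, beta_deg: float,
                     z: float, wetness: float,
                     gamma_s: float = 18e3, gamma_w: float = 9.81e3) -> float:
    """FS = c / (gamma_s z sin(b) cos(b))
            + (1 - wetness * gamma_w / gamma_s) * tan(phi) / tan(b)."""
    beta = math.radians(beta_deg)
    phi = math.radians(phi_deg)
    cohesion_term = c / (gamma_s * z * math.sin(beta) * math.cos(beta))
    friction_term = ((1 - wetness * gamma_w / gamma_s)
                     * math.tan(phi) / math.tan(beta))
    return cohesion_term + friction_term

# Dry vs. fully saturated soil column on a 35-degree slope (invented values).
print(f"dry:       FS = {factor_of_safety(2e3, 35, 35, 1.0, 0.0):.2f}")
print(f"saturated: FS = {factor_of_safety(2e3, 35, 35, 1.0, 1.0):.2f}")
```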