
Showing papers by "Polytechnic University of Catalonia published in 1998"


Journal ArticleDOI
TL;DR: In this article, the multiband behavior of the fractal Sierpinski (1915) antenna is described and compared to the well-known single-band bow-tie antenna.
Abstract: The multiband behavior of the fractal Sierpinski (1915) antenna is described. Due to its mainly triangular shape, the antenna is compared to the well-known single-band bow-tie antenna. Both experimental and numerical results show that the self-similarity properties of the fractal shape are translated into its electromagnetic behavior. A deeper physical insight on such a behavior is achieved by means of the computed current densities over the antenna surface, which also display some similarity properties through the bands.
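The self-similar geometry suggests log-periodically spaced operating bands. A minimal sketch of that spacing, assuming an ideal scale factor of 2 between gasket iterations; the first-band frequency below is a hypothetical placeholder, not a measurement from the paper:

```python
# Illustrative band spacing for a self-similar (Sierpinski-like) antenna.
# Assumes an ideal self-similarity scale factor of 2; f0_ghz is hypothetical.
SCALE_FACTOR = 2.0
f0_ghz = 0.5  # hypothetical lowest operating band, in GHz

bands = [f0_ghz * SCALE_FACTOR**n for n in range(5)]
for n, f in enumerate(bands):
    print(f"band {n}: f = {f:.2f} GHz, f/f0 = {SCALE_FACTOR**n:.0f}")
```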

754 citations


Journal ArticleDOI
TL;DR: This paper shows that connected operators work implicitly on a structured representation of the image made of flat zones, and proposes the max-tree as a suitable and efficient structure to deal with the processing steps involved in antiextensive connected operators.
Abstract: This paper deals with a class of morphological operators called connected operators. These operators filter the signal by merging its flat zones. As a result, they do not create any new contours and are very attractive for filtering tasks where the contour information has to be preserved. This paper shows that connected operators work implicitly on a structured representation of the image made of flat zones. The max-tree is proposed as a suitable and efficient structure to deal with the processing steps involved in antiextensive connected operators. A formal definition of the various processing steps involved in the operator is proposed and, as a result, several lines of generalization are developed. First, the notion of connectivity and its definition are analyzed. Several modifications of the traditional approach are presented. They lead to connected operators that are able to deal with texture. They also allow the definition of connected operators with less leakage than the classical ones. Second, a set of simplification criteria are proposed and discussed. They lead to simplicity-, entropy-, and motion-oriented operators. The problem of using a nonincreasing criterion is analyzed. Its solution is formulated as an optimization problem that can be very efficiently solved by a Viterbi (1979) algorithm. Finally, several implementation issues are discussed showing that these operators can be very efficiently implemented.
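For readers who want to experiment, the max-tree route to one classical antiextensive connected operator, the area opening, is available off the shelf. A minimal sketch assuming scikit-image (this illustrates the flat-zone-merging behaviour described above, not the authors' own implementation):

```python
# Antiextensive connected filtering via a max-tree-based area opening.
# Requires numpy and scikit-image; small bright flat zones are merged
# into their surroundings without creating any new contours.
import numpy as np
from skimage.morphology import area_opening

img = np.zeros((64, 64), dtype=np.uint8)
img[10:40, 10:40] = 200   # large bright component: survives the filter
img[50:53, 50:53] = 220   # small bright speck: merged into the background

filtered = area_opening(img, area_threshold=64)
print(img[51, 51], "->", filtered[51, 51])  # 220 -> 0: speck removed
```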

656 citations


Journal ArticleDOI
TL;DR: It is shown that the classical SUPG method is very similar to an explicit version of the Characteristic-Galerkin method, whereas the Taylor-Galerkin method has a stabilization effect similar to a sub-grid scale model, which is in turn related to the introduction of bubble functions.

482 citations


Journal ArticleDOI
TL;DR: In this article, a constitutive damage model for massive concrete is presented, mainly intended for the seismic analysis of gravity and arch dams, and an extension to account for the concrete strain-rate dependency, suitable for seismic analysis, is presented at the end.

434 citations


Journal ArticleDOI
TL;DR: In this paper, some asymptotic properties of matrix diffusion are derived: early-time behavior (short tests) depends only on a single lumped parameter, whereas late-time behavior (long tracer tests) depends only on the matrix capacity, and not on the matrix diffusion coefficient or block size and shape.
Abstract: Matrix diffusion has become widely recognized as an important transport mechanism. Unfortunately, accounting for matrix diffusion complicates solute-transport simulations. This problem has led to simplified formulations, partly motivated by the solution method. As a result, some confusion has been generated about how to properly pose the problem. One of the objectives of this work is to find some unity among existing formulations and solution methods. In doing so, some asymptotic properties of matrix diffusion are derived. Specifically, early-time behavior (short tests) depends only on $\phi_m^2 R_m D_m / L_m^2$, whereas late-time behavior (long tracer tests) depends only on $\phi_m R_m$, and not on matrix diffusion coefficient or block size and shape. The latter is always true for mean arrival time. These properties help in: (a) analyzing the qualitative behavior of matrix diffusion; (b) explaining one paradox of solute transport through fractured rocks (the apparent dependence of porosity on travel time); (c) discriminating between matrix diffusion and other problems (such as kinetic sorption or heterogeneity); and (d) describing identifiability problems and ways to overcome them.
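In schematic form (a generic single-rate matrix diffusion formulation, written here for orientation rather than transcribed from the paper), diffusion into matrix blocks of half-size $L_m$ obeys

```latex
% Generic matrix-diffusion block equation (schematic, not from the paper):
R_m \frac{\partial c_m}{\partial t} = D_m \frac{\partial^2 c_m}{\partial z^2},
\qquad 0 < z < L_m ,
```

so the characteristic diffusion time is $t_d = R_m L_m^2 / D_m$: at early times only the lumped group $\phi_m^2 R_m D_m / L_m^2$ is identifiable, while at late times only the storage capacity $\phi_m R_m$ remains.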

347 citations


Journal ArticleDOI
TL;DR: In this article, the authors compared seven different inverse approaches for identifying aquifer transmissivity and found that, for stationary fields, the linearized methods were as accurate as the nonlinear methods in predicting travel times and travel paths.
Abstract: This paper describes the first major attempt to compare seven different inverse approaches for identifying aquifer transmissivity. The ultimate objective was to determine which of several geostatistical inverse techniques is better suited for making probabilistic forecasts of the potential transport of solutes in an aquifer where spatial variability and uncertainty in hydrogeologic properties are significant. Seven geostatistical methods (fast Fourier transform (FF), fractal simulation (FS), linearized cokriging (LC), linearized semianalytical (LS), maximum likelihood (ML), pilot point (PP), and sequential self-calibration (SS)) were compared on four synthetic data sets. Each data set had specific features meeting (or not) classical assumptions about stationarity, amenability to a geostatistical description, etc. The comparison of the outcome of the methods is based on the prediction of travel times and travel paths taken by conservative solutes migrating in the aquifer for a distance of 5 km. Four of the methods, LS, ML, PP, and SS, were identified as being approximately equivalent for the specific problems considered. The magnitude of the variance of the transmissivity fields, which went as high as 10 times the generally accepted range for linearized approaches, was not a problem for the linearized methods when applied to stationary fields; that is, their inverse solutions and travel time predictions were as accurate as those of the nonlinear methods. Nonstationarity of the “true” transmissivity field, or the presence of “anomalies” such as high-permeability fracture zones was, however, more of a problem for the linearized methods. The importance of the proper selection of the semivariogram of the log10(T) field (or the ability of the method to optimize this variogram iteratively) was found to have a significant impact on the accuracy and precision of the travel time predictions. Use of additional transient information from pumping tests did not result in major changes in the outcome. While the methods differ in their underlying theory, and the codes developed to implement the theories were limited to varying degrees, the most important factor for achieving a successful solution was the time and experience devoted by the user of the method.

320 citations


Journal ArticleDOI
TL;DR: A comparison of the Canarian archipelago with the prototypical hotspot-related island group, the Hawaiian Archipelago, reveals that the differences between the two are not as great as had previously been supposed on the basis of older data.
Abstract: The Canarian Archipelago is a group of volcanic islands on a slow-moving oceanic plate, close to a continental margin. The origins of the archipelago are controversial: a hotspot or mantle plume, a zone of lithospheric deformation, a region of compressional block-faulting or a rupture propagating westwards from the active Atlas Mountains fold belt have been proposed by different authors. However, comparison of the Canarian Archipelago with the prototypical hotspot-related island group, the Hawaiian Archipelago, reveals that the differences between the two are not as great as had previously been supposed on the basis of older data. Quaternary igneous activity in the Canaries is concentrated at the western end of the archipelago, close to the present-day location of the inferred hotspot. This is the same relationship as seen in the Hawaiian and Cape Verde islands. The latter archipelago, associated with a well-defined but slow-moving mantle plume, shows anomalies in a plot of island age against distance which are comparable to those seen in the Canary Islands: these anomalies cannot therefore be used to argue against a hotspot origin for the Canaries. Individual islands in both archipelagoes are characterized by initial rapid growth (the ‘shield-building’ stages of activity), followed by a period of quiescence and deep erosion (erosion gap) which in turn is followed by a ‘post-erosional’ stage of activity. The absence of post-shield stage subsidence in the Canaries is in marked contrast with the major subsidence experienced by the Hawaiian Islands, but is comparable with the lack of subsidence evident in other island groups at slow-moving hotspots, such as the Cape Verdes. Comparison of the structure and structural evolution of the Canary Islands with other oceanic islands such as Hawaii and Reunion reveals many similarities. These include the development of triple (‘Mercedes Star’) rift zones and the occurrence of giant lateral collapses on the flanks of these rift zones. The apparent absence of these features in the post-erosional islands may in part be a result of their greater age and deeper erosion, which has removed much of the evidence for their early volcanic architecture. We conclude that the many similarities between the Canary Islands and island groups whose hotspot origins are undisputed show that the Canaries have been produced in the same way.

317 citations


Journal ArticleDOI
TL;DR: Some of the different methods available for analysing the effects of randomness of models and data in structural analysis are reviewed to point out some research needs in the field from the authors' point of view.
Abstract: A state of the art on simulation methods in stochastic structural analysis is presented. The purpose of the paper is to review some of the different methods available for analysing the effects of randomness of models and data in structural analysis. While most of these techniques can be grouped under the general name of Monte Carlo methods, the several published algorithms are more suitable to some objectives of analysis than to others in each case. These objectives have been classified into the following categories: (1) the Statistical Description of the structural scattering, a primary analysis in which the uncertain parameters are treated as random variables; (2) the consideration of the spatial variability of the random parameters, which must then be modelled as Random Fields (Stochastic Finite Elements); (3) the advanced Monte Carlo methods for calculating the usually very low failure probabilities (Reliability Analysis); and (4) a deterministic technique that departs from the random nature of the above methods, but which can be linked with them in some cases, known as the Response Surface Method. All of these techniques are critically examined and discussed. The concluding remarks point out some research needs in the field from the authors' point of view.
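As a toy illustration of category (3), crude Monte Carlo estimation of a failure probability for a resistance-load limit state; a minimal sketch with invented distribution parameters, not an example from the paper:

```python
# Crude Monte Carlo reliability analysis for the limit state g = R - S < 0.
# Distribution parameters are invented for illustration only.
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000
R = rng.normal(loc=500.0, scale=50.0, size=n)   # resistance
S = rng.normal(loc=300.0, scale=60.0, size=n)   # load effect
pf = np.count_nonzero(R - S < 0.0) / n

# The estimator's standard error shows why advanced (variance-reduced)
# Monte Carlo methods are needed when failure probabilities are very low.
se = np.sqrt(pf * (1.0 - pf) / n)
print(f"P_f ~ {pf:.2e} +/- {se:.1e}")
```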

281 citations


Journal ArticleDOI
TL;DR: In this article, a long wire fractal antenna based on the Koch curve is presented, which improves the features of a common linear monopole by increasing the radiation resistance and reducing the Q at each fractal iteration.
Abstract: A small but long wire fractal antenna based on the Koch curve is presented. Experimental and numerical results show that the antenna improves the features of a common linear monopole. The radiation resistance is increased and the Q is reduced at each fractal iteration, approaching the fundamental limit on small antennas.
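The geometric mechanism is easy to reproduce: each Koch iteration replaces every segment by four segments one third as long, so the total wire length grows by a factor of 4/3 per iteration while the end-to-end height of the monopole stays fixed. A minimal sketch (the height and iteration count are arbitrary choices, not the paper's prototypes):

```python
# Total wire length of a Koch-curve monopole versus fractal iteration.
# The height h is an arbitrary illustrative value.
h = 6.0  # cm, end-to-end height of the monopole
for k in range(6):
    length = h * (4.0 / 3.0) ** k   # Koch curve: length grows by 4/3 per step
    print(f"iteration {k}: wire length = {length:5.2f} cm")
```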

256 citations


Journal ArticleDOI
TL;DR: In this paper, the concept of the so-called "artificial or balancing diffusion" used to stabilize the numerical solution of advective-diffusive transport and fluid flow problems is revised.
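For orientation, the textbook form of the idea (standard SUPG notation, not transcribed from the paper): for the steady advection-diffusion equation $\mathbf{u}\cdot\nabla\phi - \nabla\cdot(k\,\nabla\phi) = f$, the perturbed test function

```latex
% Standard SUPG stabilization (textbook form, for orientation only):
w_h^{\mathrm{SUPG}} = w_h + \tau\,\mathbf{u}\cdot\nabla w_h ,
\qquad
\tau = \frac{\alpha\, h}{2\,\lvert \mathbf{u} \rvert} ,
```

is equivalent, in the simplest settings, to adding a balancing diffusion $\bar{k} = \alpha\,\lvert\mathbf{u}\rvert\, h / 2$ acting along streamlines, which is precisely the "artificial or balancing diffusion" revisited in the paper.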

242 citations


Journal ArticleDOI
TL;DR: A brief overview of improved liquid membrane (LM) separation techniques can be found in this paper, including modified bulk, supported and emulsion liquid membranes as well as hollow fibre contained liquid membranes, electrostatic pseudo liquid membranes (ESPLIM), reverse micelles and recently developed hybrid (HLM) and other LM configurations.
Abstract: This paper presents a brief overview of improved liquid membrane (LM) separation techniques, including modified bulk, supported and emulsion liquid membranes as well as hollow fibre contained liquid membranes, electrostatic pseudo liquid membranes (ESPLIM), reverse micelles and recently developed hybrid (HLM) and other LM configurations. The discussion also includes the design of ion-specific carriers, analytical importance, aspects of stability and modelling of LMs, and their applications in the separation/removal of metal cations from a range of diverse matrices. In general, an attempt has been made to review the literature published from 1990 to 1997 in order to focus on the present status of different liquid membrane configurations. The LM studies dealing with separation and removal of organic compounds and gases are not included in this article owing to limitations of space.

Journal ArticleDOI
TL;DR: The injectability test could be used to determine accurately the dough time of CPBCs, and relations between the setting time and the cohesion time are discussed.
Abstract: The injectability of four calcium phosphate bone cements (CPBCs) was measured using a commercial disposable syringe. It varied considerably with the cement powder composition, with the liquid/powder ratio, with the time after starting the mixing of liquid and powder, with the accelerator concentration (% Na2HPO4), and with the ageing time of the cement powder which was prepared by milling. The injectability test could be used to determine accurately the dough time of CPBCs. Relations between the setting time and the cohesion time are discussed.

Journal ArticleDOI
TL;DR: In this paper, a large entropy variation (magnetocaloric effect) has been discovered in ceramic perovskites with the formulas La0.65Ca0.35Ti1−xMnxO3−z and La0.5+x+yLi0.5−3yTi1−3xMn3xO3−z, and both Curie temperature and entropy change were studied from 4.2 to 400 K for different stoichiometric compositions and applied magnetic fields.
Abstract: A large entropy variation (magnetocaloric effect) has been discovered in ceramic perovskites with the formulas La0.65Ca0.35Ti1−xMnxO3−z and La0.5+x+yLi0.5−3yTi1−3xMn3xO3−z. Both Curie temperature and entropy change were studied from 4.2 to 400 K for different stoichiometric compositions and applied magnetic fields. Our conclusion is that these materials are excellent candidates for working materials in magnetic refrigeration and liquefaction devices in a wide temperature range.
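For context, the entropy change in such studies is conventionally extracted from isothermal magnetization curves through the Maxwell relation, the standard procedure in the magnetocaloric literature (the abstract does not spell out the method used):

```latex
% Standard Maxwell-relation estimate of the isothermal entropy change:
\Delta S_M(T, H) \;=\; \mu_0 \int_0^{H}
  \left( \frac{\partial M}{\partial T} \right)_{H'} \mathrm{d}H'
```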

Journal ArticleDOI
TL;DR: Methods to use age as the time scale and compare inferences and interpretations with those obtained using the standard approach are presented and are recommended for survival analysis of the elderly population.
Abstract: Background The standard approach for survival analysis of the elderly population is to define the survival time as the elapsed time from entry into the study until death, and to adjust by age using stratification and regression procedures. However, the interest is in the study of the aging process and the risk factors related to it, not in the use of time-on-study as the time scale. Here, we present methods to use age as the time scale and compare inferences and interpretations with those obtained using the standard approach. Methods A total of 1,315 individuals aged 65 years or older from the city of Barcelona, Spain, were interviewed in 1986 (baseline). The vital status of the cohort was assessed in October 1994. To illustrate the usefulness of age as time scale (alternative approach) instead of time-on-study in the survival analysis of the elderly population, both methods were used to assess the relationship between baseline functional capacity and mortality. Results Using the alternative approach, we observed that 50% of the sample died at age 80.6 years; this information could not be estimated with the standard approach. Using age as a covariate in the standard analysis with time-on-study as the time scale and using age as the time scale in the alternative analysis, the association of functional capacity at baseline and mortality was of similar magnitude under both analyses. Nevertheless, using the alternative approach, relative risks were slightly lower, and the adjustment by age was tight and was not subject to the inherent assumptions in regression models of the functional relationship of independent variables with outcome. We illustrated the methods with fixed covariates (i.e., gender) and baseline values of time-dependent covariates (i.e., functional capacity), but we discussed the extension of our methods for the analysis of time-dependent covariates measured at several visits in a cohort study. Methods proposed here are easily implemented with widely available statistical software packages. Conclusions Although the use of standard survival analysis generally produces correct estimates, the use of age as time scale is deemed more appropriate for survival analysis of the elderly: Inferences are easier to interpret and final models are simpler. We therefore recommend the use of age as time scale for survival analysis of the elderly population.
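In modern software the alternative approach amounts to a left-truncated survival model with age as the time axis. A minimal sketch using the lifelines library (column names and data are hypothetical, and the paper itself predates this package):

```python
# Cox model with age as the time scale: subjects enter the risk set at their
# baseline age (left truncation) and exit at death or censoring age.
# Requires pandas and lifelines; all column names and values are hypothetical.
import pandas as pd
from lifelines import CoxPHFitter

df = pd.DataFrame({
    "age_entry": [67.0, 72.5, 80.1, 65.3, 78.9, 70.2],  # age at baseline
    "age_exit":  [75.2, 73.0, 88.4, 70.9, 80.3, 79.5],  # death/censoring age
    "died":      [1, 0, 1, 0, 1, 0],
    "func_cap":  [2, 1, 3, 1, 3, 2],  # baseline functional capacity score
})

cph = CoxPHFitter()
# entry_col implements delayed entry (left truncation), so age needs neither
# stratification nor inclusion as a regression covariate.
cph.fit(df, duration_col="age_exit", event_col="died", entry_col="age_entry")
cph.print_summary()
```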

Journal ArticleDOI
TL;DR: In this paper, the performance of both materials was compared in terms of phenol conversion in 240 h tests, and the results showed that the active carbon, without any active metal supported, gives the highest phenol reduction.
Abstract: Catalytic wet air oxidation is a promising alternative for the treatment of phenolic waste water which cannot be treated in conventional sewage plants. Catalytic wet air oxidation of an aqueous phenol solution was conducted in a fixed bed reactor operating in trickle flow regime. Either active carbon or a commercial copper oxide supported over γ-alumina was used as catalyst. The performance of both materials was compared in terms of phenol conversion in 240 h tests. The results showed that the active carbon, without any active metal supported, gives the highest phenol conversion. The supported copper catalyst undergoes a rapid deactivation due to the dissolution of the metal active species in the acidic medium in which the reaction takes place. On the other hand, the active carbon maintains a higher activity throughout the test, although a decrease of the phenol conversion was also observed due to both the loss of active carbon by combustion and the reduction of its surface area. The phenol oxidation was proved to occur through a first order mechanism with respect to phenol. After the ten-day run, the catalytic activity of the active carbon was found to be about eight times higher than that of the commercial catalyst, also showing high selectivity to the production of carbon dioxide.
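Since the kinetics are first order in phenol, the expected conversion in an ideal plug-flow trickle bed follows X = 1 − exp(−kτ). A minimal sketch with an invented rate constant and space times, not values fitted to the 240 h tests:

```python
# First-order phenol conversion in an ideal plug-flow reactor:
# X = 1 - exp(-k * tau); k and tau are invented for illustration.
import math

k = 0.015  # 1/s, hypothetical apparent first-order rate constant
for tau in (30.0, 60.0, 120.0, 240.0):  # s, space time
    X = 1.0 - math.exp(-k * tau)
    print(f"tau = {tau:5.0f} s -> phenol conversion = {100 * X:5.1f} %")
```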

Proceedings ArticleDOI
13 Jul 1998
TL;DR: A novel processor microarchitecture is presented that relieves four of the most important bottlenecks of superscalar processors to exploit instruction level parallelism: the serialization imposed by true dependences, the instruction window size, the complexity of a wide issue machine and the instruction fetch bandwidth requirements.
Abstract: This thesis studies the execution model of speculative multithreaded processors, as well as the requirements for implementing it. The execution model is based on inserting spawn instructions into the sequential code. Execution of a program on these processors therefore proceeds as on any other processor until a spawn point is reached. At that point, a new speculative thread is created at the location indicated by the spawn instruction, and both threads run in parallel. When the spawning thread reaches the starting point of the speculative thread, the speculation must be verified. If it was correct, the context of the non-speculative thread is committed and freed for future speculative threads. If the verification fails, the correct state is recovered. In this execution model there is always exactly one non-speculative thread, and there may be multiple speculative threads. Supporting this execution model requires: i) hardware capable of creating and managing speculative threads, and ii) a partitioning mechanism that divides programs into speculative threads. Several platforms for managing concurrent threads have been studied. On the one hand, clustered processors benefit from shorter delays, lower power consumption and lower complexity, although communication latencies are higher. On the other hand, centralized architectures benefit from resource sharing and lower communication latency, but their hardware complexity is much higher. In either case, the hardware must be able to execute multiple threads simultaneously, with the complication that some values must be shared while others are private copies. That is, the processor must be able to manage multiple versions of the same register or memory location, one for each running thread. Special emphasis has been placed on managing data dependences between speculative threads, since they have a major impact on processor performance. Finding fully independent threads is almost impossible in irregular applications, so speculative threads will need values produced by other speculative threads. Two mechanisms have been studied: synchronizing the producer and consumer threads, and predicting the dependent values. For the first, mechanisms are proposed to forward a value from producer to consumer as soon as it is produced, especially for memory values. The second approach is much more attractive, since if all dependent values were predicted correctly, the threads would execute independently. Multiple value predictors from the literature have been evaluated, and a new predictor designed specifically for this kind of architecture, the increment predictor, is presented. This predictor uses the control information of the speculative threads to predict values, and the results are very promising even for very small predictor sizes. Finally, how applications are partitioned affects the performance of this kind of processor, and several partitioning schemes have been proposed and evaluated.
One family of schemes assigns speculative threads to program constructs that by themselves provide a certain degree of control independence; policies in this family create speculative threads at loop iterations, loop continuations and subroutine continuations. The second family of partitioning schemes relies on profile-based analysis to find the most suitable spawn pairs for each code, so that the parts of the program with the best characteristics are selected for creating speculative threads. Selection criteria considered in this thesis include control independence, minimum thread size, data independence and value predictability. Both families yield very significant results, although the profile-based scheme improves on the results of the other family.
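The spawn/verify protocol can be mimicked with a toy sequential simulator; a conceptual sketch only, with invented function names throughout (real designs do this in hardware with versioned registers and memory):

```python
# Conceptual sketch of speculative thread spawning with value prediction.
# All names and the toy "program" are invented for illustration.

def run_region(live_in):
    """Code between the spawn point and the speculative thread's start."""
    return live_in + 1          # produces the live-in of the next region

def run_next_region(live_in):
    """Region executed speculatively by the spawned thread."""
    return live_in * 2

def predict(live_in):
    """Increment-style value predictor for the speculative live-in."""
    return live_in + 1          # guess at what run_region will produce

def execute_with_spawn(live_in):
    predicted = predict(live_in)
    speculative = run_next_region(predicted)  # runs "in parallel"
    actual = run_region(live_in)              # non-speculative thread
    if predicted == actual:
        return speculative                    # verify OK: commit
    return run_next_region(actual)            # mispredict: squash, re-execute

print(execute_with_spawn(10))  # prediction holds here, so the commit path runs
```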

Journal ArticleDOI
01 Jun 1998
TL;DR: Calcium phosphate bone cements are osteotransductive, i.e. after implantation in bone they are transformed into new bone tissue and can be used for healing of bone defects, bone augmentation and bone reconstruction.
Abstract: Calcium phosphate bone cements (CPBCs) are osteotransductive, i.e. after implantation in bone they are transformed into new bone tissue. Furthermore, due to the fact that they are mouldable, their osteointegration is immediate. Their chemistry has been established previously. Some CPBCs contain amorphous calcium phosphate (ACP) and set by a sol-gel transition. The others are crystalline and can give as the reaction product dicalcium phosphate dihydrate (DCPD), calcium-deficient hydroxyapatite (CDHA), carbonated apatite (CA) or hydroxyapatite (HA). Mixed-type gypsum-DCPD cements are also described. In vivo rates of osteotransduction vary as follows: gypsum-DCPD > DCPD > CDHA approximately CA > HA. The osteotransduction of CDHA-type cements may be increased by adding dicalcium phosphate anhydrous (DCP) and/or CaCO3 to the cement powder. CPBCs can be used for healing of bone defects, bone augmentation and bone reconstruction. Incorporation of drugs like antibiotics and bone morphogenetic protein is envisaged. Load-bearing applications are allowed for CDHA-type, CA-type and HA-type CPBCs as they have a higher compressive strength than human trabecular bone (10 MPa).

Journal ArticleDOI
TL;DR: In this paper, a semianalytical computation of quasihalo orbits in the circular restricted three-body problem by means of an ad hoc Lindstedt-Poincaré method is presented.
Abstract: Quasihalo orbits are Lissajous trajectories librating about the well known halo orbits. The main feature of these orbits is that they keep an exclusion zone in the same way that halo orbits do. As a result, the knowledge of this type of orbit gives more flexibility to the mission analysis design about collinear libration points of any pair of primaries in the solar system. This paper is devoted to the semianalytical computation of quasihalo orbits in the circular restricted three-body problem by means of an ad hoc Lindstedt-Poincaré method. The study of the practical convergence of the procedure and the extension of the orbits to suitable locations in the solar system using Jet Propulsion Laboratory (JPL) ephemerides is also discussed.
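For reference, the equations being expanded are the circular restricted three-body equations in the usual rotating, normalized frame (standard form; the Lindstedt-Poincaré procedure then seeks trigonometric series solutions about the collinear point, correcting the frequencies order by order):

```latex
% CR3BP equations of motion in the rotating synodic frame (standard form):
\ddot{x} - 2\dot{y} = \frac{\partial \Omega}{\partial x}, \qquad
\ddot{y} + 2\dot{x} = \frac{\partial \Omega}{\partial y}, \qquad
\ddot{z} = \frac{\partial \Omega}{\partial z},
\qquad
\Omega = \tfrac{1}{2}\,(x^{2} + y^{2}) + \frac{1-\mu}{r_{1}} + \frac{\mu}{r_{2}}
```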

Journal ArticleDOI
TL;DR: An architecture is developed for a product master model that federates CAD systems with downstream application processes for the different feature views that are part of the design process; the design respects the need of commercial CAD systems to maintain proprietary information that must not be disclosed in the master model.
Abstract: We develop an architecture for a product master model that federates CAD systems with downstream application processes for different feature views that are part of the design process. The architecture addresses especially the need to make persistent associations of design information with net shape elements. Moreover, the design respects the need of commercial CAD systems (and of downstream applications) to maintain proprietary information that must not be disclosed in the master model. Two case studies consider the requirements on the master model architecture for geometric dimensioning and tolerancing, and for manufacturing process planning using NC machining. We discuss how to reconcile the associated feature views and how to update them under net shape redesign. The case studies indicate that many design changes that arise from these downstream views can be formalized by a well-defined problem on dimensional and geometric constraints.

Journal ArticleDOI
TL;DR: In this paper, the chemistry of North African dust reaching NE Spain with red rains is described to characterize the red dust elemental composition, analyze the relative contribution of dissolved and particulate forms to the total element inputs for the period 1983-1994, and study the role of the particulate and dissolved inputs in red rains to the forest nutrient cycle.

Journal ArticleDOI
TL;DR: In this paper, a new inverse synthetic aperture radar (ISAR) algorithm is proposed for radar cross-section imaging and measurement from scattered fields; based on a spherical-wave near-field illumination of the target, it overcomes the requirement for an expensive compact range facility to produce a plane wave illumination.
Abstract: The paper presents a new inverse synthetic aperture radar (ISAR) algorithm intended for radar cross-section (RCS) imaging and measurement from scattered fields. The method, based on a spherical-wave near-field illumination of the target, overcomes the requirement for an expensive compact range facility to produce a plane wave illumination. The formulation and the implementation of the algorithm are described. Some experimental results obtained in an anechoic chamber are presented to show RCS results similar to the conventional plane wave methods.

Journal ArticleDOI
TL;DR: In this paper, the colour changes found across the wall thickness of calcareous ceramics are analyzed, and the main colour change from creamy to reddish is attributed to the incorporation of Fe3+ in several calcium silicates and aluminosilicates developed during firing.

Journal ArticleDOI
01 Jun 1998
TL;DR: The basis of the shape memory effect lies in the fact that materials exhibiting such a property undergo a thermoelastic martensitic transformation.
Abstract: The shape memory alloys exhibit a number of remarkable properties, which open new possibilities in engineering and more specifically in biomedical engineering. The most important alloy used in biomedical applications is NiTi. This alloy combines the characteristics of the shape memory effect and superelasticity with excellent corrosion resistance, wear characteristics, mechanical properties and a good biocompatibility. These properties make it an ideal biological engineering material, especially in orthopaedic surgery and orthodontics. In this work the basis of the memory effect lies in the fact that the materials exhibiting such a property undergo a thermoelastic martensitic transformation. In order to understand even the most elementary engineering aspects of the shape memory effect it is necessary to review some basic principles of the formation and the characteristics of the martensitic phase. The different properties of shape memory, superelasticity, two-way shape memory, rubber-like behaviour and a high damping capacity are reviewed. Some applications proposed in recent years are described and classified according to different medical fields.

Journal ArticleDOI
TL;DR: This in vitro culture system demonstrates the use of human bone marrow cells for the potential evaluation of new biomaterials and the development of a novel carbonated apatite that may be of potential use in orthopaedic implants.

Journal ArticleDOI
TL;DR: The interferometric radiometer uncertainty principle is presented; it quantifies the relationship between radiometric sensitivity and angular resolution.
Abstract: This paper is concerned with the radiometric sensitivity computation of an aperture synthesis interferometric radiometer devoted to Earth observation. The impact of system parameters and the use of simultaneous redundant measurements are analyzed. The interferometric radiometer uncertainty principle is presented; it quantifies the relationship between radiometric sensitivity and angular resolution.

Journal ArticleDOI
TL;DR: In this article, the anomalous behavior of the Avrami exponent in the primary crystallization of amorphous alloys leading to nanostructured materials is considered, and a kinetic model able to adequately treat such phase transformation is formulated by means of the implementation of a soft-impingement diffusion mechanism after a transient interface controlled growth.
Abstract: The anomalous behaviour of the Avrami exponents found in the primary crystallization of amorphous alloys leading to nanostructured materials is considered. A kinetic model able to adequately treat such phase transformation has been formulated by means of the implementation of a soft-impingement diffusion mechanism after a transient interface controlled growth. A decrease in the nucleation rate as crystallization proceeds has also been considered. Comparison of the model with experimental data is performed, giving excellent agreement. The soft-impingement diffusion mechanism is demonstrated to be responsible for the anomalous behaviour of the Avrami exponent, the decreasing nucleation rate being a second-order effect.
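The anomaly in question is the local Avrami exponent, n(t) = d ln[−ln(1 − x)] / d ln t, drifting below its nominal value as soft impingement sets in. A minimal sketch of how that exponent is extracted from transformed-fraction data (synthetic ideal-JMAK data here, not the paper's measurements):

```python
# Local Avrami exponent n(t) = d ln(-ln(1 - x)) / d ln t from synthetic data.
# For ideal JMAK kinetics the exponent is constant; soft impingement in
# primary crystallization makes it drift downward instead.
import numpy as np

t = np.linspace(1.0, 100.0, 400)        # time, arbitrary units
x = 1.0 - np.exp(-(0.05 * t) ** 2.5)    # ideal JMAK curve with n = 2.5
mask = (x > 1e-4) & (x < 0.999)         # avoid the log singularities

y = np.log(-np.log(1.0 - x[mask]))
n_local = np.gradient(y, np.log(t[mask]))
print(n_local.min(), n_local.max())      # ~2.5 throughout for ideal kinetics
```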

Journal ArticleDOI
TL;DR: A series of aliphatic poly(ester amide)s derived from 1,6-hexanediol, glycine, and diacids with a variable number of methylenes (from 2 to 8) have been synthesized and characterized.
Abstract: A series of aliphatic poly(ester amide)s derived from 1,6-hexanediol, glycine, and diacids with a variable number of methylenes (from 2 to 8) have been synthesized and characterized. Infrared spectroscopy shows that the studied polymers present a unique kind of hydrogen bond that is established between their amide groups. Thermal properties as melting, glass transition, and decomposition temperatures are reported. The data indicate that all the polymers are highly crystalline. Thus, different kinds of spherulites (positive and/or negative) were obtained depending on the preparation conditions and on the polymer samples. Moreover, all the polymers crystallized from dilute diol solutions as ribbonlike crystals where a regular folding habit and a single hydrogen bond direction could be deduced. A test of enzymatic hydrolysis was employed to assess the potential biodegradability of these polymers. © 1998 John Wiley & Sons, Inc. J Polym Sci A: Polym Chem 36: 1271–1282, 1998

Journal ArticleDOI
TL;DR: A method of image analysis is proposed for detection of local defects in materials with periodic regular texture by applying a multiscale and multiorientation Gabor filter scheme that imitates the early human vision process to the sample under inspection.
Abstract: A method of image analysis is proposed for detection of local defects in materials with periodic regular texture. A general improvement and enlargement of vision system capabilities for versatility, full automatism, computational efficiency, and robustness in their application to the industrial inspection of periodic textured materials is pursued. In the proposed method, a multiscale and multiorientation Gabor filter scheme that imitates the early human vision process is applied to the sample under inspection. The designed algorithm automatically segments defects from the regular texture. A variety of examples of fabric inspection are presented. In all of them defects are successfully segmented from the texture background. © 1998 Society of Photo-Optical Instrumentation Engineers. (S0091-3286(98)01408-1)
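A multiscale, multiorientation Gabor bank of the kind described takes only a few lines with scikit-image; a hedged sketch of the general technique (synthetic texture, ad hoc threshold), not the authors' algorithm:

```python
# Multiscale, multiorientation Gabor filtering of a periodic texture;
# pixels whose filter-bank energy deviates from the background are flagged.
# Illustrative sketch only: synthetic texture and an ad hoc threshold.
import numpy as np
from skimage.filters import gabor

yy, xx = np.mgrid[0:128, 0:128]
img = np.sin(2 * np.pi * xx / 8.0)      # regular periodic texture
img[60:68, 60:68] = 0.0                 # local defect: texture wiped out

energy = np.zeros_like(img)
for frequency in (1 / 4.0, 1 / 8.0, 1 / 16.0):               # scales
    for theta in (0.0, np.pi / 4, np.pi / 2, 3 * np.pi / 4):  # orientations
        real, imag = gabor(img, frequency=frequency, theta=theta)
        energy += real**2 + imag**2

defect_mask = np.abs(energy - np.median(energy)) > 3.0 * energy.std()
print(defect_mask.sum(), "pixels flagged")
```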

Journal ArticleDOI
TL;DR: In this article, a new model is developed as a combination of the classical "hole" and "pipe" models for the calculation of gas releases in distribution systems at medium and low pressures.

Journal ArticleDOI
TL;DR: In this article, the authors consider the semi-implicit version of a fractional step finite element method for compressible flows and show that no boundary conditions at all need to be imposed in the first step where an intermediate momentum is computed.
Abstract: In this paper we consider some particular aspects related to the semi-implicit version of a fractional step finite element method for compressible flows that we have developed recently. The first is the imposition of boundary conditions. We show that no boundary conditions at all need to be imposed in the first step, where an intermediate momentum is computed. This allows us to impose the real boundary conditions for the pressure, a point that turns out to be very important for compressible flows. The main difficulty of the semi-implicit form of the scheme arises in the solution of the continuity equation, since it involves both the density and the pressure. These two variables can be related through the equation of state, which in turn introduces the temperature as a variable in many cases. We discuss here the choice of variables (pressure or density) and some strategies to solve the continuity equation. The final point that we study is the behaviour of the scheme in the incompressible limit. It is shown that the method has an inherent pressure dissipation that allows us to reach this limit without having to satisfy the classical compatibility conditions for the interpolation of the velocity and the pressure. © 1998 John Wiley & Sons, Ltd. Int. J. Numer. Meth. Fluids, 27: 13–32 (1998)
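In schematic form, a generic fractional-step splitting of this kind (written for orientation; the paper's exact discrete operators differ) first advances an intermediate momentum with no pressure gradient and then corrects it:

```latex
% Generic semi-implicit fractional-step splitting (schematic form only):
\frac{\tilde{\mathbf{m}} - \mathbf{m}^{n}}{\Delta t}
  = -\,\nabla\cdot(\mathbf{m}\otimes\mathbf{u})^{n}
    + \nabla\cdot\boldsymbol{\tau}^{n},
\qquad
\frac{\mathbf{m}^{n+1} - \tilde{\mathbf{m}}}{\Delta t} = -\,\nabla p^{n+1}
```

The end-of-step momentum is then substituted into the continuity equation, and the equation of state $\rho = \rho(p, T)$ closes the system in either pressure or density; because the first step carries no pressure term, no boundary condition is needed for the intermediate momentum, which is why the physical pressure conditions can be imposed in the second step.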