
Showing papers by "Tampere University of Technology" published in 2002


Book ChapterDOI
01 Jan 2002
TL;DR: In this article, the basic operations of these filter banks are considered and the requirements are stated for alias-free, perfect-reconstruction (PR), and nearly perfect reconstruction (NPR) filter banks.
Abstract: The outline of this chapter is as follows. Section 2 reviews various types of existing finite impulse response (FIR) and infinite impulse response (IIR) two-channel filter banks. The basic operations of these filter banks are considered and the requirements are stated for alias-free, perfect-reconstruction (PR), and nearly perfect-reconstruction (NPR) filter banks. Some efficient synthesis techniques are also referenced. Furthermore, examples are included to compare various two-channel filter banks with each other. Section 3 concentrates on the design of multi-channel (M-channel) uniform filter banks. The main emphasis is on designing these banks as tree structures built from two-channel filter banks, and on generating the overall bank from a single prototype filter using a proper cosine-modulation or MDFT technique. In Section 4, it is shown how octave filter banks can be generated using a single two-channel filter bank as the basic building block. The relations between frequency-selective octave filter banks and discrete-time wavelet banks are also briefly discussed. Finally, concluding remarks are given in Section 5.

1,598 citations


Journal ArticleDOI
TL;DR: As the filler content increased, the failure strain decreased due to a reduction in the amount of ductile polymer present and the ultimate tensile strength (UTS) decreased because of agglomeration and void formation at higher filler content.

273 citations


Proceedings ArticleDOI
13 May 2002
TL;DR: This paper addresses the problem of computational auditory scene recognition and describes methods to classify auditory scenes into predefined classes using band-energy ratio features with 1-NN classifier and Mel-frequency cepstral coefficients with Gaussian mixture models.
Abstract: In this paper, we address the problem of computational auditory scene recognition and describe methods to classify auditory scenes into predefined classes. By auditory scene recognition we mean recognition of an environment using audio information only. The auditory scenes comprised tens of everyday outside and inside environments, such as streets, restaurants, offices, family homes, and cars. Two completely different but almost equally effective classification systems were used: band-energy ratio features with 1-NN classifier and Mel-frequency cepstral coefficients with Gaussian mixture models. The best obtained recognition rate for 17 different scenes out of 26 and for an analysis duration of 30 seconds was 68.4%. For comparison, the recognition accuracy of humans was 70% for 25 different scenes and the average response time was around 20 seconds. The efficiency of different acoustic features and the effect of test sequence length were studied.
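The simpler of the two classifiers described above, band-energy ratios with a 1-NN classifier, can be sketched as follows. This is a minimal illustration, not the authors' implementation: the equal-width band split and the Euclidean distance are simplifying assumptions.

```python
import math

def band_energy_ratios(spectrum, n_bands=4):
    """Split a magnitude spectrum into equal sub-bands and return
    each band's share of the total energy (a crude spectral-shape feature)."""
    band_len = len(spectrum) // n_bands
    energies = []
    for b in range(n_bands):
        band = spectrum[b * band_len:(b + 1) * band_len]
        energies.append(sum(x * x for x in band))
    total = sum(energies) or 1.0
    return [e / total for e in energies]

def nn_classify(query, training_set):
    """1-NN: return the label of the training feature vector closest
    to the query vector in Euclidean distance."""
    best_label, best_dist = None, float("inf")
    for label, feats in training_set:
        d = math.dist(feats, query)
        if d < best_dist:
            best_label, best_dist = label, d
    return best_label
```

A scene recording would contribute one labelled feature vector per training example; at test time the query scene takes the label of its nearest neighbour.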

252 citations


Proceedings Article
01 Jan 2002
TL;DR: An application of gray level co-occurrence matrix (GLCM) to texture-based similarity evaluation of rock images could reduce the cost of geological investigations by allowing improved accuracy in automatic rock sample selection.
Abstract: Nowadays, as computational power increases, the role of automatic visual inspection becomes more important, and visual quality control has also gained in popularity. This paper presents an application of the gray level co-occurrence matrix (GLCM) to texture-based similarity evaluation of rock images. Retrieval results were evaluated for two databases, one consisting of the whole images and the other of blocks obtained by splitting the original images. Retrieval results for both databases were obtained by calculating the distance between the feature vector of the query image and the other feature vectors in the database. The performance of the co-occurrence matrices was also compared to that of Gabor wavelet features. Co-occurrence matrices performed better for the given rock image dataset. This similarity evaluation application could reduce the cost of geological investigations by allowing improved accuracy in automatic rock sample selection.
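The GLCM features behind this kind of similarity evaluation can be sketched in a few lines. This is the generic textbook construction, not the paper's exact configuration: the offset, the number of gray levels, and the chosen Haralick-style descriptors are all assumptions.

```python
def glcm(image, dx=1, dy=0, levels=4):
    """Gray level co-occurrence matrix: how often gray level j occurs at
    offset (dx, dy) from gray level i, normalised to sum to 1."""
    m = [[0.0] * levels for _ in range(levels)]
    h, w = len(image), len(image[0])
    pairs = 0
    for y in range(h):
        for x in range(w):
            y2, x2 = y + dy, x + dx
            if 0 <= y2 < h and 0 <= x2 < w:
                m[image[y][x]][image[y2][x2]] += 1
                pairs += 1
    return [[v / pairs for v in row] for row in m]

def texture_features(p):
    """Two classic Haralick-style descriptors from a normalised GLCM:
    contrast (local intensity variation) and energy (uniformity)."""
    n = len(p)
    contrast = sum(p[i][j] * (i - j) ** 2 for i in range(n) for j in range(n))
    energy = sum(p[i][j] ** 2 for i in range(n) for j in range(n))
    return [contrast, energy]
```

Similarity between two images is then a distance between their feature vectors, exactly as the retrieval step in the abstract describes.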

224 citations


Journal ArticleDOI
TL;DR: A dedicated study into the formation of new particles, New Particle Formation and Fate in the Coastal Environment (PARFORCE), was conducted from 1998 to 1999 at the Mace Head Atmospheric Research Station on the western coast of Ireland.
Abstract: A dedicated study into the formation of new particles, New Particle Formation and Fate in the Coastal Environment (PARFORCE), was conducted from 1998 to 1999 at the Mace Head Atmospheric Research Station on the western coast of Ireland. Continuous measurements of new particle formation were taken over the 2-year period, while two intensive field campaigns were also conducted, one in September 1998 and the other in June 1999. New particle events were observed on ∼90% of days and occurred throughout the year and in all air mass types. These events typically lasted a few hours, with some lasting more than 8 hours, and occurred during daylight hours coinciding with low tide and exposed shorelines. During these events, peak aerosol concentrations often exceeded 10⁶ cm⁻³ under clean air conditions, while measured formation rates of detectable particle sizes (i.e., d > 3 nm) were of the order of 10⁴–10⁵ cm⁻³ s⁻¹. Nucleation rates of new particles were estimated to be, at least, of the order of 10⁵–10⁶ cm⁻³ s⁻¹ and occurred for sulphuric acid concentrations above 2 × 10⁶ molecules cm⁻³; however, no correlation existed between peak sulphuric acid concentrations, low tide occurrence, or nucleation events. Ternary nucleation theory of the H2SO4-H2O-NH3 system predicts that nucleation rates far in excess of 10⁶ cm⁻³ s⁻¹ can readily occur for the given sulphuric acid concentrations; however, aerosol growth modeling studies predict that there is insufficient sulphuric acid to grow new particles (of ∼1 nm in size) into detectable sizes of 3 nm. Hygroscopic growth factor analysis of recently formed 8-nm particles illustrates that these particles must comprise some species significantly less soluble than sulphate aerosol.
The nucleation-mode hygroscopic data, combined with the lack of detectable VOC emissions from coastal biota, the strong emission of biogenic halocarbon species, and the fingerprinting of iodine in recently formed (7 nm) particles, suggest that the most likely species driving the growth of new particles to detectable sizes is an iodine oxide, as suggested by previous laboratory experiments. It remains an open question whether nucleation is driven by self-nucleation of iodine species or a halocarbon derivative, or whether stable clusters are first formed through ternary nucleation of sulphuric acid, ammonia, and water vapor, followed by growth into detectable sizes by condensation of iodine species. Airborne measurements confirm that nucleation occurs all along the coastline and that the coastal biogenic aerosol plume can extend many hundreds of kilometers away from the source. During the evolution of the coastal plume, particle growth is observed up to radiatively active sizes of 100 nm. Modeling studies of the yield of cloud condensation nuclei suggest that the cloud condensation nuclei population can increase by ∼100%. Given that the production of new particles from coastal biogenic sources occurs at least all along the western coast of Europe, and possibly many other coastlines, it is suggested that coastal aerosols contribute significantly to the natural background aerosol population.

208 citations


Journal ArticleDOI
TL;DR: In this paper, the authors investigated the effect of air temperature on labour productivity in telecommunication offices and found that productivity may fall by 5-7% at the elevated indoor temperatures.

204 citations


Journal ArticleDOI
TL;DR: In this article, the formation and growth of new particles has been evaluated using a revised version of a simple, but novel, theoretical tool, which can be used to estimate the concentration of condensable vapors and their source rates using the aerosol condensation sink together with measured particle growth rate.
Abstract: The formation and growth of new particles has been evaluated using a revised version of a simple, but novel, theoretical tool. The concentration of condensable vapors and their source rates has been estimated using the aerosol condensation sink together with the measured particle growth rate. Also, by adding the coagulation sink and the measured formation rate of 3 nm particles, the formation rate of 1 nm particles and their concentration can be estimated. Condensation and coagulation sinks can be obtained from ambient aerosol size distribution data. The method has been applied to analyze the particle formation and growth rates observed during coastal and boreal forest nucleation events. The condensation sinks are typically 4–7 × 10⁻³ s⁻¹ in the forest and 2 × 10⁻³ s⁻¹ under coastal conditions, while the coagulation sinks for 1, 2, and 3 nm particles are typically smaller by factors 1.5–2, 5–7, and 11–15, respectively. The measured growth rates are 2–10 nm/h for the boreal forest and range from 15 to 180 nm/h at the coast, corresponding to vapor concentrations of 2–13 × 10⁷ cm⁻³ and 10⁸ to 10⁹ cm⁻³, respectively. The vapor source rate was 1–2 × 10⁵ cm⁻³ s⁻¹ in the boreal forest and 2–5 × 10⁶ cm⁻³ s⁻¹ in the coastal environment. The estimated formation rate of 1 nm particles in the forest environment was 8–20 cm⁻³ s⁻¹ and 300–10,000 cm⁻³ s⁻¹ at the coast. The concentration of 1 nm particles was estimated to be 2000–5000 and 4 × 10⁴–7 × 10⁶ particles cm⁻³ in the forest and at the coast, respectively.
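As a rough consistency check of the numbers above, the measured growth rate can be converted to a vapor concentration by assuming a linear proportionality of about 1.3 × 10⁷ cm⁻³ per nm/h (a typical value for a sulphuric-acid-like vapor; the constant is an assumption, not taken from the paper), and a steady-state source rate follows from balancing that concentration against the condensation sink:

```python
# Assumed proportionality between growth rate and condensing-vapor
# concentration, typical for a sulphuric-acid-like vapor (assumption,
# not a value quoted in the paper).
CONC_PER_GROWTH = 1.3e7  # cm^-3 per (nm/h)

def vapor_concentration(growth_rate_nm_per_h):
    """Vapor concentration implied by an observed particle growth rate."""
    return growth_rate_nm_per_h * CONC_PER_GROWTH

def vapor_source_rate(concentration, condensation_sink):
    """Steady state: the source Q balances the loss of vapor to the
    pre-existing aerosol, Q = C * CS  (cm^-3 s^-1)."""
    return concentration * condensation_sink
```

For the boreal forest values quoted above (growth rate 2 nm/h, condensation sink ~5 × 10⁻³ s⁻¹) this gives a vapor concentration of ~2.6 × 10⁷ cm⁻³ and a source rate of ~1.3 × 10⁵ cm⁻³ s⁻¹, consistent with the ranges reported in the abstract.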

196 citations


Proceedings ArticleDOI
11 Mar 2002
TL;DR: A set of object-oriented metrics in terms of their usefulness in predicting fault-proneness, an important software quality indicator, are empirically validated using two data analysis techniques: regression analysis and discriminant analysis.
Abstract: Software quality is an important external software attribute that is difficult to measure objectively. In this case study, we empirically validate a set of object-oriented metrics in terms of their usefulness in predicting fault-proneness, an important software quality indicator. We use a set of ten software product metrics that relate to the following software attributes: the size of the software, coupling, cohesion, inheritance, and reuse. Eight hypotheses on the correlations of the metrics with fault-proneness are given. These hypotheses are empirically tested in a case study, in which the client side of a large network service management system is studied. The subject system is written in Java and consists of 123 classes. The validation is carried out using two data analysis techniques: regression analysis and discriminant analysis.

184 citations


Journal ArticleDOI
TL;DR: In this article, a conceptual analysis of the role of tacit knowledge in the early stages of the innovation process of small technology enterprises is presented, suggesting that tacit knowledge can play an important role in the initial stages of innovation processes.

175 citations


Journal ArticleDOI
TL;DR: High-performance size exclusion chromatography (HPSEC) proved to be a fast and relatively easy method to estimate NOM content in water; it gave more information than traditional methods on the type of NOM in a water sample and supported process performance follow-up.

155 citations


Journal ArticleDOI
TL;DR: In this paper, a new model for ion-induced nucleation and charged aerosol dynamics is presented, which is able to produce a considerable amount of new particles if the preexisting particle concentration is sufficiently low.
Abstract: A new model for ion-induced nucleation and charged aerosol dynamics is presented in this paper. It was found that ion-induced nucleation is able to produce a considerable amount of new particles if the preexisting particle concentration is sufficiently low. Also, when only positive or negative ions nucleate, a large number of particles at observable sizes is produced. It was also found that there can be continuous nucleation at particle sizes below the detection limit of most commonly used aerosol instruments at low temperatures and high preexisting particle concentrations. In some simulated conditions, fair agreement with observed particle formation events in a boreal forest environment was achieved. According to the results, in certain situations ion-induced nucleation changes the charge distribution of the particles, which may allow the observation of ion-induced nucleation in atmospheric conditions.

Journal ArticleDOI
TL;DR: In this paper, the authors focused on two-party relationships and found that there is a wide gap between supplier side quality of cost information and customer side expectations, and that only two out of seven suppliers are ready for open book accounting.

Journal ArticleDOI
TL;DR: It was shown that utilisation of selected plants accelerates removal of diesel fuel in soil and may serve as a viable, low-cost remedial technology for diesel-contaminated soils in subarctic regions.

Journal ArticleDOI
TL;DR: An empirically tested cost model of industrial maintenance has been used with data collected from more than 400 companies operating in various industries and revealed clear causalities between certain variables and key figures.

Journal ArticleDOI
TL;DR: In this paper, block copolymers poly(N-isopropylacrylamide)-block-poly(ethylene oxide), PNIPA-b-PEO, were synthesized by free radical polymerization using macroazoinitiators bearing PEO chains with two different chain lengths (Mw = 550 or 1900 g/mol).
Abstract: Block copolymers poly(N-isopropylacrylamide)-block-poly(ethylene oxide), PNIPA-b-PEO, were synthesized by free radical polymerization using macroazoinitiators bearing PEO chains with two different chain lengths (Mw = 550 or 1900 g/mol). The molar mass of the PNIPA block varied from 0.4 × 10⁵ to 7.3 × 10⁵ g/mol. The cloud points of the aqueous copolymer solutions shifted to slightly higher temperatures only with the samples having the longer PEO block. Above the lower critical solution temperature (LCST) of PNIPA the block copolymers formed aggregates with a spherical core–shell structure sterically stabilized by a PEO shell. The formation and especially the shape of the aggregates were influenced by the length of the PNIPA block, the molar ratio of the repeating units of PNIPA and PEO, and the polymer concentration. A fluorescent probe 4-(dicyanomethylene)-2-methyl-6-(p-(dimethylamino)styryl)-4H-pyran, 4HP, was localized inside the polymer differently depending on the method of sample preparation. The mic...

Journal ArticleDOI
TL;DR: The photoinduced electron transfer in differently linked zinc porphyrin-fullerene dyads and their free-base porphyrsin analogues was studied in polar and nonpolar solvents with femto- to nanosecond absorption and emission spectroscopies.
Abstract: The photoinduced electron transfer in differently linked zinc porphyrin-fullerene dyads and their free-base porphyrin analogues was studied in polar and nonpolar solvents with femto- to nanosecond absorption and emission spectroscopies. A new intermediate state, different from the locally excited (LE) chromophores and the complete charge-separated (CCS) state, was observed. It was identified as an exciplex. The exciplex preceded the CCS state in polar benzonitrile and the excited singlet state of fullerene in nonpolar toluene. The behavior of the dyads was modeled by using a common kinetic scheme involving equilibria between the exciplex and LE chromophores. The scheme is suitable for all the studied porphyrin-fullerene compounds. The rates of reaction steps depended on the type of linkage between the moieties. The scheme and Marcus theory were applied to calculate electronic couplings for sequential reactions, and consistent results were obtained.
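The electronic couplings mentioned above are typically extracted from measured transfer rates via the standard nonadiabatic Marcus expression, quoted here in its textbook form (not reproduced from the paper itself):

```latex
k_{ET} = \frac{2\pi}{\hbar}\,|V|^{2}\,\frac{1}{\sqrt{4\pi\lambda k_{B}T}}\,
\exp\!\left[-\frac{(\Delta G^{\circ}+\lambda)^{2}}{4\lambda k_{B}T}\right]
```

Here V is the electronic coupling between donor and acceptor, λ the reorganization energy, and ΔG° the driving force; fitting measured rates for each reaction step yields the couplings for the differently linked dyads.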

Proceedings Article
01 Jan 2002
TL;DR: A system is described which measures the similarity of two arbitrary rhythmic patterns; it behaved consistently, assigning high similarity measures to similar musical rhythms even when they were performed using different sound sets.
Abstract: A system is described which measures the similarity of two arbitrary rhythmic patterns. The patterns are represented as acoustic signals, and are not assumed to have been performed with similar sound sets. Two novel methods are presented that constitute the algorithmic core of the system. First, a probabilistic musical meter estimation process is described, which segments a continuous musical signal into patterns. As a side-product, the method outputs tatum, tactus (beat), and measure lengths. A subsequent process performs the actual similarity measurements. Acoustic features are extracted which model the fluctuation of loudness and brightness within the pattern, and dynamic time warping is then applied to align the patterns to be compared. In simulations, the system behaved consistently by assigning high similarity measures to similar musical rhythms, even when performed using different sound sets.
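The dynamic time warping step used to align the patterns before comparison can be sketched with the standard textbook recursion over a local distance; this is a generic illustration, not the authors' exact feature pipeline.

```python
def dtw_distance(a, b, dist=lambda x, y: abs(x - y)):
    """Dynamic time warping: minimal cumulative cost of aligning two
    sequences, allowing local stretching/compression of the time axis."""
    inf = float("inf")
    n, m = len(a), len(b)
    cost = [[inf] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = dist(a[i - 1], b[j - 1])
            cost[i][j] = d + min(cost[i - 1][j],      # step in a only
                                 cost[i][j - 1],      # step in b only
                                 cost[i - 1][j - 1])  # step in both
    return cost[n][m]
```

In the system described above, the sequences would be per-frame loudness/brightness features of two patterns; a locally stretched copy of a rhythm aligns at zero cost.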

Journal ArticleDOI
TL;DR: It is shown that the cross-validation adjustment of the threshold significantly improves the algorithm accuracy and the adaptive transforms with the adjusted threshold parameter perform better than the adaptive wavelet estimators.
Abstract: We describe a novel approach to solve a problem of window size (bandwidth) selection for filtering an image signal given with a noise. The approach is based on the intersection of confidence intervals (ICI) rule and gives the algorithm, which is simple to implement and nearly optimal in the point-wise mean squared error risk. The local polynomial approximation (LPA) is used in order to derive the 2D transforms (filters) and demonstrate the efficiency of the approach. The ICI rule gives the adaptive varying window size and enables the algorithm to be spatially adaptive in the sense that its quality is close to that which one could achieve if the smoothness of the estimated signal was known in advance. Optimization of the threshold (design parameter of the ICI) is studied. It is shown that the cross-validation adjustment of the threshold significantly improves the algorithm accuracy. In particular, simulation demonstrates that the adaptive transforms with the adjusted threshold parameter perform better than the adaptive wavelet estimators.
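The ICI rule itself is compact enough to sketch. Given estimates of the same signal value from a sequence of growing windows, together with their standard deviations, the rule keeps enlarging the window while the confidence intervals still share a common point. This is a generic sketch; the threshold gamma is the design parameter whose cross-validation adjustment the abstract discusses.

```python
def ici_window(estimates, stds, gamma=2.0):
    """Intersection of confidence intervals (ICI) rule: for estimates from
    increasing window sizes, keep growing the window while the intervals
    [est - gamma*std, est + gamma*std] still intersect; return the index
    of the largest admissible window."""
    lo, hi = float("-inf"), float("inf")
    chosen = 0
    for k, (est, std) in enumerate(zip(estimates, stds)):
        lo = max(lo, est - gamma * std)
        hi = min(hi, est + gamma * std)
        if lo > hi:          # intervals no longer intersect: stop
            break
        chosen = k
    return chosen
```

Larger windows reduce variance (smaller std) but increase bias; the rule stops growing the window at the point where bias starts to dominate, which is what makes the estimator spatially adaptive.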

Journal ArticleDOI
TL;DR: The molecular study revealed that the process was dependent on a stable bacterial community with low species diversity, and Phenotypic dimorphism, low growth rate, and low-copy-number 16S rDNA genes were characteristic of strain MT1 and other MT1-like organisms isolated from the bioreactor.
Abstract: A high-rate fluidized-bed bioreactor has been treating polychlorophenol-contaminated groundwater in southern Finland at 5 to 8°C for over 6 years. We examined the microbial diversity of the bioreactor using three 16S ribosomal DNA (rDNA)-based methods: denaturing gradient gel electrophoresis, length heterogeneity-PCR analysis, and restriction fragment length polymorphism analysis. The molecular study revealed that the process was dependent on a stable bacterial community with low species diversity. The dominant organism, Novosphingobium sp. strain MT1, was isolated and characterized. Novosphingobium sp. strain MT1 degraded the main contaminants of the groundwater, 2,4,6-trichlorophenol, 2,3,4,6-tetrachlorophenol, and pentachlorophenol, at 8°C. The strain carried a homolog of the pcpB gene, coding for the pentachlorophenol-4-monooxygenase in Sphingobium chlorophenolicum. Spontaneous deletion of the pcpB gene homolog resulted in the loss of degradation ability. Phenotypic dimorphism (planktonic and sessile phenotypes), low growth rate (0.14 to 0.15 h⁻¹), and a single-copy 16S rDNA gene were characteristic of strain MT1 and other MT1-like organisms isolated from the bioreactor.

Journal ArticleDOI
TL;DR: The results of the study suggested that activity-based costing and process modeling provide a good starting point in heading toward more cost-conscious design.

Journal ArticleDOI
TL;DR: Water adsorbed in submonolayer coverage on Ag(111) at 70 K forms hydrogen-bonded networks and scanning tunneling spectroscopy indicates that the bond length within the two-dimensional hydrogen-bonded water layer is shortened.
Abstract: Water adsorbed in submonolayer coverage on Ag(111) at 70 K forms hydrogen-bonded networks. High resolution images in combination with calculation reveal that single protrusions represent a cyclic water hexamer with the intermolecular bond stretched to the silver lattice constant of 0.29 nm. Scanning tunneling spectroscopy indicates that the bond length within the two-dimensional hydrogen-bonded water layer is shortened. The spectra contain further information about the vibrational modes of water molecules.

Proceedings ArticleDOI
29 Oct 2002
TL;DR: The capabilities of the two most successful industrial-strength CASE-tools in reverse engineering the static structure of software systems are examined and compared to the results produced by two academic prototypes.
Abstract: Today, software-engineering research and industry alike recognize the need for practical tools to support reverse-engineering activities. Most of the well-known CASE tools support reverse engineering in some way. The Unified Modeling Language (UML) has emerged as the de facto standard for graphically representing the design of object-oriented software systems. However, there does not yet exist a standard scheme for representing the reverse-engineered models of these systems. The various CASE tools usually adopt proprietary extensions to UML and, as a result, it is difficult, or even impossible, to ensure that model semantics remains unambiguous when working with different tools at the same time. In this paper, we examine the capabilities of the two most successful industrial-strength CASE-tools in reverse engineering the static structure of software systems and compare them to the results produced by two academic prototypes. The comparisons are carried out both manually and automatically using a research prototype for manipulating and comparing UML models.

Journal ArticleDOI
TL;DR: In this paper, an online method for simultaneous size distribution and particle density measurement, based on parallel measurements made by SMPS and ELPI, is presented. The method is, however, not suitable for the measurement of aerosols with known density.

Journal ArticleDOI
02 Jan 2002
TL;DR: The design and implementation of a survival smart clothing prototype for the arctic environment is described, which provides communication, positioning, and navigation aids for the user and decides whether an emergency message should be sent.
Abstract: Continuous miniaturisation of electronic components has made it possible to create smaller and smaller electrical devices which can be worn and carried all the time. Together with developing fibre and textile technologies, this has enabled the creation of truly usable smart clothes that resemble clothes more than wearable computing equipment. These intelligent clothes are worn like ordinary clothing and provide help in various situations according to the application area. This paper describes the design and implementation of a survival smart clothing prototype for the arctic environment. Concept development, electrical design, and non-electrical features are discussed. The suit provides communication, positioning, and navigation aids for the user. Depending on the measurements of the human and the environment, the suit decides whether an emergency message should be sent. The user can control the system with a user interface called a Yo-Yo. The functionality of the suit has been tested in an arctic environment.

Journal ArticleDOI
TL;DR: In this paper, it was observed that both iodine and sulphur were present in the new particles with diameter below 10 nm, while the contribution of sulphate was significantly higher than iodine.
Abstract: Ultrafine particles sampled during new particle formation bursts observed in the coastal zone were studied with transmission electron microscopy (TEM) and elemental analysis using energy-dispersive X-ray (EDX) spectroscopy. It was observed that both iodine and sulphur were present in the new particles with diameters below 10 nm. Gaseous emissions of halogen compounds from seaweeds were also measured at the same location during low-tide particle nucleation episodes. Based on the presence of iodine in the particle phase during low-tide nucleation bursts, and the significant emission of iodine compounds from the seaweeds during these periods, it is apparent that part of the biogenic iodine species emitted from the seaweeds ends up in the ultrafine particulate phase. It was not possible to quantitatively determine the iodine content in the particles; however, in most cases the relative contributions from iodine and sulphate were similar, while some cases indicated no sulphate. In larger particles the contribution of sulphate was significantly higher than that of iodine. It appears that the condensable species leading to the appearance of new particles in the coastal atmosphere is an iodine species. Whether or not this iodine species also participates in the nucleation of new stable clusters could not be completely verified.

Journal ArticleDOI
TL;DR: A fuzzy rule-based control model for multipurpose real-time reservoir operation is constructed and a new, mathematically justified methodology for fuzzy inference—total fuzzy similarity—is used and compared with the more traditional Sugeno-style method.
Abstract: A fuzzy rule-based control model for multipurpose real-time reservoir operation is constructed. A new, mathematically justified methodology for fuzzy inference—total fuzzy similarity—is used and compared with the more traditional Sugeno-style method. Specifically, the seasonal variation in both hydrological variables and operational targets is examined by considering the inputs as season-dependent relative values, instead of using absolute values. The inference drawn in several stages allows a simple, accessible model structure. The control model is illustrated using Lake Paijanne, a regulated lake in Finland. The model is calibrated to simulate the actual operation, but also to better fulfill the new multipurpose operational objectives determined by experts. Relatively similar results obtained with the inference methods and the strong mathematical background of total fuzzy similarity put fuzzy reasoning on a solid foundation.
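A zero-order Sugeno-style inference step, one of the two methods compared above, can be sketched as follows. This is a generic toy example: the triangular membership functions and rule outputs are invented for illustration and have nothing to do with Lake Paijanne's actual rule base.

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a to b and falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def sugeno(x, rules):
    """Zero-order Sugeno inference: each rule is (membership_fn, output);
    the crisp output is the firing-strength-weighted average."""
    weights = [mf(x) for mf, _ in rules]
    total = sum(weights)
    if total == 0:
        return 0.0
    return sum(w * out for w, (_, out) in zip(weights, rules)) / total
```

With rules such as "if level is low, release 10", "normal, release 20", "high, release 40", an input between two membership functions yields a smoothly interpolated release.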

Journal ArticleDOI
TL;DR: In order to generalize the class of MRP-type priors, the standard median was replaced by other order statistic operations, the L and finite-impulse-response median hybrid (FMH) filters, which allow for a smoother appearance as they apply linear weighting together with robust nonlinear operations.
Abstract: Penalized iterative algorithms for image reconstruction in emission tomography contain conditions on which kinds of images are accepted as solutions. The penalty term has commonly been a function of pairwise pixel differences in the activity in a local neighborhood, such that smooth images are favored. Attempts to ensure better edge and detail preservation involve difficult tailoring of parameter values or the penalty function itself. The previously introduced median root prior (MRP) favors locally monotonic images. MRP preserves sharp edges while reducing locally nonmonotonic noise at the same time. The quantitative properties of MRP are good, because differences in neighboring pixel values are not penalized as such. The median is used as an estimate for a penalty reference, against which the pixel value is compared when setting the penalty. In order to generalize the class of MRP-type priors, the standard median was replaced by other order statistic operations, the L and finite-impulse-response median hybrid (FMH) filters. They allow for a smoother appearance as they apply linear weighting together with robust nonlinear operations. The images reconstructed using the new MRP-L and MRP-FMH priors are visually more conventional. The good quantitative properties of MRP are not significantly altered by the new priors.
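The MRP idea, penalizing a pixel only by its deviation from the local median, can be sketched as a single penalty pass. This is a simplified illustration of the prior's effect, not the full penalized iterative reconstruction; beta and the 3x3 neighbourhood are assumed parameters.

```python
def median3x3(img, y, x):
    """Median of the 3x3 neighbourhood, with image edges clamped."""
    h, w = len(img), len(img[0])
    vals = [img[min(max(y + dy, 0), h - 1)][min(max(x + dx, 0), w - 1)]
            for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
    return sorted(vals)[4]

def mrp_penalty(img, beta=0.3):
    """One MRP-style penalty pass: each pixel is pulled toward the median
    of its neighbourhood, so locally monotonic structures (edges, ramps)
    pass unpenalized while isolated noise spikes are damped."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            med = median3x3(img, y, x) or 1.0   # guard against zero median
            out[y][x] = img[y][x] / (1.0 + beta * (img[y][x] - med) / med)
    return out
```

A flat region is a fixed point of the pass (pixel equals its median, so the penalty factor is 1), while a lone spike is divided down toward the local median; replacing `median3x3` with an L or FMH filter gives the generalized priors described above.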

Journal ArticleDOI
TL;DR: In this article, high temperature corrosion tests were performed on one ferritic boiler steel, one austenitic boiler steel and five high velocity oxy-fuel (HVOF) coatings, one laser-melted HVOF coating, and one diffusion chromized steel.
Abstract: Unacceptably high corrosion rates are often experienced when chlorine-containing fuels are combusted. Reducing conditions that may occur in various boilers accelerate corrosion even further. Protective oxide scales are not formed on low-alloy steels if the partial pressure of oxygen is too low. Materials rich in oxide formers, such as chromium and aluminum, are needed to resist corrosion in reducing combustion atmospheres, but the processibility of such bulk alloys is very limited. Various coating technologies are considered as a potential solution for corrosion problems in high temperature combustion environments with low partial pressure of oxygen. High temperature corrosion tests were performed on one ferritic boiler steel, one austenitic boiler steel, five high velocity oxy-fuel (HVOF) coatings, one laser-melted HVOF coating, and one diffusion chromized steel. A synthetic atmosphere simulating reducing conditions in combustion of chlorine-containing fuels was created for the tests. The test atmosphere contained 500 ppm HCl, 600 ppm H2S, 20% H2O, 5% CO, and Ar as a balance. The test temperature was 550 °C and the test duration was 1000 h. The corrosion resistance of steels and homogeneous coatings was mainly determined by chromium content. Homogeneous and dense coatings with high chromium content performed well and were able to protect the substrate. Some of the HVOF coatings were attacked by corrosive species through an interconnected network of voids and oxides at splat boundaries.

Journal ArticleDOI
TL;DR: In this paper, the authors analyzed Fermi surface maps and spectral intensities from high-resolution ARPES measurements using mean-field Hartree-Fock and self-consistent renormalization computations within the framework of the one-band Hubbard model Hamiltonian.
Abstract: Fermi surface (FS) maps and spectral intensities obtained recently in Nd₂₋ₓCeₓCuO₄±δ via high-resolution ARPES measurements are analyzed using mean-field Hartree-Fock and self-consistent renormalization computations within the framework of the one-band t–t′–t″–U Hubbard model Hamiltonian. We show that the remarkable observed crossover of the FS from small to large sheets reflects a reduction in the value of the effective Hubbard U with increasing electron doping and the collapse of the correlation-induced Mott pseudogap just above optimal doping.

Journal ArticleDOI
TL;DR: An effort to present a practical total productivity measurement method with acceptable validity for the business unit level is described, based on relatively simple and commonly used partial productivity ratios which can be easily achieved and which are already widely used in industry.