Showing papers by "Rensselaer Polytechnic Institute" published in 2003


Journal ArticleDOI
TL;DR: The objective of this web server is to provide easy access to RNA and DNA folding and hybridization software to the scientific community at large by making use of universally available web GUIs (Graphical User Interfaces).
Abstract: The abbreviated name, 'mfold web server', describes a number of closely related software applications available on the World Wide Web (WWW) for the prediction of the secondary structure of single-stranded nucleic acids. The objective of this web server is to provide easy access to RNA and DNA folding and hybridization software to the scientific community at large. By making use of universally available web GUIs (Graphical User Interfaces), the server circumvents the problem of portability of this software. Detailed output, in the form of structure plots with or without reliability information, single-strand frequency plots and 'energy dot plots', is available for the folding of single sequences. A variety of 'bulk' servers give less information, but in a shorter time and for up to hundreds of sequences at once. The portal for the mfold web server is http://www.bioinfo.rpi.edu/applications/mfold. This URL will be referred to as 'MFOLDROOT'.

12,535 citations


Journal ArticleDOI
TL;DR: In this article, the authors present a conceptual framework to define seismic resilience of communities and quantitative measures of resilience that can be useful for a coordinated research effort focusing on enhancing this resilience.
Abstract: This paper presents a conceptual framework to define seismic resilience of communities and quantitative measures of resilience that can be useful for a coordinated research effort focusing on enhancing this resilience. This framework relies on the complementary measures of resilience: ‘‘Reduced failure probabilities,’’ ‘‘Reduced consequences from failures,’’ and ‘‘Reduced time to recovery.’’ The framework also includes quantitative measures of the ‘‘ends’’ of robustness and rapidity, and the ‘‘means’’ of resourcefulness and redundancy, and integrates those measures into the four dimensions of community resilience—technical, organizational, social, and economic—all of which can be used to quantify measures of resilience for various types of physical and organizational systems. Systems diagrams then establish the tasks required to achieve these objectives. This framework can be useful in future research to determine the resiliency of different units of analysis and systems, and to develop resiliency targets and detailed analytical procedures to generate these values. [DOI: 10.1193/1.1623497]

3,399 citations


Journal ArticleDOI
TL;DR: These findings indicate that heat transport in a nanotube composite material will be limited by the exceptionally small interface thermal conductance and that the thermal conductivity of the composite will be much lower than the value estimated from the intrinsic thermal conductivities of the nanotubes and their volume fraction.
Abstract: The enormous amount of basic research into carbon nanotubes has sparked interest in the potential applications of these novel materials. One promising use of carbon nanotubes is as fillers in a composite material to improve mechanical behaviour, electrical transport and thermal transport. For composite materials with high thermal conductivity, the thermal conductance across the nanotube-matrix interface is of particular interest. Here we use picosecond transient absorption to measure the interface thermal conductance (G) of carbon nanotubes suspended in surfactant micelles in water. Classical molecular dynamics simulations of heat transfer from a carbon nanotube to a model hydrocarbon liquid are in agreement with experiment. Our findings indicate that heat transport in a nanotube composite material will be limited by the exceptionally small interface thermal conductance (G approximately 12 MW m(-2) K(-1)) and that the thermal conductivity of the composite will be much lower than the value estimated from the intrinsic thermal conductivity of the nanotubes and their volume fraction.
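
For context, the interface (Kapitza) conductance G quoted above relates the heat flux crossing the nanotube-matrix interface to the temperature drop across it. The relation below is the standard definition from interfacial heat-transfer theory, not a formula quoted from the paper:

\[ q = G\,\Delta T \]

where q is the heat flux per unit interface area and ΔT is the temperature discontinuity at the interface; a small G (here roughly 12 MW m⁻² K⁻¹) means a sizeable temperature drop is needed to push heat from the nanotube into the surrounding matrix.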

1,066 citations


Journal ArticleDOI
TL;DR: The Sloan Digital Sky Survey (SDSS) has validated and made publicly available its First Data Release as discussed by the authors, which consists of 2099 deg² of five-band (u, g, r, i, z) imaging data, 186,240 spectra of galaxies, quasars, stars and calibrating blank sky patches selected over 1360 deg² of this area.
Abstract: The Sloan Digital Sky Survey (SDSS) has validated and made publicly available its First Data Release. This consists of 2099 deg² of five-band (u, g, r, i, z) imaging data, 186,240 spectra of galaxies, quasars, stars and calibrating blank sky patches selected over 1360 deg² of this area, and tables of measured parameters from these data. The imaging data go to a depth of r ≈ 22.6 and are photometrically and astrometrically calibrated to 2% rms and 100 mas rms per coordinate, respectively. The spectra cover the range 3800–9200 Å, with a resolution of 1800–2100. This paper describes the characteristics of the data with emphasis on improvements since the release of commissioning data (the SDSS Early Data Release) and serves as a pointer to extensive published and on-line documentation of the survey.

948 citations


Journal ArticleDOI
10 Jul 2003 - Nature
TL;DR: The fabrication and successful testing of ionization microsensors featuring the electrical breakdown of a range of gases and gas mixtures at carbon nanotube tips are reported, enabling compact, battery-powered and safe operation of such sensors.
Abstract: Gas sensors operate by a variety of fundamentally different mechanisms [1–14]. Ionization sensors [13, 14] work by fingerprinting the ionization characteristics of distinct gases, but they are limited by their huge, bulky architecture, high power consumption and risky high-voltage operation. Here we report the fabrication and successful testing of ionization microsensors featuring the electrical breakdown of a range of gases and gas mixtures at carbon nanotube tips. The sharp tips of nanotubes generate very high electric fields at relatively low voltages, lowering breakdown voltages several-fold in comparison to traditional electrodes, and thereby enabling compact, battery-powered and safe operation of such sensors. The sensors show good sensitivity and selectivity, and are unaffected by extraneous factors such as temperature, humidity, and gas flow. As such, the devices offer several practical advantages over previously reported nanotube sensor systems [15–17]. The simple, low-cost sensors described here could be deployed for a variety of applications, such as environmental monitoring, sensing in chemical processing plants, and gas detection for counter-terrorism.

925 citations


Journal ArticleDOI
TL;DR: In this paper, the authors investigated the hypothesis that the higher entrepreneurs' social competence (their ability to interact effectively with others, based on discrete social skills), the greater their financial success.

856 citations


Journal ArticleDOI
01 Jan 2003 - Carbon
TL;DR: In this paper, the individual and competitive adsorption capacities of Pb²⁺, Cu²⁺ and Cd²⁺ by nitric acid-treated multiwalled carbon nanotubes (CNTs) were studied.

850 citations


PatentDOI
24 Feb 2003 - Science
TL;DR: In this article, long, macroscopic nanotube strands or cables, up to several tens of centimeters in length, of aligned single-walled nanotubes are synthesized by the catalytic pyrolysis of n-hexane using an enhanced vertical floating catalyst CVD technique.
Abstract: Long, macroscopic nanotube strands or cables, up to several tens of centimeters in length, of aligned single-walled nanotubes are synthesized by the catalytic pyrolysis of n-hexane using an enhanced vertical floating catalyst CVD technique. The long strands of nanotubes assemble continuously from ropes or arrays of nanotubes, which are intrinsically long. These directly synthesized long nanotube strands or cables can be easily manipulated using macroscopic tools.

847 citations


Journal ArticleDOI
TL;DR: In this article, the authors analyze the university-industry technology transfer (UITT) process and its outcomes based on 98 structured interviews of key UITT stakeholders (i.e., university administrators, academic and industry scientists, business managers, and entrepreneurs) at five research universities in two regions of the US.

764 citations


Journal ArticleDOI
TL;DR: This paper presents a new formulation of the normal constraint (NC) method that incorporates a critical linear mapping of the design objectives, which has the desirable property that the resulting performance of the method is entirely independent of the design objective scales.
Abstract: The authors recently proposed the normal constraint (NC) method for generating a set of evenly spaced solutions on a Pareto frontier for multiobjective optimization problems. Since few methods offer this desirable characteristic, the new method can be of significant practical use in the choice of an optimal solution in a multiobjective setting. This paper's specific contribution is two-fold. First, it presents a new formulation of the NC method that incorporates a critical linear mapping of the design objectives. This mapping has the desirable property that the resulting performance of the method is entirely independent of the design objective scales. We address here the fact that scaling issues can pose formidable difficulties. Second, the notion of a Pareto filter is presented and an algorithm thereof is developed. As its name suggests, a Pareto filter is an algorithm that retains only the global Pareto points, given a set of points in objective space. As is explained in the paper, the Pareto filter is useful in the application of the NC and other methods. Numerical examples are provided.
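
As a companion to the Pareto-filter idea above, here is a minimal Python sketch of such a filter. It illustrates the general concept the abstract describes (retain only points not dominated by any other point in objective space); the function name, the NumPy-based implementation, and the assumption that every objective is minimized are choices made for this example, not the authors' code.

import numpy as np

def pareto_filter(points):
    """Return only the globally non-dominated (Pareto) points.

    points: (n, m) array-like with one row per candidate solution and
    one column per objective; all objectives are assumed to be minimized.
    """
    points = np.asarray(points, dtype=float)
    keep = np.ones(len(points), dtype=bool)
    for i, p in enumerate(points):
        # p is dominated if some other point is <= in every objective
        # and strictly < in at least one.
        dominates_p = np.all(points <= p, axis=1) & np.any(points < p, axis=1)
        dominates_p[i] = False
        if dominates_p.any():
            keep[i] = False
    return points[keep]

# Example: (1, 2) and (2, 1) are Pareto points; (2, 2) is dominated by both.
print(pareto_filter([[1, 2], [2, 1], [2, 2]]))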

745 citations


Journal ArticleDOI
TL;DR: In this paper, the authors argue that the higher entrepreneurs score on a number of distinct individual-difference dimensions (e.g., self-efficacy, ability to recognize opportunities, personal perseverance, human and social capital, superior social skills), the closer the person-entrepreneurship fit and, consequently, the greater the likelihood or magnitude of their success.

Proceedings ArticleDOI
24 Aug 2003
TL;DR: This paper presents a novel vertical data representation called Diffset, that only keeps track of differences in the tids of a candidate pattern from its generating frequent patterns, and shows that diffsets drastically cut down the size of memory required to store intermediate results.
Abstract: A number of vertical mining algorithms have been proposed recently for association mining, and they have been shown to be very effective, usually outperforming horizontal approaches. The main advantage of the vertical format is support for fast frequency counting via intersection operations on transaction ids (tids) and automatic pruning of irrelevant data. The main problem with these approaches arises when intermediate results of vertical tid lists become too large for memory, thus affecting the algorithm's scalability. In this paper we present a novel vertical data representation called Diffset, which keeps track only of differences in the tids of a candidate pattern from its generating frequent patterns. We show that diffsets drastically cut down the size of memory required to store intermediate results. We show how diffsets, when incorporated into previous vertical mining methods, increase performance significantly.
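
To make the diffset idea concrete, here is a minimal Python sketch of the underlying set arithmetic. It is an illustration of the relationship described above (a diffset holds the tids present in a prefix pattern but absent from its extension, so support can be updated by subtraction rather than intersection); the toy transactions and helper names are assumptions for this example, not the authors' data structures.

# Toy transaction database: tid -> set of items (not from the paper).
transactions = {
    1: {"a", "b", "c"},
    2: {"a", "c"},
    3: {"a", "b"},
    4: {"b", "c"},
}

def tidset(itemset):
    """Tids of transactions containing every item in the itemset."""
    return {tid for tid, items in transactions.items() if itemset <= items}

# Diffset of pattern 'ab' relative to its prefix 'a': tids that contain
# 'a' but not 'b'.  Support then follows by subtraction.
t_a = tidset({"a"})           # {1, 2, 3}
t_ab = tidset({"a", "b"})     # {1, 3}
diffset_ab = t_a - t_ab       # {2}

support_a = len(t_a)
support_ab = support_a - len(diffset_ab)   # 3 - 1 = 2
print(diffset_ab, support_ab)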

Journal ArticleDOI
TL;DR: The authors analyze the experiences and development of the Hungarian banking sector during the transition from a centralized economy to a market-oriented system and identify that early reorganization initiatives, flexible approaches to privatization, and liberal policies towards foreign banks' involvement with the domestic institutions helped to build a relatively stable and increasingly efficient banking system.
Abstract: The paper analyzes the experiences and development of the Hungarian banking sector during the transition from a centralized economy to a market-oriented system. The paper identifies that early reorganization initiatives, flexible approaches to privatization, and liberal policies towards foreign banks' involvement with the domestic institutions helped to build a relatively stable and increasingly efficient banking system. Foreign banks and banks with greater foreign ownership involvement were associated with lower inefficiency.

Journal ArticleDOI
01 Mar 2003 - Wear
TL;DR: In this article, a solid-lubricant composite material was made by compression molding PTFE and 40 nm alumina particles mixed using a jet-milling apparatus, and it was tested against a polished stainless steel counterface on a reciprocating tribometer.

Proceedings ArticleDOI
01 Jun 2003
TL;DR: This work considers arbitrary networks and random networks in which nodes are assumed to be static, and also analyzes hybrid beamform patterns that mix omnidirectional and directional modes as a better model of real directional antennas.
Abstract: The capacity of ad hoc wireless networks is constrained by the interference between concurrent transmissions from neighboring nodes. Gupta and Kumar have shown that the capacity of an ad hoc network does not scale well with the increasing number of nodes in the system when using omnidirectional antennas [6]. We investigate the capacity of ad hoc wireless networks using directional antennas. In this work, we consider arbitrary networks and random networks where nodes are assumed to be static. In arbitrary networks, due to the reduction of the interference area, the capacity gain is proven to be √(2π/α) when using directional transmission and omni reception. Because of the reduced probability of two neighbors pointing to each other, the capacity gain is √(2π/β) when omni transmission and directional reception are used. Although these two expressions look similar, the proof technique is different. By taking advantage of the above two approaches, the capacity gain is 2π/√(αβ) when both transmission and reception are directional. For random networks, interfering neighbors are reduced due to the decrease of the interference area when directional antennas are used for transmission and/or reception. The throughput improvement factor is 2π/α, 2π/β and 4π²/(αβ) for directional transmission/omni reception, omni transmission/directional reception, and directional transmission/directional reception, respectively. We have also analyzed hybrid beamform patterns that are a mix of omnidirectional/directional and a better model of real directional antennas.

Journal ArticleDOI
TL;DR: Gold nanoparticles were selectively attached to chemically functionalized surface sites on nitrogen-doped carbon (CNx) nanotubes by electrostatic interaction between carboxyl groups on the chemically oxidized nanotube surface and polyelectrolyte chains as discussed by the authors.
Abstract: Gold nanoparticles were selectively attached to chemically functionalized surface sites on nitrogen-doped carbon (CNx) nanotubes. A cationic polyelectrolyte was adsorbed on the surface of the nanotubes by electrostatic interaction between carboxyl groups on the chemically oxidized nanotube surface and polyelectrolyte chains. Negatively charged 10 nm gold nanoparticles from a gold colloid suspension were subsequently anchored to the surface of the nanotubes through the electrostatic interaction between the polyelectrolyte and the nanoparticles. This approach provides an efficient method to attach other nanostructures to carbon nanotubes and can also serve as an illustrative way to detect the functional groups on carbon nanotube surfaces.

Journal ArticleDOI
TL;DR: In this article, an overview of the findings to date from laboratory measurements of diffusion of cations and oxygen in zircon is presented, with important implications for isotopic dating, interpretation of stable-isotope ratios, closure temperatures, and formation and preservation of primary chemical composition and zoning in zircon.
Abstract: Despite its low abundance in most rocks, zircon is extraordinarily useful in interpreting crustal histories. The importance of U-Th-Pb isotopic dating of zircon is well and long established (Davis et al., this volume; Parrish et al., this volume). Zircon also tends to incorporate trace elements useful as geochemical tracers, such as the REE, Y, and Hf. A number of characteristics of zircon encourage the preservation of internal isotopic and chemical variations, often on extremely fine scale, which provide valuable insight into thermal histories and past geochemical environments. The relative insolubility of zircon in crustal melts and fluids, as well as its general resistance to chemical and physical breakdown, often result in the existence of several generations of geochemical information in a single zircon grain. The fact that this information is so frequently retained (as evidenced through backscattered electron or cathodoluminescence imaging that often reveal fine-scale zoning down to the sub-micron scale) has long suggested that diffusion of most elements is quite sluggish in zircon. In this chapter, we present an overview of the findings to date from laboratory measurements of diffusion of cations and oxygen in zircon. Because of its importance as a geochronometer, attempts have been made to measure diffusion (especially of Pb) for over 30 years. But only in the last decade or so have profiling techniques with adequate depth resolution been employed in these studies, resulting in a plethora of new diffusion data. These findings have important implications for isotopic dating, interpretation of stable-isotope ratios, closure temperatures, and formation and preservation of primary chemical composition and zoning in zircon. Efforts have been made for some time to quantify and characterize diffusion in zircon, most notably of Pb, in deference to its significance in interpreting Pb isotopic signatures and refining understanding of thermal histories. As is evident from …
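
Laboratory diffusion data of the kind surveyed in this chapter are conventionally reported as Arrhenius parameters; the expression below is that standard form (general practice in diffusion studies, not a result specific to this chapter):

\[ D = D_0 \exp\!\left(-\frac{E_a}{RT}\right) \]

where D is the diffusion coefficient, D₀ the pre-exponential factor, Eₐ the activation energy, R the gas constant, and T the absolute temperature; the sluggish diffusion inferred for most elements in zircon corresponds to very small D at crustal temperatures.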

Journal Article
TL;DR: The method constructs a series of sparse linear SVMs to generate linear models that can generalize well, and uses a subset of nonzero weighted variables found by the linear models to produce a final nonlinear model.
Abstract: We describe a methodology for performing variable ranking and selection using support vector machines (SVMs). The method constructs a series of sparse linear SVMs to generate linear models that can generalize well, and uses a subset of nonzero weighted variables found by the linear models to produce a final nonlinear model. The method exploits the fact that a linear SVM (no kernels) with l1-norm regularization inherently performs variable selection as a side-effect of minimizing capacity of the SVM model. The distribution of the linear model weights provides a mechanism for ranking and interpreting the effects of variables. Starplots are used to visualize the magnitude and variance of the weights for each variable. We illustrate the effectiveness of the methodology on synthetic data, benchmark problems, and challenging regression problems in drug design. This method can dramatically reduce the number of variables and outperforms SVMs trained using all attributes and using the attributes selected according to correlation coefficients. The visualization of the resulting models is useful for understanding the role of underlying variables.
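
As an illustration of the core idea above (an l1-penalized linear SVM whose zero weights discard variables and whose nonzero weights rank them), here is a brief scikit-learn sketch. The synthetic data, the regularization constant, and the use of LinearSVC are choices made for this example; they are not the authors' code or settings.

import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 20))                 # 20 candidate variables
y = (X[:, 0] + 0.5 * X[:, 3] > 0).astype(int)  # only two are informative

X = StandardScaler().fit_transform(X)

# The l1 penalty drives most weights exactly to zero (sparse linear SVM).
svm = LinearSVC(penalty="l1", dual=False, C=0.1, max_iter=10000).fit(X, y)
w = svm.coef_.ravel()

selected = np.flatnonzero(w != 0)
ranking = selected[np.argsort(-np.abs(w[selected]))]
print("selected variables:", selected)
print("ranked by |weight|: ", ranking)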

Journal ArticleDOI
TL;DR: The most commonly used static mixers are described and compared in this article, and the key parameters needed for the selection of a suitable mixer are highlighted, as well as their respective advantages and limitations.
Abstract: This paper summarizes the field of static mixers, including recent improvements and applications to industrial processes. The most commonly used static mixers are described and compared, and their respective advantages and limitations are emphasized. Efficiencies of static mixers are compared based both on theory and on experimental results from the literature. The operations that can benefit from the use of static mixers are explored, namely, mixing of miscible fluids, liquid–liquid and gas–liquid interface generation, liquid–solid dispersion and heat transfer. Design parameters governing the performance of the various mixers in these applications are reported. The key parameters needed for the selection of a suitable mixer are highlighted.

Journal ArticleDOI
TL;DR: In this article, the authors present evidence for a ring of stars in the plane of the Milky Way, extending at least from l = 180° to 227° with turnoff magnitude g ~ 19.5; the ring could encircle the Galaxy.
Abstract: We present evidence for a ring of stars in the plane of the Milky Way, extending at least from l = 180° to 227° with turnoff magnitude g ~ 19.5; the ring could encircle the Galaxy. We infer that the low Galactic latitude structure is at a fairly constant distance of R = 18 ± 2 kpc from the Galactic center above the Galactic plane and has R = 20 ± 2 kpc in the region sampled below the Galactic plane. The evidence includes 500 Sloan Digital Sky Survey spectroscopic radial velocities of stars within 30° of the plane. The velocity dispersion of the stars associated with this structure is found to be 27 km s⁻¹ at (l, b) = (198°, -27°), 22 km s⁻¹ at (l, b) = (225°, 28°), 30 km s⁻¹ at (l, b) = (188°, 24°), and 30 km s⁻¹ at (l, b) = (182°, 27°). The structure rotates in the same prograde direction as the Galactic disk stars but with a circular velocity of 110 ± 25 km s⁻¹. The narrow measured velocity dispersion is inconsistent with power-law spheroid or thick-disk populations. We compare the velocity dispersion in this structure with the velocity dispersion of stars in the Sagittarius dwarf galaxy tidal stream, for which we measure a velocity dispersion of 20 km s⁻¹ at (l, b) = (165°, -55°). We estimate a preliminary metallicity from the Ca II (K) line and color of the turnoff stars of [Fe/H] = -1.6 with a dispersion of 0.3 dex and note that the turnoff color is consistent with that of the spheroid population. We interpret our measurements as evidence for a tidally disrupted satellite of 2 × 10⁷ to 5 × 10⁸ M☉ that rings the Galaxy.

Proceedings ArticleDOI
TL;DR: In this article, it is shown that these embedding methods are equivalent to a lowpass filtering of histograms that is quantified by a decrease in the HCF center of mass (COM), which is exploited in known scheme detection to classify unaltered and spread spectrum images using a bivariate classifier.
Abstract: The process of information hiding is modeled in the context of additive noise. Under an independence assumption, the histogram of the stegomessage is a convolution of the noise probability mass function (PMF) and the original histogram. In the frequency domain this convolution is viewed as a multiplication of the histogram characteristic function (HCF) and the noise characteristic function. Least significant bit, spread spectrum, and DCT hiding methods for images are analyzed in this framework. It is shown that these embedding methods are equivalent to a lowpass filtering of histograms that is quantified by a decrease in the HCF center of mass (COM). These decreases are exploited in a known-scheme detection to classify unaltered and spread spectrum images using a bivariate classifier. Finally, a blind detection scheme is built that uses only statistics from unaltered images. By calculating the Mahalanobis distance from a test COM to the training distribution, a threshold is used to identify steganographic images. At an embedding rate of 1 b.p.p., greater than 95% of the stegoimages are detected with a false alarm rate of 5%.
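
A minimal sketch of the center-of-mass statistic described above, written in Python with NumPy; the variable names, the one-sided spectrum, and the synthetic test images are assumptions for this example, and the paper should be consulted for the exact histogram and normalization conventions.

import numpy as np

def hcf_center_of_mass(image, levels=256):
    """Center of mass (COM) of the histogram characteristic function.

    The HCF is the DFT of the pixel-value histogram; the COM is the
    |HCF|-weighted mean frequency index over the first half-spectrum.
    """
    hist, _ = np.histogram(image.ravel(), bins=levels, range=(0, levels))
    hcf = np.abs(np.fft.fft(hist))[: levels // 2]   # one-sided magnitude
    k = np.arange(len(hcf))
    return (k * hcf).sum() / hcf.sum()

# Per the paper's model, additive-noise embedding low-pass filters the
# histogram, so the stego COM tends to sit below the cover COM (a natural
# image shows this more clearly than this synthetic example).
rng = np.random.default_rng(1)
cover = rng.integers(0, 256, size=(128, 128))
stego = np.clip(cover + rng.integers(-1, 2, size=cover.shape), 0, 255)
print(hcf_center_of_mass(cover), hcf_center_of_mass(stego))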

Proceedings ArticleDOI
09 Jul 2003
TL;DR: This paper presents a decentralized scheme that organizes the MSNs into an appropriate overlay structure that is particularly beneficial for real-time applications and iteratively modifies the overlay tree using localized transformations to adapt with changing distribution of MSNs, clients, as well as network conditions.
Abstract: This paper presents an overlay architecture where service providers deploy a set of service nodes (called MSNs) in the network to efficiently implement media-streaming applications. These MSNs are organized into an overlay and act as application-layer multicast forwarding entities for a set of clients. We present a decentralized scheme that organizes the MSNs into an appropriate overlay structure that is particularly beneficial for real-time applications. We formulate our optimization criterion as a "degree-constrained minimum average-latency problem" which is known to be NP-hard. A key feature of this formulation is that it gives a dynamic priority to different MSNs based on the size of its service set. Our proposed approach iteratively modifies the overlay tree using localized transformations to adapt with changing distribution of MSNs, clients, as well as network conditions. We show that a centralized greedy approach to this problem does not perform quite as well, while our distributed iterative scheme efficiently converges to near-optimal solutions.

Journal ArticleDOI
TL;DR: OligoArray 2.0 is a program that designs specific oligonucleotides at the genomic scale and makes it feasible to perform expression analysis on a genomic scale for any organism for which the genome sequence is known, without relying on cDNA or oligonucleotide libraries.
Abstract: There is a substantial interest in implementing bioinformatics technologies that allow the design of oligonucleotides to support the development of microarrays made from short synthetic DNA fragments spotted or in situ synthesized on slides. Ideally, such oligonucleotides should be totally specific to their respective targets to avoid any cross-hybridization and should not form stable secondary structures that may interfere with the labeled probes during hybridization. We have developed OligoArray 2.0, a program that designs specific oligonucleotides at the genomic scale. It uses a thermodynamic approach to predict secondary structures and to calculate the specificity of targets on chips for a unique probe in a mixture of labeled probes. Furthermore, OligoArray 2.0 can adjust the oligonucleotide length, according to user input, to fit a narrow Tm range compatible with hybridization requirements. Combined with on-chip oligonucleotide synthesis, this program makes it feasible to perform expression analysis on a genomic scale for any organism for which the genome sequence is known, without relying on cDNA or oligonucleotide libraries. OligoArray 2.0 was used to design 75,764 oligonucleotides representing 26,140 transcripts from Arabidopsis thaliana. Among this set, we provide at least one specific oligonucleotide for 93% of these transcripts.
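
To illustrate the kind of Tm-window screening mentioned above, here is a small Python sketch. It uses a crude GC-count approximation to the melting temperature instead of the nearest-neighbor thermodynamic model that OligoArray 2.0 actually employs, so the formula, window, and candidate sequences are placeholders for this example only.

def approx_tm(seq):
    """Rough melting-temperature estimate from GC content (a simple
    stand-in for a true nearest-neighbor thermodynamic calculation)."""
    seq = seq.upper()
    gc = sum(base in "GC" for base in seq)
    return 64.9 + 41.0 * (gc - 16.4) / len(seq)

def in_tm_window(candidates, tm_low=65.0, tm_high=75.0):
    """Keep only candidate oligos whose estimated Tm falls in the window."""
    return [s for s in candidates if tm_low <= approx_tm(s) <= tm_high]

candidates = [
    "ATGCGTACGTTAGCCGTAGCATCGATCGTAGCTAGCTAGG",  # mixed GC content
    "AT" * 20,                                    # too AT-rich
    "GC" * 20,                                    # too GC-rich
]
print(in_tm_window(candidates))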

Journal ArticleDOI
Bernhard Mecking, G. S. Adams, S. Ahmad, E. Anciant, and 171 more authors (27 institutions)
TL;DR: The CEBAF Large Acceptance Spectrometer (CLAS) as mentioned in this paper is used to study photo- and electro-induced nuclear and hadronic reactions by providing efficient detection of neutral and charged particles over a good fraction of the full solid angle.
Abstract: The CEBAF large acceptance spectrometer (CLAS) is used to study photo- and electro-induced nuclear and hadronic reactions by providing efficient detection of neutral and charged particles over a good fraction of the full solid angle. A collaboration of about 30 institutions has designed, assembled, and commissioned CLAS in Hall B at the Thomas Jefferson National Accelerator Facility. The CLAS detector is based on a novel six-coil toroidal magnet which provides a largely azimuthal field distribution. Trajectory reconstruction using drift chambers results in a momentum resolution of 0.5% at forward angles. Cherenkov counters, time-of-flight scintillators, and electromagnetic calorimeters provide good particle identification. Fast triggering and high data-acquisition rates allow operation at a luminosity of 10³⁴ nucleon cm⁻² s⁻¹. These capabilities are being used in a broad experimental program to study the structure and interactions of mesons, nucleons, and nuclei using polarized and unpolarized electron and photon beams and targets. This paper is a comprehensive and general description of the design, construction and performance of CLAS.

Journal ArticleDOI
TL;DR: Experimental diffusion couples were made by juxtaposing powders of a natural basalt and a natural rhyolite and then annealing them in a piston cylinder apparatus for times ranging from 0.1 to 15.7 h, temperatures of 1350-1450°C, and pressures of 1.2-1.3 GPa as mentioned in this paper.

Journal ArticleDOI
TL;DR: In this paper, the authors used cointegration analysis to test the Environmental Kuznets Curve (EKC) hypothesis using a panel dataset of sulfur emissions and GDP data for 74 countries over a span of 31 years.
Abstract: The Environmental Kuznets Curve (EKC) hypothesis - an inverted U-shape relation between various indicators of environmental degradation and income per capita - has become one of the 'stylised facts' of environmental and resource economics. This is despite considerable criticism on both theoretical and empirical grounds. Cointegration analysis can be used to test the validity of such stylised facts when the data involved contain stochastic trends. In the present paper, we use cointegration analysis to test the EKC hypothesis using a panel dataset of sulfur emissions and GDP data for 74 countries over a span of 31 years. We find that the data are stochastically trending in the time-series dimension. Given this, and interpreting the EKC as a long-run equilibrium relationship, support for the hypothesis requires that an appropriate model cointegrates and that sulfur emissions are a concave function of income. Individual and panel cointegration tests cast doubt on the general applicability of the hypothesised relationship. Even when we find cointegration, many of the relationships for individual countries are not concave. The results show that the EKC is a problematic concept, at least in the case of sulfur emissions.
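
For readers unfamiliar with the reduced form being tested, the EKC is usually specified as a quadratic (sometimes cubic) relation in log income per capita; the equation below is the generic textbook specification, offered as background rather than the exact model estimated in this paper:

\[ \ln\!\left(\frac{E_{it}}{P_{it}}\right) = \alpha_i + \gamma_t + \beta_1 \ln\!\left(\frac{\mathrm{GDP}_{it}}{P_{it}}\right) + \beta_2 \left[\ln\!\left(\frac{\mathrm{GDP}_{it}}{P_{it}}\right)\right]^2 + \varepsilon_{it} \]

where E is emissions and P population for country i in year t, with country effects α_i and time effects γ_t. An inverted-U (EKC) shape requires β₁ > 0 and β₂ < 0, and treating the EKC as a long-run equilibrium relationship amounts to asking whether ε_it is stationary, i.e. whether the series cointegrate.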

Journal ArticleDOI
TL;DR: In this paper, the authors investigate the role of the top-down nature of Swedish policies in promoting the commercialization of university-generated knowledge and argue that the limited commercialization observed is likely due in part to an academic environment that discourages academics from actively participating in the commercialization of their ideas.

01 Jun 2003
TL;DR: In this article, the properties of inorganic LEDs, including emission spectra, electrical characteristics, and current-flow patterns, are discussed, as well as the packaging of low-power and high-power LED dies.
Abstract: Inorganic semiconductor light-emitting diodes (LEDs) are environmentally benign and have found widespread use as indicator lights, mobile displays, large-area displays, signage applications, and lighting applications. The entire visible spectrum can be covered by light-emitting semiconductors: AlGaInP and AlGaInN compound semiconductors are capable of emission in the red-to-yellow wavelength range and violet-to-green wavelength range, respectively. For white light sources based on LEDs, the most common approach is the combination of a blue LED chip with a yellow phosphor. Alternatively, a group of red, green, and blue (RGB) LEDs can be used; such a source allows for color tunability. White LEDs are currently used to replace incandescent and fluorescent sources. In this review, the properties of inorganic LEDs will be presented, including emission spectra, electrical characteristics, and current-flow patterns. Structures providing high internal quantum efficiency, namely heterostructures and multiple quantum well structures, will be discussed. Advanced techniques enhancing the external quantum efficiency will be reviewed, including die shaping (chip shaping) and surface roughening. Different approaches to white LEDs will be presented and figures-of-merit such as the color rendering index and luminous efficacy will be explained. Besides visible LEDs, the technical challenges of newly evolving deep ultraviolet (UV) LEDs will be introduced. Finally, the packaging of low-power and high-power LED dies will be discussed.
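
As background for the figures-of-merit mentioned above, the luminous efficacy of radiation weights the emitted optical power spectrum by the eye-sensitivity function; the definition below is standard photometry rather than a formula specific to this review:

\[ K = 683\,\frac{\mathrm{lm}}{\mathrm{W}} \; \frac{\int V(\lambda)\,P(\lambda)\,d\lambda}{\int P(\lambda)\,d\lambda} \]

where V(λ) is the CIE eye sensitivity function and P(λ) the spectral power density of the source; a source concentrated near 555 nm approaches the 683 lm/W limit, while broad or deep-red/violet emission lowers K.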

Journal ArticleDOI
TL;DR: A contingency theory for closed-loop supply chains that incorporate product recovery can be reexamined, and the generalizability of the approach to similar problems and applications can be assessed.

Journal ArticleDOI
TL;DR: In registering retinal image pairs, Dual-Bootstrap ICP is initialized by automatically matching individual vascular landmarks, and it aligns images based on detected blood vessel centerlines, and the resulting quadratic transformations are accurate to less than a pixel.
Abstract: Motivated by the problem of retinal image registration, this paper introduces and analyzes a new registration algorithm called Dual-Bootstrap Iterative Closest Point (Dual-Bootstrap ICP). The approach is to start from one or more initial, low-order estimates that are only accurate in small image regions, called bootstrap regions. In each bootstrap region, the algorithm iteratively: 1) refines the transformation estimate using constraints only from within the bootstrap region; 2) expands the bootstrap region; and 3) tests to see if a higher order transformation model can be used, stopping when the region expands to cover the overlap between images. Steps 2) and 3), the bootstrap steps, are governed by the covariance matrix of the estimated transformation. Estimation refinement [Step 1)] uses a novel robust version of the ICP algorithm. In registering retinal image pairs, Dual-Bootstrap ICP is initialized by automatically matching individual vascular landmarks, and it aligns images based on detected blood vessel centerlines. The resulting quadratic transformations are accurate to less than a pixel. On tests involving approximately 6000 image pairs, it successfully registered 99.5% of the pairs containing at least one common landmark, and 100% of the pairs containing at least one common landmark and at least 35% image overlap.