
Showing papers by "Rensselaer Polytechnic Institute published in 2005"


Journal ArticleDOI
27 May 2005-Science
TL;DR: The high efficiency of solid-state sources already provides energy savings and environmental benefits in a number of applications, but these sources also offer controllability of their spectral power distribution, spatial distribution, color temperature, temporal modulation, and polarization properties.
Abstract: More than a century after the introduction of incandescent lighting and half a century after the introduction of fluorescent lighting, solid-state light sources are revolutionizing an increasing number of applications. Whereas the efficiency of conventional incandescent and fluorescent lights is limited by fundamental factors that cannot be overcome, the efficiency of solid-state sources is limited only by human creativity and imagination. The high efficiency of solid-state sources already provides energy savings and environmental benefits in a number of applications. However, solid-state sources also offer controllability of their spectral power distribution, spatial distribution, color temperature, temporal modulation, and polarization properties. Such "smart" light sources can adjust to specific environments and requirements, a property that could result in tremendous benefits in lighting, automobiles, transportation, communication, imaging, agriculture, and medicine.

3,164 citations


Journal ArticleDOI
TL;DR: In this paper, the authors present a systematic survey of the common processing steps and core decision rules in modern change detection algorithms, including significance and hypothesis testing, predictive models, the shading model, and background modeling.
Abstract: Detecting regions of change in multiple images of the same scene taken at different times is of widespread interest due to a large number of applications in diverse disciplines, including remote sensing, surveillance, medical diagnosis and treatment, civil infrastructure, and underwater sensing. This paper presents a systematic survey of the common processing steps and core decision rules in modern change detection algorithms, including significance and hypothesis testing, predictive models, the shading model, and background modeling. We also discuss important preprocessing methods, approaches to enforcing the consistency of the change mask, and principles for evaluating and comparing the performance of change detection algorithms. It is hoped that our classification of algorithms into a relatively small number of categories will provide useful guidance to the algorithm designer.
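
One of the core decision rules named above, per-pixel significance testing, is easy to illustrate. The sketch below is not from the survey; the 3-sigma threshold, the Gaussian no-change assumption, and the MAD-based noise estimate are illustrative choices.

```python
# Minimal sketch of a per-pixel hypothesis-testing change detector.
import numpy as np

def change_mask(img_t1, img_t2, alpha=3.0):
    """Flag pixels whose temporal difference is statistically significant.

    Assumes the no-change difference is zero-mean Gaussian; a pixel is
    marked changed when |difference| exceeds alpha noise standard deviations.
    """
    diff = img_t2.astype(np.float64) - img_t1.astype(np.float64)
    sigma = np.median(np.abs(diff)) / 0.6745  # robust noise estimate (MAD)
    return np.abs(diff) > alpha * sigma

# Toy usage: a bright square appears between the two frames.
a = np.zeros((64, 64))
b = a.copy(); b[20:30, 20:30] = 50.0
b += np.random.default_rng(0).normal(0, 1.0, b.shape)  # sensor noise
print(change_mask(a, b).sum(), "pixels flagged as changed")
```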

1,693 citations


Reference EntryDOI
15 Jul 2005
TL;DR: In this article, the properties of inorganic LEDs, including emission spectra, electrical characteristics, and current-flow patterns, are presented and the packaging of low power and high power LED dies is discussed.
Abstract: Inorganic semiconductor light-emitting diodes (LEDs) are environmentally benign and have already found widespread use as indicator lights, large-area displays, and signage applications. In addition, LEDs are very promising candidates for future energy-saving light sources suitable for office and home lighting applications. Today, the entire visible spectrum can be covered by light-emitting semiconductors: AlGaInP and AlGaInN compound semiconductors are capable of emission in the red to yellow wavelength range and ultraviolet (UV) to green wavelength range, respectively. Currently, two basic approaches exist for white light sources: The combination of one or more phosphorescent materials with a semiconductor LED and the use of multiple LEDs emitting at complementary wavelengths. Both approaches are suitable for high efficiency sources that have the potential to replace incandescent and fluorescent lights. In this article, the properties of inorganic LEDs will be presented, including emission spectra, electrical characteristics, and current-flow patterns. Structures providing high internal quantum efficiency, namely, heterostructures and multiple quantum well structures, will be discussed. Advanced techniques enhancing the external quantum efficiency will be reviewed, including resonant cavities, die shaping (chip shaping), omnidirectional reflectors, and photonic crystals. Different approaches to white LEDs will be presented and figures-of-merit such as the color rendering index, luminous efficacy, and luminous efficiency will be explained. Finally, the packaging of low power and high power LED dies will be discussed. Keywords: light-emitting diodes (LEDs); solid-state lighting; compound semiconductors; device physics; reflectors; resonant cavity LEDs; white LEDs; packaging
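
As a concrete illustration of one figure-of-merit mentioned above, the following sketch computes the luminous efficacy of radiation for a narrow-band LED spectrum. It uses a common Gaussian approximation to the photopic luminosity function V(lambda), not data from this article, and the spectra are synthetic.

```python
# Minimal sketch: luminous efficacy of radiation = 683 lm/W times the
# V(lambda)-weighted fraction of optical power.
import numpy as np

def luminous_efficacy(wavelength_nm, power):
    """Luminous efficacy of radiation (lm/W) for a spectral power array."""
    lam_um = wavelength_nm / 1000.0
    # Common Gaussian approximation to the photopic curve (lambda in um).
    V = 1.019 * np.exp(-285.4 * (lam_um - 0.559) ** 2)
    return 683.0 * np.trapz(V * power, wavelength_nm) / np.trapz(power, wavelength_nm)

# Toy usage: idealized green (530 nm) vs. red (630 nm) LED emission lines.
lam = np.linspace(380, 780, 801)
for peak in (530, 630):
    spectrum = np.exp(-0.5 * ((lam - peak) / 15.0) ** 2)  # ~35 nm FWHM line
    print(peak, "nm:", round(luminous_efficacy(lam, spectrum)), "lm/W")
```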

1,364 citations


Journal ArticleDOI
TL;DR: The results support the proposition that an organization's ability to use IT to support its core competencies is dependent on IS functional capabilities, which, in turn, are dependent on the nature of human, technology, and relationship resources of the IS department.
Abstract: We draw on the resource-based theory to examine how information systems (IS) resources and capabilities affect firm performance. A basic premise is that a firm's performance can be explained by how effective the firm is in using information technology (IT) to support and enhance its core competencies. In contrast to past studies that have implicitly assumed that IS assets could have direct effects on firm performance, this study draws from the resource complementarity arguments and posits that it is the targeted use of IS assets that is likely to be rent-yielding. We develop the theoretical underpinnings of this premise and propose a model that interrelates IS resources, IS capabilities, IT support for core competencies, and firm performance. The model is empirically tested using data collected from 129 firms in the United States. The results provide strong support for the research model and suggest that variation in firm performance is explained by the extent to which IT is used to support and enhance a firm's core competencies. The results also support our proposition that an organization's ability to use IT to support its core competencies is dependent on IS functional capabilities, which, in turn, are dependent on the nature of human, technology, and relationship resources of the IS department. These results are interpreted and the implications of this study for IS research and practice are discussed.

1,203 citations


Journal ArticleDOI
TL;DR: In this paper, the effects of ownership, especially by a strategic foreign owner, on bank efficiency for eleven transition countries in an unbalanced panel consisting of 225 banks and 856 observations were investigated.
Abstract: Using data from 1996 to 2000, we investigate the effects of ownership, especially by a strategic foreign owner, on bank efficiency for eleven transition countries in an unbalanced panel consisting of 225 banks and 856 observations. Applying stochastic frontier estimation procedures, we compute profit and cost efficiency taking account of both time and country effects directly. In second-stage regressions, we use the efficiency measures along with return on assets to investigate the influence of ownership type. With respect to the impact of ownership, we conclude that privatization by itself is not sufficient to increase bank efficiency as government-owned banks are not appreciably less efficient than domestic private banks. We find that foreign-owned banks are more cost-efficient than other banks and that they also provide better service, in particular if they have a strategic foreign owner. The remaining government-owned banks are less efficient in providing services, which is consistent with the hypothesis that the better banks were privatized first in transition countries.

926 citations


Journal ArticleDOI
TL;DR: The DINAMelt web server simulates the melting of one or two single-stranded nucleic acids in solution to predict not just a melting temperature for a hybridized pair of nucleic acids, but entire equilibrium melting profiles as a function of temperature.
Abstract: The DINAMelt web server simulates the melting of one or two single-stranded nucleic acids in solution. The goal is to predict not just a melting temperature for a hybridized pair of nucleic acids, but entire equilibrium melting profiles as a function of temperature. The two molecules are not required to be complementary, nor must the two strand concentrations be equal. Competition among different molecular species is automatically taken into account. Calculations consider not only the heterodimer, but also the two possible homodimers, as well as the folding of each single-stranded molecule. For each of these five molecular species, free energies are computed by summing Boltzmann factors over every possible hybridized or folded state. For temperatures within a user-specified range, calculations predict species mole fractions together with the free energy, enthalpy, entropy and heat capacity of the ensemble. Ultraviolet (UV) absorbance at 260 nm is simulated using published extinction coefficients and computed base pair probabilities. All results are available as text files and plots are provided for species concentrations, heat capacity and UV absorbance versus temperature. This server is connected to an active research program and should evolve as new theory and software are developed. The server URL is http://www.bioinfo.rpi.edu/applications/hybrid/.
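
The flavor of such equilibrium melting calculations can be conveyed with a two-state toy model: DINAMelt sums Boltzmann factors over every hybridized or folded state, whereas the sketch below tracks a single folded state with assumed enthalpy and entropy values. It is illustrative only, not server output.

```python
# Minimal two-state sketch of an equilibrium melting profile.
import numpy as np

R = 1.987e-3  # gas constant, kcal/(mol K)

def folded_fraction(T_celsius, dH=-50.0, dS=-0.14):
    """Fraction folded vs. T for assumed dH (kcal/mol), dS (kcal/(mol K))."""
    T = T_celsius + 273.15
    K = np.exp(-(dH - T * dS) / (R * T))  # folded/unfolded equilibrium constant
    return K / (1.0 + K)

T = np.arange(0, 101)
theta = folded_fraction(T)             # the melting profile
Tm = -50.0 / -0.14 - 273.15            # two-state melting point, where dG = 0
print(f"Tm = {Tm:.1f} C; fraction folded at Tm = {folded_fraction(Tm):.2f}")
```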

893 citations


Book ChapterDOI
27 Jun 2005
TL;DR: In this article, the authors develop and analyze an algorithm that computes an easily interpretable low-rank approximation to an n × n Gram matrix G so that computations of interest may be performed more rapidly, addressing the problem that many kernel-based methods require computation scaling as O(n^3), where n is the number of training examples.
Abstract: A problem for many kernel-based methods is that the amount of computation required to find the solution scales as $O(n^3)$, where n is the number of training examples. We develop and analyze an algorithm to compute an easily-interpretable low-rank approximation to an $n \times n$ Gram matrix G such that computations of interest may be performed more rapidly. The approximation is of the form ${\tilde G}_{k} = CW^{+}_{k}C^{T}$, where C is a matrix consisting of a small number c of columns of G and $W_k$ is the best rank-k approximation to W, the matrix formed by the intersection between those c columns of G and the corresponding c rows of G. An important aspect of the algorithm is the probability distribution used to randomly sample the columns; we will use a judiciously-chosen and data-dependent nonuniform probability distribution. Let $\|\cdot\|_2$ and $\|\cdot\|_F$ denote the spectral norm and the Frobenius norm, respectively, of a matrix, and let $G_k$ be the best rank-k approximation to G. We prove that by choosing $O(k/\epsilon^4)$ columns, $$\left\|G - CW^{+}_{k}C^{T}\right\|_{\xi} \leq \|G - G_{k}\|_{\xi} + \epsilon \sum\limits_{i=1}^{n} G^{2}_{ii},$$ both in expectation and with high probability, for both $\xi = 2, F$, and for all $k: 0 \leq k \leq \mathrm{rank}(W)$. This approximation can be computed using O(n) additional space and time, after making two passes over the data from external storage.
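
The sampling-and-rescaling recipe is short enough to sketch directly. Below is a minimal NumPy sketch, assuming column-sampling probabilities proportional to G_ii^2 as the data-dependent distribution; the function name, kernel, and sizes are illustrative, not the authors' code.

```python
# Minimal sketch of a CUR/Nystrom-style Gram matrix approximation.
import numpy as np

def nystrom_approx(G, c, k, seed=0):
    """Return G_tilde = C @ pinv(W_k) @ C.T from c sampled columns of G."""
    rng = np.random.default_rng(seed)
    n = G.shape[0]
    p = np.diag(G) ** 2
    p = p / p.sum()                       # sampling probabilities p_i ~ G_ii^2
    idx = rng.choice(n, size=c, replace=True, p=p)
    scale = 1.0 / np.sqrt(c * p[idx])     # rescaling used in this family of methods
    C = G[:, idx] * scale                 # n x c rescaled sampled columns
    W = G[np.ix_(idx, idx)] * scale * scale[:, None]  # c x c intersection block
    U, s, _ = np.linalg.svd(W, hermitian=True)
    keep = s[:k] > 1e-12                  # pseudoinverse of best rank-k W
    Wk_pinv = (U[:, :k][:, keep] / s[:k][keep]) @ U[:, :k][:, keep].T
    return C @ Wk_pinv @ C.T

# Toy usage: a linear-kernel Gram matrix of rank 5.
X = np.random.default_rng(1).normal(size=(300, 5))
G = X @ X.T
Gk = nystrom_approx(G, c=60, k=5)
print("relative Frobenius error:", np.linalg.norm(G - Gk) / np.linalg.norm(G))
```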

840 citations


Journal ArticleDOI
TL;DR: In this article, the authors show that the extent of thermal conductivity enhancement sometimes greatly exceeds the predictions of well-established theories, and new theoretical descriptions may be needed to account properly for the unique features of nanofluids, such as high particle mobility and large surface to volume ratio.

824 citations


Journal ArticleDOI
TL;DR: In this paper, the incorporation of silica nanoparticles into polyethylene increased the breakdown strength and voltage endurance significantly compared to the inclusion of micron scale fillers, and showed a decrease in dielectric permittivity for the nanocomposite over the base polymer.
Abstract: The incorporation of silica nanoparticles into polyethylene increased the breakdown strength and voltage endurance significantly compared to the incorporation of micron scale fillers. In addition, dielectric spectroscopy showed a decrease in dielectric permittivity for the nanocomposite over the base polymer, and changes in the space charge distribution and dynamics have been documented. The most significant difference between micron scale and nanoscale fillers is the tremendous increase in interfacial area in nanocomposites. Because the interfacial region (interaction zone) is likely to be pivotal in controlling properties, the bonding between the silica and polyethylene was characterized using Fourier transform infrared (FTIR) spectroscopy, electron paramagnetic resonance (EPR), and x-ray photoelectron spectroscopy (XPS). The picture which is emerging suggests that the enhanced interfacial zone, in addition to particle-polymer bonding, plays a very important role in determining the dielectric behavior of nanocomposites.

817 citations


Journal ArticleDOI
TL;DR: This article used experiential learning theory to magnify the importance of learning within the process of entrepreneurship, making connections between knowledge, cognition, and creativity to develop the concept of learning asymmetries and illustrates how a greater appreciation for the differences in individual learning will fortify entrepreneurship research.
Abstract: The article uses experiential learning theory to magnify the importance of learning within the process of entrepreneurship. Previous research details the contributions of prior knowledge, creativity, and cognitive mechanisms to the process of opportunity identification and exploitation; however, the literature is devoid of work that directly addresses learning. The extant research assumes learning is occurring but does not directly address the importance of learning to the process. To fully understand the nature of the entrepreneurial process, researchers must take into account how individuals learn and how different modes of learning influence opportunity identification and exploitation. This article makes connections between knowledge, cognition, and creativity to develop the concept of learning asymmetries and illustrates how a greater appreciation for the differences in individual learning will fortify entrepreneurship research.

799 citations


Journal ArticleDOI
06 May 2005-Science
TL;DR: The temperatures substantiate the existence of wet, minimum-melting conditions within 200 million years of solar system formation and suggest that Earth had settled into a pattern of crust formation, erosion, and sediment recycling as early as 4.35 Ga.
Abstract: Ancient zircons from Western Australia's Jack Hills preserve a record of conditions that prevailed on Earth not long after its formation. Widely considered to have been a uniquely violent period geodynamically, the Hadean Eon [4.5 to 4.0 billion years ago (Ga)] has recently been interpreted by some as far more benign—possibly even characterized by oceans like those of the present day. Knowledge of the crystallization temperatures of the Hadean zircons is key to this debate. A thermometer based on titanium content revealed that these zircons cluster strongly at ∼700°C, which is indistinguishable from temperatures of granitoid zircon growth today and strongly suggests a regulated mechanism producing zircon-bearing rocks during the Hadean. The temperatures substantiate the existence of wet, minimum-melting conditions within 200 million years of solar system formation. They further suggest that Earth had settled into a pattern of crust formation, erosion, and sediment recycling as early as 4.35 Ga.
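
Applying a Ti-in-zircon thermometer is a one-line calculation of the form log10(Ti, ppm) = a - b/T(K). The constants below (a = 6.01, b = 5080) follow the calibration reported by Watson and Harrison for this thermometer; treat the sketch as approximate and illustrative, not the paper's exact procedure.

```python
# Minimal sketch of a Ti-in-zircon crystallization thermometer.
# Calibration constants are assumed (approximate), of the form
# log10(Ti, ppm) = a - b / T(K).
import math

def zircon_temperature_celsius(ti_ppm, a=6.01, b=5080.0):
    """Crystallization temperature implied by a zircon's Ti content."""
    return b / (a - math.log10(ti_ppm)) - 273.15

# A Ti content of ~6 ppm implies roughly the ~700 C clustering noted above.
print(round(zircon_temperature_celsius(6.0)))  # ~698
```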

Journal ArticleDOI
TL;DR: In this article, the authors examined problems with the existing literature on science parks and incubators in terms of four levels of analysis: the science parks themselves, the enterprises located upon them, the entrepreneurs and teams of entrepreneurs involved in these enterprises, and at the systemic level.

Journal ArticleDOI
TL;DR: The Third Data Release of the Sloan Digital Sky Survey (SDSS) as mentioned in this paper contains data taken up through 2003 June, including imaging data in five bands over 5282 deg2, photometric and astrometric catalogs of the 141 million objects detected in these imaging data, and spectra of 528,640 objects selected over 4188 deg2.
Abstract: This paper describes the Third Data Release of the Sloan Digital Sky Survey (SDSS). This release, containing data taken up through 2003 June, includes imaging data in five bands over 5282 deg2, photometric and astrometric catalogs of the 141 million objects detected in these imaging data, and spectra of 528,640 objects selected over 4188 deg2. The pipelines analyzing both images and spectroscopy are unchanged from those used in our Second Data Release.

Journal ArticleDOI
25 Nov 2005-Science
TL;DR: It is reported that freestanding films of vertically aligned carbon nanotubes exhibit super-compressible foamlike behavior, and the lightweight, highly resilient nanotube films may be useful as compliant and energy-absorbing coatings.
Abstract: We report that freestanding films of vertically aligned carbon nanotubes exhibit super-compressible foamlike behavior. Under compression, the nanotubes collectively form zigzag buckles that can fully unfold to their original length upon load release. Compared with conventional low-density flexible foams, the nanotube films show much higher compressive strength, recovery rate, and sag factor, and the open-cell nature of the nanotube arrays gives excellent breathability. The nanotube films present a class of open-cell foam structures, consisting of well-arranged one-dimensional units (nanotube struts). The lightweight, highly resilient nanotube films may be useful as compliant and energy-absorbing coatings.

Journal ArticleDOI
TL;DR: The thermomechanical properties of ‘polymer nanocomposites’ are quantitatively equivalent to the well-documented case of planar polymer films, and it is conjectured that the glass-transition process requires that the interphase regions surrounding different particles interact.
Abstract: The thermomechanical responses of polymers, which provide limitations to their practical use, are favourably altered by the addition of trace amounts of a nanofiller. However, the resulting changes in polymer properties are poorly understood, primarily due to the non-uniform spatial distribution of nanoparticles. Here we show that the thermomechanical properties of ‘polymer nanocomposites’ are quantitatively equivalent to the well-documented case of planar polymer films. We quantify this equivalence by drawing a direct analogy between film thickness and an appropriate experimental interparticle spacing. We show that the changes in glass-transition temperature with decreasing interparticle spacing for two filler surface treatments are quantitatively equivalent to the corresponding thin-film data with a non-wetting and a wetting polymer–particle interface. Our results offer new insights into the role of confinement on the glass transition, and we conclude that the mere presence of regions of modified mobility in the vicinity of the particle surfaces, that is, a simple two-layer model, is insufficient to explain our results. Rather, we conjecture that the glass-transition process requires that the interphase regions surrounding different particles interact.

Journal ArticleDOI
TL;DR: In this article, the exponential decay of light output as a function of time provided a convenient method to rapidly estimate life by data extrapolation and showed that the life of these LEDs decreases in an exponential manner with increasing temperature.
Abstract: Even though light-emitting diodes (LEDs) may have a very long life, poorly designed LED lighting systems can experience a short life. Because heat at the p-n-junction is one of the main factors that affect the life of the LED, by knowing the relationship between life and heat, LED system manufacturers can design and build long-lasting systems. In this study, several white LEDs from the same manufacturer were subjected to life tests at different ambient temperatures. The exponential decay of light output as a function of time provided a convenient method to rapidly estimate life by data extrapolation. The life of these LEDs decreases in an exponential manner with increasing temperature. In a second experiment, several high-power white LEDs from different manufacturers were life-tested under similar conditions. Results show that the different products have significantly different life values.
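
The extrapolation idea is simple to sketch: fit an exponential decay L(t) = exp(-kt) to early light-output measurements, then solve for the time at which output crosses a maintenance threshold. The 70% threshold and the synthetic data below are illustrative assumptions, not the study's measurements.

```python
# Minimal sketch of LED life estimation by exponential extrapolation.
import numpy as np

def life_from_decay(hours, relative_output, threshold=0.70):
    """Fit ln(L) = -k*t and extrapolate to the time where L = threshold."""
    k = -np.polyfit(hours, np.log(relative_output), 1)[0]  # decay rate, 1/h
    return np.log(1.0 / threshold) / k

# Toy usage: 1000 h of data decaying ~3% per 1000 h extrapolates to ~11,900 h.
t = np.linspace(0, 1000, 11)
L = np.exp(-3e-5 * t)
print(round(life_from_decay(t, L)), "hours to 70% output")
```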

Journal ArticleDOI
TL;DR: CHARM is an efficient algorithm for mining all frequent closed itemsets; it enumerates closed sets using a dual itemset-tidset search tree with an efficient hybrid search that skips many levels, and it uses a technique called diffsets to reduce the memory footprint of intermediate computations.
Abstract: The set of frequent closed itemsets uniquely determines the exact frequency of all itemsets, yet it can be orders of magnitude smaller than the set of all frequent itemsets. In this paper, we present CHARM, an efficient algorithm for mining all frequent closed itemsets. It enumerates closed sets using a dual itemset-tidset search tree, using an efficient hybrid search that skips many levels. It also uses a technique called diffsets to reduce the memory footprint of intermediate computations. Finally, it uses a fast hash-based approach to remove any "nonclosed" sets found during computation. We also present CHARM-L, an algorithm that outputs the closed itemset lattice, which is very useful for rule generation and visualization. An extensive experimental evaluation on a number of real and synthetic databases shows that CHARM is a state-of-the-art algorithm that outperforms previous methods. Further, CHARM-L explicitly generates the frequent closed itemset lattice.
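
The objects CHARM enumerates can be illustrated with a deliberately naive miner: a frequent closed itemset is a frequent itemset with no superset of equal support. The brute-force sketch below only illustrates that definition on a toy database; CHARM's itemset-tidset tree and diffsets are what make the real algorithm efficient.

```python
# Naive enumeration of frequent closed itemsets (illustrative only).
from itertools import combinations

def closed_frequent_itemsets(transactions, minsup):
    items = sorted({i for t in transactions for i in t})
    support = {}
    for r in range(1, len(items) + 1):
        for cand in combinations(items, r):
            s = sum(1 for t in transactions if set(cand) <= t)
            if s >= minsup:
                support[frozenset(cand)] = s
    # Closed: no proper superset has the same support.
    return {X: s for X, s in support.items()
            if not any(X < Y and sY == s for Y, sY in support.items())}

db = [{'a','c','t','w'}, {'c','d','w'}, {'a','c','t','w'}, {'a','c','d','w'}]
for X, s in sorted(closed_frequent_itemsets(db, 2).items(), key=lambda kv: -kv[1]):
    print(sorted(X), "support", s)
```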

Journal ArticleDOI
TL;DR: In this paper, the authors used grounded theory to build a framework addressing two questions: (a) which university technology transfer office (UTTO) structures and licensing strategies are most conducive to new venture formation, and (b) how the various UTTO structures and licensing strategies are correlated with each other.

Journal ArticleDOI
TL;DR: A thorough analysis of the two programs provides some likely reasons why the programs alone may fail to achieve absolute perfection, and a lean, Six Sigma (LSS) organization would capitalize on the strengths of both lean management and Six Sigma.
Abstract: Purpose – To eliminate many misconceptions regarding Six Sigma and lean management by describing each system and the key concepts and techniques that underlie their implementation. This discussion is followed by a description of what lean organizations can gain from Six Sigma and what Six Sigma organizations can gain from lean management.Design/methodology/approach – Comparative study of Six Sigma and lean management using available literature, critical analysis, and knowledge and professional experience of the authors.Findings – The joint implementation of the programs will result in a lean, Six Sigma (LSS) organization, overcoming the limitations of each program when implemented in isolation. A thorough analysis of the two programs provides some likely reasons why the programs alone may fail to achieve absolute perfection.Practical implications – A lean, Six Sigma (LSS) organization would capitalize on the strengths of both lean management and Six Sigma. An LSS organization would include three primary t...

Journal ArticleDOI
TL;DR: In this paper, the authors consider the managerial and policy implications of the rise of spin-offs at public research institutions (PRIs), based on a knowledge-based view (KBV) of the firm.

Journal ArticleDOI
TL;DR: In this paper, the morphological correlation with the charge-carrier mobility in RR P3HT thin-film transistor (TFT) devices is investigated by combining results from atomic force microscopy (AFM) and GIXD.
Abstract: Regioregular poly(3-hexyl thiophene) (RR P3HT) is drop-cast to fabricate field-effect transistor (FET) devices from different solvents with different boiling points and solubilities for RR P3HT, such as methylene chloride, toluene, tetrahydrofuran, and chloroform. A Petri dish is used to cover the solution, and it takes less than 30 min for the solvents to evaporate at room temperature. The mesoscale crystalline morphology of RR P3HT thin films can be manipulated from well-dispersed nanofibrils to well-developed spherulites by changing solution processing conditions. The morphological correlation with the charge-carrier mobility in RR P3HT thin-film transistor (TFT) devices is investigated. The TFT devices show charge-carrier mobilities in the range of 10^-4 to 10^-2 cm^2 V^-1 s^-1 depending on the solvent used, although grazing-incidence X-ray diffraction (GIXD) reveals that all films develop the same π-π-stacking orientation, where the a-axis is normal to the polymer films. By combining results from atomic force microscopy (AFM) and GIXD, it is found that the morphological connectivity of crystalline nanofibrils and the orientation distribution of the π-π-stacking plane with respect to the film normal play important roles in the charge-carrier mobility of RR P3HT for TFT applications.
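
For context, mobilities like those quoted above are typically extracted from saturation-regime transfer curves using the standard square-law model I_D = (W*Ci*mu/2L)(V_G - V_T)^2. The sketch below uses illustrative device geometry and synthetic data; it is not the authors' analysis code.

```python
# Minimal sketch of saturation-mobility extraction from a TFT transfer curve.
import numpy as np

def saturation_mobility(VG, ID, W=0.002, L=20e-6, Ci=1.0e-8):
    """Mobility (cm^2/Vs) from the slope of sqrt(ID) vs VG.

    W and L are in meters (converted to cm below); Ci is in F/cm^2.
    """
    slope = np.polyfit(VG, np.sqrt(np.abs(ID)), 1)[0]  # A^0.5 per volt
    return slope ** 2 * 2 * (L * 100) / ((W * 100) * Ci)

# Toy usage: synthesize a transfer curve for mu = 0.01 cm^2/Vs, VT = -5 V.
VG = np.linspace(-40, -10, 16)
ID = (0.002 * 100) * 1.0e-8 * 0.01 / (2 * 20e-6 * 100) * (VG - (-5.0)) ** 2
print(f"extracted mobility: {saturation_mobility(VG, ID):.3f} cm^2/Vs")
```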

Journal ArticleDOI
TL;DR: Experimental results demonstrate that the proposed dynamic and probabilistic framework based on combining DBN with Ekman's facial action coding system (FACS) can accurately and robustly recognize spontaneous facial expressions from an image sequence under different conditions.
Abstract: This paper explores the use of multisensory information fusion technique with dynamic Bayesian networks (DBN) for modeling and understanding the temporal behaviors of facial expressions in image sequences. Our facial feature detection and tracking based on active IR illumination provides reliable visual information under variable lighting and head motion. Our approach to facial expression recognition lies in the proposed dynamic and probabilistic framework based on combining DBN with Ekman's facial action coding system (FACS) for systematically modeling the dynamic and stochastic behaviors of spontaneous facial expressions. The framework not only provides a coherent and unified hierarchical probabilistic framework to represent spatial and temporal information related to facial expressions, but also allows us to actively select the most informative visual cues from the available information sources to minimize the ambiguity in recognition. The recognition of facial expressions is accomplished by fusing not only from the current visual observations, but also from the previous visual evidences. Consequently, the recognition becomes more robust and accurate through explicitly modeling temporal behavior of facial expression. In this paper, we present the theoretical foundation underlying the proposed probabilistic and dynamic framework for facial expression modeling and understanding. Experimental results demonstrate that our approach can accurately and robustly recognize spontaneous facial expressions from an image sequence under different conditions.

Journal ArticleDOI
26 Sep 2005
TL;DR: Two approaches for generating white light from solid-state sources are compared: phosphor LEDs (which can be considered a solid-state replacement for fluorescent tubes) and multichip LED lamps, which offer advantages such as chromaticity control, better light quality, and higher efficiency.
Abstract: Solid-state lighting technology is now emerging as a cost-competitive, energy-efficient alternative to conventional electrical lighting. We review the history of lighting, discuss the benefits and challenges of the solid-state lighting technologies, and compare two approaches for generating white light from solid-state sources based on phosphor LEDs (which could be considered as solid-state replacement of fluorescent tubes) and multichip LED lamps, which offer many advantages, such as chromaticity control, better light quality, and higher efficiency.

Journal ArticleDOI
TL;DR: Direct shear testing of epoxy thin films containing dense packings of multiwalled carbon nanotube fillers reveals strong viscoelastic behaviour, with up to a 1,400% increase in the loss factor (damping ratio) of the baseline epoxy; the authors conclude that the damping is related to frictional energy dissipation during interfacial sliding at the large, spatially distributed nanotube-nanotube interfaces.
Abstract: Polymer composites reinforced by carbon nanotubes have been extensively researched for their strength and stiffness properties. Unless the interface is carefully engineered, poor load transfer between nanotubes (in bundles) and between nanotubes and surrounding polymer chains may result in interfacial slippage and reduced performance. Interfacial shear, although detrimental to high stiffness and strength, could result in very high mechanical damping, which is an important attribute in many commercial applications. We previously reported evidence of damping in nanocomposites by measuring the modal response (at resonance) of cantilevered beams with embedded nanocomposite films. Here we carry out direct shear testing of epoxy thin films containing dense packing of multiwalled carbon nanotube fillers and report strong viscoelastic behaviour with up to 1,400% increase in loss factor (damping ratio) of the baseline epoxy. The great improvement in damping was achieved without sacrificing the mechanical strength and stiffness of the polymer, and with minimal weight penalty. Based on the interfacial shear stress (approximately 0.5 MPa) at which the loss modulus increases sharply for our system, we conclude that the damping is related to frictional energy dissipation during interfacial sliding at the large, spatially distributed, nanotube-nanotube interfaces.
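
The loss factor reported above is, in essence, the hysteresis-loop area per cycle divided by 2*pi times the peak stored elastic energy. The sketch below computes it for a synthetic phase-lagged stress-strain cycle; the 10-degree phase lag and the function name are illustrative assumptions.

```python
# Minimal sketch of loss-factor extraction from one cyclic shear loop.
import numpy as np

def loss_factor(strain, stress):
    """Damping loss factor eta from one closed stress-strain cycle."""
    dissipated = np.abs(np.trapz(stress, strain))        # loop area per cycle
    stored = 0.5 * strain.max() * stress[np.argmax(strain)]  # 0.5*E'*eps0^2
    return dissipated / (2 * np.pi * stored)

# Toy usage: sinusoidal cycle with a 10-degree stress-strain phase lag.
wt = np.linspace(0, 2 * np.pi, 2000)
strain = 0.01 * np.sin(wt)
stress = 1.0e6 * np.sin(wt + np.radians(10))             # Pa, leads strain
print(f"loss factor ~ {loss_factor(strain, stress):.3f}")  # ~ tan(10 deg)
```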

Journal ArticleDOI
TL;DR: In this article, the authors extend the innovation speed theory by linking the antecedents and outcomes of technology commercialization at universities and find that the faster a university can commercialize patent-protected technologies, the greater their licensing revenues streams and the more new ventures they spin off.

Proceedings ArticleDOI
20 Jun 2005
TL;DR: This work devises a graph cut algorithm for interactive segmentation that incorporates shape priors; positive results on both medical and natural images are demonstrated.
Abstract: Interactive or semi-automatic segmentation is a useful alternative to pure automatic segmentation in many applications. While automatic segmentation can be very challenging, a small amount of user input can often resolve ambiguous decisions on the part of the algorithm. In this work, we devise a graph cut algorithm for interactive segmentation which incorporates shape priors. While traditional graph cut approaches to interactive segmentation are often quite successful, they may fail in cases where there are diffuse edges, or multiple similar objects in close proximity to one another. Incorporation of shape priors within this framework mitigates these problems. Positive results on both medical and natural images are demonstrated.
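
The s-t graph construction underlying this family of methods is compact enough to sketch. The toy below segments a 1-D signal rather than an image, folds a crude "shape prior" (a distance-from-center penalty) into the foreground unary terms, and uses networkx min-cut in place of a specialized max-flow solver; all weights are illustrative, and this is not the paper's algorithm.

```python
# Minimal sketch of graph cut labeling with a toy shape prior.
import networkx as nx

def segment(signal, prior_center, lam=0.5, prior_weight=0.5):
    n = len(signal)
    G = nx.DiGraph()
    for i, v in enumerate(signal):
        # Unary terms: squared data cost plus a distance-from-prior penalty.
        fg_cost = (v - 1.0) ** 2 + prior_weight * abs(i - prior_center) / n
        bg_cost = (v - 0.0) ** 2
        G.add_edge('s', i, capacity=bg_cost)  # severed if i labeled background
        G.add_edge(i, 't', capacity=fg_cost)  # severed if i labeled foreground
    for i in range(n - 1):                    # pairwise smoothness terms
        G.add_edge(i, i + 1, capacity=lam)
        G.add_edge(i + 1, i, capacity=lam)
    _, (source_side, _) = nx.minimum_cut(G, 's', 't')
    return [1 if i in source_side else 0 for i in range(n)]

sig = [0.1, 0.2, 0.9, 1.1, 0.95, 0.15, 0.8, 0.1]
print(segment(sig, prior_center=3))  # smoothness + prior suppress the stray 0.8
```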

Journal ArticleDOI
TL;DR: In this paper, the relative performance of U.K. university technology transfer offices (TTOs) using data envelopment analysis (DEA) and stochastic frontier estimation (SFE) was investigated.

Journal ArticleDOI
TL;DR: This work presents object sensitivity, a new form of context sensitivity for flow-insensitive points-to analysis for Java, and proposes a parameterization framework that allows analysis designers to control the tradeoffs between cost and precision in the object-sensitive analysis.
Abstract: The goal of points-to analysis for Java is to determine the set of objects pointed to by a reference variable or a reference object field. We present object sensitivity, a new form of context sensitivity for flow-insensitive points-to analysis for Java. The key idea of our approach is to analyze a method separately for each of the object names that represent run-time objects on which this method may be invoked. To ensure flexibility and practicality, we propose a parameterization framework that allows analysis designers to control the tradeoffs between cost and precision in the object-sensitive analysis.Side-effect analysis determines the memory locations that may be modified by the execution of a program statement. Def-use analysis identifies pairs of statements that set the value of a memory location and subsequently use that value. The information computed by such analyses has a wide variety of uses in compilers and software tools. This work proposes new versions of these analyses that are based on object-sensitive points-to analysis.We have implemented two instantiations of our parameterized object-sensitive points-to analysis. On a set of 23 Java programs, our experiments show that these analyses have comparable cost to a context-insensitive points-to analysis for Java which is based on Andersen's analysis for C. Our results also show that object sensitivity significantly improves the precision of side-effect analysis and call graph construction, compared to (1) context-insensitive analysis, and (2) context-sensitive points-to analysis that models context using the invoking call site. These experiments demonstrate that object-sensitive analyses can achieve substantial precision improvement, while at the same time remaining efficient and practical.
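
The precision difference can be simulated on a toy heap model: two container objects invoke the same setter, a context-insensitive summary merges their contents, while keying the analysis by receiver object name keeps them apart. The Python sketch below is an illustration of the idea, not the paper's analysis framework (which targets Java).

```python
# Toy model of context-insensitive vs. object-sensitive field resolution.
# Each tuple is (receiver object name, method, allocation site of argument):
calls = [("box1", "set", "o1"),   # box1.set(new O())  allocated at site o1
         ("box2", "set", "o2")]   # box2.set(new O())  allocated at site o2

# Context-insensitive: one summary for set(), all receivers merged.
merged = set()
for _, _, arg in calls:
    merged.add(arg)
ci = {recv: merged for recv, _, _ in calls}

# Object-sensitive: analyze set() separately per receiver object name.
osens = {}
for recv, _, arg in calls:
    osens.setdefault(recv, set()).add(arg)

print("context-insensitive box1.item ->", sorted(ci["box1"]))     # ['o1', 'o2']
print("object-sensitive   box1.item ->", sorted(osens["box1"]))   # ['o1']
```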

Journal ArticleDOI
TL;DR: In this paper, with the aim of enhancing the field-effect mobility of self-aligned regioregular poly(3-hexylthiophene) (P3HT), the intermolecular interaction at the interface between P3HT and the insulator substrate is controlled using self-assembled monolayers (SAMs) functionalized with various groups (-NH2, -OH, and -CH3).
Abstract: With the aim of enhancing the field-effect mobility by promoting surface-mediated two-dimensional molecular ordering in self-aligned regioregular poly(3-hexylthiophene) (P3HT) we have controlled the intermolecular interaction at the interface between P3HT and the insulator substrate by using self-assembled monolayers (SAMs) functionalized with various groups (–NH2, –OH, and –CH3). We have found that, depending on the properties of the substrate surface, the P3HT nanocrystals adopt two different orientations—parallel and perpendicular to the insulator substrate—which have field-effect mobilities that differ by more than a factor of 4, and that are as high as 0.28 cm2 V–1 s–1. This surprising increase in field-effect mobility arises in particular for the perpendicular orientation of the nanocrystals with respect to the insulator substrate. Further, the perpendicular orientation of P3HT nanocrystals can be explained by the following factors: the unshared electron pairs of the SAM end groups, the π–H interactions between the thienyl-backbone bearing π-systems and the H (hydrogen) atoms of the SAM end groups, and interdigitation between the alkyl chains of P3HT and the alkyl chains of the SAMs.

Journal ArticleDOI
TL;DR: In Pang and Fukushima (2005), a sequential penalty approach was presented for a quasi-variational inequality (QVI), with particular application to the generalized Nash game; however, the numerical results reported there are incorrect, due to an inverted sign in the penalty term in the example and some missing terms in the derivatives of the firms' Lagrangian functions.
Abstract: In Pang and Fukushima (Comput Manage Sci 2:21–56, 2005), a sequential penalty approach was presented for a quasi-variational inequality (QVI) with particular application to the generalized Nash game. To test the computational performance of the penalty method, numerical results were reported with an example from a multi-leader-follower game in an electric power market. However, due to an inverted sign in the penalty term in the example and some missing terms in the derivatives of the firms’ Lagrangian functions, the reported numerical results in Pang and Fukushima (Comput Manage Sci 2:21–56, 2005) are incorrect. Since the numerical examples of this kind are scarce in the literature and this particular example may be useful in the future research, we report the corrected results.