
Showing papers from Lehigh University published in 2006


Journal ArticleDOI
TL;DR: This article found that low-fit initiatives negatively affected consumer beliefs, attitudes, and intentions regardless of the firm's motivation, whereas high-fit, proactive initiatives improved consumer beliefs, attitudes, and intentions.

1,720 citations


Journal ArticleDOI
TL;DR: The optimal detector in the Neyman–Pearson sense is developed and analyzed for the statistical MIMO radar and it is shown that the optimal detector consists of noncoherent processing of the receiver sensors' outputs and that for cases of practical interest, detection performance is superior to that obtained through coherent processing.
Abstract: Inspired by recent advances in multiple-input multiple-output (MIMO) communications, this proposal introduces the statistical MIMO radar concept. To the authors' knowledge, this is the first time that the statistical MIMO is being proposed for radar. The fundamental difference between statistical MIMO and other radar array systems is that the latter seek to maximize the coherent processing gain, while statistical MIMO radar capitalizes on the diversity of target scattering to improve radar performance. Coherent processing is made possible by highly correlated signals at the receiver array, whereas in statistical MIMO radar, the signals received by the array elements are uncorrelated. Radar targets generally consist of many small elemental scatterers that are fused by the radar waveform and the processing at the receiver, to result in echoes with fluctuating amplitude and phase. It is well known that in conventional radar, slow fluctuations of the target radar cross section (RCS) result in target fades that degrade radar performance. By spacing the antenna elements at the transmitter and at the receiver such that the target angular spread is manifested, the MIMO radar can exploit the spatial diversity of target scatterers, opening the way to a variety of new techniques that can improve radar performance. This paper focuses on the application of the target spatial diversity to improve detection performance. The optimal detector in the Neyman–Pearson sense is developed and analyzed for the statistical MIMO radar. It is shown that the optimal detector consists of noncoherent processing of the receiver sensors' outputs and that for cases of practical interest, detection performance is superior to that obtained through coherent processing. An optimal detector invariant to the signal and noise levels is also developed and analyzed. In this case as well, statistical MIMO radar provides great improvements over other types of array radars.

1,413 citations
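The diversity argument behind this paper can be illustrated with a small Monte Carlo sketch. This is a simplified illustration, not the paper's Neyman–Pearson detector: the SNR, threshold, trial count, and Rayleigh-type fade model are all illustrative assumptions. The point it shows is that when each transmit-receive path sees an independent target fade, the noncoherent energy statistic rarely suffers a deep fade, whereas a single shared fade (as in a closely spaced array) drags every path down at once.

```python
import random


def detect_prob(n_paths, snr, threshold, correlated, trials=20000, seed=1):
    """Monte Carlo probability that an energy statistic exceeds a threshold.

    correlated=True  -> conventional array: one common target fade on all paths
    correlated=False -> statistical MIMO: independent fades per path
    """
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        if correlated:
            # One shared Rayleigh-type power fade (chi-square, 2 dof).
            g = rng.gauss(0, 1) ** 2 + rng.gauss(0, 1) ** 2
            fades = [g] * n_paths
        else:
            # Independent fade per transmit-receive path.
            fades = [rng.gauss(0, 1) ** 2 + rng.gauss(0, 1) ** 2
                     for _ in range(n_paths)]
        # Noncoherent statistic: sum of per-path signal-plus-noise energies.
        stat = sum(snr * f + rng.gauss(0, 1) ** 2 + rng.gauss(0, 1) ** 2
                   for f in fades)
        if stat > threshold:
            hits += 1
    return hits / trials


p_mimo = detect_prob(4, snr=5.0, threshold=30.0, correlated=False)
p_conv = detect_prob(4, snr=5.0, threshold=30.0, correlated=True)
```

At the same average SNR, the independent-fade case detects noticeably more often, which is the diversity effect the paper exploits.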


Journal ArticleDOI
TL;DR: A review of the literature on stochastic and robust facility location models can be found in this article, where the authors illustrate both the rich variety of approaches for optimization under uncertainty and their application to facility location problems.
Abstract: Plants, distribution centers, and other facilities generally function for years or decades, during which time the environment in which they operate may change substantially. Costs, demands, travel times, and other inputs to classical facility location models may be highly uncertain. This has made the development of models for facility location under uncertainty a high priority for researchers in both the logistics and stochastic/robust optimization communities. Indeed, a large number of the approaches that have been proposed for optimization under uncertainty have been applied to facility location problems. This paper reviews the literature on stochastic and robust facility location models. Our intent is to illustrate both the rich variety of approaches for optimization under uncertainty that have appeared in the literature and their application to facility location problems. In a few instances for which examples in facility location are not available, we provide examples from the more general logistics l...

970 citations


Journal ArticleDOI
TL;DR: It is shown that the structure of the optimal robust policy is of the same base-stock character as the optimal stochastic policy for a wide range of inventory problems in single installations, series systems, and general supply chains.
Abstract: We propose a general methodology based on robust optimization to address the problem of optimally controlling a supply chain subject to stochastic demand in discrete time. This problem has been studied in the past using dynamic programming, which suffers from dimensionality problems and assumes full knowledge of the demand distribution. The proposed approach takes into account the uncertainty of the demand in the supply chain without assuming a specific distribution, while remaining highly tractable and providing insight into the corresponding optimal policy. It also allows adjustment of the level of robustness of the solution to trade off performance and protection against uncertainty. An attractive feature of the proposed approach is its numerical tractability, especially when compared to multidimensional dynamic programming problems in complex supply chains, as the robust problem is of the same difficulty as the nominal problem, that is, a linear programming problem when there are no fixed costs, and a mixed-integer programming problem when fixed costs are present. Furthermore, we show that the optimal policy obtained in the robust approach is identical to the optimal policy obtained in the nominal case for a modified and explicitly computable demand sequence. In this way, we show that the structure of the optimal robust policy is of the same base-stock character as the optimal stochastic policy for a wide range of inventory problems in single installations, series systems, and general supply chains. Preliminary computational results are very promising.

619 citations
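The base-stock structure highlighted in this abstract can be made concrete with a toy simulation. This is an illustrative sketch, not the paper's robust formulation: it assumes zero lead time, hypothetical holding and backorder cost rates, and a fixed demand sequence.

```python
def base_stock_cost(demands, S, h=1.0, b=4.0):
    """Total cost of a base-stock policy with order-up-to level S.

    Each period the inventory is raised to S, then demand is realized;
    leftover stock incurs holding cost h per unit, unmet demand incurs
    backorder penalty b per unit.
    """
    cost = 0.0
    for d in demands:
        position = S - d
        cost += h * max(position, 0) + b * max(-position, 0)
    return cost


demands = [3, 7, 5, 9, 4, 6, 8, 2]
# Sweep candidate base-stock levels and pick the cheapest.
best_S = min(range(0, 13), key=lambda S: base_stock_cost(demands, S))
```

With b/(b + h) = 0.8, the best level sits near the 80th percentile of demand, which is the classical newsvendor critical-ratio structure the robust policy inherits.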


Journal ArticleDOI
TL;DR: A recently developed software tool executing on a computational grid is used to solve many large instances of these problems, allowing high-quality solutions to be obtained and their optimality or near-optimality to be verified in various ways.
Abstract: We investigate the quality of solutions obtained from sample-average approximations to two-stage stochastic linear programs with recourse. We use a recently developed software tool executing on a computational grid to solve many large instances of these problems, allowing us to obtain high-quality solutions and to verify optimality and near-optimality of the computed solutions in various ways.

449 citations
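Sample-average approximation can be sketched on the simplest two-stage problem, the newsvendor. This is an illustrative example, not the paper's large-scale linear programs; the cost, price, and demand distribution are assumptions, and for this problem the SAA optimum has a closed form (a sample quantile), so no LP solver is needed.

```python
import random


def saa_newsvendor(n_samples, c=1.0, p=3.0, seed=0):
    """Sample-average approximation of a newsvendor problem.

    First stage: choose order quantity x at unit cost c.
    Recourse: sell min(x, demand) at price p.  SAA replaces the
    expectation over demand with an average over sampled scenarios;
    the resulting optimum is the (1 - c/p) sample quantile of demand.
    """
    rng = random.Random(seed)
    demands = sorted(rng.uniform(50, 150) for _ in range(n_samples))
    k = int((1 - c / p) * n_samples)
    x = demands[min(k, n_samples - 1)]

    def avg_profit(q):
        return sum(p * min(q, d) - c * q for d in demands) / n_samples

    return x, avg_profit(x)


x_hat, profit_hat = saa_newsvendor(5000)
```

As the sample size grows, the SAA solution converges to the true stochastic optimum (here about 116.7 for uniform demand on [50, 150]), which is the kind of solution-quality statement the paper verifies at much larger scale.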


Journal ArticleDOI
TL;DR: A pairwise error probability (PEP) expression is derived and the transfer function technique is applied in conjunction with the derived PEP to obtain upper bounds on the bit error rate.
Abstract: Error control coding can be used over free-space optical (FSO) links to mitigate turbulence-induced fading. In this paper, we derive error performance bounds for coded FSO communication systems operating over atmospheric turbulence channels, considering the recently introduced gamma-gamma turbulence model. We derive a pairwise error probability (PEP) expression and then apply the transfer function technique in conjunction with the derived PEP to obtain upper bounds on the bit error rate. Simulation results are further presented to confirm the analytical results.

444 citations
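The gamma-gamma turbulence model used in this paper is easy to sample: the irradiance is the product of two independent unit-mean gamma variates representing large- and small-scale turbulence. The sketch below checks the model's scintillation index against its known closed form; the shape parameters alpha and beta are illustrative values, not taken from the paper.

```python
import random


def gamma_gamma_samples(alpha, beta, n, seed=0):
    """Draw gamma-gamma irradiance samples: the product of two
    independent unit-mean gamma variates (large- and small-scale
    turbulence eddies)."""
    rng = random.Random(seed)
    return [rng.gammavariate(alpha, 1.0 / alpha) *
            rng.gammavariate(beta, 1.0 / beta) for _ in range(n)]


# Scintillation index: sigma_I^2 = 1/alpha + 1/beta + 1/(alpha*beta).
samples = gamma_gamma_samples(alpha=4.0, beta=1.9, n=200000)
mean = sum(samples) / len(samples)
si = sum(s * s for s in samples) / len(samples) / mean ** 2 - 1
```

For alpha = 4.0, beta = 1.9 the theoretical scintillation index is about 0.908, and the empirical estimate lands close to it; such samples are what a Monte Carlo BER simulation over this channel would be built on.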


Journal ArticleDOI
TL;DR: In this article, a definition of global citizenship and global competency was developed through the use of a Delphi technique.
Abstract: To contribute to the valuable and ongoing debate regarding the definition of global citizenship and global competency, this study proposes a definition developed through the use of a Delphi Techniq...

442 citations


Proceedings ArticleDOI
03 Apr 2006
TL;DR: The results show that network-coded DAS leads to better diversity performance as compared to conventional DAS, at a lower hardware cost and higher spectral efficiency.
Abstract: This paper investigates the diversity gain offered by implementing network coding (R. Ahlswede et al., 2000) over wireless communication links. The network coding algorithm is applied both to a wireless network containing a distributed antenna system (DAS) and to one that supports cooperation between users. The results show that network-coded DAS leads to better diversity performance as compared to conventional DAS, at a lower hardware cost and higher spectral efficiency. In the case of user cooperation, network coding yields additional diversity, especially when there are multiple users.

406 citations
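The core network-coding operation this paper builds on is simple to demonstrate: a relay XORs two users' packets and broadcasts the single coded packet, and each user recovers the other's packet using its own as a key. The packet contents below are placeholders.

```python
def xor_bytes(a, b):
    """Bitwise XOR of two equal-length byte strings."""
    return bytes(x ^ y for x, y in zip(a, b))


# Users A and B each hold one packet; a relay broadcasts their XOR.
# One coded transmission replaces two forwards, halving relay airtime.
pkt_a = b"hello from A"
pkt_b = b"hello from B"
coded = xor_bytes(pkt_a, pkt_b)

recovered_at_a = xor_bytes(coded, pkt_a)  # A decodes B's packet
recovered_at_b = xor_bytes(coded, pkt_b)  # B decodes A's packet
```

The diversity gain in the paper comes from each destination hearing both the direct transmissions and the coded broadcast, giving it independent paths to the same information.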



Journal ArticleDOI
TL;DR: An effective heuristic for the generalized traveling salesman problem combines a genetic algorithm with a local tour improvement heuristic, and is competitive with previously published heuristics in both solution quality and computation time.

332 citations


Journal ArticleDOI
TL;DR: This paper demonstrates that principal component analysis of EELS spectrum images (SIs) can be used to extract chemically relevant components, and that with weighted or two-way scaled principal component analysis, both compositional and bonding information can be extracted.

Journal ArticleDOI
TL;DR: In this paper, the authors conducted an in-depth qualitative study of supply chain coordination mechanisms, primarily price, non-price, and flow coordination mechanisms, and found that managers prefer flow coordination mechanisms over price and non-price coordination mechanisms.
Abstract: There is growing interest from industry and academic disciplines regarding coordination in supply chains, particularly addressing coordination mechanisms available to eliminate sub-optimization within supply chains. However, there is a disconnect between what is known in academic research about coordination mechanisms and what mechanisms practitioners apply and consider useful. This research fills a gap in the literature by conducting an in-depth qualitative study of supply chain coordination mechanisms, primarily price, non-price, and flow coordination mechanisms. Results suggest that: (1) managers prefer flow coordination mechanisms over price and non-price coordination mechanisms; (2) supply chain orientation and learning orientation are important for the implementation of flow coordination mechanisms; and (3) technology, capital, and volume are not pre-requisites for flow coordination mechanisms.

Journal ArticleDOI
TL;DR: In this paper, the authors investigate the near wake of a cylinder at values of Reynolds number corresponding to the onset and development of shear-layer instabilities and show that the flow structure is notably altered.
Abstract: We investigate the near wake of a cylinder at values of Reynolds number corresponding to the onset and development of shear-layer instabilities. By combining quantitative experimental imaging (particle image velocimetry, PIV) and direct numerical simulations at Re = 3900/4000 and 10000, we show that the flow structure is notably altered. At higher Reynolds number, the lengths of both the wake bubble and the separating shear layer decrease substantially. Corresponding patterns of velocity fluctuations and Reynolds stress contract towards the base of the cylinder. The elevated values of Reynolds stress at upstream locations in the separated layer indicate earlier onset of shear-layer transition. These features are intimately associated with the details of the shear-layer instability, which leads to small-scale vortices. The simulated signatures of the shear-layer vortices are characterized by a broadband peak at Re = 3900 and a broadband high spectral-density 'plateau' at Re=10000 in the power spectra. The shear-layer frequencies from the present direct numerical simulations study agree well with previous experimentally measured values, and follow the power law suggested by other workers.

01 Jan 2006
TL;DR: In this paper, the authors present a broad range of models for designing supply chains resilient to disruptions, and divide them based on the underlying optimization model (facility location or network design) and the risk measure (expected cost or worst-case cost).
Abstract: Recent events have highlighted the need for planners to consider the risk of disruptions when designing supply chain networks. Supply chain disruptions have a number of causes and may take a number of forms. Once a disruption occurs, there is very little recourse regarding supply chain infrastructure because these strategic decisions cannot be changed quickly. Therefore, it is critical to account for disruptions during the design of supply chain networks so that they perform well even after a disruption. Indeed, these systems can often be made substantially more reliable with only small additional investments in infrastructure. Planners have a range of options available to them in designing resilient supply chain networks, and their choice of approaches will depend on the financial resources available, the decision maker's risk preference, the type of network under consideration, and other factors. In this tutorial, we present a broad range of models for designing supply chains resilient to disruptions. We first categorize these models by the status of the existing network: A network may be designed from scratch, or an existing network may be modified to prevent disruptions at some facilities. We next divide each category based on the underlying optimization model (facility location or network design) and the risk measure (expected cost or worst-case cost).

Journal ArticleDOI
TL;DR: Challenging behavior exhibited by young children is becoming recognized as a serious impediment to social-emotional development and a harbinger of severe maladjustment in school and adult life as mentioned in this paper.
Abstract: Challenging behavior exhibited by young children is becoming recognized as a serious impediment to social–emotional development and a harbinger of severe maladjustment in school and adult life. Con...

Journal ArticleDOI
Shivaji Sircar1
TL;DR: In this paper, the authors proposed a method to generate a multicomponent adsorption database for better understanding of the complex phenomenon, for seriously testing existing models, and for development of new models.
Abstract: Separation and purification of multicomponent gas mixtures by adsorption is an established process technology. Adsorptive process design requires accurate data on multicomponent gas adsorption equilibria, kinetics, and heats as input variables. These data often cannot be predicted by using today's models, particularly for complex practical systems where the gas mixtures have adsorbates of different sizes and polarities, and the adsorbent is energetically heterogeneous. There is a large volume of pure gas and some binary gas adsorption equilibrium and kinetic data in the published literature, but multicomponent adsorption data are rare. The data for heats of adsorption are only emerging. There is a desperate need to generate a multicomponent adsorption database for better understanding of the complex phenomenon, for seriously testing existing models, and for development of new models. Two recently developed methods called “isotope exchange technique” and “microcalorimetry for adsorption heats” are recommen...

Journal ArticleDOI
TL;DR: The recent knowledge on where disturbances in A-to-I RNA editing have been correlated with human disease phenotypes is reviewed.
Abstract: The post-transcriptional modification of mammalian transcripts by A-to-I RNA editing has been recognized as an important mechanism for the generation of molecular diversity and also regulates protein function through recoding of genomic information. As the molecular players of editing are characterized and an increasing number of genes become identified that are subject to A-to-I modification, the potential impact of editing on the etiology or progression of human diseases is realized. Here we review the recent knowledge on where disturbances in A-to-I RNA editing have been correlated with human disease phenotypes.

01 Jan 2006
TL;DR: The purpose of this tutorial is to present a mathematical framework that is well-suited to the limited information available in real-life problems and captures the decision-maker’s attitude towards uncertainty; the proposed approach builds upon recent developments in robust and data-driven optimization.
Abstract: Traditional models of decision-making under uncertainty assume perfect information, i.e., accurate values for the system parameters and specific probability distributions for the random variables. However, such precise knowledge is rarely available in practice, and a strategy based on erroneous inputs might be infeasible or exhibit poor performance when implemented. The purpose of this tutorial is to present a mathematical framework that is well-suited to the limited information available in real-life problems and captures the decision-maker's attitude towards uncertainty; the proposed approach builds upon recent developments in robust and data-driven optimization. In robust optimization, random variables are modeled as uncertain parameters belonging to a convex uncertainty set and the decision-maker protects the system against the worst case within that set. Data-driven optimization uses observations of the random variables as direct inputs to the mathematical programming problems. The first part of the tutorial describes the robust optimization paradigm in detail in single-stage and multi-stage problems. In the second part, we address the issue of constructing uncertainty sets using historical realizations of the random variables and investigate the connection between convex sets, in particular polyhedra, and a specific class of risk measures.
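The worst-case-over-an-uncertainty-set idea described in this tutorial can be sketched for the simplest case: a linear constraint a·x ≤ b whose coefficients live in a box. With x ≥ 0, the worst case puts each coefficient at its upper bound, so one deterministic inequality certifies feasibility for the entire set. The numbers below are made up for illustration.

```python
def robust_feasible(x, a_nom, delta, b):
    """Check a.x <= b for every a in the box [a_nom - delta, a_nom + delta].

    With x >= 0 the worst case takes each coefficient at its upper
    bound, reducing the infinite family of constraints to one.
    """
    assert all(xi >= 0 for xi in x)
    worst = sum((an + d) * xi for an, d, xi in zip(a_nom, delta, x))
    return worst <= b


a_nom, delta, b = [1.0, 2.0], [0.1, 0.3], 10.0
x_nominal_only = [4.0, 3.0]   # satisfies the nominal constraint (4 + 6 = 10)
x_robust = [3.0, 2.5]         # leaves slack for the worst case

ok_nominal = robust_feasible(x_nominal_only, a_nom, delta, b)  # False
ok_robust = robust_feasible(x_robust, a_nom, delta, b)         # True
```

More general convex uncertainty sets (e.g., the polyhedra the tutorial connects to risk measures) lead to similar deterministic counterparts, just with a more involved worst-case computation.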

Journal ArticleDOI
TL;DR: These findings conflict with the notion of stored visual word forms and instead suggest that this region acts as an interface between visual form information and higher order stimulus properties such as its associated sound and meaning when processing any meaningful visual stimulus.
Abstract: Studies of skilled reading [Price, C. J., & Mechelli, A. Reading and reading disturbance. Current Opinion in Neurobiology, 15, 231--238, 2005], its acquisition in children [Shaywitz, B. A., Shaywitz, S. E., Pugh, K. R., Mencl, W. E., Fulbright, R. K., Skudlarski, P., et al. Disruption of posterior brain systems for reading in children with developmental dyslexia. Biological Psychiatry, 52, 101--110, 2002; Turkeltaub, P. E., Gareau, L., Flowers, D. L., Zeffiro, T. A., & Eden, G. F. Development of neural mechanisms for reading. Nature Neuroscience, 6, 767--773, 2003], and its impairment in patients with pure alexia [Leff, A. P., Crewes, H., Plant, G. T., Scott, S. K., Kennard, C., & Wise, R. J. The functional anatomy of single word reading in patients with hemianopic and pure alexia. Brain, 124, 510--521, 2001] all highlight the importance of the left posterior fusiform cortex in visual word recognition. We used visual masked priming and functional magnetic resonance imaging to elucidate the specific functional contribution of this region to reading and found that (1) unlike words, repetition of pseudowords (“solst-solst”) did not produce a neural priming effect in this region, (2) orthographically related words such as “corner-corn” did produce a neural priming effect, but (3) this orthographic priming effect was reduced when prime-target pairs were semantically related (“teacher-teach”). These findings conflict with the notion of stored visual word forms and instead suggest that this region acts as an interface between visual form information and higher order stimulus properties such as its associated sound and meaning. More importantly, this function is not specific to reading but is also engaged when processing any meaningful visual stimulus.

Journal ArticleDOI
TL;DR: An overview of the iron nanoparticle technology is provided in this paper, beginning with a description of the process fundamentals and concluding with a discussion of the synthetic schemes for the nanoparticle types developed at Lehigh University.
Abstract: Iron nanoparticles are increasingly being applied in site remediation and hazardous waste treatment. Nearly a decade after it was first proposed in 1996, the iron nanoparticle technology is at a critical stage of its developmental process. Significant research innovations have been made in terms of synthetic methods, surface property modification, and enhancement for field delivery and reactions. Extensive laboratory studies have demonstrated that nanoscale iron particles are effective for the treatment of a wide array of common groundwater contaminants such as chlorinated organic solvents, organochlorine pesticides, polychlorinated biphenyls (PCBs), organic dyes, and various inorganic compounds. Several field tests have also demonstrated promising prospects for in situ remediation. Nonetheless, there are still considerable knowledge gaps on many fundamental scientific issues (e.g., fate, transport, and environmental impact) and economic hurdles, which could determine the acceptance of the technology within the academic community as well as by regulators and the private sector. An overview of the iron nanoparticle technology is provided in this article, beginning with a description of the process fundamentals. This is followed by a discussion of the synthetic schemes for the nanoparticle types developed at Lehigh University. Next, a summary of the major research findings is provided, highlighting the key characteristics and remediation-related advantages of the iron nanoparticle technology versus the granular/microscale iron technology. A discussion of challenges related to its future directions and environmental impact is presented. © 2006 Wiley Periodicals, Inc.

Journal ArticleDOI
TL;DR: This paper presents p-robust models based on two classical facility location problems, and shows how the algorithm can be used as a heuristic to solve minimax regret versions of the location problems.
Abstract: The two most widely considered measures for optimization under uncertainty are minimizing expected cost and minimizing worst-case cost or regret. In this paper, we present a novel robustness measure that combines the two objectives by minimizing the expected cost while bounding the relative regret in each scenario. In particular, the models seek the minimum-expected-cost solution that is p-robust; i.e., whose relative regret is no more than 100p% in each scenario. We present p-robust models based on two classical facility location problems. We solve both problems using variable splitting, with the Lagrangian subproblem reducing to the multiple-choice knapsack problem. For many instances of the problems, finding a feasible solution, and even determining whether the instance is feasible, is difficult; we discuss a mechanism for determining infeasibility. We also show how the algorithm can be used as a heuristic to solve minimax regret versions of the location problems.

Journal ArticleDOI
TL;DR: A novel approach for nanoscale imaging and characterization of the orientation dependence of electromechanical properties—vector piezoresponse force microscopy (Vector PFM)—is described and the potential of vector PFM for molecular orientation imaging in macroscopically disordered piezoelectric polymers and biological systems is discussed.
Abstract: A novel approach for nanoscale imaging and characterization of the orientation dependence of electromechanical properties, vector piezoresponse force microscopy (vector PFM), is described. The relationship between local electromechanical response, polarization, piezoelectric constants, and crystallographic orientation is analyzed in detail. The image formation mechanism in vector PFM is discussed. Conditions for complete three-dimensional (3D) reconstruction of the electromechanical response vector and evaluation of the piezoelectric constants from PFM data are set forth. The developed approach can be applied to crystallographic orientation imaging in piezoelectric materials with a spatial resolution below 10 nm. Several approaches for data representation in 2D-PFM and 3D-PFM are presented. The potential of vector PFM for molecular orientation imaging in macroscopically disordered piezoelectric polymers and biological systems is discussed.

Journal ArticleDOI
TL;DR: In this article, the authors examined the moderating role of turbulence on the relationships between firms' learning orientation and memory and their organizational performance and innovativeness, and found that under low environmental turbulence, learning orientation was a useful predictor of performance and innovation.
Abstract: Extensive research has documented how firms’ learning orientation and memory are related to organizational performance. The objective of this study is to examine the moderating role of turbulence on the relationships between firms’ learning orientation and memory and their organizational performance and innovativeness. The study also provides insight into the differential relationships of firms’ learning orientation and memory to their performance and innovativeness. Using survey data collected from 200 supply management professionals, the results suggest that the extent to which learning and memory are associated with organizational performance is contingent on the level of environmental turbulence. Specifically, under low environmental turbulence, learning orientation and organizational memory appear to be related to performance and innovativeness; however, under high environmental turbulence, only learning orientation is a useful predictor.

Journal ArticleDOI
TL;DR: Overall, findings suggest that college students with ADHD are at greater risk for academic and psychological difficulties, although they perform similarly to non-ADHD controls on many neuropsychological tasks.
Abstract: Objective: According to the American Psychiatric Association, 3% to 7% of the school-age population has ADHD and many children continue to display significant symptoms throughout adolescence and a...

Journal ArticleDOI
TL;DR: A framework to examine different functions in reprogramming is outlined, followed by an analysis of reprogramming challenges, and a comprehensive survey of state-of-the-art reprogramming systems is provided.
Abstract: Wireless sensor networks need an efficient and reliable reprogramming service to facilitate management and maintenance tasks. In this article we first outline a framework to examine different functions in reprogramming, followed by an analysis of reprogramming challenges. We then provide a comprehensive survey of the state-of-the-art reprogramming systems, and discuss different approaches to address these challenges. Finally we explore performance, protocol behavior, and the impact of several design factors.

Proceedings ArticleDOI
01 Dec 2006
TL;DR: It is demonstrated that MIMO radar with centralized coherent processing is able to resolve scatterers with a range resolution well beyond that supported by the signal bandwidth, and a distributed target model is developed.
Abstract: Multiple-input multiple-output (MIMO) radar is a multistatic architecture composed of multiple transmitters and receivers, which seeks to exploit the spatial diversity of radar backscatter. In conjunction with centralized processing, MIMO radar has the potential to significantly improve radar functions such as detection and parameter estimation. MIMO radar is distinct from other types of array radars such as phased array or STAP, which process the signals of closely spaced elements and, hence, cannot capitalize on the spatial characteristics of targets. In this work, we explore the ability of MIMO radar and coherent processing to locate a target with high resolution and to resolve targets located in the same range cell. A distributed target model is developed. It is demonstrated that MIMO radar with centralized coherent processing is able to resolve scatterers with a range resolution well beyond that supported by the signal bandwidth. The location estimation capabilities are further illustrated by introducing a new two-dimensional ambiguity function. The analysis is discussed in the context of established results for randomly thinned arrays. The investigation of high resolution MIMO radar also includes comparison with the performance of non-coherent MIMO radar and the effect on performance of the number of sensors and their locations.

Journal ArticleDOI
TL;DR: Using restricted-use versions of the National Mortality Detail File combined with Census data, the authors exploit this quasi-experiment to examine the income-mortality link in an elderly population, finding that the higher-income group has a statistically significantly higher mortality rate.
Abstract: Legislation in the 1970s created a Notch in social security payments, with those born after January 1, 1917, receiving sharply lower benefits. Using restricted-use versions of the National Mortality Detail File combined with Census data, we use this quasi-experiment to examine the income-mortality link in an elderly population. Estimates from difference-in-differences and regression discontinuity models show the higher-income group has a statistically significantly higher mortality rate, contradicting the previous literature. We also found that younger cohorts responded to lower incomes by increasing postretirement work effort, suggesting that moderate employment has beneficial health effects for the elderly.
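The difference-in-differences estimator underlying this study is a simple double subtraction: the change in the treated cohort's outcome minus the contemporaneous change in the comparison cohort's. The sketch below uses entirely hypothetical mortality-rate numbers, not the paper's data.

```python
def did_estimate(treat_pre, treat_post, ctrl_pre, ctrl_post):
    """Difference-in-differences: the treated group's before/after change
    minus the control group's, netting out the common time trend."""
    mean = lambda v: sum(v) / len(v)
    return ((mean(treat_post) - mean(treat_pre))
            - (mean(ctrl_post) - mean(ctrl_pre)))


# Hypothetical mortality rates (per 1000) before/after the benefit Notch
# for a lower-benefit (treated) and higher-benefit (control) cohort.
effect = did_estimate([10.0, 11.0], [12.5, 13.5], [10.2, 10.8], [11.0, 11.6])
```

Here the treated change is 2.5 and the control change is 0.8, so the estimated effect is 1.7; the paper's actual estimates come from regression versions of this comparison with covariates and discontinuity designs.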

Journal ArticleDOI
TL;DR: In this paper, the authors show that donor additions to BaTiO3 up to a few tenths atom percent are compensated by electrons and the resulting electrical conductivity is independent of temperature and Po2 at 700° to 1000°C.
Abstract: Donor additions to BaTiO3 up to a few tenths atom percent are compensated by electrons and the resulting electrical conductivity is independent of temperature and Po2 at 700° to 1000°C. The conductivities are impurity-insensitive at very low Po2 and high temperature where reduction is the major source of defects. Variation in the site occupation ratio (A/B) in ABO3 has a small effect on the conductivities for donor additions in the 100 ppm range. Nb is more effective as a donor than is Al as an acceptor, and Nb can compensate approximately 2½ times as much Al on an atomic basis.

Journal ArticleDOI
TL;DR: In this article, the authors demonstrate that ratios of the abundances of n-alkanes provide a new and efficient way of reconstructing past peatland surface moisture change, and find strong correlations among n-alkane indices, humification indices, and testate amoebae-inferred water table depths from a single core.

Journal ArticleDOI
TL;DR: Evidence that intracellular protein translocation contributes to adaptation of photoreceptors to diurnal changes in ambient light intensity is discussed and the current debate on whether it is driven by diffusion or molecular motors is summarized.