scispace - formally typeset

Showing papers by "Hong Kong University of Science and Technology" published in 1996


Book
20 Aug 1996

2,938 citations


Journal ArticleDOI
TL;DR: In this article, it was shown that the characters of the integrable highest weight modules of affine Lie algebras and the minimal series of the Virasoro algebra give rise to conformal field theories.
Abstract: In contrast with the finite dimensional case, one of the distinguished features in the theory of infinite dimensional Lie algebras is the modular invariance of the characters of certain representations. It is known [Fr], [KP] that for a given affine Lie algebra, the linear space spanned by the characters of the integrable highest weight modules with a fixed level is invariant under the usual action of the modular group SL2(Z). A similar result for the minimal series of the Virasoro algebra is observed in [Ca] and [IZ]. In both cases one uses the explicit character formulas to prove the modular invariance. The character formula for the affine Lie algebra is computed in [K], and the character formula for the Virasoro algebra is essentially contained in [FF]; see [R] for an explicit computation. This mysterious connection between the infinite dimensional Lie algebras and the modular group can be explained by two dimensional conformal field theory. The highest weight modules of affine Lie algebras and the Virasoro algebra give rise to conformal field theories. In particular, the conformal field theories associated to the integrable highest weight modules and minimal series are rational. The characters of these modules are understood to be the holomorphic parts of the partition functions on the torus for the corresponding conformal field theories. From this point of view, the role of the modular group SL2(Z) is manifest. In the study of conformal field theory, physicists arrived at the notion of chiral algebras (see e.g. [MS]). Independently, in the attempt to realize the Monster sporadic group as a symmetry group of a certain algebraic structure, an infinite dimensional graded representation of the Monster sporadic group, the so-called Moonshine module, was constructed in [FLM1]. This algebraic structure was later found in [Bo] and called the vertex algebra; the first axioms of vertex operator algebras were formulated in that paper. 
The proof that the Moonshine module is a vertex operator algebra and that the Monster group acts as its automorphism group was given in [FLM2]. Notably, the character of the Moonshine module is also a modular function, namely j(τ) − 744. It turns out that the vertex operator algebra can be regarded as a rigorous mathematical definition of the chiral algebras in the physical literature. It is expected that a pair of isomorphic vertex operator algebras and their representations (corresponding to the holomorphic and antiholomorphic sectors) are the basic objects needed to build a conformal field theory of a certain type.
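As a concrete reminder of the modular function mentioned above (standard facts, not part of the abstract), j(τ) − 744 has the q-expansion

```latex
j(\tau) - 744 = q^{-1} + 196884\,q + 21493760\,q^{2} + \cdots, \qquad q = e^{2\pi i \tau},
```

and monstrous moonshine begins with the observation that 196884 = 196883 + 1, the sum of the dimensions of the two smallest irreducible representations of the Monster.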

1,122 citations


Journal ArticleDOI
TL;DR: Pulsed UV-laser irradiation can produce submicron periodic linear and dot patterns on polymer surfaces without a photomask, which can be used to increase the surface roughness of inert polymers for improved adhesion, as mentioned in this paper.

1,052 citations


Journal ArticleDOI
TL;DR: In this article, a psychological explanation for the rejection of ultimatum offers was proposed, based on the wounded pride/spite model, which predicts that informed, knowledgeable respondents may react to small ultimatum offers by perceiving them as unfair, feeling anger, and acting spitefully.

858 citations


Journal ArticleDOI
TL;DR: A static scheduling algorithm for allocating task graphs to fully connected multiprocessors which has admissible time complexity, is economical in terms of the number of processors used and is suitable for a wide range of graph structures.
Abstract: In this paper, we propose a static scheduling algorithm for allocating task graphs to fully connected multiprocessors. We discuss six recently reported scheduling algorithms and show that they possess one drawback or another that can lead to poor performance. The proposed algorithm, which is called the Dynamic Critical-Path (DCP) scheduling algorithm, is different from the previously proposed algorithms in a number of ways. First, it determines the critical path of the task graph and selects the next node to be scheduled in a dynamic fashion. Second, it rearranges the schedule on each processor dynamically in the sense that the positions of the nodes in the partial schedules are not fixed until all nodes have been considered. Third, it selects a suitable processor for a node by looking ahead at the potential start times of the remaining nodes on that processor, and schedules relatively less important nodes to the processors already in use. A global as well as a pair-wise comparison is carried out for all seven algorithms under various scheduling conditions. The DCP algorithm outperforms the previous algorithms by a considerable margin. Despite having a number of new features, the DCP algorithm has admissible time complexity, is economical in terms of the number of processors used, and is suitable for a wide range of graph structures.
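The ingredients of such schedulers can be sketched in a few lines. The sketch below is a minimal critical-path list scheduler under assumed inputs (a dict-based task graph with positive node costs), not the DCP algorithm itself, which additionally reschedules already-placed nodes dynamically:

```python
# Minimal critical-path list scheduler (a simplified illustration, not the
# paper's DCP algorithm): nodes are prioritized by bottom level (longest path
# to an exit node), then greedily placed on the processor allowing the
# earliest start time. Graph format and names are assumptions for the sketch.

def bottom_levels(succ, cost):
    """Longest path from each node to an exit node, including its own cost."""
    memo = {}
    def bl(n):
        if n not in memo:
            memo[n] = cost[n] + max((bl(s) for s in succ.get(n, [])), default=0)
        return memo[n]
    return {n: bl(n) for n in cost}

def list_schedule(succ, cost, n_procs):
    pred = {n: [] for n in cost}
    for n, ss in succ.items():
        for s in ss:
            pred[s].append(n)
    bl = bottom_levels(succ, cost)
    # Descending bottom level is a valid topological order for positive costs.
    order = sorted(cost, key=lambda n: -bl[n])
    finish, placement = {}, {}
    proc_free = [0.0] * n_procs          # earliest free time per processor
    for n in order:
        ready = max((finish[p] for p in pred[n]), default=0.0)
        p = min(range(n_procs), key=lambda i: max(proc_free[i], ready))
        start = max(proc_free[p], ready)
        finish[n] = start + cost[n]
        proc_free[p] = finish[n]
        placement[n] = p
    return placement, max(finish.values())

# Diamond task graph: a feeds b and c, which both feed d.
succ = {"a": ["b", "c"], "b": ["d"], "c": ["d"]}
cost = {"a": 2, "b": 3, "c": 1, "d": 2}
placement, makespan = list_schedule(succ, cost, 2)   # critical path a-b-d = 7
```

On two processors the independent node c runs in parallel with b, so the makespan equals the critical-path length of 7.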

842 citations


Posted Content
TL;DR: While top management support is essential for IS effectiveness, high quality external IS expertise is even more critical for small businesses operating in an environment of resource poverty.
Abstract: Top management support is a key recurrent factor critical for effective information systems (IS) implementation. However, the role of top management support may not be as critical as external IS expertise, in the form of consultants and vendors, in small business IS implementation due to the unique characteristics of small businesses. This paper describes an empirical study of the relative importance of top management support and external IS expertise on IS effectiveness in 114 small businesses. Partial least squares (PLS) was used for statistical testing. The results show that top management support is not as important as effective external IS expertise in small business IS implementation. While top management support is essential for IS effectiveness, high quality external IS expertise is even more critical for small businesses operating in an environment of resource poverty. These findings call for more research efforts to be directed at selecting and engaging high quality external IS expertise for IS implementation in small businesses.

627 citations


Journal ArticleDOI
TL;DR: The authors showed that the sample selection model is susceptible to collinearity problems and that a t-test can be used to distinguish between the two models as long as there are no collinearity problems.

519 citations


Journal ArticleDOI
TL;DR: The authors conduct an experimental study to examine the impact of two types of waiting information—waiting-duration information and queuing information—on consumers’ reactions to waits of differen...
Abstract: The authors conduct an experimental study to examine the impact of two types of waiting information—waiting-duration information and queuing information—on consumers’ reactions to waits of differen...

460 citations


Journal ArticleDOI
Abstract: We observe significant post-split excess returns of 7.93 percent in the first year and 12.15 percent in the first three years for a sample of 1,275 two-for-one stock splits. These excess returns follow an announcement return of 3.38 percent, indicating that the market underreacts to split announcements. The evidence suggests that splits realign prices to a lower trading range, but managers self-select by conditioning the decision to split on expected future performance. Presplit runup and post-split excess returns are inversely related, indicating that our results are not caused by momentum.

450 citations


Journal ArticleDOI
TL;DR: A notion of causal independence is presented that enables one to further factorize the conditional probabilities into a combination of even smaller factors and consequently obtain a finer-grain factorization of the joint probability.
Abstract: A new method is proposed for exploiting causal independencies in exact Bayesian network inference. A Bayesian network can be viewed as representing a factorization of a joint probability into the multiplication of a set of conditional probabilities. We present a notion of causal independence that enables one to further factorize the conditional probabilities into a combination of even smaller factors and consequently obtain a finer-grain factorization of the joint probability. The new formulation of causal independence lets us specify the conditional probability of a variable given its parents in terms of an associative and commutative operator, such as "or", "sum" or "max", on the contribution of each parent. We start with a simple algorithm VE for Bayesian network inference that, given evidence and a query variable, uses the factorization to find the posterior distribution of the query. We show how this algorithm can be extended to exploit causal independence. Empirical studies, based on the CPCS networks for medical diagnosis, show that this method is more efficient than previous methods and allows for inference in larger networks than previous algorithms.
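As a toy illustration of causal independence (a noisy-OR gate, one standard instance of the "or" operator; the cause and effect names are invented for the example), the conditional distribution decomposes parent-by-parent, so the full 2^k table is never materialized:

```python
# Toy illustration of causal independence via a noisy-OR gate (one standard
# instance; disease/symptom names are invented for the example). The CPT
# over k parents decomposes into one parameter per parent, so the full 2^k
# table is never built.

def noisy_or(active_probs, leak=0.0):
    """P(effect = 1) given activation probabilities of the parents that are on."""
    p_off = 1.0 - leak
    for p in active_probs:
        p_off *= 1.0 - p
    return 1.0 - p_off

probs = {"flu": 0.6, "cold": 0.3, "allergy": 0.2}   # hypothetical causes of "cough"
state = {"flu": 1, "cold": 0, "allergy": 1}
p_cough = noisy_or([probs[c] for c in probs if state[c] == 1])
# P(cough = 0) = (1 - 0.6) * (1 - 0.2) = 0.32, so p_cough = 0.68
```

A full CPT over the three causes would need 2^3 rows; the factorized form needs one number per cause, which is exactly the finer-grain factorization the abstract exploits.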

449 citations


Posted Content
TL;DR: In this article, the authors proposed a new method for exploiting causal independencies in exact Bayesian network inference, which enables one to further factorize the conditional probabilities into a combination of even smaller factors and consequently obtain a finer-grain factorization of the joint probability.
Abstract: A new method is proposed for exploiting causal independencies in exact Bayesian network inference. A Bayesian network can be viewed as representing a factorization of a joint probability into the multiplication of a set of conditional probabilities. We present a notion of causal independence that enables one to further factorize the conditional probabilities into a combination of even smaller factors and consequently obtain a finer-grain factorization of the joint probability. The new formulation of causal independence lets us specify the conditional probability of a variable given its parents in terms of an associative and commutative operator, such as "or", "sum" or "max", on the contribution of each parent. We start with a simple algorithm VE for Bayesian network inference that, given evidence and a query variable, uses the factorization to find the posterior distribution of the query. We show how this algorithm can be extended to exploit causal independence. Empirical studies, based on the CPCS networks for medical diagnosis, show that this method is more efficient than previous methods and allows for inference in larger networks than previous algorithms.

Journal ArticleDOI
TL;DR: A design approach to mass customization (DFMC) that is based on the belief that mass customization can be effectively achieved through design, in particular during the conceptual design and preliminary development stages is proposed.

Journal ArticleDOI
TL;DR: Phase-contrast microscopy shows that the structure of the refractive-index inhomogeneities in a variety of mammalian tissues resembles that of frozen turbulence, and the observed structure function fits the classical Kolmogorov model of turbulence.
Abstract: Phase-contrast microscopy shows that the structure of the refractive-index inhomogeneities in a variety of mammalian tissues resembles that of frozen turbulence. Viewed over a range of scales, the spectrum of index variations exhibits a power-law behavior for spatial frequencies spanning at least a decade (0.5–5 μm−1) and has an outer scale in the range of 4–10 μm, above which correlations are no longer seen. The observed structure function fits the classical Kolmogorov model of turbulence. These observations are fundamental to understanding light propagation in tissue and may provide clues about how tissues develop and organize.
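The power-law test described here can be sketched numerically. The data below are synthetic, generated with the −11/3 Kolmogorov exponent assumed for the three-dimensional refractive-index power spectrum; a real analysis would fit measured spectra the same way:

```python
# Power-law check in the spirit described above: fit the log-log slope of a
# spectrum over the decade 0.5-5 um^-1. The data are synthetic, generated
# with the Kolmogorov exponent -11/3 assumed for the 3-D refractive-index
# power spectrum; measured spectra would be fit the same way.
import math

def loglog_slope(freqs, powers):
    """Least-squares slope of log(power) versus log(frequency)."""
    xs = [math.log(f) for f in freqs]
    ys = [math.log(p) for p in powers]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

freqs = [0.5 + 0.1 * i for i in range(46)]       # 0.5 .. 5.0 um^-1
powers = [f ** (-11.0 / 3.0) for f in freqs]     # ideal Kolmogorov spectrum
slope = loglog_slope(freqs, powers)              # ~ -11/3
```

On real data one would also look for the outer scale as the frequency below which the fitted power law breaks down, matching the 4-10 μm range quoted in the abstract.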

Journal ArticleDOI
01 Oct 1996
TL;DR: It is proved, for both adaptive fuzzy controllers, that all signals in the closed-loop systems are uniformly bounded; and the tracking errors converge to zero under mild conditions.
Abstract: An adaptive fuzzy controller is constructed from a set of fuzzy IF-THEN rules whose parameters are adjusted on-line according to some adaptation law for the purpose of controlling the plant to track a given trajectory. In this paper, two adaptive fuzzy controllers are designed based on the Lyapunov synthesis approach. We require that the final closed-loop system be globally stable in the sense that all signals involved (states, controls, parameters, etc.) must be uniformly bounded. Roughly speaking, the adaptive fuzzy controllers are designed through the following steps: first, construct an initial controller based on linguistic descriptions (in the form of fuzzy IF-THEN rules) about the unknown plant from human experts; then, develop an adaptation law to adjust the parameters of the fuzzy controller on-line. We prove, for both adaptive fuzzy controllers, that: (1) all signals in the closed-loop systems are uniformly bounded; and (2) the tracking errors converge to zero under mild conditions. We provide the specific formulas of the bounds so that controller designers can determine the bounds based on their requirements. Finally, the adaptive fuzzy controllers are used to control the inverted pendulum to track a given trajectory, and the simulation results show that: (1) the adaptive fuzzy controllers can perform successful tracking without using any linguistic information; and (2) after incorporating some linguistic fuzzy rules into the controllers, the adaptation speed becomes faster and the tracking error becomes smaller.
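A minimal numerical sketch of this design recipe (not the paper's controllers; the plant, gains, and basis centers are invented for illustration): approximate the unknown plant nonlinearity with normalized Gaussian fuzzy basis functions and adapt the weights with a Lyapunov-style law while tracking sin(t):

```python
# Minimal sketch of a Lyapunov-style adaptive fuzzy controller (not the
# paper's design; the plant f(x), gains k and gamma, and the basis centers
# are invented for illustration). The unknown f(x) is approximated by
# theta^T xi(x) over normalized Gaussian fuzzy membership functions, with
# the adaptation law theta_dot = gamma * e * xi(x).
import math

def xi(x, centers, width=1.0):
    """Normalized Gaussian fuzzy basis functions."""
    w = [math.exp(-((x - c) / width) ** 2) for c in centers]
    s = sum(w)
    return [v / s for v in w]

def simulate(T=20.0, dt=1e-3, k=5.0, gamma=20.0):
    centers = [-2.0, -1.0, 0.0, 1.0, 2.0]
    theta = [0.0] * len(centers)
    f = lambda x: -x + 0.5 * math.sin(3.0 * x)   # unknown to the controller
    x, t, errors = 1.0, 0.0, []
    while t < T:
        r, rdot = math.sin(t), math.cos(t)       # reference trajectory
        e = x - r
        basis = xi(x, centers)
        fhat = sum(th * b for th, b in zip(theta, basis))
        u = rdot - fhat - k * e                  # certainty-equivalence control
        x += dt * (f(x) + u)                     # Euler step of the plant
        theta = [th + dt * gamma * e * b for th, b in zip(theta, basis)]
        t += dt
        errors.append(abs(e))
    return errors

errs = simulate()    # tracking error decays and stays bounded
```

As in the abstract, the guarantee one can argue for is boundedness of all signals plus a small residual tracking error set by the basis functions' approximation quality, not exact convergence for an arbitrary plant.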

Journal ArticleDOI
TL;DR: It is shown that separability is an intrinsic property of the measured signals and can be described by the concept of m-row decomposability introduced in this paper, and that separation principles can be developed by using the structure characterization theory of random variables.
Abstract: This paper identifies and studies two major issues in the blind source separation problem: separability and separation principles. We show that separability is an intrinsic property of the measured signals and can be described by the concept of m-row decomposability introduced in this paper; we also show that separation principles can be developed by using the structure characterization theory of random variables. In particular, we show that these principles can be derived concisely and intuitively by applying the Darmois-Skitovich theorem, which is well known in statistical inference theory and psychology. Some new insights are gained for designing blind source separation filters.
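The separation-principle idea can be illustrated with a toy two-source experiment (synthetic sources and mixing matrix, and a kurtosis-based rotation search standing in for the paper's general framework): whiten the mixtures, then rotate to maximize non-Gaussianity, which by the Darmois-Skitovich theorem recovers independent non-Gaussian sources up to permutation and scale:

```python
# Toy blind source separation (illustrative; synthetic sources and mixing,
# and a kurtosis-based rotation search rather than the paper's framework):
# whiten the two mixtures, then rotate to maximize non-Gaussianity. By the
# Darmois-Skitovich theorem this recovers independent non-Gaussian sources
# up to permutation and scale.
import math, random

random.seed(0)
n = 10000
s1 = [random.uniform(-1, 1) for _ in range(n)]    # independent, non-Gaussian
s2 = [random.uniform(-1, 1) for _ in range(n)]
x1 = [0.8 * a + 0.6 * b for a, b in zip(s1, s2)]  # observed mixtures
x2 = [0.3 * a - 0.9 * b for a, b in zip(s1, s2)]
m1, m2 = sum(x1) / n, sum(x2) / n
x1 = [u - m1 for u in x1]
x2 = [u - m2 for u in x2]

def cov(u, v):
    """Covariance of two zero-mean sequences."""
    return sum(a * b for a, b in zip(u, v)) / len(u)

# Whitening via the Cholesky factor of the 2x2 sample covariance
a, b, c = cov(x1, x1), cov(x1, x2), cov(x2, x2)
l11 = math.sqrt(a); l21 = b / l11; l22 = math.sqrt(c - l21 ** 2)
z1 = [u / l11 for u in x1]
z2 = [(v - l21 * w) / l22 for v, w in zip(x2, z1)]

def kurt(u):
    """Excess kurtosis of a zero-mean sequence."""
    m2_ = sum(t * t for t in u) / len(u)
    return sum(t ** 4 for t in u) / len(u) / m2_ ** 2 - 3.0

def rotate(phi):
    y1 = [math.cos(phi) * u + math.sin(phi) * v for u, v in zip(z1, z2)]
    y2 = [math.cos(phi) * v - math.sin(phi) * u for u, v in zip(z1, z2)]
    return y1, y2

# Grid-search the rotation maximizing total non-Gaussianity of the outputs
phi = max((i * math.pi / 360.0 for i in range(180)),
          key=lambda p: sum(abs(kurt(y)) for y in rotate(p)))
y1, _ = rotate(phi)

def corr(u, v):
    mu, mv = sum(u) / len(u), sum(v) / len(v)
    cu = [t - mu for t in u]; cv = [t - mv for t in v]
    return cov(cu, cv) / math.sqrt(cov(cu, cu) * cov(cv, cv))

match = max(abs(corr(y1, s1)), abs(corr(y1, s2)))   # close to 1
```

Uniform sources have negative excess kurtosis, so any nontrivial mixture is strictly closer to Gaussian; the rotation that restores extremal kurtosis therefore aligns each output with one source.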

Journal ArticleDOI
TL;DR: The authors explored the role of culture in moderating consumers' opinion exchange behavior and found that the cultural characteristics of power distance and uncertainty avoidance influence the focus of consumers' product information search activities, but not their tendency to share product-related opinions with others.
Abstract: Research conducted primarily in the United States has shown that interpersonal influence arising from opinion exchange behavior is an important factor in consumers' product adoption and brand choice decisions. An important managerial question in the international arena is whether information-giving and seeking behaviors depend on culture. In a study representing eleven nationalities, we explore the role of culture in moderating consumers' opinion exchange behavior. Results indicate that the cultural characteristics of power distance and uncertainty avoidance [Hofstede 1980] influence the focus of consumers' product information search activities, but not their tendencies to share product-related opinions with others. Following earlier opinion leadership studies, we find that individual characteristics such as product category interest and involvement are most indicative of active opinion leadership behavior.

Journal ArticleDOI
TL;DR: A bi-level programming approach for determination of road toll pattern in Hong Kong using a queueing network equilibrium model that describes users' route choice behavior under conditions of both queueing and congestion is presented.
Abstract: Urban road networks in Hong Kong are highly congested, particularly during peak periods. Long vehicle queues at bottlenecks, such as the harbor tunnels, have become a daily occurrence. At present, tunnel tolls are charged in Hong Kong as one means to reduce traffic congestion. In general, flow pattern and queue length on a road network are highly dependent on traffic control and road pricing. An efficient control scheme must, therefore, take into account the effects of traffic control and road pricing on network flow. In this paper, we present a bi-level programming approach for determination of road toll pattern. The lower-level problem represents a queueing network equilibrium model that describes users' route choice behavior under conditions of both queueing and congestion. The upper-level problem is to determine road tolls to optimize a given system's performance while considering users' route choice behavior. Sensitivity analysis is also performed for the queueing network equilibrium problem to obtain the derivatives of equilibrium link flows with respect to link tolls. This derivative information is then applied to the evaluation of alternative road pricing policies and to the development of heuristic algorithms for the bi-level road pricing problem. The proposed model and algorithm are illustrated with numerical examples.
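The bi-level structure can be sketched on a two-route toy network (link cost functions, demand, and the grid search are all illustrative assumptions, not the paper's model or algorithm): the lower level solves a Wardrop user equilibrium by bisection, and the upper level searches for the toll minimizing total travel time:

```python
# Toy bi-level road-pricing sketch (all functions, demand, and the grid
# search are illustrative assumptions, not the paper's network or algorithm).
# Lower level: Wardrop user equilibrium on two parallel routes, solved by
# bisection on the equal-generalized-cost condition. Upper level: grid-search
# the toll on route 1 to minimize total travel time.

def ue_split(tau, demand=10.0):
    """Flow on route 1 at user equilibrium, given a toll tau on route 1."""
    t1 = lambda f: 10.0 + f                   # travel time on route 1 (tolled)
    t2 = lambda f: 15.0 + 0.5 * f             # travel time on route 2
    g = lambda f1: (t1(f1) + tau) - t2(demand - f1)   # increasing in f1
    if g(0.0) >= 0.0:
        return 0.0
    if g(demand) <= 0.0:
        return demand
    lo, hi = 0.0, demand
    for _ in range(60):                       # bisection to equal costs
        mid = 0.5 * (lo + hi)
        lo, hi = (lo, mid) if g(mid) > 0.0 else (mid, hi)
    return 0.5 * (lo + hi)

def total_time(f1, demand=10.0):
    f2 = demand - f1
    return f1 * (10.0 + f1) + f2 * (15.0 + 0.5 * f2)

tolls = [t / 10.0 for t in range(101)]        # candidate tolls 0.0 .. 10.0
best_tau = min(tolls, key=lambda tau: total_time(ue_split(tau)))
# Here marginal-cost reasoning gives a first-best toll of 2.5 (flow split 5/5)
```

The derivative information discussed in the abstract plays the role that the grid search plays here: it tells the upper level how equilibrium flows respond to a toll change without re-solving from scratch.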

Journal ArticleDOI
TL;DR: In this paper, the mechanical and electric fields in a piezoelectric material around an elliptical cylinder cavity, together with the electric field within the cavity, are formulated by complex potentials.

Book ChapterDOI
01 Jan 1996
TL;DR: The first large eddy simulation (Les) results by Deardorff as mentioned in this paper were published in 1970; in the more than twenty years since, this technique has matured considerably: the underlying theory has been advanced, new models have been developed and tested, and more efficient numerical schemes have been used.
Abstract: Over twenty years have passed since the first large eddy simulation (Les) results by Deardorff (1970) were published. During this period, this technique has matured considerably: the underlying theory has been advanced, new models have been developed and tested, more efficient numerical schemes have been used. The progress in computer power and memory has made possible the application of Les to a variety of flows, compressible and incompressible, including heat transfer, stratification, passive scalars and chemical reactions.

Journal ArticleDOI
TL;DR: Femtosecond laser pulses are shown to generate coherent Eg phonons in Sb in addition to A1g oscillations, and a Raman formalism provides a unifying description of light-induced atomic motion of both impulsive and displacive character.
Abstract: Femtosecond laser pulses generate in Sb coherent Eg phonons at ≈3.4 THz, in addition to oscillations of A1g symmetry accounted for by the phenomenological displacive-excitation model. Experiments agree with theoretical calculations showing that the coherent driving force in absorbing materials like Sb is determined by Raman processes, as in transparent media. The Raman formalism provides a unifying approach for describing light-induced motion of atoms of both impulsive and displacive character.

Journal ArticleDOI
TL;DR: In this article, the authors present a model of the relationship between technological and process innovations and describe the interdependence of these two forces, which is used to explain the inconsistency in the literature regarding the benefits of EDI and other interorganizational systems, which are described as providing strategic competitive advantage in some papers and as providing little or no benefits for implementing firms.
Abstract: Interorganizational business process reengineering is a logical extension of discussions of the potential for interorganizational systems to fundamentally redefine relationships among buyers, sellers, and even competitors within an industry. This paper presents a model of the relationship between technological and process innovations and describes the interdependence of these two forces. This model is used to explain the inconsistency in the literature regarding the benefits of EDI and other interorganizational systems, which are described as providing strategic competitive advantage in some papers and as providing little or no benefits for implementing firms in other articles. The framework describes the importance of merging technological and process innovations in order to transform organizations, processes, and relationships.

Journal ArticleDOI
TL;DR: Growth of Chlorella vulgaris in the media was comparable to growth in the commercial Bristol medium, which contains nitrate as the nitrogen source, and was accompanied by a decrease in the nitrogen content of the medium, indicating that nitrogen removal was due to algal uptake and assimilation.

Journal ArticleDOI
TL;DR: The economic forces and barriers behind the electronic market adoptions from the perspective of market process reengineering are analyzed and suggestions based on these case studies are presented, relevant to the analysis, design, and implementation of electronic market systems by market-making firms.
Abstract: Over the past few years, various electronic market systems have been introduced by market-making firms to improve transaction effectiveness and efficiency within their markets. Although successful implementation of electronic marketplaces may be found in several industries, some systems have failed or their penetration pace is slower than was projected, indicating that significant barriers remain. This paper analyzes the economic forces and barriers behind the electronic market adoptions from the perspective of market process reengineering. Four cases of electronic market adoptions--two successful and two failed--are used for this analysis. Economic benefits are examined by investigating how the market process innovation enabled by information technology (IT) reduces transaction costs and increases market efficiency. Adoption barriers are identified by analyzing transaction risks and resistance resulting from the reengineering. Successful deployment of electronic market systems requires taking into account these barriers along with the economic benefits of adoption. The paper presents suggestions based on these case studies, which are relevant to the analysis, design, and implementation of electronic market systems by market-making firms.

Journal ArticleDOI
TL;DR: In this paper, the authors developed a useful database for the leachate quality of Hong Kong landfills and compared different experimental trials for the treatment of methanogenic-stage sanitary landfill leachate, which is generally characterized by a low chemical oxygen demand (COD) of 3000 mg L−1 or less.

Journal ArticleDOI
TL;DR: The measured thermal activation energies of different samples demonstrated that the InAs wetting layer may act as a barrier for the thermionic emission of carriers in high-quality InAs multilayers, while in InAs monolayers and submonolayers the carriers are required to overcome the GaAs barrier to escape thermally from the localized states.
Abstract: We have investigated the temperature dependence of photoluminescence (PL) properties of a number of self-organized InAs/GaAs heterostructures with InAs layer thickness ranging from 0.5 to 3 ML. The temperature dependence of InAs exciton emission and linewidth was found to display a significant difference when the InAs layer thickness is smaller or larger than the critical thickness around 1.7 ML. The fast redshift of PL energy and an anomalous decrease of linewidth with increasing temperature were observed and attributed to the efficient relaxation process of carriers in multilayer samples, resulting from the spread and penetration of the carrier wave functions in coupled InAs quantum dots. The measured thermal activation energies of different samples demonstrated that the InAs wetting layer may act as a barrier for the thermionic emission of carriers in high-quality InAs multilayers, while in InAs monolayers and submonolayers the carriers are required to overcome the GaAs barrier to escape thermally from the localized states. © 1996 The American Physical Society.
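The activation-energy extraction mentioned here is a standard Arrhenius analysis of thermally quenched PL intensity; the sketch below uses synthetic parameters (not the paper's data) and recovers Ea by linearizing the quenching model:

```python
# Standard Arrhenius analysis of thermally activated PL quenching (synthetic
# parameters, not the paper's data). Model: I(T) = I0 / (1 + A*exp(-Ea/(kB*T))).
# Linearizing, ln(I0/I - 1) = ln(A) - Ea/(kB*T), so a least-squares fit of
# ln(I0/I - 1) against 1/T recovers the activation energy Ea from the slope.
import math

KB = 8.617e-5                       # Boltzmann constant in eV/K
I0, A, EA = 1.0, 500.0, 0.10        # synthetic model parameters (Ea in eV)

temps = list(range(100, 301, 10))                                  # K
intens = [I0 / (1.0 + A * math.exp(-EA / (KB * T))) for T in temps]

xs = [1.0 / T for T in temps]
ys = [math.log(I0 / i - 1.0) for i in intens]
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
ea_fit = -slope * KB                # recovered activation energy, ~0.10 eV
```

Comparing Ea fitted this way across samples is what lets the authors distinguish escape over the wetting-layer barrier from escape over the GaAs barrier.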

Journal ArticleDOI
TL;DR: In this paper, the authors analyzed consumer reactions to price discounts in a parsimonious preference model featuring loss aversion and reference-dependence along dimensions of price and quality, and found that lower quality/lower price brands generally promote more effectively than higher quality/higher price brands.
Abstract: Several studies have shown that promotions of national brands yield more effect than those of store brands (e.g., Allenby and Rossi 1991, Blattberg and Wisniewski 1989). However, the evolution of price-quality data available from Consumer Reports over the last 15 years seems to reveal a reduction of the quality gap between store brands and national brands, while price differences remain substantial. Simultaneously, the share of private label brands has increased (Progressive Grocer 1994). In this context, we study whether we can maintain a view of the world where national brands may easily attract consumers from store brands through promotions, whereas store brands are relatively ineffective in attracting consumers from national brands by such means. We analyze consumer reactions to price discounts in a parsimonious preference model featuring loss aversion and reference-dependence along dimensions of price and quality (Hardie, Johnson, and Fader 1993; Tversky and Kahneman 1991). The key result of our analysis is that, given any two brands, there is an asymmetric promotion effect in favor of the higher quality/higher price brands if and only if the quality gap between the brands is sufficiently large in comparison with the price gap. Thus, the direction of promotion asymmetry is not unconditional. It depends uniquely on the value of the ratio of quality and price differences compared to a category-specific criterion, which we call Φ. If the ratio of quality and price differences is larger than this criterion, the usual asymmetry prevails; if such is not the case, the lower quality/lower price brands promote more effectively. More precisely, our model predicts that cross promotion effects depend on two components of brand positioning in the price/quality quadrant. 
First, we define a variable termed “positioning advantage” that indicates whether, relative to the standards achieved by another brand, a given brand is underpriced (positive advantage) or overpriced (negative advantage). Promotion effectiveness is increasing in this variable. Second, cross promotion effects between two brands depend on their distance in the price/quality quadrant. This variable impacts promotion effectiveness negatively and symmetrically for any pair of brands. “Positioning advantage” and “brand distance” are orthogonal components of brand positioning, irrespective of the degree of correlation between available price and quality levels in the market. Empirically, we investigate the role of brand positioning in explaining cross promotion effects using panel data from the chilled orange juice and peanut butter categories. We compute the independent positioning variables, “positioning advantage” and “brand distance,” from readily available data on price and quality positioning after obtaining our estimates of Φ. We next measure promotion effectiveness by estimating choice share changes in response to a price discount, using a choice model that does not contain any information about quality/price ratios. Finally, we test the relation between the two positioning variables and the promotion effectiveness measures. The data reveal that in the orange juice category lower quality/lower price brands generally promote more effectively than higher quality/higher price brands. In the peanut butter data the opposite asymmetry holds. In both cases, inter-brand promotion patterns are well explained by the positioning variables. An attractive feature of our model is that, in addition to the direction of promotion asymmetries, it also explains the extent of those asymmetries. A further interesting aspect of this approach is that we go beyond a categorization of brands into price tiers. 
For instance, lower tier brands in our data may promote more effectively than one national brand but less effectively than another. Consistent with our theoretical predictions, the data presented here seem to confirm that such cases occur because the lower tier brand offers a favorable trade-off of price and quality differences compared with one national brand and a less favorable trade-off compared with the other. The content of this paper is potentially relevant for brand managers or retailers concerned with predicting the impact of their promotions. The paper is of particular interest to marketing scientists who study the performance of store brands versus national brands and may also appeal to those who wish to explore the marketing implications of behavioral decision theory. Finally, our investigation does not reject Blattberg and Wisniewski's (1989) finding, shared by Allenby and Rossi (1991) and Hardie, Johnson, and Fader (1993), that national brands have a principal advantage in promotion effectiveness. Rather, it formalizes when this principal advantage is overruled by positioning disadvantages of such brands.
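One parsimonious way to see a threshold result of this kind is a toy instantiation of loss-averse, reference-dependent switching utility (entirely my own illustration, not the authors' model: the loss weight, gaps, and discount are invented, and the paper's criterion Φ is only mirrored qualitatively):

```python
# Toy loss-averse switching utilities (illustrative, not the paper's model):
# gains count at face value, losses are weighted by lam > 1. The sign of the
# promotion asymmetry flips with the quality/price gap ratio, qualitatively
# mirroring the category criterion the abstract calls Phi.

def gain(x, lam=2.0):
    """Reference-dependent value: losses loom larger than gains."""
    return x if x >= 0 else lam * x

def switch_utility(dq, dp, lam=2.0):
    """Utility of switching to a brand with quality change dq at extra price dp."""
    return gain(dq, lam) + gain(-dp, lam)

def promo_asymmetry(dq, dp, d, lam=2.0):
    """(national's pull under discount d) minus (store brand's pull under d)."""
    to_national = switch_utility(dq, dp - d, lam)     # store customer trades up
    to_store = switch_utility(-dq, -(dp + d), lam)    # national customer trades down
    return to_national - to_store

# Large quality gap: the higher quality/higher price brand promotes better...
hi_gap = promo_asymmetry(dq=1.5, dp=1.0, d=0.3)
# ...small quality gap: the asymmetry reverses toward the cheaper brand.
lo_gap = promo_asymmetry(dq=0.5, dp=1.0, d=0.3)
```

With these numbers the asymmetry is positive for the large quality gap and negative for the small one, reproducing the conditional direction of promotion asymmetry described above.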

Journal ArticleDOI
TL;DR: It is becoming increasingly evident that a consumer's brand choice decision in low-involvement categories does not involve full search, evaluation, and comparison of price information of all brands.
Abstract: It is becoming increasingly evident that a consumer's brand choice decision in low-involvement categories does not involve full search, evaluation, and comparison of price information of all brands...

Journal ArticleDOI
TL;DR: An integrated model which consists of six variables and incorporates key elements of both models was developed to examine determinants of CASE acceptance and indicates that ease of use has the largest influence on CASE acceptance, followed by long-term consequences.

Journal Article
TL;DR: It is demonstrated that ligation of PECAM on the surface of NK cells activates their beta 2 integrins and could play an important role in the extravasation of NK cells into tissues for constitutive surveillance and into sites of inflammation.
Abstract: Platelet/endothelial cell adhesion molecule-1 (PECAM-1, CD31) is a glycoprotein expressed on the surfaces of monocytes, neutrophils, platelets, a subpopulation of T cells, and, as described in this work, on NK cells. It is also concentrated at the junctions between endothelial cells (EC) in culture, and is expressed on continuous endothelia in all tissues. PECAM has been shown to be involved in monocyte and neutrophil transendothelial migration in vitro and in vivo. The function of PECAM in NK cell interaction with EC has never been studied. In this work, we demonstrate that ligation of PECAM on the surface of NK cells activates their beta 2 integrins. Anti-PECAM Abs added to NK cells caused a 2.5- to 4-fold increase in the binding of these cells to monolayers of EC or 3T3 cells transfected with ICAM-1, and this was inhibited by a mAb against CD18. PECAM also plays a role in NK cell transendothelial migration. Anti-PECAM Abs inhibited 50% of NK cell transmigration through resting EC in an in vitro system. The transmigration of CD56dim and CD56bright cells was inhibited equally. IFN-gamma increased NK cell transmigration; the transmigration of CD56bright cells was increased to a much greater extent than CD56dim transmigration (4-fold vs 1.5-fold). Anti-PECAM inhibited the transmigration of CD56dim cells by 30%, while that of CD56bright cells was not blocked. These studies demonstrate that PECAM-1 could play an important role in the extravasation of NK cells into tissues for constitutive surveillance and into sites of inflammation.

Journal ArticleDOI
TL;DR: In this article, the authors consider a linear system with Markovian switching that is perturbed by Gaussian-type noise and show that, under certain conditions, the perturbed system is also stable.