
Showing papers by "Stevens Institute of Technology" published in 2008


Proceedings ArticleDOI
01 Jun 2008
TL;DR: Simulation analyses on several machine learning data sets show the effectiveness of the ADASYN sampling approach across five evaluation metrics.
Abstract: This paper presents a novel adaptive synthetic (ADASYN) sampling approach for learning from imbalanced data sets. The essential idea of ADASYN is to use a weighted distribution for different minority class examples according to their level of difficulty in learning, where more synthetic data is generated for minority class examples that are harder to learn compared to those minority examples that are easier to learn. As a result, the ADASYN approach improves learning with respect to the data distributions in two ways: (1) reducing the bias introduced by the class imbalance, and (2) adaptively shifting the classification decision boundary toward the difficult examples. Simulation analyses on several machine learning data sets show the effectiveness of this method across five evaluation metrics.

2,675 citations
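
A minimal sketch of the weighting-and-interpolation idea described in the abstract above, assuming binary labels with 1 as the minority class. It is an illustration of the ADASYN scheme rather than the authors' implementation, and it borrows scikit-learn's NearestNeighbors for the neighbor search.

```python
# ADASYN-style oversampling sketch (illustrative; parameter names are assumptions).
import numpy as np
from sklearn.neighbors import NearestNeighbors

def adasyn_sample(X, y, beta=1.0, k=5, rng=None):
    rng = np.random.default_rng(rng)
    X_min, X_maj = X[y == 1], X[y == 0]
    if len(X_min) < 2:
        return X, y
    G = max(0, int((len(X_maj) - len(X_min)) * beta))    # number of synthetic samples to create

    # Difficulty of each minority point = fraction of majority points among its k neighbors.
    _, idx = NearestNeighbors(n_neighbors=k + 1).fit(X).kneighbors(X_min)
    ratio = (y[idx[:, 1:]] == 0).mean(axis=1)
    if ratio.sum() == 0 or G == 0:
        return X, y                                      # nothing is hard to learn
    g = rng.multinomial(G, ratio / ratio.sum())          # more samples where learning is harder

    # Interpolate between each minority point and its minority-class neighbors.
    _, idx_min = NearestNeighbors(n_neighbors=min(k, len(X_min) - 1) + 1).fit(X_min).kneighbors(X_min)
    synthetic = []
    for i, g_i in enumerate(g):
        for _ in range(g_i):
            j = rng.choice(idx_min[i, 1:])
            synthetic.append(X_min[i] + rng.random() * (X_min[j] - X_min[i]))
    X = np.vstack([X, synthetic])
    y = np.concatenate([y, np.ones(len(synthetic), dtype=y.dtype)])
    return X, y
```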


Journal ArticleDOI
TL;DR: It is demonstrated that consumers understand the value difference between favorable news and unfavorable news and respond accordingly, and the impact of online reviews on sales diminishes over time, suggesting that firms need not provide incentives for customers to write reviews beyond a certain time period after products have been released.
Abstract: Online product reviews provided by consumers who previously purchased products have become a major information source for consumers and marketers regarding product quality. This study extends previous research by conducting a more compelling test of the effect of online reviews on sales. In particular, we consider both quantitative and qualitative aspects of online reviews, such as reviewer quality, reviewer exposure, product coverage, and temporal effects. Using transaction cost economics and uncertainty reduction theories, this study adopts a portfolio approach to assess the effectiveness of the online review market. We show that consumers understand the value difference between favorable news and unfavorable news and respond accordingly. Furthermore, when consumers read online reviews, they pay attention not only to review scores but also to other contextual information such as a reviewer's reputation and reviewer exposure. The market responds more favorably to reviews written by reviewers with a better reputation and higher exposure. Finally, we demonstrate that the impact of online reviews on sales diminishes over time. This suggests that firms need not provide incentives for customers to write reviews beyond a certain time period after products have been released.

559 citations


Journal ArticleDOI
TL;DR: A biomatrix was prepared from rice husk, a lignocellulosic agro-industrial waste, for the removal of several heavy metals as a function of pH and metal concentration in single and mixed solutions; characterization of the biomatrix indicated the presence of several functional groups capable of binding metal ions.

521 citations


Journal ArticleDOI
TL;DR: By developing near-perfect samples that delay the transition from a dewetted (Cassie) to a wetted (Wenzel) state until near the theoretical limit, this work achieves giant slip lengths, as large as 185 μm.
Abstract: We study experimentally how two key geometric parameters (pitch and gas fraction) of textured hydrophobic surfaces affect liquid slip. The two are independently controlled on precisely fabricated microstructures of posts and grates, and the slip length of water on each sample is measured using a rheometer system. The slip length increases linearly with the pitch but dramatically with the gas fraction above 90%, the latter trend being more pronounced on posts than on grates. Once the surfaces are designed for very large slips (>20 μm), however, further increase is not obtained in regular practice because the meniscus loses its stability. By developing near-perfect samples that delay the transition from a dewetted (Cassie) to a wetted (Wenzel) state until near the theoretical limit, we achieve giant slip lengths, as large as 185 μm.

401 citations


Book ChapterDOI
16 Jun 2008
TL;DR: The findings indicate that BPMN is used in groups of several, well-defined construct clusters, but less than 20% of its vocabulary is regularly used and some constructs did not occur in any of the models the authors analyzed.
Abstract: The Business Process Modeling Notation (BPMN) is an increasingly important industry standard for the graphical representation of business processes. BPMN offers a wide range of modeling constructs, significantly more than other popular languages. However, not all of these constructs are equally important in practice, as business analysts frequently use arbitrary subsets of BPMN. In this paper we investigate what these subsets are, and how they differ between academic, consulting, and general use of the language. We analyzed 120 BPMN diagrams using mathematical and statistical techniques. Our findings indicate that BPMN is used in groups of several, well-defined construct clusters, but less than 20% of its vocabulary is regularly used and some constructs did not occur in any of the models we analyzed. While the average model contains just 9 different BPMN constructs, models of this complexity have typically just 4-5 constructs in common, which means that only a small agreed subset of BPMN has emerged. Our findings have implications for the entire ecosystem of analysts and modelers in that they provide guidance on how to reduce language complexity, which should increase the ease and speed of process modeling.

355 citations
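
The "constructs in common" statistic quoted above can be illustrated with a small sketch in which each diagram is reduced to the set of BPMN construct names it uses. The toy models below are hypothetical and only show how such vocabulary overlap might be computed, not the authors' dataset or method.

```python
# Illustrative vocabulary-overlap computation (hypothetical toy models, not the study's data).
from itertools import combinations
from collections import Counter

def construct_usage(models):
    """models: list of sets of construct names used by each diagram."""
    freq = Counter(c for m in models for c in m)                  # how often each construct appears
    avg_size = sum(len(m) for m in models) / len(models)          # average constructs per model
    overlaps = [len(a & b) for a, b in combinations(models, 2)]   # pairwise constructs in common
    return freq, avg_size, sum(overlaps) / len(overlaps)

models = [{"task", "sequence flow", "start event", "end event", "xor gateway"},
          {"task", "sequence flow", "start event", "end event", "pool", "message flow"},
          {"task", "sequence flow", "end event", "and gateway"}]
freq, avg_size, avg_common = construct_usage(models)
print(freq.most_common(3), avg_size, avg_common)
```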


Journal ArticleDOI
TL;DR: The demonstrated hybrid method should allow much better control over the distributions of various ingredients in tissue engineering scaffolds, including the concentrations of drugs/growth factors as well as porosity, mechanical properties, wettability, and biodegradation rate, aiming to mimic the elegant complex distributions found in native tissue.

330 citations


Journal ArticleDOI
TL;DR: In this article, the association of tannic acid (TA) with neutral or charged polymers in solution and at surfaces is reported, contrasting hydrogen-bonded and electrostatically associated polymer/TA complexes and TA/polymer layer-by-layer (LbL) films with respect to their stability across the pH range.
Abstract: We report on the association of tannic acid (TA) with neutral or charged polymers in solution and at surfaces and contrast hydrogen-bonded and electrostatically associated polymer/TA complexes and TA/polymer layer-by-layer (LbL) films with respect to their stability across the pH range. The neutral polymers used for hydrogen bonding with TA were poly(N-vinylcaprolactam) (PVCL), poly(N-vinylpyrrolidone) (PVPON), poly(ethylene oxide) (PEO), or poly(N-isopropylacrylamide) (PNIPAM), and the polymer used to explore electrostatic binding with TA was 90% quaternized poly(4-vinylpyridine) (Q90). Association of TA with polymers in solution was explored by measuring the turbidity of solutions. At surfaces, LbL film deposition and pH stability were followed by phase-modulated ellipsometry and in-situ Fourier transform infrared spectroscopy in attenuated total reflection mode (ATR-FTIR). While electrostatically stabilized films of TA with Q90 could not be deposited at low pH values (pH = 2), hydrogen-bonded films of TA with PVCL, PVP...

274 citations


Journal ArticleDOI
TL;DR: This paper reviews the SoS literature to illustrate the need to create an SoSE management framework based on the demands of constant technological progress in a complex dynamic environment and utilizes modified fault, configuration, accounting, performance, and security (FCAPS) network principles (SoSE management conceptual areas).
Abstract: As our knowledge of system of systems (SoS) has grown and evolved, so has our understanding of how to engineer and manage them. In systems engineering, we develop architectures and frameworks to bring meaning to this kind of uncertainty, but for SoS engineering (SoSE) we are still in search of how we can structure this understanding. In this paper, we review the SoS literature to illustrate the need to create an SoSE management framework based on the demands of constant technological progress in a complex dynamic environment. We conclude from this review that the history and evolution of defining SoS has shown that: (1) SoS can be defined by distinguishing characteristics and (2) SoS can be viewed as a network where the "best practices" of network management can be applied to SoSE. We use these two theories as a foundation for our objective to create an effective SoSE management framework. To accomplish this, we utilize modified fault, configuration, accounting, performance, and security (FCAPS) network principles (SoSE management conceptual areas). Furthermore, cited distinguishing characteristics of SoS are also used to present an SoSE management framework. We conclude with a case analysis of this framework using a known and well-documented SoS (i.e., Integrated Deepwater System) to illustrate how to better understand, engineer, and manage within the domain of SoSE.

266 citations


Journal ArticleDOI
TL;DR: In this article, the effect of roughness on wetting mechanisms and the relevant roughness parameters are discussed, arguing that traditional roughness parameters are not always appropriate for the analysis of wetting because of the hierarchical nature of wetting mechanisms and interfaces.
Abstract: Superhydrophobicity can be used for many applications that require non-adhesive and water-repellent surfaces. A successful design of superhydrophobic surfaces requires a correct assessment of the surface roughness effect on wetting. Roughness is an important property in surface mechanics, physics, chemistry, and biology, and it is critical for many tribological applications. Roughness can be defined in different ways, and the definition should be adequate to the problem under investigation. Our recent studies of biological and biomimetic superhydrophobic surfaces show that traditional roughness parameters, such as the root-mean-square, correlation length, or fractal dimension, are not always appropriate for the analysis of wetting. This is, in particular, due to the hierarchical nature of wetting mechanisms and interfaces. We discuss the effect of roughness on wetting mechanisms and relevant roughness parameters and ways to broaden the concept and scope of surface roughness.

235 citations


Journal ArticleDOI
TL;DR: In this paper, the authors developed a new spatial individual-based forest model that includes a perfect plasticity formulation for crown shape and derived a series of analytical results including equilibrium abundances for trees of different crown shapes, stability conditions, transient behaviors, such as the constant yield law and self-thinning exponents.
Abstract: Individual-based forest simulators, such as TASS and SORTIE, are spatial stochastic processes that predict properties of populations and communities by simulating the fate of every plant throughout its life cycle. Although they are used for forest management and are able to predict dynamics of real forests, they are also analytically intractable, which limits their usefulness to basic scientists. We have developed a new spatial individual-based forest model that includes a perfect plasticity formulation for crown shape. Its structure allows us to derive an accurate approximation for the individual-based model that predicts mean densities and size structures using the same parameter values and functional forms, and also it is analytically tractable. The approximation is represented by a system of von Foerster partial differential equations coupled with an integral equation that we call the perfect plasticity approximation (PPA). We have derived a series of analytical results including equilibrium abundances for trees of different crown shapes, stability conditions, transient behaviors, such as the constant yield law and self-thinning exponents, and two species coexistence conditions.

204 citations
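
As a rough orientation to the "von Foerster partial differential equations" mentioned in the abstract, a generic size-structured balance law of that type can be written as follows. This is a schematic form with assumed symbol names, not the paper's actual PPA system, which couples several such equations with an integral canopy-closure condition.

```latex
% Schematic von Foerster-type equation (illustrative; n, G, mu, z* are assumed symbols):
%   n(d,t)   density of trees with stem diameter d at time t
%   G(d,z*)  diameter growth rate,  mu(d,z*)  mortality rate,  z*  canopy-closure height
\[
  \frac{\partial n(d,t)}{\partial t}
  + \frac{\partial}{\partial d}\bigl[G(d,z^{*})\,n(d,t)\bigr]
  = -\,\mu(d,z^{*})\,n(d,t),
  \qquad
  n(0,t) = \text{recruitment flux given by the coupled integral equation.}
\]
```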


Journal ArticleDOI
TL;DR: It appears that the presence of nano-aluminum particles did not have an adverse effect on the growth of California red kidney bean and rye grass plants in the concentration range tested, and soil respiration studies show no statistically significant differences in the timing and size of peaks in CO2 production or in the total mineralization of glucose.

Proceedings ArticleDOI
24 Oct 2008
TL;DR: This paper proposes an analytical approach based on Fenton's approximation and Markov inequality and obtains a lower bound on the probability of a successful PUEA on a secondary user by a set of co-operating malicious users.
Abstract: In this paper, we study the denial-of-service (DoS) attack on secondary users in a cognitive radio network by primary user emulation (PUE). Most approaches in the literature on primary user emulation attacks (PUEA) discuss mechanisms to deal with the attacks but not analytical models. Simulation studies and results from test beds have been presented but no analytical model relating the various parameters that could cause a PUE attack has been proposed and studied. We propose an analytical approach based on Fenton's approximation and Markov inequality and obtain a lower bound on the probability of a successful PUEA on a secondary user by a set of co-operating malicious users. We consider a fading wireless environment and discuss the various parameters that can affect the feasibility of a PUEA. We show that the probability of a successful PUEA increases with the distance between the primary transmitter and secondary users. This is the first analytical treatment to study the feasibility of a PUEA.
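
For orientation, Fenton's approximation referenced above replaces a sum of independent lognormal received powers with a single lognormal whose first two moments match. The sketch below shows that moment-matching step with illustrative parameter values, not the paper's system model or bound.

```python
# Fenton (Fenton-Wilkinson) lognormal moment matching (illustrative values, not the paper's).
import numpy as np

def fenton_lognormal_params(mus, sigmas):
    """mus, sigmas: parameters of ln X_i for independent lognormal X_i (natural log).
    Returns (mu_z, sigma_z) of the lognormal approximating Z = sum_i X_i."""
    mus, sigmas = np.asarray(mus, float), np.asarray(sigmas, float)
    means = np.exp(mus + sigmas**2 / 2)                     # E[X_i]
    variances = np.exp(2 * mus + 2 * sigmas**2) - means**2  # Var[X_i]
    m1, var = means.sum(), variances.sum()                  # moments of Z under independence
    sigma_z2 = np.log(1 + var / m1**2)
    return np.log(m1) - sigma_z2 / 2, np.sqrt(sigma_z2)

# Quick Monte Carlo sanity check of the approximation:
rng = np.random.default_rng(0)
mus, sigmas = [0.0, 0.5, -0.3], [0.6, 0.8, 0.7]
z = sum(rng.lognormal(m, s, 200_000) for m, s in zip(mus, sigmas))
print(fenton_lognormal_params(mus, sigmas), (np.log(z).mean(), np.log(z).std()))
```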

Journal ArticleDOI
01 Jan 2008
TL;DR: This paper proposes a technique for distributed multimedia transmission over the secondary user network, which makes use of opportunistic spectrum access with the help of cognitive radios and uses digital fountain codes to distribute the multimedia content over unused spectrum.
Abstract: With the explosive growth of wireless multimedia applications over the wireless Internet in recent years, the demand for radio spectral resources has increased significantly. In order to meet the quality of service, delay, and large bandwidth requirements, various techniques such as source and channel coding, distributed streaming, multicast, etc. have been considered. In this paper, we propose a technique for distributed multimedia transmission over the secondary user network, which makes use of opportunistic spectrum access with the help of cognitive radios. We use digital fountain codes to distribute the multimedia content over unused spectrum and also to compensate for the loss incurred due to primary user interference. Primary user traffic is modelled as a Poisson process. We develop the techniques to select appropriate channels and study the trade-offs between link reliability, spectral efficiency and coding overhead. Simulation results are presented for the secondary spectrum access model.
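
A toy LT-style fountain code sketch is given below to illustrate the rateless-coding idea: each packet is the XOR of a random subset of source blocks, and a receiver peels degree-one packets until the content is recovered. The uniform degree choice and packet counts here are illustrative assumptions; the paper's actual code construction, degree distribution, and channel selection are not reproduced.

```python
# Toy LT-style fountain encode/decode (illustrative degree distribution, not the paper's).
import random
from functools import reduce

def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def lt_encode(blocks, num_packets, seed=0):
    rng = random.Random(seed)
    packets = []
    for _ in range(num_packets):
        d = rng.randint(1, len(blocks))                    # toy degree; a (robust) soliton is typical
        idx = frozenset(rng.sample(range(len(blocks)), d))
        packets.append((idx, reduce(xor_bytes, (blocks[i] for i in idx))))
    return packets

def lt_decode(packets, n_blocks, block_len):
    decoded, progress = {}, True
    while progress and len(decoded) < n_blocks:
        progress = False
        for idx, payload in packets:
            remaining = set(idx) - decoded.keys()
            if len(remaining) == 1:                        # peel a packet with one unknown block
                value = payload
                for j in set(idx) & decoded.keys():
                    value = xor_bytes(value, decoded[j])
                decoded[remaining.pop()] = value
                progress = True
    return [decoded.get(i, b"\x00" * block_len) for i in range(n_blocks)]

blocks = [bytes([i]) * 4 for i in range(8)]
recovered = lt_decode(lt_encode(blocks, num_packets=20), n_blocks=8, block_len=4)
print(sum(r == b for r, b in zip(recovered, blocks)), "of 8 blocks recovered")
```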

Journal ArticleDOI
TL;DR: In this article, two optimization models are presented that assist management in choosing process improvement opportunities for a multi-stage, asynchronous manufacturing process with the opportunity to improve quality (scrap and rework rates) at each of the stages.

Journal ArticleDOI
TL;DR: This analysis shows that MHT and KSAC are vulnerable to low complexity known- and/or chosen-plaintext attacks, and points out some disadvantages of RAC over the classical compress-then-encrypt approach.
Abstract: Encryption is one of the fundamental technologies that is used in digital rights management. Unlike ordinary computer applications, multimedia applications generate large amounts of data that has to be processed in real time. So, a number of encryption schemes for multimedia applications have been proposed in recent years. We analyze the following proposed methods for multimedia encryption: key-based multiple Huffman tables (MHT), arithmetic coding with key-based interval splitting (KSAC), and randomized arithmetic coding (RAC). Our analysis shows that MHT and KSAC are vulnerable to low complexity known- and/or chosen-plaintext attacks. Although we do not provide any attacks on RAC, we point out some disadvantages of RAC over the classical compress-then-encrypt approach.

Posted Content
TL;DR: In this paper, the existence of online review manipulation was investigated using data from Amazon and Barnes & Noble, and it was shown that the manipulation strategy of firms seems to be a monotonically decreasing function of the product's true quality or the mean consumer rating of that product.
Abstract: Increasingly, consumers depend on social information channels, such as user-posted online reviews, to make purchase decisions. These reviews are assumed to be unbiased reflections of other consumers' experiences with the products or services. While this is extensively assumed, the literature has not tested the existence or non-existence of review manipulation. By using data from Amazon and Barnes & Noble, our study investigates if vendors, publishers, and writers consistently manipulate online consumer reviews. We document the existence of online review manipulation and show that the manipulation strategy of firms seems to be a monotonically decreasing function of the product's true quality or the mean consumer rating of that product. Hence, manipulation decreases the informativeness of online reviews. Furthermore, though consumers understand the existence of manipulation, they can only partially correct for it based on their expectation of the overall level of manipulation. Hence, vendors are able to change the final outcomes by manipulating online reviewers. In addition, we demonstrate that at the early stages, after an item is released to the Amazon market, both price and reviews serve as quality indicators. Thus, at this stage, a higher price leads to an increase in sales instead of a decrease in sales. At the late stages, price assumes its normal role, meaning a higher price leads to a decrease in sales. Finally, on average, there is a higher level of manipulation on Barnes & Noble than on Amazon.

Proceedings ArticleDOI
02 Dec 2008
TL;DR: This paper reviews some of the recent work in understanding the newest botnets that employ P2P technology to increase their survivability, and to conceal the identities of their operators, and compares how current proposals for dealing with P2P botnets would or would not affect a pure-P2P botnet like Nugache.
Abstract: The research community is now focusing on the integration of peer-to-peer (P2P) concepts as incremental improvements to distributed malicious software networks (now generically referred to as botnets). While much research exists in the field of P2P in terms of protocols, scalability, and availability of content in P2P file sharing networks, less exists (until this last year) in terms of the shift in C&C from central C&C using clear-text protocols, such as IRC and HTTP, to distributed mechanisms for C&C where the botnet becomes the C&C, and is resilient to attempts to mitigate it. In this paper we review some of the recent work in understanding the newest botnets that employ P2P technology to increase their survivability, and to conceal the identities of their operators. We extend work done to date in explaining some of the features of the Nugache P2P botnet, and compare how current proposals for dealing with P2P botnets would or would not affect a pure-P2P botnet like Nugache. Our findings are based on a comprehensive 2-year study of this botnet.

Journal ArticleDOI
TL;DR: In this paper, a linear weighting function is proposed that is motivated by likelihood ratio testing concepts and achieves superior detection performance; in view of the EWMA estimate's lower efficiency in tracking relatively large mean shifts, a generalized EWMA estimate is also proposed as an alternative.
Abstract: Adaptive Cumulative SUM charts (ACUSUM) have recently been proposed for providing overall good detection over a range of mean shift sizes. The basic idea of the ACUSUM chart is to first adaptively update the reference value based on an Exponentially Weighted Moving Average (EWMA) estimate and then to assign a weight to it using a certain type of weighting function. A linear weighting function is proposed that is motivated by likelihood ratio testing concepts and that achieves superior detection performance. Moreover, in view of the EWMA estimate's lower efficiency in tracking relatively large mean shifts, a generalized EWMA estimate is proposed as an alternative. A comparison of the run length performance of the proposed ACUSUM scheme and other control charts is shown to be favorable to the former.
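
A small one-sided sketch of the adaptive-CUSUM idea summarized above: an EWMA of the standardized observations estimates the current shift, which then sets both the reference value and a simple linear weight. The parameter values, the weighting function, and the restart-after-signal behavior are illustrative assumptions rather than the authors' exact chart design.

```python
# One-sided adaptive CUSUM sketch for standardized data (illustrative, not the authors' chart).
import numpy as np

def adaptive_cusum(x, lam=0.1, delta_min=0.5, h=5.0):
    """x: observations standardized to mean 0, sd 1 in control; returns alarm indices."""
    delta_hat, c, alarms = delta_min, 0.0, []
    for t, xt in enumerate(np.asarray(x, float)):
        # EWMA estimate of the current mean shift, floored at the smallest shift of interest
        delta_hat = max(delta_min, (1 - lam) * delta_hat + lam * xt)
        # CUSUM update with adaptive reference value delta_hat/2 and a simple linear weight
        c = max(0.0, c + delta_hat * (xt - delta_hat / 2.0))
        if c > h:
            alarms.append(t)
            c = 0.0                                        # restart after a signal
    return alarms

# Toy usage: 50 in-control points followed by a 1-sd upward mean shift
rng = np.random.default_rng(1)
data = np.concatenate([rng.normal(0, 1, 50), rng.normal(1.0, 1, 50)])
print(adaptive_cusum(data))
```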

Journal ArticleDOI
TL;DR: This paper proposes two probabilistic power adaptation algorithms and analyzes their theoretical properties along with their numerical behavior; the discrete power control iterations are approximated by an equivalent ordinary differential equation to prove that the proposed stochastic learning power control algorithm converges to a stable Nash equilibrium.
Abstract: Distributed power control is an important issue in wireless networks. Recently, noncooperative game theory has been applied to investigate interesting solutions to this problem. The majority of these studies assumes that the transmitter power level can take values in a continuous domain. However, recent trends such as the GSM standard and Qualcomm's proposal to the IS-95 standard use a finite number of discretized power levels. This motivates the need to investigate solutions for distributed discrete power control, which is the primary objective of this paper. We first note that, by simply discretizing, the previously proposed continuous power adaptation techniques will not suffice. This is because a simple discretization does not guarantee convergence and uniqueness. We propose two probabilistic power adaptation algorithms and analyze their theoretical properties along with the numerical behavior. The distributed discrete power control problem is formulated as an N-person, nonzero sum game. In this game, each user evaluates a power strategy by computing a utility value. This evaluation is performed using a stochastic iterative procedure. We approximate the discrete power control iterations by an equivalent ordinary differential equation to prove that the proposed stochastic learning power control algorithm converges to a stable Nash equilibrium. Conditions when more than one stable Nash equilibrium or even only mixed equilibrium may exist are also studied. Experimental results are presented for several cases and compared with the continuous power level adaptation solutions.
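
The stochastic learning idea described above can be sketched, for a single user, as a learning-automaton-style update of a probability vector over discrete power levels: the probability of the level just played is increased in proportion to the utility it earned. The utility function, step size, and power levels below are illustrative assumptions, not the specific algorithms analyzed in the paper.

```python
# Learning-automaton-style probabilistic power adaptation (illustrative utility and step size).
import numpy as np

def update_probabilities(p, chosen, utility, step=0.05):
    """p: probabilities over power levels; chosen: index played; utility: payoff in [0, 1]."""
    p = np.asarray(p, float).copy()
    p += step * utility * (np.eye(len(p))[chosen] - p)    # move mass toward the rewarded action
    return p / p.sum()

# Toy usage: three power levels; the middle level tends to give the best payoff here
rng = np.random.default_rng(0)
levels = np.array([0.1, 0.5, 1.0])                        # watts (illustrative)
p = np.ones(3) / 3
for _ in range(500):
    a = rng.choice(3, p=p)
    utility = float(np.clip(1.0 - abs(levels[a] - 0.5) + rng.normal(0, 0.05), 0, 1))
    p = update_probabilities(p, a, utility)
print(p.round(3))                                          # mass tends to concentrate on 0.5 W
```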

Proceedings ArticleDOI
18 May 2008
TL;DR: An end-to-end semantic property is introduced, based on a model that allows observations of intermediate low states as well as termination, and static enforcement is provided by combining type-checking with program verification techniques applied to the small subprograms that carry out declassifications.
Abstract: This paper provides a way to specify expressive declassification policies, in particular, when, what, and where policies that include conditions under which downgrading is allowed. Secondly, an end-to-end semantic property is introduced, based on a model that allows observations of intermediate low states as well as termination. An attacker's knowledge only increases at explicit declassification steps, and within limits set by policy. Thirdly, static enforcement is provided by combining type-checking with program verification techniques applied to the small subprograms that carry out declassifications. Enforcement is proved sound for a simple programming language and the extension to object-oriented programs is described.

Book ChapterDOI
07 Jul 2008
TL;DR: A novel logic for error-avoiding partial correctness of programs featuring shared mutable objects is presented; using a first-order assertion language, it facilitates heap-local reasoning about object invariants.
Abstract: Shared mutable objects pose grave challenges in reasoning, especially for data abstraction and modularity. This paper presents a novel logic for error-avoiding partial correctness of programs featuring shared mutable objects. Using a first order assertion language, the logic provides heap-local reasoning about mutation and separation, via ghost fields and variables of type 'region' (finite sets of object references). A new form of modifies clause specifies write, read, and allocation effects using region expressions; this supports effect masking and a frame rule that allows a command to read state on which the framed predicate depends. Soundness is proved using a standard program semantics. The logic facilitates heap-local reasoning about object invariants: disciplines such as ownership are expressible but not hard-wired in the logic.

Posted Content
TL;DR: This paper developed a model in which CEOs envy each other based on their compensation and showed that envy can cause merger waves even when the shock that precipitated the first merger in the wave is purely idiosyncratic.
Abstract: We develop a model in which CEOs envy each other based on their compensation. When CEO compensation is increasing in the firm's market value and size, we show that envy can cause merger waves even when the shock that precipitated the first merger in the wave is purely idiosyncratic. The analysis produces numerous predictions, some of which are as follows. First, the earlier acquisitions in a merger wave display higher synergies than the later acquisitions in the wave, so bidder returns will be higher for the earlier acquisitions. Second, earlier acquisitions in a merger wave involve smaller targets than later acquisitions. Third, the gain in compensation for the top management team of the acquiring firm should be higher for earlier acquisitions than for later acquisitions. Fourth, more envious CEOs are more likely to engage in acquisitions and pay higher premia. Fifth, an envy-generated merger wave is more likely in a bull stock market than in a bear market even when there is no mispricing that creates opportunities to time the market, so the quality of bull-market acquisitions is lower than that of bear-market acquisitions. Finally, controlling for the dispersion in firm values, the bull-market-versus-bear-market effect largely disappears. We test the first three predictions and find strong empirical support.

Journal ArticleDOI
TL;DR: In this article, a heterogeneous cohesive (HC) crack model was developed to predict macroscopic strength of materials based on meso-scale random fields of fracture properties, and a new stress-based criterion to determine the crack growth direction was developed by taking into account both the crack-tip stress state and heterogeneity of the tensile strength.

Journal ArticleDOI
TL;DR: In this article, the effects of the type of the antisolvent and the presence of multiwalled carbon nanotubes (MWNTs) on membrane morphology and the crystal structure developed within the membranes were investigated by wide angle X-ray diffraction, Fourier transform infrared spectroscopy (FTIR), scanning electron microscopy (SEM), and differential scanning calorimetry (DSC).
Abstract: Microporous polyvinylidene fluoride (PVDF) and PVDF nanocomposite membranes were prepared via an isothermal immersion precipitation method using two different antisolvents (ethanol and water). The structure and morphology of the resulting membranes were investigated by wide angle X-ray diffraction (WAXD), Fourier transform infrared spectroscopy (FTIR), scanning electron microscopy (SEM), and differential scanning calorimetry (DSC). The effects of the type of the antisolvent and the presence of multiwalled carbon nanotubes (MWNTs) on membrane morphology and the crystal structure developed within the membranes were studied. The crystallization of the PVDF upon immersion precipitation occurred predominantly in the α-phase when water is used as the antisolvent or in the absence of the carbon nanotubes. On the other hand, β-phase crystallization of the PVDF was promoted upon the use of ethanol as the antisolvent in conjunction with the incorporation of the MWNTs. The morphology and the total crystallinity of the PVDF membranes were also affected by the incorporation of the MWNTs and the antisolvent used, suggesting that the microstructure and the ultimate properties of the PVDF membranes can be engineered upon the judicious selection of crystallization conditions and the use of carbon nanotubes.

Journal ArticleDOI
TL;DR: In this paper, the relation between wind driven wave slope variance and sea surface wind speed was investigated using global satellite observations of lidar backscatter measurements acquired by the Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observation (CALIPSO) mission.
Abstract: Global satellite observations of lidar backscatter measurements acquired by the Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observation (CALIPSO) mission and collocated sea surface wind speed data from the Advanced Microwave Scanning Radiometer for the Earth Observing System (AMSR-E), are used to investigate the relation between wind driven wave slope variance and sea surface wind speed. The new slope variance – wind speed relation established from this study is similar to the linear relation from Cox-Munk (1954) and the log-linear relation from Wu (1990) for wind speed larger than 7 m/s and 13.3 m/s, respectively. For wind speed less than 7 m/s, the slope variance is proportional to the square root of the wind speed, assuming a two dimensional isotropic Gaussian wave slope distribution. This slope variance – wind speed relation becomes linear if a one dimensional Gaussian wave slope distribution and linear slope variance – wind speed relation are assumed. Contributions from whitecaps and subsurface backscattering are effectively removed by using 532 nm lidar depolarization measurements. This new slope variance – wind speed relation is used to derive sea surface wind speed from CALIPSO single shot lidar measurements (70 m spot size), after correcting for atmospheric attenuation. The CALIPSO wind speed result agrees with the collocated AMSR-E wind speed, with 1.2 m/s rms error. Ocean surface with lowest atmospheric loading and moderate wind speed (7–9 m/s) is used as target for lidar calibration correction.
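
A hedged sketch of a piecewise slope-variance versus wind-speed relation of the kind described above, and its inversion to retrieve wind speed from a measured slope variance. The Cox-Munk coefficients are the standard published ones; the square-root branch below 7 m/s is fixed here only by continuity at 7 m/s, and the high-wind log-linear branch is omitted, so the numbers are illustrative rather than the paper's fit.

```python
# Piecewise slope-variance <-> wind-speed sketch (low-wind coefficient is an assumption).
import numpy as np

def slope_variance(u):
    """u: wind speed in m/s; returns mean-square wave slope (dimensionless)."""
    u = np.asarray(u, float)
    cox_munk = 0.003 + 0.00512 * u                                   # Cox-Munk (1954) linear relation
    k = (0.003 + 0.00512 * 7.0) / np.sqrt(7.0)                       # continuity at 7 m/s (assumption)
    return np.where(u < 7.0, k * np.sqrt(u), cox_munk)

def wind_from_variance(var):
    """Invert the relation above for a measured slope variance."""
    var = np.asarray(var, float)
    u_lin = (var - 0.003) / 0.00512
    k = (0.003 + 0.00512 * 7.0) / np.sqrt(7.0)
    return np.where(u_lin >= 7.0, u_lin, (var / k) ** 2)

print(wind_from_variance(slope_variance([3.0, 10.0])))               # recovers ~[3, 10]
```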

Journal ArticleDOI
TL;DR: 5-ALA-conjugated nanoparticles offer a new modality for selective and efficient destruction of tumor cells, with minimal damage to fibroblasts in the 5-ALA-photodynamic therapy.

Journal ArticleDOI
TL;DR: The concept of superhydrophobicity was introduced in the 1990s as a result of the investigation of the microstructure of extremely water-repellent plant leaves as mentioned in this paper.
Abstract: The concept of superhydrophobicity was introduced in the 1990s as a result of the investigation of the microstructure of extremely water-repellent plant leaves. Since that time, artificial superhydrophobic surfaces have been developed and implemented, stimulated by advances in nanotechnology, and giving one of the most successful examples of a bio-inspired technology transferred into engineering applications. Superhydrophobicity is usually defined as the ability of a surface to have (i) a very high water contact angle (CA) and (ii) low CA hysteresis. Here we argue that the ability of a water droplet to bounce off a surface constitutes a third property that is crucial for applications. Furthermore, this property is naturally related to the first two properties, since the energy barriers separating the 'sticky' and 'non-sticky' states needed for bouncing droplets have the same origin as those needed for high CA and for low CA hysteresis.

Journal ArticleDOI
TL;DR: The optimization design of the combined Shewhart chart and CUSUM chart used in Statistical Process Control (SPC) is presented, and a new feature pertaining to an additional charting parameter w (the exponential of the sample mean shift) is investigated, with the hope of further enhancing the detection effectiveness of the X-bar & CUSUM chart.

Journal ArticleDOI
TL;DR: Combined dynamic delivery of dispersin B and rifampicin was found to be effective for complete removal of the S. epidermidis biofilm.
Abstract: Microfluidic devices were used to study the influences of hydrodynamics of local microenvironments on Staphylococcus epidermidis (S. epidermidis) biofilm formation and the effects of a poly(β-1,6-N-acetyl glucosamine)-hydrolyzing enzyme (dispersin B) and/or an antibiotic (rifampicin) on the detachment of the biofilm. Elongated, monolayered biofilm morphologies were observed at high flow velocity and fluid shear locations whereas large clump-like, multilayered biofilm structures were produced at low flow velocity and fluid shear locations. Upon dispersin B treatment, most of the biofilm was detached from the microchannel surface. However, a trace amount of bacterial cells could not be removed from corner locations most likely due to the insufficient wall shear stress of the fluid at these locations. Dispersin B or rifampicin treatment was effective in delaying the dispersal behavior of bacterial cells, but could not completely remove the biofilm. Combined dynamic delivery of dispersin B and rifampicin was found to be effective for complete removal of the S. epidermidis biofilm.

Journal ArticleDOI
TL;DR: A distributed adaptive quantization (AQ) approach is proposed, which, with sensors sequentially broadcasting their quantized data, allows each sensor to adaptively adjust its quantization threshold, yielding an asymptotic CRB that is only π/2 times that of the clairvoyant sample-mean estimator using unquantized observations.
Abstract: We consider distributed parameter estimation using quantized observations in wireless sensor networks (WSNs) where, due to bandwidth constraint, each sensor quantizes its local observation into one bit of information. A conventional fixed quantization (FQ) approach, which employs a fixed threshold for all sensors, incurs an estimation error growing exponentially with the difference between the threshold and the unknown parameter to be estimated. To address this difficulty, we propose a distributed adaptive quantization (AQ) approach, which, with sensors sequentially broadcasting their quantized data, allows each sensor to adaptively adjust its quantization threshold. Three AQ schemes are presented: (1) AQ-FS that involves distributed delta modulation (DM) with a fixed stepsize, (2) AQ-VS that employs DM with a variable stepsize, and (3) AQ-ML that adjusts the threshold through a maximum likelihood (ML) estimation process. The ML estimators associated with the three AQ schemes are developed and their corresponding Cramer-Rao bounds (CRBs) are analyzed. We show that our 1-bit AQ approach is asymptotically optimum, yielding an asymptotic CRB that is only π/2 times that of the clairvoyant sample-mean estimator using unquantized observations.
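
A toy sketch of the fixed-stepsize adaptive-quantization idea (the AQ-FS scheme described above): sensors act in sequence, each transmits one bit comparing its noisy observation to the current threshold, and the threshold is nudged by a fixed step in the direction of that bit. The noise model, step size, and the simple threshold-averaging estimate at the end are illustrative assumptions; the paper's estimators are maximum-likelihood.

```python
# AQ-FS-style 1-bit adaptive quantization sketch (illustrative parameters and estimator).
import numpy as np

def aq_fs(theta, n_sensors, noise_sd=1.0, step=0.5, tau0=0.0, rng=None):
    """theta: unknown parameter; each sensor observes theta + Gaussian noise."""
    rng = np.random.default_rng(rng)
    tau, taus = tau0, []
    for _ in range(n_sensors):
        x = theta + rng.normal(0.0, noise_sd)      # local noisy observation
        bit = 1 if x > tau else 0                  # 1-bit quantization against the current threshold
        tau += step if bit else -step              # delta-modulation update broadcast to the next sensor
        taus.append(tau)
    # crude estimate: once the threshold has locked on, it oscillates around theta
    return np.mean(taus[len(taus) // 2:])

print(aq_fs(theta=2.3, n_sensors=200))             # close to 2.3, up to step/noise error
```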