
Showing papers by "Worcester Polytechnic Institute published in 2008"


Journal ArticleDOI
TL;DR: Willingness to cannibalize, constructive conflict, scanning, and slack have contemporaneous effects, while scanning also has a lagged effect and slack has a U-shaped lagged effect on marketing and R&D second-order competences.
Abstract: According to dynamic capability theory, some firms are better able than others at altering their resource base by adding, reconfiguring, and deleting resources or competences. This study focuses on the first form of dynamic capability: the competence to build new competences. Two such second-order competences are studied: the ability to explore new markets and the ability to explore new technologies—referred to as marketing and R&D second-order competences, respectively. Using two wave panel data on a sample of U.S. public manufacturing firms, five organizational antecedents of these second-order competences are examined: willingness to cannibalize, constructive conflict, tolerance for failure, environmental scanning, and resource slack. Willingness to cannibalize, constructive conflict, scanning, and slack have contemporaneous effects, while scanning also has a lagged effect and slack has a U-shaped lagged effect on marketing and R&D second-order competences. Copyright © 2008 John Wiley & Sons, Ltd.

645 citations


Journal ArticleDOI
TL;DR: In this paper, the authors examined and extended these models using game theory concepts and showed that the non-cooperative approach yields a unique efficiency decomposition under multiple intermediate measures, while the centralized approach is likely to yield multiple decompositions.
Abstract: Data envelopment analysis (DEA) is a method for measuring the efficiency of peer decision making units (DMUs). This tool has been utilized by a number of authors to examine two-stage processes, where all the outputs from the first stage are the only inputs to the second stage. The current article examines and extends these models using game theory concepts. The resulting models are linear, and imply an efficiency decomposition where the overall efficiency of the two-stage process is a product of the efficiencies of the two individual stages. When there is only one intermediate measure connecting the two stages, both the noncooperative and centralized models yield the same results as applying the standard DEA model to the two stages separately. As a result, the efficiency decomposition is unique. While the noncooperative approach yields a unique efficiency decomposition under multiple intermediate measures, the centralized approach is likely to yield multiple decompositions. Models are developed to test whether the efficiency decomposition arising from the centralized approach is unique. The relations among the noncooperative, centralized, and standard DEA approaches are investigated. Two real world data sets and a randomly generated data set are used to demonstrate the models and verify our findings. © 2008 Wiley Periodicals, Inc. Naval Research Logistics 55: 643-653, 2008
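As a brief illustration of the efficiency decomposition described above (notation assumed, not taken from the paper): with inputs x_i, intermediate measures z_d linking the two stages, and outputs y_r, applying a single common weight vector to the intermediates makes the overall ratio efficiency factor into the product of the two stage efficiencies.

```latex
% Sketch of the two-stage efficiency decomposition (notation assumed):
e^{1} = \frac{\sum_{d} w_d z_d}{\sum_{i} v_i x_i}, \qquad
e^{2} = \frac{\sum_{r} u_r y_r}{\sum_{d} w_d z_d}, \qquad
e^{\mathrm{overall}} = e^{1} \cdot e^{2} = \frac{\sum_{r} u_r y_r}{\sum_{i} v_i x_i}.
```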

433 citations


Proceedings ArticleDOI
18 Aug 2008
TL;DR: This study examines popular OSNs from a viewpoint of characterizing potential privacy leakage, and identifies what bits of information are currently being shared, how widely, and what users can do to prevent such sharing.
Abstract: Online social networks (OSNs) with half a billion users have dramatically raised concerns on privacy leakage. Users, often willingly, share personal identifying information about themselves, but do not have a clear idea of who accesses their private information or what portion of it really needs to be accessed. In this study we examine popular OSNs from a viewpoint of characterizing potential privacy leakage. Our study identifies what bits of information are currently being shared, how widely, and what users can do to prevent such sharing. We also examine the role of third-party sites that track OSN users and compare with privacy leakage on popular traditional Web sites. Our long term goal is to identify the narrow set of private information that users really need to share to accomplish specific interactions on OSNs.

328 citations


Journal ArticleDOI
TL;DR: The original DEA cross-efficiency concept is generalized to game cross efficiency, where each DMU is viewed as a player that seeks to maximize its own efficiency, under the condition that the cross efficiency of each of the other DMUs does not deteriorate.
Abstract: In this paper, we examine the cross-efficiency concept in data envelopment analysis (DEA). Cross efficiency links one decision-making unit's (DMU) performance with others and has the appeal that scores arise from peer evaluation. However, a number of the current cross-efficiency approaches are flawed because they use scores that are arbitrary in that they depend on a particular set of optimal DEA weights generated by the computer code in use at the time. One set of optimal DEA weights (possibly out of many alternate optima) may improve the cross efficiency of some DMUs, but at the expense of others. While models have been developed that incorporate secondary goals aimed at being more selective in the choice of optimal multipliers, the alternate optima issue remains. In cases where there is competition among DMUs, this situation may be seen as undesirable and unfair. To address this issue, this paper generalizes the original DEA cross-efficiency concept to game cross efficiency. Specifically, each DMU is viewed as a player that seeks to maximize its own efficiency, under the condition that the cross efficiency of each of the other DMUs does not deteriorate. The average game cross-efficiency score is obtained when the DMU's own maximized efficiency scores are averaged. To implement the DEA game cross-efficiency model, an algorithm for deriving the best (game cross-efficiency) scores is presented. We show that the optimal game cross-efficiency scores constitute a Nash equilibrium point.
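For context, a sketch of the standard cross-efficiency scores that the game formulation generalizes (symbols assumed): using the optimal weights (u_d, v_d) of DMU d, the cross efficiency of DMU j and its peer-evaluated average are

```latex
% Conventional DEA cross efficiency (notation assumed):
E_{dj} = \frac{\sum_{r} u_{rd}\, y_{rj}}{\sum_{i} v_{id}\, x_{ij}}, \qquad
\bar{E}_{j} = \frac{1}{n} \sum_{d=1}^{n} E_{dj}.
% The game model maximizes each E_{dj} subject to the other DMUs'
% cross efficiencies not falling below given levels.
```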

295 citations


Journal ArticleDOI
TL;DR: This paper seeks to extend the model of Doyle and Green by introducing a number of different secondary objective functions, which provide a ranking among the decision-making units (DMUs) and eliminate unrealistic DEA weighting schemes without requiring a priori information on weight restrictions.

265 citations


Journal Article
TL;DR: In this paper, the authors present three studies, conducted with two different learning environments, which present evidence on which student behaviors, motivations, and emotions are associated with the choice to game the system.
Abstract: In recent years there has been increasing interest in the phenomenon of "gaming the system," where a learner attempts to succeed in an educational environment by exploiting properties of the system's help and feedback rather than by attempting to learn the material. Developing environments that respond constructively and effectively to gaming depends upon understanding why students choose to game. In this article, we present three studies, conducted with two different learning environments, which present evidence on which student behaviors, motivations, and emotions are associated with the choice to game the system. We also present a fourth study to determine how teachers' perspectives on gaming behavior are similar to, and different from, researchers' perspectives and the data from our studies. We discuss what motivational and attitudinal patterns are associated with gaming behavior across studies, and what the implications are for the design of interactive learning environments.

256 citations


Journal ArticleDOI
TL;DR: A finite-difference/front-tracking method is developed for computations of interfacial flows with soluble surfactants, and the results are found to be in good agreement with available experimental data.

247 citations


Journal ArticleDOI
TL;DR: Testing both models, the delivery of Cu+ by Archaeoglobus fulgidus Cu+ chaperone CopZ to the corresponding Cu+-ATPase, CopA, was studied, and under nonturnover conditions, CopZ transferred Cu+ to the TM-MBS of a CopA lacking MBDs.
Abstract: As in other P-type ATPases, metal binding to transmembrane metal-binding sites (TM-MBS) in Cu+-ATPases is required for enzyme phosphorylation and subsequent transport. However, Cu+ does not access Cu+-ATPases in a free (hydrated) form but is bound to a chaperone protein. Cu+ transfer from Cu+ chaperones to regulatory cytoplasmic metal-binding domains (MBDs) present in these ATPases has been described, but there is no evidence of a proposed subsequent Cu+ movement from the MBDs to the TM-MBS. Alternatively, we postulate the parsimonious Cu+ transfer by the chaperone directly to TM-MBS. Testing both models, the delivery of Cu+ by Archaeoglobus fulgidus Cu+ chaperone CopZ to the corresponding Cu+-ATPase, CopA, was studied. As expected, CopZ interacted with and delivered the metal to CopA MBDs. Cu+-loaded MBDs, acting as metal donors, were unable to activate CopA or a truncated CopA lacking MBDs. Conversely, Cu+-loaded CopZ activated the CopA ATPase and CopA constructs in which MBDs were rendered unable to bind Cu+. Furthermore, under nonturnover conditions, CopZ transferred Cu+ to the TM-MBS of a CopA lacking MBDs. These data are consistent with a model where MBDs serve a regulatory function without participating in metal transport and the chaperone delivers Cu+ directly to transmembrane transport sites of Cu+-ATPases.

213 citations


Journal ArticleDOI
TL;DR: The new measures of energy efficiency suggested in this study do provide some additional insights and could be helpful if used alongside the traditional measure of energy efficiency based on energy intensity.
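For reference, the traditional energy-intensity measure mentioned above is simply energy use per unit of output, with a lower value read as higher efficiency (a generic definition, not the paper's specific formulation):

```latex
\text{energy intensity} = \frac{\text{energy consumed}}{\text{output (e.g., value added)}}
```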

204 citations


Journal ArticleDOI
TL;DR: A sensitive SPR sensor for detecting small molecules is designed that possesses a good sensitivity and a high selectivity for adenosine and may offer a new direction in designing high-performance SPR biosensors for sensitive and selective detection of a wide spectrum of small molecules.
Abstract: Small molecules are difficult to detect directly by the conventional SPR technique because the changes in refractive index resulting from the binding of small biomolecules are often small. To extend the application of the SPR biosensor to detecting small molecules, we combine the advantage of the aptamer technique with the amplifying effect of Au nanoparticles to design a sensitive SPR sensor for detecting small molecules. The principle of this sensor is based on surface inhibition detection. The aptamer is first immobilized on the SPR gold film in its ss-DNA structure. The aptamer in this structure can hybridize with Au nanoparticle-tagged complementary ss-DNA, resulting in a large change in the SPR signal. However, the aptamer changes its structure from ss-DNA to a tertiary structure after adenosine is added to the SPR cell. The aptamer in this tertiary structure cannot hybridize with Au nanoparticle-tagged complementary ss-DNA. Thus, the change of SPR signal resulted in the ...

170 citations


Journal ArticleDOI
TL;DR: A location-aware end-to-end security framework in which secret keys are bound to geographic locations and each node stores a few keys based on its own location, which effectively limits the impact of compromised nodes only to their vicinity without affecting end- to-end data security.
Abstract: Providing desirable data security, that is, confidentiality, authenticity, and availability, in wireless sensor networks (WSNs) is challenging, as a WSN usually consists of a large number of resource constraint sensor nodes that are generally deployed in unattended/hostile environments and, hence, are exposed to many types of severe insider attacks due to node compromise. Existing security designs mostly provide a hop-by-hop security paradigm and thus are vulnerable to such attacks. Furthermore, existing security designs are also vulnerable to many types of denial of service (DoS) attacks, such as report disruption attacks and selective forwarding attacks and thus put data availability at stake. In this paper, we seek to overcome these vulnerabilities for large-scale static WSNs. We come up with a location-aware end-to-end security framework in which secret keys are bound to geographic locations and each node stores a few keys based on its own location. This location-aware property effectively limits the impact of compromised nodes only to their vicinity without affecting end-to-end data security. The proposed multifunctional key management framework assures both node-to-sink and node-to-node authentication along the report forwarding routes. Moreover, the proposed data delivery approach guarantees efficient en-route bogus data filtering and is highly robust against DoS attacks. The evaluation demonstrates that the proposed design is highly resilient against an increasing number of compromised nodes and effective in energy savings.
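A minimal sketch of the general idea of binding keys to geographic locations, assuming a simple grid of cells and an HMAC-based derivation; the cell size, master secret, and function names are illustrative and not the paper's actual key-management scheme.

```python
import hmac, hashlib

CELL_SIZE = 100.0  # illustrative grid cell size in meters (assumption)

def location_cell(x, y, cell_size=CELL_SIZE):
    """Quantize a node's coordinates to the grid cell it falls in."""
    return (int(x // cell_size), int(y // cell_size))

def location_bound_key(master_secret: bytes, x: float, y: float) -> bytes:
    """Derive a key tied to the node's cell, so a compromised node only
    exposes keys valid in its own vicinity (illustrative sketch)."""
    cx, cy = location_cell(x, y)
    return hmac.new(master_secret, f"{cx},{cy}".encode(), hashlib.sha256).digest()

# Example: two nodes in the same cell derive the same location-bound key.
k1 = location_bound_key(b"network-master-secret", 123.0, 457.0)
k2 = location_bound_key(b"network-master-secret", 150.0, 480.0)
assert k1 == k2
```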

Journal ArticleDOI
TL;DR: In this paper, the authors use data from the Annual Survey of Industries for the years 1998-99 through 2003-04 to compare energy efficiency in manufacturing across states based on several models, and find that the relative pricing of energy does not provide the appropriate incentives for energy conservation.

Book ChapterDOI
01 Jan 2008
TL;DR: In the context of data visualization, a glyph is a visual representation of a piece of data where the attributes of a graphical entity are dictated by one or more Attributes of a data record.
Abstract: In the context of data visualization, a glyph is a visual representation of a piece of data where the attributes of a graphical entity are dictated by one or more attributes of a data record. For example, the width and height of a box could be determined by a student’s score on the midterm and final exam for a course, while the box’s color might indicate the gender of the student. The definition above is rather broad, as it can cover such visual elements as the markers in a scatterplot, the bars of a histogram, or even an entire line plot. However, a narrower definition would not be sufficient to capture the wide range of data visualization techniques that have been developed over the centuries that are termed glyphs.
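The box example above maps naturally to code; the following minimal matplotlib sketch (field names and data are hypothetical) draws one rectangle per student, with width and height driven by exam scores and color by a categorical attribute.

```python
import matplotlib.pyplot as plt
from matplotlib.patches import Rectangle

# Hypothetical records: (midterm, final, gender)
students = [(72, 88, "F"), (95, 60, "M"), (55, 75, "F")]
colors = {"F": "tab:orange", "M": "tab:blue"}

fig, ax = plt.subplots()
for i, (midterm, final, gender) in enumerate(students):
    # Glyph mapping: width <- midterm, height <- final, color <- gender
    ax.add_patch(Rectangle((i * 1.5, 0), midterm / 100, final / 100,
                           facecolor=colors[gender], edgecolor="black"))
ax.set_xlim(-0.5, len(students) * 1.5)
ax.set_ylim(0, 1.1)
plt.show()
```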

Proceedings ArticleDOI
07 Apr 2008
TL;DR: A runtime query unsatisfiability (RunSAT) checking technique that detects optimal points for terminating query evaluation and proposes mechanisms to precompute the query failure conditions to be checked at runtime, guaranteeing a constant-time RunSAT reasoning cost, making the technique highly scalable.
Abstract: Detecting complex patterns in event streams, i.e., complex event processing (CEP), has become increasingly important for modern enterprises to react quickly to critical situations. In many practical cases business events are generated based on pre-defined business logics. Hence constraints, such as occurrence and order constraints, often hold among events. Reasoning using these known constraints enables us to predict the non-occurrences of certain future events, thereby helping us to identify and then terminate the long running query processes that are guaranteed to not lead to successful matches. In this work, we focus on exploiting event constraints to optimize CEP over large volumes of business transaction streams. Since the optimization opportunities arise at runtime, we develop a runtime query unsatisfiability (RunSAT) checking technique that detects optimal points for terminating query evaluation. To assure efficiency of RunSAT checking, we propose mechanisms to precompute the query failure conditions to be checked at runtime. This guarantees a constant-time RunSAT reasoning cost, making our technique highly scalable. We realize our optimal query termination strategies by augmenting the query with Event-Condition-Action rules encoding the pre-computed failure conditions. This results in an event processing solution compatible with state-of-the-art CEP architectures. Extensive experimental results demonstrate that significant performance gains are achieved, while the optimization overhead is small.
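A highly simplified sketch of terminating a partial match once a precomputed failure condition is observed; the event names, rule table, and API below are hypothetical, not the system described in the paper.

```python
# Precomputed failure conditions (hypothetical): if a partial match is in
# state S and event E arrives, the required future event can no longer occur.
FAILURE_CONDITIONS = {
    ("awaiting_payment", "order_cancelled"),
    ("awaiting_shipment", "order_cancelled"),
}

def on_event(partial_match, event_type):
    """Event-Condition-Action style check: terminate doomed matches early."""
    if (partial_match["state"], event_type) in FAILURE_CONDITIONS:
        partial_match["active"] = False   # RunSAT: query can no longer succeed
        return "terminated"
    return "continue"

match = {"state": "awaiting_payment", "active": True}
print(on_event(match, "order_cancelled"))  # -> terminated
```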

Journal ArticleDOI
TL;DR: In this paper, the feasibility of using a warm-mix asphalt additive, Sasobit H8, in successfully recycling hot mix asphalt with 75% RAP at a lower temperature was investigated.
Abstract: The use of reclaimed asphalt pavement (RAP) helps save natural resources and money. The percentage of RAP that can be utilized successfully in hot-mix recycling is primarily dictated by practical considerations. To avoid deterioration of the aged binder, RAP should not be exposed to relatively high temperatures. This study investigated the feasibility of using a warm-mix asphalt (WMA) additive, Sasobit H8, in successfully recycling hot-mix asphalt (HMA) with 75% RAP at a lower temperature. A control HMA was prepared with extracted aggregates and PG (performance grade) 64-28 binder at 150°C. Another HMA was produced with PG 52-28 binder at 135°C. Two WMA mixes were prepared with Sasobit H8 at 125°C, one with PG 52-28 and the other with PG 42-42 binder. Samples with design asphalt content were compacted by using 75 gyrations of the Superpave gyratory compactor. Their voids, tensile strength at -10°C, rutting potential at 60°C, and moduli at 0°C, 25°C, and 40°C (at different times) were determined and compar...

Book
25 Aug 2008
TL;DR: Data envelopment analysis (DEA) is a data-oriented approach for performance evaluation and benchmarking when trade-offs on multiple performance measures and benchmarks are not completely available as discussed by the authors.
Abstract: Data envelopment analysis (DEA) is a data-oriented approach for performance evaluation and benchmarking when trade-offs on multiple performance measures and benchmarks are not completely available. DEA reveals relative efficiencies and inefficiencies when used to classify funds of hedge funds (FOFs). FOFs provide absolute returns and have non-linear returns because of long-short positions, derivatives, and option-like fee contracts, resulting in significant skewness and kurtosis. Thus the inclusion of FOFs in investor portfolios calls for performance appraisal methodologies that are appropriate for handling the asymmetric returns they produce, such as DEA, rather than comparison to traditional passive and active benchmarks. While selecting FOFs can be arduous, DEA can shed light on and further validate the process.

Book ChapterDOI
23 Jun 2008
TL;DR: How help can both scaffold the current problem attempt as well as teach the student knowledge that will transfer to later problems is described, suggesting that the Bayesian Evaluation and Assessment framework is the strongest of the three.
Abstract: Most ITS have a means of providing assistance to the student, either on student request or when the tutor determines it would be effective. Presumably, such assistance is included by the ITS designers since they feel it benefits the students. However, whether--and how--help helps students has not been a well studied problem in the ITS community. In this paper we present three approaches for evaluating the efficacy of the Reading Tutor's help: creating experimental trials from data, learning decomposition, and Bayesian Evaluation and Assessment, an approach that uses dynamic Bayesian networks. We have found that experimental trials and learning decomposition both find a negative benefit for help---that is, help hurts! However, the Bayesian Evaluation and Assessment framework finds that help both promotes student long-term learning and provides additional scaffolding on the current problem. We discuss why these approaches give divergent results, and suggest that the Bayesian Evaluation and Assessment framework is the strongest of the three. In addition to introducing Bayesian Evaluation and Assessment, a method for simultaneously assessing students and evaluating tutorial interventions, this paper describes how help can both scaffold the current problem attempt as well as teach the student knowledge that will transfer to later problems.

OtherDOI
01 Jan 2008
TL;DR: In this chapter, the FFT algorithm is developed with radix-2 decimation-in-frequency and decimation-in-time, together with radix-2 bit reversal for unscrambling, the radix-4 FFT, and the inverse fast Fourier transform.
Abstract: This chapter contains sections titled: Introduction; Development of the FFT Algorithm with Radix-2; Decimation-in-Frequency FFT Algorithm with Radix-2; Decimation-in-Time FFT Algorithm with Radix-2; Bit Reversal for Unscrambling; Development of the FFT Algorithm with Radix-4; Inverse Fast Fourier Transform; Programming Examples; References.
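As a concrete companion to the chapter outline, here is a minimal recursive radix-2 decimation-in-time FFT sketch in Python (the chapter develops both decimation-in-time and decimation-in-frequency forms; this is an illustrative implementation, not the book's code).

```python
import cmath

def fft_radix2(x):
    """Recursive radix-2 decimation-in-time FFT; len(x) must be a power of 2."""
    n = len(x)
    if n == 1:
        return list(x)
    even = fft_radix2(x[0::2])   # DFT of even-indexed samples
    odd = fft_radix2(x[1::2])    # DFT of odd-indexed samples
    out = [0j] * n
    for k in range(n // 2):
        twiddle = cmath.exp(-2j * cmath.pi * k / n) * odd[k]
        out[k] = even[k] + twiddle           # butterfly: top output
        out[k + n // 2] = even[k] - twiddle  # butterfly: bottom output
    return out

print(fft_radix2([1, 1, 1, 1, 0, 0, 0, 0]))
```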

Journal ArticleDOI
TL;DR: These pilot studies suggest that both lower PWS and lower FSS may contribute to continued plaque progression and should be taken into consideration in future investigations of diseases related to atherosclerosis.

Journal ArticleDOI
TL;DR: The putative neural circuit of aggressive motivation identified with fMRI includes neural substrates contributing to emotional expression, emotional experience and the anterior thalamic nuclei that bridge the motor and cognitive components of aggressive responding.
Abstract: With the advent of functional magnetic resonance imaging (fMRI) in awake animals it is possible to resolve patterns of neuronal activity across the entire brain with high spatial and temporal resolution. Synchronized changes in neuronal activity across multiple brain areas can be viewed as functional neuroanatomical circuits coordinating the thoughts, memories and emotions for particular behaviors. To this end, fMRI in conscious rats combined with 3D computational analysis was used to identify the putative distributed neural circuit involved in aggressive motivation and how this circuit is affected by drugs that block aggressive behavior. To trigger aggressive motivation, male rats were presented with their female cage mate plus a novel male intruder in the bore of the magnet during image acquisition. As expected, brain areas previously identified as critical in the organization and expression of aggressive behavior were activated, e.g., lateral hypothalamus, medial basal amygdala. Unexpected was the intense activation of the forebrain cortex and anterior thalamic nuclei. Oral administration of a selective vasopressin V1a receptor antagonist SRX251 or the selective serotonin reuptake inhibitor fluoxetine, drugs that block aggressive behavior, both caused a general suppression of the distributed neural circuit involved in aggressive motivation. However, the effect of SRX251, but not fluoxetine, was specific to aggression as brain activation in response to a novel sexually receptive female was unaffected. The putative neural circuit of aggressive motivation identified with fMRI includes neural substrates contributing to emotional expression (i.e. cortical and medial amygdala, BNST, lateral hypothalamus), emotional experience (i.e. hippocampus, forebrain cortex, anterior cingulate, retrosplenial cortex) and the anterior thalamic nuclei that bridge the motor and cognitive components of aggressive responding. Drugs that block vasopressin neurotransmission or enhance serotonin activity suppress activity in this putative neural circuit of aggressive motivation, particularly the anterior thalamic nuclei.

Journal ArticleDOI
TL;DR: A top-down, rule-based mathematical model is used to explore the basic principles that coordinate mechanochemical events during animal cell migration; in particular, the local-stimulation, global-inhibition mechanism is shown to readily account for the behavior of Dictyostelium under a large collection of conditions.

Journal ArticleDOI
TL;DR: There was a link between high adhesion forces with bacteria and a high percentage of dead cells being retained on that substratum (even in the absence of a specific biocidal effect, such as silver), which may suggest that high adhesion forces can cause stress to the bacteria that contributes to their death.

Book ChapterDOI
TL;DR: Drawing on theories within associated disciplines, three different approaches to theoretical foundations of Information Visualization are presented: data-centric predictive theory, information theory, and scientific modeling.
Abstract: The field of Information Visualization, being related to many other diverse disciplines (for example, engineering, graphics, statistical modeling) suffers from not being based on a clear underlying theory. The absence of a framework for Information Visualization makes the significance of achievements in this area difficult to describe, validate and defend. Drawing on theories within associated disciplines, three different approaches to theoretical foundations of Information Visualization are presented here: data-centric predictive theory, information theory, and scientific modeling. Definitions from linguistic theory are used to provide an over-arching framework for these three approaches.

Journal ArticleDOI
TL;DR: The high affinity of the two transmembrane Cu+ transport sites present in Archaeoglobus fulgidus CopA points to a transport mechanism where backward release of free Cu+ to the cytoplasm is largely prevented.

Journal ArticleDOI
TL;DR: In this paper, three testing methods for predicting the durability of cement-stabilized soils were tested and compared for their correlations and influence factors using a problematic low plastic silt clay from subgrade commonly encountered in Louisiana.
Abstract: Three testing methods for predicting the durability of cement-stabilized soils—the tube suction (TS), 7-day unconfined compression strength (UCS), and wetting–drying durability tests—were tested and compared for their correlations and influence factors using a problematic low plastic silt clay from subgrade commonly encountered in Louisiana. A series of samples was molded at six different cement dosages (2.5, 4.5, 6.5, 8.5, 10.5, and 12.5% by dry weight of the soil) and four different molding moisture contents (15.5, 18.5, 21.5, and 24.5%). The test results indicate that the water–cement ratio of cement-stabilized soil had the dominant influence on the maximum dielectric value (DV), 7-day UCS, and durability of stabilized samples tested, although the dry unit weight of cement-stabilized soil could cause the variation of the results. This study confirms that TS, 7-day UCS, and wetting-drying durability tests are equivalent in predicting durability, and tentative charts to ensuring the durability of cement-...

Book ChapterDOI
23 Jun 2008
TL;DR: This paper uses bottom-up processing to detect small subgroups of students who did benefit from rereading and from massed practice, and shows that different types of practice differ reliably in how efficiently students acquire the skill of reading words quickly and accurately.
Abstract: A basic question of instruction is how much students will actually learn from it. This paper presents an approach called learning decomposition, which determines the relative efficacy of different types of learning opportunities. This approach is a generalization of learning curve analysis, and uses non-linear regression to determine how to weight different types of practice opportunities relative to each other. We analyze 346 students reading 6.9 million words and show that different types of practice differ reliably in how efficiently students acquire the skill of reading words quickly and accurately. Specifically, massed practice is generally not effective for helping students learn words, and rereading the same stories is not as effective as reading a variety of stories. However, we were able to analyze data on individual students' learning and use bottom-up processing to detect small subgroups of students who did benefit from rereading (11 students) and from massed practice (5 students). The existence of these subgroups has two implications: 1) one-size-fits-all instruction is adequate for perhaps 95% of the student population using computer tutors, but as a community we can do better and 2) the ITS community is well poised to study what type of instruction is optimal for the individual.
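A sketch of the learning-decomposition idea described above (symbols assumed, not taken from the paper): performance is fit with an exponential learning curve in which different kinds of practice opportunities are weighted relative to a reference type, and the fitted weights indicate their relative efficacy.

```latex
% Learning decomposition sketch (notation assumed):
% t_1 = count of reference-type opportunities; t_2, t_3 = other types;
% beta_i < 1 means that practice type is less effective than the reference.
\text{performance} = A \, e^{-b\,(t_1 + \beta_2 t_2 + \beta_3 t_3)}
```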

Journal ArticleDOI
TL;DR: These patients have a better prognosis than children with BSGs reported in the literature and call for larger prospective studies to fully assess the importance of these factors in the clinical setting and to help stratify patients in future clinical studies.
Abstract: Background Adult brainstem gliomas (BSG) are uncommon and poorly understood with respect to prognostic factors. We retrospectively evaluated the clinical, radiographic, histologic, and treatment features from 101 adults with presumed or biopsy proven BSG to determine prognostic factors. Patients and Methods We reviewed the records of patients diagnosed from 1987–2005. We used Cox proportional hazard models to determine prognostic factors. Results These 50 male and 51 female patients ranged in age from 18 to 79 years at diagnosis (median 36 years) with follow-ups from 1 to 261 months (median 47 months). The overall survival for all patients at 5 and 10 years was 58% and 41%, respectively, with a median survival of 85 months (range 1–228). Out of 24 candidate prognostic factors, we selected seven covariates for the proportional hazards model by the Lasso procedure: age at diagnosis, ethnicity, need for corticosteroids, tumor grade, dysphagia, tumor location, and Karnofsky performance status (KPS). Univariate analysis showed that these seven factors are significantly associated with survival. Multivariate analysis showed that four covariates significantly increased the hazard: ethnicity, tumor location, age at diagnosis, and tumor grade. Conclusions In this study, we identified four prognostic factors that were significantly associated with survival in adults with BSGs. Overall, these patients have a better prognosis than children with BSGs reported in the literature. These results call for larger prospective studies to fully assess the importance of these factors in the clinical setting and to help stratify patients in future clinical studies.
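For readers unfamiliar with the model used above, the Cox proportional hazards specification (generic form, not the paper's fitted model) relates a patient's covariate vector x to the hazard at time t; the Lasso step selects covariates by penalizing the coefficient magnitudes.

```latex
% Cox proportional hazards with an L1 (Lasso) penalty (generic form):
h(t \mid x) = h_0(t)\, \exp\!\big(\beta^{\top} x\big), \qquad
\hat{\beta} = \arg\max_{\beta}\; \ell_{\text{partial}}(\beta) - \lambda \lVert \beta \rVert_1 .
```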

Journal ArticleDOI
TL;DR: In this paper, the authors investigate a methodology for the conceptual synthesis of compliant mechanisms based on a building block approach, which is intuitive and provides key insight into how individual building blocks contribute to the overall function.
Abstract: In this paper, we investigate a methodology for the conceptual synthesis of compliant mechanisms based on a building block approach. The building block approach is intuitive and provides key insight into how individual building blocks contribute to the overall function. We investigate the basic kinematic behavior of individual building blocks and relate this to the behavior of a design composed of building blocks. This serves not only to generate viable solutions but also to augment the understanding of the designer. Once a feasible concept is thus generated, known methods for size and geometry optimization may be employed to fine-tune performance. The key enabler of the building block synthesis is the method of capturing kinematic behavior using compliance ellipsoids. The mathematical model of the compliance ellipsoids facilitates the characterization of the building blocks, transformation of problem specifications, decomposition into subproblems, and the ability to search for alternate solutions. The compliance ellipsoids also give insight into how individual building blocks contribute to the overall kinematic function. The effectiveness and generality of the methodology are demonstrated through two synthesis examples. Using only a limited set of building blocks, the methodology is capable of addressing generic kinematic problem specifications for compliance at a single point and for a single-input, single-output compliant mechanism. A rapid prototype of the latter demonstrates the validity of the conceptual solution. DOI: 10.1115/1.2821387
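A sketch of what a compliance ellipsoid captures (generic linear-elastic notation, not the paper's exact formulation): at a point of interest, the compliance matrix C maps an applied force f to a displacement u = Cf, and the ellipsoid is the image of the unit force sphere, whose principal directions and semi-axis lengths follow from the singular value decomposition of C.

```latex
% Compliance ellipsoid sketch (generic notation):
u = C f, \qquad \{\, C f : \lVert f \rVert = 1 \,\}, \qquad
C = U \Sigma V^{\top} \;\; \text{(principal directions } U,\ \text{semi-axis lengths } \sigma_i\text{)}.
```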

Journal ArticleDOI
TL;DR: In this article, a multifunctional membrane reactor with Pd membrane walls for hydrogen separation and a high-performance Ru-carbon catalyst has been used for catalytic ammonia decomposition to generate high-purity COx-free hydrogen.
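For context, the overall reaction behind the COx-free hydrogen generation is the well-known ammonia decomposition; removing H2 through the Pd membrane walls shifts the equilibrium toward further conversion.

```latex
2\,\mathrm{NH_3} \;\rightleftharpoons\; \mathrm{N_2} + 3\,\mathrm{H_2}
```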

Journal ArticleDOI
TL;DR: In this article, an adaptive detection observer with a time varying threshold is proposed that provides additional robustness with respect to false declarations of faults and minimizes the fault detection time, and an adaptive diagnostic observer is subsequently utilized in an automated control reconfiguration scheme that accommodates the component and actuator faults.
Abstract: A class of nonlinear distributed processes with component and actuator faults is presented. An adaptive detection observer with a time varying threshold is proposed that provides additional robustness with respect to false declarations of faults and minimizes the fault detection time. Additionally, an adaptive diagnostic observer is proposed that is subsequently utilized in an automated control reconfiguration scheme that accommodates the component and actuator faults. An integrated optimal actuator location and fault accommodation scheme is provided in which the actuator locations are chosen in order to provide additional robustness with respect to actuator and component faults. Simulation studies of the Kuramoto-Sivashinsky nonlinear partial differential equation are included to demonstrate the proposed fault detection and accommodation scheme. © 2008 American Institute of Chemical Engineers AIChE J, 2008
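For reference, the Kuramoto-Sivashinsky equation used in the simulation studies is commonly written in the following one-dimensional form (a standard statement of the PDE, not necessarily the exact scaling used in the paper):

```latex
\frac{\partial u}{\partial t} + u\,\frac{\partial u}{\partial x}
+ \frac{\partial^2 u}{\partial x^2} + \frac{\partial^4 u}{\partial x^4} = 0 .
```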