
Showing papers by "Worcester Polytechnic Institute" published in 1998


Journal ArticleDOI
TL;DR: The mental models concept should be "unbundled" and the term "mental models" used more narrowly; a new definition of "mental models of dynamic systems" is proposed to initiate a dialogue through which the system dynamics community might achieve a shared understanding of mental models.
Abstract: Although “mental models” are of central importance to system dynamics research and practice, the field has yet to develop an unambiguous and agreed upon definition of them. To begin to address this problem, existing definitions and descriptions of mental models in system dynamics and several literatures related to cognitive science were reviewed and compared. Available definitions were found to be overly brief, general, and vague, and different authors were found to markedly disagree on the basic characteristics of mental models. Based on this review, we concluded that in order to reduce the amount of confusion in the literature, the mental models concept should be “unbundled” and the term “mental models” should be used more narrowly. To initiate a dialogue through which the system dynamics community might achieve a shared understanding of mental models, we propose a new definition of “mental models of dynamic systems” accompanied by an extended annotation that explains the definitional choices made and suggests terms for other cognitive structures left undefined by narrowing the mental model concept. Suggestions for future research that could improve the field's ability to further define mental models are discussed. © 1998 John Wiley & Sons, Ltd.

455 citations


Patent
24 Aug 1998
TL;DR: A hydrogen gas extraction module includes an intermediate layer bonded between a porous metal substrate and a membrane layer that is selectively permeable to hydrogen as discussed by the authors, where the metal substrate includes a substantial concentration of a first metal at a surface of the substrate, and the intermediate layer includes an oxide of this first metal.
Abstract: A hydrogen gas-extraction module includes an intermediate layer bonded between a porous metal substrate and a membrane layer that is selectively permeable to hydrogen. The metal substrate includes a substantial concentration of a first metal at a surface of the metal substrate, and the intermediate layer includes an oxide of this first metal. In one embodiment, where the module is designed to selectively extract hydrogen at high temperatures, the porous metal substrate comprises stainless steel, and the membrane layer includes palladium or a palladium/silver alloy. A method for fabricating a hydrogen gas-extraction membrane includes reacting the porous metal substrate with an oxidizing agent to form a ceramic intermediate layer on a surface of the porous metal substrate and covering the ceramic coating with the membrane layer that is selectively permeable to hydrogen.

355 citations


Journal ArticleDOI
TL;DR: In this article, a defect-free Pd membrane was prepared by an electroless plating technique on porous stainless-steel tubes and the effective surface of Pd membranes was up to 75 cm2.
Abstract: Defect-free Pd membranes were prepared by an electroless plating technique on porous stainless-steel tubes. The effective surface of the Pd membranes was up to 75 cm2. No helium flux was detected at room temperature and a pressure difference of 3 atm. At 350°C hydrogen permeances of up to were obtained. At a pressure difference of 1 atm and 350°C, selectivity coefficients as high as J(H2)/J(N2) = 5,000 were observed. Fluxes of gases impermeable through the Pd were quantitatively described as a combination of Knudsen diffusion and viscous flow through the gaps created in the Pd layer at high temperatures. The stability of the prepared membranes at temperatures up to 700°C was investigated. The membranes were stable at 350°C over a period of 1,100 h, with no significant changes in the steady-state hydrogen flux, despite a recrystallization texture and aggregation of Pd grains.
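
The abstract's leak-flux decomposition is commonly written as a two-term pressure law: a Knudsen contribution linear in the transmembrane pressure difference plus a viscous contribution that also scales with the mean pressure. A minimal sketch with hypothetical coefficients (not the paper's fitted values):

```python
# Minimal sketch of the two-mechanism leak model named in the abstract:
# Knudsen diffusion (linear in the pressure difference) plus viscous flow
# (additionally proportional to the mean pressure). The coefficients are
# hypothetical placeholders for values fitted to measured He/N2 fluxes.

def defect_flux(delta_p_atm, p_mean_atm, a_knudsen=1.0e-4, b_viscous=2.0e-5):
    """Leak flux (arbitrary units) through gaps in the Pd layer."""
    return a_knudsen * delta_p_atm + b_viscous * p_mean_atm * delta_p_atm

# Example: leakage at a 1 atm pressure difference and 1.5 atm mean pressure.
print(defect_flux(1.0, 1.5))
```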

326 citations


Journal ArticleDOI
TL;DR: A framework for statistical modeling of the wideband characteristics of the frequency-selective fading multipath indoor radio channel for geolocation applications and the effects of external walls on estimating the location of the DLOS path are presented.
Abstract: A framework for statistical modeling of the wideband characteristics of the frequency-selective fading multipath indoor radio channel for geolocation applications is presented. Multipath characteristics of the channel are divided into three classes according to the availability and strength of the direct line of sight (DLOS) path relative to the other paths. Statistics of the error in estimating the time of arrival of the DLOS path in a building are related to the receiver's sensitivity and dynamic range. The effects of external walls on estimating the location of the DLOS path are analyzed.

218 citations


Journal ArticleDOI
TL;DR: In this paper, the authors proved that, for sufficiently large n, any graph G of order n and minimum degree at least (k/(k+1))n contains the kth power of a Hamiltonian cycle.
Abstract: Paul Seymour conjectured that any graph G of order n and minimum degree at least (k/(k+1))n contains the kth power of a Hamiltonian cycle. Here, we prove this conjecture for sufficiently large n.

174 citations


Journal ArticleDOI
TL;DR: In this article, it is shown that a beam-to-core size ratio exceeding 175 may be readily achieved by minimizing the size of the dark core, an attribute important for vortex-vortex interactions within a laser beam.
Abstract: An optical-vortex filament is characterized by a dark core of vanishing size and fluidlike propagation dynamics in the near-field region. This type of phase singularity does not naturally occur as an eigenmode of a cylindrically symmetric system, but it can be easily formed by computer-generated holography. The size of the core is an important attribute affecting vortex–vortex interactions within a laser beam. Here we demonstrate a means to minimize the core size, and we experimentally show that a beam-to-core size ratio exceeding 175 may be readily achieved.
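
The computer-generated holography mentioned in the abstract is typically realized as a "forked" grating whose first diffraction order carries the vortex phase exp(imφ). A minimal sketch, with illustrative parameters that are not taken from the paper:

```python
import numpy as np

# A forked amplitude grating: a linear grating whose phase is offset by the
# azimuthal term m*phi, so the first diffraction order acquires an
# exp(i*m*phi) vortex. All parameters below are illustrative.
m = 1                                    # topological charge
N, period = 512, 16                      # grid size (pixels), grating period
y, x = np.mgrid[-N//2:N//2, -N//2:N//2]
phi = np.arctan2(y, x)                   # azimuthal coordinate
hologram = 0.5 * (1 + np.cos(2 * np.pi * x / period + m * phi))
# Thresholding 'hologram' at 0.5 gives the binary mask that, illuminated by
# a Gaussian beam, produces vortex-bearing diffraction orders.
```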

158 citations


Journal ArticleDOI
TL;DR: Tests of hypotheses derived from the model demonstrate that task–technology fit, computed using methods developed for assessing strategic fit, is associated with increased tool utilization and provides direction for the development of better maintenance support tools.

156 citations


Journal ArticleDOI
TL;DR: In this article, the authors investigate constitutive relations, interblock permeabilities, nonlinear algebraic system approximation methods, and two time integration approaches for nonuniform porous media, and recommend an integral permeability approach approximated by Hermite polynomials, shown to be robust and economical for a set of test problems corresponding to sand, loam, and clay loam media.
Abstract: Capillary pressure–saturation-relative permeability relations described using the van Genuchten [1980] and Mualem [1976] models for nonuniform porous media lead to numerical convergence difficulties when used with Richards' equation for certain auxiliary conditions. These difficulties arise because of discontinuities in the derivative of specific moisture capacity and relative permeability as a function of capillary pressure. Convergence difficulties are illustrated using standard numerical approaches to simulate such problems. We investigate constitutive relations, interblock permeability, nonlinear algebraic system approximation methods, and two time integration approaches. An integral permeability approach approximated by Hermite polynomials is recommended and shown to be robust and economical for a set of test problems, which correspond to sand, loam, and clay loam media.
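
For reference, the constitutive relations named in the abstract have the standard closed forms sketched below; the parameter values are generic loam-like illustrations, not the paper's test problems:

```python
import numpy as np

def effective_saturation(h, alpha, n):
    """van Genuchten (1980) effective saturation Se(h) for capillary head h > 0."""
    m = 1.0 - 1.0 / n
    return (1.0 + (alpha * h) ** n) ** (-m)

def relative_permeability(se, n):
    """Mualem (1976) relative permeability kr(Se)."""
    m = 1.0 - 1.0 / n
    return np.sqrt(se) * (1.0 - (1.0 - se ** (1.0 / m)) ** m) ** 2

h = np.linspace(0.01, 10.0, 200)                  # capillary pressure head [m]
se = effective_saturation(h, alpha=3.6, n=1.56)   # illustrative loam values
kr = relative_permeability(se, n=1.56)
# The derivative of Se(h) (the specific moisture capacity) is where the
# discontinuities described in the abstract arise for some media.
```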

147 citations


Journal ArticleDOI
TL;DR: In this article, the equilibrium position of a low-index particle in an optical-vortex trap was experimentally measured for two different systems: a buoyant hollow glass sphere in water and a density-matched water droplet in acetophenone.
Abstract: The equilibrium position of a low-index particle in an optical-vortex trap was experimentally measured for two different systems: a buoyant hollow glass sphere in water and a density-matched water droplet in acetophenone. Vortex traps are the only known static, single-beam configurations allowing three-dimensional trapping of such particles in the size range of 2–50 μm. The trap consists of a strongly focused Gaussian laser beam containing a holographically produced optical vortex. Using experimental and theoretical techniques, we also explored changes in the trapping efficiency owing to the vortex core size, the relative refractive index, and the numerical aperture of the focusing objective.

147 citations


Journal ArticleDOI
TL;DR: A new sensitivity analysis approach is developed for the basic DEA models, such as those proposed by Charnes, Cooper and Rhodes (CCR), Banker, Charnes and Cooper (BCC), and additive models, when variations in the data are simultaneously considered for all DMUs.
Abstract: In data envelopment analysis (DEA) efficient decision making units (DMUs) are of primary importance as they define the efficient frontier. The current paper develops a new sensitivity analysis approach for the basic DEA models, such as those proposed by Charnes, Cooper and Rhodes (CCR), Banker, Charnes and Cooper (BCC), and additive models, when variations in the data are simultaneously considered for all DMUs. By means of modified DEA models, in which the specific DMU under examination is excluded from the reference set, we are able to determine what perturbations of the data can be tolerated before efficient DMUs become inefficient. Our approach generalises the usual sensitivity analysis approach, in which perturbations of the data are applied only to the test DMU while all the remaining DMUs remain fixed. In our framework data are allowed to vary simultaneously for all DMUs across different subsets of inputs and outputs. We study the relation between the infeasibility of the modified DEA models employed and the robustness of DEA classifications, and show that infeasibility implies stability. The empirical applications demonstrate that DEA efficiency classifications are robust with respect to possible data errors, particularly in the convex DEA case.
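
The envelopment form of the CCR model at the heart of this analysis is a small linear program. A minimal sketch using scipy with illustrative data; the paper's modified sensitivity models additionally exclude the DMU under test from the reference set (i.e., drop its column), which this sketch does not do:

```python
import numpy as np
from scipy.optimize import linprog

# Input-oriented CCR envelopment model: minimize theta subject to the
# reference set covering the scaled inputs and the outputs of DMU j0.
X = np.array([[2.0, 3.0, 4.0],    # inputs:  rows = inputs, cols = DMUs
              [5.0, 4.0, 6.0]])
Y = np.array([[1.0, 2.0, 1.5]])   # outputs: rows = outputs, cols = DMUs

def ccr_efficiency(j0):
    n_inputs, n_dmus = X.shape
    n_outputs = Y.shape[0]
    # decision variables: [theta, lambda_1, ..., lambda_n]
    c = np.zeros(1 + n_dmus)
    c[0] = 1.0                                    # minimize theta
    # inputs:  sum_j lambda_j x_ij - theta * x_i,j0 <= 0
    A_in = np.hstack([-X[:, [j0]], X])
    # outputs: -sum_j lambda_j y_rj <= -y_r,j0
    A_out = np.hstack([np.zeros((n_outputs, 1)), -Y])
    A_ub = np.vstack([A_in, A_out])
    b_ub = np.concatenate([np.zeros(n_inputs), -Y[:, j0]])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] + [(0, None)] * n_dmus)
    return res.fun                                # theta = 1 marks efficiency

print([round(ccr_efficiency(j), 3) for j in range(X.shape[1])])
```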

134 citations


Journal ArticleDOI
TL;DR: Using this non-invasive method of imaging brain activity in conscious animals, it is now possible to perform developmental studies in animal models of neurological and psychiatric disorders.

Journal ArticleDOI
TL;DR: Elastic anisotropy factors were derived for each of the three modes of propagation from the special in-plane phonon-focusing considerations arising when wave vectors are constrained to the symmetry planes of orthorhombic, tetragonal, and hexagonal crystals as discussed by the authors.
Abstract: Elastic anisotropy factors are derived for each of the three modes of propagation from the special in-plane phonon-focusing considerations arising when wave vectors are constrained to the symmetry planes of orthorhombic, tetragonal, and hexagonal crystals. Elastic anisotropy factors for the pure transverse and quasi-transverse modes depend upon the symmetry plane, whereas anisotropy factors for the quasilongitudinal mode depend both upon a symmetry plane and a symmetry axis. These anisotropy factors provide a convenient measure of in-plane phonon focusing.

Journal ArticleDOI
01 Apr 1998
TL;DR: In this article, a piggyback server invalidation (PSI) mechanism is proposed for maintaining stronger cache coherency in Web proxy caches while reducing overall costs, where the proxy client invalidates cached entries and can extend the lifetime of entries not on the list.
Abstract: We present a piggyback server invalidation (PSI) mechanism for maintaining stronger cache coherency in Web proxy caches while reducing overall costs. The basic idea is for servers to piggyback, on a reply to a proxy client, the list of resources that have changed since the client's last access. The proxy client invalidates cached entries on the list and can extend the lifetime of entries not on the list. This continues our prior work on piggyback cache validation (PCV), where we focused on piggybacking validation requests from the proxy cache to the server. Trace-driven simulation of PSI on two large, independent proxy log data sets, augmented with data from several server logs, shows that PSI provides close to strong cache coherency while reducing the request traffic compared to existing cache coherency techniques. The best overall performance is obtained when the PSI and PCV techniques are combined. Compared to the best TTL-based policy, this hybrid policy reduces the average cost (considering response latency, request messages, and bandwidth) by 7–9% and reduces the staleness ratio by 82–86%, yielding a staleness ratio of 0.001. © 1998 Published by Elsevier Science B.V. All rights reserved.
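
A minimal sketch of the PSI message flow, using simplified, hypothetical data structures (the actual mechanism piggybacks on HTTP replies):

```python
import time

# Sketch of piggyback server invalidation: the server attaches to each reply
# the list of resources changed since the proxy's last access; the proxy
# invalidates those entries and extends the lifetime of the rest.

class Server:
    def __init__(self):
        self.change_log = {}              # resource -> last-modified time

    def reply(self, proxy_last_access):
        changed = [r for r, t in self.change_log.items() if t > proxy_last_access]
        return {"piggyback_invalid": changed, "server_time": time.time()}

class ProxyCache:
    def __init__(self):
        self.cache = {}                   # resource -> (body, expiry)
        self.last_access = 0.0

    def handle_reply(self, reply, lifetime_extension=300):
        for r in reply["piggyback_invalid"]:
            self.cache.pop(r, None)       # invalidate changed entries
        for r, (body, expiry) in self.cache.items():
            self.cache[r] = (body, expiry + lifetime_extension)  # extend others
        self.last_access = reply["server_time"]
```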

Journal ArticleDOI
TL;DR: In this article, the authors used computational fluid dynamics (CFD) to obtain values for the dimensionless wall heat transfer coefficient, Nuw and the radial effective thermal conductivity ratio, kr/kf, using air as the fluid.
Abstract: Fluid flow and heat transfer in a fixed bed of tube-to-particle ratio 2.86 were studied by solving the 3D Navier–Stokes and energy equations using a commercial finite element code, ANSYS/FLOTRAN. The geometry model consisted of an arrangement of eight spheres. Boundary conditions were given at the wall and the inlet, similar to an experimental setup, to determine the velocity and temperature at various locations. The main objective of this study was to use computational fluid dynamics (CFD) to obtain values for the dimensionless wall heat transfer coefficient, Nuw, and the radial effective thermal conductivity ratio, kr/kf, using air as the fluid. Nuw and kr/kf were evaluated from the calculated temperatures at different locations in the bed by fitting these with the analytical solution of the usual two-dimensional pseudo-homogeneous model, using a non-linear least squares analysis. Results were obtained for Reynolds numbers in the range 9–1450. The Nuw and kr/kf values showed reasonable qualitative agreement with both experimental values and values predicted by a model-matching theory based on experimental measurements. Studies were also carried out to verify the depth, pressure, and wall temperature dependence of the effective heat parameters. The effect of the temperature profile measurement position above the bed on the estimated parameters was investigated.

Journal ArticleDOI
01 Apr 1998-Methods
TL;DR: The use of replication-incompetent retroviral vectors for the analysis of lineal relationships in developing vertebrate tissues and the strategies and current methods in use in the laboratory are described.

Journal ArticleDOI
TL;DR: This paper provides an algorithmic version of the blow-up lemma, which helps to find bounded-degree spanning subgraphs in ε-regular graphs; the algorithm can be parallelized and implemented in NC5.
Abstract: Recently we developed a new method in graph theory based on the regularity lemma. The method is applied to find certain spanning subgraphs in dense graphs. The other main general tool of the method, besides the regularity lemma, is the so-called blow-up lemma (Komlos, Sarkozy, and Szemeredi [Combinatorica, 17, 109–123 (1997)]). This lemma helps to find bounded-degree spanning subgraphs in ε-regular graphs. Our original proof of the lemma is not algorithmic; it applies probabilistic methods. In this paper we provide an algorithmic version of the blow-up lemma. The desired subgraph, for an n-vertex graph, can be found in time O(nM(n)), where M(n) = O(n^2.376) is the time needed to multiply two n by n matrices with 0-1 entries over the integers. We show that the algorithm can be parallelized and implemented in NC5. © 1998 John Wiley & Sons, Inc. Random Struct. Alg., 12, 297–312, 1998

Book ChapterDOI
17 Aug 1998
TL;DR: This work designed, implemented, and compared various architecture options for the Data Encryption Standard (DES) on FPGAs and found that encryption rates beyond 400 Mbit/s could be achieved using a standard Xilinx FPGA.
Abstract: Most modern security protocols and security applications are defined to be algorithm independent; that is, they allow a choice from a set of cryptographic algorithms for the same function. Although an algorithm switch is rather difficult with traditional hardware (i.e., ASIC) implementations, Field Programmable Gate Arrays (FPGAs) offer a promising solution. Similarly, an ASIC-based key-search machine is in general only applicable to one specific encryption algorithm. However, a key-search machine based on FPGAs can also be algorithm independent and thus applicable to a wide variety of ciphers. We researched the feasibility of a universal key-search machine using the Data Encryption Standard (DES) as an example algorithm. We designed, implemented, and compared various architecture options for DES with a strong emphasis on high-speed performance. Techniques such as pipelining and loop unrolling were used and their effectiveness for DES on FPGAs investigated. The most interesting result is that we could achieve encryption rates beyond 400 Mbit/s using a standard Xilinx FPGA. This is about a factor of 30 faster than software implementations, while flexibility is still maintained. A DES cracker chip based on this design could search 6.29 million keys per second.
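
The quoted key-search rate follows directly from the encryption throughput: in a brute-force search, each encrypted 64-bit DES block tests one candidate key. A quick back-of-envelope check:

```python
# Each 64-bit DES block encrypted in a key search tests one candidate key,
# so key rate = bit throughput / block size.
throughput_bits_per_s = 400e6             # the "beyond 400 Mbit/s" figure
print(throughput_bits_per_s / 64 / 1e6)   # 6.25 million keys/s; the reported
                                          # 6.29 million implies ~403 Mbit/s
```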

Journal ArticleDOI
01 Sep 1998
TL;DR: In this article, the basic definition of the configuration task is revisited to show some of its flaws, and to point out how it shapes thinking about the problem, and the reasoning processes used to produce a configuration.
Abstract: This paper is intended to serve as part of the context in which the other papers in this special issue should be read. Its main goal is to revisit the basic definition of the configuration task, on which many people depend, to show some of its flaws, and to point out how it shapes thinking about the problem. We are concerned about characterizing the reasoning processes used to produce a configuration.

Journal ArticleDOI
TL;DR: In this article, numerical solutions of a modified $(3+1)$-dimensional nonlinear Schrödinger equation, accounting for the Raman effect, linear and nonlinear shock terms, third-order dispersion, and initial temporal third-order phase modulation, are presented.
Abstract: The propagation of intense femtosecond pulses in a nonlinear, dispersive bulk medium is investigated numerically in the regime where the combined effects of diffraction, normal dispersion, and cubic nonlinearity lead to pulse splitting. We present numerical solutions of a modified $(3+1)$-dimensional nonlinear Schrödinger equation, accounting for the Raman effect, linear and nonlinear shock terms, third-order dispersion, and initial temporal third-order phase modulation. The calculated results are found to be in good agreement with experimental measurements.
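
For orientation, the unmodified equation underlying the study is the $(3+1)$-dimensional cubic nonlinear Schrödinger equation in the normal-dispersion regime; the paper augments it with the Raman, shock, and third-order-dispersion terms listed above. In a standard notation (assumed here, not taken from the paper):

$$ i\,\frac{\partial A}{\partial z} + \frac{1}{2k}\nabla_\perp^2 A - \frac{\beta_2}{2}\,\frac{\partial^2 A}{\partial \tau^2} + \gamma |A|^2 A = 0, \qquad \beta_2 > 0, $$

where $A$ is the pulse envelope, $\nabla_\perp^2$ accounts for diffraction, $\beta_2 > 0$ for normal group-velocity dispersion, and $\gamma$ for the cubic (Kerr) nonlinearity; the interplay of these three terms produces the pulse splitting studied in the paper.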

Proceedings ArticleDOI
01 Nov 1998
TL;DR: The proposed SERF framework succeeds in giving the user the flexibility to define the semantics of their choice, the extensibility of defining new complex transformations, and the power of re-using these transformations through the notion of templates.
Abstract: With current database technology trends, there is an increasing need to specify and handle complex schema changes. The existing support for schema evolution in current OODB systems is limited to a pre-defined taxonomy of schema evolution operations with fixed semantics. Given the variety of types, complexity, and semantics of transformations, it is practically impossible to provide a priori a complete set of all complex changes that will meet all users' needs. This paper is the first effort to successfully address this open problem by providing an extensible framework for schema transformations. Our proposed SERF framework succeeds in giving the user the flexibility to define the semantics of their choice, the extensibility of defining new complex transformations, and the power of re-using these transformations through the notion of templates. To verify the feasibility of our SERF approach we have implemented one realization of our concepts in a working prototype system, called OQL-SERF, based on OQL, ODMG MetaData, and Java's binding of ODL. We have also conducted a thorough case study demonstrating that our SERF approach can handle all the schema evolution operations we identified from the literature, validating the completeness of our approach.

Journal ArticleDOI
01 May 1998
TL;DR: This work investigates maintenance programmers' choices about using software tools for their maintenance tasks using task–technology fit and fitness-for-use models and indicates that the fit between maintenance tool functionality and the needs of maintenance tasks is associated with tool use.
Abstract: Software tools to support programmers and maintainers have been touted as potential solutions to the software development and maintenance crisis. Use of these tools is predicted to increase programmer productivity while simultaneously increasing the quality of the resulting software. Unfortunately, programmer use of these tools is lower than expected. We investigate maintenance programmers' choices about using software tools for their maintenance tasks using task–technology fit and fitness-for-use models. Our results indicate that the fit between maintenance tool functionality and the needs of maintenance tasks is associated with tool use. Two related factors, however, have stronger effects. First, the results are stronger for intention to use than for actual use. Specifically, while higher fit between tool and task is highly associated with intention to use, this intention may not lead to actual use. Second, maintainers' control over their environment affects usage. The more control maintenance programmers have over their resources and opportunities to use tools, the more likely they are to choose to use them. These results should concern maintenance managers who have acquired or are acquiring tools to increase productivity and quality, but are not realizing the benefits of their technology investments. © 1998 John Wiley & Sons, Ltd.

Journal ArticleDOI
TL;DR: For any ε > 0 and positive integer k, there is an n0 such that, if G has order n ≥ n0 and minimum degree at least $(\frac{k}{k+1} + \epsilon)n$, then G contains the kth power of a Hamilton cycle, as discussed by the authors.
Abstract: Paul Seymour conjectured that any graph G of order n and minimum degree at least $\frac{k}{k+1}n$ contains the kth power of a Hamilton cycle. We prove the following approximate version. For any ε > 0 and positive integer k, there is an n0 such that, if G has order n ≥ n0 and minimum degree at least $(\frac{k}{k+1} + \epsilon)n$, then G contains the kth power of a Hamilton cycle. © 1998 John Wiley & Sons, Inc. J. Graph Theory 29: 167–176, 1998
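
In symbols, writing $C_n^k$ for the kth power of the Hamilton cycle on n vertices (cycle vertices at distance at most k are joined), the theorem states that for every $\epsilon > 0$ and positive integer $k$ there is an $n_0 = n_0(k, \epsilon)$ with

$$ n \ge n_0 \quad\text{and}\quad \delta(G) \ge \Bigl(\frac{k}{k+1} + \epsilon\Bigr)n \quad\Longrightarrow\quad C_n^k \subseteq G; $$

Seymour's conjecture asserts the same with $\epsilon = 0$.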

Journal ArticleDOI
TL;DR: In this article, the authors proposed a suitable informative prior distribution on the relationship between control outcome data and covariates, and derived modified trend test statistics that incorporate historical control information to adjust for covariate effects.
Abstract: Historical data often play an important role in helping interpret the results of a current study. This article is motivated primarily by one specific application: the analysis of data from rodent carcinogenicity studies. By proposing a suitable informative prior distribution on the relationship between control outcome data and covariates, we derive modified trend test statistics that incorporate historical control information to adjust for covariate effects. Frequentist and fully Bayesian methods are presented, and novel computational techniques are developed to compute the test statistics. Several attractive theoretical and computational properties of the proposed priors are derived. In addition, a semiautomatic elicitation scheme for the priors is developed. Our approach is used to modify a widely used prevalence test for carcinogenicity studies. The proposed methodology is applied to data from a National Toxicology Program carcinogenicity experiment and is shown to provide helpful insight on t...

Journal ArticleDOI
TL;DR: In this paper, an integrated inventory model for perishable goods in which backorder is allowed is presented, which considers the impact of marketing strategies such as pricing and advertising as well as backorder decisions on the profitability of the system.

Journal ArticleDOI
15 May 1998-Cancer
TL;DR: This study compared the abilities of all published proposed clinical staging systems to predict time to prostate specific antigen (PSA) failure after radical prostatectomy or external beam radiation therapy for clinically localized prostate carcinoma.
Abstract: BACKGROUND A clinical staging system for localized prostate carcinoma that provides reliable information on which management decisions regarding an individual patient can be based is lacking. This study compared the abilities of all published proposed clinical staging systems to predict time to prostate specific antigen (PSA) failure after radical prostatectomy or external beam radiation therapy for clinically localized prostate carcinoma. METHODS A total of 1441 clinically localized prostate carcinoma patients who were managed with radical prostatectomy at the University of Pennsylvania in Philadelphia (n = 688) or the Brigham and Women's Hospital in Boston (n = 288) or with external beam radiation therapy at the Joint Center for Radiation Therapy in Boston (n = 465) were entered into this study. Patients who received adjuvant or neoadjuvant hormonal or radiation therapy were excluded. Akaike's Information Criterion (AIC) and Schwarz Bayesian Criterion (SBC) estimates, which are comparative measures, were calculated for each clinical staging system. Pairwise comparisons of the AIC and SBC estimates for the most predictive clinical staging systems were performed using a formal bootstrap technique with 2000 replications. RESULTS Both the staging system based on the risk score and the staging system based on the calculated volume of prostate carcinoma and PSA (cVCa-PSA) optimized the prediction of time to posttreatment PSA failure. The cVCa-PSA system, however, provided a more clinically useful stratification of outcome. CONCLUSIONS Improved clinical staging for patients with localized prostate carcinoma may be possible with parameters obtained during routine evaluation. Validation by other investigators is underway. Cancer 1998;82:1887-96. © 1998 American Cancer Society.
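
For reference, the two comparative criteria used in the study have the standard forms below: for a fitted model with maximized likelihood $\hat{L}$, $p$ estimated parameters, and $n$ observations,

$$ \mathrm{AIC} = -2\ln\hat{L} + 2p, \qquad \mathrm{SBC} = -2\ln\hat{L} + p\ln n, $$

with lower values indicating a better trade-off between goodness of fit and model complexity.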

Journal ArticleDOI
TL;DR: In this paper, a model of a breakaway timber post and soil system used in the breakaway cable terminal (BCT) and the modified eccentric loader terminal (MELT) is presented.
Abstract: The performance of many guardrail terminal systems is dependent on the strength of timber guardrail posts and soil conditions. Accurately simulating the breakaway characteristics of guardrail posts mounted in soils is an important issue concerning researchers in the roadside safety community. Finite-element analysis is one method that can be used to evaluate roadside hardware designs, but good simulations are contingent on developing accurate models of the components. A description is provided of the development of a model of a breakaway timber post and soil system used in the breakaway cable terminal (BCT) and the modified eccentric loader terminal (MELT). The model is described and simulation results are compared with data from physical tests of BCT/MELT posts.

Journal Article
TL;DR: In this paper, a view evolution problem is addressed by a novel solution approach that allows affected view definitions to be dynamically evolved to keep them in synch with evolving information sources (ISs).
Abstract: Current view technology supports only static views, in the sense that views become undefined and hence obsolete as soon as the underlying information sources (ISs) undergo capability changes. We propose to address this new view evolution problem, which we call view synchronization, by a novel solution approach that allows affected view definitions to be dynamically evolved to keep them in synch with the evolving ISs. We present in this paper a general strategy for the view synchronization process that, guided by constraints imposed by the view evolution preferences embedded in the view definition, achieves view preservation (i.e., view redefinition). We present the formal correctness, the CVS algorithm, as well as numerous examples to demonstrate the main concepts.

Journal ArticleDOI
TL;DR: In this article, the scale-of-interaction is presented as a criterion for establishing functional correlations, and fractal analysis by the patchwork method is used as the basis both for evaluating data acquisition systems and for simulating interactions with the surface.
Abstract: The scale-of-interaction is presented as a criterion for establishing functional correlations. It is also used in simulating interactions with rough surfaces. Fractal analysis by the patchwork method is discussed and is used as the basis for evaluating data acquisition systems. The patchwork method is also used as the basis for simulating interactions with the surface. The essence of the simulation is to understand macroscopic, apparently continuous, interactions as the sum of a large number of discrete interactions at some appropriately fine scale. The simulations shown here use topographic data acquired from engineering surfaces.
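
The patchwork idea of treating an apparently continuous interaction as a sum of discrete interactions at a fine scale can be illustrated with its one-dimensional (length-scale) analogue; the area-scale version described in the abstract tiles the measured surface with triangular patches instead. A minimal sketch on a synthetic profile (all names and values are illustrative):

```python
import numpy as np

# 1D analogue of the patchwork (area-scale) method: approximate a measured
# profile by chords of a given scale and record how the total measured
# length grows as the scale shrinks.

def relative_length(x, z, step):
    """Measured length / nominal length when the profile is sampled
    with chords spanning `step` points."""
    xs, zs = x[::step], z[::step]
    measured = np.sum(np.hypot(np.diff(xs), np.diff(zs)))
    nominal = xs[-1] - xs[0]
    return measured / nominal

rng = np.random.default_rng(0)
x = np.linspace(0, 1e-3, 4097)                    # 1 mm trace
z = 1e-6 * np.cumsum(rng.standard_normal(x.size)) # synthetic rough profile
for step in (256, 64, 16, 4, 1):
    print(step, relative_length(x, z, step))
# The scale at which the relative length departs from 1 marks the
# smooth-rough crossover used for functional correlations.
```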

Proceedings ArticleDOI
19 Aug 1998
TL;DR: The ADAPT framework presented in the paper supports both component designers in creating components that can easily be adapted, and application builders in adapting software components.
Abstract: The widespread construction of software systems from pre-existing, independently developed software components will only occur when application builders can adapt software components to suit their needs. We propose that software components provide two interfaces: one for behavior and one for adapting that behavior as needed. The ADAPT framework presented in the paper supports both component designers in creating components that can easily be adapted, and application builders in adapting software components. The motivating example, using JavaBeans, shows how adaptation, not customization, is the key to component-based software.
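
The two-interface proposal can be made concrete with a small sketch. The paper's example uses JavaBeans; the Python rendering below, with hypothetical names, only illustrates the separation between a behavior interface and an adaptation interface:

```python
from abc import ABC, abstractmethod

# Sketch of the paper's central idea: a component exposes two interfaces,
# one for its behavior and one for adapting that behavior.

class Behavior(ABC):
    @abstractmethod
    def render(self, text: str) -> str: ...

class Adaptation(ABC):
    @abstractmethod
    def set_decorator(self, fn) -> None: ...

class LabelComponent(Behavior, Adaptation):
    def __init__(self):
        self._decorate = lambda s: s            # default: identity

    def render(self, text: str) -> str:         # behavior interface
        return self._decorate(text)

    def set_decorator(self, fn) -> None:        # adaptation interface
        self._decorate = fn                     # application builder adapts here

label = LabelComponent()
label.set_decorator(str.upper)                  # adapt without modifying the class
print(label.render("hello"))                    # -> HELLO
```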