
Showing papers by "Worcester Polytechnic Institute" published in 2001


Journal ArticleDOI
TL;DR: In this paper, a front-tracking method for multiphase flows is presented, which is based on writing one set of governing equations for the whole computational domain and treating the different phases as one fluid with variable material properties.

2,011 citations



Proceedings ArticleDOI
01 Jan 2001
TL;DR: It was found that the time spent on a page, the amount of scrolling on a page, and the combination of time and scrolling had a strong correlation with explicit interest, while individual scrolling methods and mouse clicks were ineffective in predicting explicit interest.
Abstract: Recommender systems provide personalized suggestions about items that users will find interesting. Typically, recommender systems require a user interface that can "intelligently" determine the interest of a user and use this information to make suggestions. The common solution, "explicit ratings", where users tell the system what they think about a piece of information, is well-understood and fairly precise. However, having to stop to enter explicit ratings can alter normal patterns of browsing and reading. A more "intelligent" method is to use implicit ratings, where a rating is obtained by a method other than obtaining it directly from the user. These implicit interest indicators have obvious advantages, including removing the cost of the user rating, and that every user interaction with the system can contribute to an implicit rating. Current recommender systems mostly do not use implicit ratings, nor is the ability of implicit ratings to predict actual user interest well-understood. This research studies the correlation between various implicit ratings and the explicit rating for a single Web page. A Web browser was developed to record the user's actions (implicit ratings) and the explicit rating of a page. Actions included mouse clicks, mouse movement, scrolling and elapsed time. This browser was used by over 80 people that browsed more than 2500 Web pages. Using the data collected by the browser, the individual implicit ratings and some combinations of implicit ratings were analyzed and compared with the explicit rating. We found that the time spent on a page, the amount of scrolling on a page and the combination of time and scrolling had a strong correlation with explicit interest, while individual scrolling methods and mouse clicks were ineffective in predicting explicit interest.
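The kind of analysis described above, correlating implicit indicators such as time on page and scrolling with an explicit rating, can be sketched as follows; the field names and sample data are hypothetical and not the study's dataset.

```python
# Illustrative sketch: correlate implicit indicators with explicit ratings.
# The fields and sample data are hypothetical, not the study's dataset.
from scipy.stats import pearsonr

# Each record: (seconds on page, scroll events, explicit rating 1-5)
logs = [
    (12.0,  3, 2),
    (95.0, 20, 5),
    (40.0,  8, 3),
    ( 5.0,  1, 1),
    (70.0, 15, 4),
]

time_on_page = [r[0] for r in logs]
scroll_count = [r[1] for r in logs]
rating       = [r[2] for r in logs]

# Individual indicators vs. explicit rating
r_time,   _ = pearsonr(time_on_page, rating)
r_scroll, _ = pearsonr(scroll_count, rating)

# A simple combined indicator (time plus weighted scrolling); the weight 5.0
# is arbitrary and only illustrates combining indicators.
combined = [t + 5.0 * s for t, s in zip(time_on_page, scroll_count)]
r_combo, _ = pearsonr(combined, rating)

print(f"time vs rating:     r = {r_time:.2f}")
print(f"scroll vs rating:   r = {r_scroll:.2f}")
print(f"combined vs rating: r = {r_combo:.2f}")
```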

768 citations


Journal ArticleDOI
TL;DR: In this paper, the influence of casting defects on the room temperature fatigue performance of a Sr-modified A356-T6 casting alloy has been studied using unnotched polished cylindrical specimens.

510 citations


Proceedings ArticleDOI
01 Nov 2001
TL;DR: This paper describes how CDNs are commonly used on the Web, defines a methodology to study how well they perform, and finds that use of a DNS lookup in the critical path of a resource retrieval does not generally result in better server choices being made, relative to client response time, in either average or worst-case situations.
Abstract: Content distribution networks (CDNs) are a mechanism to deliver content to end users on behalf of origin Web sites. Content distribution offloads work from origin servers by serving some or all of the contents of Web pages. We found an order of magnitude increase in the number and percentage of popular origin sites using CDNs between November 1999 and December 2000. In this paper we discuss how CDNs are commonly used on the Web and define a methodology to study how well they perform. A performance study was conducted over a period of months on a set of CDN companies employing the techniques of DNS redirection and URL rewriting to balance load among their servers. Some CDNs generally provide better results than others when we examine results from a set of clients. The performance of one CDN company clearly improved between the two testing periods in our study due to a dramatic increase in the number of distinct servers employed in its network. More generally, the results indicate that use of a DNS lookup in the critical path of a resource retrieval does not generally result in better server choices being made relative to client response time in either average or worst case situations.
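A minimal sketch of the sort of measurement this involves: timing the DNS lookup separately from the subsequent retrieval for a CDN-served URL. The hostname below is a placeholder, and this is a simplification of the paper's client-side methodology.

```python
# Rough sketch: separate DNS-lookup time from download time for one URL.
# The hostname is a placeholder; the paper's methodology is far more involved
# (many clients, many CDNs, DNS redirection vs. URL rewriting, etc.).
import socket
import time
import urllib.request

host = "images.example-cdn.com"   # hypothetical CDN hostname
path = "/logo.gif"

t0 = time.monotonic()
addr = socket.gethostbyname(host)          # DNS lookup in the critical path
t1 = time.monotonic()
urllib.request.urlopen(f"http://{host}{path}").read()
t2 = time.monotonic()

print(f"resolved {host} -> {addr}")
print(f"DNS lookup : {1000 * (t1 - t0):.1f} ms")
print(f"retrieval  : {1000 * (t2 - t1):.1f} ms")
```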

345 citations


Journal ArticleDOI
TL;DR: This contribution investigates the significance of FPGA implementations of the Advanced Encryption Standard candidate algorithms, with a strong focus on high-throughput implementations, which are required to support security for current and future high bandwidth applications.
Abstract: The technical analysis used in determining which of the potential Advanced Encryption Standard candidates was selected as the Advanced Encryption Algorithm includes efficiency testing of both hardware and software implementations of candidate algorithms. Reprogrammable devices such as field-programmable gate arrays (FPGAs) are highly attractive options for hardware implementations of encryption algorithms, as they provide cryptographic algorithm agility, physical security, and potentially much higher performance than software solutions. This contribution investigates the significance of FPGA implementations of the Advanced Encryption Standard candidate algorithms. Multiple architectural implementation options are explored for each algorithm. A strong focus is placed on high-throughput implementations, which are required to support security for current and future high bandwidth applications. Finally, the implementations of each algorithm will be compared in an effort to determine the most suitable candidate for hardware implementation within commercially available FPGAs.
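As a rough illustration of the throughput comparisons such a study involves, the achievable throughput of a hardware implementation follows from the block size, the clock frequency, and the number of cycles per block (or the pipeline output rate). The numbers below are placeholders, not figures from the paper.

```python
# Back-of-the-envelope throughput estimate for a block cipher implementation.
# All numbers are illustrative placeholders, not results from the paper.
BLOCK_BITS = 128          # AES candidate block size

def throughput_mbps(clock_mhz: float, cycles_per_block: float) -> float:
    """Throughput in Mbit/s for a design producing one block every
    `cycles_per_block` cycles at `clock_mhz` MHz."""
    blocks_per_second = clock_mhz * 1e6 / cycles_per_block
    return blocks_per_second * BLOCK_BITS / 1e6

# Iterative (roughly one round per clock) vs. fully pipelined (one block per
# clock) designs at a hypothetical 30 MHz clock.
print(throughput_mbps(30, 10))   # iterative  -> ~384 Mbit/s
print(throughput_mbps(30, 1))    # pipelined  -> ~3840 Mbit/s
```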

284 citations


Journal ArticleDOI
TL;DR: The newer analytical methods described in this paper make it possible to determine ranges within which all data may be varied for any DMU before a reclassification from efficient to inefficient status occurs.
Abstract: This paper surveys recently developed analytical methods for studying the sensitivity of DEA results to variations in the data. The focus is on the stability of classification of DMUs (Decision Making Units) into efficient and inefficient performers. Early work on this topic concentrated on developing solution methods and algorithms for conducting such analyses after it was noted that standard approaches for conducting sensitivity analyses in linear programming could not be used in DEA. However, some of the recent work we cover has bypassed the need for such algorithms. Evolving from early work that was confined to studying data variations in only one input or output for only one DMU at a time, the newer methods described in this paper make it possible to determine ranges within which all data may be varied for any DMU before a reclassification from efficient to inefficient status (or vice versa) occurs. Other coverage involves recent extensions which include methods for determining ranges of data variation that can be allowed when all data are varied simultaneously for all DMUs. An initial section delimits the topics to be covered. A final section suggests topics for further research.
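The DMU classification that this sensitivity analysis concerns comes from a DEA efficiency score, typically obtained by solving a small linear program per DMU. Below is a minimal sketch of the input-oriented CCR envelopment model using scipy, with made-up data; the sensitivity methods surveyed in the paper build on, and go well beyond, this basic score.

```python
# Minimal input-oriented CCR (constant returns) DEA score for one DMU,
# solved as a linear program. Data are illustrative, not from the paper.
import numpy as np
from scipy.optimize import linprog

# rows = DMUs, columns = inputs / outputs
X = np.array([[2.0, 3.0], [4.0, 1.0], [3.0, 3.0], [5.0, 2.0]])   # inputs
Y = np.array([[4.0],      [5.0],      [6.0],      [4.0]])        # outputs

def ccr_efficiency(o: int) -> float:
    """Efficiency theta of DMU o: min theta s.t.
       sum_j lam_j * X[j] <= theta * X[o],  sum_j lam_j * Y[j] >= Y[o]."""
    n, m = X.shape          # number of DMUs, number of inputs
    _, s = Y.shape          # number of outputs
    # decision vector z = [theta, lam_1 .. lam_n]
    c = np.r_[1.0, np.zeros(n)]
    # input constraints:  X^T lam - theta * X[o] <= 0
    A_in = np.hstack([-X[o].reshape(m, 1), X.T])
    b_in = np.zeros(m)
    # output constraints: -Y^T lam <= -Y[o]
    A_out = np.hstack([np.zeros((s, 1)), -Y.T])
    b_out = -Y[o]
    res = linprog(c, A_ub=np.vstack([A_in, A_out]), b_ub=np.r_[b_in, b_out],
                  bounds=[(None, None)] + [(0, None)] * n)
    return res.fun

for o in range(len(X)):
    theta = ccr_efficiency(o)
    label = "efficient" if theta > 0.999 else "inefficient"
    print(f"DMU {o}: theta = {theta:.3f}  ({label})")
```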

252 citations


Journal ArticleDOI
TL;DR: In this paper, a quantitative study of the interactions between microstructural features such as secondary dendrite arm spacing (SDAS), eutectic structure, matrix strength, and fatigue behavior of two Al-7% Si-Mg casting alloys with magnesium contents of 0.4% and 0.7%, respectively, has been conducted.

252 citations


Journal ArticleDOI
TL;DR: In this article, a simple filtering procedure for stabilizing the spectral element method (SEM) for the unsteady advection-diffusion and Navier-Stokes equations is presented.
Abstract: We present a simple filtering procedure for stabilizing the spectral element method (SEM) for the unsteady advection–diffusion and Navier–Stokes equations. A number of example applications are presented, along with basic analysis for the advection–diffusion case.
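A generic modal low-pass filter conveys the idea behind such stabilization: transform the nodal solution to a Legendre modal basis, attenuate the highest modes, and transform back. The sketch below uses a standard exponential filter purely as an illustration; it is not the specific filter construction proposed in the paper.

```python
# Sketch of a modal filter: damp the highest Legendre modes of a degree-N
# polynomial approximation. Illustrative only -- not the paper's exact filter.
import numpy as np
from numpy.polynomial import legendre as L

N = 16                                       # polynomial degree
x = np.cos(np.pi * np.arange(N + 1) / N)     # simple nodal set on [-1, 1]
u = np.tanh(10 * x)                          # sharp profile prone to aliasing

coeffs = L.legfit(x, u, N)                   # nodal -> Legendre modal coefficients

# Exponential filter sigma_k = exp(-alpha * (k/N)^s): ~1 for low modes,
# strongly damping the highest ones.
alpha, s = 16.0, 8
k = np.arange(N + 1)
sigma = np.exp(-alpha * (k / N) ** s)

u_filtered = L.legval(x, sigma * coeffs)     # modal -> nodal with damped modes
print(np.max(np.abs(u - u_filtered)))        # size of the filtering correction
```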

241 citations


Journal ArticleDOI
TL;DR: It is shown that the super-efficiency score can be decomposed into two data perturbation components of a particular test frontier decision making unit (DMU) and the remaining DMUs, and that the sensitivity analysis approach developed in this paper can be applied to DMUs on the entire frontier and to all basic DEA models.

238 citations


Journal ArticleDOI
TL;DR: The evolution of the contemporary theory for the formation and chemical modification of the aluminum-silicon eutectic is reviewed in this article, where the work of Hellawell et al., which laid the foundation for the present day understanding of this technologically important reaction, is critically examined and certain inconsistencies are shown to exist between the contemporary theories and more current concepts.

Journal ArticleDOI
TL;DR: In this paper, a quantitative comparison between CFD results and heat transfer experimental data is presented for a model geometry of 44 solid spheres in a tube with tube-to-particle diameter ratio equal to 2.

Journal ArticleDOI
TL;DR: In this article, the authors present a tool that is becoming more realistic for use in the description of the detailed flow fields within fixed beds of low tube-to-particle diameter ratio (N).
Abstract: Computational fluid dynamics (CFD) is a tool that is becoming more realistic for use in the description of the detailed flow fields within fixed beds of low tube-to-particle diameter ratio (N). The...

Journal ArticleDOI
TL;DR: In this article, the Gibbs sampler is used to estimate the parameters of a generalised linear mixed model with nonignorable missing response data and with nonmonotone patterns of missing data in the response variable.
Abstract: SUMMARY We propose a method for estimating parameters in the generalised linear mixed model with nonignorable missing response data and with nonmonotone patterns of missing data in the response variable. We develop a Monte Carlo EM algorithm for estimating the parameters in the model via the Gibbs sampler. For the normal random effects model, we derive a novel analytical form for the E- and M-steps, which is facilitated by integrating out the random effects. This form leads to a computationally feasible and extremely efficient Monte Carlo EM algorithm for computing maximum likelihood estimates and standard errors. In addition, we propose a very general joint multinomial model for the missing data indicators, which can be specified via a sequence of one-dimensional conditional distributions. This multinomial model allows for an arbitrary correlation structure between the missing data indicators, and has the potential of reducing the number of nuisance parameters. Real datasets from the International Breast Cancer Study Group and an environmental study involving dyspnoea in cotton workers are presented to illustrate the proposed methods.

Journal ArticleDOI
TL;DR: Applications such as the detection of an astronomical object, forward-scattered radiation, and incoherent light are described whereby signal enhancements of at least 7 orders of magnitude may be achieved.
Abstract: I propose to use as a window the dark core of an optical vortex to examine a weak background signal hidden in the glare of a bright coherent source. Applications such as the detection of an astronomical object, forward-scattered radiation, and incoherent light are described whereby signal enhancements of at least 7 orders of magnitude may be achieved.

Journal ArticleDOI
TL;DR: This contribution proposes arithmetic architectures, optimized for modern field programmable gate arrays (FPGAs), that perform modular exponentiation with very long integers, an operation at the heart of many practical public-key algorithms such as RSA and discrete logarithm schemes.
Abstract: It is widely recognized that security issues will play a crucial role in the majority of future computer and communication systems. Central tools for achieving system security are cryptographic algorithms. This contribution proposes arithmetic architectures which are optimized for modern field programmable gate arrays (FPGAs). The proposed architectures perform modular exponentiation with very long integers. This operation is at the heart of many practical public-key algorithms such as RSA and discrete logarithm schemes. We combine a high-radix Montgomery modular multiplication algorithm with a new systolic array design. The designs are flexible, allowing any choice of operand and modulus. The new architecture also allows the use of high radices. Unlike previous approaches, we systematically implement and compare several variants of our new architecture for different bit lengths. We provide absolute area and timing measures for each architecture. The results allow conclusions about the feasibility and time-space trade-offs of our architecture for implementation on commercially available FPGAs. We found that 1,024-bit RSA decryption can be done in 3.1 ms with our fastest architecture.
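The arithmetic at the core of such designs, Montgomery modular multiplication inside a square-and-multiply exponentiation, can be sketched in software as below. This only illustrates the algorithm; it does not model the high-radix systolic array architecture that is the paper's actual contribution.

```python
# Software sketch of Montgomery multiplication and modular exponentiation.
# Illustrates the arithmetic only; the paper's contribution is a high-radix
# systolic FPGA architecture, which this does not attempt to model.

def mont_params(n: int):
    """Pick R = 2^k > n (n odd) and n' with n*n' == -1 (mod R)."""
    k = n.bit_length()
    R = 1 << k
    n_inv = pow(n, -1, R)          # n odd, so the inverse mod 2^k exists
    return R, k, (-n_inv) % R

def mont_mul(a: int, b: int, n: int, R: int, k: int, n_prime: int) -> int:
    """Return a*b*R^-1 mod n (Montgomery reduction, REDC)."""
    T = a * b
    m = (T * n_prime) & (R - 1)    # T*n' mod R (R is a power of two)
    t = (T + m * n) >> k           # exact division by R
    return t - n if t >= n else t

def mont_exp(x: int, e: int, n: int) -> int:
    """x^e mod n via left-to-right square-and-multiply in Montgomery form."""
    R, k, n_prime = mont_params(n)
    xm  = (x * R) % n              # operand in Montgomery form
    acc = R % n                    # Montgomery form of 1
    for bit in bin(e)[2:]:
        acc = mont_mul(acc, acc, n, R, k, n_prime)
        if bit == "1":
            acc = mont_mul(acc, xm, n, R, k, n_prime)
    return mont_mul(acc, 1, n, R, k, n_prime)   # convert back out

# Quick self-check against Python's built-in modular exponentiation.
n, x, e = 0xC2F3_95B1, 0x1234_5678, 65537       # modulus must be odd
assert mont_exp(x, e, n) == pow(x, e, n)
```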

Journal ArticleDOI
TL;DR: It is shown that if H has a k-coloring with color-class sizes h1 ⩽ h2 ⩽ ⋯ ⩽ hk, then the conjecture is true with c(H) = hk + hk−1 − 1.

Book ChapterDOI
14 May 2001
TL;DR: This work proposes a new elliptic curve processor architecture for the computation of point multiplication for curves defined over fields GF(p) that is a scalable architecture in terms of area and speed specially suited for memory-rich hardware platforms such as field programmable gate arrays (FPGAs).
Abstract: This work proposes a new elliptic curve processor architecture for the computation of point multiplication for curves defined over fields GF(p). This is a scalable architecture in terms of area and speed specially suited for memory-rich hardware platforms such as field programmable gate arrays (FPGAs). This processor uses a new type of high-radix Montgomery multiplier that relies on the precomputation of frequently used values and on the use of multiple processing engines.
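The operation being accelerated, point multiplication k·P over GF(p), reduces to repeated point additions and doublings. Below is a toy affine-coordinate sketch with an illustrative small curve; a processor like the one proposed works with cryptographically large primes and the high-radix Montgomery field multiplier described above.

```python
# Toy sketch of elliptic-curve point multiplication over GF(p) using affine
# coordinates and double-and-add. Curve and point are illustrative only;
# real designs use large primes and faster coordinate systems / multipliers.
p, a, b = 17, 2, 2          # toy curve: y^2 = x^3 + 2x + 2 over GF(17)
O = None                    # point at infinity

def point_add(P, Q):
    if P is O: return Q
    if Q is O: return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return O                                          # P + (-P) = O
    if P == Q:
        s = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p    # tangent slope
    else:
        s = (y2 - y1) * pow(x2 - x1, -1, p) % p           # chord slope
    x3 = (s * s - x1 - x2) % p
    y3 = (s * (x1 - x3) - y1) % p
    return (x3, y3)

def point_mul(k, P):
    """Compute k*P with left-to-right double-and-add."""
    R = O
    for bit in bin(k)[2:]:
        R = point_add(R, R)            # double
        if bit == "1":
            R = point_add(R, P)        # add
    return R

G = (5, 1)   # a point on the toy curve: 5^3 + 2*5 + 2 = 137 = 1 = 1^2 (mod 17)
for k in range(1, 6):
    print(k, point_mul(k, G))
```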

Journal ArticleDOI
TL;DR: Calculations of end-to-end distances based on equilibrium and projected conformations confirmed that the xanthan chain conformation on the mica surface was at equilibrium and was therefore representative of the conformation of xanthan in solution, a useful complement to solution-based methods for determining physical-chemical properties of biopolymers.

Journal ArticleDOI
TL;DR: Results show that OEFs when used with the new inversion and multiplication algorithms provide a substantial performance increase over other reported methods.
Abstract: This contribution focuses on a class of Galois field used to achieve fast finite field arithmetic which we call an Optimal Extension Field (OEF), first introduced in [3]. We extend this work by presenting an adaptation of Itoh and Tsujii's algorithm for finite field inversion applied to OEFs. In particular, we use the facts that the action of the Frobenius map in GF (pm) can be computed with only m-1 subfield multiplications and that inverses in GF (p) may be computed cheaply using known techniques. As a result, we show that one extension field inversion can be computed with a logarithmic number of extension field multiplications. In addition, we provide new extension field multiplication formulas which give a performance increase. Further, we provide an OEF construction algorithm together with tables of Type I and Type II OEFs along with statistics on the number of pseudo-Mersenne primes and OEFs. We apply this new work to provide implementation results using these methods to construct elliptic curve cryptosystems on both DEC Alpha workstations and Pentium-class PCs. These results show that OEFs when used with our new inversion and multiplication algorithms provide a substantial performance increase over other reported methods.

Proceedings ArticleDOI
01 Nov 2001
TL;DR: Analysis from a wide-scale empirical study of RealVideo traffic from several Internet servers to many geographically diverse users finds typical RealVideos to have high quality, achieving an average frame rate of 10 frames per second and very smooth playout, but very few videos achieve full-motion frame rates.
Abstract: The tremendous increase in computer power and bandwidth connectivity has fueled the growth of streaming video over the Internet to the desktop. While there have been large scale empirical studies of Internet, Web and multimedia traffic, the performance of popular Internet streaming video technologies and the impact of streaming video on the Internet is still largely unknown. This paper presents analysis from a wide-scale empirical study of RealVideo traffic from several Internet servers to many geographically diverse users. We find typical RealVideos to have high quality, achieving an average frame rate of 10 frames per second and very smooth playout, but very few videos achieve full-motion frame rates. Overall video performance is most influenced by the bandwidth of the end-user connection to the Internet, but high-bandwidth Internet connections are pushing the video performance bottleneck closer to the server.

Proceedings ArticleDOI
15 May 2001
TL;DR: This paper reports the effect of voltage distortion and imbalance (VDI) on the thermal aging of the insulation of low voltage induction motors, based on detailed thermal modeling of actual motors in the 2 to 200 HP range.
Abstract: This paper reports the effect of voltage distortion and imbalance (VDI) on the thermal aging of the insulation of low voltage induction motors. The study is based on a detailed thermal modeling of actual motors in the 2 to 200 HP range. The dollar value of the useful life lost was estimated for different VDI conditions. Two important conclusions were reached. First, voltage subharmonics have a dramatic effect on motor thermal aging. Second, the overall cost of motor loss of life due to harmonic pollution and voltage imbalance, in the US today, is estimated to be in the range of 1 to 2 billion dollars per year.
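Such loss-of-life estimates rest on the strong, roughly exponential dependence of insulation life on hot-spot temperature, often approximated as a halving of life per ~10 °C of extra temperature. The sketch below shows that arithmetic with made-up numbers; the paper's figures come from detailed thermal models of specific motors, not from this rule of thumb.

```python
# Back-of-the-envelope insulation loss-of-life estimate using the common
# "life halves per ~10 degC of extra hot-spot temperature" rule of thumb.
# All numbers below are illustrative assumptions, not values from the paper.
RATED_LIFE_YEARS = 20.0      # assumed insulation life at rated temperature
HALVING_INTERVAL = 10.0      # degC rise that halves insulation life

def expected_life(extra_hotspot_degC: float) -> float:
    """Insulation life when VDI-induced losses raise the hot spot by the
    given number of degrees above its rated value."""
    return RATED_LIFE_YEARS * 0.5 ** (extra_hotspot_degC / HALVING_INTERVAL)

def annual_loss_of_life_cost(extra_hotspot_degC: float,
                             motor_replacement_cost: float) -> float:
    """Extra replacement cost per year attributable to the temperature rise."""
    base_rate = motor_replacement_cost / RATED_LIFE_YEARS
    vdi_rate  = motor_replacement_cost / expected_life(extra_hotspot_degC)
    return vdi_rate - base_rate

# e.g. a hypothetical 5 degC rise on a $4,000 motor
print(f"life: {expected_life(5):.1f} years,"
      f" extra cost: ${annual_loss_of_life_cost(5, 4000):.0f}/year")
```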

Journal ArticleDOI
TL;DR: In this paper, the authors describe the structure and performance of the PEM fuel cell considered as a membrane reactor and develop an analytical transport reaction model that, despite some assumptions, captures the essential features of the device very well.

Journal ArticleDOI
TL;DR: In this article, the steady flow of Herschel-Bulkley fluids in a canonical three-dimensional expansion was modeled using a regularized continuous constitutive relation, and the flow was obtained numerically using a mixed-Galerkin finite element formulation with a Newton-Raphson iteration procedure coupled to an iterative solver.
Abstract: In this paper we study steady flow of Herschel–Bulkley fluids in a canonical three-dimensional expansion. The fluid behavior was modeled using a regularized continuous constitutive relation, and the flow was obtained numerically using a mixed-Galerkin finite element formulation with a Newton–Raphson iteration procedure coupled to an iterative solver. Results for the topology of the yielded and unyielded regions, and recirculation zones as a function of the Reynolds and Bingham numbers and the power-law exponent, are presented and discussed for a 2:1 and a 4:1 expansion ratio. The results reveal the strong interplay between the Bingham and Reynolds numbers and their influence on the formation and break up of stagnant zones in the corner of the expansion and on the size and location of core regions.
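The constitutive model combines a yield stress with power-law behavior, and regularized forms such as the Papanastasiou-type expression below replace the discontinuous yield criterion with a smooth exponential term so a single relation holds in yielded and unyielded regions. The parameter values are illustrative; the paper's exact regularized relation is not reproduced here.

```python
# One common regularized Herschel-Bulkley apparent viscosity
# (Papanastasiou-type exponential regularization). Parameter values are
# illustrative; the paper's specific regularization is not reproduced here.
import numpy as np

tau_y = 1.0     # yield stress
K     = 1.0     # consistency index
n     = 0.5     # power-law exponent (shear-thinning)
m_reg = 1000.0  # regularization parameter: larger -> closer to ideal model

def apparent_viscosity(gamma_dot):
    """Apparent viscosity eta(gamma_dot) such that tau = eta * gamma_dot.

    The factor (1 - exp(-m_reg*gamma_dot)) smooths the yield-stress
    contribution so it stays bounded at low shear rates; as m_reg grows, the
    ideal Herschel-Bulkley law tau = tau_y + K*gamma_dot^n is recovered in
    yielded regions."""
    gamma_dot = np.asarray(gamma_dot, dtype=float)
    return (K * gamma_dot ** (n - 1)
            + tau_y * (1 - np.exp(-m_reg * gamma_dot)) / gamma_dot)

shear_rates = np.logspace(-3, 2, 6)
for g, eta in zip(shear_rates, apparent_viscosity(shear_rates)):
    print(f"gamma_dot = {g:8.3f}   eta = {eta:10.3f}")
```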

Journal ArticleDOI
TL;DR: The directional dependence of transport properties measured in healthy cancellous bone, considered as a biologic porous medium, is defined to demonstrate the anisotropic and heterogeneous nature of the tissue and to encourage the ongoing quantification of parameters within the established poroelastic models.

Journal ArticleDOI
TL;DR: The results indicate that severe stenosis causes considerable compressive stress in the tube wall and critical flow conditions such as negative pressure, high shear stress, and flow separation which may be related to artery compression, plaque cap rupture, platelet activation, and thrombus formation.
Abstract: Severe stenosis may cause critical flow and wall mechanical conditions related to artery fatigue, artery compression, and plaque rupture, which leads directly to heart attack and stroke. The exact mechanism involved is not well understood. In this paper a nonlinear three-dimensional thick-wall model with fluid-wall interactions is introduced to simulate blood flow in carotid arteries with stenosis and to quantify physiological conditions under which wall compression or even collapse may occur. The mechanical properties of the tube wall were selected to match a thick-wall stenosis model made of PVA hydrogel. The experimentally measured nonlinear stress-strain relationship is implemented in the computational model using an incremental linear elasticity approach. The Navier-Stokes equations are used for the fluid model. An incremental boundary iteration method is used to handle the fluid-wall interactions. Our results indicate that severe stenosis causes considerable compressive stress in the tube wall and critical flow conditions such as negative pressure, high shear stress, and flow separation which may be related to artery compression, plaque cap rupture, platelet activation, and thrombus formation. The stress distribution has a very localized pattern and both maximum tensile stress (five times higher than normal average stress) and maximum compressive stress occur inside the stenotic section. Wall deformation, flow rates, and true severities of the stenosis under different pressure conditions are calculated and compared with experimental measurements and reasonable agreement is found.

Journal ArticleDOI
TL;DR: The authors compare the distribution of earnings surprises in the US with those of 12 other countries and find that US managers are relatively more likely to manage earnings surprises, owing to differences between US corporate governance and legal environments and those in other countries.

Journal ArticleDOI
TL;DR: A semiparametric cure rate model with a smoothing parameter that controls the degree of parametricity in the right tail of the survival distribution is proposed and it is shown that such a parameter is crucial for these kinds of models and can have an impact on the posterior estimates.
Abstract: We propose methods for Bayesian inference for a new class of semiparametric survival models with a cure fraction. Specifically, we propose a semiparametric cure rate model with a smoothing parameter that controls the degree of parametricity in the right tail of the survival distribution. We show that such a parameter is crucial for these kinds of models and can have an impact on the posterior estimates. Several novel properties of the proposed model are derived. In addition, we propose a class of improper noninformative priors based on this model and examine the properties of the implied posterior. Also, a class of informative priors based on historical data is proposed and its theoretical properties are investigated. A case study involving a melanoma clinical trial is discussed in detail to demonstrate the proposed methodology.
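Cure rate models of this general type attach a plateau of long-term survivors to an otherwise proper survival distribution; in the widely used promotion-time formulation, the population survival function is S_pop(t) = exp(−θ·F(t)), with cure fraction exp(−θ). The sketch below evaluates that formulation with a Weibull F; it illustrates the model class, not the paper's specific semiparametric construction or its smoothing parameter.

```python
# Promotion-time cure rate model S_pop(t) = exp(-theta * F(t)) with a
# Weibull latency distribution F. Illustrates the general model class only;
# the paper's semiparametric right-tail construction is not reproduced here.
import numpy as np

theta = 1.2                       # mean number of latent risks (illustrative)
shape, scale = 1.5, 3.0           # Weibull latency parameters (illustrative)

def F(t):
    """Weibull CDF of the latency (promotion) time."""
    return 1.0 - np.exp(-(np.asarray(t, dtype=float) / scale) ** shape)

def S_pop(t):
    """Population survival: improper, with plateau exp(-theta) = cure fraction."""
    return np.exp(-theta * F(t))

print(f"cure fraction = exp(-theta) = {np.exp(-theta):.3f}")
for t in (0.5, 1.0, 2.0, 5.0, 20.0):
    print(f"S_pop({t:4.1f}) = {S_pop(t):.3f}")
```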

Journal ArticleDOI
TL;DR: In this article, the turnover rate and reaction orders for the combustion of methane in excess oxygen over Pd foils were found to be in agreement with literature results reported for supported catalysts, and the surface area of the foil, measured by isotopic exchange of surface oxygen with 18O2, increased by a factor of approximately 2 after reaction.

Proceedings ArticleDOI
09 Nov 2001
TL;DR: This paper develops an integrated solution that automates, as much as possible, all steps of the document transformation process, and introduces an algorithm that can satisfactorily discover acceptable transformations.
Abstract: The advent of web services that use XML-based message exchanges has spurred many efforts that address issues related to inter-enterprise service electronic commerce interactions. Currently emerging standards and technologies enable enterprises to describe and advertise their own Web Services and to discover and determine how to interact with services fronted by other businesses. However, these technologies do not address the problem of how to reconcile structural differences between similar types of documents supported by different enterprises. Transformations between such documents must thus be created manually on a case-by-case basis. In this paper, we explore the problem of how to automate the transformation of XML E-business documents. We develop an integrated solution that automates as much as possible all steps of the document transformation process. One, we propose a set of schema transformation operations that establish semantic relationships between two XML document schemas. Two, we define a model that allows us to compare the cost of performing these operations. Three, we introduce an algorithm that discovers an efficient sequence of operations for transforming a source document schema into a target document schema based on our cost model. The operation sequence then is used to generate an equivalent XSLT transformation script. Experimental results indicate that our algorithm can satisfactorily discover acceptable transformations.
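The end product of such an algorithm is an XSLT script mapping a source document schema to a target schema. The toy example below applies a hand-written rename-and-restructure transformation with lxml; the element names are invented for illustration and the script is not one generated by the paper's method.

```python
# Toy example of applying an XSLT script that restructures one XML business
# document into another, of the kind the paper's algorithm would generate.
# Element names are invented for illustration; this XSLT is hand-written.
from lxml import etree

source = etree.XML(b"""
<order>
  <buyerName>Acme Corp.</buyerName>
  <item sku="42" qty="3"/>
</order>""")

xslt = etree.XML(b"""
<xsl:stylesheet version="1.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:template match="/order">
    <purchaseOrder>
      <customer><xsl:value-of select="buyerName"/></customer>
      <xsl:for-each select="item">
        <line product="{@sku}" quantity="{@qty}"/>
      </xsl:for-each>
    </purchaseOrder>
  </xsl:template>
</xsl:stylesheet>""")

transform = etree.XSLT(xslt)
print(str(transform(source)))     # serialized target-schema document
```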