
Showing papers by "Worcester Polytechnic Institute published in 1999"


Journal ArticleDOI
TL;DR: The authors' integrated IT utilization model extends TAM to include TTF constructs and provides more explanatory power than either model alone, which should lead to a better understanding of choices about using IT.

1,607 citations


Journal ArticleDOI
TL;DR: In this article, the authors examined the performance of the top 55 U.S. commercial banks via a two-stage production process that separates profitability and marketability and uncovered substantial performance inefficiency in both dimensions.
Abstract: Utilizing recent developments in data envelopment analysis (DEA), this paper examines the performance of the top 55 U.S. commercial banks via a two-stage production process that separates profitability and marketability. Substantial performance inefficiency is uncovered in both dimensions. Relatively large banks exhibit better performance on profitability, whereas smaller banks tend to perform better with respect to marketability. New context-dependent performance measures are defined for profitability and marketability which employ a DEA stratification model and a DEA attractiveness measure. When combined with the original DEA measure, the context-dependent performance measure better characterizes the profitability and marketability of 55 U.S. commercial banks. The new approach identifies areas for improved bank performance over the two-stage production process. The effect of acquisition on efficiency and attractiveness is also examined.

878 citations


Journal ArticleDOI
TL;DR: A simple Monte Carlo approach is provided for approximating Bayesian credible and highest probability density (HPD) intervals for parameters of interest when a sample can be generated from their respective marginal posterior distributions using a Markov chain Monte Carlo (MCMC) sampling algorithm, along with a Monte Carlo method for computing HPD intervals using a sample from an importance sampling distribution.
Abstract: This article considers how to estimate Bayesian credible and highest probability density (HPD) intervals for parameters of interest and provides a simple Monte Carlo approach to approximate these Bayesian intervals when a sample of the relevant parameters can be generated from their respective marginal posterior distribution using a Markov chain Monte Carlo (MCMC) sampling algorithm. We also develop a Monte Carlo method to compute HPD intervals for the parameters of interest from the desired posterior distribution using a sample from an importance sampling distribution. We apply our methodology to a Bayesian hierarchical model that has a posterior density containing analytically intractable integrals that depend on the (hyper) parameters. We further show that our methods are useful not only for calculating the HPD intervals for the parameters of interest but also for computing the HPD intervals for functions of the parameters. Necessary theory is developed and illustrative examples—including a si...

844 citations
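
The sample-based HPD estimator described above is compact enough to sketch. The following Python fragment is our own illustration, not the authors' code (the function name and the Gamma example are ours): given MCMC draws from a marginal posterior, the shortest interval containing a fraction 1 - alpha of the sorted sample approximates the HPD interval, and applying the same function to transformed draws handles functions of the parameters.

    import numpy as np

    def hpd_interval(draws, alpha=0.05):
        """Approximate 100(1 - alpha)% HPD interval from posterior draws."""
        x = np.sort(np.asarray(draws))
        n = len(x)
        m = int(np.floor((1.0 - alpha) * n))  # draws inside each candidate interval
        widths = x[m:] - x[:n - m]            # widths of all candidate intervals
        j = int(np.argmin(widths))            # the shortest one approximates the HPD
        return x[j], x[j + m]

    # A skewed posterior, where HPD and equal-tailed intervals differ visibly.
    rng = np.random.default_rng(0)
    draws = rng.gamma(shape=2.0, scale=1.0, size=50_000)
    lo, hi = hpd_interval(draws)
    eq = np.quantile(draws, [0.025, 0.975])
    print(f"95% HPD:        ({lo:.3f}, {hi:.3f})")
    print(f"95% equal-tail: ({eq[0]:.3f}, {eq[1]:.3f})")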


Journal ArticleDOI
TL;DR: In this article, the authors introduce some cheaper and faster variants of the classical additive Schwarz preconditioner (AS) for general sparse linear systems and show that the new methods are superior to AS in terms of both iteration counts and CPU time, as well as the communication cost when implemented on distributed memory computers.
Abstract: We introduce some cheaper and faster variants of the classical additive Schwarz preconditioner (AS) for general sparse linear systems and show, by numerical examples, that the new methods are superior to AS in terms of both iteration counts and CPU time, as well as the communication cost when implemented on distributed memory computers. This is especially true for harder problems such as indefinite complex linear systems and systems of convection-diffusion equations from three-dimensional compressible flows. Both sequential and parallel results are reported.

465 citations
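
The idea behind such variants is easy to model in software. The sketch below is our own toy construction (names and parameters are ours, and GMRES stands in for whatever Krylov solver one prefers): both preconditioners solve on overlapping subdomains of a 1D Poisson system, but the restricted variant adds back only each subdomain's non-overlapping core, which drops one communication step per application and often lowers iteration counts, in the spirit of the results reported above.

    import numpy as np
    import scipy.sparse as sp
    import scipy.sparse.linalg as spla

    n, nsub, ovl = 200, 4, 4                      # grid size, subdomains, overlap
    A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc")
    b = np.ones(n)

    size = n // nsub
    doms, cores, invs = [], [], []
    for i in range(nsub):
        lo, hi = i * size, (i + 1) * size
        cores.append(np.arange(lo, hi))                    # non-overlapping core
        d = np.arange(max(0, lo - ovl), min(n, hi + ovl))  # overlapping domain
        doms.append(d)
        invs.append(spla.splu(A[np.ix_(d, d)].tocsc()))    # local direct solver

    def schwarz(r, restricted):
        z = np.zeros_like(r)
        for d, c, lu in zip(doms, cores, invs):
            zi = lu.solve(r[d])
            if restricted:                  # restricted AS: add the core part only
                keep = np.isin(d, c)
                z[d[keep]] += zi[keep]
            else:                           # classical AS: add the whole piece
                z[d] += zi
        return z

    for restricted in (False, True):
        M = spla.LinearOperator((n, n),
                                matvec=lambda r, rs=restricted: schwarz(r, rs))
        res = []
        x, info = spla.gmres(A, b, M=M, callback=lambda rn: res.append(rn),
                             callback_type="pr_norm")
        print("RAS" if restricted else "AS ", "GMRES iterations:", len(res))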


Proceedings ArticleDOI
24 Oct 1999
TL;DR: A multi-resolution view of the data via hierarchical clustering is developed, and a variation of parallel coordinates is used to convey aggregation information for the resulting clusters.
Abstract: Our ability to accumulate large, complex (multivariate) data sets has far exceeded our ability to effectively process them in searching for patterns, anomalies and other interesting features. Conventional multivariate visualization techniques generally do not scale well with respect to the size of the data set. The focus of this paper is on the interactive visualization of large multivariate data sets based on a number of novel extensions to the parallel coordinates display technique. We develop a multi-resolution view of the data via hierarchical clustering, and use a variation of parallel coordinates to convey aggregation information for the resulting clusters. Users can then navigate the resulting structure until the desired focus region and level of detail is reached, using our suite of navigational and filtering tools. We describe the design and implementation of our hierarchical parallel coordinates system which is based on extending the XmdvTool system. Lastly, we show examples of the tools and techniques applied to large (hundreds of thousands of records) multivariate data sets.

454 citations
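
A minimal stand-in for the aggregation idea (ours, not XmdvTool): draw one polyline per cluster mean instead of one per record, with a translucent band conveying each cluster's extent on every axis. AgglomerativeClustering substitutes here for the paper's hierarchical clustering.

    import numpy as np
    import matplotlib.pyplot as plt
    from sklearn.cluster import AgglomerativeClustering

    rng = np.random.default_rng(1)
    X = np.vstack([rng.normal(m, 0.6, (300, 4)) for m in (0.0, 3.0, 6.0)])
    X = (X - X.min(0)) / (X.max(0) - X.min(0))   # normalize each axis to [0, 1]

    labels = AgglomerativeClustering(n_clusters=3).fit_predict(X)
    axes = np.arange(X.shape[1])
    for k in np.unique(labels):
        C = X[labels == k]
        plt.fill_between(axes, C.min(0), C.max(0), alpha=0.2)  # cluster extent
        plt.plot(axes, C.mean(0), lw=2, label=f"cluster {k} (n={len(C)})")
    plt.xticks(axes, [f"dim {i}" for i in axes])
    plt.legend()
    plt.title("Parallel coordinates, one polyline per cluster")
    plt.show()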


Journal ArticleDOI
TL;DR: In this article, the authors consider right-censored survival data for populations with a surviving (cure) fraction and propose a model that is quite different from the standard mixture model for cure rates.
Abstract: We consider Bayesian methods for right-censored survival data for populations with a surviving (cure) fraction. We propose a model that is quite different from the standard mixture model for cure rates. We provide a natural motivation and interpretation of the model and derive several novel properties of it. First, we show that the model has a proportional hazards structure, with the covariates depending naturally on the cure rate. Second, we derive several properties of the hazard function for the proposed model and establish mathematical relationships with the mixture model for cure rates. Prior elicitation is discussed in detail, and classes of noninformative and informative prior distributions are proposed. Several theoretical properties of the proposed priors and resulting posteriors are derived, and comparisons are made to the standard mixture model. A real dataset from a melanoma clinical trial is discussed in detail.

444 citations


Journal ArticleDOI
01 Jan 1999-Infor
TL;DR: In this paper, the authors investigated the infeasibility of super-efficiency data envelopment analysis (DEA) models in which the unit under evaluation is excluded from the reference set.
Abstract: The paper investigates the infeasibility of super-efficiency data envelopment analysis (DEA) models in which the unit under evaluation is excluded from the reference set. Necessary and sufficient conditions are provided for infeasibility of the super-efficiency DEA measures. By the returns to scale (RTS) classifications obtained from the standard DEA model, we can further locate the position of the unit under evaluation when infeasibility occurs. It is shown that the ranking of the total set of efficient decision making units (DMUs) is impossible because of the infeasibility of super-efficiency DEA models. We are also able to identify the endpoint positions of the extreme efficient units. The results are useful for sensitivity analysis of efficiency classifications.

418 citations
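
The model in question is a small linear program, sketched below under stated assumptions (this is our formulation of the input-oriented super-efficiency model under variable returns to scale; the function and toy data are ours, not the paper's). The unit under evaluation is excluded from the reference set, so its score may exceed 1, or the program may be infeasible, which is precisely the phenomenon the paper characterizes: here DMU 4, with the unique largest output, has no feasible reference combination.

    import numpy as np
    from scipy.optimize import linprog

    def super_efficiency(X, Y, k, vrs=True):
        """Input-oriented super-efficiency score of DMU k (columns of X, Y),
        or None when excluding DMU k makes the LP infeasible."""
        m, n = X.shape
        s = Y.shape[0]
        ref = [j for j in range(n) if j != k]        # reference set excludes k
        c = np.r_[1.0, np.zeros(n - 1)]              # minimize theta
        A_ub = np.vstack([
            np.hstack([-X[:, [k]], X[:, ref]]),      # sum lam*x_j <= theta*x_k
            np.hstack([np.zeros((s, 1)), -Y[:, ref]])])  # sum lam*y_j >= y_k
        b_ub = np.r_[np.zeros(m), -Y[:, k]]
        A_eq = np.r_[0.0, np.ones(n - 1)].reshape(1, -1) if vrs else None
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq,
                      b_eq=[1.0] if vrs else None,
                      bounds=[(None, None)] + [(0, None)] * (n - 1))
        return res.fun if res.success else None

    # Toy data: 2 inputs, 1 output, 5 hypothetical DMUs (one per column).
    X = np.array([[2.0, 3.0, 6.0, 5.0, 9.0],
                  [5.0, 3.0, 1.0, 4.0, 2.0]])
    Y = np.array([[1.0, 1.0, 1.0, 1.0, 2.0]])
    for k in range(X.shape[1]):
        score = super_efficiency(X, Y, k)
        print(f"DMU {k}:", "infeasible" if score is None else f"{score:.3f}")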


Journal ArticleDOI
TL;DR: A method is proposed for estimating parameters in generalized linear models with missing covariates and a non-ignorable missing data mechanism; model checking for the missing data mechanism and sensitivity analyses play an important role in this problem and are discussed in detail.
Abstract: We propose a method for estimating parameters in generalized linear models with missing covariates and a non-ignorable missing data mechanism. We use a multinomial model for the missing data indicators and propose a joint distribution for them which can be written as a sequence of one-dimensional conditional distributions, with each one-dimensional conditional distribution consisting of a logistic regression. We allow the covariates to be either categorical or continuous. The joint covariate distribution is also modelled via a sequence of one-dimensional conditional distributions, and the response variable is assumed to be completely observed. We derive the E- and M-steps of the EM algorithm with non-ignorable missing covariate data. For categorical covariates, we derive a closed form expression for the E- and M-steps of the EM algorithm for obtaining the maximum likelihood estimates (MLEs). For continuous covariates, we use a Monte Carlo version of the EM algorithm to obtain the MLEs via the Gibbs sampler. Computational techniques for Gibbs sampling are proposed and implemented. The parametric form of the assumed missing data mechanism itself is not 'testable' from the data, and thus the non-ignorable modelling considered here can be viewed as a sensitivity analysis concerning a more complicated model. Therefore, although a model may have 'passed' the tests for a certain missing data mechanism, this does not mean that we have captured, even approximately, the correct missing data mechanism. Hence, model checking for the missing data mechanism and sensitivity analyses play an important role in this problem and are discussed in detail. Several simulations are given to demonstrate the methodology. In addition, a real data set from a melanoma cancer clinical trial is presented to illustrate the methods proposed.

285 citations


Journal ArticleDOI
TL;DR: In this article, the lifetime of the lowest excited singlet state of peridinin is found to be strongly dependent on solvent polarity and ranges from 7 ps in the strongly polar solvent trifluoroethanol to 172 ps in nonpolar solvents cyclohexane and benzene.
Abstract: The spectroscopic properties and dynamic behavior of peridinin in several different solvents were studied by steady-state absorption, fluorescence, and transient optical spectroscopy. The lifetime of the lowest excited singlet state of peridinin is found to be strongly dependent on solvent polarity and ranges from 7 ps in the strongly polar solvent trifluoroethanol to 172 ps in the nonpolar solvents cyclohexane and benzene. The lifetimes show no obvious correlation with solvent polarizability, and hydrogen bonding of the solvent molecules to peridinin is not an important factor in determining the dynamic behavior of the lowest excited singlet state. The wavelengths of emission maxima, the quantum yields of fluorescence, and the transient absorption spectra are also affected by the solvent environment. A model consistent with the data and supported by preliminary semiempirical calculations invokes the presence of a charge transfer state in the excited state manifold of peridinin to account for the observations.

210 citations


Journal ArticleDOI
TL;DR: This paper reports the first observation of the simultaneous three-dimensional confinement of both a low-index particle and a high-index particle within a single-beam optical trap formed by a strongly focused laser beam containing an optical vortex.
Abstract: We report the first observation of the simultaneous three-dimensional confinement of both a low-index particle and a high-index particle within a single-beam optical trap by using a strongly focused laser beam containing an optical vortex. Experimental and theoretical investigations of the trap stability are described.

208 citations


Proceedings ArticleDOI
14 Apr 1999
TL;DR: This contribution proposes arithmetic architectures which are optimized for modern field programmable gate arrays (FPGAs) and shows that it is possible to implement modular exponentiation at secure bit lengths on a single commercially available FPGA.
Abstract: It is widely recognized that security issues will play a crucial role in the majority of future computer and communication systems. Central tools for achieving system security are cryptographic algorithms. For performance as well as for physical security reasons, it is often advantageous to realize cryptographic algorithms in hardware. In order to overcome the well-known drawback of reduced flexibility that is associated with traditional ASIC solutions, this contribution proposes arithmetic architectures which are optimized for modern field programmable gate arrays (FPGAs). The proposed architectures perform modular exponentiation with very long integers. This operation is at the heart of many practical public-key algorithms such as RSA and discrete logarithm schemes. We combine the Montgomery modular multiplication algorithm with a new systolic array design, which is capable of processing a variable number of bits per array cell. The designs are flexible, allowing any choice of operand and modulus. Unlike previous approaches, we systematically implement and compare several variants of our new architecture for different bit lengths. We provide absolute area and timing measures for each architecture. The results allow conclusions about the feasibility and time-space trade-offs of our architecture for implementation on Xilinx XC4000 series FPGAs. As a major practical result we show that it is possible to implement modular exponentiation at secure bit lengths on a single commercially available FPGA.
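
Montgomery modular multiplication itself is easy to model in software, even though the paper's contribution is the systolic FPGA mapping rather than the algorithm. Below is a bit-serial Python model (ours, with toy operand sizes) in which each loop iteration mirrors what one array cell would do per bit: a conditional add of the multiplicand, a conditional add of the modulus to keep the accumulator even, and a one-bit shift.

    def mont_mul(a, b, m, rbits):
        """Montgomery product a*b*R^-1 mod m with R = 2^rbits, bit-serially."""
        t = 0
        for i in range(rbits):
            t += ((a >> i) & 1) * b      # add b if bit i of a is set
            if t & 1:                    # make t even so it is divisible by 2
                t += m
            t >>= 1                      # divide by 2 (a shift in hardware)
        return t if t < m else t - m

    def mont_exp(base, exp, m):
        """Square-and-multiply with all operands kept in Montgomery form."""
        rbits = m.bit_length() + 2
        r2 = pow(1 << rbits, 2, m)            # R^2 mod m, for conversion
        x = mont_mul(base % m, r2, m, rbits)  # x = base * R mod m
        acc = mont_mul(1, r2, m, rbits)       # acc = 1 * R mod m
        for i in reversed(range(exp.bit_length())):
            acc = mont_mul(acc, acc, m, rbits)
            if (exp >> i) & 1:
                acc = mont_mul(acc, x, m, rbits)
        return mont_mul(acc, 1, m, rbits)     # convert out of Montgomery form

    # Sanity check against Python's built-in modular exponentiation.
    m, b, e = 0xE3F1, 0x1234, 0x5678   # toy sizes; RSA would use 1024+ bits
    assert m % 2 == 1                  # Montgomery requires an odd modulus
    print(hex(mont_exp(b, e, m)), hex(pow(b, e, m)))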

Journal ArticleDOI
TL;DR: In this article, a planar laser-induced fluorescence technique was used to determine the penetration, dilution, and structural features of fully modulated, incompressible, turbulent transverse jets.
Abstract: Fully-modulated, incompressible, turbulent transverse jets were studied experimentally over a range of pulsing frequencies and duty-cycles, and at two jet-to-crossflow velocity ratios. The jet flow was completely modulated by operating a solenoid valve, resulting in the shut-off of the jet supply during a portion of the cycle. The planar laser-induced fluorescence technique was used to determine the penetration, dilution, and structural features of the pulsed jets. The molecular mixing rate was quantified through a chemical reaction between the jet and crossflow fluids. Short injection times resulted in the creation of vortex ring structures, whereas long injection times produced axially elongated turbulent puffs, similar to a segment of the steady jet. The latter case resulted in only modest enhancement of the jet penetration depth and dilution. Pulsed jets dominated by vortex rings had penetration depths significantly greater than a steady jet with the same velocity ratio. Penetration of up to about 5 times the steady jet value at 50 jet diameters downstream of the jet exit was observed with 200 ms pulses. Duty-cycle had a significant effect on the performance of pulsed jets with short injection times. Increasing the duty-cycle for a fixed injection time diminished the jet penetration. The dilution and mixing rates of pulsed jets with short injection time were also increased over the steady jet. The greatest reduction in the mixing rate was approximately 50% for well-separated pulses with short injection times.

Proceedings ArticleDOI
01 Oct 1999
TL;DR: It is found that jitter degrades perceptual quality nearly as much as does packet loss, and that perceptual quality degrades sharply even with low levels of jitter or packet loss as compared to perceptual quality for perfect video.
Abstract: Today’s powerful computers and networks present the opportunity for video across the Internet right to the desktop. However, Internet video often suffers from packet loss and jitter, degrading the user’s perceived quality of the video. Understanding the effects of delay, loss and jitter on media quality is critical for choosing delay buffer sizes and packet repair techniques. While the effects of packet loss on perceptual quality are well-understood, to date there have not been careful user studies measuring the impact of jitter on perceptual quality. The major contributions of this work are carefully designed experiments that measure and compare the impact of both jitter and packet loss on perceptual quality of packet video. We find that jitter degrades perceptual quality nearly as much as does packet loss, and that perceptual quality degrades sharply even with low levels of jitter or packet loss as compared to perceptual quality for perfect video.
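
For concreteness, one standard way to quantify the jitter studied here is the RTP interarrival-jitter estimator of RFC 1889 (current at the time of the paper), a running smoothed mean of packet-spacing distortion. The sketch below is ours, with a synthetic packet trace rather than the authors' experimental setup.

    import random

    def rtp_jitter(send_times, recv_times):
        """Smoothed interarrival jitter J += (|D| - J)/16, where D is the
        change in one-way transit time between consecutive packets."""
        j = 0.0
        for i in range(1, len(send_times)):
            d = (recv_times[i] - recv_times[i - 1]) \
                - (send_times[i] - send_times[i - 1])
            j += (abs(d) - j) / 16.0
        return j

    random.seed(7)
    send = [i * 33.3 for i in range(300)]                  # 30 fps video, in ms
    recv = [s + 80 + random.uniform(0, 15) for s in send]  # 80 ms path + jitter
    print(f"estimated interarrival jitter: {rtp_jitter(send, recv):.2f} ms")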

Journal ArticleDOI
TL;DR: A new skewed link model for analyzing binary response data with covariates is proposed using a Bayesian approach and data from a prostate cancer study is used to demonstrate the use of historical data in Bayesian model fitting and comparison of skewed link models.
Abstract: The logit, probit, and student t-links are widely used in modeling dichotomous quantal response data. Most of the commonly used link functions are symmetric, except the complementary log-log link. However, in some applications the overall fit can be significantly improved by the use of an asymmetric link. In this article we propose a new skewed link model for analyzing binary response data with covariates. Introducing a skewed distribution for the underlying latent variable, we develop a class of asymmetric link models for binary response data. Using a Bayesian approach, we first characterize the propriety of the posterior distributions using standard improper priors. We further propose informative priors using historical data from a similar previous study. We examine the proposed method through a large-scale simulation study and use data from a prostate cancer study to demonstrate the use of historical data in Bayesian model fitting and comparison of skewed link models.

Journal ArticleDOI
TL;DR: The results indicate it would be advantageous to use fully conscious animals to maximize BOLD‐based MRI signal in certain behavioral studies using MR spectrometers with modest field strengths.
Abstract: Changes in cortical activity during foot shock were assessed under conscious and propofol-anesthetized conditions using functional magnetic resonance imaging (fMRI). Increases in signal intensity were observed in the contralateral somatosensory cortex in response to electrical shock of the hindpaw under both conditions. These increases in cortical signal ranged from 6% to 26% while awake and from 1% to 6% under propofol anesthesia. In each of the six animals studied, the largest increase in blood oxygenation level-dependent (BOLD)-based signal intensity was observed during consciousness. In three of six animals, propofol anesthesia depressed signal intensity by as much as 10-fold, showing that the level of cortical activity during foot shock is dampened by anesthesia. These results indicate it would be advantageous to use fully conscious animals to maximize BOLD-based MRI signal in certain behavioral studies using MR spectrometers with modest field strengths (1.0-2.0 T).

Journal ArticleDOI
TL;DR: The data suggest that acute ADCav reversal does not always predict tissue recovery from ischemic injury and that temporary focal ischemia for even 8‐minute duration can cause delayed neuronal death that is more severe in the caudoputamen where the initial ADCav decline was greater than in the cortex.
Abstract: Twenty-two rats were subjected to 8, 15, 30, or 60 minutes of temporary middle cerebral artery occlusion (n = 5 per group) or sham occlusion (n = 2) in the magnetic resonance imaging unit. Diffusion-, perfusion-, and T2-weighted imaging were acquired before and during occlusion, and after reperfusion. A coregistration method was used to correlate the acute changes of the average apparent diffusion coefficient (ADCav) with the histology after 72 hours at the same topographic sites. The initially reduced ADCav values recovered completely in both the lateral caudoputamen and upper frontoparietal cortex in the 8-, 15-, and 30-minute groups, partially in the cortex, and not at all in the caudoputamen in the 60-minute group. The histology showed that the caudoputamen was either normal or had mild neuronal injury in the 8-minute group and invariably had some degree of neuronal death in the 15-, 30-, and 60-minute groups, whereas the cortex was either normal or had varying degrees of neuronal injury in all groups. No histological abnormalities were seen in the sham-operated rats. Our data suggest that acute ADCav reversal does not always predict tissue recovery from ischemic injury and that temporary focal ischemia for even 8-minute duration can cause delayed neuronal death that is more severe in the caudoputamen where the initial ADCav decline was greater than in the cortex.

Journal ArticleDOI
TL;DR: A method is proposed for estimating parameters in general parametric regression models with an arbitrary number of missing covariates, adapting a Monte Carlo version of the EM algorithm and modelling the marginal distribution of the covariates as a product of one-dimensional conditional distributions.
Abstract: We propose a method for estimating parameters for general parametric regression models with an arbitrary number of missing covariates. We allow any pattern of missing data and assume that the missing data mechanism is ignorable throughout. When the missing covariates are categorical, a useful technique for obtaining parameter estimates is the EM algorithm by the method of weights proposed in Ibrahim (1990, Journal of the American Statistical Association 85, 765–769). We extend this method to continuous or mixed categorical and continuous covariates, and to arbitrary parametric regression models, by adapting a Monte Carlo version of the EM algorithm as discussed by Wei and Tanner (1990, Journal of the American Statistical Association 85, 699–704). In addition, we discuss the Gibbs sampler for sampling from the conditional distribution of the missing covariates given the observed data and show that the appropriate complete conditionals are log-concave. The log-concavity property of the conditional distributions will facilitate a straightforward implementation of the Gibbs sampler via the adaptive rejection algorithm of Gilks and Wild (1992, Applied Statistics 41, 337–348). We assume the model for the response given the covariates is an arbitrary parametric regression model, such as a generalized linear model, a parametric survival model, or a nonlinear model. We model the marginal distribution of the covariates as a product of one-dimensional conditional distributions. This allows us a great deal of flexibility in modeling the distribution of the covariates and reduces the number of nuisance parameters that are introduced in the E-step. We present examples involving both simulated and real data.
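
The following is a toy sketch of the EM-by-the-method-of-weights idea in the paper's ignorable setting, for the simplest case we could write down: logistic regression with a single binary covariate that is missing for some subjects. Each incomplete case is expanded into weighted pseudo-cases, one per possible covariate value; with continuous covariates, this finite sum becomes the Monte Carlo average over Gibbs draws that the abstract describes. All data and names below are synthetic and ours.

    import numpy as np

    rng = np.random.default_rng(3)
    n, b0, b1, px = 2000, -1.0, 2.0, 0.4
    x = rng.binomial(1, px, n).astype(float)              # binary covariate
    y = rng.binomial(1, 1 / (1 + np.exp(-(b0 + b1 * x)))).astype(float)
    obs = rng.random(n) > 0.3                             # ~30% of x missing

    def fit_weighted_logit(X, y, w, iters=30):
        """Newton-Raphson for a logistic model with case weights w."""
        beta = np.zeros(X.shape[1])
        for _ in range(iters):
            mu = 1 / (1 + np.exp(-X @ beta))
            H = (X * (w * mu * (1 - mu))[:, None]).T @ X  # weighted Hessian
            beta += np.linalg.solve(H, X.T @ (w * (y - mu)))
        return beta

    # Augmented data: every missing-x subject appears twice, as x=0 and x=1.
    nmiss = int((~obs).sum())
    xs = np.concatenate([x[obs], np.zeros(nmiss), np.ones(nmiss)])
    ys = np.concatenate([y[obs], y[~obs], y[~obs]])
    X = np.column_stack([np.ones_like(xs), xs])
    beta, p = np.zeros(2), 0.5
    for _ in range(50):                                   # EM iterations
        # E-step: weight each pseudo-case by P(x | y, beta, p).
        mu0 = 1 / (1 + np.exp(-beta[0]))
        mu1 = 1 / (1 + np.exp(-(beta[0] + beta[1])))
        w0 = np.where(y[~obs] == 1, mu0, 1 - mu0) * (1 - p)
        w1 = np.where(y[~obs] == 1, mu1, 1 - mu1) * p
        ws = np.concatenate([np.ones(int(obs.sum())),
                             w0 / (w0 + w1), w1 / (w0 + w1)])
        # M-step: weighted logistic fit, then update the covariate model.
        beta = fit_weighted_logit(X, ys, ws)
        p = np.average(xs, weights=ws)
    print("EM estimates of (b0, b1, px):", beta.round(2), round(p, 2))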

Journal ArticleDOI
TL;DR: This article revisited a conceptual definition of "mental models of dynamic systems" proposed for use in system dynamics research by Doyle and Ford and commented on by Lane, with particular attention to the history and appropriate use of the term "cognitive map".
Abstract: This article revisits a conceptual definition of “mental models of dynamic systems” proposed for use in system dynamics research by Doyle and Ford and commented on by Lane. Lane's proposed amendments to the definition are discussed in turn, with particular attention to the history and appropriate use of the term “cognitive map.” A revised definition informed by Lane's commentary is offered. Copyright © 1999 John Wiley & Sons, Ltd.

Journal ArticleDOI
TL;DR: In this article, the authors investigated the hydrothermal crystallization of zeolite MCM-22 in the Na2O-Al2O3-SiO2-H2O-hexamethyleneimine (HMI) system under rotating and static synthesis conditions.

Journal ArticleDOI
TL;DR: This paper proposes a framework for selecting business process modeling methods based on modeling objectives and uses an expense claim process as an example to illustrate the application of the selection procedure.
Abstract: The techniques for representing and analyzing business processes are referred to as business process modeling. Many business process modeling methods and their associated tools have been used to capture the characteristics of business processes. However, most methods view business processes from different perspectives and have different features and capabilities. Thus, an important research question is how process designers should select appropriate modeling methods for their business process re-engineering (BPR) initiatives. In this paper, we propose a framework for selecting business process modeling methods based on modeling objectives. This framework can serve as the basis for evaluating modeling methods and generating selection procedures. A general selection procedure is also described. We use an expense claim process as an example to illustrate the application of the selection procedure.

Journal ArticleDOI
TL;DR: This work proposes an informative prior distribution for variable selection and proposes novel methods for computing the marginal distribution of the data for the logistic regression model.
Abstract: Summary. Bayesian selection of variables is often difficult to carry out because of the challenge in specifying prior distributions for the regression parameters for all possible models, specifying a prior distribution on the model space and computations. We address these three issues for the logistic regression model. For the first, we propose an informative prior distribution for variable selection. Several theoretical and computational properties of the prior are derived and illustrated with several examples. For the second, we propose a method for specifying an informative prior on the model space, and for the third we propose novel methods for computing the marginal distribution of the data. The new computational algorithms only require Gibbs samples from the full model to facilitate the computation of the prior and posterior model probabilities for all possible models. Several properties of the algorithms are also derived. The prior specification for the first challenge focuses on the observables in that the elicitation is based on a prior prediction y0 for the response vector and a quantity a0 quantifying the uncertainty in y0. Then, y0 and a0 are used to specify a prior for the regression coefficients semi-automatically. Examples using real data are given to demonstrate the methodology.

Journal ArticleDOI
TL;DR: In this article, the 3D Navier-Stokes equations were used to simulate the fluid flow and heat transfer in a fixed-bed reactor with 10 solid spheres in a tube with a tube-to-particle ratio of 2.43.

Journal ArticleDOI
TL;DR: In this article, a 3D thin-wall model with flow-structure interactions was introduced and solved using ADINA to investigate the wall deformation and flow properties of blood flow in carotid arteries with symmetric and asymmetric stenoses.

Journal ArticleDOI
TL;DR: A novel class of arithmetic architectures for Galois fields GF(2^k) is described, capable of exploring the time-space trade-off paradigm in a flexible manner, and two different approaches to squaring are provided.
Abstract: The article describes a novel class of arithmetic architectures for Galois fields GF(2^k). The main applications of the architecture are public key systems which are based on the discrete logarithm problem for elliptic curves. The architectures use a representation of the field GF(2^k) as GF((2^n)^m), where k = n·m. The approach explores bit-parallel arithmetic in the subfield GF(2^n) and serial processing for the extension field arithmetic. This mixed parallel-serial (hybrid) approach can lead to fast implementations. As the core module, a hybrid multiplier is introduced and several optimizations are discussed. We provide two different approaches to squaring. We develop exact expressions for the complexity of parallel squarers in composite fields, which can have a surprisingly low complexity. The hybrid architectures are capable of exploring the time-space trade-off paradigm in a flexible manner. In particular, the number of clock cycles for one field multiplication, which is the atomic operation in most public key schemes, can be reduced by a factor of n compared to other known realizations. The acceleration is achieved at the cost of an increased computational complexity. We describe a proof-of-concept implementation of an ASIC for multiplication and squaring in GF((2^n)^m), m variable.
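
Below is a compact software model of the composite-field representation (ours, and far removed from the paper's hybrid hardware): elements of GF((2^4)^2) are pairs of GF(2^4) coefficients, multiplied with bit-level arithmetic in the subfield and a polynomial reduction y^2 = y + lambda in the extension. The irreducible lambda is found by search rather than taken from the paper.

    N, POLY = 4, 0b10011                        # GF(2^4) modulo x^4 + x + 1

    def gmul(a, b):
        """Carry-less multiply in GF(2^N), reduced modulo POLY."""
        r = 0
        for i in range(N):
            if (b >> i) & 1:
                r ^= a << i
        for i in range(2 * N - 2, N - 1, -1):   # clear terms of degree >= N
            if (r >> i) & 1:
                r ^= POLY << (i - N)
        return r

    # y^2 + y + lam is irreducible over GF(2^4) iff lam is not of the form
    # r^2 + r for any subfield element r, so search for such a lam.
    roots = {gmul(r, r) ^ r for r in range(1 << N)}
    LAM = next(l for l in range(1, 1 << N) if l not in roots)

    def cmul(a, b):
        """(a1*y + a0)(b1*y + b0) mod y^2 + y + LAM, using y^2 = y + LAM:
        result = (a1*b0 + a0*b1 + a1*b1)*y + (a0*b0 + LAM*a1*b1)."""
        (a1, a0), (b1, b0) = a, b
        t = gmul(a1, b1)
        return (gmul(a1, b0) ^ gmul(a0, b1) ^ t, gmul(a0, b0) ^ gmul(LAM, t))

    a, b = (0x3, 0x9), (0xA, 0x5)
    assert cmul(a, b) == cmul(b, a) and cmul(a, (0, 1)) == a  # (0, 1) acts as 1
    print("lambda =", LAM, " a*b =", cmul(a, b))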

Journal ArticleDOI
TL;DR: In this article, an overview of green supply chain management is presented, which includes a categorization of research and practice in purchasing, manufacturing, outbound, and reverse logistics dimensions.
Abstract: This paper provides an overview of environmental (green) supply chain management. The overview includes a categorization of research and practice in purchasing, manufacturing, outbound, and reverse logistics dimensions.

Journal ArticleDOI
TL;DR: Some Bayesian discretized semiparametric models, incorporating proportional and nonproportional hazards structures, along with associated statistical analyses and tools for model selection using sampling-based methods are presented.
Abstract: Summary. Interval-censored data occur in survival analysis when the survival time of each patient is only known to be within an interval and these censoring intervals differ from patient to patient. For such data, we present some Bayesian discretized semiparametric models, incorporating proportional and nonproportional hazards structures, along with associated statistical analyses and tools for model selection using sampling-based methods. The scope of these methodologies is illustrated through a reanalysis of a breast cancer data set (Finkelstein, 1986, Biometrics 42, 845–854) to test whether the effect of a covariate on survival changes over time.

Proceedings ArticleDOI
24 Oct 1999
TL;DR: The concept of a structure-based brush, which can be used to perform selection in hierarchically structured data sets, is introduced and its usefulness is validated using two distinct hierarchical visualization techniques, namely hierarchical parallel coordinates and tree-maps.
Abstract: Interactive selection is a critical component in exploratory visualization, allowing users to isolate subsets of the displayed information for highlighting, deleting, analysis, or focussed investigation. Brushing, a popular method for implementing the selection process, has traditionally been performed in either screen space or data space. We introduce the concept of a structure-based brush, which can be used to perform selection in hierarchically structured data sets. Our structure-based brush allows users to navigate hierarchies by specifying focal extents and level-of-detail on a visual representation of the structure. Proximity-based coloring, which maps similar colors to data that are closely related within the structure, helps convey both structural relationships and anomalies. We describe the design and implementation of our structure-based brushing tool. We also validate its usefulness using two distinct hierarchical visualization techniques, namely hierarchical parallel coordinates and tree-maps.

Journal ArticleDOI
17 May 1999
TL;DR: Results from the work indicate that there is potential to reuse more cached resources than is currently being realized due to inaccurate and nonexistent cache directives, and that separating out the dynamic portions of a page into their own resources allows relatively static portions to be cached.
Abstract: This work focuses on characterizing information about Web resources and server responses that is relevant to Web caching. The approach is to study a set of URLs at a variety of sites and gather statistics about the rate and nature of changes compared with the resource type. In addition, we gather response header information reported by the servers with each retrieved resource. Results from the work indicate that there is potential to reuse more cached resources than is currently being realized due to inaccurate and nonexistent cache directives. In terms of implications for caching, the relationships between resources used to compose a page must be considered. Embedded images are often reused, even in pages that change frequently. This result both points to the need to cache such images and to discard them when they are no longer included as part of any page. Finally, while the results show that HTML resources frequently change, these changes can be in a predictable and localized manner. Separating out the dynamic portions of a page into their own resources allows relatively static portions to be cached, while retrieval of the dynamic resources can trigger retrieval of new resources along with any invalidation of already cached resources.
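
In the same spirit as the study above, the cache-relevant response headers are easy to inspect directly. The probe below is ours (it needs network access, and any URL can be substituted): it prints the headers that govern reuse and then attempts a conditional revalidation, where a 304 response means a cached copy could have been served.

    import urllib.error
    import urllib.request

    def probe(url):
        req = urllib.request.Request(url, method="HEAD")
        with urllib.request.urlopen(req) as resp:
            h = resp.headers
            for name in ("Cache-Control", "Expires", "Last-Modified", "ETag"):
                print(f"{name:14}: {h.get(name, '(absent)')}")
            lm = h.get("Last-Modified")
        if lm:  # revalidate: a 304 means the cached copy was still fresh
            req = urllib.request.Request(url, headers={"If-Modified-Since": lm})
            try:
                with urllib.request.urlopen(req) as resp:
                    print("revalidation  :", resp.status)
            except urllib.error.HTTPError as e:
                print("revalidation  :", e.code)  # 304 Not Modified lands here

    probe("https://example.com/")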

Journal ArticleDOI
TL;DR: In this paper, a semi-parametric approach for variable selection for proportional hazards regression models with right censored data is proposed. But the authors focus on the observables rather than the parameters.
Abstract: The authors consider the problem of Bayesian variable selection for proportional hazards regression models with right censored data. They propose a semi-parametric approach in which a nonparametric prior is specified for the baseline hazard rate and a fully parametric prior is specified for the regression coefficients. For the baseline hazard, they use a discrete gamma process prior, and for the regression coefficients and the model space, they propose a semi-automatic parametric informative prior specification that focuses on the observables rather than the parameters. To implement the methodology, they propose a Markov chain Monte Carlo method to compute the posterior model probabilities. Examples using simulated and real data are given to demonstrate the methodology.

Journal ArticleDOI
TL;DR: In this article, the electrical behavior of Sn-doped nanocrystalline and microcrystalline TiO2 has been studied by DC conductivity measurements and impedance spectroscopy.