
Showing papers by "Worcester Polytechnic Institute published in 2000"


Journal ArticleDOI
TL;DR: It is shown that UD's have many desirable properties for a wide variety of applications and the global optimization algorithm, threshold accepting, is used to generate UD's with low discrepancy.
Abstract: A uniform design (UD) seeks design points that are uniformly scattered on the domain. It has been popular since 1980. A survey of UD is given in the first portion: The fundamental idea and construction method are presented and discussed and examples are given for illustration. It is shown that UD's have many desirable properties for a wide variety of applications. Furthermore, we use the global optimization algorithm, threshold accepting, to generate UD's with low discrepancy. The relationship between uniformity and orthogonality is investigated. It turns out that most UD's obtained here are indeed orthogonal.
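The abstract names threshold accepting but gives no implementation details. Below is a minimal sketch of a threshold-accepting search for a low-discrepancy U-type design, using the centered L2 discrepancy as the uniformity measure; the criterion choice, neighborhood move, and threshold schedule are illustrative assumptions, not the authors' algorithm.

```python
# Minimal sketch (not the authors' implementation): threshold accepting
# search for a U-type design with low centered L2 discrepancy.
import numpy as np

def centered_l2_discrepancy(X):
    """Squared centered L2 discrepancy of design points X in [0, 1]^s."""
    n, s = X.shape
    d = np.abs(X - 0.5)
    term1 = (13.0 / 12.0) ** s
    term2 = (2.0 / n) * np.sum(np.prod(1.0 + 0.5 * d - 0.5 * d ** 2, axis=1))
    prod = np.ones((n, n))                       # pairwise product term
    for k in range(s):
        xi = X[:, k][:, None]
        xj = X[:, k][None, :]
        prod *= (1.0 + 0.5 * np.abs(xi - 0.5) + 0.5 * np.abs(xj - 0.5)
                 - 0.5 * np.abs(xi - xj))
    term3 = np.sum(prod) / n ** 2
    return term1 - term2 + term3

def threshold_accepting(n, s, iters=20000, seed=0):
    """Search over U-type designs: each column is a permutation of
    (2i - 1)/(2n), i = 1..n.  A move swaps two entries in one column."""
    rng = np.random.default_rng(seed)
    levels = (2 * np.arange(1, n + 1) - 1) / (2.0 * n)
    X = np.column_stack([rng.permutation(levels) for _ in range(s)])
    best, best_val = X.copy(), centered_l2_discrepancy(X)
    cur_val = best_val
    thresholds = np.linspace(0.05 * best_val, 0.0, iters)   # shrinking tolerance
    for t in thresholds:
        Y = X.copy()
        col = rng.integers(s)
        i, j = rng.choice(n, size=2, replace=False)
        Y[[i, j], col] = Y[[j, i], col]          # swap two runs in one factor
        val = centered_l2_discrepancy(Y)
        if val <= cur_val + t:                   # accept if not much worse
            X, cur_val = Y, val
            if val < best_val:
                best, best_val = Y.copy(), val
    return best, best_val

if __name__ == "__main__":
    design, cd2 = threshold_accepting(n=12, s=4)
    print("squared centered L2 discrepancy:", cd2)
```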

825 citations


Journal ArticleDOI
TL;DR: In this paper, the authors propose a general class of prior distributions for arbitrary regression models, called power prior distributions, which are based on the idea of raising the likelihood function of the historical data to the power a0, where 0 < a0 < 1.
Abstract: We propose a general class of prior distributions for arbitrary regression models. We discuss parametric and semiparametric models. The prior specification for the regression coefficients focuses on observable quantities in that the elicitation is based on the availability of historical data D0 and a scalar quantity a0 quantifying the uncertainty in D0. Then D0 and a0 are used to specify a prior for the regression coefficients in a semiautomatic fashion. The most natural specification of D0 arises when the raw data from a similar previous study are available. The availability of historical data is quite common in clinical trials, carcinogenicity studies, and environmental studies, where large data bases are available from similar previous studies. Although the methodology we present here is quite general, we will focus only on using historical data from similar previous studies to construct the prior distributions. The prior distributions are based on the idea of raising the likelihood function of the historical data to the power a0, where 0 < a0 < 1. We call such prior distributions power prior distributions. We examine the power prior for four commonly used classes of regression models. These include generalized linear models, generalized linear mixed models, semiparametric proportional hazards models, and cure rate models for survival data. For these classes of models, we discuss the construction of the power prior, prior elicitation issues, propriety conditions, model selection, and several other properties. For each class of models, we present real data sets to demonstrate the proposed methodology.
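For orientation, the construction described in the abstract can be written in one line; the notation (historical data D0, discount scalar a0, initial prior pi0) follows the abstract, and the posterior shown is the standard way a power prior is combined with the current data D.

```latex
% Power prior built from historical data D_0 (initial prior \pi_0, discount a_0):
\pi(\theta \mid D_0, a_0) \;\propto\; L(\theta \mid D_0)^{a_0}\,\pi_0(\theta),
\qquad 0 < a_0 < 1 .
% Combining with the likelihood of the current data D gives the posterior
\pi(\theta \mid D, D_0, a_0) \;\propto\; L(\theta \mid D)\,
  L(\theta \mid D_0)^{a_0}\,\pi_0(\theta).
```

Small a0 discounts the historical data heavily; a0 near 1 weights it almost as fully as the current data.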

628 citations


Journal ArticleDOI
TL;DR: An overview of issues related to handoff with particular emphasis on hybrid mobile data networks is presented and five architectures for the example hybrid network, based on emulation of GPRS entities within the WLAN, mobile IP, a virtual access point, and a mobility gateway, are described and compared.
Abstract: With the emergence of a variety of mobile data services with variable coverage, bandwidth, and handoff strategies, and the need for mobile terminals to roam among these networks, handoff in hybrid data networks has attracted tremendous attention. This article presents an overview of issues related to handoff with particular emphasis on hybrid mobile data networks. Issues are logically divided into architectural and handoff decision time algorithms. The handoff architectures in high-speed local coverage IEEE 802.11 wireless LANs, and low-speed wide area coverage CDPD and GPRS mobile data networks are described and compared. A survey of traditional algorithms and an example of an advanced algorithm using neural networks for HO (handoff) decision time in homogeneous networks are presented. The HO architectural issues related to hybrid networks are discussed through an example of a hybrid network that employs GPRS and IEEE 802.11. Five architectures for the example hybrid network, based on emulation of GPRS entities within the WLAN, mobile IP, a virtual access point, and a mobility gateway (proxy), are described and compared. The mobility gateway and mobile IP approaches are selected for more detailed discussion. The differences in applying a complex algorithm for HO decision time in a homogeneous and a hybrid network are shown through an example.

569 citations


Journal ArticleDOI
TL;DR: In this paper, a wavelet-based approach is proposed for structural damage detection and health monitoring, applied to simulation data generated from a simple structural model subjected to a harmonic excitation; the model consists of multiple breakable springs that may suffer irreversible damage when the response exceeds a threshold value or when the accumulated number of cycles of motion exceeds their fatigue life.
Abstract: A wavelet-based approach is proposed for structural damage detection and health monitoring. Characteristics of representative vibration signals under the wavelet transformation are examined. The methodology is then applied to simulation data generated from a simple structural model subjected to a harmonic excitation. The model consists of multiple breakable springs, some of which may suffer irreversible damage when the response exceeds a threshold value or the number of cycles of motion is accumulated beyond their fatigue life. In cases of either abrupt or accumulative damages, occurrence of damage and the moment when it occurs can be clearly determined in the details of the wavelet decomposition of these data. Similar results are observed for the real acceleration data of the seismic response recorded on the roof of a building during the 1971 San Fernando earthquake. Effects of noise intensity and damage severity are investigated and presented by a detectability map. Results show the great promise of the wavelet approach for damage detection and structural health monitoring.
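As an illustration of how abrupt damage shows up in the details of a wavelet decomposition (a toy sketch, not the paper's code: the PyWavelets library, the damage model, and the detection threshold are all assumptions):

```python
# Illustrative sketch: detect an abrupt change in a vibration signal from
# a spike in the fine-scale wavelet detail coefficients (PyWavelets assumed).
import numpy as np
import pywt

fs = 1000.0                                   # sampling rate, Hz
t = np.arange(0, 10, 1.0 / fs)
x = np.sin(2 * np.pi * 5 * t)                 # harmonic response
x[t >= 6.03] *= 0.7                           # sudden stiffness/amplitude loss (toy abrupt damage)
x += 0.01 * np.random.default_rng(1).normal(size=t.size)   # measurement noise

# Single-level DWT: the detail coefficients capture the fine-scale content.
approx, detail = pywt.dwt(x, "db4")
spike = np.abs(detail)
threshold = spike.mean() + 6 * spike.std()    # ad hoc threshold (assumption)
hits = np.where(spike > threshold)[0]
if hits.size:
    # each detail coefficient spans roughly 2 samples of the original signal
    print("damage detected near t = %.3f s" % (2 * hits[0] / fs))
```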

474 citations


Journal ArticleDOI
TL;DR: It is shown that the top-ranked companies by revenue do not necessarily have top-ranked performance viewed as being multidimensional, and the reliability of the best-practice frontier is examined.

411 citations


Journal ArticleDOI
TL;DR: Perceived critical mass had the largest total effect (direct and indirect) on intention to use groupware, and the other relationships postulated in the model were also found to be significant.
Abstract: Groupware technologies have become an important part of the business computing infrastructure in many organizations, but many groupware applications, especially those requiring significant collabor...

345 citations


Journal ArticleDOI
TL;DR: Culture of chondrocyte seeded matrices offers the possibility of rapid in vitro expansion of donor cartilage for the repair of structural defects, tracheal injury, and vascularized tissue damage.
Abstract: As a result of the low yield of cartilage from primary patient harvests and a high demand for autologous cartilage for reconstructive surgery and structural repair, primary explant cartilage must be augmented by tissue engineering techniques. In this study, chondrocytes seeded on PLLA/PGA scaffolds in static culture and a direct perfusion bioreactor were biochemically and histologically analyzed to determine the effects of fluid flow and media pH on matrix assembly. Media pH in the bioreactor decreased gradually from 7.4 to 6.96 over 2 weeks, compared to a more rapid decrease from 7.4 to 6.58 in static culture over 3 days. Seeded scaffolds subjected to 1 µm/s flow demonstrated a 118% increase (p < 0.05) in DNA content, a 184% increase (p < 0.05) in GAG content, and a 155% increase (p < 0.05) in hydroxyproline content compared to static culture. Distinct differences were noted in tissue morphology, including more intense staining for proteoglycans by safranin-O and alignment of cells in the direction of media flow. Culture of chondrocyte seeded matrices thus offers the possibility of rapid in vitro expansion of donor cartilage for the repair of structural defects, tracheal injury, and vascularized tissue damage.

275 citations


Journal ArticleDOI
TL;DR: In this article, the authors note that understanding and modeling of the transport of ionic species through polymer electrolyte ion-exchange membranes are not yet adequately developed, especially for proton transport, which is the focus of this paper.
Abstract: The proton-exchange membrane (PEM) fuel cell has lately emerged as a highly promising power source for a wide range of applications. The solid polymer electrolyte utilized in these fuel cells is typically a polyperfluorosulfonic acid (PFSA) membrane ( e.g., Nafion ® , manufactured by DuPont), that provides excellent performance in the presence of water by virtue of its strong acidity, low permeability of hydrogen and oxygen, and good electrochemical stability in the presence of electrocatalysts. This has allowed the development of low-temperature PEM fuel cells with impressive current densities. These membranes have also been widely utilized in the chlor-alkali industry. However, an understanding and modeling of the transport of ionic species through these ion-exchange membranes is not yet adequately developed, especially for proton transport, which is the focus of this paper. There are numerous studies on the nanostructural aspects of the

255 citations


Journal ArticleDOI
TL;DR: Observations indicate that PTEN acts at multiple sites in the cell, regulating the transition of differentiating neuroblasts to postmitotic neurons, and is not required for astrocytic differentiation.
Abstract: Mutations of phosphatase and tensin homolog deleted on chromosome 10 (PTEN), a protein and lipid phosphatase, have been associated with gliomas, macrocephaly, and mental deficiencies. We have assessed PTEN's role in the nervous system and find that PTEN is expressed in mouse brain late in development, starting at approximately postnatal day 0. In adult brain, PTEN is preferentially expressed in neurons and is especially evident in Purkinje neurons, olfactory mitral neurons, and large pyramidal neurons. To analyze the function of PTEN in neuronal differentiation, we used two well established model systems—pheochromocytoma cells and cultured CNS stem cells. PTEN is expressed during neurotrophin-induced differentiation and is detected in both the nucleus and cytoplasm. Suppression of PTEN levels with antisense oligonucleotides does not block initiation of neuronal differentiation. Instead, PTEN antisense leads to death of the resulting, immature neurons, probably during neurite extension. In contrast, PTEN is not required for astrocytic differentiation. These observations indicate that PTEN acts at multiple sites in the cell, regulating the transition of differentiating neuroblasts to postmitotic neurons.

228 citations


Book ChapterDOI
17 Aug 2000
TL;DR: The results show that implementations of this architecture executing the projective coordinates version of the Montgomery scalar multiplication algorithm can compute elliptic curve scalar multiplications with arbitrary points in 0.21 msec in the field GF(2^167).
Abstract: This work proposes a processor architecture for elliptic curve cryptosystems over fields GF(2^m). This is a scalable architecture in terms of area and speed that exploits the abilities of reconfigurable hardware to deliver optimized circuitry for different elliptic curves and finite fields. The main features of this architecture are the use of an optimized bit-parallel squarer, a digit-serial multiplier, and two programmable processors. Through reconfiguration, the squarer and the multiplier architectures can be optimized for any field order or field polynomial. The multiplier performance can also be scaled according to the system's needs. Our results show that implementations of this architecture executing the projective coordinates version of the Montgomery scalar multiplication algorithm can compute elliptic curve scalar multiplications with arbitrary points in 0.21 msec in the field GF(2^167), a result that is at least 19 times faster than documented hardware implementations and at least 37 times faster than documented software implementations.
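The abstract cites the projective-coordinate version of the Montgomery scalar multiplication algorithm; purely as a software illustration of the ladder schedule it relies on (not the paper's hardware design), the sketch below abstracts the curve arithmetic behind assumed `add` and `double` callables and checks the ladder with toy integer arithmetic.

```python
# Illustrative sketch of the Montgomery-ladder structure used for scalar
# multiplication k*P.  The group operations `add` and `double` are assumed
# to be supplied (e.g., projective-coordinate EC arithmetic over GF(2^m));
# this is not the paper's hardware implementation.
def montgomery_ladder(k, P, add, double, identity):
    """Compute k*P, processing the bits of k from most to least significant.

    Both ladder registers are updated in every iteration, which keeps the
    operation schedule regular (and hardware/side-channel friendly).
    """
    R0, R1 = identity, P                   # invariant: R1 = R0 + P
    for bit in bin(k)[2:]:                 # MSB first
        if bit == "0":
            R1 = add(R0, R1)               # R1 <- R0 + R1
            R0 = double(R0)                # R0 <- 2*R0
        else:
            R0 = add(R0, R1)               # R0 <- R0 + R1
            R1 = double(R1)                # R1 <- 2*R1
    return R0

if __name__ == "__main__":
    # toy check with integer addition standing in for the group law
    add = lambda a, b: a + b
    double = lambda a: 2 * a
    assert montgomery_ladder(167, 3, add, double, 0) == 167 * 3
    print("ladder OK")
```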

208 citations


Journal ArticleDOI
TL;DR: A review of conventional 2D as well as 3D roughness parameters, with particular emphasis on recent international standards and developments, and a discussion on the relevance of the different parameters and quantification methods in terms of functional correlations.

Journal ArticleDOI
TL;DR: In contrast to the on-demand approach to information integration, the approach of tailored information repository construction, commonly referred to as data warehousing, is characterized by the following properties:
Abstract: In recent years, the number of digital information storage and retrieval systems has increased immensely. These information sources are generally interconnected via some network, and hence the task of integrating data from different sources to serve it up to users is an increasingly important one [10]. Applications that could benefit from this wealth of digital information are thus experiencing a pressing need for suitable integration tools that allow them to make effective use of such distributed and diverse data sets. In contrast to the on-demand approach to information integration, the approach of tailored information repository construction, commonly referred to as data warehousing, is characterized by the following properties:

Journal ArticleDOI
TL;DR: In this paper, a necessary and sufficient condition for the presence of (input) congestion was developed and a new measure of congestion was generated to provide the basis for a new unified approach to this and other topics in data envelopment analysis.
Abstract: This paper develops a necessary and sufficient condition for the presence of (input) congestion. Relationships between the two congestion methods presently available are discussed. The equivalence between Fare et al. [12] , [13] and Brockett et al. [2] hold only when the law of variable proportions is applicable. It is shown that the work of Brockett et al. [2] improves upon the work of Fare et al. [12] , [13] in that it not only (1) detects congestion but also (2) determines the amount of congestion and, simultaneously, (3) identifies factors responsible for congestion and distinguishes congestion amounts from other components of inefficiency. These amounts are all obtainable from non-zero slacks in a slightly altered version of the additive model — which we here further extend and modify to obtain additional details. We also generate a new measure of congestion to provide the basis for a new unified approach to this and other topics in data envelopment analysis (DEA).
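The non-zero slacks mentioned in the abstract come from an additive-type DEA model; a minimal sketch of such a model is given below (variable returns to scale, the toy data, and the scipy LP solver are all assumptions, and this is not the authors' modified formulation).

```python
# Minimal sketch of the additive DEA model (variable returns to scale),
# whose non-zero slacks are the quantities discussed in the abstract.
# Data and solver choice are illustrative, not from the paper.
import numpy as np
from scipy.optimize import linprog

def additive_dea(X, Y, o):
    """X: m x n inputs, Y: r x n outputs, o: index of the evaluated DMU.
    Maximize the sum of slacks; returns (lambda, input slacks, output slacks)."""
    m, n = X.shape
    r = Y.shape[0]
    # decision vector z = [lambda (n), s_minus (m), s_plus (r)]
    c = np.concatenate([np.zeros(n), -np.ones(m), -np.ones(r)])  # maximize slack sum
    A_eq = np.block([
        [X, np.eye(m), np.zeros((m, r))],          # X lam + s- = x_o
        [Y, np.zeros((r, m)), -np.eye(r)],         # Y lam - s+ = y_o
        [np.ones((1, n)), np.zeros((1, m + r))],   # sum(lam) = 1 (VRS)
    ])
    b_eq = np.concatenate([X[:, o], Y[:, o], [1.0]])
    res = linprog(c, A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, None)] * (n + m + r), method="highs")
    z = res.x
    return z[:n], z[n:n + m], z[n + m:]

if __name__ == "__main__":
    X = np.array([[2.0, 3.0, 6.0, 9.0, 8.0]])   # one input, five DMUs (toy data)
    Y = np.array([[1.0, 4.0, 6.0, 7.0, 3.0]])   # one output
    lam, s_minus, s_plus = additive_dea(X, Y, o=4)
    # expected: input slack 5, output slack 1 (projection onto the DMU with
    # input 3 and output 4), so the last DMU is inefficient with non-zero slacks
    print("input slacks:", s_minus, "output slacks:", s_plus)
```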

Journal ArticleDOI
TL;DR: An important aspect of MRI and MRS studies is that techniques and findings are easily translated between systems, so pre-clinical studies using cultured cells or experimental animals connect closely to potential clinical utility.

Journal ArticleDOI
01 Apr 2000-Urology
TL;DR: Two-year PSA failure rates derived from the Cox regression model and bootstrap estimates of the 95% confidence intervals are presented in nomogram format stratified by the preoperative PSA, percentage of positive prostate biopsies, erMRI T-stage, and the biopsy Gleason score.

Journal ArticleDOI
TL;DR: In this article, the potential application of various hydrophobic molecular sieves for the sorption of four model chlorinated volatile organic compounds (CVOCs) from dilute liquid water streams was investigated.

Journal ArticleDOI
TL;DR: Data suggest that a secondary ADC reduction occurs as early as 2.5 hours after reperfusion, evolves in a slow fashion, and is associated with neuronal injury, and renormalization and secondary decline in ADC are not associated with neurological recovery and worsening, respectively.
Abstract: This study was designed to characterize the initial and secondary changes of the apparent diffusion coefficient (ADC) of water with high temporal resolution measurements of ADC values and to correlate ADC changes with functional outcomes. Fourteen rats underwent 30 minutes of temporary middle cerebral artery occlusion (MCAO). Diffusion-, perfusion-, and T2-weighted imaging was performed during MCAO and every 30 minutes for a total of 12 hours after reperfusion (n = 6). Neurological outcomes were evaluated during MCAO, every 30 minutes for a total of 6 hours and at 24 hours after reperfusion (n = 8). The decreased cerebral blood flow during MCAO returned to normal after reperfusion and remained unchanged thereafter. The decreased ADC values during occlusion completely recovered at 1 hour after reperfusion. The renormalized ADC values started to decrease secondarily at 2.5 hours, accompanied by a delayed increase in T2 values. The ADC-defined secondary lesion grew over time and was 52% of the ADC-defined initial lesion at 12 hours. Histological evaluation demonstrated neuronal damage in the regions of secondary ADC decline. Complete resolution of neurological deficits was seen in 1 rat at 1 hour and in 6 rats between 2.5 and 6 hours after reperfusion; no secondary neurological deficits were observed at 24 hours. These data suggest that (1) a secondary ADC reduction occurs as early as 2.5 hours after reperfusion, evolves in a slow fashion, and is associated with neuronal injury; and (2) renormalization and secondary decline in ADC are not associated with neurological recovery and worsening, respectively.

Journal ArticleDOI
TL;DR: A perovskite material (BaCe0.8Gd0.2O3) in powder form, with both electronic and ionic conductivity, was synthesized by the ethylene glycol method as discussed by the authors.

Journal ArticleDOI
TL;DR: In this paper, an in-line digital holographic sensor (DHS) for monitoring and characterizing marine particulates is presented; the system uses a small 10-mW diode laser to project a collimated beam through the water column and onto a lensless CCD array.
Abstract: We report an in-line digital holographic sensor (DHS) for monitoring and characterizing marine particulates. This system images individual particles over a deep depth of field (>25 cm) with a resolution of 5 µm. The DHS projects a collimated beam through the water column and onto a lensless CCD array. Some light is diffracted by particulates and forms an object beam; the undeflected remainder constitutes the reference beam. The two beams combine at the CCD array and create an in-line hologram, which is then numerically reconstructed. The DHS eliminates many problems traditionally associated with holography. The CCD recording material considerably lowers the exposure time and eliminates most vibration problems. The laser power needs are low; the DHS uses a small 10-mW diode laser. Rapid numerical reconstruction eliminates photographic processing and optical reconstruction. We successfully operated the DHS underwater on a remotely operated vehicle; our test results include tracing a single particle from one hologram to the next, thus deriving a velocity vector for marine mass transport. We outline our digital holographic reconstruction procedure, and present our graphical user interface and user software tools. The DHS is particularly useful for providing in situ ground-truth measurements for environmental remote sensing. © 2000 Society of Photo-Optical Instrumentation Engineers.
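The abstract refers to rapid numerical reconstruction without giving the algorithm; a common choice for in-line holograms is angular-spectrum back-propagation of the recorded intensity, sketched below with placeholder wavelength, pixel pitch, and refocus distance (not necessarily the authors' procedure).

```python
# Illustrative sketch: numerically refocus an in-line hologram by
# angular-spectrum propagation.  Parameter values are placeholders,
# not the values used in the paper.
import numpy as np

def angular_spectrum_propagate(field, wavelength, pixel_pitch, z):
    """Propagate a complex field by distance z (metres) via the angular spectrum."""
    ny, nx = field.shape
    fx = np.fft.fftfreq(nx, d=pixel_pitch)
    fy = np.fft.fftfreq(ny, d=pixel_pitch)
    FX, FY = np.meshgrid(fx, fy)
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    kz = 2 * np.pi / wavelength * np.sqrt(np.maximum(arg, 0.0))
    H = np.exp(1j * kz * z) * (arg > 0)          # drop evanescent components
    return np.fft.ifft2(np.fft.fft2(field) * H)

def reconstruct(hologram, wavelength=658e-9, pixel_pitch=9e-6, z=0.10):
    """Back-propagate the recorded intensity to a particle plane at distance z."""
    field = np.sqrt(hologram.astype(float))      # treat sqrt(I) as the field amplitude
    return np.abs(angular_spectrum_propagate(field, wavelength, pixel_pitch, -z))

if __name__ == "__main__":
    holo = np.random.default_rng(0).random((512, 512))   # stand-in for a recorded frame
    image = reconstruct(holo)
    print(image.shape, image.dtype)
```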

Journal ArticleDOI
01 Jun 2000
TL;DR: The results show that the HTTP/1.1 protocol, particularly with pipelining, is indeed an improvement over existing practice, but that servers serving a small number of objects or closing a persistent connection without explicit notification can reduce or eliminate any performance improvement.
Abstract: Web performance impacts the popularity of a particular Web site or service as well as the load on the network, but there have been no publicly available end-to-end measurements that have focused on a large number of popular Web servers examining the components of delay or the effectiveness of the recent changes to the HTTP protocol. In this paper we report on an extensive study carried out from many client sites geographically distributed around the world to a collection of over 700 servers to which a majority of Web traffic is directed. Our results show that the HTTP/1.1 protocol, particularly with pipelining, is indeed an improvement over existing practice, but that servers serving a small number of objects or closing a persistent connection without explicit notification can reduce or eliminate any performance improvement. Similarly, use of caching and multi-server content distribution can also improve performance if done effectively.
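A small client-side experiment conveys the persistent-connection effect the abstract measures at much larger scale; the sketch below is illustrative only: it uses the third-party requests library, placeholder URLs, and does not exercise HTTP/1.1 pipelining, which requests does not expose.

```python
# Illustrative client-side timing sketch: persistent connection (one
# requests.Session reused for all objects) vs. a fresh connection per object.
# The URL list is a placeholder.
import time
import requests

URLS = ["https://example.com/"] * 5              # placeholder object list

def fetch_persistent(urls):
    start = time.perf_counter()
    with requests.Session() as s:                # connection is kept alive and reused
        for u in urls:
            s.get(u, timeout=10)
    return time.perf_counter() - start

def fetch_nonpersistent(urls):
    start = time.perf_counter()
    for u in urls:
        # requests.get uses a throwaway session, so each object costs a new
        # TCP (and TLS) handshake; Connection: close makes that explicit.
        requests.get(u, timeout=10, headers={"Connection": "close"})
    return time.perf_counter() - start

if __name__ == "__main__":
    print("persistent  : %.3f s" % fetch_persistent(URLS))
    print("per-request : %.3f s" % fetch_nonpersistent(URLS))
```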

Journal ArticleDOI
TL;DR: In this article, the effect of magnesium and silicon additions to aluminum, free silicon on the SiC substrate, nitrogen gas in the atmosphere, and process temperature on the wetting characteristics of SiC by aluminum alloys were investigated using the sessile drop technique.
Abstract: The effect of magnesium and silicon additions to aluminum, free silicon on the SiC substrate, nitrogen gas in the atmosphere, and process temperature on the wetting characteristics of SiC by aluminum alloys are investigated using the sessile drop technique The contribution of each of these parameters and their interactions to the contact angle, surface tension, and driving force for wetting are determined In addition, an optimized process for enhanced wetting is suggested and validated Results show that the presence of free silicon on the surface of SiC significantly reduces the contact angle between the molten alloy and the substrate The positive effect of silicon on the contact angle is attributed to a chemical reaction in which both SiC and aluminum are active participants The results also indicate that nitrogen gas in the atmosphere positively influences the liquid/vapor surface tension, and the presence of magnesium in the aluminum alloy favorably affects the overall driving force for wetting A mechanism is proposed to explain the beneficial role that the interaction of nitrogen with magnesium plays in enhancing wetting Magnesium significantly reduces the surface tension of aluminum melts but has a low vapor pressure Consequently, it readily volatilizes during holding at the processing temperature and is lost from the alloy It is proposed that a series of chemical reactions in the system Al-Mg-N are responsible for reintroducing magnesium into the melt, thus, maintaining a low melt surface tension Interactions between the aluminum alloy and the silicon carbide substrate that may lead to the dissolution of the substrate and the formation of undesirable reaction products, particularly Al4C3, are examined, and means for mitigating their formation are outlined
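For background on the quantities the abstract refers to (contact angle, surface tensions, driving force for wetting), the textbook sessile-drop relation is Young's equation; this is standard background, not an equation quoted from the paper.

```latex
% Young's equation relating the equilibrium contact angle \theta to the
% solid-vapor, solid-liquid, and liquid-vapor surface tensions:
\gamma_{sv} \;=\; \gamma_{sl} + \gamma_{lv}\cos\theta
\qquad\Longrightarrow\qquad
\cos\theta \;=\; \frac{\gamma_{sv} - \gamma_{sl}}{\gamma_{lv}} .
% The adhesion tension \gamma_{lv}\cos\theta \,(=\gamma_{sv}-\gamma_{sl})
% is one commonly used measure of the driving force for wetting.
```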

Proceedings ArticleDOI
01 Feb 2000
TL;DR: This contribution investigates the significance of an FPGA implementation of Serpent, one of the Advanced Encryption Standard candidate algorithms, and finds that Serpent can be implemented with encryption rates beyond 4 Gbit/s on current FPGAs.
Abstract: With the expiration of the Data Encryption Standard (DES) in 1998, the Advanced Encryption Standard (AES) development process is well underway. It is hoped that the result of the AES process will be the specification of a new non-classified encryption algorithm that will have the global acceptance achieved by DES as well as the capability of long-term protection of sensitive information. The technical analysis used in determining which of the potential AES candidates will be selected as the Advanced Encryption Algorithm includes efficiency testing of both hardware and software implementations of candidate algorithms. Reprogrammable devices such as Field Programmable Gate Arrays (FPGAs) are highly attractive options for hardware implementations of encryption algorithms as they provide cryptographic algorithm agility, physical security, and potentially much higher performance than software solutions. This contribution investigates the significance of an FPGA implementation of Serpent, one of the Advanced Encryption Standard candidate algorithms. Multiple architecture options of the Serpent algorithm will be explored with a strong focus being placed on a high speed implementation within an FPGA in order to support security for current and future high bandwidth applications. One of the main findings is that Serpent can be implemented with encryption rates beyond 4 Gbit/s on current FPGAs.
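As a quick sanity check on the headline figure, a fully pipelined block cipher core accepts one 128-bit block per clock cycle, so throughput is simply block size times clock frequency; the clock value below is chosen only to illustrate the arithmetic and is not a figure quoted from the paper.

```latex
\text{throughput} \;=\; \text{block size} \times f_{\mathrm{clk}}
  \;=\; 128\ \text{bit} \times 31.25\ \text{MHz} \;=\; 4\ \text{Gbit/s}.
```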

Journal ArticleDOI
TL;DR: This paper synthesizes the available sourcing alternatives into four categories, namely multiple sourcing, single sourcing, single/dual hybrid or network sourcing, and global sourcing, and provides a comprehensive review of these purchasing methods based on extensive literature.
Abstract: As more evidence indicates that a corporation is very much defined by its purchases and benefited by its close partnership with the suppliers, the sourcing decision becomes increasingly important in the firm’s growth and profit. This paper synthesizes the available sourcing alternatives into four categories, namely multiple sourcing, single sourcing, single/dual hybrid or network sourcing, and global sourcing, and provides a comprehensive review of these purchasing methods based on extensive literature. Besides the discussion of the pros and cons, the paper focuses on the underlying factors that determine the preference and suitability of each sourcing option. In addition, with the note that numerous companies are switching to do business on a global basis, we attempt to use China as an example to examine global sourcing from the standpoints of both buyer and supplier.

Book ChapterDOI
01 Jan 2000
TL;DR: The use of design rationale is investigated by building InfoRat, a system that inferences over a design's rationale in order to detect inconsistencies and to assess the impact of changes.
Abstract: Design Rationale (DR) consists of the decisions made during the design process and the reasons behind them. Because it offers more than just a “snapshot” of the final design decisions, DR is invaluable as an aid for revising, maintaining, documenting, evaluating, and learning the design. Much work has been performed on how DR can be captured and represented but not as much on how it can be used. In this paper, we investigate the use of DR by building InfoRat, a system that inferences over a design’s rationale in order to detect inconsistencies and to assess the impact of changes.

Journal ArticleDOI
TL;DR: A new multispectral (MS) approach is described to characterize cerebral ischemia in a time-independent fashion; it produced an estimate of lesion volume that was highly correlated with postmortem infarct volume, independent of the age of the lesion.
Abstract: A major difficulty in staging and predicting ischemic brain injury by magnetic resonance (MR) imaging is the time-varying nature of the MR parameters within the ischemic lesion. A new multispectral (MS) approach is described to characterize cerebral ischemia in a time-independent fashion. MS analysis of five MR parameters (mean diffusivity, diffusion anisotropy, T2, proton density, and perfusion) was employed to characterize the progression of ischemic lesion in the rat brain following 60 minutes of transient focal ischemia. k-Means (KM) and fuzzy c-means (FCM) classification methods were employed to define the acute and subacute ischemic lesion. KM produced an estimate of lesion volume that was highly correlated with postmortem infarct volume, independent of the age of the lesion. Overall classification rates for KM exceeded FCM at acute and subacute time points as follows: KM, 90.5%, 94.4%, and 95.9%; FCM, 82.4%, 90.6%, and 82.6% (for 45 minutes, 180 minutes, and 24–120 hours post MCAO groups). MS analysis also offers a formal method of combining diffusion and perfusion parameters to provide an estimate of the ischemic penumbra (KM classification rate = 70.3%). J. Magn. Reson. Imaging 2000;12:842–858. © 2000 Wiley-Liss, Inc.
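To make the multispectral classification step concrete (this is an illustrative sketch, not the paper's pipeline; the synthetic maps, z-scoring, and cluster count are assumptions), the five MR parameter maps can be stacked into per-voxel feature vectors and clustered with k-means:

```python
# Illustrative sketch of multispectral voxel classification: stack the five
# MR parameter maps into per-voxel feature vectors and run k-means.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
shape = (64, 64)
# stand-ins for: mean diffusivity, diffusion anisotropy, T2, proton density, perfusion
maps = {name: rng.normal(size=shape) for name in ["ADC", "FA", "T2", "PD", "CBF"]}

features = np.stack([m.ravel() for m in maps.values()], axis=1)   # (n_voxels, 5)
features = StandardScaler().fit_transform(features)               # z-score each parameter

labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(features)
label_map = labels.reshape(shape)                                  # tissue-class image
print("voxels per class:", np.bincount(labels))
```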

Journal ArticleDOI
TL;DR: In this article, the problem of finding the optimal location of sensors and actuators to achieve reduction of the noise field in an acoustic cavity is investigated, and two control strategies are proposed: linear quadratic tracking where the offending noise is tracked, and the formulation of the harmonic control strategy as a periodic static output feedback control problem.

Journal ArticleDOI
TL;DR: This paper examines whether managers in the U.S. manage earnings, profits, and losses surprises to a greater extent than do managers in 12 other countries, and finds that U.S. managers are more likely to manage earnings and profits surprises than managers in the other countries.
Abstract: This paper examines whether managers in the U.S. manage earnings, profits and losses surprises to a greater extent than do managers in 12 other countries. We expect managers in the U.S. to be more likely to manage earnings, profits and losses surprises than do managers in other countries since U.S. managers have greater incentives to monitor current price performance. These incentives include greater equity ownership by top executives, more monitoring by institutional and large shareholders, a larger number of outside directors on their board of directors, a greater threat of external takeovers, and a more litigious environment. Consistent with our expectations, we find that managers in the U.S. manage earnings surprises relatively more than do managers in 12 other countries. More specifically, U.S. managers are more likely to manage profits surprises than do managers in all 12 other countries examined, and they are more likely to manage losses surprises than do managers in all other countries except Japan, the only country requiring managers to forecast earnings. We also expect U.S. managers to be more likely to manage analysts' estimates, given their extensive public relations departments and their greater incentives to manage earnings surprises. Our evidence bears out our contention.

Journal ArticleDOI
TL;DR: The power priors discussed in this paper are based on the notion of the availability of historical data and are of great potential use in this context; the paper demonstrates how to construct these priors and elicit their hyperparameters.

Journal ArticleDOI
TL;DR: The results indicate that the human plasma complement regulators, FH and C4bpalpha, fall into two distinct groups on the basis of their sequence divergence.
Abstract: Evolutionary relationships among members of the regulator of complement activation (RCA) gene cluster were analyzed using neighbor-joining and parsimony methods of phylogenetic tree inference. We investigated the structural and functional similarities among short consensus repeats (SCRs) of the following human proteins: the alpha chain of the C4b-binding protein (C4bpalpha), factor H (FH), factor H-related proteins (FHR-1 through FHR-4), complement receptors type 1 (CR1) and type 2 (CR2), the CR1-like protein (CR1L), membrane cofactor protein (MCP), decay accelerating factor (DAF), and the sand bass proteins, the cofactor protein (SBP1) and its homolog, the cofactor-related protein (SBCRP-1). Also included are the beta chain of the human C4b-binding protein (C4bpbeta) and the b subunit of human blood-clotting factor XIII (FXIIIb). Our results indicate that the human plasma complement regulators, FH and C4bpalpha, fall into two distinct groups on the basis of their sequence divergence. Homology among RCA proteins is in agreement with their chromosomal location, with the exception of C4bpbeta. The evolutionary relationships among individual short consensus repeats are confirmed by the exon/intron structure of the RCA members. Structural similarities among repeats of the RCA proteins correlate with their functional activities and demonstrate the importance of the N-terminal SCRs.
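To make the neighbor-joining step concrete (an illustrative sketch of the standard Saitou-Nei criterion, not the authors' software, with a made-up distance matrix), each iteration joins the pair of sequences minimizing the Q matrix built from the pairwise distances:

```python
# Illustrative sketch of one neighbor-joining step: build the Q matrix from a
# pairwise distance matrix and pick the pair of taxa to join.
import numpy as np

def nj_pair_to_join(D):
    """Return (i, j) minimizing Q[i, j] = (n - 2) * D[i, j] - r_i - r_j."""
    n = D.shape[0]
    r = D.sum(axis=1)                           # row sums of distances
    Q = (n - 2) * D - r[:, None] - r[None, :]
    np.fill_diagonal(Q, np.inf)                 # never join a taxon with itself
    i, j = np.unravel_index(np.argmin(Q), Q.shape)
    return (i, j), Q

if __name__ == "__main__":
    # toy symmetric distance matrix for five sequences (made-up values)
    D = np.array([[0, 5, 9, 9, 8],
                  [5, 0, 10, 10, 9],
                  [9, 10, 0, 8, 7],
                  [9, 10, 8, 0, 3],
                  [8, 9, 7, 3, 0]], dtype=float)
    (i, j), _ = nj_pair_to_join(D)
    print("join taxa", i, "and", j)             # expected: 0 and 1
```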