
Showing papers by "Drexel University published in 2003"


Journal ArticleDOI
TL;DR: Research on experienced repeat online shoppers shows that consumer trust is as important to online commerce as the widely accepted TAM use-antecedents, perceived usefulness and perceived ease of use, and provides evidence that online trust is built through a belief that the vendor has nothing to gain by cheating.
Abstract: A separate and distinct interaction with both the actual e-vendor and with its IT Web site interface is at the heart of online shopping. Previous research has established, accordingly, that online purchase intentions are the product of both consumer assessments of the IT itself (specifically its perceived usefulness and ease of use, per TAM) and trust in the e-vendor. But these perspectives have been examined independently by IS researchers. Integrating these two perspectives and examining the factors that build online trust in an environment that lacks the typical human interaction that often leads to trust in other circumstances advances our understanding of these constructs and their linkages to behavior. Our research on experienced repeat online shoppers shows that consumer trust is as important to online commerce as the widely accepted TAM use-antecedents, perceived usefulness and perceived ease of use. Together these variable sets explain a considerable proportion of variance in intended behavior. The study also provides evidence that online trust is built through (1) a belief that the vendor has nothing to gain by cheating, (2) a belief that there are safety mechanisms built into the Web site, and (3) a typical interface, (4) one that is, moreover, easy to use.

6,853 citations


Journal ArticleDOI
TL;DR: The performance of this method for removing noise from digital images substantially surpasses that of previously published methods, both visually and in terms of mean squared error.
Abstract: We describe a method for removing noise from digital images, based on a statistical model of the coefficients of an overcomplete multiscale oriented basis. Neighborhoods of coefficients at adjacent positions and scales are modeled as the product of two independent random variables: a Gaussian vector and a hidden positive scalar multiplier. The latter modulates the local variance of the coefficients in the neighborhood, and is thus able to account for the empirically observed correlation between the coefficient amplitudes. Under this model, the Bayesian least squares estimate of each coefficient reduces to a weighted average of the local linear estimates over all possible values of the hidden multiplier variable. We demonstrate through simulations with images contaminated by additive white Gaussian noise that the performance of this method substantially surpasses that of previously published methods, both visually and in terms of mean squared error.
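The estimator described above lends itself to a compact numerical illustration. Below is a minimal, scalar sketch of the Bayes least-squares rule under a Gaussian scale mixture prior: the estimate is a posterior-weighted average of linear (Wiener) estimates over a grid of hidden-multiplier values. This illustrates the idea only (the published method operates on vector neighborhoods of pyramid coefficients); the function name and the log-spaced grid are our own choices.

```python
import numpy as np

# Scalar sketch of the Bayes least-squares estimator under a Gaussian
# scale mixture (GSM) prior, per the abstract: a coefficient is modeled
# as x = sqrt(z) * u with u Gaussian and z a hidden positive multiplier,
# observed as y = x + w with additive Gaussian noise w.

def bls_gsm_scalar(y, sigma_u=1.0, sigma_w=0.1, z_grid=None):
    """E[x | y] as a p(z|y)-weighted average of linear (Wiener) estimates."""
    if z_grid is None:
        z_grid = np.logspace(-3, 3, 64)          # candidate multiplier values
    var_y = z_grid * sigma_u**2 + sigma_w**2     # Var(y | z)
    # Likelihood p(y | z) for zero-mean Gaussian y given z (flat prior on grid)
    lik = np.exp(-0.5 * y**2 / var_y) / np.sqrt(2 * np.pi * var_y)
    post = lik / lik.sum()                       # posterior p(z | y) on the grid
    x_hat_z = (z_grid * sigma_u**2 / var_y) * y  # Wiener shrinkage for each z
    return np.sum(post * x_hat_z)                # weighted average over z

print(bls_gsm_scalar(0.5))   # small coefficient -> strongly shrunk toward zero
print(bls_gsm_scalar(5.0))   # large coefficient -> nearly unshrunk
```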

2,439 citations


Journal ArticleDOI
TL;DR: It is suggested that dysregulation of locus coeruleus-noradrenergic neurotransmission may contribute to cognitive and/or arousal dysfunction associated with a variety of psychiatric disorders, including attention-deficit hyperactivity disorder, sleep and arousal disorders, as well as certain affective disorders, including post-traumatic stress disorder.

2,207 citations


Journal ArticleDOI
K. Eguchi1, Sanshiro Enomoto1, K. Furuno1, J. Goldman1, H. Hanada1, H. Ikeda, Kiyohiro Ikeda1, Kunio Inoue, K. Ishihara1, W. Itoh1, T. Iwamoto1, Tomoya Kawaguchi1, T. Kawashima1, H. Kinoshita1, Yasuhiro Kishimoto, M. Koga, Y. Koseki1, T. Maeda1, T. Mitsui, M. Motoki, K. Nakajima1, M. Nakajima1, T. Nakajima1, Hiroshi Ogawa1, K. Owada1, T. Sakabe1, I. Shimizu, J. Shirai1, F. Suekane, A. Suzuki1, K. Tada1, Osamu Tajima1, T. Takayama1, K. Tamae1, Hideki Watanabe, J. Busenitz2, Z. Djurcic2, K. McKinny2, Dongming Mei2, A. Piepke2, E. Yakushev2, B. E. Berger3, Y. D. Chan3, M. P. Decowski3, D. A. Dwyer3, Stuart J. Freedman3, Y. Fu3, B. K. Fujikawa3, K. M. Heeger3, K. T. Lesko3, K. B. Luk3, Hitoshi Murayama3, D. R. Nygren3, C. E. Okada3, A. W. P. Poon3, H. M. Steiner3, Lindley Winslow3, G. A. Horton-Smith4, R. D. McKeown4, J. Ritter4, B. Tipton4, Petr Vogel4, C. E. Lane5, T. Miletic5, Peter Gorham, G. Guillian, John G. Learned, J. Maricic, S. Matsuno, Sandip Pakvasa, S. Dazeley6, S. Hatakeyama6, M. Murakami6, R. Svoboda6, B. D. Dieterle7, M. DiMauro7, J. A. Detwiler8, Giorgio Gratta8, K. Ishii8, N. Tolich8, Y. Uchida8, M. Batygov9, W. M. Bugg9, H. O. Cohn9, Yuri Efremenko9, Yuri Kamyshkov9, A. Kozlov9, Y. Nakamura9, L. De Braeckeleer10, L. De Braeckeleer11, C. R. Gould11, C. R. Gould10, Hugon J Karwowski11, Hugon J Karwowski10, D. M. Markoff10, D. M. Markoff11, J. A. Messimore10, J. A. Messimore11, Koji Nakamura10, Koji Nakamura11, Ryan Rohm11, Ryan Rohm10, Werner Tornow11, Werner Tornow10, Albert Young11, Albert Young10, Y. F. Wang 
TL;DR: In the context of two-flavor neutrino oscillations with CPT invariance, all solutions to the solar neutrino problem except for the "large mixing angle" region are excluded.
Abstract: KamLAND has measured the flux of ν̄_e's from distant nuclear reactors. We find fewer ν̄_e events than expected from standard assumptions about ν̄_e propagation at the 99.95% C.L. In a 162 ton·yr exposure the ratio of the observed inverse β-decay events to the expected number without ν̄_e disappearance is 0.611 ± 0.085(stat) ± 0.041(syst) for ν̄_e energies > 3.4 MeV. In the context of two-flavor neutrino oscillations with CPT invariance, all solutions to the solar neutrino problem except for the “large mixing angle” region are excluded.
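For reference, the two-flavor analysis mentioned above rests on the standard survival probability below; this is textbook notation added here for context, not a formula quoted from the paper.

```latex
% Two-flavor survival probability for reactor antineutrinos:
% L is the baseline and E the antineutrino energy.
P(\bar{\nu}_e \rightarrow \bar{\nu}_e)
  = 1 - \sin^2 2\theta\,
        \sin^2\!\left(\frac{1.27\,\Delta m^2[\mathrm{eV}^2]\;L[\mathrm{m}]}{E[\mathrm{MeV}]}\right)
```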

2,108 citations


Journal ArticleDOI
TL;DR: In this paper, the authors investigated the impact of e-satisfaction on e-loyalty in the context of electronic commerce and found that consumers' individual level factors and firms' business level factors moderated the relationship between e-satisfaction and e-loyalty.
Abstract: The authors investigate the impact of satisfaction on loyalty in the context of electronic commerce. Findings of this research indicate that although e-satisfaction has an impact on e-loyalty, this relationship is moderated by (a) consumers' individual level factors and (b) firms' business level factors. Among consumer level factors, convenience motivation and purchase size were found to accentuate the impact of e-satisfaction on e-loyalty, whereas inertia suppresses the impact of e-satisfaction on e-loyalty. With respect to business level factors, both trust and perceived value, as developed by the company, significantly accentuate the impact of e-satisfaction on e-loyalty. © 2003 Wiley Periodicals, Inc.

2,011 citations


Journal ArticleDOI
TL;DR: In this article, the authors examined the relation between work-family balance and quality of life among professionals employed in public accounting and found that, among those who invested substantial time in their combined work and family roles, those who spent more time on family than on work experienced a higher quality of life than balanced individuals.

1,348 citations


Journal ArticleDOI
TL;DR: The definitions of trust are analyzed, the relevant dimensions of trust for an on-line context are identified, and a definition of trust between people and informational or transactional websites is presented.
Abstract: Trust is emerging as a key element of success in the on-line environment. Although considerable research on trust in the offline world has been performed, to date empirical study of on-line trust has been limited. This paper examines on-line trust, specifically trust between people and informational or transactional websites. It begins by analysing the definitions of trust in previous offline and on-line research. The relevant dimensions of trust for an on-line context are identified, and a definition of trust between people and informational or transactional websites is presented. We then turn to an examination of the causes of on-line trust. Relevant findings in the human-computer interaction literature are identified. A model of on-line trust between users and websites is presented. The model identifies three perceptual factors that impact on-line trust: perception of credibility, ease of use and risk. The model is discussed in detail and suggestions for future applications of the model are presented.

1,151 citations


Journal ArticleDOI
TL;DR: In this paper, the authors measured the galaxy luminosity density at z = 0.1 in five optical bandpasses corresponding to the SDSS bandpasses shifted to match their rest-frame shape.
Abstract: Using a catalog of 147,986 galaxy redshifts and fluxes from the Sloan Digital Sky Survey (SDSS), we measure the galaxy luminosity density at z = 0.1 in five optical bandpasses corresponding to the SDSS bandpasses shifted to match their rest-frame shape at z = 0.1. We denote the bands 0.1u, 0.1g, 0.1r, 0.1i, 0.1z, with λeff = (3216, 4240, 5595, 6792, 8111 Å), respectively. To estimate the luminosity function, we use a maximum likelihood method that allows for a general form for the shape of the luminosity function, fits for simple luminosity and number evolution, incorporates the flux uncertainties, and accounts for the flux limits of the survey. We find luminosity densities at z = 0.1, expressed in absolute AB magnitudes in a Mpc^3, to be (-14.10 ± 0.15, -15.18 ± 0.03, -15.90 ± 0.03, -16.24 ± 0.03, -16.56 ± 0.02) in (0.1u, 0.1g, 0.1r, 0.1i, 0.1z), respectively, for a cosmological model with Ω0 = 0.3, ΩΛ = 0.7, and h = 1 and using SDSS Petrosian magnitudes. Similar results are obtained using Sersic model magnitudes, suggesting that flux from outside the Petrosian apertures is not a major correction. In the 0.1r band, the best-fit Schechter function to our results has φ* = (1.49 ± 0.04) × 10^-2 h^3 Mpc^-3, M* - 5 log10 h = -20.44 ± 0.01, and α = -1.05 ± 0.01. In solar luminosities, the luminosity density in 0.1r is (1.84 ± 0.04) × 10^8 h L☉ Mpc^-3. Our results in the 0.1g band are consistent with other estimates of the luminosity density, from the Two-Degree Field Galaxy Redshift Survey and the Millennium Galaxy Catalog. They represent a substantial change (~0.5 mag) from earlier SDSS luminosity density results based on commissioning data, almost entirely because of the inclusion of evolution in the luminosity function model.
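For context, the Schechter form quoted above, and the luminosity density it implies, are the standard relations below (added here for the reader; the notation matches the fitted parameters φ*, M*, and α):

```latex
% Schechter luminosity function and the implied luminosity density
\Phi(L)\,dL = \phi^* \left(\frac{L}{L^*}\right)^{\alpha} e^{-L/L^*}\,\frac{dL}{L^*},
\qquad
j \equiv \int_0^\infty L\,\Phi(L)\,dL = \phi^*\,L^*\,\Gamma(\alpha + 2)
```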

1,138 citations


Journal ArticleDOI
TL;DR: The Sloan Digital Sky Survey (SDSS) has validated and made publicly available its First Data Release, consisting of 2099 deg² of five-band (u, g, r, i, z) imaging data, 186,240 spectra of galaxies, quasars, stars, and calibrating blank sky patches selected over 1360 deg² of this area.
Abstract: The Sloan Digital Sky Survey (SDSS) has validated and made publicly available its First Data Release. This consists of 2099 deg² of five-band (u, g, r, i, z) imaging data, 186,240 spectra of galaxies, quasars, stars, and calibrating blank sky patches selected over 1360 deg² of this area, and tables of measured parameters from these data. The imaging data go to a depth of r ≈ 22.6 and are photometrically and astrometrically calibrated to 2% rms and 100 mas rms per coordinate, respectively. The spectra cover the range 3800–9200 Å, with a resolution of 1800–2100. This paper describes the characteristics of the data with emphasis on improvements since the release of commissioning data (the SDSS Early Data Release) and serves as a pointer to extensive published and on-line documentation of the survey.

948 citations


Journal ArticleDOI
TL;DR: In this article, the authors present a complete description of the CHOOZ experiment, including the source and detector, the calibration methods and stability checks, the event reconstruction procedures and the Monte Carlo simulation.
Abstract: This final article about the CHOOZ experiment presents a complete description of the $\bar{\nu}_e$ source and detector, the calibration methods and stability checks, the event reconstruction procedures and the Monte Carlo simulation. The data analysis, systematic effects and the methods used to reach our conclusions are fully discussed. Some new remarks are presented on the deduction of the confidence limits and on the correct treatment of systematic errors.

898 citations


Journal ArticleDOI
TL;DR: This study describes a free-simulation experiment that compares the degree and relative importance of customer trust in an e-vendor vis-a-vis TAM constructs of the website, between potential and repeat customers, and finds that repeat customers trusted the e-Vendor more, perceived the website to be more useful and easier to use, and were more inclined to purchase from it.
Abstract: An e-vendor's website inseparably embodies an interaction with the vendor and an interaction with the IT website interface. Accordingly, research has shown two sets of unrelated usage antecedents by customers: (1) customer trust in the e-vendor and (2) customer assessments of the IT itself, specifically the perceived usefulness and perceived ease-of-use of the website as depicted in the technology acceptance model (TAM). Research suggests, however, that the degree and impact of trust, perceived usefulness, and perceived ease of use change with experience. Using existing, validated scales, this study describes a free-simulation experiment that compares the degree and relative importance of customer trust in an e-vendor vis-a-vis TAM constructs of the website, between potential (i.e., new) customers and repeat (i.e., experienced) ones. The study found that repeat customers trusted the e-vendor more, perceived the website to be more useful and easier to use, and were more inclined to purchase from it. The data also show that while repeat customers' purchase intentions were influenced by both their trust in the e-vendor and their perception that the website was useful, potential customers were not influenced by perceived usefulness, but only by their trust in the e-vendor. Implications of this apparent trust-barrier and guidelines for practice are discussed.

Journal ArticleDOI
TL;DR: Using photometry and spectroscopy of 183,487 galaxies from the Sloan Digital Sky Survey, the authors presented bivariate distributions of pairs of seven galaxy properties: four optical colors, surface brightness, radial profile shape as measured by the Sersic index, and absolute magnitude.
Abstract: Using photometry and spectroscopy of 183,487 galaxies from the Sloan Digital Sky Survey, we present bivariate distributions of pairs of seven galaxy properties: four optical colors, surface brightness, radial profile shape as measured by the Sersic index, and absolute magnitude. In addition, we present the dependence of local galaxy density (smoothed on 8 h^-1 Mpc scales) on all of these properties. Several classic, well-known relations among galaxy properties are evident at extremely high signal-to-noise ratio: the color-color relations of galaxies, the color-magnitude relations, the magnitude-surface brightness relation, and the dependence of density on color and absolute magnitude. We show that most of the i-band luminosity density in the universe is in the absolute magnitude and surface brightness ranges used: -23.5 < M0.1i < -17.0 mag and 17 < μ0.1i < 24 mag arcsec^-2 (the notation zb represents the b band shifted blueward by a factor (1 + z)). Some of the relationships between parameters, in particular the color-magnitude relations, show stronger correlations for exponential galaxies and concentrated galaxies taken separately than for all galaxies taken together. We provide a simple set of fits of the dependence of galaxy properties on luminosity for these two sets of galaxies and other quantitative details of our results. Subject headings: galaxies: fundamental parameters — galaxies: photometry — galaxies: statistics. On-line material: ASCII parameter files, color figure, FITS files. 1. MOTIVATION: There are strong correlations among the measurable physical properties of galaxies. The classification of galaxies along the visual morphological sequence described by Hubble (1936) correlates well with the dominance of their central bulge, their surface brightnesses, and their colors. These properties also correlate with other properties, such as metallicity, emission-line strength, luminosity in visual bands, neutral gas content, and the winding angle of the spiral structure (for a review, see Roberts & Haynes 1994). The surface brightnesses of giant galaxies classified morphologically as elliptical are known to be strongly correlated with their sizes (Kormendy 1977; Kormendy & Djorgovski 1989). Galaxy colors (at least of morphologically elliptical galaxies) are known to be strongly correlated with galaxy luminosity (Baum 1959; Faber 1973; Visvanathan & Sandage 1977; Terlevich et al. 2001). The gravitational mass of a galaxy is closely related to the luminosity and other galaxy properties. These galaxy relations manifest themselves …
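As background for the "radial profile shape as measured by the Sersic index" above: the Sersic profile is the standard parameterization below (general background, not a formula quoted from this paper); n = 1 gives an exponential disk and n = 4 the de Vaucouleurs profile.

```latex
% Sersic surface-brightness profile; b_n is chosen so that the
% effective radius r_e encloses half of the total light.
I(r) = I_e \exp\!\left\{-b_n\left[\left(\frac{r}{r_e}\right)^{1/n} - 1\right]\right\}
```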

Proceedings ArticleDOI
27 Oct 2003
TL;DR: A new, general approach for safeguarding systems against any type of code-injection attack by creating process-specific randomized instruction sets on the system executing potentially vulnerable software; the approach can serve as a low-overhead protection mechanism and can easily complement other mechanisms.
Abstract: We describe a new, general approach for safeguarding systems against any type of code-injection attack. We apply Kerckhoff's principle by creating process-specific randomized instruction sets (e.g., machine instructions) of the system executing potentially vulnerable software. An attacker who does not know the key to the randomization algorithm will inject code that is invalid for that randomized processor, causing a runtime exception. To determine the difficulty of integrating support for the proposed mechanism in the operating system, we modified the Linux kernel, the GNU binutils tools, and the bochs-x86 emulator. Although the performance penalty is significant, our prototype demonstrates the feasibility of the approach, and should be directly usable on a suitably modified processor (e.g., the Transmeta Crusoe). Our approach is equally applicable against code-injection attacks in scripting and interpreted languages, e.g., web-based SQL injection. We demonstrate this by modifying the Perl interpreter to permit randomized script execution. The performance penalty in this case is minimal. Where our proposed approach is feasible (i.e., in an emulated environment, in the presence of programmable or specialized hardware, or in interpreted languages), it can serve as a low-overhead protection mechanism, and can easily complement other mechanisms.
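To make the randomization idea concrete, here is a toy sketch of the core mechanism under the simplest scheme one could imagine (XORing code bytes with a per-process key); the paper's actual kernel, binutils, and emulator changes are far more involved, and every name below is illustrative.

```python
import os

# Toy model of instruction-set randomization: code bytes are XORed with
# a per-process key at load time and XORed again at fetch time. Code
# injected without the key decodes to junk that should fault. This is
# an illustration, not the paper's implementation.

def xor_bytes(data: bytes, key: bytes) -> bytes:
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

key = os.urandom(16)                     # per-process randomization key

legit_code = b"\x90\x90\xc3"             # bytes the loader randomizes
loaded = xor_bytes(legit_code, key)      # what actually sits in memory

# The CPU/emulator de-randomizes at fetch: legitimate code round-trips...
assert xor_bytes(loaded, key) == legit_code

# ...but attacker-injected plaintext shellcode does not survive the fetch
injected = b"\x31\xc0\x50\x68"
decoded = xor_bytes(injected, key)       # garbage with overwhelming probability
print(decoded != injected)               # True: executes as invalid instructions
```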


Journal ArticleDOI
TL;DR: As discussed by the authors, behavioral treatment for obesity seeks to identify and modify eating, activity, and thinking habits that contribute to patients' weight problems, while recognizing that body weight is affected by factors other than behavior, including genetic, metabolic, and hormonal influences.

Journal ArticleDOI
TL;DR: A multicenter study of screening for trisomies 21 and 18 among patients with pregnancies between 74 and 97 days of gestation, based on maternal age, maternal levels of free β human chorionic gonadotropin and pregnancy-associated plasma protein A, and ultrasonographic measurement of fetal nuchal translucency, identified 85.2 percent of the 61 cases of Down's syndrome.
Abstract: Background Screening for aneuploid pregnancies is routinely performed after 15 weeks of gestation and has a sensitivity of approximately 65 percent, with a false positive rate of 5 percent. First-trimester markers of aneuploidy have been developed, but their use in combination has not been adequately evaluated in clinical practice. Methods We conducted a multicenter study of screening for trisomies 21 and 18 among patients with pregnancies between 74 and 97 days of gestation, based on maternal age, maternal levels of free β human chorionic gonadotropin and pregnancy-associated plasma protein A, and ultrasonographic measurement of fetal nuchal translucency. A screening result was considered to be positive for trisomy 21 if the calculated risk was at least 1 in 270 pregnancies and positive for trisomy 18 if the risk was at least 1 in 150. Results Screening was completed in 8514 patients with singleton pregnancies. This approach to screening identified 85.2 percent of the 61 cases of Down's syndrome (95 perc...
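The decision rule itself is simple enough to state in a few lines. The sketch below encodes only the positivity thresholds quoted above (1 in 270 for trisomy 21, 1 in 150 for trisomy 18); computing the risk from maternal age, serum markers, and nuchal translucency is the screening model's job and is not shown. Names are illustrative.

```python
# Positivity rule stated in the abstract: a screen is positive for
# trisomy 21 at a calculated risk of at least 1 in 270, and for
# trisomy 18 at a risk of at least 1 in 150.

def screen_result(risk_t21: float, risk_t18: float) -> dict:
    return {
        "trisomy_21_positive": risk_t21 >= 1 / 270,
        "trisomy_18_positive": risk_t18 >= 1 / 150,
    }

print(screen_result(risk_t21=1 / 200, risk_t18=1 / 1000))
# {'trisomy_21_positive': True, 'trisomy_18_positive': False}
```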

Journal ArticleDOI
TL;DR: In this paper, the authors identify four distributional categories into which such data can be put, and focus on regression models for the first category, for proportions observed on the open interval (0, 1).
Abstract: Many types of studies examine the influence of selected variables on the conditional expectation of a proportion or vector of proportions, for example, market shares, rock composition, and so on. We identify four distributional categories into which such data can be put, and focus on regression models for the first category, for proportions observed on the open interval (0, 1). For these data, we identify different specifications used in prior research and compare these specifications using two common samples and specifications of the regressors. Based upon our analysis, we recommend that researchers use either a parametric regression model based upon the beta distribution or a quasi-likelihood regression model developed by Papke and Wooldridge (1997) for these data. Concerning the choice between these two regression models, we recommend that researchers use the parametric regression model unless their sample size is large enough to justify the asymptotic arguments underlying the quasi-likelihood approach.
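As a concrete illustration of the second recommended approach, the sketch below fits a Papke-Wooldridge-style fractional logit to synthetic proportions using statsmodels' GLM with a binomial family and robust standard errors; this is a common way to implement that quasi-likelihood estimator, not code from the paper. A parametric beta regression would be the authors' recommendation at smaller sample sizes.

```python
import numpy as np
import statsmodels.api as sm

# Quasi-likelihood ("fractional logit") regression for proportions in
# (0, 1): a GLM with binomial family and logit link, with robust SEs.
# The data here are synthetic.

rng = np.random.default_rng(0)
n = 500
x = rng.normal(size=n)
mu = 1 / (1 + np.exp(-(0.5 + 1.0 * x)))              # logistic conditional mean
y = np.clip(mu + rng.normal(scale=0.05, size=n), 0.01, 0.99)

X = sm.add_constant(x)
res = sm.GLM(y, X, family=sm.families.Binomial()).fit(cov_type="HC1")
print(res.summary())   # coefficients on the logit scale, robust SEs
```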


Journal ArticleDOI
TL;DR: Despite the marked variability in composition, structure, function, and frequency of infection among the various types of nonvalvular cardiovascular devices reviewed in this article, there are several areas of commonality for infection of these devices.
Abstract: More than a century ago, Osler took numerous syndrome descriptions of cardiac valvular infection that were incomplete and confusing and categorized them into the cardiovascular infections known as infective endocarditis. Because he was both a clinician and a pathologist, he was able to provide a meaningful outline of this complex disease. Technical advances have allowed us to better subcategorize infective endocarditis on the basis of microbiological etiology. More recently, the syndromes of infective endocarditis and endarteritis have been expanded to include infections involving a variety of cardiovascular prostheses and devices that are used to replace or assist damaged or dysfunctional tissues (Table 1: Nonvalvular Cardiovascular Device–Related Infections). Taken together, infections of these novel intracardiac, arterial, and venous devices are frequently seen in medical centers throughout the developed world. In response, the American Heart Association’s Committee on Rheumatic Fever, Endocarditis, and Kawasaki Disease wrote this review to assist and educate clinicians who care for an increasing number of patients with nonvalvular cardiovascular device–related infections. Because timely guidelines1,2 exist that address the prevention and management of intravascular catheter–related infections, these device-related infections are not discussed in the present Statement. This review is divided into two broad sections. The first section examines general principles for the evaluation and management of infection that apply to all nonvalvular cardiovascular devices. Despite the marked variability in composition, structure, function, and frequency of infection among the various types of nonvalvular cardiovascular devices reviewed in this article, there are several areas of commonality for infection of these devices. These include clinical manifestations, microbiology, pathogenesis, diagnosis, treatment, and prevention. The second section addresses each device and describes unique clinical features of infection. Each device is placed into one of 3 categories—intracardiac, arterial, or venous—for discussion. Clinical Manifestations: The specific signs and symptoms associated with an infection of a …

Journal ArticleDOI
David Gefen1
TL;DR: The data show that online shoppers’ intentions to continue using a website that they last bought at depend not only on PU and PEOU, but also on habit, indicating that habit alone can explain a large proportion of the variance of continued use of a website.
Abstract: According to the Technology Acceptance Model (TAM), behavioral intentions to use a new IT are primarily the product of a rational analysis of its desirable perceived outcomes, namely perceived usefulness (PU) and perceived ease of use (PEOU). But what happens with the continued use of an IT among experienced users? Does habit also kick in as a major factor or is continued use only the product of its desirable outcomes? This study examines this question in the context of experienced online shoppers. The data show that, as hypothesized, online shoppers’ intentions to continue using a website that they last bought at depend not only on PU and PEOU, but also on habit. In fact, habit alone can explain a large proportion of the variance of continued use of a website. Moreover, the explained variance indicates that habit may also be a major predictor of PU and PEOU among experienced shoppers. Implications are discussed.

Journal ArticleDOI
TL;DR: The finding that mothers who communicate with their daughters about sex can affect their daughters' sexual behaviors in positive ways supports the design and implementation of family-based approaches to improve parent-adolescent sexual risk communication as one means of reducing HIV-related sexual risk behaviors among inner-city adolescent females.

Journal ArticleDOI
TL;DR: In this paper, a conceptual framework incorporating both the motivational and the resource effects of time constraints on consumers' information processing is developed to understand how time constraints influence consumers' product evaluations across different levels of price information. The results show that perceptions of quality and monetary sacrifice exhibit different response patterns depending on the time constraints, price levels, and subjects' motivations to process information.
Abstract: This article examines how time constraints influence consumers' product evaluations over different levels of price information. To understand the effects of time constraints (time pressure), a conceptual framework incorporating both the motivational and the resource effects of time constraints on consumers' information processing is developed. Using price as the attribute information to be evaluated, specific hypotheses about the effects of time constraints on the relationship between price and consumers' perceptions of quality and monetary sacrifice are proposed. The results of a replicated experiment show that perceptions of quality and monetary sacrifice exhibit different response patterns depending on the time constraints, price levels, and subjects' motivations to process information. Additional analyses provide insights into how these two perceptions are integrated to form perceptions of value.

Journal ArticleDOI
TL;DR: This review highlights the major progress made over the last decade in characterizing geochemically heterogeneous soil/sediment organic matter (SOM) and the impacts of SOM heterogeneity on sorption and desorption of hydrophobic organic contaminants (HOCs) under equilibrium and rate-limiting conditions.

Journal ArticleDOI
TL;DR: In this paper, the role of deformation twinning in the strain-hardening behavior of high-purity, polycrystalline α-titanium was investigated in a number of different deformation modes.

Journal ArticleDOI
Oduola Abiola1, Joe M. Angel2, Philip Avner3, Alexander A. Bachmanov4, John K. Belknap5, Beth Bennett6, Elizabeth P. Blankenhorn7, David A. Blizard8, Valerie J. Bolivar9, Gudrun A. Brockmann10, Kari J. Buck5, Jean Francois Bureau3, William L. Casley11, Elissa J. Chesler12, James M. Cheverud13, Gary A. Churchill, Melloni N. Cook14, John C. Crabbe5, Wim E. Crusio15, Ariel Darvasi16, Gerald de Haan17, Peter Demant18, Rebecca W. Doerge19, Rosemary W. Elliott18, Charles R. Farber20, Lorraine Flaherty9, Jonathan Flint21, Howard K. Gershenfeld22, John P. Gibson23, Jing Gu12, Weikuan Gu12, Heinz Himmelbauer24, Robert Hitzemann5, Hui-Chen Hsu25, Kent W. Hunter26, Fuad A. Iraqi23, Ritsert C. Jansen17, Thomas E. Johnson6, Byron C. Jones8, Gerd Kempermann27, Frank Lammert28, Lu Lu12, Kenneth F. Manly18, Douglas B. Matthews14, Juan F. Medrano20, Margarete Mehrabian29, Guy Mittleman14, Beverly A. Mock26, Jeffrey S. Mogil30, Xavier Montagutelli3, Grant Morahan31, John D. Mountz25, Hiroki Nagase18, Richard S. Nowakowski32, Bruce F. O'Hara33, Alexander V. Osadchuk, Beverly Paigen, Abraham A. Palmer34, Jeremy L. Peirce35, Daniel Pomp36, Michael Rosemann, Glenn D. Rosen37, Leonard C. Schalkwyk1, Ze'ev Seltzer38, Stephen H. Settle39, Kazuhiro Shimomura40, Siming Shou41, James M. Sikela42, Linda D. Siracusa43, Jimmy L. Spearow20, Cory Teuscher44, David W. Threadgill45, Linda A. Toth46, A. A. Toye47, Csaba Vadasz48, Gary Van Zant49, Edward K. Wakeland22, Robert W. Williams12, Huang-Ge Zhang25, Fei Zou45 
TL;DR: This white paper by eighty members of the Complex Trait Consortium presents a community's view on the approaches and statistical analyses that are needed for the identification of genetic loci that determine quantitative traits.
Abstract: This white paper by eighty members of the Complex Trait Consortium presents a community's view on the approaches and statistical analyses that are needed for the identification of genetic loci that determine quantitative traits. Quantitative trait loci (QTLs) can be identified in several ways, but is there a definitive test of whether a candidate locus actually corresponds to a specific QTL?

Journal ArticleDOI
TL;DR: Calomiris and Mason argued that bank distress magnified the extent of the economic decline during the Depression through changes in the aggregate supply of money and interest rates at the national level.
Abstract: The consequences of bank distress for the economy during the Depression remain an area of unresolved controversy. Since John M. Keynes (1931) and Irving Fisher (1933), macroeconomists have argued that bank distress magnified the extent of the economic decline during the Depression. As the intermediaries controlling money and credit, banks were in a special position to transmit their distress to other sectors. But the mechanism through which banking distress mattered for the economy has been hotly contested. Milton Friedman and Anna J. Schwartz (1963) saw the contraction in the money multiplier—driven, in their view, by panicked depositors’ withdrawals of deposits—as the primary mechanism through which banking distress affected the real economy. They described the mechanism transmitting banking distress to the real sector as operating at the national level through changes in the aggregate supply of money and interest rates. Bank distress reduced the money supply available to the public either through the closure of banks and the consequent freezing of bank deposits, or the withdrawals of deposits by depositors that feared bank failure. Ben S. Bernanke (1983), building on Fisher (1933), emphasized the transmission of monetary shocks via their effects on the balance sheets of borrowers and on the supply of credit by banks. Borrowers’ balance sheets were worsened by debt deflation as the result of fixed dollar debt obligations—borrowers’ net worth and cash flow declined with the rising value of debt service costs relative to income. Borrowers with positive net present value projects, but weak balance sheets, had less internally generated retained earnings to invest and could not qualify for credit. Furthermore, Bernanke argued that the contraction of the money supply produced contraction of nominal income and prices relative to fixed debt service, which weakened borrowers’ balance sheets, and in turn, weakened banks. Not only did firms’ financial distress reduce the number of qualified borrowers, the contraction in banks’ net worth forced a reduction in the supply of bank loans to qualified borrowers. Many firms and individuals relied on banks for credit, and as those banks suffered losses of capital (due to asset value declines) and contractions in deposits (as depositors reacted to bank weakness by withdrawing their funds), even borrowers with viable projects and strong balance sheets experienced a decrease in the effective supply of loanable funds. Bernanke termed the combined weakening of borrowers’ balance sheets and the contraction in bank credit supply a rise in the “cost of credit intermediation.” The scarcity of perfect substitutes for the positive net present value investments of firms with weak balance sheets, and for the credit supplied by existing banks, implies that the weakening of firms’ and banks’ balance sheets, the disappearance of banks, and the contraction in surviving banks’ lending made it more difficult for the economy to channel funds to their best use. Thus, what began as a contraction in aggregate demand became a contraction in aggregate supply, which magnified adverse economic shocks and prolonged and deepened the Depression. The financial distress of firms and banks, and the decline in bank lending, were not only symptoms of the Depression, but means for magnifying the shocks that caused the Depression. 
Bernanke’s statistical evidence in support of this story is derived from time-series analysis at the national level.

Book
10 Apr 2003
TL;DR: In this book, the authors discuss the perils of political inclusion (moderation and bureaucratization) and the dynamics of democratization across the four countries studied.
Abstract: Contents: 1. States, Movements, and Democracy; 2. Patterns of Inclusion and Exclusion in the Four Countries; 3. Cooptive or Effective Inclusion? Movement Aims and State Imperatives; 4. The Perils of Political Inclusion: Moderation and Bureaucratization; 5. The Dynamics of Democratization; 6. Evaluating Movement Effectiveness and Strategy; 7. Ecological Modernization, Risk Society, and the Green State; Conclusion.

Journal ArticleDOI
01 Sep 2003-Polymer
TL;DR: In this article, the effect of electrospinning parameters on the morphology and fiber diameter of regenerated silk from Bombyx mori was studied; the effects of electric field and tip-to-collection-plate distance, at various silk concentrations in formic acid, on fiber uniformity, morphology, and diameter were measured.

Journal ArticleDOI
TL;DR: A degradable, porous polymer-bioactive glass composite was successfully developed, possessing improved mechanical properties and osteointegrative potential compared to degradable poly(lactic acid-glycolic acid) polymers alone.
Abstract: In the past decade, tissue engineering-based bone grafting has emerged as a viable alternative to biological and synthetic grafts. The biomaterial component is a critical determinant of the ultimate success of the tissue-engineered graft. Because no single existing material possesses all the necessary properties required in an ideal bone graft, our approach has been to develop a three-dimensional (3-D), porous composite of polylactide-co-glycolide (PLAGA) and 45S5 bioactive glass (BG) that is biodegradable, bioactive, and suitable as a scaffold for bone tissue engineering (PLAGA-BG composite). The objectives of this study were to examine the mechanical properties of a PLAGA-BG matrix, to evaluate the response of human osteoblast-like cells to the PLAGA-BG composite, and to evaluate the ability of the composite to form a surface calcium phosphate layer in vitro. Structural and mechanical properties of PLAGA-BG were measured, and the formation of a surface calcium phosphate layer was evaluated by surface analysis methods. The growth and differentiation of human osteoblast-like cells on PLAGA-BG were also examined. Our hypothesis was that the combination of PLAGA with BG would result in a biocompatible and bioactive composite, capable of supporting osteoblast adhesion, growth and differentiation, with mechanical properties superior to PLAGA alone. The addition of bioactive glass granules to the PLAGA matrix resulted in a structure with higher compressive modulus than PLAGA alone. Moreover, the PLAGA-BG composite was found to be a bioactive material, as it formed surface calcium phosphate deposits in a simulated body fluid (SBF), and in the presence of cells and serum proteins. The composite supported osteoblast-like morphology, stained positively for alkaline phosphatase, and supported higher levels of Type I collagen synthesis than tissue culture polystyrene controls. We have successfully developed a degradable, porous, polymer bioactive glass composite possessing improved mechanical properties and osteointegrative potential compared to degradable polymers of poly(lactic acid-glycolic acid) alone. Future work will focus on the optimization of the composite scaffold for bone tissue-engineering applications and the evaluation of the 3-D composite in an in vivo model.

Journal ArticleDOI
27 Aug 2003-JAMA
TL;DR: Sertraline treatment was generally well tolerated; based on a 40% decrease in the adjusted CDRS-R (Best Description of Child) total score at study end point, 69% of sertraline-treated patients versus 59% of placebo patients were considered responders.
Abstract: Context: The efficacy, safety, and tolerability of selective serotonin reuptake inhibitors (SSRIs) in the treatment of adults with major depressive disorder (MDD) are well established. Comparatively few data are available on the effects of SSRIs in depressed children and adolescents. Objective: To evaluate the efficacy and safety of sertraline compared with placebo in treatment of pediatric patients with MDD. Design and Setting: Two multicenter randomized, double-blind, placebo-controlled trials were conducted at 53 hospital, general practice, and academic centers in the United States, India, Canada, Costa Rica, and Mexico between December 1999 and May 2001 and were pooled a priori. Participants: Three hundred seventy-six children and adolescents aged 6 to 17 years with Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition–defined MDD of at least moderate severity. Intervention: Patients were randomly assigned to receive a flexible dosage (50-200 mg/d) of sertraline (n = 189) or matching placebo tablets (n = 187) for 10 weeks. Main Outcome Measures: Change from baseline in the Children's Depression Rating Scale–Revised (CDRS-R) Best Description of Child total score and reported adverse events. Results: Sertraline-treated patients experienced statistically significantly greater improvement than placebo patients on the CDRS-R total score (mean change at week 10, –30.24 vs –25.83, respectively; P = .001; overall mean change, –22.84 vs –20.19, respectively; P = .007). Based on a 40% decrease in the adjusted CDRS-R total score at study end point, 69% of sertraline-treated patients compared with 59% of placebo patients were considered responders (P = .05). Sertraline treatment was generally well tolerated. Seventeen sertraline-treated patients (9%) and 5 placebo patients (3%) prematurely discontinued the study because of adverse events. Adverse events that occurred in at least 5% of sertraline-treated patients and with an incidence of at least twice that in placebo patients included diarrhea, vomiting, anorexia, and agitation. Conclusion: The results of this pooled analysis demonstrate that sertraline is an effective and well-tolerated short-term treatment for children and adolescents with MDD.
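As a quick arithmetic check on the responder comparison reported above (69% of 189 vs 59% of 187, P = .05), a two-proportion z-test on counts reconstructed from the rounded percentages lands in the same range; this is a reader-side sanity check, not an analysis from the paper.

```python
import numpy as np
from statsmodels.stats.proportion import proportions_ztest

# Counts are reconstructed from the rounded percentages, so this is
# approximate: ~130 of 189 responders on sertraline vs ~110 of 187 on placebo.
successes = np.array([round(0.69 * 189), round(0.59 * 187)])
nobs = np.array([189, 187])
z, p = proportions_ztest(successes, nobs)
print(f"z = {z:.2f}, p = {p:.3f}")   # roughly p ~ 0.04-0.05, matching the paper
```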