
Showing papers by "University of Maryland, College Park published in 2001"


Journal ArticleDOI
TL;DR: A management construct cannot be used effectively by practitioners and researchers if a common agreement on its definition is lacking, as is the case with the term "supply chain management".
Abstract: A management construct cannot be used effectively by practitioners and researchers if a common agreement on its definition is lacking. Such is the case with the term “supply chain management”—so many definitions are used that there is little consensus on what it means. Thus, the purpose of this paper is to examine the existing research in an effort to understand the concept of “supply chain management.” Various definitions of SCM and “supply chain” are reviewed, categorized, and synthesized. Definitions of supporting constructs of SCM and a framework are then offered to establish a consistent means to conceptualize SCM. Antecedents and consequences of SCM are identified, and the boundaries of SCM in terms of business functions and organizations are proposed. A conceptual model and unified definition of SCM are then presented that indicate the nature, antecedents, and consequences of the phenomena.

4,451 citations


Journal ArticleDOI
TL;DR: The number of people exposed to environmental tobacco smoke in California seems to have decreased over the same time period, where exposure is determined by the reported time spent with a smoker.
Abstract: Because human activities impact the timing, location, and degree of pollutant exposure, they play a key role in explaining exposure variation. This fact has motivated the collection of activity pattern data for their specific use in exposure assessments. The largest of these recent efforts is the National Human Activity Pattern Survey (NHAPS), a 2-year probability-based telephone survey (n = 9386) of exposure-related human activities in the United States (U.S.) sponsored by the U.S. Environmental Protection Agency (EPA). The primary purpose of NHAPS was to provide comprehensive and current exposure information over broad geographical and temporal scales, particularly for use in probabilistic population exposure models. NHAPS was conducted on a virtually daily basis from late September 1992 through September 1994 by the University of Maryland's Survey Research Center using a computer-assisted telephone interview instrument (CATI) to collect 24-h retrospective diaries and answers to a number of personal and exposure-related questions from each respondent. The resulting diary records contain beginning and ending times for each distinct combination of location and activity occurring on the diary day (i.e., each microenvironment). Between 340 and 1713 respondents of all ages were interviewed in each of the 10 EPA regions across the 48 contiguous states. Interviews were completed in 63% of the households contacted. NHAPS respondents reported spending an average of 87% of their time in enclosed buildings and about 6% of their time in enclosed vehicles. These proportions are fairly constant across the various regions of the U.S. and Canada and for the California population between the late 1980s, when the California Air Resources Board (CARB) sponsored a state-wide activity pattern study, and the mid-1990s, when NHAPS was conducted. However, the number of people exposed to environmental tobacco smoke (ETS) in California seems to have decreased over the same time period, where exposure is determined by the reported time spent with a smoker. In both California and the entire nation, the most time spent exposed to ETS was reported to take place in residential locations.

3,400 citations


Book
19 Oct 2001
TL;DR: In this book, a unified approach to the microstructural characterization of heterogeneous materials and to microstructure-property connections is presented, including interrelations among 2D and 3D moduli.
Abstract: Motivation and overview * PART I Microstructural characterization * Microstructural descriptors * Statistical mechanics of particle systems * Unified approach * Monodisperse spheres * Polydisperse spheres * Anisotropic media * Cell and random-field models * Percolation and clustering * Some continuum percolation results * Local volume fraction fluctuation * Computer simulation and image analysis * PART II Microstructure property connections * Local and homogenized equations * Variational principles * Phase-interchange relations * Exact results * Single-inclusion solutions * Effective medium approximations * Cluster expansions * Exact contrast expansions * Rigorous bounds * Evaluation of bounds * Cross-property relations * Appendix A Equilibrium hard-disk program * Appendix B Interrelations among 2D and 3D moduli * References * Index

3,021 citations


Journal ArticleDOI
TL;DR: A novel approach for drawing group inferences using ICA of fMRI data is introduced, and its application to a simple visual paradigm that alternately stimulates the left or right visual field is presented.
Abstract: Independent component analysis (ICA) is a promising analysis method that is being increasingly applied to fMRI data. A principal advantage of this approach is its applicability to cognitive paradigms for which detailed models of brain activity are not available. Independent component analysis has been successfully utilized to analyze single-subject fMRI data sets, and an extension of this work would be to provide for group inferences. However, unlike univariate methods (e.g., regression analysis, Kolmogorov-Smirnov statistics), ICA does not naturally generalize to a method suitable for drawing inferences about groups of subjects. We introduce a novel approach for drawing group inferences using ICA of fMRI data, and present its application to a simple visual paradigm that alternately stimulates the left or right visual field. Our group ICA analysis revealed task-related components in left and right visual cortex, a transiently task-related component in bilateral occipital/parietal cortex, and a non-task-related component in bilateral visual association cortex. We address issues involved in the use of ICA as an fMRI analysis method such as: (1) How many components should be calculated? (2) How are these components to be combined across subjects? (3) How should the final results be thresholded and/or presented? We show that the methodology we present provides answers to these questions and lay out a process for making group inferences from fMRI data using independent component analysis.
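The temporal-concatenation approach outlined in the abstract can be illustrated in a few lines. The following is a minimal sketch, not the authors' implementation: it assumes each subject's data is already preprocessed into a (time x voxels) array, uses scikit-learn's PCA and FastICA, and treats voxels as samples so that the estimated components are spatially independent group maps.

```python
import numpy as np
from sklearn.decomposition import PCA, FastICA

def group_ica(subject_data, n_components=20):
    """subject_data: list of (time, voxels) arrays, one per subject."""
    # Stage 1: reduce each subject's time dimension with PCA.
    reduced = [PCA(n_components=n_components).fit_transform(X.T).T
               for X in subject_data]          # each: (n_components, voxels)
    # Stage 2: concatenate subjects along the reduced-time axis and
    # reduce the aggregate once more.
    stacked = np.vstack(reduced)               # (n_subjects * n_components, voxels)
    agg = PCA(n_components=n_components).fit_transform(stacked.T).T
    # Spatial ICA: with voxels as samples, the unmixed sources are
    # spatially independent component maps shared by the group.
    maps = FastICA(n_components=n_components, random_state=0).fit_transform(agg.T).T
    return maps                                # (n_components, voxels)
```

Subject-specific maps and time courses would then be back-reconstructed from the stage-2 projections, and the thresholding question raised in the abstract applies to those reconstructed maps.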

2,729 citations



Journal ArticleDOI
TL;DR: This study analyzes the capital structure choices of firms in 10 developing countries and provides evidence that these decisions are affected by the same variables as in developed countries, although persistent cross-country differences indicate that specific country factors are also at work.
Abstract: This study uses a new data set to assess whether capital structure theory is portable across countries with different institutional structures. We analyze capital structure choices of firms in 10 developing countries, and provide evidence that these decisions are affected by the same variables as in developed countries. However, there are persistent differences across countries, indicating that specific country factors are at work. Our findings suggest that although some of the insights from modern finance theory are portable across countries, much remains to be done to understand the impact of different institutional features on capital structure choices. OUR KNOWLEDGE OF CAPITAL STRUCTURES has mostly been derived from data from developed economies that have many institutional similarities. The purpose of this paper is to analyze the capital structure choices made by companies from developing countries that have different institutional structures. The prevailing view, for example Mayer (1990), seems to be that financial decisions in developing countries are somehow different. Mayer is the most recent researcher to use aggregate flow of funds data to differentiate between financial systems based on the "Anglo-Saxon" capital markets model and those based on a "Continental-German-Japanese" banking model. However, because Mayer's data comes from aggregate flow of funds data and not from individual firms, there is a problem with this approach. The differences between private, public, and foreign ownership structures have a profound influence on such data, but the differences may tell us little about how profit-oriented firms make their individual financial decisions. This paper uses a new firm-level database to examine the financial structures of firms in a sample of 10 developing countries. Thus, this study helps determine whether the stylized facts we have learned from studies of developed countries apply only to these markets, or whether they have more general applicability. Our focus is on answering three questions:

2,215 citations


Journal ArticleDOI
TL;DR: In this paper, the impact of acquisitions on the subsequent innovation performance of acquiring firms in the chemicals industry is examined, and the authors distinguish between technological acquisitions, acquisitions in which technology is a component of the acquired firm's assets, and non-technological acquisitions: acquisitions that do not involve a technological component.
Abstract: This paper examines the impact of acquisitions on the subsequent innovation performance of acquiring firms in the chemicals industry. We distinguish between technological acquisitions, acquisitions in which technology is a component of the acquired firm's assets, and nontechnological acquisitions: acquisitions that do not involve a technological component. We develop a framework relating acquisitions to firm innovation performance and develop a set of measures for quantifying the technological inputs a firm obtains through acquisitions. We find that within technological acquisitions absolute size of the acquired knowledge base enhances innovation performance, while relative size of the acquired knowledge base reduces innovation output. The relatedness of acquired and acquiring knowledge bases has a nonlinear impact on innovation output. Nontechnological acquisitions do not have a significant effect on subsequent innovation output. Copyright © 2001 John Wiley & Sons, Ltd.

2,147 citations


Journal ArticleDOI
TL;DR: In this article, a variety of composite quasar spectra using a homogeneous data set of over 2200 spectra from the Sloan Digital Sky Survey (SDSS) was created, and the median composite covers a restwavelength range from 800 to 8555 A and reaches a peak signal-to-noise ratio of over 300 per 1 A resolution element in the rest frame.
Abstract: We have created a variety of composite quasar spectra using a homogeneous data set of over 2200 spectra from the Sloan Digital Sky Survey (SDSS). The quasar sample spans a redshift range of 0.044 ≤ z ≤ 4.789 and an absolute r' magnitude range of -18.0 to -26.5. The input spectra cover an observed wavelength range of 3800–9200 Å at a resolution of 1800. The median composite covers a rest-wavelength range from 800 to 8555 Å and reaches a peak signal-to-noise ratio of over 300 per 1 Å resolution element in the rest frame. We have identified over 80 emission-line features in the spectrum. Emission-line shifts relative to nominal laboratory wavelengths are seen for many of the ionic species. Peak shifts of the broad permitted and semiforbidden lines are strongly correlated with ionization energy, as previously suggested, but we find that the narrow forbidden lines are also shifted by amounts that are strongly correlated with ionization energy. The magnitude of the forbidden line shifts is ≈100 km s^-1, compared with shifts of up to 550 km s^-1 for some of the permitted and semiforbidden lines. At wavelengths longer than the Lyα emission, the continuum of the geometric mean composite is well fitted by two power laws, with a break at ≈5000 Å. The frequency power-law index, α_ν, is -0.44 from ≈1300 to 5000 Å and -2.45 redward of ≈5000 Å. The abrupt change in slope can be accounted for partly by host-galaxy contamination at low redshift. Stellar absorption lines, including higher order Balmer lines, seen in the composites suggest that young or intermediate-age stars make a significant contribution to the light of the host galaxies. Most of the spectrum is populated by blended emission lines, especially in the range 1500–3500 Å, which can make the estimation of quasar continua highly uncertain unless large ranges in wavelength are observed. An electronic table of the median quasar template is available.
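For orientation, the continuum indices quoted above follow the standard power-law conventions (textbook definitions, not this paper's fitting code):

$$ f_\nu \propto \nu^{\alpha_\nu}, \qquad f_\lambda \propto \lambda^{-(\alpha_\nu + 2)}, $$

so the reported α_ν = -0.44 blueward of the ≈5000 Å break corresponds to f_λ ∝ λ^(-1.56).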

1,973 citations


Journal ArticleDOI
TL;DR: Long-term AERONET measurements of spectral aerosol optical depth, precipitable water, and derived Angstrom exponent were analyzed and compiled into an aerosol optical properties climatology.
Abstract: Long-term measurements by the AERONET program of spectral aerosol optical depth, precipitable water, and derived Angstrom exponent were analyzed and compiled into an aerosol optical properties climatology. Quality assured monthly means are presented and described for 9 primary sites and 21 additional multiyear sites with distinct aerosol regimes representing tropical biomass burning, boreal forests, midlatitude humid climates, midlatitude dry climates, oceanic sites, desert sites, and background sites. Seasonal trends for each of these nine sites are discussed and climatic averages presented.
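As a reminder, the derived Angstrom exponent α mentioned above follows from the usual power-law parameterization of spectral aerosol optical depth (the standard definition, not anything specific to this paper):

$$ \tau_a(\lambda) = \tau_a(\lambda_0)\left(\frac{\lambda}{\lambda_0}\right)^{-\alpha}, \qquad \alpha = -\frac{d\ln\tau_a}{d\ln\lambda}, $$

estimated in practice by fitting ln τ_a against ln λ across the instrument's wavelength channels.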

1,891 citations


Journal ArticleDOI
TL;DR: The Internet is a critically important research site for sociologists testing theories of technology diffusion and media effects, particularly because it is a medium uniquely capable of integrating modes of communication and forms of content.
Abstract: The Internet is a critically important research site for sociologists testing theories of technology diffusion and media effects, particularly because it is a medium uniquely capable of integrating modes of communication and forms of content. Current research tends to focus on the Internet's implications in five domains: 1) inequality (the “digital divide”); 2) community and social capital; 3) political participation; 4) organizations and other economic institutions; and 5) cultural participation and cultural diversity. A recurrent theme across domains is that the Internet tends to complement rather than displace existing media and patterns of behavior. Thus in each domain, utopian claims and dystopic warnings based on extrapolations from technical possibilities have given way to more nuanced and circumscribed understandings of how Internet use adapts to existing patterns, permits certain innovations, and reinforces particular kinds of change. Moreover, in each domain the ultimate social implications of t...

1,754 citations


Journal ArticleDOI
TL;DR: The authors drew upon strategic management theory, organizational behavior theory, organization theory, and entrepreneurship models to form an integrated model of venture growth that incorporates 17 concepts from these domains.
Abstract: We drew upon strategic management theory, organizational behavior theory, organization theory, and entrepreneurship models to form an integrated model of venture growth including 17 concepts from f...

Journal ArticleDOI
TL;DR: The results of a nation-wide, two-wave, longitudinal investigation of the factors driving personal computer (PC) adoption in American homes revealed that the decisions driving adoption and non-adoption were significantly different.
Abstract: While technology adoption in the workplace has been studied extensively, drivers of adoption in homes have been largely overlooked. This paper presents the results of a nation-wide, two-wave, longitudinal investigation of the factors driving personal computer (PC) adoption in American homes. The findings revealed that the decisions driving adoption and non-adoption were significantly different. Adopters were driven by utilitarian outcomes, hedonic outcomes (i.e., fun), and social outcomes (i.e., status) from adoption. Non-adopters, on the other hand, were influenced primarily by rapid changes in technology and the consequent fear of obsolescence. A second wave of data collection conducted six months after the initial survey indicated an asymmetrical relationship between intent and behavior, with those who did not intend to adopt a PC following more closely with their intent than those who intended to adopt one. We present important implications for research on adoption of technologies in homes and the workplace, and also discuss challenges facing the PC industry.

Journal ArticleDOI
01 Feb 2001-Nature
TL;DR: The potential weaknesses of limited character and taxon sampling are addressed in a comprehensive molecular phylogenetic analysis of 64 species sampled across all extant orders of placental mammals, providing new insight into the pattern of the early placental mammal radiation.
Abstract: The precise hierarchy of ancient divergence events that led to the present assemblage of modern placental mammals has been an area of controversy among morphologists, palaeontologists and molecular evolutionists. Here we address the potential weaknesses of limited character and taxon sampling in a comprehensive molecular phylogenetic analysis of 64 species sampled across all extant orders of placental mammals. We examined sequence variation in 18 homologous gene segments (including nearly 10,000 base pairs) that were selected for maximal phylogenetic informativeness in resolving the hierarchy of early mammalian divergence. Phylogenetic analyses identify four primary superordinal clades: (I) Afrotheria (elephants, manatees, hyraxes, tenrecs, aardvark and elephant shrews); (II) Xenarthra (sloths, anteaters and armadillos); (III) Glires (rodents and lagomorphs), as a sister taxon to primates, flying lemurs and tree shrews; and (IV) the remaining orders of placental mammals (cetaceans, artiodactyls, perissodactyls, carnivores, pangolins, bats and core insectivores). Our results provide new insight into the pattern of the early placental mammal radiation.

Journal ArticleDOI
14 Dec 2001-Science
TL;DR: Placental phylogeny is investigated using Bayesian and maximum-likelihood methods and a 16.4-kilobase molecular data set; crown-group Eutheria may have their most recent common ancestry in the Southern Hemisphere (Gondwana).
Abstract: Molecular phylogenetic studies have resolved placental mammals into four major groups, but have not established the full hierarchy of interordinal relationships, including the position of the root. The latter is critical for understanding the early biogeographic history of placentals. We investigated placental phylogeny using Bayesian and maximum-likelihood methods and a 16.4-kilobase molecular data set. Interordinal relationships are almost entirely resolved. The basal split is between Afrotheria and other placentals, at about 103 million years, and may be accounted for by the separation of South America and Africa in the Cretaceous. Crown-group Eutheria may have their most recent common ancestry in the Southern Hemisphere (Gondwana).

Book ChapterDOI
TL;DR: While standard methods will not eliminate the bias when measurement errors are not classical, one can often use them to obtain bounds on this bias, and it is argued that validation studies allow us to assess the magnitude of measurement errors in survey data, and the validity of the classical assumption.
Abstract: Economists have devoted increasing attention to the magnitude and consequences of measurement error in their data. Most discussions of measurement error are based on the “classical” assumption that errors in measuring a particular variable are uncorrelated with the true value of that variable, the true values of other variables in the model, and any errors in measuring those variables. In this survey, we focus on both the importance of measurement error in standard survey-based economic variables and on the validity of the classical assumption. We begin by summarizing the literature on biases due to measurement error, contrasting the classical assumption and the more general case. We then argue that, while standard methods will not eliminate the bias when measurement errors are not classical, one can often use them to obtain bounds on this bias. Validation studies allow us to assess the magnitude of measurement errors in survey data, and the validity of the classical assumption. In principle, they provide an alternative strategy for reducing or eliminating the bias due to measurement error. We then turn to the work of social psychologists and survey methodologists which identifies the conditions under which measurement error is likely to be important. While there are some important general findings on errors in measuring recall of discrete events, there is less direct guidance on continuous variables such as hourly wages or annual earnings. Finally, we attempt to summarize the validation literature on specific variables: annual earnings, hourly wages, transfer income, assets, hours worked, unemployment, job characteristics like industry, occupation, and union status, health status, health expenditures, and education. In addition to the magnitude of the errors, we also focus on the validity of the classical assumption. Quite often, we find evidence that errors are negatively correlated with true values. The usefulness of validation data in telling us about errors in survey measures can be enhanced if validation data is collected for a random portion of major surveys (rather than, as is usually the case, for a separate convenience sample for which validation data could be obtained relatively easily); if users are more actively involved in the design of validation studies; and if micro data from validation studies can be shared with researchers not involved in the original data collection.
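The "classical" benchmark the chapter builds on can be stated compactly. A standard textbook result, not a formula quoted from the chapter: if the regressor is observed with classical error, $x^* = x + u$ with $\mathrm{Cov}(x, u) = 0$, then OLS of $y$ on $x^*$ satisfies

$$ \operatorname{plim}\hat\beta = \beta \, \frac{\sigma_x^2}{\sigma_x^2 + \sigma_u^2} \equiv \beta\lambda, \qquad 0 < \lambda < 1, $$

so the coefficient is attenuated toward zero by the reliability ratio λ. When errors are instead negatively correlated with true values, as the validation evidence described above often finds, λ no longer brackets the bias in this simple way, which is why the survey emphasizes bounds rather than point corrections.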

Journal ArticleDOI
TL;DR: This analysis has focused on cation transporter gene families for which initial characterizations have been achieved for individual members, including potassium transporters and channels, sodium transporters, calcium antiporters, cyclic nucleotide-gated channels, cation diffusion facilitator proteins, natural resistance-associated macrophage proteins, and Zn-regulated transporter/Fe-regulated transporter-like proteins.
Abstract: Uptake and translocation of cationic nutrients play essential roles in physiological processes including plant growth, nutrition, signal transduction, and development. Approximately 5% of the Arabidopsis genome appears to encode membrane transport proteins. These proteins are classified in 46 unique families containing approximately 880 members. In addition, several hundred putative transporters have not yet been assigned to families. In this paper, we have analyzed the phylogenetic relationships of over 150 cation transport proteins. This analysis has focused on cation transporter gene families for which initial characterizations have been achieved for individual members, including potassium transporters and channels, sodium transporters, calcium antiporters, cyclic nucleotide-gated channels, cation diffusion facilitator proteins, natural resistance-associated macrophage proteins (NRAMP), and Zn-regulated transporter Fe-regulated transporter-like proteins. Phylogenetic trees of each family define the evolutionary relationships of the members to each other. These families contain numerous members, indicating diverse functions in vivo. Closely related isoforms and separate subfamilies exist within many of these gene families, indicating possible redundancies and specialized functions. To facilitate their further study, the PlantsT database (http://plantst.sdsc.edu) has been created that includes alignments of the analyzed cation transporters and their chromosomal locations.

Journal ArticleDOI
TL;DR: It is found that most people use few search terms, few modified queries, view few Web pages, and rarely use advanced search features, and the language of Web queries is distinctive.
Abstract: In studying actual Web searching by the public at large, we analyzed over one million Web queries by users of the Excite search engine. We found that most people use few search terms, few modified queries, view few Web pages, and rarely use advanced search features. A small number of search terms are used with high frequency, and a great many terms are unique; the language of Web queries is distinctive. Queries about recreation and entertainment rank highest. Findings are compared to data from two other large studies of Web queries. This study provides an insight into the public practices and choices in Web searching.

Journal ArticleDOI
TL;DR: In this article, the authors provide new insights into the economic sources of skewness and derive laws that decompose individual risk-neutral distributions into a systematic component and an idiosyncratic component.
Abstract: This article provides several new insights into the economic sources of skewness. First, we document the differential pricing of individual equity options versus the market index, and relate it to variations in return skewness. Second, we show how risk aversion introduces skewness in the risk-neutral density. Third, we derive laws that decompose individual return skewness into a systematic component and an idiosyncratic component. Empirical analysis of OEX options and 30 stocks demonstrates that individual risk-neutral distributions differ from that of the market index by being far less negatively skewed. This paper explains the presence and evolution of risk-neutral skewness over time and in the cross-section of individual stocks.
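One way to see the kind of decomposition law the abstract refers to is an illustrative one-factor special case (a sketch, not the paper's general result): if $R_i = a_i + b_i R_m + \epsilon_i$ with $\epsilon_i$ independent of $R_m$, third central moments add, so

$$ \mathrm{skew}(R_i) = b_i^3\,\frac{\sigma_m^3}{\sigma_i^3}\,\mathrm{skew}(R_m) + \frac{\sigma_\epsilon^3}{\sigma_i^3}\,\mathrm{skew}(\epsilon_i), \qquad \sigma_i^2 = b_i^2\sigma_m^2 + \sigma_\epsilon^2. $$

Because $b_i\sigma_m < \sigma_i$ whenever idiosyncratic risk is present, the systematic term damps the (negative) market skewness in individual names, consistent with the finding that individual risk-neutral distributions are far less negatively skewed than the index.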

Journal ArticleDOI
TL;DR: This article used a meta-analysis to examine data from 29 experimental studies and found that on average, respondents overstate their preferences by a factor of about 3 in hypothetical settings, and that the degree of over-revelation is influenced by the distinction between willingness-to-pay and willingness-to-accept, public versus private goods, and several elicitation methods.
Abstract: Preferences elicited in hypothetical settings have recently come under scrutiny, causing estimates from the contingent valuation method to be challenged due to perceived "hypothetical bias." Given that the received literature derives value estimates using heterogeneous experimental techniques, understanding the effects of important design parameters on the magnitude of hypothetical bias is invaluable. In this paper, we address this issue statistically by using a meta-analysis to examine data from 29 experimental studies. Our empirical findings suggest that on average subjects overstate their preferences by a factor of about 3 in hypothetical settings, and that the degree of over-revelation is influenced by the distinction between willingness-to-pay and willingness-to-accept, public versus private goods, and several elicitation methods.

Journal ArticleDOI
TL;DR: The integration of agent technology and ontologies could significantly affect the use of Web services and the ability to extend programs to perform tasks for users more efficiently and with less human intervention.
Abstract: Many challenges of bringing communicating multi-agent systems to the World Wide Web require ontologies. The integration of agent technology and ontologies could significantly affect the use of Web services and the ability to extend programs to perform tasks for users more efficiently and with less human intervention.

Journal ArticleDOI
TL;DR: It is suggested that promotion cues, relative to prevention cues, produce a riskier response bias and bolster memory search for novel responses and individual differences in regulatory focus influence creative problem solving in a manner analogous to that of incidental promotion and prevention cues.
Abstract: This study tested whether cues associated with promotion and prevention regulatory foci influence creativity. The authors predicted that the "risky," explorative processing style elicited by promotion cues, relative to the risk-averse, perseverant processing style elicited by prevention cues, would facilitate creative thought. These predictions were supported by two experiments in which promotion cues bolstered both creative insight (Experiment 1) and creative generation (Experiment 2) relative to prevention cues. Experiments 3 and 4 provided evidence for the process account of these findings, suggesting that promotion cues, relative to prevention cues, produce a riskier response bias (Experiment 3) and bolster memory search for novel responses (Experiment 4). A final experiment provided evidence that individual differences in regulatory focus influence creative problem solving in a manner analogous to that of incidental promotion and prevention cues.

Journal ArticleDOI
TL;DR: In this paper, the authors applied an approach that decouples surface reflectance spectra from the real-time radiative transfer simulations to calculate total shortwave, visible, and near-infrared broadband albedos (total, direct, and diffuse) for several narrowband sensors.

Journal ArticleDOI
TL;DR: This exposition on the Analytic Hierarchy Process (AHP) discusses the three primary functions of the AHP: structuring complexity, measurement on a ratio scale, and synthesis, as well as the principles and axioms underlying these functions.
Abstract: This exposition on the Analytic Hierarchy Process (AHP) has the following objectives: (1) to discuss why AHP is a general methodology for a wide variety of decision and other applications, (2) to present brief descriptions of successful applications of the AHP, and (3) to elaborate on academic discourses relevant to the efficacy and applicability of the AHP vis-a-vis competing methodologies. We discuss the three primary functions of the AHP: structuring complexity, measurement on a ratio scale, and synthesis, as well as the principles and axioms underlying these functions. Two detailed applications are presented in a linked document at http://mdm.gwu.edu/FormanGass.pdf.
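As a concrete illustration of "measurement on a ratio scale," the standard AHP priority computation takes a reciprocal pairwise-comparison matrix and extracts its principal eigenvector. A minimal sketch of that textbook procedure (not code from the paper), including Saaty's consistency ratio:

```python
import numpy as np

# Saaty's random consistency indices for matrix sizes 1..7.
RI = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24, 7: 1.32}

def ahp_weights(A):
    """A[i, j]: how strongly criterion i is preferred to j, with A[j, i] = 1/A[i, j]."""
    eigvals, eigvecs = np.linalg.eig(A)
    i = np.argmax(eigvals.real)
    w = np.abs(eigvecs[:, i].real)
    w /= w.sum()                                # normalized priority weights
    n = A.shape[0]
    ci = (eigvals[i].real - n) / (n - 1)        # consistency index
    return w, ci / RI[n]                        # weights and consistency ratio

# Three criteria compared on the 1-9 ratio scale; CR < 0.1 is conventionally acceptable.
A = np.array([[1, 3, 5], [1/3, 1, 2], [1/5, 1/2, 1]])
weights, cr = ahp_weights(A)
```

Synthesis then proceeds hierarchically: alternative scores under each criterion are weighted by these priorities and summed up the tree.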

Journal ArticleDOI
TL;DR: This work estimates lead, copper, cadmium, and zinc loadings from various sources in a developed area utilizing information available in the literature, in conjunction with controlled experimental and sampling investigations.

Proceedings ArticleDOI
01 May 2001
TL;DR: An elegant and remarkably simple algorithm is analyzed that is optimal in a much stronger sense than FA, and is essentially optimal, not just for some monotone aggregation functions, but for all of them, and not just in a high-probability sense, but over every database.
Abstract: Assume that each object in a database has m grades, or scores, one for each of m attributes. For example, an object can have a color grade, that tells how red it is, and a shape grade, that tells how round it is. For each attribute, there is a sorted list, which lists each object and its grade under that attribute, sorted by grade (highest grade first). There is some monotone aggregation function, or combining rule, such as min or average, that combines the individual grades to obtain an overall grade. To determine objects that have the best overall grades, the naive algorithm must access every object in the database, to find its grade under each attribute. Fagin has given an algorithm ("Fagin's Algorithm", or FA) that is much more efficient. For some distributions on grades, and for some monotone aggregation functions, FA is optimal in a high-probability sense. We analyze an elegant and remarkably simple algorithm ("the threshold algorithm", or TA) that is optimal in a much stronger sense than FA. We show that TA is essentially optimal, not just for some monotone aggregation functions, but for all of them, and not just in a high-probability sense, but over every database. Unlike FA, which requires large buffers (whose size may grow unboundedly as the database size grows), TA requires only a small, constant-size buffer. We distinguish two types of access: sorted access (where the middleware system obtains the grade of an object in some sorted list by proceeding through the list sequentially from the top), and random access (where the middleware system requests the grade of an object in a list, and obtains it in one step). We consider the scenarios where random access is either impossible, or expensive relative to sorted access, and provide algorithms that are essentially optimal for these cases as well.
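A minimal sketch of TA as described above, assuming small in-memory lists (illustrative code, not the paper's formal presentation). For clarity this version remembers every object it has seen; TA proper needs only a constant-size buffer holding the current top k.

```python
import heapq

def threshold_algorithm(lists, agg, k):
    """lists: {attr: [(obj, grade), ...]}, each sorted by grade, highest first,
    with every object present in every list; agg: monotone combining rule."""
    by_attr = {a: dict(l) for a, l in lists.items()}    # random-access index
    overall = {}                                        # obj -> combined grade
    for depth in range(min(len(l) for l in lists.values())):
        row = []
        for attr, lst in lists.items():
            obj, grade = lst[depth]                     # sorted access
            row.append(grade)
            if obj not in overall:                      # random access to the rest
                overall[obj] = agg([by_attr[a][obj] for a in by_attr])
        threshold = agg(row)     # best possible overall grade of any unseen object
        top_k = heapq.nlargest(k, overall.values())
        if len(top_k) == k and top_k[-1] >= threshold:  # TA stopping rule
            break
    return heapq.nlargest(k, overall.items(), key=lambda kv: kv[1])

# Example: two attributes, min as the aggregation function, top-1 object.
lists = {"color": [("a", 0.9), ("b", 0.8), ("c", 0.1)],
         "shape": [("b", 0.9), ("a", 0.6), ("c", 0.2)]}
print(threshold_algorithm(lists, min, 1))   # [('b', 0.8)], stopping at depth 2
```

By monotonicity, no object not yet seen under sorted access can beat the aggregate of the grades at the current depth, which is exactly why the early stop is safe.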

Journal ArticleDOI
TL;DR: In this paper, the authors use 3D numerical magnetohydrodynamic simulations to follow the evolution of cold, turbulent, gaseous systems with parameters chosen to represent conditions in giant molecular clouds (GMCs).
Abstract: We use three-dimensional (3D) numerical magnetohydrodynamic simulations to follow the evolution of cold, turbulent, gaseous systems with parameters chosen to represent conditions in giant molecular clouds (GMCs). We present results of three model cloud simulations in which the mean magnetic field strength is varied (B0 = 1.4-14 μG for GMC parameters), but an identical initial turbulent velocity field is introduced. We describe the energy evolution, showing that (1) turbulence decays rapidly, with the turbulent energy reduced by a factor of 2 after 0.4-0.8 flow crossing times (~2-4 Myr for GMC parameters), and (2) the magnetically supercritical cloud models gravitationally collapse after ≈6 Myr, while the magnetically subcritical cloud does not collapse. We compare density, velocity, and magnetic field structure in three sets of model snapshots with matched values of the Mach number ≈ 9, 7, 5. We show that the distributions of volume density and column density are both approximately log-normal, with mean mass-weighted volume density a factor 3-6 times the unperturbed value, but mean mass-weighted column density only a factor 1.1-1.4 times the unperturbed value. We introduce a spatial binning algorithm to investigate the dependence of kinetic quantities on spatial scale for regions of column density contrast (ROCs) on the plane of the sky. We show that the average velocity dispersion for the distribution of ROCs is only weakly correlated with scale, similar to mean size-line width distributions for clumps within GMCs. We find that ROCs are often superpositions of spatially unconnected regions that cannot easily be separated using velocity information; we argue that the same difficulty may affect observed GMC clumps. We suggest that it may be possible to deduce the mean 3D size-line width relation using the lower envelope of the 2D size-line width distribution. We analyze magnetic field structure and show that in the high-density regime n ≳ 10^3 cm^-3, total magnetic field strengths increase with density with logarithmic slope ~1/3-2/3. We find that mean line-of-sight magnetic field strengths may vary widely across a projected cloud and are not positively correlated with column density. We compute simulated interstellar polarization maps at varying observer orientations and determine that the Chandrasekhar-Fermi formula multiplied by a factor ~0.5 yields a good estimate of the plane-of-sky magnetic field strength, provided the dispersion in polarization angles is ≲25°.
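For reference, the Chandrasekhar-Fermi estimate tested at the end of the abstract has the standard form (the ~0.5 prefactor is the calibration reported above):

$$ B_{\mathrm{pos}} \simeq \xi\,\sqrt{4\pi\bar\rho}\,\frac{\delta v_{\mathrm{los}}}{\delta\phi}, \qquad \xi \approx 0.5, $$

where $\bar\rho$ is the mean density, $\delta v_{\mathrm{los}}$ the line-of-sight velocity dispersion, and $\delta\phi$ the dispersion of polarization angles in radians.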

Journal ArticleDOI
TL;DR: The authors found that bullying and victimization are prevalent problems in adolescent peer relationships, with 30.9% of students reporting being victimized three or more times in the past year and 7.4% reporting bullying three or more times over the past year.
Abstract: Bullying and victimization are prevalent problems in the area of adolescent peer relationships. Middle school students (N = 4,263) in one Maryland school district completed surveys covering a range of problem behaviors and psychosocial variables. Overall, 30.9% of the students reported being victimized three or more times in the past year and 7.4% reported bullying three or more times over the past year. More than one half of the bullies also reported being victimized. Those bully/victims were found to score less favorably than either bullies or victims on all the measured psychosocial and behavioral variables. Results of a discriminant function analysis demonstrated that a group of psychosocial and behavioral predictors—including problem behaviors, attitudes toward deviance, peer influences, depressive symptoms, school-related functioning, and parenting—formed a linear separation between the comparison group (never bullied or victimized), the victim group, the bully group, and the bully/victim group.

Journal ArticleDOI
TL;DR: In this article, the authors presented photometric observations of an apparent Type Ia supernova (SN Ia) at a redshift of 1.7, the farthest SN observed to date.
Abstract: We present photometric observations of an apparent Type Ia supernova (SN Ia) at a redshift of ~1.7, the farthest SN observed to date. The supernova, SN 1997ff, was discovered in a repeat observation by the Hubble Space Telescope (HST) of the Hubble Deep Field-North (HDF-N) and serendipitously monitored with NICMOS on HST throughout the Thompson et al. Guaranteed-Time Observer (GTO) campaign. The SN type can be determined from the host galaxy type: an evolved, red elliptical lacking enough recent star formation to provide a significant population of core-collapse supernovae. The classification is further supported by diagnostics available from the observed colors and temporal behavior of the SN, both of which match a typical SN Ia. The photometric record of the SN includes a dozen flux measurements in the I, J, and H bands spanning 35 days in the observed frame. The redshift derived from the SN photometry, z = 1.7 ± 0.1, is in excellent agreement with the redshift estimate of z = 1.65 ± 0.15 derived from the U300B450V606I814J110J125H160H165Ks photometry of the galaxy. Optical and near-infrared spectra of the host provide a very tentative spectroscopic redshift of 1.755. Fits to observations of the SN provide constraints for the redshift-distance relation of SNe Ia and a powerful test of the current accelerating universe hypothesis. The apparent SN brightness is consistent with that expected in the decelerating phase of the preferred cosmological model, ΩM ≈ 1/3, ΩΛ ≈ 2/3. It is inconsistent with gray dust or simple luminosity evolution, candidate astrophysical effects that could mimic previous evidence for an accelerating universe from SNe Ia at z ≈ 0.5. We consider several sources of potential systematic error, including gravitational lensing, supernova misclassification, sample selection bias, and luminosity calibration errors. Currently, none of these effects alone appears likely to challenge our conclusions. Additional SNe Ia at z > 1 will be required to test more exotic alternatives to the accelerating universe hypothesis and to probe the nature of dark energy.
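The brightness test described above rests on the standard flat-universe magnitude-redshift relation (textbook cosmography, not a formula quoted from the paper):

$$ m - M = 5\log_{10}\frac{d_L}{10\,\mathrm{pc}}, \qquad d_L = (1+z)\,\frac{c}{H_0}\int_0^z \frac{dz'}{\sqrt{\Omega_M(1+z')^3 + \Omega_\Lambda}}. $$

At z ≈ 1.7 the matter term dominates the integrand, so the model universe was still decelerating at that epoch; a SN Ia there appears relatively brighter than uniform gray dust or simple luminosity evolution would allow, which is what gives the observation its discriminating power.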

Journal ArticleDOI
Y. Fukuda1, M. Ishitsuka1, Yoshitaka Itow1, Takaaki Kajita1, J. Kameda1, K. Kaneyuki1, K. Kobayashi1, Yusuke Koshio1, M. Miura1, S. Moriyama1, Masayuki Nakahata1, S. Nakayama1, A. Okada1, N. Sakurai1, Masato Shiozawa1, Yoshihiro Suzuki1, H. Takeuchi1, Y. Takeuchi1, T. Toshito1, Y. Totsuka1, Shoichi Yamada1, Shantanu Desai2, M. Earl2, E. Kearns2, M. D. Messier2, Kate Scholberg2, Kate Scholberg3, J. L. Stone2, L. R. Sulak2, C. W. Walter2, M. Goldhaber4, T. Barszczak5, David William Casper5, W. Gajewski5, W. R. Kropp5, S. Mine5, D. W. Liu5, L. R. Price5, M. B. Smy5, Henry W. Sobel5, M. R. Vagins5, Todd Haines5, D. Kielczewska5, K. S. Ganezer6, W. E. Keig6, R. W. Ellsworth7, S. Tasaka8, A. Kibayashi, John G. Learned, S. Matsuno, D. Takemori, Y. Hayato, T. Ishii, Takashi Kobayashi, Koji Nakamura, Y. Obayashi, Y. Oyama, A. Sakai, Makoto Sakuda, M. Kohama9, Atsumu Suzuki9, T. Inagaki10, Tsuyoshi Nakaya10, K. Nishikawa10, E. Blaufuss11, S. Dazeley11, R. Svoboda11, J. A. Goodman12, G. Guillian12, G. W. Sullivan12, D. Turcan12, Alec Habig13, J. Hill14, C. K. Jung14, K. Martens15, K. Martens14, Magdalena Malek14, C. Mauger14, C. McGrew14, E. Sharkey14, B. Viren14, C. Yanagisawa14, C. Mitsuda16, K. Miyano16, C. Saji16, T. Shibata16, Y. Kajiyama17, Y. Nagashima17, K. Nitta17, M. Takita17, Minoru Yoshida17, Heekyong Kim18, Soo-Bong Kim18, J. Yoo18, H. Okazawa, T. Ishizuka19, M. Etoh20, Y. Gando20, Takehisa Hasegawa20, Kunio Inoue20, K. Ishihara20, Tomoyuki Maruyama20, J. Shirai20, A. Suzuki20, Masatoshi Koshiba1, Y. Hatakeyama21, Y. Ichikawa21, M. Koike21, Kyoshi Nishijima21, H. Fujiyasu22, Hirokazu Ishino22, M. Morii22, Y. Watanabe22, U. Golebiewska23, S. C. Boyd24, A. L. Stachyra24, R. J. Wilkes24, B. Lee 
TL;DR: Solar neutrino measurements from 1258 days of data from the Super-Kamiokande detector are presented and the recoil electron energy spectrum is consistent with no spectral distortion.
Abstract: Solar neutrino measurements from 1258 days of data from the Super-Kamiokande detector are presented. The measurements are based on recoil electrons in the energy range 5.0–20.0 MeV. The measured solar neutrino flux is 2.32 ± 0.03 (stat) +0.08/−0.07 (syst) × 10^6 cm^-2 s^-1, which is 45.1 ± 0.5 (stat) +1.6/−1.4 (syst)% of that predicted by the BP2000 SSM. The day vs. night flux asymmetry (Φ_n − Φ_d)/Φ_average is 0.033 ± 0.022 (stat) +0.013/−0.012 (syst). The recoil electron energy spectrum is consistent with no spectral distortion. For the hep neutrino flux, we set a 90% C.L. upper limit of 40 × 10^3 cm^-2 s^-1, which is 4.3 times the BP2000 SSM prediction.

Journal ArticleDOI
TL;DR: In this article, a generally covariant model of the aether is studied, in which local Lorentz invariance is broken by a dynamical unit timelike vector field.
Abstract: We study a generally covariant model in which local Lorentz invariance is broken by a dynamical unit timelike vector field u^a, the "aether." Such a model makes it possible to study the gravitational and cosmological consequences of preferred frame effects, such as "variable speed of light" or high frequency dispersion, while preserving a generally covariant metric theory of gravity. In this paper we restrict attention to an action for an effective theory of the aether which involves only the antisymmetrized derivative ∇_[a u_b]. Without matter this theory is equivalent to a sector of the Einstein-Maxwell-charged dust system. The aether has two massless transverse excitations, and the solutions of the model include all vacuum solutions of general relativity (as well as other solutions). However, the aether generally develops gradient singularities which signal a breakdown of this effective theory. Including the symmetrized derivative in the action for the aether field may cure this problem.
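An action of the type the abstract describes, in which only the antisymmetrized derivative of the aether enters and a Lagrange multiplier enforces the unit-norm constraint, can be written schematically as follows (a hedged reconstruction from the abstract, with an illustrative coupling c rather than the paper's exact normalization):

$$ S = \frac{1}{16\pi G}\int d^4x\,\sqrt{-g}\,\Big[ R - c\,F_{ab}F^{ab} + \lambda\,(g_{ab}u^a u^b + 1) \Big], \qquad F_{ab} = 2\nabla_{[a} u_{b]}. $$

The Maxwell-like kinetic term is what underlies the stated equivalence to a sector of the Einstein-Maxwell-charged dust system and the two massless transverse excitations.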