
Showing papers from "University of Warsaw" published in 2010


Journal ArticleDOI
TL;DR: The Boruta package provides a convenient interface to the Boruta algorithm, a novel feature selection method for finding all relevant variables.
Abstract: This article describes the R package Boruta, implementing a novel feature selection algorithm for finding all relevant variables. The algorithm is designed as a wrapper around a Random Forest classification algorithm. It iteratively removes the features which are shown by a statistical test to be less relevant than random probes. The Boruta package provides a convenient interface to the algorithm. A short description of the algorithm and examples of its application are presented.

2,832 citations
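The shadow-attribute idea behind Boruta (comparing each real feature's random-forest importance with that of randomly permuted "probe" copies, as the abstract above describes) can be illustrated in a few lines of Python. This is only a one-shot sketch of the idea, not the R package and not its full iterative statistical test; the synthetic dataset and the selection threshold are assumptions made for the example.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Toy data: the first 5 features are informative, the other 15 are noise.
X, y = make_classification(n_samples=500, n_features=20, n_informative=5,
                           n_redundant=0, shuffle=False, random_state=0)

# Shadow features: column-wise shuffled copies of the originals, so any
# importance they receive is due to chance alone.
X_shadow = np.apply_along_axis(rng.permutation, 0, X)
rf = RandomForestClassifier(n_estimators=500, random_state=0)
rf.fit(np.hstack([X, X_shadow]), y)

imp = rf.feature_importances_
orig_imp, shadow_imp = imp[:X.shape[1]], imp[X.shape[1]:]

# Keep features that beat the best shadow feature (Boruta repeats this
# comparison over many iterations and applies a statistical test).
selected = np.where(orig_imp > shadow_imp.max())[0]
print("tentatively relevant features:", selected)
```

With shuffle=False the informative columns are the first five, so the printed indices should mostly fall in 0-4.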


Journal ArticleDOI
TL;DR: In this paper, the authors consider the Standard Model as an effective low-energy theory, in which higher-dimensional interaction terms appear in the Lagrangian, and perform the classification of the dimension-six operators once again from the outset.
Abstract: When the Standard Model is considered as an effective low-energy theory, higher dimensional interaction terms appear in the Lagrangian. Dimension-six terms have been enumerated in the classical article by Buchmuller and Wyler [3]. Although the redundancy of some of those operators has already been noted in the literature, no updated complete list has been published to date. Here we perform their classification once again from the outset. Assuming baryon number conservation, we find 15 + 19 + 25 = 59 independent operators (barring flavour structure and Hermitian conjugations), as compared to 16 + 35 + 29 = 80 in ref. [3]. The three summed numbers refer to operators containing 0, 2 and 4 fermion fields. If the assumption of baryon number conservation is relaxed, 5 new operators arise in the four-fermion sector.

2,090 citations
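For context, the operator counting described above sits inside the usual effective-field-theory expansion of the Lagrangian in inverse powers of a new-physics scale Λ; in generic notation (mine, not quoted from the paper) it reads:

```latex
\mathcal{L}_{\text{eff}}
  = \mathcal{L}_{\text{SM}}
  + \frac{1}{\Lambda}\sum_{k} C_k^{(5)} O_k^{(5)}
  + \frac{1}{\Lambda^{2}}\sum_{k} C_k^{(6)} O_k^{(6)}
  + \mathcal{O}\!\left(\Lambda^{-3}\right)
```

The 59 operators counted in the abstract (under the assumption of baryon number conservation) are the independent O_k^{(6)} in this expansion, with the C_k^{(6)} their dimensionless Wilson coefficients.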


Journal ArticleDOI
M. Punturo, M. R. Abernathy1, Fausto Acernese2, Benjamin William Allen3, Nils Andersson4, K. G. Arun5, Fabrizio Barone2, B. Barr1, M. Barsuglia6, M. G. Beker7, N. Beveridge1, S. Birindelli8, Suvadeep Bose9, L. Bosi, S. Braccini, C. Bradaschia, Tomasz Bulik10, Enrico Calloni, G. Cella, E. Chassande Mottin6, Simon Chelkowski11, Andrea Chincarini, John A. Clark12, E. Coccia13, C. N. Colacino, J. Colas, A. Cumming1, L. Cunningham1, E. Cuoco, S. L. Danilishin14, Karsten Danzmann3, G. De Luca, R. De Salvo15, T. Dent12, R. De Rosa, L. Di Fiore, A. Di Virgilio, M. Doets7, V. Fafone13, Paolo Falferi16, R. Flaminio17, J. Franc17, F. Frasconi, Andreas Freise11, Paul Fulda11, Jonathan R. Gair18, G. Gemme, A. Gennai11, A. Giazotto, Kostas Glampedakis19, M. Granata6, Hartmut Grote3, G. M. Guidi20, G. D. Hammond1, Mark Hannam21, Jan Harms22, D. Heinert23, Martin Hendry1, Ik Siong Heng1, Eric Hennes7, Stefan Hild1, J. H. Hough, Sascha Husa24, S. H. Huttner1, Gareth Jones12, F. Y. Khalili14, Keiko Kokeyama11, Kostas D. Kokkotas19, Badri Krishnan24, M. Lorenzini, Harald Lück3, Ettore Majorana, Ilya Mandel25, Vuk Mandic22, I. W. Martin1, C. Michel17, Y. Minenkov13, N. Morgado17, Simona Mosca, B. Mours26, H. Müller–Ebhardt3, P. G. Murray1, Ronny Nawrodt1, John Nelson1, Richard O'Shaughnessy27, Christian D. Ott15, C. Palomba, A. Paoli, G. Parguez, A. Pasqualetti, R. Passaquieti28, D. Passuello, L. Pinard17, Rosa Poggiani28, P. Popolizio, Mirko Prato, P. Puppo, D. S. Rabeling7, P. Rapagnani29, Jocelyn Read24, Tania Regimbau8, H. Rehbein3, Stuart Reid1, Luciano Rezzolla24, F. Ricci29, F. Richard, A. Rocchi, Sheila Rowan1, Albrecht Rüdiger3, Benoit Sassolas17, Bangalore Suryanarayana Sathyaprakash12, Roman Schnabel3, C. Schwarz, Paul Seidel, Alicia M. Sintes24, Kentaro Somiya15, Fiona C. Speirits1, Kenneth A. Strain1, S. E. Strigin14, P. J. Sutton12, S. P. Tarabrin14, Andre Thüring3, J. F. J. van den Brand7, C. van Leewen7, M. van Veggel1, C. Van Den Broeck12, Alberto Vecchio11, John Veitch11, F. Vetrano20, A. Viceré20, Sergey P. Vyatchanin14, Benno Willke3, Graham Woan1, P. Wolfango30, Kazuhiro Yamamoto3 
TL;DR: The Einstein Telescope (ET), a proposed third-generation ground-based gravitational-wave observatory, is currently in its design study phase; this paper describes the progress of the ET project.
Abstract: Advanced gravitational wave interferometers, currently under realization, will soon permit the detection of gravitational waves from astronomical sources. To open the era of precision gravitational wave astronomy, a further substantial improvement in sensitivity is required. The future space-based Laser Interferometer Space Antenna and the third-generation ground-based observatory Einstein Telescope (ET) promise to achieve the required sensitivity improvements in their respective frequency ranges. The vastly improved sensitivity of the third generation of gravitational wave observatories could permit detailed measurements of the sources' physical parameters and could complement, in a multi-messenger approach, the observation of signals emitted by cosmological sources obtained through other kinds of telescopes. This paper describes the progress of the ET project which is currently in its design study phase.

1,497 citations


Journal ArticleDOI
J. Abadie1, B. P. Abbott1, R. Abbott1, M. R. Abernathy2, +719 more authors; 79 institutions
TL;DR: In this paper, the authors present an up-to-date summary of the rates for all types of compact binary coalescence sources detectable by the initial and advanced versions of the ground-based gravitational-wave detectors LIGO and Virgo.
Abstract: We present an up-to-date, comprehensive summary of the rates for all types of compact binary coalescence sources detectable by the initial and advanced versions of the ground-based gravitational-wave detectors LIGO and Virgo. Astrophysical estimates for compact-binary coalescence rates depend on a number of assumptions and unknown model parameters and are still uncertain. The most confident among these estimates are the rate predictions for coalescing binary neutron stars which are based on extrapolations from observed binary pulsars in our galaxy. These yield a likely coalescence rate of 100 Myr^-1 per Milky Way Equivalent Galaxy (MWEG), although the rate could plausibly range from 1 Myr^-1 MWEG^-1 to 1000 Myr^-1 MWEG^-1 (Kalogera et al 2004 Astrophys. J. 601 L179; Kalogera et al 2004 Astrophys. J. 614 L137 (erratum)). We convert coalescence rates into detection rates based on data from the LIGO S5 and Virgo VSR2 science runs and projected sensitivities for our advanced detectors. Using the detector sensitivities derived from these data, we find a likely detection rate of 0.02 per year for Initial LIGO–Virgo interferometers, with a plausible range between 2 × 10^-4 and 0.2 per year. The likely binary neutron–star detection rate for the Advanced LIGO–Virgo network increases to 40 events per year, with a range between 0.4 and 400 per year.

1,011 citations
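As a rough illustration of the kind of conversion described above (from a coalescence rate per galaxy to a detection rate), one can multiply the per-MWEG rate by the number of Milky Way equivalent galaxies within the detector's reach. The sketch below is not the paper's calculation: the effective binary-neutron-star range (~200 Mpc) and the MWEG number density (~0.0116 Mpc^-3) are assumed, illustrative values.

```python
import math

RATE_PER_MWEG = 100e-6   # likely BNS rate from above: 100 per Myr per MWEG, in yr^-1
MWEG_DENSITY = 0.0116    # assumed MWEG number density, Mpc^-3
RANGE_MPC = 200.0        # assumed effective BNS range of an advanced detector, Mpc

n_mweg = (4.0 / 3.0) * math.pi * RANGE_MPC**3 * MWEG_DENSITY
detections_per_year = RATE_PER_MWEG * n_mweg
print(f"~{n_mweg:.2e} MWEGs in range -> ~{detections_per_year:.0f} detections per year")
# With these assumed inputs the estimate lands at a few tens per year,
# the same order of magnitude as the ~40 per year quoted in the abstract.
```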


Journal ArticleDOI
TL;DR: The results strongly supported earlier findings on parenting stress in parents of children with autism and shed interesting light on the relationship between coping styles and parental stress.
Abstract: Background: The study examined the profile of stress in mothers and fathers of preschool children with autism, Down syndrome and typically developing children. A further aim was to assess the association between parenting stress and coping style. Methods: A total of 162 parents were examined using Holroyd's 66-item short form of the Questionnaire of Resources and Stress for Families with Chronically Ill or Handicapped Members and the Coping Inventory for Stressful Situations by Endler and Parker. Results and Conclusions: The results indicated a higher level of stress in parents of children with autism. Additionally, an interaction effect was revealed between child diagnostic group and parent's gender for two scales of parenting stress: dependency and management and limits of family opportunities. Mothers of children with autism scored higher than fathers in parental stress; no such differences were found in the group of parents of children with Down syndrome and typically developing children. It was also found that parents of children with autism differed from parents of typically developing children in social diversion coping. Emotion-oriented coping was the predictor for parental stress in the samples of parents of children with autism and Down syndrome, and task-oriented coping was the predictor of parental stress in the sample of parents of typically developing children. The results strongly supported earlier findings on parenting stress in parents of children with autism. They also shed interesting light on the relationship between coping styles and parental stress.

722 citations


Journal ArticleDOI
TL;DR: Only a pre-print notice is available for this record: the pre-print version of the published article can be accessed from the link below (Copyright @ 2010 Springer Verlag).
Abstract: This is the pre-print version of the published article, which can be accessed from the link below - Copyright @ 2010 Springer Verlag

717 citations


Journal ArticleDOI
F. D. Aaron1, Halina Abramowicz2, I. Abt3, Leszek Adamczyk4, +538 more authors; 69 institutions
TL;DR: In this article, a combination of the inclusive deep inelastic cross sections measured by the H1 and ZEUS Collaborations in neutral and charged current unpolarised e^±p scattering at HERA during the period 1994-2000 is presented.
Abstract: A combination is presented of the inclusive deep inelastic cross sections measured by the H1 and ZEUS Collaborations in neutral and charged current unpolarised e^±p scattering at HERA during the period 1994-2000. The data span six orders of magnitude in negative four-momentum-transfer squared, Q^2, and in Bjorken x. The combination method used takes the correlations of systematic uncertainties into account, resulting in an improved accuracy. The combined data are the sole input in a NLO QCD analysis which determines a new set of parton distributions, HERAPDF1.0, with small experimental uncertainties. This set includes an estimate of the model and parametrisation uncertainties of the fit result.

624 citations


Journal ArticleDOI
TL;DR: In this paper, the authors present the spectrum of compact object masses: neutron stars and black holes (BHs) that originate from single stars in different environments and calculate the dependence of maximum BH mass on metallicity and on some specific wind mass loss rates.
Abstract: We present the spectrum of compact object masses: neutron stars and black holes (BHs) that originate from single stars in different environments. In particular, we calculate the dependence of maximum BH mass on metallicity and on some specific wind mass loss rates (e.g., Hurley et al. and Vink et al.). Our calculations show that the highest mass BHs observed in the Galaxy, M_bh ~ 15 M_☉ in the high metallicity environment (Z = Z_☉ = 0.02), can be explained with stellar models and the wind mass loss rates adopted here. To reach this result we had to set luminous blue variable mass loss rates at the level of ~10^-4 M_☉ yr^-1 and to employ metallicity-dependent Wolf-Rayet winds. With such winds, calibrated on Galactic BH mass measurements, the maximum BH mass obtained for moderate metallicity (Z = 0.3 Z_☉ = 0.006) is M_bh,max = 30 M_☉. This is a rather striking finding as the mass of the most massive known stellar BH is M_bh = 23-34 M_☉ and, in fact, it is located in a small star-forming galaxy with moderate metallicity. We find that in the very low (globular cluster-like) metallicity environment the maximum BH mass can be as high as M_bh,max = 80 M_☉ (Z = 0.01 Z_☉ = 0.0002). It is interesting to note that X-ray luminosity from Eddington-limited accretion onto an 80 M_☉ BH is of the order of ~10^40 erg s^-1 and is comparable to luminosities of some known ultra-luminous X-ray sources. We emphasize that our results were obtained for single stars only and that binary interactions may alter these maximum BH masses (e.g., accretion from a close companion). This is strictly a proof-of-principle study which demonstrates that stellar models can naturally explain even the most massive known stellar BHs.

599 citations


Journal ArticleDOI
08 Jul 2010-Nature
TL;DR: The results open the way towards the fabrication of solid state triggered sources of entangled photon pairs, with an overall efficiency of 80%, by coupling an optical cavity in the form of a ‘photonic molecule’ to a single quantum dot.
Abstract: Entangled photon pairs are essential components for practical quantum information applications. Two different approaches for producing entanglement are available: parametric conversion in a nonlinear optical medium, or radiative decay of electron–hole pairs trapped in a semiconductor quantum dot. The first approach has a low intrinsic efficiency; the second suffers from poor collection efficiency. In general, collection of emitted photons from quantum dots is often improved by coupling them to an optical cavity, but this is not straightforward to implement for entangled photon pairs. Dousse et al. have now constructed a suitable optical cavity in the form of a 'photonic molecule' — two connecting identical microcavities that are deterministically coupled to the optically active modes of a pre-selected quantum dot. They show that entangled photon pairs are emitted into two cavity modes, with a rate of 0.12 per excitation pulse. The authors believe that improvements in the fabrication of the device should enable triggered sources of entangled photon pairs, with an overall (creation and collection) efficiency of 80%. Quantum information science requires a source of entangled photon pairs, but existing sources suffer from a low intrinsic efficiency or poor extraction efficiency. Collecting emitted photons from quantum dots can be improved by coupling the dots to an optical cavity, but this is not easy for entangled photon pairs. Now, a suitable optical cavity has been made in the form of a 'photonic molecule' — two identical, connecting microcavities that are deterministically coupled to the optically active modes of a pre-selected quantum dot. A source of triggered entangled photon pairs is a key component in quantum information science [1]; it is needed to implement functions such as linear quantum computation [2], entanglement swapping [3] and quantum teleportation [4]. Generation of polarization entangled photon pairs can be obtained through parametric conversion in nonlinear optical media [5,6,7] or by making use of the radiative decay of two electron–hole pairs trapped in a semiconductor quantum dot [8,9,10,11]. Today, these sources operate at a very low rate, below 0.01 photon pairs per excitation pulse, which strongly limits their applications. For systems based on parametric conversion, this low rate is intrinsically due to the Poissonian statistics of the source [12]. Conversely, a quantum dot can emit a single pair of entangled photons with a probability near unity but suffers from a naturally very low extraction efficiency. Here we show that this drawback can be overcome by coupling an optical cavity in the form of a ‘photonic molecule’ [13] to a single quantum dot. Two coupled identical pillars—the photonic molecule—were etched in a semiconductor planar microcavity, using an optical lithography method [14] that ensures a deterministic coupling to the biexciton and exciton energy states of a pre-selected quantum dot. The Purcell effect ensures that most entangled photon pairs are emitted into two cavity modes, while improving the indistinguishability of the two optical recombination paths [15,16]. A polarization entangled photon pair rate of 0.12 per excitation pulse (with a concurrence of 0.34) is collected in the first lens. Our results open the way towards the fabrication of solid state triggered sources of entangled photon pairs, with an overall (creation and collection) efficiency of 80%.

582 citations


Journal ArticleDOI
TL;DR: In this paper, the authors reported a consistent curvilinear, inverse U-shaped relationship between scale of place and percentage of variance of place attachment predicted by three groups of factors: physical (type of housing, size of building, upkeep and personalization of house precincts, etc.), social (neighborhood ties and sense of security in the residence place), and socio-demographic (age, education, gender, length of residence, family size).

492 citations


Journal ArticleDOI
TL;DR: In this article, the authors measured the charged-hadron transverse-momentum and pseudorapidity distributions in proton-proton collisions at √s = 7 TeV with the inner tracking system of the CMS detector at the LHC.
Abstract: Charged-hadron transverse-momentum and pseudorapidity distributions in proton-proton collisions at √s = 7 TeV are measured with the inner tracking system of the CMS detector at the LHC. The charged-hadron yield is obtained by counting the number of reconstructed hits, hit pairs, and fully reconstructed charged-particle tracks. The combination of the three methods gives a charged-particle multiplicity per unit of pseudorapidity dN_ch/dη|_(|η|<0.5) = 5.78 ± 0.01 (stat) ± 0.23 (syst) for non-single-diffractive events, higher than predicted by commonly used models. The relative increase in charged-particle multiplicity from √s = 0.9 to 7 TeV is [66.1 ± 1.0 (stat) ± 4.2 (syst)]%. The mean transverse momentum is measured to be 0.545 ± 0.005 (stat) ± 0.015 (syst) GeV/c. The results are compared with similar measurements at lower energies.

Book ChapterDOI
11 Oct 2010
TL;DR: This paper uses Lyndon words, introducing the Lyndon structure of runs as a useful tool when computing powers, and presents an efficient algorithm for testing primitivity of factors of a string and computing their primitive roots.
Abstract: A breakthrough in the field of text algorithms was the discovery of the fact that the maximal number of runs in a string of length n is O(n) and that they can all be computed in O(n) time. We study some applications of this result. New simpler O(n) time algorithms are presented for a few classical string problems: computing all distinct kth string powers for a given k, in particular squares for k = 2, and finding all local periods in a given string of length n. Additionally, we present an efficient algorithm for testing primitivity of factors of a string and computing their primitive roots. Applications of runs, despite their importance, are underrepresented in existing literature (approximately one page in the paper of Kolpakov & Kucherov, 1999). In this paper we attempt to fill in this gap. We use Lyndon words and introduce the Lyndon structure of runs as a useful tool when computing powers. In problems related to periods we use some versions of the Manhattan skyline problem.
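The primitivity queries described above rely on notions that are easy to state in code: a word is primitive if it is not a proper power u^k with k ≥ 2, and its primitive root is the shortest u with w = u^k. The sketch below uses the classic doubling trick rather than the paper's algorithm, and is only meant to pin down these definitions.

```python
def primitive_root(w: str) -> str:
    """Shortest u such that w = u^k for some k >= 1 (assumes w is non-empty).

    Classic trick: the smallest positive position at which w occurs
    inside w + w equals the length of w's primitive root.
    """
    p = (w + w).find(w, 1)
    return w[:p]


def is_primitive(w: str) -> bool:
    """True iff w is not a proper power, i.e. its primitive root is w itself."""
    return primitive_root(w) == w


assert primitive_root("abaabaaba") == "aba"   # (aba)^3
assert is_primitive("abaab")                  # not a proper power
```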

Journal ArticleDOI
TL;DR: This work carries out state-of-the-art optimization of a nuclear energy density functional of Skyrme type in the framework of Hartree-Fock-Bogoliubov (HFB) theory, using a new model-based, derivative-free optimization algorithm.
Abstract: We carry out state-of-the-art optimization of a nuclear energy density functional of Skyrme type in the framework of the Hartree-Fock-Bogoliubov theory. The particle-hole and particle-particle channels are optimized simultaneously, and the experimental data set includes both spherical and deformed nuclei. The new model-based, derivative-free optimization algorithm used in this work has been found to be significantly better than standard optimization methods in terms of reliability, speed, accuracy, and precision. The resulting parameter set, UNEDF0, yields good agreement with experimental masses, radii, and deformations and seems to be free of finite-size instabilities. An estimate of the reliability of the obtained parameterization is given, based on standard statistical methods. We discuss new physics insights offered by the advanced covariance analysis.
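The entry above centres on fitting functional parameters to experimental data with a derivative-free optimizer. As a generic illustration of derivative-free chi-square minimization (not the model-based algorithm used in the paper, and with a toy exponential model standing in for the HFB observables), one can hand the objective to a gradient-free method:

```python
import numpy as np
from scipy.optimize import minimize

# Toy "experimental" data generated from a known two-parameter model.
x_data = np.linspace(0.0, 5.0, 40)
true_a, true_b = 1.3, 0.7
sigma = 0.01
y_data = true_a * np.exp(-true_b * x_data)
y_data = y_data + sigma * np.random.default_rng(1).normal(size=x_data.size)

def chi2(params):
    """Weighted sum of squared residuals between model and data."""
    a, b = params
    model = a * np.exp(-b * x_data)
    return np.sum(((y_data - model) / sigma) ** 2)

# Nelder-Mead uses only function values, i.e. no derivatives of chi2 are needed.
result = minimize(chi2, x0=[1.0, 1.0], method="Nelder-Mead")
print(result.x)   # should end up close to (1.3, 0.7)
```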


Journal ArticleDOI
Andrew Gould1, Subo Dong2, B. S. Gaudi1, Andrzej Udalski3, +146 more authors; 43 institutions
TL;DR: In this paper, the authors presented the first measurement of the planet frequency beyond the "snow line," for a fixed planet-to-star mass-ratio interval, based on high-magnification microlensing events from 2005-2008 observed through the survey-plus-follow-up high-magnification channel.
Abstract: We present the first measurement of the planet frequency beyond the "snow line," for the planet-to-star mass-ratio interval -4.5 < log q < -2, based on intensive monitoring of high-magnification (A > 200) microlensing events during 2005-2008. The sampled host stars have a typical mass M_(host) ~ 0.5 M_⊙, and detection is sensitive to planets over a range of planet-star-projected separations (s_(max)^(–1) R_E, s_(max) R_E), where R_E ~ 3.5 AU (M_(host)/M_⊙)^(1/2) is the Einstein radius and s_(max) ~ (q/10^(–4.3))^(1/3). This corresponds to deprojected separations roughly three times the "snow line." We show that the observations of these events have the properties of a "controlled experiment," which is what permits measurement of absolute planet frequency. High-magnification events are rare, but the survey-plus-follow-up high-magnification channel is very efficient: half of all high-mag events were successfully monitored and half of these yielded planet detections. The extremely high sensitivity of high-mag events leads to a policy of monitoring them as intensively as possible, independent of whether they show evidence of planets. This is what allows us to construct an unbiased sample. The planet frequency derived from microlensing is a factor of 8 larger than the one derived from Doppler studies at factor ~25 smaller star-planet separations (i.e., periods 2-2000 days). However, this difference is basically consistent with the gradient derived from Doppler studies (when extrapolated well beyond the separations from which it is measured). This suggests a universal separation distribution across 2 dex in planet-star separation, 2 dex in mass ratio, and 0.3 dex in host mass. Finally, if all planetary systems were "analogs" of the solar system, our sample would have yielded 18.2 planets (11.4 "Jupiters," 6.4 "Saturns," 0.3 "Uranuses," 0.2 "Neptunes") including 6.1 systems with two or more planet detections. This compares to six planets including one two-planet system in the actual sample, implying a first estimate of 1/6 for the frequency of solar-like systems.
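The scalings quoted in the abstract (R_E ~ 3.5 AU (M_host/M_⊙)^(1/2), s_max ~ (q/10^(-4.3))^(1/3), and sensitivity to projected separations between s_max^(-1) R_E and s_max R_E) are straightforward to evaluate. The sketch below simply plugs in the typical host mass from the abstract together with an assumed example mass ratio q = 10^-4.

```python
def separation_range_au(m_host_msun: float, q: float):
    """Projected-separation range (in AU) a high-magnification event is
    sensitive to, using the scalings quoted in the abstract."""
    r_einstein = 3.5 * m_host_msun ** 0.5            # AU
    s_max = (q / 10 ** -4.3) ** (1.0 / 3.0)
    return r_einstein / s_max, r_einstein * s_max

lo, hi = separation_range_au(0.5, 1e-4)   # typical host mass, assumed q
print(f"sensitive to projected separations of roughly {lo:.1f}-{hi:.1f} AU")
```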

Journal ArticleDOI
TL;DR: In this paper, a fast method for modeling and classifying non-periodic continuously varying sources (quasars, aperiodic stellar variability) is presented, where the location of common variability classes in the parameter space of the model is discussed.
Abstract: Robust fast methods to classify variable light curves in large sky surveys are becoming increasingly important. While it is relatively straightforward to identify common periodic stars and particular transient events (supernovae, novae, microlensing events), there is no equivalent for non-periodic continuously varying sources (quasars, aperiodic stellar variability). In this paper, we present a fast method for modeling and classifying such sources. We demonstrate the method using ~86,000 variable sources from the OGLE-II survey of the LMC and ~2700 mid-IR-selected quasar candidates from the OGLE-III survey of the LMC and SMC. We discuss the location of common variability classes in the parameter space of the model. In particular, we show that quasars occupy a distinct region of variability space, providing a simple quantitative approach to the variability selection of quasars.

Journal ArticleDOI
TL;DR: An improved version of the algorithm for identification of the full set of truly important variables in an information system is presented, an extension of the random forest method which utilises the importance measure generated by the original algorithm.
Abstract: Machine learning methods are often used to classify objects described by hundreds of attributes; in many applications of this kind a great fraction of attributes may be totally irrelevant to the classification problem. Moreover, usually one cannot decide a priori which attributes are relevant. In this paper we present an improved version of the algorithm for identification of the full set of truly important variables in an information system. It is an extension of the random forest method which utilises the importance measure generated by the original algorithm. It compares, in an iterative fashion, the importances of original attributes with the importances of their randomised copies. We analyse the performance of the algorithm on several examples of synthetic data, as well as on a biologically important problem, namely on the identification of the sequence motifs that are important for aptameric activity of short RNA sequences.

Journal ArticleDOI
TL;DR: In this article, the authors prove an invariance principle for multilinear polynomials with low influences and bounded degree, and show that under mild conditions the distribution of such polynomial functions is essentially invariant for all product spaces.
Abstract: In this paper, we study functions with low influences on product probability spaces. The analysis of Boolean functions f: {-1, 1}^n → {-1, 1} with low influences has become a central problem in discrete Fourier analysis. It is motivated by fundamental questions arising from the construction of probabilistically checkable proofs in theoretical computer science and from problems in the theory of social choice in economics. We prove an invariance principle for multilinear polynomials with low influences and bounded degree; it shows that under mild conditions the distribution of such polynomials is essentially invariant for all product spaces. Ours is one of the very few known non-linear invariance principles. It has the advantage that its proof is simple and that the error bounds are explicit. We also show that the assumption of bounded degree can be eliminated if the polynomials are slightly "smoothed"; this extension is essential for our applications to "noise stability"-type problems. In particular, as applications of the invariance principle we prove two conjectures: the "Majority Is Stablest" conjecture [29] from theoretical computer science, which was the original motivation for this work, and the "It Ain't Over Till It's Over" conjecture [27] from social choice theory. The "Majority Is Stablest" conjecture and its generalizations proven here, in conjunction with the "Unique Games Conjecture" and its variants, imply a number of (optimal) inapproximability results for graph problems.
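For context (this is the standard definition used in discrete Fourier analysis, not quoted from the paper), the influence of the i-th coordinate on a Boolean function f: {-1,1}^n → {-1,1} is the probability that flipping that coordinate changes the output, which in Fourier terms is a sum over the squared coefficients of sets containing i:

```latex
\operatorname{Inf}_i(f)
  = \Pr_{x \sim \{-1,1\}^{n}}\bigl[f(x) \neq f(x^{\oplus i})\bigr]
  = \sum_{S \ni i} \hat{f}(S)^{2}
```

where x^{⊕i} is x with its i-th coordinate flipped. "Low-influence" functions, such as Majority for large n, are those for which every Inf_i(f) is small.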

Journal ArticleDOI
R. A. Wendell1, C. Ishihara2, K. Abe2, Y. Hayato2, T. Iida2, M. Ikeda2, K. Iyogi2, J. Kameda2, Ken-ichiro Kobayashi2, Yusuke Koshio2, Y. Kozuma2, M. Miura2, Shigetaka Moriyama2, Masayuki Nakahata2, Shoei Nakayama2, Y. Obayashi2, H. Ogawa2, Hiroyuki Sekiya2, Masato Shiozawa2, Yasunari Suzuki2, Atsushi Takeda2, Y. Takenaga2, Y. Takeuchi2, Koh Ueno2, K. Ueshima, Hiroshi Watanabe, S. Yamada2, Tsutomu Yokozawa2, S. Hazama2, H. Kaji2, Takaaki Kajita2, K. Kaneyuki2, T. McLachlan2, Ko Okumura2, Yasuhiro Shimizu2, N. Tanimoto2, Mark R. Vagins3, Mark R. Vagins2, Frédéric Dufour4, E. Kearns4, E. Kearns2, Michael Litos4, J. L. Raaf4, J. L. Stone2, J. L. Stone4, L. R. Sulak4, W. Wang4, W. Wang5, M. Goldhaber6, K. Bays3, David William Casper3, J. P. Cravens3, W. R. Kropp3, S. Mine3, C. Regis3, Michael B. Smy2, Michael B. Smy3, H. W. Sobel3, H. W. Sobel2, K. S. Ganezer7, John Hill7, W. E. Keig7, J. S. Jang8, J. Y. Kim8, I. T. Lim8, Justin Albert1, M. Fechner1, Kate Scholberg1, Kate Scholberg2, C. W. Walter1, C. W. Walter2, S. Tasaka9, J. G. Learned, S. Matsuno, Y. Watanabe10, Takehisa Hasegawa, T. Ishida, T. Ishii, T. Kobayashi, T. Nakadaira, K. Nakamura2, K. Nishikawa, H. Nishino, Yuichi Oyama, K. Sakashita, T. Sekiguchi, T. Tsukamoto, Atsumu Suzuki11, A. Minamino12, Tsuyoshi Nakaya2, Tsuyoshi Nakaya12, Y. Fukuda13, Yoshitaka Itow14, G. Mitsuka14, Toshiyuki Tanaka14, C. K. Jung15, G. D. Lopez15, C. McGrew15, C. Yanagisawa15, N. Tamura16, Hirokazu Ishino, A. Kibayashi17, S. Mino17, T. Mori17, Makoto Sakuda17, H. Toyota17, Y. Kuno18, Minoru Yoshida18, S. B. Kim19, B. S. Yang19, T. Ishizuka20, H. Okazawa20, Y. Choi21, Kyoshi Nishijima22, Y. Yokosawa22, Masatoshi Koshiba2, Masashi Yokoyama2, Y. Totsuka2, Song Chen23, Y. Heng23, Zishuo Yang23, Huaqiao Zhang23, D. Kielczewska24, P. Mijakowski24, K. Connolly25, M. Dziomba25, E. Thrane25, E. Thrane26, R. J. Wilkes25 
TL;DR: In this article, a search for nonzero θ_13 and for deviations of sin²θ_23 from 0.5 was conducted using atmospheric neutrino data from Super-Kamiokande I, II, and III.
Abstract: We present a search for nonzero θ_13 and deviations of sin²θ_23 from 0.5 in the oscillations of atmospheric neutrino data from Super-Kamiokande I, II, and III. No distortions of the neutrino flux consistent with nonzero θ_13 are found and both neutrino mass hierarchy hypotheses are in agreement with the data. The data are best fit at Δm² = 2.1×10^-3 eV², sin²θ_13 = 0.0, and sin²θ_23 = 0.5. In the normal (inverted) hierarchy, θ_13 and Δm² are constrained at the one-dimensional 90% C.L. to sin²θ_13 < 0.04 (0.09) and 1.9 (1.7)×10^-3 < Δm² < 2.6 (2.7)×10^-3 eV². The atmospheric mixing angle is within 0.407 ≤ sin²θ_23 ≤ 0.583 at 90% C.L.
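For orientation (this is the standard two-flavour approximation, not a formula quoted from the paper), the dominant muon-neutrino survival probability that the fitted parameters sin²θ_23 and Δm² control is approximately

```latex
P(\nu_\mu \to \nu_\mu)
  \simeq 1 - \sin^{2} 2\theta_{23}\,
  \sin^{2}\!\left(\frac{1.27\,\Delta m^{2}\,[\mathrm{eV^{2}}]\;L\,[\mathrm{km}]}{E\,[\mathrm{GeV}]}\right)
```

so that maximal mixing (sin²θ_23 = 0.5, i.e. sin²2θ_23 = 1) gives the deepest oscillation dip, while a nonzero θ_13 would introduce the subleading distortions that the analysis searches for.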

Journal Article
TL;DR: The upper bound of 0.5n on the maximal number of cubic runs in a string of length n is shown in this article, together with a lower bound of 0.406n.
Abstract: A run is an inclusion maximal occurrence in a string (as a subinterval) of a repetition v with a period p such that 2p≤|v|. The maximal number of runs in a string of length n has been thoroughly studied, and is known to be between 0.944 n and 1.029 n. In this paper we investigate cubic runs, in which the shortest period p satisfies 3p≤|v|. We show the upper bound of 0.5 n on the maximal number of such runs in a string of length n, and construct an infinite sequence of words over binary alphabet for which the lower bound is 0.406 n.
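The bounds above concern cubic runs: maximal periodic fragments whose shortest period p satisfies 3p ≤ |v|. A brute-force enumerator, nowhere near the efficient algorithms this line of work targets but handy for checking the definition (and the 0.5n bound) on small strings, might look like the sketch below; the high running time is accepted for the sake of clarity.

```python
def shortest_period(t: str) -> int:
    """Smallest p >= 1 such that t[k] == t[k - p] for every k >= p."""
    for p in range(1, len(t)):
        if all(t[k] == t[k - p] for k in range(p, len(t))):
            return p
    return len(t)


def cubic_runs(s: str):
    """All maximal fragments s[i..j] whose shortest period p obeys 3p <= j - i + 1."""
    n, found = len(s), []
    for i in range(n):
        for j in range(i + 2, n):
            p = shortest_period(s[i:j + 1])
            if 3 * p > j - i + 1:
                continue
            # Maximality: the period-p repetition extends neither left nor right.
            if (i == 0 or s[i - 1] != s[i - 1 + p]) and \
               (j == n - 1 or s[j + 1] != s[j + 1 - p]):
                found.append((i, j, p))
    return found


print(cubic_runs("aabaabaabaa"))   # -> [(0, 10, 3)]: the whole word, generator 'aab'
```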

Journal ArticleDOI
TL;DR: In this paper, the authors show that a low-metallicity environment significantly boosts the formation of double compact object binaries with at least one BH, and that if a future instrument increased the sensitivity to ~50-100 Mpc, a detection of GWs would be expected within the first year of observation.
Abstract: Data from the Sloan Digital Sky Survey (~300,000 galaxies) indicate that recent star formation (within the last 1 billion years) is bimodal: half of the stars form from gas with high amounts of metals (solar metallicity) and the other half form with small contribution of elements heavier than helium (~10%-30% solar). Theoretical studies of mass loss from the brightest stars derive significantly higher stellar-origin black hole (BH) masses (~30-80 M_☉) than previously estimated for sub-solar compositions. We combine these findings to estimate the probability of detecting gravitational waves (GWs) arising from the inspiral of double compact objects. Our results show that a low-metallicity environment significantly boosts the formation of double compact object binaries with at least one BH. In particular, we find the GW detection rate is increased by a factor of 20 if the metallicity is decreased from solar (as in all previous estimates) to a 50-50 mixture of solar and 10% solar metallicity. The current sensitivity of the two largest instruments to neutron star-neutron star (NS-NS) binary inspirals (VIRGO: ~9 Mpc; LIGO: ~18 Mpc) is not high enough to ensure a first detection. However, our results indicate that if a future instrument increased the sensitivity to ~50-100 Mpc, a detection of GWs would be expected within the first year of observation. It was previously thought that NS-NS inspirals were the most likely source for GW detection. Our results indicate that BH-BH binaries are ~25 times more likely sources than NS-NS systems and that we are on the cusp of GW detection.

Journal ArticleDOI
Takahiro Sumi1, D. P. Bennett2, Ian A. Bond3, Andrzej Udalski4, V. Batista, Martin Dominik5, Martin Dominik6, P. Fouqué7, D. Kubas, Andrew Gould8, Bruce Macintosh9, K. H. Cook9, Subo Dong10, L. Skuljan3, Arnaud Cassan, Fumio Abe1, C. S. Botzler11, Akihiko Fukui1, K. Furusawa1, John B. Hearnshaw12, Yoshitaka Itow1, Kisaku Kamiya1, P. M. Kilmartin, A. V. Korpela13, W. Lin3, C. H. Ling3, Kimiaki Masuda1, Yutaka Matsubara1, N. Miyake1, Yasushi Muraki14, M. Nagaya1, Takahiro Nagayama1, Kouji Ohnishi, Teppei Okumura1, Y. C. Perrott11, Nicholas J. Rattenbury11, To. Saito15, Takashi Sako1, D. J. Sullivan13, Winston L. Sweatman3, P. J. Tristram, Philip Yock11, J. P. Beaulieu16, Andrew A. Cole17, Ch. Coutures8, M. F. Duran18, J. G. Greenhill17, Francisco Jablonski19, U. Marboeuf, Eder Martioli19, Ettore Pedretti5, Ondřej Pejcha8, Patricio Rojo18, Michael D. Albrow12, S. Brillant, M. F. Bode20, D. M. Bramich21, Martin Burgdorf22, Martin Burgdorf23, J. A. R. Caldwell, H. Calitz24, E. Corrales16, S. Dieters16, S. Dieters17, D. Dominis Prester25, J. Donatowicz26, K. M. Hill16, K. M. Hill17, M. Hoffman24, Keith Horne5, U. G. Jørgensen27, N. Kains5, Stephen R. Kane28, J. B. Marquette16, R. M. Martin, P. J. Meintjes24, J. W. Menzies, K. R. Pollard12, Kailash C. Sahu29, Colin Snodgrass, Iain A. Steele20, Rachel Street30, Yiannis Tsapras30, Joachim Wambsganss31, Andrew Williams, M. Zub31, Michał K. Szymański4, M. Kubiak4, Grzegorz Pietrzyński32, Grzegorz Pietrzyński4, Igor Soszyński4, O. Szewczyk32, Łukasz Wyrzykowski, Krzysztof Ulaczyk4, William H. Allen, G. W. Christie, Darren L. DePoy33, B. S. Gaudi8, C. Han34, J. Janczak8, C.-U. Lee35, Jennie McCormick, F. Mallia, B. Monard, Tim Natusch36, Byeong-Gon Park35, Richard W. Pogge8, R. Santallo 
TL;DR: OGLE-2007-BLG-368Lb, a Neptune-mass planet with a planet-star mass ratio of q = [9.5 ± 2.1] × 10^(-5), was discovered via gravitational microlensing; the planetary deviation was detected in real time thanks to the high cadence of the Microlensing Observations in Astrophysics survey and intensive follow-up observations.
Abstract: We present the discovery of a Neptune-mass planet OGLE-2007-BLG-368Lb with a planet-star mass ratio of q = [9.5 ± 2.1] × 10^(-5) via gravitational microlensing. The planetary deviation was detected in real-time thanks to the high cadence of the Microlensing Observations in Astrophysics survey, real-time light-curve monitoring and intensive follow-up observations. A Bayesian analysis returns the stellar mass and distance at M_l = 0.64^(+0.21)_(–0.26) M_☉ and D_l = 5.9^(+0.9)_(–1.4) kpc, respectively, so the mass and separation of the planet are M_p = 20^(+7)_(–8) M_⊕ and a = 3.3^(+1.4)_(–0.8) AU, respectively. This discovery adds another cold Neptune-mass planet to the planetary sample discovered by microlensing, which now comprises four cold Neptune/super-Earths, five gas giant planets, and another sub-Saturn mass planet whose nature is unclear. The discovery of these 10 cold exoplanets by the microlensing method implies that the mass ratio function of cold exoplanets scales as dN_(pl)/d log q ∝ q^(–0.7±0.2) with a 95% confidence level upper limit of n < –0.35 (where dN_(pl)/d log q ∝ q^n). As microlensing is most sensitive to planets beyond the snow-line, this implies that Neptune-mass planets are at least three times more common than Jupiters in this region at the 95% confidence level.
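As a quick consistency check on the numbers quoted above (using the standard conversion 1 M_☉ ≈ 3.33 × 10^5 M_⊕, which is not stated in the abstract), the planet mass follows directly from the mass ratio and the lens mass:

```latex
M_p = q\,M_l
  \approx 9.5\times10^{-5} \times 0.64\,M_\odot
  \approx 6.1\times10^{-5}\,M_\odot
  \approx 20\,M_\oplus
```

in agreement with the M_p = 20^(+7)_(–8) M_⊕ quoted in the abstract.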

Journal ArticleDOI
TL;DR: In this paper, the authors discuss the advantages and limitations of heavy metal sorption on three different carbon materials: activated carbon, carbon nanotubes, and carbon-encapsulated magnetic nanoparticles.

Journal ArticleDOI
TL;DR: In this paper, the authors investigate the nature of V1309 Sco and find that the progenitor was a contact binary with an orbital period of ~1.4 days.
Abstract: Stellar mergers are expected to take place in numerous circumstances in the evolution of stellar systems. In particular, they are considered a plausible origin of stellar eruptions of the V838 Mon type. V1309 Sco is the most recent eruption of this type in our Galaxy. The object was discovered in September 2008. Our aim is to investigate the nature of V1309 Sco. V1309 Sco has been photometrically observed in the course of the OGLE project since August 2001. We analyse these observations in different ways. In particular, periodogram analyses were done to investigate the nature of the observed short-term variability of the progenitor. We find that the progenitor of V1309 Sco was a contact binary with an orbital period of ~1.4 days. This period was decreasing with time. Similarly, the light curve of the binary was also evolving, indicating that the system evolved toward its merger. The violent phase of the merger, marked by the systematic brightening of the object, started in March 2008, i.e. half a year before the outburst discovery. We also investigate the observations of V1309 Sco during the outburst and the decline and show that they can be fully accounted for within the merger hypothesis. For the first time in the literature we show, from direct observations, that contact binaries indeed end up merging into a single object, as suggested in numerous theoretical studies of these systems. Our study also shows that stellar mergers indeed result in eruptions of the V838 Mon type.

Journal ArticleDOI
TL;DR: In this article, the red clump (RC) is split into two components along several sightlines toward the Galactic bulge, and the fainter component is the one that more closely follows the distance-longitude relation of the bulge RC.
Abstract: The red clump (RC) is found to be split into two components along several sightlines toward the Galactic bulge. This split is detected with high significance toward the areas (-3.5 < l < 1, b < -5) and (l, b) = (0, +5.2), i.e., along the bulge minor axis and at least 5 deg off the plane. The fainter (hereafter 'main') component is the one that more closely follows the distance-longitude relation of the bulge RC. The main component is ~0.5 mag fainter than the secondary component, and the two have approximately equal overall populations. For sightlines further from the plane, the difference in brightness increases, and more stars are found in the secondary component than in the main component. The two components have very nearly equal (V - I) color.
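If both red-clump components are treated as standard candles of the same intrinsic brightness, the ~0.5 mag offset translates directly into a line-of-sight distance ratio (an illustrative inference, not a statement from the paper):

```latex
\Delta m = 5 \log_{10}\frac{d_{\mathrm{main}}}{d_{\mathrm{sec}}} \approx 0.5
\;\Longrightarrow\;
\frac{d_{\mathrm{main}}}{d_{\mathrm{sec}}} \approx 10^{0.1} \approx 1.26
```

i.e. the fainter (main) component would lie roughly 25% farther away than the secondary one.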

Journal ArticleDOI
TL;DR: The data suggest that three different ribonucleases can serve as catalytic subunits for the exosome in human cells, and this work demonstrates the association of two different Dis3p homologs—hDIS3 and hDIS3L—with the human exosome core.
Abstract: The eukaryotic RNA exosome is a ribonucleolytic complex involved in RNA processing and turnover. It consists of a nine-subunit catalytically inert core that serves a structural function and participates in substrate recognition. Best defined in Saccharomyces cerevisiae, enzymatic activity comes from the associated subunits Dis3p (Rrp44p) and Rrp6p. The former is a nuclear and cytoplasmic RNase II/R-like enzyme, which possesses both processive exo- and endonuclease activities, whereas the latter is a distributive RNase D-like nuclear exonuclease. Although the exosome core is highly conserved, identity and arrangements of its catalytic subunits in different vertebrates remain elusive. Here, we demonstrate the association of two different Dis3p homologs—hDIS3 and hDIS3L—with the human exosome core. Interestingly, these factors display markedly different intracellular localizations: hDIS3 is mainly nuclear, whereas hDIS3L is strictly cytoplasmic. This compartmental distribution reflects the substrate preferences of the complex in vivo. Both hDIS3 and hDIS3L are active exonucleases; however, only hDIS3 has retained endonucleolytic activity. Our data suggest that three different ribonucleases can serve as catalytic subunits for the exosome in human cells.

Journal ArticleDOI
07 Jan 2010-Nature
TL;DR: Well-preserved and securely dated tetrapod tracks from Polish marine tidal flat sediments of early Middle Devonian (Eifelian stage) age are presented, forcing a radical reassessment of the timing, ecology and environmental setting of the fish–tetrapod transition, as well as the completeness of the body fossil record.
Abstract: The fossil record of the earliest tetrapods (vertebrates with limbs rather than paired fins) consists of body fossils and trackways. The earliest body fossils of tetrapods date to the Late Devonian period (late Frasnian stage) and are preceded by transitional elpistostegids such as Panderichthys and Tiktaalik that still have paired fins. Claims of tetrapod trackways predating these body fossils have remained controversial with regard to both age and the identity of the track makers. Here we present well-preserved and securely dated tetrapod tracks from Polish marine tidal flat sediments of early Middle Devonian (Eifelian stage) age that are approximately 18 million years older than the earliest tetrapod body fossils and 10 million years earlier than the oldest elpistostegids. They force a radical reassessment of the timing, ecology and environmental setting of the fish-tetrapod transition, as well as the completeness of the body fossil record.

Journal ArticleDOI
TL;DR: In this paper, the authors show that the single-mode approximation is not valid for arbitrary states, finding corrections to previous studies beyond such approximations in the bosonic and fermionic cases.
Abstract: We address the validity of the single-mode approximation that is commonly invoked in the analysis of entanglement in noninertial frames and in other relativistic quantum-information scenarios. We show that the single-mode approximation is not valid for arbitrary states, finding corrections to previous studies beyond such approximations in the bosonic and fermionic cases. We also exhibit a class of wave packets for which the single-mode approximation is justified subject to the peaking constraints set by an appropriate Fourier transform.

Journal ArticleDOI
TL;DR: The utility of a custom‐designed, exon‐targeted oligonucleotide array to detect intragenic copy‐number changes in patients with various clinical phenotypes is demonstrated.
Abstract: Array comparative genomic hybridization (aCGH) is a powerful tool for the molecular elucidation and diagnosis of disorders resulting from genomic copy-number variation (CNV). However, intragenic deletions or duplications--those including genomic intervals of a size smaller than a gene--have remained beyond the detection limit of most clinical aCGH analyses. Increasing array probe number improves genomic resolution, although higher cost may limit implementation, and enhanced detection of benign CNV can confound clinical interpretation. We designed an array with exonic coverage of selected disease and candidate genes and used it clinically to identify losses or gains throughout the genome involving at least one exon and as small as several hundred base pairs in size. In some patients, the detected copy-number change occurs within a gene known to be causative of the observed clinical phenotype, demonstrating the ability of this array to detect clinically relevant CNVs with subkilobase resolution. In summary, we demonstrate the utility of a custom-designed, exon-targeted oligonucleotide array to detect intragenic copy-number changes in patients with various clinical phenotypes.

Journal ArticleDOI
TL;DR: This paper introduces the concept of fuzzy decision reducts, dependent on an increasing attribute subset measure, and presents a generalization of the classical rough set framework for data-based attribute selection and reduction using fuzzy tolerance relations.