
Showing papers by "National Research University – Higher School of Economics" published in 2015


Journal ArticleDOI
Mohsen Naghavi, Haidong Wang, Rafael Lozano, Adrian Davis, +728 more · Institutions (294)
TL;DR: The Global Burden of Disease Study 2013 (GBD 2013) used the GBD 2010 methods, with some refinements to improve accuracy, applied to an updated database of vital registration, survey, and census data.

5,792 citations


Journal ArticleDOI
Theo Vos, Ryan M Barber, Brad Bell, Amelia Bertozzi-Villa, +686 more · Institutions (287)
TL;DR: In the Global Burden of Disease Study 2013 (GBD 2013), the authors estimated the quantities for acute and chronic diseases and injuries for 188 countries between 1990 and 2013.

4,510 citations


Journal ArticleDOI
Christina Fitzmaurice1, Christina Fitzmaurice2, Daniel Dicker1, Daniel Dicker2, Amanda W Pain2, Hannah Hamavid2, Maziar Moradi-Lakeh2, Michael F. MacIntyre2, Michael F. MacIntyre3, Christine Allen2, Gillian M. Hansen2, Rachel Woodbrook2, Charles D.A. Wolfe2, Randah R. Hamadeh4, Ami R. Moore5, A. Werdecker6, Bradford D. Gessner, Braden Te Ao, Brian J. McMahon7, Chante Karimkhani8, Chuanhua Yu9, Graham S Cooke10, David C. Schwebel11, David O. Carpenter12, David M. Pereira13, Denis Nash, Dhruv S. Kazi14, Diego De Leo15, Dietrich Plass16, Kingsley N. Ukwaja17, George D. Thurston, Kim Yun Jin18, Edgar P. Simard19, Edward J Mills20, Eun-Kee Park21, Ferrán Catalá-López22, Gabrielle deVeber, Carolyn C. Gotay23, Gulfaraz Khan24, H. Dean Hosgood25, Itamar S. Santos26, Janet L Leasher27, Jasvinder A. Singh28, James Leigh12, Jost B. Jonas29, Juan R. Sanabria30, Justin Beardsley31, Justin Beardsley32, Kathryn H. Jacobsen33, Ken Takahashi34, Richard C. Franklin, Luca Ronfani35, Marcella Montico36, Luigi Naldi36, Marcello Tonelli, Johanna M. Geleijnse37, Max Petzold38, Mark G. Shrime39, Mark G. Shrime40, Mustafa Z. Younis41, Naohiro Yonemoto42, Nicholas J K Breitborde, Paul S. F. Yip43, Farshad Pourmalek44, Paulo A. Lotufo24, Alireza Esteghamati27, Graeme J. Hankey45, Raghib Ali46, Raimundas Lunevicius33, Reza Malekzadeh47, Robert P. Dellavalle45, Robert G. Weintraub48, Robert G. Weintraub49, Robyn M. Lucas50, Robyn M. Lucas51, Roderick J Hay52, David Rojas-Rueda, Ronny Westerman, Sadaf G. Sepanlou53, Sandra Nolte, Scott B. Patten54, Scott Weichenthal37, Semaw Ferede Abera55, Seyed-Mohammad Fereshtehnejad56, Ivy Shiue57, Tim Driscoll58, Tim Driscoll59, Tommi J. Vasankari29, Ubai Alsharif, Vafa Rahimi-Movaghar54, Vasiliy Victorovich Vlassov45, W. S. Marcenes60, Wubegzier Mekonnen61, Yohannes Adama Melaku62, Yuichiro Yano56, Al Artaman63, Ismael Campos, Jennifer H MacLachlan41, Ulrich O Mueller, Daniel Kim53, Matias Trillini64, Babak Eshrati65, Hywel C Williams66, Kenji Shibuya67, Rakhi Dandona68, Kinnari S. Murthy69, Benjamin C Cowie69, Azmeraw T. Amare, Carl Abelardo T. Antonio70, Carlos A Castañeda-Orjuela71, Coen H. Van Gool, Francesco Saverio Violante, In-Hwan Oh72, Kedede Deribe73, Kjetil Søreide62, Kjetil Søreide74, Luke D. Knibbs75, Luke D. Knibbs76, Maia Kereselidze77, Mark Green78, Rosario Cardenas79, Nobhojit Roy80, Taavi Tillmann57, Yongmei Li81, Hans Krueger82, Lorenzo Monasta24, Subhojit Dey36, Sara Sheikhbahaei, Nima Hafezi-Nejad45, G Anil Kumar45, Chandrashekhar T Sreeramareddy69, Lalit Dandona83, Haidong Wang69, Haidong Wang2, Stein Emil Vollset2, Ali Mokdad84, Ali Mokdad75, Joshua A. Salomon2, Rafael Lozano41, Theo Vos2, Mohammad H. Forouzanfar2, Alan D. Lopez2, Christopher J L Murray51, Mohsen Naghavi2 
University of Washington1, Institute for Health Metrics and Evaluation2, Iran University of Medical Sciences3, King's College London4, Arabian Gulf University5, University of North Texas6, Auckland University of Technology7, Alaska Native Tribal Health Consortium8, Columbia University9, Wuhan University10, Imperial College London11, University of Alabama at Birmingham12, University at Albany, SUNY13, City University of New York14, University of California, San Francisco15, Griffith University16, Environment Agency17, New York University18, Southern University College19, Emory University20, University of Ottawa21, Kosin University22, University of Toronto23, University of British Columbia24, United Arab Emirates University25, Albert Einstein College of Medicine26, University of São Paulo27, Nova Southeastern University28, University of Sydney29, Heidelberg University30, Cancer Treatment Centers of America31, Case Western Reserve University32, University of Oxford33, George Mason University34, James Cook University35, University of Trieste36, University of Calgary37, Wageningen University and Research Centre38, University of the Witwatersrand39, University of Gothenburg40, Harvard University41, Jackson State University42, University of Arizona43, University of Hong Kong44, Tehran University of Medical Sciences45, University of Western Australia46, Aintree University Hospitals NHS Foundation Trust47, University of Colorado Denver48, Veterans Health Administration49, Royal Children's Hospital50, University of Melbourne51, Australian National University52, University of Marburg53, Charité54, Health Canada55, College of Health Sciences, Bahrain56, Karolinska Institutet57, University of Edinburgh58, Northumbria University59, National Research University – Higher School of Economics60, Queen Mary University of London61, Addis Ababa University62, Northwestern University63, Northeastern University64, Mario Negri Institute for Pharmacological Research65, Arak University of Medical Sciences66, University of Nottingham67, University of Tokyo68, Public Health Foundation of India69, University of Groningen70, University of the Philippines Manila71, University of Bologna72, Kyung Hee University73, Brighton and Sussex Medical School74, University of Bergen75, Stavanger University Hospital76, University of Queensland77, National Centre for Disease Control78, University of Sheffield79, Universidad Autónoma Metropolitana80, University College London81, Genentech82, Universiti Tunku Abdul Rahman83, Norwegian Institute of Public Health84
TL;DR: To estimate mortality, incidence, years lived with disability, years of life lost, and disability-adjusted life-years for 28 cancers in 188 countries by sex from 1990 to 2013, the general methodology of the Global Burden of Disease 2013 study was used.
Abstract: Importance Cancer is among the leading causes of death worldwide. Current estimates of cancer burden in individual countries and regions are necessary to inform local cancer control strategies. Objective To estimate mortality, incidence, years lived with disability (YLDs), years of life lost (YLLs), and disability-adjusted life-years (DALYs) for 28 cancers in 188 countries by sex from 1990 to 2013. Evidence Review The general methodology of the Global Burden of Disease (GBD) 2013 study was used. Cancer registries were the source for cancer incidence data as well as mortality incidence (MI) ratios. Sources for cause of death data include vital registration system data, verbal autopsy studies, and other sources. The MI ratios were used to transform incidence data to mortality estimates and cause of death estimates to incidence estimates. Cancer prevalence was estimated using MI ratios as surrogates for survival data; YLDs were calculated by multiplying prevalence estimates with disability weights, which were derived from population-based surveys; YLLs were computed by multiplying the number of estimated cancer deaths at each age with a reference life expectancy; and DALYs were calculated as the sum of YLDs and YLLs. Findings In 2013 there were 14.9 million incident cancer cases, 8.2 million deaths, and 196.3 million DALYs. Prostate cancer was the leading cause for cancer incidence (1.4 million) for men and breast cancer for women (1.8 million). Tracheal, bronchus, and lung (TBL) cancer was the leading cause for cancer death in men and women, with 1.6 million deaths. For men, TBL cancer was the leading cause of DALYs (24.9 million). For women, breast cancer was the leading cause of DALYs (13.1 million). Age-standardized incidence rates (ASIRs) per 100 000 and age-standardized death rates (ASDRs) per 100 000 for both sexes in 2013 were higher in developing vs developed countries for stomach cancer (ASIR, 17 vs 14; ASDR, 15 vs 11), liver cancer (ASIR, 15 vs 7; ASDR, 16 vs 7), esophageal cancer (ASIR, 9 vs 4; ASDR, 9 vs 4), cervical cancer (ASIR, 8 vs 5; ASDR, 4 vs 2), lip and oral cavity cancer (ASIR, 7 vs 6; ASDR, 2 vs 2), and nasopharyngeal cancer (ASIR, 1.5 vs 0.4; ASDR, 1.2 vs 0.3). Between 1990 and 2013, ASIRs for all cancers combined (except nonmelanoma skin cancer and Kaposi sarcoma) increased by more than 10% in 113 countries and decreased by more than 10% in 12 of 188 countries. Conclusions and Relevance Cancer poses a major threat to public health worldwide, and incidence rates have increased in most countries since 1990. The trend is a particular threat to developing nations with health systems that are ill-equipped to deal with complex and expensive cancer treatments. The annual update on the Global Burden of Cancer will provide all stakeholders with timely estimates to guide policy efforts in cancer prevention, screening, treatment, and palliation.
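The YLD/YLL/DALY arithmetic described in this abstract is straightforward to reproduce. The following minimal Python sketch uses invented numbers (the prevalence, disability weight, deaths, and reference life expectancies are placeholders, not GBD estimates) to show how the three quantities combine for a single cancer and a few age groups.

    # Toy illustration of the YLD/YLL/DALY bookkeeping described above.
    # All numbers are hypothetical; GBD estimates them per cancer, age, sex,
    # year, and country.
    prevalence = 120_000              # prevalent cases
    disability_weight = 0.29          # survey-derived weight for the health state

    deaths_by_age = {60: 900, 70: 1_400, 80: 1_100}              # estimated deaths
    reference_life_expectancy = {60: 27.8, 70: 19.1, 80: 11.4}   # remaining years

    ylds = prevalence * disability_weight
    ylls = sum(n * reference_life_expectancy[age] for age, n in deaths_by_age.items())
    dalys = ylds + ylls               # DALYs are defined as the sum of YLDs and YLLs

    print(f"YLDs={ylds:,.0f}  YLLs={ylls:,.0f}  DALYs={dalys:,.0f}")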

2,375 citations


Journal ArticleDOI
TL;DR: The Global Burden of Diseases, Injuries, and Risk Factors Study 2013 (GBD 2013) provides a timely opportunity to update the comparative risk assessment with new data for exposure, relative risks, and evidence on the appropriate counterfactual risk distribution.

1,656 citations


Posted Content
TL;DR: This paper converts the dense weight matrices of the fully-connected layers to the Tensor Train format such that the number of parameters is reduced by a huge factor and at the same time the expressive power of the layer is preserved.
Abstract: Deep neural networks currently demonstrate state-of-the-art performance in several domains. At the same time, models of this class are very demanding in terms of computational resources. In particular, a large amount of memory is required by commonly used fully-connected layers, making it hard to use the models on low-end devices and stopping the further increase of the model size. In this paper we convert the dense weight matrices of the fully-connected layers to the Tensor Train format such that the number of parameters is reduced by a huge factor and at the same time the expressive power of the layer is preserved. In particular, for the Very Deep VGG networks we report the compression factor of the dense weight matrix of a fully-connected layer up to 200000 times leading to the compression factor of the whole network up to 7 times.
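To make the parameter saving concrete, here is a small, self-contained count of how many numbers a TT-matrix with a fixed rank stores compared with the dense weight matrix it replaces. The 1024x1024 layer size, the mode factorization, and the rank are illustrative assumptions, not the settings used in the paper.

    # Parameter count of a TT-matrix vs. the dense weight matrix it replaces.
    # Mode sizes and the TT-rank below are assumptions chosen for illustration.
    in_modes  = [4, 4, 8, 8]      # factorization of the input dimension (1024)
    out_modes = [4, 4, 8, 8]      # factorization of the output dimension (1024)
    rank = 8                      # uniform TT-rank (boundary ranks are 1)

    ranks = [1] + [rank] * (len(in_modes) - 1) + [1]
    tt_params = sum(ranks[k] * in_modes[k] * out_modes[k] * ranks[k + 1]
                    for k in range(len(in_modes)))
    dense_params = 1024 * 1024

    print(f"dense: {dense_params:,}  TT: {tt_params:,}  "
          f"compression: {dense_params / tt_params:.0f}x")

With these assumed shapes the dense layer stores roughly 180 times more numbers than the TT cores; the much larger factors reported above come from applying the same idea to far wider VGG layers.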

588 citations


Journal ArticleDOI
TL;DR: VDJtools is reported: a complementary software suite that solves a wide range of T cell receptor (TCR) repertoire post-analysis tasks, provides a detailed tabular output and publication-ready graphics, and is built on top of a flexible API.
Abstract: Despite the growing number of immune repertoire sequencing studies, the field still lacks software for analysis and comprehension of this high-dimensional data. Here we report VDJtools, a complementary software suite that solves a wide range of T cell receptor (TCR) repertoires post-analysis tasks, provides a detailed tabular output and publication-ready graphics, and is built on top of a flexible API. Using TCR datasets for a large cohort of unrelated healthy donors, twins, and multiple sclerosis patients we demonstrate that VDJtools greatly facilitates the analysis and leads to sound biological conclusions. VDJtools software and documentation are available at https://github.com/mikessh/vdjtools.
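VDJtools itself is a standalone suite (see the GitHub link above), so rather than guess at its command-line interface, the toy Python sketch below illustrates one post-analysis task of the kind it automates: counting clonotypes shared between two repertoire samples. The CDR3 sequences and read counts are invented.

    # Toy repertoire overlap: clonotypes shared between two samples.
    sample_a = {"CASSLAPGATNEKLFF": 120, "CASSIRSSYEQYF": 45, "CASSPGQGAYEQYF": 8}
    sample_b = {"CASSLAPGATNEKLFF": 310, "CASSPGQGAYEQYF": 2, "CASSLGGNTEAFF": 17}

    shared = set(sample_a) & set(sample_b)
    freq_a = sum(sample_a[c] for c in shared) / sum(sample_a.values())
    freq_b = sum(sample_b[c] for c in shared) / sum(sample_b.values())

    print(f"{len(shared)} shared clonotypes, accounting for "
          f"{freq_a:.1%} of sample A reads and {freq_b:.1%} of sample B reads")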

428 citations


Proceedings Article
07 Dec 2015
TL;DR: In this paper, the authors converted the dense weight matrices of the fully-connected layers to the Tensor Train format such that the number of parameters is reduced by a huge factor and at the same time the expressive power of the layer is preserved.
Abstract: Deep neural networks currently demonstrate state-of-the-art performance in several domains. At the same time, models of this class are very demanding in terms of computational resources. In particular, a large amount of memory is required by commonly used fully-connected layers, making it hard to use the models on low-end devices and stopping the further increase of the model size. In this paper we convert the dense weight matrices of the fully-connected layers to the Tensor Train [17] format such that the number of parameters is reduced by a huge factor and at the same time the expressive power of the layer is preserved. In particular, for the Very Deep VGG networks [21] we report the compression factor of the dense weight matrix of a fully-connected layer up to 200000 times leading to the compression factor of the whole network up to 7 times.
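This proceedings version describes the same method as the preprint listed above. As a complementary sketch, the code below applies the classical TT-SVD procedure (successive truncated SVDs) to cast a dense matrix into TT cores; the matrix size, mode factorization, and maximum rank are assumptions for illustration, and this is not the paper's training pipeline (there the compact cores are trained so that accuracy is preserved, whereas truncating a generic random matrix this way is lossy).

    import numpy as np

    # TT-SVD sketch: factor a dense 1024x1024 matrix into TT-matrix cores.
    np.random.seed(0)
    W = np.random.randn(1024, 1024)

    in_modes, out_modes, max_rank = [4, 4, 8, 8], [4, 4, 8, 8], 8
    modes = [m * n for m, n in zip(in_modes, out_modes)]   # [16, 16, 64, 64]

    # Permute so that tensor mode k groups (input_mode_k, output_mode_k).
    T = W.reshape(in_modes + out_modes).transpose(0, 4, 1, 5, 2, 6, 3, 7).reshape(modes)

    cores, r_prev, rest = [], 1, T
    for k in range(len(modes) - 1):
        mat = rest.reshape(r_prev * modes[k], -1)
        U, S, Vt = np.linalg.svd(mat, full_matrices=False)
        r = min(max_rank, len(S))
        cores.append(U[:, :r].reshape(r_prev, modes[k], r))
        rest = np.diag(S[:r]) @ Vt[:r]        # remainder carried into the next core
        r_prev = r
    cores.append(rest.reshape(r_prev, modes[-1], 1))

    print("core shapes:", [c.shape for c in cores])
    print("TT parameters:", sum(c.size for c in cores), "vs dense:", W.size)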

363 citations


Journal ArticleDOI
TL;DR: It is shown that different types of neural oscillators and cross-frequency interactions yield distinct signatures in neural dynamics, including neural representations of multiple environmental items, communication over distant areas, internal clocking of neural processes, and modulation of neural processing based on temporal predictions.

326 citations


Journal ArticleDOI
TL;DR: This material is based upon work supported by the National Science Foundation, Riksbankens Jubileumsfond, the Swedish Research Council, and the University of Gothenburg, as well as internal grants from the Vice-Chancellor's office, the Dean of the College of Social Sciences, and the Department of Political Science at the University of Gothenburg.
Abstract: This material is based upon work supported by the National Science Foundation (SES-1423944, PI: Daniel Pemstein), Riksbankens Jubileumsfond (Grant M13-0559:1, PI: Staffan I. Lindberg), the Swedish Research Council (2013.0166, PI: Staffan I. Lindberg and Jan Teorell), the Knut and Alice Wallenberg Foundation (PI: Staffan I. Lindberg), and the University of Gothenburg (E 2013/43); as well as internal grants from the Vice-Chancellor’s office, the Dean of the College of Social Sciences, and the Department of Political Science at University of Gothenburg. Marquardt acknowledges research support from the Russian Academic Excellence Project ‘5-100.’ We performed simulations and other computational tasks using resources provided by the Notre Dame Center for Research Computing (CRC) through the High Performance Computing section and the Swedish National Infrastructure for Computing (SNIC) at the National Supercomputer Centre in Sweden (SNIC 2016/1-382, SNIC 2017/1-406 and 2017/1-68). We specifically acknowledge the assistance of In-Saeng Suh at CRC and Johan Raber and Peter Münger at SNIC in facilitating our use of their respective systems.

262 citations


Journal ArticleDOI
TL;DR: tcR is a new R package, representing a platform for the advanced analysis of T cell receptor repertoires after primary TR sequence extraction from raw sequencing reads, which includes diversity measures, shared T cell receptor sequence identification, gene usage statistics computation and other widely used methods.
Abstract: The Immunoglobulins (IG) and the T cell receptors (TR) play a key role in antigen recognition during the adaptive immune response. Recent progress in next-generation sequencing technologies has provided an opportunity for deep T cell receptor repertoire profiling. However, specialised software is required for the rational analysis of massive data generated by next-generation sequencing. Here we introduce tcR, a new R package representing a platform for the advanced analysis of T cell receptor repertoires, which includes diversity measures, shared T cell receptor sequence identification, gene usage statistics computation and other widely used methods. The tool has proven its utility in recent research studies. tcR is an R package for the advanced analysis of T cell receptor repertoires after primary TR sequence extraction from raw sequencing reads. The stable version can be directly installed from The Comprehensive R Archive Network (http://cran.r-project.org/mirrors.html). The source code and development version are available at tcR GitHub (http://imminfo.github.io/tcr/) along with the full documentation and typical usage examples.
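tcR is an R package, so the snippet below is not its API; it is just a language-neutral illustration of one of the listed analyses, a diversity measure (Shannon entropy, here also normalized) computed over clonotype read counts. The counts are invented.

    import math

    # Toy diversity calculation over clonotype read counts.
    counts = [500, 230, 120, 60, 30, 20, 20, 10, 5, 5]
    total = sum(counts)
    freqs = [c / total for c in counts]

    shannon = -sum(p * math.log(p) for p in freqs)
    pielou = shannon / math.log(len(freqs))   # normalized to the range [0, 1]

    print(f"Shannon entropy: {shannon:.3f}  (normalized: {pielou:.3f})")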

211 citations


Journal ArticleDOI
TL;DR: The rs9349379[G] allele was previously shown to be associated with lower risk of migraine and increased risk of myocardial infarction, and the mechanisms underlying this pleiotropy might provide important information on the biological underpinnings of these disabling conditions.
Abstract: Cervical artery dissection (CeAD), a mural hematoma in a carotid or vertebral artery, is a major cause of ischemic stroke in young adults although relatively uncommon in the general population (incidence of 2.6/100,000 per year). Minor cervical traumas, infection, migraine and hypertension are putative risk factors, and inverse associations with obesity and hypercholesterolemia are described. No confirmed genetic susceptibility factors have been identified using candidate gene approaches. We performed genome-wide association studies (GWAS) in 1,393 CeAD cases and 14,416 controls. The rs9349379[G] allele (PHACTR1) was associated with lower CeAD risk (odds ratio (OR) = 0.75, 95% confidence interval (CI) = 0.69-0.82; P = 4.46 × 10(-10)), with confirmation in independent follow-up samples (659 CeAD cases and 2,648 controls; P = 3.91 × 10(-3); combined P = 1.00 × 10(-11)). The rs9349379[G] allele was previously shown to be associated with lower risk of migraine and increased risk of myocardial infarction. Deciphering the mechanisms underlying this pleiotropy might provide important information on the biological underpinnings of these disabling conditions.
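For readers less familiar with the statistics reported here, the short sketch below shows the standard way an odds ratio and its 95% confidence interval are obtained from a 2x2 table of allele counts in cases and controls; the counts are invented for illustration and are not taken from the study.

    import math

    # Hypothetical allele counts: [G] vs. non-[G] alleles in cases and controls.
    a, b = 1200, 1586     # cases
    c, d = 16500, 16332   # controls

    odds_ratio = (a * d) / (b * c)
    se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lower = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
    upper = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)

    print(f"OR = {odds_ratio:.2f}, 95% CI = ({lower:.2f}, {upper:.2f})")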

Journal ArticleDOI
TL;DR: The first formal treatment of different forms of MI and their consequences for the validity of multi-group/multi-time comparisons is attributable to Meredith (1993); a recent book by Millsap (2011) provides a general systematic treatment of the topic of MI.
Abstract: Multi-item surveys are frequently used to study scores on latent factors, like human values, attitudes, and behavior. Such studies often include a comparison between specific groups of individuals or residents of different countries, either at one or multiple points in time (i.e., a cross-sectional or a longitudinal comparison or both). If latent factor means are to be meaningfully compared, the measurement structures of the latent factor and their survey items should be stable, that is “invariant.” As proposed by Mellenbergh (1989), “measurement invariance” (MI) requires that the association between the items (or test scores) and the latent factors (or latent traits) of individuals should not depend on group membership or measurement occasion (i.e., time). In other words, if item scores are (approximately) multivariate normally distributed, conditional on the latent factor scores, the expected values, the covariances between items, and the unexplained variance unrelated to the latent factors should be equal across groups. Many studies examining MI of survey scales have shown that the MI assumption is very hard to meet. In particular, strict forms of MI rarely hold. With “strict” we refer to a situation in which measurement parameters are exactly the same across groups or measurement occasions, that is, an enforcement of zero tolerance with respect to deviations between groups or measurement occasions. Often, researchers just ignore MI issues and compare latent factor means across groups or measurement occasions even though the psychometric basis for such a practice does not hold. However, when a strict form of MI is not established and one must conclude that respondents attach different meanings to survey items, valid comparisons between latent factor means become impossible. As such, the potential bias caused by measurement non-invariance obstructs the comparison of latent factor means (if strict MI does not hold) or regression coefficients (if less strict forms of MI do not hold). Traditionally, MI is tested for in a multiple group confirmatory factor analysis (MGCFA) with groups defined by unordered categorical (i.e., nominal) between-subject variables. In MGCFA, MI is tested at each constraint of the latent factor model using a series of nested (latent) factor models. This traditional way of testing for MI originated with Jöreskog (1971), who was the first scholar to thoroughly discuss the invariance of latent factor (or measurement) structures. Additionally, Sörbom (1974, 1978) pioneered the specification and estimation of latent factor means using a multi-group SEM approach in LISREL (Jöreskog and Sörbom, 1996). Following these contributions, the multi-group specification of latent factor structures has become widespread in all major SEM software programs (e.g., AMOS Arbuckle, 2006, EQS Bentler and Wu, 1995, LAVAAN Rosseel, 2012, Mplus Muthén and Muthén, 2013, STATA STATA, 2015, and OpenMx Boker et al., 2011). Shortly thereafter, Byrne et al. (1989) introduced the distinction between full and partial MI. Although their introduction was of great value, the first formal treatment of different forms of MI and their consequences for the validity of multi-group/multi-time comparisons is attributable to Meredith (1993). So far, a tremendous number of papers dealing with MI have been published. The literature on MI published in the 20th century is nicely summarized by Vandenberg and Lance (2000).
Noteworthy is also the overview of applications in cross-cultural studies provided by Davidov et al. (2014), as well as a recent book by Millsap (2011) containing a general systematic treatment of the topic of MI. The traditional MGCFA approach to MI-testing is described by, for example, Byrne (2004), Chen et al. (2005), Gregorich (2006), van de Schoot et al. (2012), Vandenberg (2002) and Wicherts and Dolan (2010). Researchers entering the field of MI are recommended to first consult Meredith (1993) and Millsap (2011) before reading other valuable academic works. Recent developments in statistics have provided new analytical tools for assessing MI. The aim of this special issue is to provide a forum for a discussion of MI, covering some crucial “themes”: (1) ways to assess and deal with measurement non-invariance; (2) Bayesian and IRT methods employing the concept of approximate MI; and (3) new or adjusted approaches for testing MI to fit increasingly complex statistical models and specific characteristics of survey data.
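To make the core argument of this special issue concrete, the following toy simulation (assuming a single latent factor, three items, and two groups with identical latent means) shows how one non-invariant item intercept alone produces a spurious group difference in observed composite scores; this is exactly the bias that MI testing is meant to rule out. The numbers and the simple composite-score comparison are illustrative assumptions, not a substitute for MGCFA.

    import random

    random.seed(0)

    def mean_composite(n, latent_mean, intercepts, loadings, noise_sd=0.5):
        """Observed items follow x_j = intercept_j + loading_j * eta + error."""
        total = 0.0
        for _ in range(n):
            eta = random.gauss(latent_mean, 1.0)
            items = [b + l * eta + random.gauss(0, noise_sd)
                     for b, l in zip(intercepts, loadings)]
            total += sum(items) / len(items)
        return total / n

    loadings = [0.8, 0.7, 0.9]
    # Both groups share the SAME latent mean; group B violates intercept
    # (scalar) invariance on the third item only.
    mean_a = mean_composite(5000, 0.0, [3.0, 3.0, 3.0], loadings)
    mean_b = mean_composite(5000, 0.0, [3.0, 3.0, 3.6], loadings)

    print(f"observed composite means: A = {mean_a:.2f}, B = {mean_b:.2f} "
          "(the gap reflects non-invariance, not a latent mean difference)")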

Journal ArticleDOI
TL;DR: It is confirmed that the damaged tracts link areas that in contemporary neuroscience are considered functionally engaged for tasks related to emotion and decision-making, language production, and declarative memory in Phineas Gage, Leborgne, and Molaison.
Abstract: On the 50th anniversary of Norman Geschwind's seminal paper entitled 'Disconnexion syndrome in animal and man', we pay tribute to his ideas by applying contemporary tractography methods to understand white matter disconnection in 3 classic cases that made history in behavioral neurology. We first documented the locus and extent of the brain lesion from the computerized tomography of Phineas Gage's skull and the magnetic resonance images of Louis Victor Leborgne's brain, Broca's first patient, and Henry Gustave Molaison. We then applied the reconstructed lesions to an atlas of white matter connections obtained from diffusion tractography of 129 healthy adults. Our results showed that in all 3 patients, disruption extended to connections projecting to areas distant from the lesion. We confirmed that the damaged tracts link areas that in contemporary neuroscience are considered functionally engaged for tasks related to emotion and decision-making (Gage), language production (Leborgne), and declarative memory (Molaison). Our findings suggest that even historic cases should be reappraised within a disconnection framework whose principles were plainly established by the associationist schools in the last 2 centuries.


Journal ArticleDOI
TL;DR: To determine the effect of visual inspection on the sample size required for studies of MRI-derived cortical thickness, the number of subjects required to show group differences was calculated; significant differences observed across imaging sites, between visually approved/disapproved subjects, and across regions with different sizes suggest that these measures should be used with caution.
Abstract: In the last decade, many studies have used automated processes to analyze magnetic resonance imaging (MRI) data such as cortical thickness, which is one indicator of neuronal health. Due to the convenience of image processing software (e.g., FreeSurfer), standard practice is to rely on automated results without performing visual inspection of intermediate processing. In this work, structural MRIs of 40 healthy controls who were scanned twice were used to determine the test-retest reliability of FreeSurfer-derived cortical measures in four groups of subjects-those 25 that passed visual inspection (approved), those 15 that failed visual inspection (disapproved), a combined group, and a subset of 10 subjects (Travel) whose test and retest scans occurred at different sites. Test-retest correlation (TRC), intraclass correlation coefficient (ICC), and percent difference (PD) were used to measure the reliability in the Destrieux and Desikan-Killiany (DK) atlases. In the approved subjects, reliability of cortical thickness/surface area/volume (DK atlas only) were: TRC (0.82/0.88/0.88), ICC (0.81/0.87/0.88), PD (0.86/1.19/1.39), which represent a significant improvement over these measures when disapproved subjects are included. Travel subjects' results show that cortical thickness reliability is more sensitive to site differences than the cortical surface area and volume. To determine the effect of visual inspection on sample size required for studies of MRI-derived cortical thickness, the number of subjects required to show group differences was calculated. Significant differences observed across imaging sites, between visually approved/disapproved subjects, and across regions with different sizes suggest that these measures should be used with caution.
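As a hedged sketch of the reliability metrics mentioned above, the snippet below computes a test-retest correlation (Pearson r) and a mean percent difference for one region from invented thickness values; it does not reproduce the specific ICC variant used in the study.

    # Toy test-retest reliability for one cortical region (values invented).
    test   = [2.41, 2.55, 2.38, 2.62, 2.47, 2.51, 2.44, 2.58]
    retest = [2.43, 2.52, 2.40, 2.65, 2.45, 2.53, 2.42, 2.60]

    n = len(test)
    mean_t, mean_r = sum(test) / n, sum(retest) / n
    cov = sum((t - mean_t) * (r - mean_r) for t, r in zip(test, retest)) / n
    var_t = sum((t - mean_t) ** 2 for t in test) / n
    var_r = sum((r - mean_r) ** 2 for r in retest) / n
    trc = cov / (var_t * var_r) ** 0.5          # test-retest correlation

    pd = sum(abs(t - r) / ((t + r) / 2) for t, r in zip(test, retest)) / n * 100

    print(f"TRC = {trc:.2f}, mean percent difference = {pd:.2f}%")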

Journal ArticleDOI
TL;DR: In this article, a duality-invariant pseudo-action formulation for the U-duality group SO(5, 5) is presented, which after reduction gives the maximal D = 6 supergravity.
Abstract: We construct Exceptional Field Theory for the group SO(5, 5) based on the extended (6+16)-dimensional spacetime, which after reduction gives the maximal D = 6 supergravity. We present both a true action and a duality-invariant pseudo-action formulations. All the fields of the theory depend on the complete extended spacetime. The U-duality group SO(5, 5) is made a geometric symmetry of the theory by virtue of introducing the generalised Lie derivative that incorporates a duality transformation. Tensor hierarchy appears as a natural consequence of the algebra of generalised Lie derivatives that are viewed as gauge transformations. Upon truncating different subsets of the extra coordinates, maximal supergravities in D = 11 and D = 10 (type IIB) can be recovered from this theory.

Journal ArticleDOI
TL;DR: In this paper, the authors define the triangulated category of relative singularities of a closed subscheme in a scheme, and prove a version of the Thomason-Trobaugh-Neeman localization theorem for coherent matrix factorizations.
Abstract: We define the triangulated category of relative singularities of a closed subscheme in a scheme. When the closed subscheme is a Cartier divisor, we consider matrix factorizations of the related section of a line bundle, and their analogues with locally free sheaves replaced by coherent ones. The appropriate exotic derived category of coherent matrix factorizations is then identified with the triangulated category of relative singularities, while the similar exotic derived category of locally free matrix factorizations is its full subcategory. The latter category is identified with the kernel of the direct image functor corresponding to the closed embedding of the zero locus and acting between the conventional (absolute) triangulated categories of singularities. Similar results are obtained for matrix factorizations of infinite rank; and two different “large” versions of the triangulated category of relative singularities, corresponding to the approaches of Orlov and Krause, are identified in the case of a Cartier divisor. A version of the Thomason–Trobaugh–Neeman localization theorem is proven for coherent matrix factorizations and disproven for locally free matrix factorizations of finite rank. Contravariant (coherent) and covariant (quasicoherent) versions of the Serre–Grothendieck duality theorems for matrix factorizations are established, and pull-backs and push-forwards of matrix factorizations are discussed at length. A number of general results about derived categories of the second kind for curved differential graded modules (CDG-modules) over quasicoherent CDG-algebras are proven on the way. Hochschild (co)homology of matrix factorization categories are discussed in an appendix.

Journal ArticleDOI
11 Sep 2015 – Voluntas
TL;DR: In this article, the authors call attention to shortcomings in the prevailing market failure/government failure theories of the nonprofit sector that have obscured recognition of key features of the sector that make cooperation with the state a natural and necessary path to effectiveness.
Abstract: This paper challenges widespread philosophical and conceptual theories of the nonprofit sector and the state that question, or leave little conceptual room for, extensive cooperation between nonprofit organizations and government. To do so, the paper calls attention to shortcomings in the prevailing market failure/government failure theories of the nonprofit sector that have obscured recognition of key features of the sector that make cooperation with the state a natural and necessary path to effectiveness, and to certain inherent limitations of the state that make engagement of nonprofits a natural and useful path to state effectiveness. The article then outlines a set of conditions that must be met by both nonprofits and governments for this partnership to achieve the promise of which it is capable.

Book
13 May 2015
TL;DR: This book treats the spaces Hs, elliptic equations and elliptic boundary value problems, second-order strongly elliptic systems in Lipschitz domains, and more general spaces and their applications.
Abstract: Preface.- Preliminaries.- 1 The Spaces Hs.- 2 Elliptic Equations and Elliptic Boundary Value Problems.- 3 The Spaces Hs and Second-Order Strongly Elliptic Systems in Lipschitz Domains.- 4 More General Spaces and Their Applications.- References.- Index.

Journal ArticleDOI
TL;DR: SNSPDs embedded in nanophotonic integrated circuits achieve internal quantum efficiencies close to unity at 1550 nm wavelength, allowing the SNSPDs to be operated at bias currents far below the critical current, where unwanted dark count events reach milli-Hz levels while on-chip detection efficiencies above 70% are maintained.
Abstract: Superconducting nanowire single-photon detectors (SNSPDs) provide high efficiency for detecting individual photons while keeping dark counts and timing jitter minimal. Besides superior detection performance over a broad optical bandwidth, compatibility with an integrated optical platform is a crucial requirement for applications in emerging quantum photonic technologies. Here we present SNSPDs embedded in nanophotonic integrated circuits which achieve internal quantum efficiencies close to unity at 1550 nm wavelength. This allows for the SNSPDs to be operated at bias currents far below the critical current where unwanted dark count events reach milli-Hz levels while on-chip detection efficiencies above 70% are maintained. The measured dark count rates correspond to noise-equivalent powers in the 10^-19 W/Hz^1/2 range and the timing jitter is as low as 35 ps. Our detectors are fully scalable and interface directly with waveguide-based optical platforms.
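A rough consistency check on the noise-equivalent-power figure can be done with the relation commonly used for photon-counting detectors, NEP = (h*nu/eta) * sqrt(2*R_dc). The efficiency and dark count rate plugged in below are illustrative assumptions, not the paper's measured operating point.

    import math

    h = 6.626e-34           # Planck constant, J*s
    c = 2.998e8             # speed of light, m/s
    wavelength = 1550e-9    # m

    eta = 0.7               # assumed on-chip detection efficiency
    dark_count_rate = 1.0   # assumed dark count rate, Hz

    photon_energy = h * c / wavelength
    nep = (photon_energy / eta) * math.sqrt(2 * dark_count_rate)

    print(f"photon energy = {photon_energy:.2e} J, NEP = {nep:.1e} W/Hz^1/2")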

Journal ArticleDOI
TL;DR: The long-standing conjectures of the optimality of Gaussian inputs and additivity are solved for a broad class of gauge-covariant or contravariant bosonic Gaussian channels restricting to the class of states with finite second moments.
Abstract: The long-standing conjectures of the optimality of Gaussian inputs and additivity are solved for a broad class of gauge-covariant or contravariant bosonic Gaussian channels (which includes in particular thermal, additive classical noise, and amplifier channels) restricting to the class of states with finite second moments. We show that the vacuum is the input state which minimizes the entropy at the output of such channels. This allows us to show also that the classical capacity of these channels (under the input energy constraint) is additive and is achieved by Gaussian encodings.

Book ChapterDOI
01 Jan 2015
TL;DR: A novel perspective is used to conceptualize a database view on event data that scopes, binds, and classifies data to create “flat” event logs that can be analyzed using traditional process-mining techniques.
Abstract: Increasingly organizations are using process mining to understand the way that operational processes are executed. Process mining can be used to systematically drive innovation in a digitalized world. Next to the automated discovery of the real underlying process, there are process-mining techniques to analyze bottlenecks, to uncover hidden inefficiencies, to check compliance, to explain deviations, to predict performance, and to guide users towards “better” processes. Dozens (if not hundreds) of process-mining techniques are available and their value has been proven in many case studies. However, process mining stands or falls with the availability of event logs. Existing techniques assume that events are clearly defined and refer to precisely one case (i.e. process instance) and one activity (i.e., step in the process). Although there are systems that directly generate such event logs (e.g., BPM/WFM systems), most information systems do not record events explicitly. Cases and activities only exist implicitly. However, when creating or using process models “raw data” need to be linked to cases and activities. This paper uses a novel perspective to conceptualize a database view on event data. Starting from a class model and corresponding object models it is shown that events correspond to the creation, deletion, or modification of objects and relations. The key idea is that events leave footprints by changing the underlying database. Based on this an approach is described that scopes, binds, and classifies data to create “flat” event logs that can be analyzed using traditional process-mining techniques.
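A minimal sketch of the idea that "events leave footprints by changing the underlying database": given row-level change records, one can scope the records to a case notion, bind each change to a case, and classify it into an activity, yielding a flat event log. The table, the order_id case notion, and the records below are hypothetical, not the API of any process-mining tool.

    # Hypothetical database change records: (timestamp, table, operation, row).
    changes = [
        ("2015-03-01T09:00", "orders",   "insert", {"order_id": 7, "status": "new"}),
        ("2015-03-01T09:05", "payments", "insert", {"payment_id": 3, "order_id": 7}),
        ("2015-03-02T14:20", "orders",   "update", {"order_id": 7, "status": "shipped"}),
    ]

    def to_event_log(changes, case_key="order_id"):
        """Scope, bind, and classify changes into a flat (case, activity, time) log."""
        log = []
        for ts, table, op, row in changes:
            if case_key not in row:        # scoping: keep only changes tied to a case
                continue
            case = row[case_key]           # binding: the chosen case notion
            activity = f"{op} {table}"     # classification: a simple activity label
            log.append((case, activity, ts))
        return sorted(log, key=lambda e: (e[0], e[2]))

    for case, activity, ts in to_event_log(changes):
        print(case, activity, ts)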

Journal ArticleDOI
TL;DR: Across both studies, the authors found that political activism relates positively to self-transcendence and openness to change values, especially to universalism and autonomy of thought, a subtype of self-direction.
Abstract: Using data from 28 countries in four continents, the present research addresses the question of how basic values may account for political activism. Study 1 (N = 35,116) analyses data from representative samples in 20 countries that responded to the 21-item version of the Portrait Values Questionnaire (PVQ-21) in the European Social Survey. Study 2 (N = 7,773) analyses data from adult samples in six of the same countries (Finland, Germany, Greece, Israel, Poland, and United Kingdom) and eight other countries (Australia, Brazil, Chile, Italy, Slovakia, Turkey, Ukraine, and United States) that completed the full 40-item PVQ. Across both studies, political activism relates positively to self-transcendence and openness to change values, especially to universalism and autonomy of thought, a subtype of self-direction. Political activism relates negatively to conservation values, especially to conformity and personal security. National differences in the strength of the associations between individual values and political activism are linked to level of democratization.

Proceedings ArticleDOI
07 Jun 2015
TL;DR: In the experiments with diverse visual descriptors, tree quantization is shown to combine fast encoding and state-of-the-art accuracy in terms of the compression error, the retrieval performance, and the image classification error.
Abstract: We propose a new vector encoding scheme (tree quantization) that obtains lossy compact codes for high-dimensional vectors via tree-based dynamic programming. Similarly to several previous schemes such as product quantization, these codes correspond to codeword numbers within multiple codebooks. We propose an integer programming-based optimization that jointly recovers the coding tree structure and the codebooks by minimizing the compression error on a training dataset. In the experiments with diverse visual descriptors (SIFT, neural codes, Fisher vectors), tree quantization is shown to combine fast encoding and state-of-the-art accuracy in terms of the compression error, the retrieval performance, and the image classification error.
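Tree quantization itself adds an integer-programming step for learning the coding tree, so the sketch below shows only the simpler product quantization baseline mentioned above: the vector is split into sub-vectors and each is replaced by the index of its nearest codeword. The codebooks here are random stand-ins rather than trained ones.

    import random

    random.seed(1)
    dim, num_subspaces, codebook_size = 8, 2, 4
    sub_dim = dim // num_subspaces

    # Random stand-in codebooks: one per subspace, each with codebook_size codewords.
    codebooks = [[[random.random() for _ in range(sub_dim)]
                  for _ in range(codebook_size)]
                 for _ in range(num_subspaces)]

    def encode(vec):
        """Replace each sub-vector by the index of its nearest codeword."""
        code = []
        for m in range(num_subspaces):
            sub = vec[m * sub_dim:(m + 1) * sub_dim]
            dists = [sum((a - b) ** 2 for a, b in zip(sub, word))
                     for word in codebooks[m]]
            code.append(min(range(codebook_size), key=dists.__getitem__))
        return code

    vector = [random.random() for _ in range(dim)]
    print("compact code:", encode(vector))    # one codeword index per codebook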

Journal ArticleDOI
TL;DR: This article reviews the experimental and theoretical findings describing the biophysical determinants of the three primary classes of dendritic operations: linear, sublinear, and supralinear, and describes how global and local integration strategies permit the implementation of similar classes of computations.
Abstract: Nonlinear dendritic integration is thought to increase the computational ability of neurons. Most studies focus on how supralinear summation of excitatory synaptic responses arising from clustered inputs within single dendrites results in the enhancement of neuronal firing, enabling simple computations such as feature detection. Recent reports have shown that sublinear summation is also a prominent dendritic operation, extending the range of subthreshold input-output (sI/O) transformations conferred by dendrites. Like supralinear operations, sublinear dendritic operations also increase the repertoire of neuronal computations, but feature extraction requires different synaptic connectivity strategies for each of these operations. In this article we will review the experimental and theoretical findings describing the biophysical determinants of the three primary classes of dendritic operations: linear, sublinear, and supralinear. We then review a Boolean algebra-based analysis of simplified neuron models, which provides insight into how dendritic operations influence neuronal computations. We highlight how neuronal computations are critically dependent on the interplay of dendritic properties (morphology and voltage-gated channel expression), spiking threshold and distribution of synaptic inputs carrying particular sensory features. Finally, we describe how global (scattered) and local (clustered) integration strategies permit the implementation of similar classes of computations, one example being the object feature binding problem.
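The three classes of operations can be illustrated with a single static subthreshold input-output function per class; the saturating curve for sublinear and the expansive curve for supralinear summation below are toy assumptions, not fitted dendritic models.

    import math

    def linear(total_input):
        return total_input

    def sublinear(total_input, saturation=5.0):
        # Saturating sI/O curve: the summed response falls below the sum of parts.
        return saturation * (1 - math.exp(-total_input / saturation))

    def supralinear(total_input, threshold=2.0, gain=0.6):
        # Expansive sI/O curve: coincident inputs above threshold are boosted.
        return total_input + gain * max(0.0, total_input - threshold) ** 2

    single = 1.0                         # response to one input alone
    for f in (linear, sublinear, supralinear):
        together = f(4 * single)         # four coincident inputs
        print(f"{f.__name__:11s}: 4 inputs -> {together:.2f} "
              f"(sum of parts = {4 * f(single):.2f})")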

Journal ArticleDOI
TL;DR: In this article, the authors pointed out the discrepancy between the engineering concept of energy efficiency and the energy intensity as it is understood in macroeconomic statistics, and suggested that the insufficiency of energy intensity indicators can be compensated with the introduction of thermodynamic indicators describing energy efficiency at the physical, technological, enterprise, sub-sector, sectoral and national levels without references to any economic or financial parameters.

Journal ArticleDOI
TL;DR: It is found that malicious envy is related to schadenfreude, while benign envy is not; this result holds both in the Netherlands, where separate words denote the two types of envy, and in the USA, where a single word is used to denote both.
Abstract: Previous research has yielded inconsistent findings concerning the relationship between envy and schadenfreude. Three studies examined whether the distinction between benign and malicious envy can resolve this inconsistency. We found that malicious envy is related to schadenfreude, while benign envy is not. This result held both in the Netherlands where benign and malicious envy are indicated by separate words (Study 1: Sample A, N = 139; Sample B, N = 150), and in the USA where a single word is used to denote both types (Study 2, N = 180; Study 3, N = 349). Moreover, the effect of malicious envy on schadenfreude was independent of other antecedents of schadenfreude (such as feelings of inferiority, disliking the target person, anger, and perceived deservedness). These findings improve our understanding of the antecedents of schadenfreude and help reconcile seemingly contradictory findings on the relationship between envy and schadenfreude.

Journal ArticleDOI
TL;DR: This paper introduces an alternative semi-probabilistic approach, called additive regularization of topic models (ARTM), which regularizes an ill-posed problem of stochastic matrix factorization by maximizing a weighted sum of the log-likelihood and additional criteria.
Abstract: Probabilistic topic modeling of text collections has been recently developed mainly within the framework of graphical models and Bayesian inference. In this paper we introduce an alternative semi-probabilistic approach, which we call additive regularization of topic models (ARTM). Instead of building a purely probabilistic generative model of text we regularize an ill-posed problem of stochastic matrix factorization by maximizing a weighted sum of the log-likelihood and additional criteria. This approach enables us to combine probabilistic assumptions with linguistic and problem-specific requirements in a single multi-objective topic model. In the theoretical part of the work we derive the regularized EM-algorithm and provide a pool of regularizers, which can be applied together in any combination. We show that many models previously developed within Bayesian framework can be inferred easier within ARTM and in some cases generalized. In the experimental part we show that a combination of sparsing, smoothing, and decorrelation improves several quality measures at once with almost no loss of the likelihood.
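In symbols, and paraphrasing the description above rather than quoting the paper's notation (with n_dw the count of word w in document d, phi_wt = p(w|t) and theta_td = p(t|d) the model matrices, and tau_i the regularizer weights), the ARTM criterion maximizes a regularized log-likelihood:

    \max_{\Phi,\Theta}\;
    \sum_{d \in D} \sum_{w \in d} n_{dw}\,
    \ln\Big( \sum_{t \in T} \phi_{wt}\, \theta_{td} \Big)
    \;+\; \sum_{i} \tau_i\, R_i(\Phi, \Theta)

subject to the usual non-negativity and column-normalization constraints on Phi and Theta; setting all tau_i to zero recovers ordinary maximum-likelihood topic modeling, while sparsing, smoothing, and decorrelation enter as particular choices of R_i.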

Journal ArticleDOI
TL;DR: In this paper, an approach of combining Foresight and integrated roadmapping for corporate innovation management is presented. And the authors apply the suggested approach through case studies of major Russian companies in the oil & gas, energy, and aviation sectors.

Journal ArticleDOI
TL;DR: In this paper, the conditions under which online social networks can increase public awareness of electoral fraud in non-democracies are examined and it is argued that a given online social network will only increase political awareness if it is first politicized by elites.
Abstract: Do online social media undermine authoritarianism? The conditions under which online social networks can increase public awareness of electoral fraud in non-democracies are examined in this article and it is argued that a given online social network will only increase political awareness if it is first politicized by elites. Survey data from the 2011 Russian parliamentary elections show that usage of Twitter and Facebook, which were politicized by opposition elites, significantly increased respondents’ perceptions of electoral fraud, while usage of Russia's domestic social networking platforms, VKontakte and Odnoklassniki, which were not politicized by opposition activists, had no effect on perceptions of fraud. This study elucidates the causes of post-election protest by uncovering a mechanism through which knowledge of electoral fraud spreads.