
Showing papers by "Aalto University published in 2012"


Journal ArticleDOI
Juhani Vaivio1
TL;DR: A review of the second edition of Interviews – Learning the Craft of Qualitative Research Interviewing by Steinar Kvale and Svend Brinkmann (Sage Publications, 2009), a guide to the craft of qualitative research interviewing.
Abstract: Interviews – Learning the Craft of Qualitative Research Interviewing Second edition Steinar Kvale and Svend Brinkmann Sage Publications, 2009 The front cover of the second edition of Interviews – L...

3,009 citations


Proceedings ArticleDOI
03 Oct 2012
TL;DR: A new definition for gamification is proposed that emphasizes the experiential nature of games and gamification instead of the systemic understanding, and ties this definition to theory from service marketing, because the majority of gamification implementations aim towards marketing goals, which brings into the discussion the notion that the customer/user is always ultimately the creator of value.
Abstract: During recent years "gamification" has gained significant attention among practitioners and game scholars. However, the current understanding of gamification has been based solely on the act of adding systemic game elements to services. In this paper, we propose a new definition for gamification that emphasizes the experiential nature of games and gamification instead of the systemic understanding. Furthermore, we tie this definition to theory from service marketing, because the majority of gamification implementations aim towards marketing goals, which brings into the discussion the notion that the customer/user is always ultimately the creator of value. Until now, the main venue for academic discussion on gamification has been the HCI community. We find it relevant both for industry practitioners and for academics to study how gamification can fit into the body of knowledge of the existing service literature, because the goals and the means of gamification and marketing overlap significantly.

1,148 citations


Journal ArticleDOI
TL;DR: It is shown that TMDs can be doped by filling the vacancies created by the electron beam with impurity atoms; these results shed light on the radiation response of a system with reduced dimensionality and suggest new ways of engineering the electronic structure of TMDs.
Abstract: Using first-principles atomistic simulations, we study the response of atomically thin layers of transition metal dichalcogenides (TMDs)--a new class of two-dimensional inorganic materials with unique electronic properties--to electron irradiation. We calculate displacement threshold energies for atoms in 21 different compounds and estimate the corresponding electron energies required to produce defects. For a representative structure of MoS2, we carry out high-resolution transmission electron microscopy experiments and validate our theoretical predictions via observations of vacancy formation under exposure to an 80 keV electron beam. We further show that TMDs can be doped by filling the vacancies created by the electron beam with impurity atoms. Thereby, our results not only shed light on the radiation response of a system with reduced dimensionality, but also suggest new ways for engineering the electronic structure of TMDs.

947 citations


Journal ArticleDOI
TL;DR: If the lowest loss and waste percentages achieved in any region in each step of the food supply chain (FSC) could be reached globally, food supply losses could be halved and there would be enough food for approximately one billion extra people.

943 citations


Journal ArticleDOI
TL;DR: Many-body perturbation and advanced density-functional theory techniques are used to calculate the interlayer binding and exfoliation energies for a large number of layered compounds, showing that, independent of the electronic structure of the material, the energies for most systems are around 20 meV/Å².
Abstract: Although the precise microscopic knowledge of van der Waals interactions is crucial for understanding bonding in weakly bonded layered compounds, very little quantitative information on the strength of interlayer interaction in these materials is available, either from experiments or simulations. Here, using many-body perturbation and advanced density-functional theory techniques, we calculate the interlayer binding and exfoliation energies for a large number of layered compounds and show that, independent of the electronic structure of the material, the energies for most systems are around 20 meV/Å². This universality explains the successful exfoliation of a wide class of layered materials to produce two-dimensional systems, and furthers our understanding of the properties of layered compounds in general.

841 citations


Journal ArticleDOI
28 Mar 2012-JAMA
TL;DR: Adjuvant imatinib administered for 12 months after surgery has improved recurrence-free survival (RFS) of patients with operable gastrointestinal stromal tumor (GIST) compared with placebo; compared with 12 months, 36 months of imatinib further improved RFS and overall survival of GIST patients with a high risk of GIST recurrence.
Abstract: Context Adjuvant imatinib administered for 12 months after surgery has improved recurrence-free survival (RFS) of patients with operable gastrointestinal stromal tumor (GIST) compared with placebo. Objective To investigate the role of imatinib administration duration as adjuvant treatment of patients who have a high estimated risk for GIST recurrence after surgery. Design, Setting, and Patients Patients with KIT-positive GIST removed at surgery were enrolled between February 2004 and September 2008 in this randomized, open-label phase 3 study conducted in 24 hospitals in Finland, Germany, Norway, and Sweden. The risk of GIST recurrence was estimated using the modified National Institutes of Health Consensus Criteria. Intervention Imatinib, 400 mg per day, orally for either 12 months or 36 months, started within 12 weeks of surgery. Main Outcome Measures The primary end point was RFS; the secondary end points included overall survival and treatment safety. Results Two hundred patients were allocated to each group. The median follow-up time after randomization was 54 months in December 2010. Diagnosis of GIST was confirmed in 382 of 397 patients (96%) in the intention-to-treat population at a central pathology review. KIT or PDGFRA mutation was detected in 333 of 366 tumors (91%) available for testing. Patients assigned to 36 months of imatinib had longer RFS compared with those assigned to 12 months (hazard ratio [HR], 0.46; 95% CI, 0.32-0.65). Conclusion Compared with 12 months of adjuvant imatinib, 36 months of imatinib improved RFS and overall survival of GIST patients with a high risk of GIST recurrence. Trial Registration clinicaltrials.gov Identifier: NCT00116935

812 citations


Journal ArticleDOI
TL;DR: Although the modified NIH classification is the best criterion for identifying a single high-risk group for consideration of adjuvant therapy, the prognostic contour maps resulting from non-linear modelling are appropriate for estimating individualised outcomes.
Abstract: Summary Background The risk of recurrence of gastrointestinal stromal tumour (GIST) after surgery needs to be estimated when considering adjuvant systemic therapy. We assessed prognostic factors of patients with operable GIST, to compare widely used risk-stratification schemes and to develop a new method for risk estimation. Methods Population-based cohorts of patients diagnosed with operable GIST, who were not given adjuvant therapy, were identified from the literature. Data from ten series and 2560 patients were pooled. Risk of tumour recurrence was stratified using the National Institute of Health (NIH) consensus criteria, the modified consensus criteria, and the Armed Forces Institute of Pathology (AFIP) criteria. Prognostic factors were examined using proportional hazards and non-linear models. The results were validated in an independent centre-based cohort consisting of 920 patients with GIST. Findings Estimated 15-year recurrence-free survival (RFS) after surgery was 59·9% (95% CI 56·2–63·6); few recurrences occurred after the first 10 years of follow-up. Large tumour size, high mitosis count, non-gastric location, presence of rupture, and male sex were independent adverse prognostic factors. In receiver operating characteristics curve analysis of 10-year RFS, the NIH consensus criteria, modified consensus criteria, and AFIP criteria resulted in an area under the curve (AUC) of 0·79 (95% CI 0·76–0·81), 0·78 (0·75–0·80), and 0·82 (0·80–0·85), respectively. The modified consensus criteria identified a single high-risk group. Since tumour size and mitosis count had a non-linear association with the risk of GIST recurrence, novel prognostic contour maps were generated using non-linear modelling of tumour size and mitosis count, and taking into account tumour site and rupture. The non-linear model accurately predicted the risk of recurrence (AUC 0·88, 0·86–0·90). 
Interpretation The risk-stratification schemes assessed identify patients who are likely to be cured by surgery alone. Although the modified NIH classification is the best criterion for identifying a single high-risk group for consideration of adjuvant therapy, the prognostic contour maps resulting from non-linear modelling are appropriate for estimating individualised outcomes. Funding Academy of Finland, Cancer Society of Finland, Sigrid Juselius Foundation, and Helsinki University Research Funds.
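The area under the ROC curve used above to compare the risk-stratification schemes has a simple rank interpretation: it is the probability that a randomly chosen patient who recurred is scored higher than a randomly chosen patient who did not, with ties counting half. A minimal illustrative sketch of that computation (not the authors' code; the toy scores are made up):

```python
def auc(scores_pos, scores_neg):
    """AUC as the probability that a positive case outscores a negative one
    (ties count as half a win)."""
    wins = 0.0
    for sp in scores_pos:
        for sn in scores_neg:
            wins += 1.0 if sp > sn else (0.5 if sp == sn else 0.0)
    return wins / (len(scores_pos) * len(scores_neg))

# Hypothetical risk scores: recurrences vs. non-recurrences
auc_perfect = auc([0.9, 0.8, 0.4], [0.3, 0.2])  # perfectly separated -> 1.0
```

An AUC of 0.5 corresponds to random ranking; the paper's non-linear model reached 0.88 on this scale.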

759 citations


Journal ArticleDOI
TL;DR: In this article, it was shown that point defects in graphene (fluorine adatoms in concentrations gradually increasing to stoichiometric fluorographene CFxD1:0 and irradiation defects (vacancies) carry magnetic moments with spin 1.
Abstract: The possibility to induce a magnetic response in graphene by the introduction of defects has been generating much interest, as this would expand the already impressive list of its special properties and allow novel devices where charge and spin manipulation could be combined. So far there have been many theoretical studies (for reviews, see refs 1-3) predicting that point defects in graphene should carry magnetic moments of the order of the Bohr magneton, and these can in principle couple (anti)ferromagnetically 1-12. However, experimental evidence for such magnetism remains both scarce and controversial 13-16. Here we show that point defects in graphene—(1) fluorine adatoms in concentrations x gradually increasing to stoichiometric fluorographene CF (x = 1.0) (ref. 17) and (2) irradiation defects (vacancies)—carry magnetic moments with spin 1/2. Both types of defect lead to notable paramagnetism but no magnetic ordering could be detected down to liquid helium temperatures. The induced paramagnetism dominates graphene's low-temperature magnetic properties, despite the fact that the maximum response we could achieve was limited to one moment per approximately 1,000 carbon atoms. This limitation is explained by clustering of adatoms and, for the case of vacancies, by the loss of graphene's structural stability. Our work clarifies the controversial issue of graphene's magnetism and sets limits for other graphitic compounds.
The emerging consensus that magnetism in carbon-based systems can exist is based mostly on a large body of work on magnetic measurements of highly-oriented pyrolytic graphite (HOPG) and carbon films, with many reports of weak ferromagnetic signals at room temperature (T) observed both in pristine HOPG and after its ion irradiation (see, for example, refs 18, 19). However, the whole subject remains controversial, especially concerning (1) the role of possible contamination and (2) the mechanism responsible for the strong interaction required to lead to ferromagnetic ordering at room temperature. Some observations of ferromagnetism are probably artefacts, doing little justice to the subject (one frequent artefact is identified and described in the Supplementary Information, where we show that commonly used HOPG crystals contain micrometre-sized magnetic particles). Adatom magnetism in graphite is also contentious and, for example, different studies of fluorinated graphite have reported inconsistent results 20,21.

738 citations


Journal ArticleDOI
TL;DR: It is shown that consensus clustering can be combined with any existing method in a self-consistent way, enhancing considerably both the stability and the accuracy of the resulting partitions.
Abstract: The community structure of complex networks reveals both their organization and hidden relationships among their constituents. Most community detection methods currently available are not deterministic, and their results typically depend on the specific random seeds, initial conditions and tie-break rules adopted for their execution. Consensus clustering is used in data analysis to generate stable results out of a set of partitions delivered by stochastic methods. Here we show that consensus clustering can be combined with any existing method in a self-consistent way, enhancing considerably both the stability and the accuracy of the resulting partitions. This framework is also particularly suitable to monitor the evolution of community structure in temporal networks. An application of consensus clustering to a large citation network of physics papers demonstrates its capability to keep track of the birth, death and diversification of topics.
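The core object in consensus clustering, as described above, is the consensus (co-assignment) matrix: for each pair of nodes, the fraction of input partitions that place them in the same community. A minimal sketch of that step (the function name and toy partitions are illustrative, not from the paper):

```python
import numpy as np

def consensus_matrix(partitions):
    """Fraction of partitions in which each pair of nodes is co-assigned
    to the same community. `partitions` is a list of label vectors."""
    n = len(partitions[0])
    D = np.zeros((n, n))
    for labels in partitions:
        labels = np.asarray(labels)
        D += (labels[:, None] == labels[None, :]).astype(float)
    return D / len(partitions)

# Three runs of a stochastic community detection method on 5 nodes:
runs = [[0, 0, 1, 1, 1],
        [0, 0, 1, 1, 0],
        [1, 1, 0, 0, 0]]
D = consensus_matrix(runs)
# Nodes 0 and 1 are co-assigned in every run -> D[0, 1] == 1.0
```

In the full procedure, this matrix is thresholded and re-clustered with the same stochastic method until the partition stabilizes.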

727 citations


Journal ArticleDOI
TL;DR: The COST 2100 channel model is a geometry-based stochastic channel model (GSCM) that can reproduce the stochastic properties of MIMO channels over time, frequency, and space.
Abstract: The COST 2100 channel model is a geometry-based stochastic channel model (GSCM) that can reproduce the stochastic properties of MIMO channels over time, frequency, and space. In contrast to other popular GSCMs, the COST 2100 approach is generic and flexible, making it suitable to model multi-user or distributed MIMO scenarios. In this article a concise overview of the COST 2100 channel model is presented. Main concepts are described, together with useful implementation guidelines. Recent developments, including dense multipath components, polarization, and multi-link aspects, are also discussed.

544 citations


Journal ArticleDOI
TL;DR: In this paper, the effects of quantum confinement on the electronic structure of few-layer and bulk MoS2 are investigated using GW first-principles calculations, and exciton energies are evaluated by solving the Bethe-Salpeter equation.
Abstract: Using GW first-principles calculations for few-layer and bulk MoS2, we study the effects of quantum confinement on the electronic structure of this layered material. By solving the Bethe-Salpeter equation, we also evaluate the exciton energy in these systems. Our results are in excellent agreement with the available experimental data. The exciton binding energy is found to increase dramatically from 0.1 eV in the bulk to 1.1 eV in the monolayer. The fundamental band gap increases as well, so that the optical transition energies remain nearly constant. We also demonstrate that environments with different dielectric constants have a profound effect on the electronic structure of the monolayer. Our results can be used for engineering the electronic properties of MoS2 and other transition-metal dichalcogenides and may explain the experimentally observed variations in the mobility of monolayer MoS2.

Journal ArticleDOI
Zari Dastani1, Hivert M-F.2, Hivert M-F.3, N J Timpson4  +615 moreInstitutions (128)
TL;DR: A meta-analysis of genome-wide association studies in 39,883 individuals of European ancestry identifies novel genetic determinants of adiponectin levels, which, taken together, influence risk of T2D and markers of insulin resistance.
Abstract: Circulating levels of adiponectin, a hormone produced predominantly by adipocytes, are highly heritable and are inversely associated with type 2 diabetes mellitus (T2D) and other metabolic traits. We conducted a meta-analysis of genome-wide association studies in 39,883 individuals of European ancestry to identify genes associated with metabolic disease. We identified 8 novel loci associated with adiponectin levels and confirmed 2 previously reported loci (P = 4.5×10(-8)-1.2×10(-43)). Using a novel method to combine data across ethnicities (N = 4,232 African Americans, N = 1,776 Asians, and N = 29,347 Europeans), we identified two additional novel loci. Expression analyses of 436 human adipocyte samples revealed that mRNA levels of 18 genes at candidate regions were associated with adiponectin concentrations after accounting for multiple testing (p<3×10(-4)). We next developed a multi-SNP genotypic risk score to test the association of adiponectin-decreasing risk alleles with metabolic traits and diseases using consortia-level meta-analytic data. This risk score was associated with increased risk of T2D (p = 4.3×10(-3), n = 22,044), increased triglycerides (p = 2.6×10(-14), n = 93,440), increased waist-to-hip ratio (p = 1.8×10(-5), n = 77,167), increased glucose two hours post oral glucose tolerance testing (p = 4.4×10(-3), n = 15,234), and increased fasting insulin (p = 0.015, n = 48,238), but with lower HDL-cholesterol concentrations (p = 4.5×10(-13), n = 96,748) and decreased BMI (p = 1.4×10(-4), n = 121,335). These findings identify novel genetic determinants of adiponectin levels, which, taken together, influence risk of T2D and markers of insulin resistance.
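A multi-SNP genotypic risk score of the kind described is, in its simplest form, a weighted sum of per-individual risk-allele counts (0, 1, or 2 per SNP). A hypothetical sketch of that construction (toy genotypes, equal weights; not the authors' pipeline, which used consortia-level meta-analytic data):

```python
import numpy as np

def genotypic_risk_score(genotypes, weights=None):
    """Per-individual risk score: weighted sum of risk-allele counts.
    `genotypes` has shape (individuals, SNPs) with entries 0/1/2."""
    G = np.asarray(genotypes, dtype=float)
    w = np.ones(G.shape[1]) if weights is None else np.asarray(weights, dtype=float)
    return G @ w

# Two hypothetical individuals genotyped at three risk SNPs:
G = [[0, 1, 2],
     [2, 2, 0]]
scores = genotypic_risk_score(G)  # unweighted: [3.0, 4.0]
```

In practice the weights would be per-SNP effect sizes on adiponectin levels rather than ones.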

Journal ArticleDOI
TL;DR: With the best methods investigated in this experiment, the accuracy of tree height, after removing gross errors, was better than 0.5 m in all tree height classes; minimum curvature-based tree detection accompanied by point cloud-based cluster detection for suppressed trees is a solution that deserves attention in the future.
Abstract: The objective of the “Tree Extraction” project organized by EuroSDR (European Spatial Data Research) and ISPRS (International Society for Photogrammetry and Remote Sensing) was to evaluate the quality, accuracy, and feasibility of automatic tree extraction methods, mainly based on laser scanner data. In the final report of the project, Kaartinen and Hyyppa (2008) reported a high variation in the quality of the published methods under boreal forest conditions and with varying laser point densities. This paper summarizes the findings beyond the final report after analyzing the results obtained in different tree height classes. Omission/commission statistics as well as neighborhood relations are taken into account. Additionally, four automatic tree detection and extraction techniques were added to the test. Several methods in this experiment were superior to manual processing in the dominant, co-dominant and suppressed tree storeys. In general, as expected, the taller the tree, the better the location accuracy. The accuracy of tree height, after removing gross errors, was better than 0.5 m in all tree height classes with the best methods investigated in this experiment. For forest inventory, minimum curvature-based tree detection accompanied by point cloud-based cluster detection for suppressed trees is a solution that deserves attention in the future.

Journal ArticleDOI
TL;DR: This article provides a survey on state estimation in electric power grids and examines the impact on SE of the technological changes being proposed as a part of the smart grid development.
Abstract: This article provides a survey on state estimation (SE) in electric power grids and examines the impact on SE of the technological changes being proposed as a part of the smart grid development. Although SE at the transmission level has a long history, further research and development of innovative SE schemes, including those for distribution systems, are needed to meet the new challenges presented by the requirements of the future grid. This article also presents some example topics that signal processing (SP) research can contribute to help meet those challenges.
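Classical transmission-level state estimation, which the survey takes as its starting point, is usually formulated as weighted least squares over a (here linearized) measurement model z ≈ Hx + e, weighting each measurement by the inverse of its error variance. A minimal linear sketch under that textbook formulation (matrix and values are illustrative, not from the article):

```python
import numpy as np

def wls_state_estimate(H, z, sigma):
    """Weighted-least-squares state estimate: minimize sum_i ((z_i - (Hx)_i)/sigma_i)^2.
    Solves the normal equations (H^T W H) x = H^T W z with W = diag(1/sigma^2)."""
    W = np.diag(1.0 / np.asarray(sigma, dtype=float) ** 2)
    G = H.T @ W @ H                      # gain matrix
    return np.linalg.solve(G, H.T @ W @ z)

# Toy redundant measurement set over a 2-variable state:
H = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
x_true = np.array([2.0, 3.0])
x_hat = wls_state_estimate(H, H @ x_true, sigma=[1.0, 1.0, 1.0])
# noise-free measurements -> x_hat recovers x_true
```

Real power-system SE is nonlinear in the bus voltages and is solved by iterating this step (Gauss-Newton), which is where the smart-grid challenges discussed in the article enter.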

Journal ArticleDOI
TL;DR: An accurate measurement and a quantitative analysis of electron-beam-induced displacements of carbon atoms in single-layer graphene show that a static lattice approximation is not sufficient to describe knock-on damage in this material, while a very good agreement between calculated and experimental cross sections is obtained.
Abstract: We present an accurate measurement and a quantitative analysis of electron-beam-induced displacements of carbon atoms in single-layer graphene. We directly measure the atomic displacement (``knock-on'') cross section by counting the lost atoms as a function of the electron-beam energy and applied dose. Further, we separate knock-on damage (originating from the collision of the beam electrons with the nucleus of the target atom) from other radiation damage mechanisms (e.g., ionization damage or chemical etching) by the comparison of ordinary ($^{12}\mathrm{C}$) and heavy ($^{13}\mathrm{C}$) graphene. Our analysis shows that a static lattice approximation is not sufficient to describe knock-on damage in this material, while a very good agreement between calculated and experimental cross sections is obtained if lattice vibrations are taken into account.
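To first order, counting lost atoms relates to a displacement cross section through the applied dose: the per-atom loss probability divided by the electron dose per unit area. A deliberately simplified, hypothetical estimate along those lines (made-up numbers; the paper's actual analysis accounts for lattice vibrations and dose dependence):

```python
def knockon_cross_section(atoms_lost, atoms_observed, dose_e_per_nm2):
    """First-order displacement cross section in nm^2:
    per-atom loss probability divided by electrons per unit area.
    Valid only for small loss fractions (hypothetical simplification)."""
    return (atoms_lost / atoms_observed) / dose_e_per_nm2

# Hypothetical counts: 12 of 10,000 imaged atoms lost after a dose of 1e6 e/nm^2
sigma_nm2 = knockon_cross_section(12, 10_000, 1e6)
sigma_barn = sigma_nm2 * 1e10   # 1 nm^2 = 1e10 barn
```

Expressing the result in barn matches the unit convention common in the knock-on damage literature.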

Journal ArticleDOI
TL;DR: It is proposed that negative valence synchronizes individuals' brain areas supporting emotional sensations and understanding of another’s actions, whereas high arousal directs individuals’ attention to similar features of the environment.
Abstract: Sharing others’ emotional states may facilitate understanding their intentions and actions. Here we show that networks of brain areas “tick together” in participants who are viewing similar emotional events in a movie. Participants’ brain activity was measured with functional MRI while they watched movies depicting unpleasant, neutral, and pleasant emotions. After scanning, participants watched the movies again and continuously rated their experience of pleasantness–unpleasantness (i.e., valence) and of arousal–calmness. Pearson’s correlation coefficient was used to derive multisubject voxelwise similarity measures [intersubject correlations (ISCs)] of functional MRI data. Valence and arousal time series were used to predict the moment-to-moment ISCs computed using a 17-s moving average. During movie viewing, participants' brain activity was synchronized in lower- and higher-order sensory areas and in corticolimbic emotion circuits. Negative valence was associated with increased ISC in the emotion-processing network (thalamus, ventral striatum, insula) and in the default-mode network (precuneus, temporoparietal junction, medial prefrontal cortex, posterior superior temporal sulcus). High arousal was associated with increased ISC in the somatosensory cortices and visual and dorsal attention networks comprising the visual cortex, bilateral intraparietal sulci, and frontal eye fields. Seed-voxel–based correlation analysis confirmed that these sets of regions constitute dissociable, functional networks. We propose that negative valence synchronizes individuals’ brain areas supporting emotional sensations and understanding of another’s actions, whereas high arousal directs individuals’ attention to similar features of the environment. By enhancing the synchrony of brain activity across individuals, emotions may promote social interaction and facilitate interpersonal understanding.
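The moment-to-moment ISC described above is, at its core, a Pearson correlation between subjects' time series computed inside a sliding window (17 s in the study). A schematic two-subject, single-voxel version (illustrative only; the study computed voxelwise multi-subject ISC maps from fMRI data):

```python
import numpy as np

def moving_isc(ts_a, ts_b, win=17):
    """Sliding-window Pearson correlation between two 1-D time series,
    one value per window position."""
    n = min(len(ts_a), len(ts_b))
    out = []
    for t in range(n - win + 1):
        a, b = ts_a[t:t + win], ts_b[t:t + win]
        out.append(np.corrcoef(a, b)[0, 1])
    return np.array(out)

# Two identical toy "voxel" time series -> ISC of 1 everywhere:
t = np.linspace(0.0, 10.0, 100)
signal = np.sin(t)
isc = moving_isc(signal, signal, win=17)
```

In the study, the resulting ISC time course was then regressed against the valence and arousal ratings.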

Journal ArticleDOI
TL;DR: Applications of CES distributions and the adaptive signal processors based on ML- and M-estimators of the scatter matrix are illustrated in radar detection problems and in array signal processing applications for Direction-of-Arrival estimation and beamforming.
Abstract: Complex elliptically symmetric (CES) distributions have been widely used in various engineering applications for which non-Gaussian models are needed. In this overview, circular CES distributions are surveyed, some new results are derived and their applications e.g., in radar and array signal processing are discussed and illustrated with theoretical examples, simulations and analysis of real radar data. The maximum likelihood (ML) estimator of the scatter matrix parameter is derived and general conditions for its existence and uniqueness, and for convergence of the iterative fixed point algorithm are established. Specific ML-estimators for several CES distributions that are widely used in the signal processing literature are discussed in depth, including the complex t -distribution, K-distribution, the generalized Gaussian distribution and the closely related angular central Gaussian distribution. A generalization of ML-estimators, the M-estimators of the scatter matrix, are also discussed and asymptotic analysis is provided. Applications of CES distributions and the adaptive signal processors based on ML- and M-estimators of the scatter matrix are illustrated in radar detection problems and in array signal processing applications for Direction-of-Arrival (DOA) estimation and beamforming. Furthermore, experimental validation of the usefulness of CES distributions for modelling real radar data is given.
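The iterative fixed-point algorithm mentioned above can be illustrated with Tyler's distribution-free M-estimator of scatter, whose iteration is Σ ← (p/n) Σᵢ xᵢxᵢᵀ / (xᵢᵀ Σ⁻¹ xᵢ) followed by a scale normalization. A sketch under simplifying assumptions (real-valued, zero-mean data; the paper treats the complex case and general CES families):

```python
import numpy as np

def tyler_scatter(X, n_iter=100, tol=1e-8):
    """Tyler's M-estimator of scatter via the fixed-point iteration,
    normalized so that trace(S) = p (scatter is defined up to scale)."""
    n, p = X.shape
    S = np.eye(p)
    for _ in range(n_iter):
        Sinv = np.linalg.inv(S)
        w = 1.0 / np.einsum('ij,jk,ik->i', X, Sinv, X)   # 1 / (x_i^T S^-1 x_i)
        S_new = (p / n) * (X * w[:, None]).T @ X
        S_new *= p / np.trace(S_new)                      # fix the scale
        done = np.linalg.norm(S_new - S, 'fro') < tol
        S = S_new
        if done:
            break
    return S

rng = np.random.default_rng(0)
X = rng.standard_normal((4000, 3))   # toy data with identity-shaped scatter
S = tyler_scatter(X)                 # ≈ identity, up to sampling error
```

The weights downweight samples that are far from the current scatter estimate, which is what buys robustness to heavy tails.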

Proceedings ArticleDOI
01 Apr 2012
TL;DR: This work studies the energy consumption of BLE by measuring real devices with a power monitor, derives models of the basic energy consumption behavior observed from the measurement results, and investigates the overhead of IPv6-based communication over BLE, which is relevant for future IoT scenarios.
Abstract: Ultra low power communication mechanisms are essential for future Internet of Things deployments. Bluetooth Low Energy (BLE) is one promising candidate for such deployments. We study the energy consumption of BLE by measuring real devices with a power monitor and derive models of the basic energy consumption behavior observed from the measurement results. We also investigate the overhead of IPv6-based communication over BLE, which is relevant for future IoT scenarios. We contrast our results by performing similar measurements with ZigBee/802.15.4 devices. Our results show that, compared to ZigBee, BLE is indeed very energy efficient in terms of the number of bytes transferred per joule spent. In addition, the energy overhead of IPv6 communication remains reasonable. We also point out a few specific limitations in current stack implementations and explain that removing those limitations could improve energy utility significantly.
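The "bytes per joule" metric used for the BLE/ZigBee comparison reduces to a simple calculation once average current, supply voltage, and transfer duration have been measured with a power monitor. A toy computation with made-up numbers (not the paper's measurements):

```python
def energy_utility(bytes_transferred, avg_current_a, voltage_v, duration_s):
    """Bytes transferred per joule: payload divided by energy = I * V * t."""
    energy_j = avg_current_a * voltage_v * duration_s
    return bytes_transferred / energy_j

# Hypothetical example: 10 kB sent in 2 s at 5 mA average from a 3 V coin cell
util = energy_utility(10_000, avg_current_a=0.005, voltage_v=3.0, duration_s=2.0)
# 0.03 J consumed -> about 3.3e5 bytes per joule
```

Comparing this figure across radios at the same payload size is exactly the kind of energy-utility comparison the paper reports.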

Journal ArticleDOI
TL;DR: It was found that in a large number of different neuropsychiatric, neurological and neurodevelopmental disorders, as well as in normal ageing, the MMN amplitude was attenuated and its peak latency prolonged; the MMN thus appears to index cognitive decline irrespective of the specific symptomatologies and aetiologies of the different disorders involved.

Journal ArticleDOI
TL;DR: This paper presents a review-based study of Indirect Evaporative Cooling (IEC) technology, covering its background, history, current status, concept, standardisation, system configuration, operational modes, research and industrialisation, market prospects and barriers, and future R&D focuses; good distribution of the water stream across the wet surface of the exchanger plate (tube) and adequate (matching the evaporation) control of the water flow rate are found to be critical to achieving the expected system performance.
Abstract: This paper reports a review-based study of Indirect Evaporative Cooling (IEC) technology, undertaken from a variety of aspects including background, history, current status, concept, standardisation, system configuration, operational mode, research and industrialisation, market prospects and barriers, as well as future R&D focuses. Good distribution of the water stream across the wet surface of the exchanger plate (tube) and adequate (matching the evaporation) control of the water flow rate are critical to achieving the expected system performance. It was noticed that the IEC devices were always in combined operation with other cooling measures, and the commonly available IEC-related operational modes are (1) IEC/DEC systems; (2) IEC/DEC/mechanical vapour compression systems; (3) IEC/desiccant systems; (4) IEC/chilled water systems; and (5) IEC/heat pipe systems. Future potential operational modes may also cover IEC-inclusive fan coil units, air handling units, cooling towers, solar-driven desiccant cycles, and Rankine cycle based power generation systems. Future work on the IEC technology may focus on (1) heat exchanger structure and material; (2) water flow, distribution and treatment; (3) incorporation of IEC components into conventional air conditioning products to enable combined operation between IEC and other cooling devices; (4) economic, environmental and social impacts; (5) standardisation and legislation; (6) public awareness and other dissemination measures; and (7) manufacturing and commercialisation. All of the above efforts may help increase the market share of IEC to around 20% in the next 20 years, which would lead to significant savings in fossil fuel consumption and cuts in carbon emissions related to buildings.

Journal ArticleDOI
TL;DR: The treatment concerns statistical robustness, which deals with deviations from the distributional assumptions, and addresses single and multichannel estimation problems as well as linear univariate regression for independently and identically distributed (i.i.d.) data.
Abstract: The word robust has been used in many contexts in signal processing. Our treatment concerns statistical robustness, which deals with deviations from the distributional assumptions. Many problems encountered in engineering practice rely on the Gaussian distribution of the data, which in many situations is well justified. This enables a simple derivation of optimal estimators. Nominal optimality, however, is useless if the estimator was derived under distributional assumptions on the noise and the signal that do not hold in practice. Even slight deviations from the assumed distribution may cause the estimator's performance to drastically degrade or to completely break down. The signal processing practitioner should, therefore, ask whether the performance of the derived estimator is acceptable in situations where the distributional assumptions do not hold. Isn't robustness, then, a major concern for engineering practice? Many areas of engineering today show that the distribution of the measurements is far from Gaussian, as it contains outliers, which cause the distribution to be heavy tailed. Under such scenarios, we address single and multichannel estimation problems as well as linear univariate regression for independently and identically distributed (i.i.d.) data. A rather extensive treatment of the important and challenging case of dependent data for the signal processing practitioner is also included. For these problems, a comparative analysis of the most important robust methods is carried out by evaluating their performance theoretically, using simulations as well as real-world data.
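The breakdown described above, where a nominally optimal estimator collapses under slight deviations from the assumed distribution, is easy to demonstrate with the sample mean versus the median under outlier contamination (a standard textbook illustration, not an example from the article):

```python
import numpy as np

# 100 well-behaved Gaussian samples plus a few gross outliers:
rng = np.random.default_rng(0)
clean = rng.normal(0.0, 1.0, 100)
contaminated = np.concatenate([clean, [50.0, 60.0, 70.0]])

mean_est = contaminated.mean()        # dragged far from the true location 0
median_est = np.median(contaminated)  # barely moves: 50% breakdown point
```

The mean has a breakdown point of zero (one bad sample suffices), while the median tolerates up to half the data being corrupted; M-estimators of location and regression interpolate between these extremes.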

Journal ArticleDOI
TL;DR: In this paper, the authors introduce the concept of intrapreneurial bricolage to show how middle manager innovators may promote pro-poor business models despite these obstacles.
Abstract: It is often argued that multinational corporations (MNCs) are in a unique position to innovate business models that can help to alleviate poverty. This empirical study into intra-organizational aspects of pro-poor business innovation in two MNCs suggests, however, that certain elements of their management frameworks – such as short-term profit interests, business unit based incentive structures, and uncertainty avoidance – may turn into obstacles that prevent MNCs from reaching their full potential in this respect. We introduce the concept of intrapreneurial bricolage to show how middle manager innovators may promote pro-poor business models despite these obstacles. We define intrapreneurial bricolage as entrepreneurial activity within a large organization characterized by creative bundling of scarce resources, and illustrate empirically how it helps innovators to overcome organizational constraints and to mobilize internal and external resources. Our findings imply that intrapreneurial bricolage may be of fundamental importance in MNC innovation for inclusive business. In addition to the field of inclusive business, this study has implications for the study of bricolage in large organizations and social intrapreneurship, as well for managerial practice around innovation for inclusive business.

Journal ArticleDOI
TL;DR: In this article, the authors focus on the photo-mechanical effect taking place in various material systems incorporating azobenzene, which can be defined as reversible change in shape by absorption of light.
Abstract: The change in shape inducible in some photo-reversible molecules using light can effect powerful changes to a variety of properties of a host material. This class of reversible light-switchable molecules includes molecules that photo-dimerize, such as coumarins and anthracenes; those that allow intra-molecular photo-induced bond formation, such as fulgides, spiro-pyrans, and diarylethenes; and those that exhibit photo-isomerization, such as stilbenes, crowded alkenes, and azobenzenes. The most ubiquitous natural molecule for reversible shape change, however, and perhaps the inspiration for all artificial bio-mimics, is the rhodopsin/retinal protein system that enables vision, and this is the quintessential reversible photo-switch in terms of performance and robustness. Here, the small retinal molecule embedded in a cage of rhodopsin helices isomerizes from a cis geometry to a trans geometry around a C=C double bond with the absorption of just a single photon. The modest shape change of just a few angstroms is quickly amplified and sets off a cascade of larger shape and chemical changes, eventually culminating in an electrical signal to the brain of a vision event, the energy of the input photon amplified many thousands of times in the process. Complicated biochemical pathways then revert the trans isomer back to cis, and set the system back up for another cascade upon subsequent absorption. The reversibility is complete, and many subsequent cycles are possible. The reversion mechanism back to the initial cis state is complex and enzymatic, hence direct application of the retinal/rhodopsin photo-switch to engineering systems is difficult. Perhaps the best artificial mimic of this strong photo-switching effect, however, in terms of reversibility, speed, and simplicity of incorporation, is azobenzene.
Trans and cis states can be switched in microseconds with low-power light, reversibility of 10^5 and 10^6 cycles is routine before chemical fatigue, and a wide variety of molecular architectures is available to the synthetic materials chemist, permitting facile anchoring and compatibility, as well as chemical and physical amplification of the simple geometric change. This review article focuses on the photo-mechanical effect taking place in various material systems incorporating azobenzene. The photo-mechanical effect can be defined as a reversible change in shape upon absorption of light, which results in a significant macroscopic mechanical deformation, and reversible mechanical actuation, of the host material. Thus, we exclude simple thermal expansion effects, reversible but non-mechanical photo-switching or photo-chemistry, as well as the wide range of optical and electro-optical switching effects for which good reviews exist elsewhere. Azobenzene-based material systems are also of great interest for light energy harvesting applications across much of the solar spectrum, yet this emerging field is still at an early enough stage of research output as to not yet warrant review; we hope, however, that some of the ideas put forward here toward promising future directions of research will help guide the field.

Journal ArticleDOI
TL;DR: It is shown that the distribution of the number of events in a bursty period serves as a good indicator of the dependencies, leading to the universal observation of a power-law distribution for a broad class of phenomena.
Abstract: Inhomogeneous temporal processes, like those appearing in human communications, neuron spike trains, and seismic signals, consist of high-activity bursty intervals alternating with long low-activity periods. In recent studies such bursty behavior has been characterized by a fat-tailed inter-event time distribution, while temporal correlations were measured by the autocorrelation function. However, these characteristic functions cannot fully characterize temporally correlated heterogeneous behavior. Here we show that the distribution of the number of events in a bursty period serves as a good indicator of the dependencies, leading to the universal observation of a power-law distribution for a broad class of phenomena. We find that the correlations in these quite different systems can be commonly interpreted by memory effects and described by a simple phenomenological model, which displays temporal behavior qualitatively similar to that in real systems.

Journal ArticleDOI
TL;DR: The results highlight the role of the pSTS in processing multiple aspects of social information, as well as the feasibility and efficiency of fMRI mapping under conditions that resemble the complexity of real life.
Abstract: Despite the abundant data on brain networks processing static social signals, such as pictures of faces, the neural systems supporting social perception in naturalistic conditions are still poorly understood. Here we delineated brain networks subserving social perception under naturalistic conditions in 19 healthy humans who watched, during 3-tesla functional magnetic resonance imaging (fMRI), a set of 137 short (~16 s each, total 27 min) audiovisual movie clips depicting pre-selected social signals. Two independent raters estimated how well each clip represented eight social features (faces, human bodies, biological motion, goal-oriented actions, emotion, social interaction, pain, and speech) and six filler features (places, objects, rigid motion, people not in social interaction, non-goal-oriented action and non-human sounds) lacking social content. These ratings were used as predictors in the fMRI analysis. The posterior superior temporal sulcus (STS) responded to all social features but not to any non-social features, and the anterior STS responded to all social features except bodies and biological motion. We also found four partially segregated, extended networks for processing of specific social signals: 1) a fronto-temporal network responding to multiple social categories, 2) a fronto-parietal network preferentially activated to bodies, motion and pain, 3) a temporo-amygdalar network responding to faces, social interaction and speech, and 4) a fronto-insular network responding to pain, emotions, social interactions, and speech. Our results highlight the role of the posterior STS in processing multiple aspects of social information, as well as the feasibility and efficiency of fMRI mapping under conditions that resemble the complexity of real life.

Journal ArticleDOI
TL;DR: The large-scale cognitive, motor and limbic brain circuitry dedicated to acoustic feature processing during listening to a naturalistic musical stimulus is revealed by combining neuroimaging, acoustic feature extraction and behavioral methods.

Journal ArticleDOI
01 Mar 2012
TL;DR: Cellulose nanocrystals from ramie fibers are studied as stabilizers of oil-in-water emulsions, and the effect of temperature can be counterbalanced with the addition of salt, which is explained by the reduction of electrostatic and steric interactions of poly(NIPAM)-g-CNCs at the oil–water interface.
Abstract: Cellulose nanocrystals (CNCs) from ramie fibers are studied as stabilizers of oil-in-water emulsions. The phase behavior of heptane and water systems is studied, and emulsions stabilized by CNCs are analyzed by using drop sizing (light scattering) and optical, scanning, and freeze-fracture electron microscopies. Water-continuous Pickering emulsions are produced with cellulose nanocrystals (0.05–0.5 wt%) grafted with thermo-responsive poly(NIPAM) brushes (poly(NIPAM)-g-CNCs). They are observed to be stable over a 4-month observation period. In contrast, unmodified CNCs are unable to stabilize heptane-in-water emulsions. After emulsification, poly(NIPAM)-g-CNCs are observed to form aligned, layered structures at the oil–water interface. The emulsions stabilized by poly(NIPAM)-g-CNCs break after heating to a temperature above the lower critical solution temperature (LCST) of poly(NIPAM), which is taken as an indication of the temperature responsiveness of the brushes installed on the particles and thus the responsiveness of the Pickering emulsions. This phenomenon is further elucidated via rheological measurements, in which the viscosities of the Pickering emulsions increase on approach of the LCST of poly(NIPAM). The effect of temperature can be counterbalanced with the addition of salt, which is explained by the reduction of electrostatic and steric interactions of poly(NIPAM)-g-CNCs at the oil–water interface.

Journal ArticleDOI
TL;DR: It is demonstrated that mixed MoS2/MoSe2/MoTe2 compounds are thermodynamically stable at room temperature, so that such materials can be manufactured using the chemical-vapor deposition technique or exfoliated from the bulk mixed materials.
Abstract: Using density-functional theory calculations, we study the stability and electronic properties of single layers of mixed transition metal dichalcogenides (TMDs), such as MoS2xSe2(1–x), which can be referred to as two-dimensional (2D) random alloys. We demonstrate that mixed MoS2/MoSe2/MoTe2 compounds are thermodynamically stable at room temperature, so that such materials can be manufactured using the chemical-vapor deposition technique or exfoliated from the bulk mixed materials. By applying the effective band structure approach, we further study the electronic structure of the mixed 2D compounds and show that general features of the band structures are similar to those of their binary constituents. The direct gap in these materials can be tuned continuously, pointing toward possible applications of 2D TMD alloys in photonics.

Journal ArticleDOI
TL;DR: In this article, a simple and illustrative multipole decomposition of the electric currents excited in the scatterers is introduced, and this decomposition is connected to the classical multipole expansion of the scattered field.
Abstract: Optical properties of natural or designed materials are determined by the electromagnetic multipole moments that light can excite in the constituent particles. In this paper, we present an approach to calculating the multipole excitations in arbitrary arrays of nanoscatterers in a dielectric host medium. We introduce a simple and illustrative multipole decomposition of the electric currents excited in the scatterers and connect this decomposition to the classical multipole expansion of the scattered field. In particular, we find that completely different multipoles can produce identical scattered fields. The presented multipole theory can be used as a basis for the design and characterization of optical nanomaterials.

Journal ArticleDOI
TL;DR: It is shown that the performance of HD based CS is very sensitive to the BEP wall phenomenon while the SD basedCS is more robust in that sense.
Abstract: This paper focuses on the performance analysis and comparison of hard decision (HD) and soft decision (SD) based approaches for cooperative spectrum sensing in the presence of reporting channel errors. For cooperative sensing (CS) in cognitive radio networks, a distributed detection approach with displaced sensors and a fusion center (FC) is employed. For HD based CS, each secondary user (SU) sends a one-bit hard local decision to the FC. For SD based CS, each SU sends a quantized version of a local decision statistic such as the log-likelihood ratio or any suitable sufficient statistic. The decision statistics are sent through channels that may cause errors. The effects of channel errors are incorporated in the analysis through the bit error probability (BEP). For HD based CS, the counting rule or the K-out-of-N rule is used at the FC. For SD based CS, the optimal fusion rule in the presence of reporting channel errors is derived and its distribution is established. A comparison of the two schemes is conducted to show that there is a performance gain in using SD based CS even in the presence of reporting channel errors. In addition, a BEP wall is shown to exist for CS such that if the BEP is above a certain value, then irrespective of the received signal strength corresponding to the primary user, the constraints on false alarm probability and detection probability cannot be met. It is shown that the performance of HD based CS is very sensitive to the BEP wall phenomenon while the SD based CS is more robust in that sense.
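The hard-decision fusion described above admits a compact closed form: if each one-bit local decision is flipped independently with bit error probability pe on the reporting channel, the bit received at the FC is 1 with probability q1 = pd(1 - pe) + (1 - pd)pe under H1 and q0 = pf(1 - pe) + (1 - pf)pe under H0, so the K-out-of-N rule yields binomial tail probabilities. A short Python sketch (the function names and parameter values are illustrative, not from the paper) makes the BEP wall visible: even with perfect local sensing, a nonzero pe floors the global false alarm probability and caps the detection probability.

```python
from math import comb

def fused_prob(p_bit, n, k):
    """P(at least k of n reported bits are 1), bits i.i.d. with P(1) = p_bit."""
    return sum(comb(n, m) * p_bit**m * (1 - p_bit)**(n - m) for m in range(k, n + 1))

def k_out_of_n(pd, pf, pe, n, k):
    """Global (Qd, Qf) for K-out-of-N fusion when each one-bit local
    decision is flipped with bit error probability pe en route to the FC."""
    q1 = pd * (1 - pe) + (1 - pd) * pe   # P(reported bit = 1 | H1)
    q0 = pf * (1 - pe) + (1 - pf) * pe   # P(reported bit = 1 | H0)
    return fused_prob(q1, n, k), fused_prob(q0, n, k)

# even with perfect local sensing (pd = 1, pf = 0), channel errors bound
# what the fusion center can achieve -- the BEP wall
for pe in (0.0, 0.05, 0.2):
    qd, qf = k_out_of_n(pd=1.0, pf=0.0, pe=pe, n=10, k=5)
    print(f"pe={pe:.2f}: Qd={qd:.4f}, Qf={qf:.6f}")
```

Sweeping pe upward shows Qf rising and Qd falling regardless of the local sensing quality, which is the mechanism behind the constraint violation the abstract calls the BEP wall.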