
Showing papers by "Helsinki University of Technology published in 2012"


Journal ArticleDOI
TL;DR: It is demonstrated that native nanofibrillar cellulose (NFC) hydrogels derived from abundant plant sources provide the desired functionalities and generate a feasible and sustained microenvironment for 3D cell culture, for potential applications such as drug and chemical testing, tissue engineering, and cell therapy.

368 citations


Journal ArticleDOI
TL;DR: A unified review of Bayesian predictive model assessment and selection methods, and of methods closely related to them, with an emphasis on how each method approximates the expected utility of using a Bayesian model for the purpose of predicting future data.
Abstract: To date, several methods exist in the statistical literature for model assessment, which purport themselves specifically as Bayesian predictive methods. The decision theoretic assumptions on which these methods are based are not always clearly stated in the original articles, however. The aim of this survey is to provide a unified review of Bayesian predictive model assessment and selection methods, and of methods closely related to them. We review the various assumptions that are made in this context and discuss the connections between different approaches, with an emphasis on how each method approximates the expected utility of using a Bayesian model for the purpose of predicting future data.

332 citations
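
A minimal Python sketch of the expected-utility idea emphasized in the review (my illustration, not code from the paper): leave-one-out cross-validation estimates the expected log predictive density of a simple conjugate Gaussian model. The model, prior values and data below are hypothetical.

```python
# Minimal sketch (not from the paper): estimating the expected utility of a Bayesian
# model by leave-one-out cross-validation, with the log predictive density as the
# utility. Model: y_i ~ N(mu, s2) with known s2 and a conjugate N(m0, t0^2) prior on mu.
import numpy as np
from scipy import stats

def loo_log_predictive_density(y, s2=1.0, m0=0.0, t02=10.0):
    """Mean leave-one-out log predictive density (an expected-utility estimate)."""
    lppd = []
    for i in range(len(y)):
        y_train = np.delete(y, i)
        # Conjugate update for the mean of a normal with known variance.
        post_var = 1.0 / (1.0 / t02 + len(y_train) / s2)
        post_mean = post_var * (m0 / t02 + y_train.sum() / s2)
        # Posterior predictive distribution for the held-out point.
        pred = stats.norm(post_mean, np.sqrt(post_var + s2))
        lppd.append(pred.logpdf(y[i]))
    return np.mean(lppd)

y = np.random.default_rng(0).normal(1.0, 1.0, size=50)
print(loo_log_predictive_density(y))
```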


Journal ArticleDOI
TL;DR: A new type of reversible, localized and instantaneous transition between two Cassie wetting states enabled by two-level (dual-scale) topography of a superhydrophobic surface that allows writing, erasing, rewriting and storing of optically displayed information in plastrons related to different length scales is presented.
Abstract: Nature offers exciting examples for functional wetting properties based on superhydrophobicity, such as the self-cleaning surfaces on plant leaves and trapped air on immersed insect surfaces allowing underwater breathing. They inspire biomimetic approaches in science and technology. Superhydrophobicity relies on the Cassie wetting state where air is trapped within the surface topography. Pressure can trigger an irreversible transition from the Cassie state to the Wenzel state with no trapped air—this transition is usually detrimental for nonwetting functionality and is to be avoided. Here we present a new type of reversible, localized and instantaneous transition between two Cassie wetting states, enabled by two-level (dual-scale) topography of a superhydrophobic surface, that allows writing, erasing, rewriting and storing of optically displayed information in plastrons related to different length scales.

252 citations
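
As background to the Cassie-Wenzel discussion above, the classical Wenzel and Cassie-Baxter relations predict the apparent contact angle from the Young angle and the texture parameters; the sketch below uses illustrative values only and is not taken from the paper.

```python
# Background sketch (illustrative values, not from the paper): classical Wenzel and
# Cassie-Baxter relations for the apparent contact angle on a textured surface.
import numpy as np

def wenzel_angle(theta_young_deg, roughness):
    """Wenzel: cos(theta*) = r * cos(theta_Y), texture fully wetted (no trapped air)."""
    c = np.clip(roughness * np.cos(np.radians(theta_young_deg)), -1, 1)
    return np.degrees(np.arccos(c))

def cassie_baxter_angle(theta_young_deg, solid_fraction):
    """Cassie-Baxter: cos(theta*) = f*(cos(theta_Y) + 1) - 1, air trapped under the drop."""
    c = np.clip(solid_fraction * (np.cos(np.radians(theta_young_deg)) + 1) - 1, -1, 1)
    return np.degrees(np.arccos(c))

# A moderately hydrophobic flat material (theta_Y = 110 deg) becomes
# superhydrophobic in the Cassie state when the solid fraction is small.
print(wenzel_angle(110, roughness=1.8))
print(cassie_baxter_angle(110, solid_fraction=0.1))   # ~160 deg
```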


Journal ArticleDOI
TL;DR: Two improvements for laser-based forest inventory are presented, based on using last pulse data for tree detection and the use of individual tree-based features in addition to the statistical point height metrics in area-based prediction of forest variables.
Abstract: We present two improvements for laser-based forest inventory. The first improvement is based on using last pulse data for tree detection. When trees overlap, the surface model between the trees corresponding to the first pulse stays high, whereas the corresponding model from the last pulse results in a drop in elevation, due to its better penetration between the trees. This drop in elevation can be used for separating trees. In a test carried out in Evo, Southern Finland, we used 292 forest plots consisting of more than 5,500 trees and airborne laser scanning (ALS) data comprising 12.7 emitted laser pulses per m². With last pulse data, an improvement of 6% for individual tree detection was obtained when compared to using first pulse data. The improvement increased with an increasing number of stems per plot and with decreasing diameter at breast height (DBH). The results confirm that there is also substantial information for tree detection in last pulse data. The second improvement is based on the use of individual tree-based features in addition to the statistical point height metrics in area-based prediction of forest variables. The commonly-used ALS point height metrics and individual tree-based features were

167 citations
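
A toy Python illustration of the first improvement described above (arrays and the drop threshold are hypothetical, not the study's data): where the first-pulse canopy surface stays high but the last-pulse surface drops, the difference marks candidate gaps between overlapping crowns.

```python
# Toy sketch of the idea in the abstract (synthetic transect, hypothetical threshold):
# a drop of the last-pulse surface below the first-pulse surface suggests a gap
# between overlapping tree crowns.
import numpy as np

def crown_gap_mask(chm_first, chm_last, drop_threshold=2.0):
    """Boolean mask of candidate tree-separating gaps (drop in metres)."""
    drop = chm_first - chm_last          # penetration of the last pulse
    return drop > drop_threshold

# Two overlapping synthetic crowns on a 1D transect, purely for illustration.
x = np.linspace(0, 20, 200)
chm_first = np.maximum(15 - 0.2 * (x - 7) ** 2, 15 - 0.2 * (x - 13) ** 2).clip(0)
chm_last = chm_first.copy()
chm_last[(x > 9) & (x < 11)] -= 5.0      # last pulse penetrates between the crowns
print(np.where(crown_gap_mask(chm_first, chm_last))[0])
```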


Journal ArticleDOI
TL;DR: In this paper, the authors used a new residential development project in Northern Europe to assess the overall life cycle GHG emissions of a residential area and to evaluate the influence of including the temporal allocation of the life cycle emissions in the assessment.
Abstract: While buildings are often credited as accounting for some 40% of the global greenhouse gas (GHG) emissions, the construction phase is typically assumed to account for only around one tenth of the overall emissions. However, the relative importance of construction phase emissions is quickly increasing as the energy efficiency of buildings increases. In addition, the significance of construction may actually be much higher when the temporal perspective of the emissions is taken into account. The construction phase carbon spike, i.e. high GHG emissions in a short time associated with the beginning of the building’s life cycle, may be high enough to question whether new construction, no matter how energy efficient the buildings are, can contribute to reaching the greenhouse gas mitigation goals of the near future. Furthermore, the construction of energy efficient buildings causes more GHG emissions than the construction of conventional buildings. On the other hand, renovating the current building stock together with making energy efficiency improvements might lead to a smaller construction phase carbon spike and still to the same reduced energy consumption in the use phase as the new energy efficient buildings. The study uses a new residential development project in Northern Europe to assess the overall life cycle GHG emissions of a new residential area and to evaluate the influence of including the temporal allocation of the life cycle GHG emissions in the assessment. In the study, buildings with different energy efficiency levels are compared with a similar hypothetical area of buildings of the average existing building stock, as well as with a renovation of an area with average buildings from the 1960s. The GHG emissions are modeled with a hybrid life cycle assessment. The study suggests that the carbon payback time of constructing new residential areas is several decades long even when using very energy efficient buildings compared to utilizing the current building stock. Thus, while increasing the overall energy efficiency is important in the long term, the construction of new energy efficient buildings cannot be used as a means to achieve the short term and medium term climate change mitigation goals as cities and governments often suggest. Furthermore, given the magnitude of the carbon spike from construction and its implications, the climate change mitigation strategies should set reduction targets for the construction phase emissions alongside the ones for the use phase, which currently receives almost all of the attention from policy-makers.

117 citations
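
The carbon payback argument above reduces to simple arithmetic, sketched below with purely hypothetical placeholder numbers (not values from the study): the extra construction-phase emissions of the new, energy-efficient option are divided by its annual use-phase savings.

```python
# Illustration of the carbon payback logic described in the abstract.
# All numbers are hypothetical placeholders, not results from the study.
def carbon_payback_years(construction_spike_new, construction_spike_alt,
                         annual_use_new, annual_use_alt):
    """Years until the extra construction emissions of the new, energy-efficient
    option are paid back by its lower annual use-phase emissions."""
    extra_construction = construction_spike_new - construction_spike_alt
    annual_saving = annual_use_alt - annual_use_new
    if annual_saving <= 0:
        return float("inf")
    return extra_construction / annual_saving

# New energy-efficient area vs. renovating the existing stock (kgCO2e per m2).
print(carbon_payback_years(construction_spike_new=500, construction_spike_alt=100,
                           annual_use_new=10, annual_use_alt=25))   # ~27 years
```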


Journal ArticleDOI
TL;DR: Results show that soil freezing and thawing processes have an observable effect on the L-band signature of soil, and the presented emission model is able to relate the observed dynamics in brightness temperature to the increase of soil frost.
Abstract: The launch of the European Space Agency (ESA)'s Soil Moisture and Ocean Salinity (SMOS) satellite mission in November 2009 opened a new era of global passive monitoring at L-band (1.4-GHz band reserved for radio astronomy). The main objective of the mission is to measure soil moisture and sea surface salinity; the sole payload is the Microwave Imaging Radiometer using Aperture Synthesis. As part of comprehensive calibration and validation activities, several ground-based L-band radiometers, so-called ETH L-Band radiometers for soil moisture research (ELBARA-II), have been deployed. In this paper, we analyze a comprehensive set of measurements from one ELBARA-II deployment site in the northern boreal forest zone. The focus of this paper is in the detection of the evolution of soil frost (a relevant topic, e.g., for the study of carbon and methane cycles at high latitudes). We investigate the effects that soil freeze/thaw processes have on the L-band signature and present a simple modeling approach to analyze the relation between frost depth and the observed brightness temperature. Airborne observations are used to expand the analysis for different land cover types. Finally, the first SMOS observations from the same period are analyzed. Results show that soil freezing and thawing processes have an observable effect on the L-band signature of soil. Furthermore, the presented emission model is able to relate the observed dynamics in brightness temperature to the increase of soil frost.

114 citations
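
A textbook-style illustration of why freezing is visible at L-band (this is not the paper's emission model): freezing removes liquid water and lowers the soil permittivity, which raises the Fresnel emissivity and hence the brightness temperature TB ≈ e·T_eff. The permittivity values below are illustrative assumptions.

```python
# Textbook-style illustration (not the paper's model): at nadir the smooth-surface
# emissivity is e = 1 - |(sqrt(eps) - 1) / (sqrt(eps) + 1)|^2 and TB ~ e * T_eff.
import numpy as np

def nadir_brightness_temperature(eps_soil, t_eff_kelvin):
    r = abs((np.sqrt(eps_soil) - 1) / (np.sqrt(eps_soil) + 1)) ** 2   # Fresnel reflectivity
    return (1 - r) * t_eff_kelvin

# Hypothetical permittivities: moist unfrozen soil vs. frozen soil.
print(nadir_brightness_temperature(eps_soil=20.0, t_eff_kelvin=275))  # lower TB, unfrozen
print(nadir_brightness_temperature(eps_soil=5.0,  t_eff_kelvin=270))  # higher TB after freezing
```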


Journal ArticleDOI
TL;DR: In this paper, a review summarizes the proposed mechanisms for irreversible coalescence of cellulose microfibrils within fibers during various common industrial treatments for chemical pulp fibers as well as the methods to evaluate it.
Abstract: This review summarizes the proposed mechanisms for irreversible coalescence of cellulose microfibrils within fibers during various common industrial treatments for chemical pulp fibers as well as the methods to evaluate it. It is a phenomenon vital for cellulose accessibility but still under considerable debate. The proposed coalescence mechanisms include irreversible hydrogen bonding. Coalescence is induced by high temperature and by the absence of obstructing molecules, such as water, hemicelluloses, and lignin. The typical industrial processes, in the course of which nano-scale coalescence and possible aggregation of cellulose microfibrillar elements occurs, are drying and chemical pulping. Coalescence reduces cellulose accessibility and therefore, in several instances, the quality of cellulose as a raw material for novel products. The degree of coalescence also affects the processing and the quality of the products. For traditional paper-based products, the loss of strength properties is a major disadvantage. Some properties lost during coalescence can be restored to a certain extent by, e.g., beating. Several factors, such as charge, have an influence on the intensity of the coalescence. The evaluation of the phenomenon is commonly conducted by water retention value measurements. Other techniques, such as deuteration combined with FTIR spectroscopy, are being applied for better understanding of the changes in cellulose accessibility.

102 citations


Journal ArticleDOI
TL;DR: In this article, the vapour-driven Marangoni effect is used for the continuous self-propulsion of floating soft machines by transduction of chemical energy to motility, featuring a prolonged locomotion at steady velocity with a small amount of onboard fuel.
Abstract: We show the vapour-driven Marangoni effect as a new paradigm for the continuous self-propulsion of floating soft machines by transduction of chemical energy to motility, featuring a prolonged locomotion at steady velocity with a small amount of on-board fuel. The propulsion is induced by modification of the liquid surface using organic vapour transported through a nanocellulose aerogel membrane. The steady velocity is achieved through a continuous supply of fuel vapour that lowers the surface tension of the liquid, combined with the spontaneous recovery of the surface tension after the floating machine has passed. The membranes are gas permeable from their open-porous nanofibrillar structure and float on water and oils due to their superhydrophobic and superoleophobic nature. The velocity is tunable by selecting solvents with different vapour pressure.

77 citations


Proceedings Article
21 Mar 2012
TL;DR: This work shows how spatio-temporal Gaussian process (GP) regression problems can be formulated as infinite-dimensional Kalman filtering and Rauch-Tung-Striebel (RTS) smoothing problems, and presents a procedure for converting spatio-temporal covariance functions into infinite-dimensional stochastic differential equations (SDEs).
Abstract: We show how spatio-temporal Gaussian process (GP) regression problems (or the equivalent Kriging problems) can be formulated as infinite-dimensional Kalman filtering and Rauch-Tung-Striebel (RTS) smoothing problems, and present a procedure for converting spatio-temporal covariance functions into infinite-dimensional stochastic differential equations (SDEs). The resulting infinite-dimensional SDEs belong to the class of stochastic pseudo-differential equations and can be numerically treated using the methods developed for deterministic counterparts of the equations. The scaling of the computational cost in the proposed approach is linear in the number of time steps as opposed to the cubic scaling of the direct GP regression solution. We also show how separable covariance functions lead to a finite-dimensional Kalman filtering and RTS smoothing problem, present analytical and numerical examples, and discuss numerical methods for computing the solutions.

75 citations
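
A minimal Python sketch of the finite-dimensional special case mentioned at the end of the abstract (assumptions: a purely temporal Matérn-3/2 covariance, evenly sampled data, Gaussian noise): the GP is written as a two-state linear SDE and evaluated with a Kalman filter, so the cost is linear in the number of time steps.

```python
# Minimal sketch of the temporal special case: a Matern-3/2 GP written as a 2-state
# linear SDE and filtered with a Kalman filter in O(n) time. Hyperparameters and
# data below are illustrative.
import numpy as np
from scipy.linalg import expm

def matern32_kalman_filter(t, y, lengthscale=1.0, sigma2=1.0, noise2=0.1):
    lam = np.sqrt(3.0) / lengthscale
    F = np.array([[0.0, 1.0], [-lam**2, -2.0 * lam]])      # SDE drift matrix
    Pinf = np.diag([sigma2, sigma2 * lam**2])               # stationary state covariance
    H = np.array([[1.0, 0.0]])                              # observe the function value
    m, P = np.zeros(2), Pinf.copy()
    means = []
    for k in range(len(t)):
        if k > 0:
            A = expm(F * (t[k] - t[k - 1]))                 # discrete-time transition
            Q = Pinf - A @ Pinf @ A.T                       # matching process noise
            m, P = A @ m, A @ P @ A.T + Q
        S = H @ P @ H.T + noise2                            # Kalman update with y[k]
        K = P @ H.T / S
        m = m + (K * (y[k] - H @ m)).ravel()
        P = P - K @ H @ P
        means.append(m[0])
    return np.array(means)

t = np.linspace(0, 10, 200)
y = np.sin(t) + 0.3 * np.random.default_rng(1).standard_normal(t.size)
print(matern32_kalman_filter(t, y)[:5])
```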


Journal ArticleDOI
TL;DR: By the controlled coalescence of reactive droplets, here using the quenching of fluorescent metal nanoclusters as a model reaction, this work presents elementary Boolean logic operations and a flip-flop memory based on these rebounding water droplets.
Abstract: When water droplets impact each other while traveling on a superhydrophobic surface, we demonstrate that they are able to rebound like billiard balls. We present elementary Boolean logic operations and a flip-flop memory based on these rebounding water droplet collisions. Furthermore, bouncing or coalescence can be easily controlled by process parameters. Thus by the controlled coalescence of reactive droplets, here using the quenching of fluorescent metal nanoclusters as a model reaction, we also demonstrate an elementary operation for programmable chemistry.

74 citations


Journal ArticleDOI
TL;DR: In this paper, the authors considered a two-player zero-sum game in a bounded open domain, where Players I and II play an ε-step tug-of-war game with probability α, and with probability β a random point in the ε-ball around the current position is chosen.
Abstract: We consider a two-player zero-sum game in a bounded open domain Ω described as follows: at a point x ∈ Ω, Players I and II play an ε-step tug-of-war game with probability α, and with probability β (α + β = 1) a random point in the ball of radius ε centered at x is chosen. Once the game position reaches the boundary, Player II pays Player I the amount given by a fixed payoff function F. We give a detailed proof of the fact that the value functions of this game satisfy the Dynamic Programming Principle \begin{equation*} u(x) = \frac{\alpha}{2} \left\{ \sup_{y \in \overline{B}_{\varepsilon}(x)} u(y) + \inf_{y \in \overline{B}_{\varepsilon}(x)} u(y) \right\} + \beta \fint_{B_{\varepsilon}(x)} u(y) \, dy, \end{equation*} for x ∈ Ω, with u(y) = F(y) when y ∉ Ω; here the barred integral denotes the mean value over the ball B_ε(x). This principle implies the existence of quasioptimal Markovian strategies.
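
A toy one-dimensional Python illustration of the Dynamic Programming Principle stated above (grid, ε and payoff F are illustrative choices, not from the paper): the value is computed by fixed-point iteration of u ← (α/2)(sup + inf over the ε-ball) + β·(mean over the ε-ball), with u = F outside the domain.

```python
# Toy 1D illustration of the dynamic programming principle above; domain, epsilon
# and payoff are illustrative choices.
import numpy as np

def tug_of_war_value(alpha=0.5, beta=0.5, eps=0.1, n=101, iters=5000):
    x = np.linspace(-1.5, 1.5, n)                 # Omega = (-1, 1), padded by eps
    inside = np.abs(x) < 1.0
    F = np.where(x >= 1.0, 1.0, 0.0)              # payoff on and outside the boundary
    u = F.copy()
    r = int(round(eps / (x[1] - x[0])))           # ball radius in grid points
    for _ in range(iters):
        new = u.copy()
        for i in np.where(inside)[0]:
            ball = u[max(i - r, 0): i + r + 1]
            new[i] = 0.5 * alpha * (ball.max() + ball.min()) + beta * ball.mean()
        u = new
    return x, u

x, u = tug_of_war_value()
print(u[np.abs(x) < 1.0][:5])                     # approximate game values inside Omega
```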

Journal ArticleDOI
TL;DR: In this paper, a charge-balancing accelerometer is presented, which consists of a micromechanical sensor element, a self-balancing bridge (SBB) open-loop readout, AC force feedback and a ΔΣ ADC.
Abstract: In this paper, a charge-balancing accelerometer is presented. A hybrid interface topology is utilised to achieve high resolution, high linearity and low power supply sensitivity. The accelerometer consists of a micromechanical sensor element, a self-balancing bridge (SBB) open-loop readout, AC force feedback and a ΔΣ ADC. The SBB converts acceleration to a ratiometric voltage. The ratiometric output of the SBB is converted to the digital domain by the ADC. In order to achieve high resolution, a micromechanical sensor element with a high quality factor, Q, is utilised. The AC force feedback is used for damping the high Q to obtain a low settling time. The sensor interface is fabricated in a standard 0.35 μm CMOS process. The fabricated chip has an area of 6.66 mm² and consumes 1 mA at a nominal supply voltage of 3.6 V. The sensor has a maximum DC nonlinearity of 1.3% over the commercial temperature range with an input range of ±1.15 g. The noise floor of the sensor is around 2 μg/√Hz and the signal bandwidth is 200 Hz. The bias instability is 13 μg and the sensor gain variation is less than 5% in the 3-3.6 V supply range.
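
A back-of-the-envelope check of the reported figures, under a white-noise assumption (my arithmetic, not from the paper): RMS noise ≈ noise density × √bandwidth.

```python
# Back-of-the-envelope check of the quoted noise figures (white-noise assumption).
import math

noise_density_ug = 2.0    # ug/sqrt(Hz), noise floor quoted in the abstract
bandwidth_hz = 200.0      # Hz, signal bandwidth quoted in the abstract
rms_noise_ug = noise_density_ug * math.sqrt(bandwidth_hz)
print(rms_noise_ug)       # ~28 ug RMS over the full signal band
```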

Journal ArticleDOI
TL;DR: In this paper, a spray-dried nano-fibrillated cellulose (NFC) surface was shown to have hierarchical surface roughness, similar to lotus leaves, due to the microparticles and their porosity at a considerably smaller length scale.
Abstract: Nanofibrillated cellulose (NFC, also called microfibrillated cellulose and native cellulose nanofibers) is an attractive sustainable nanofibrillar material to template functionalities. It allows the modification of wetting properties, but so far surfaces with NFC have suffered from high adhesion of water droplets, even when the contact angles have been large. Here we show that spray-dried NFC leads to hierarchical surface roughness, closely resembling that of lotus leaves, due to the microparticles and their porosity at a considerably smaller length scale. We present the first report on superhydrophobic surfaces from NFC with contact angle hysteresis of only a few degrees upon surface modification. The shown process to achieve the hierarchies is particularly straightforward involving airbrushing of solvent-based NFC onto the surface, followed by quick drying, and a chemical modification performed either before or after the airbrushing, with essentially similar results. The NFC microparticles also enable the formation of liquid marbles. The presented method is technically feasible, as cellulose is an economic, abundant material from nature and the spraying processes are scalable. The shown approach could allow further functionalities, such as self-cleaning and water droplet manipulation.

01 Jan 2012
TL;DR: In this article, the authors investigate different ways in which players have been categorized in game research literature in order to distinguish relevant customer segments for designing and marketing of game's value offerings.
Abstract: This paper investigates different ways in which players have been categorized in game research literature in order to distinguish relevant customer segments for designing and marketing of game’s value offerings. This paper adopts segmentation and marketing theory as its bases of analysis. The goal is to synthesize the results of various studies, to find the prevailing concepts, combine them, and draw implications for further studies and for segmentation of the player base. The research process for this study proceeded from a large literature search to an author-centric (Webster & Watson 2002) identification and categorization of previous works based on the established factors of segmentation (demographic, psychographic, and behavioral variables) in marketing theory. The previous works on player typologies were further analyzed using a concept-centric approach and synthesized according to common and repeating factors in the previous studies. The results indicate that player typologies in previous literature can be synthesized into seven key dimensions: Skill, Achievement, Exploration, Sociability, Killer, Immersion and In-game demographics. The paper highlights for further studies the self-fulfilling and self-validating nature of the current player typologies because of their relatively high use in game design practices, and discusses the role of game design in segmentation of players.

Journal ArticleDOI
TL;DR: In this article, a method is presented to determine the fields of a multilayer sphere in a static field, where two types of radial dependencies are interpreted as the forward and backward propagating waves, like in a transmission line.
Abstract: A novel method is presented in this paper to determine the fields of a multilayer sphere in a static field. The solution of the Laplace equation, being of second order, allows two types of radial dependencies. These are interpreted as the forward and backward propagating waves, like in a transmission line. Propagation and scattering matrices relate the amplitudes of these waves in adjacent regions, which permits solving the fields without having to resort to inverting a 2N × 2N matrix for an N-layer sphere. Also the scattered field of a sphere with a continuous permittivity profile is analyzed by deriving a differential equation for the scattered potential and solving this potential for the special case of a linearly decreasing permittivity profile. The results are used in determining the effective permittivity of dielectric mixtures with the multilayer and continuous-permittivity inhomogeneous scatterers. In enumerating this macroscopic effective permittivity, the propagating component of the internal fie...
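
A minimal Python sketch of the transfer-matrix idea for the simplest case, a layered sphere in a uniform static field (my own reconstruction, not the paper's formulation): only the dipole term of the Laplace solution matters, each layer's potential is (A·r + B/r²)cos θ, and 2×2 interface matrices carry (A, B) outward; the final coefficient ratio gives an equivalent homogeneous permittivity.

```python
# Sketch of the transfer-matrix idea for a layered sphere in a uniform static field
# (my reconstruction; only the dipole l=1 term of the Laplace solution is needed).
import numpy as np

def interface_matrix(radius, eps):
    # Rows: continuity of the potential and of the normal D-field at r = radius.
    return np.array([[radius, 1.0 / radius**2],
                     [eps, -2.0 * eps / radius**3]])

def equivalent_permittivity(radii, eps_layers, eps_host):
    """radii: outer radius of each layer (ascending); eps_layers: same length."""
    coeff = np.array([1.0, 0.0])                 # core: B = 0, potential regular at the origin
    eps_seq = list(eps_layers) + [eps_host]
    for i, a in enumerate(radii):
        M_in = interface_matrix(a, eps_seq[i])
        M_out = interface_matrix(a, eps_seq[i + 1])
        coeff = np.linalg.solve(M_out, M_in @ coeff)   # propagate (A, B) across the interface
    A, B = coeff
    R = -B / (A * radii[-1] ** 3)                # Clausius-Mossotti-type dipole ratio
    return eps_host * (1 + 2 * R) / (1 - R)

# Sanity check: equal layer permittivities reproduce the homogeneous sphere (~4.0).
print(equivalent_permittivity([0.5, 1.0], [4.0, 4.0], eps_host=1.0))
print(equivalent_permittivity([0.5, 1.0], [10.0, 2.0], eps_host=1.0))   # coated sphere
```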

Journal ArticleDOI
TL;DR: In this article, the Frechet derivative of the boundary measurement is introduced, which enables the fine-tuning of the information on the electrode positions as a part of a Newton-type output least squares (OLS) algorithm.
Abstract: Electrical impedance tomography is a noninvasive imaging technique for recovering the admittivity distribution inside a body from boundary measurements of current and voltage. In practice, impedance tomography suffers from inaccurate modelling of the measurement setting: The exact electrode locations and the shape of the imaged object are not necessarily known precisely. In this work, we tackle the problem with imperfect electrode information by introducing the Frechet derivative of the boundary measurement map of impedance tomography with respect to the electrode shapes and locations. This enables us to include the fine-tuning of the information on the electrode positions as a part of a Newton-type output least squares reconstruction algorithm; we demonstrate that this approach is feasible via a two-dimensional numerical example based on simulated data. The impedance tomography measurements are modelled by the complete electrode model, which is in good agreement with real-life electrode measurements.
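
A generic Python sketch of the output least squares idea (the forward map below is a hypothetical stand-in, not the complete electrode model): admittivity and electrode-position parameters would be stacked into one vector and updated with regularized Gauss-Newton steps, the electrode-position columns of the Jacobian being exactly the Frechet derivatives discussed.

```python
# Generic regularized Gauss-Newton loop; the toy forward map merely exercises it and
# is not the complete electrode model of the paper.
import numpy as np

def gauss_newton(forward, jacobian, theta0, measurements, reg=1e-3, iters=10):
    """Minimize ||forward(theta) - measurements||^2 over the stacked parameter vector."""
    theta = theta0.astype(float).copy()
    for _ in range(iters):
        r = forward(theta) - measurements           # residual: simulated minus measured data
        J = jacobian(theta)                         # Jacobian of the forward map at theta
        step = np.linalg.solve(J.T @ J + reg * np.eye(theta.size), -J.T @ r)
        theta += step
    return theta

def numerical_jacobian(forward, theta, h=1e-6):
    """Finite-difference Jacobian, standing in for analytical Frechet derivatives."""
    f0 = forward(theta)
    cols = [(forward(theta + h * e) - f0) / h for e in np.eye(theta.size)]
    return np.column_stack(cols)

# Toy forward map (an exponential curve) with noiseless synthetic "measurements".
t_grid = np.linspace(0.0, 5.0, 20)
toy_forward = lambda th: th[0] * np.exp(-th[1] * t_grid)
data = toy_forward(np.array([2.0, 0.7]))
est = gauss_newton(toy_forward, lambda th: numerical_jacobian(toy_forward, th),
                   np.array([1.0, 1.0]), data)
print(est)   # approaches approximately [2.0, 0.7] on this toy problem
```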


Journal ArticleDOI
TL;DR: In this paper, the results of 1.5-dimensional simulations of density profiles and pellet fuelling for the ITER baseline scenario performed with the ASTRA and JETTO transport codes by the European Task Force on Integrated Tokamak Modelling are presented.
Abstract: The paper presents the results of 1.5-dimensional simulations of density profiles and pellet fuelling for the ITER baseline scenario performed with the ASTRA and JETTO transport codes by the ITER Scenario Modelling working group within the European Task Force on Integrated Tokamak Modelling. The first part of the paper describes the physics of the problem and how it is implemented in the different codes available to the working group. The second part presents the results of the simulations. Results obtained with the GLF23 physics-based transport model and a simplified description of the pellet particle source are described alongside results obtained with the simpler Bohm/gyro-Bohm semi-empirical transport model and a more sophisticated pellet ablation/deposition code providing a completely self-consistent description of the pellet source. A parametric study has been performed to assess the effect of varying parameters independently, the values of which in ITER are either uncertain or not easily controllable (such as particle diffusivity, edge stability, wall recycling and boundary conditions), on the target plasma density, temperature, Q and pellet frequency required to achieve a certain degree of density control. To this end, the edge particle diffusivity was increased by a factor of three, the pedestal normalized critical pressure gradient for ballooning stability was decreased by 20%, the boundary conditions on density and temperature were modified by 30–40% and the wall recycling particle source was increased from zero to 20% of the particle outflux. The results show that variations in the order of 15% for density and temperature, 40% for Q and 100% for the pellet frequency can be expected. Open problems and modelling needs are also discussed in the paper.

Posted Content
TL;DR: This paper surveys the trends in gender gaps in education, their causes and potential policy implications and shows that female educational attainment has surpassed, or is about to surpass, male educational attainment in most industrialized countries.
Abstract: This paper surveys the trends in gender gaps in education, their causes and potential policy implications. I show that female educational attainment has surpassed, or is about to surpass, male educational attainment in most industrialized countries. These gaps reflect male overrepresentation among secondary school drop-outs and female overrepresentation among tertiary education students and graduates. Existing evidence suggests that this pattern is a result of a combination of increasing returns to education and lower female effort costs of education. The widening gender gap in education, combined with recent wage and employment polarization, will likely lead to widening inequalities and is linked to declining male labor force participation. The paper discusses evidence on educational policies that both widen and reduce gender gaps in educational outcomes.

Journal Article
TL;DR: In this article, the authors investigated the role of the world's largest food and agribusiness corporations in water security via case studies of Nestle, Bunge and Cargill.
Abstract: This article investigates the agency of the world's largest food and agribusiness corporations in global water security via case studies of Nestle, Bunge and Cargill by analysing their position in the political economy of the world agro-food system and the ways they intentionally and non-intentionally manage and govern water in their value chains and wider networks of influence. The concentrated power of a few corporations in global agro-food value chains and their ability to influence the agro-food market dynamics and networks throughout the world pose asymmetric conditions for reaching not only global food security but also water security. The article will analyse the different forms of power exercised by the corporations in focus in relation to global water security and the emerging transnational water governance regime, and the extent to which their value chain position and stakeholder interaction reflect or drive their actions. Due to their vast infrastructural and technological capacity and major role in the global agro-food political economy, food and agribusiness corporations cannot avoid increasingly engaging, for endogenous and exogenous reasons, in multi-stakeholder initiatives and partnerships to devise methods of managing the agro-food value chains and markets to promote global water security. However, their asymmetric position in relation to their stakeholders demands continuous scrutiny.

Posted Content
TL;DR: In this article, an approximate Bayesian inference for LGP density estimation in a grid using Laplace's method to integrate over the non-Gaussian posterior distribution of latent function values and to determine the covariance function parameters with type-II maximum a posteriori (MAP) estimation is presented.
Abstract: Logistic Gaussian process (LGP) priors provide a flexible alternative for modelling unknown densities. The smoothness properties of the density estimates can be controlled through the prior covariance structure of the LGP, but the challenge is the analytically intractable inference. In this paper, we present approximate Bayesian inference for LGP density estimation in a grid using Laplace's method to integrate over the non-Gaussian posterior distribution of latent function values and to determine the covariance function parameters with type-II maximum a posteriori (MAP) estimation. We demonstrate that Laplace's method with MAP is sufficiently fast for practical interactive visualisation of 1D and 2D densities. Our experiments with simulated and real 1D data sets show that the estimation accuracy is close to a Markov chain Monte Carlo approximation and state-of-the-art hierarchical infinite Gaussian mixture models. We also construct a reduced-rank approximation to speed up the computations for dense 2D grids, and demonstrate density regression with the proposed Laplace approach.
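
A simplified Python sketch of Laplace's method for LGP density estimation on a 1D grid (my simplification with a squared-exponential prior; grid size, hyperparameters and data are illustrative): the MAP of the latent function is found with damped Newton steps on the multinomial log posterior, and the cell probabilities are the softmax of the latent values.

```python
# Simplified sketch of Laplace/MAP logistic GP density estimation on a uniform 1D grid.
import numpy as np

def lgp_density_grid(samples, grid, lengthscale=0.5, sigma2=1.0, iters=50):
    h = grid[1] - grid[0]
    edges = np.append(grid - h / 2, grid[-1] + h / 2)
    counts, _ = np.histogram(samples, bins=edges)        # cell counts c_j
    n = counts.sum()
    d = grid[:, None] - grid[None, :]                    # squared-exponential GP prior
    K = sigma2 * np.exp(-0.5 * (d / lengthscale) ** 2) + 1e-6 * np.eye(grid.size)
    Kinv = np.linalg.inv(K)
    f = np.zeros(grid.size)
    for _ in range(iters):
        p = np.exp(f - f.max()); p /= p.sum()            # cell probabilities softmax(f)
        grad = (counts - n * p) - Kinv @ f               # gradient of the log posterior
        W = n * (np.diag(p) - np.outer(p, p))            # minus the likelihood Hessian
        f = f + 0.5 * np.linalg.solve(W + Kinv, grad)    # damped Newton step towards the MAP
    p = np.exp(f - f.max()); p /= p.sum()
    return p / h                                         # normalized density estimate

rng = np.random.default_rng(0)
grid = np.linspace(-4.0, 4.0, 60)
density = lgp_density_grid(rng.normal(0.0, 1.0, 500), grid)
print(density.max(), density.sum() * (grid[1] - grid[0]))   # peak roughly 0.4, integrates to 1
```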

Journal Article
TL;DR: In this paper, the authors analyse motivations and strategies of the major F&B corporations participating in the debate and developing different water accounting, disclosure and risk-assessment tools, and find that the corporations share similar goals and values with regard to the emerging regime.
Abstract: The current debate on water accounting and accountability among transnational actors such as corporations and NGOs is likely to contribute to the emergence of a global water governance regime. Corporations within the food and beverage sector (F&B) are central actors in this debate; therefore, in this article we analyse motivations and strategies of the major F&B corporations participating in the debate and developing different water accounting, disclosure and risk-assessment tools. Neo-institutionalism and neo-Gramscian regime theory provide the basis for our framework to analyse the discursive, material and organisational corporate water strategies. Findings based on an analysis of the chosen F&B corporations’ sustainability reports and interviews with key informants suggest that the corporations share similar goals and values with regard to the emerging regime. They seek a standardisation that is practical and supportive in improving their water efficiency and communication with stakeholders. This indicates that some harmonisation has taken place over time and new actors have been pursuing the path of the pioneering companies, but the lead corporations are also differentiating their strategies, thus engaging in hegemonic positioning. However, so far the plethora of NGO-driven accountability initiatives and tools has fragmented the field more than the 'war of position' amongst the corporations has. Furthermore, several companies claim to have proceeded from internal water-risk management to reducing risks throughout their value chains and watersheds. As a result they are 'creating shared value' with stakeholders, and potentially manifesting an emergent paradigm that goes beyond a private regime framework. Nevertheless, in the absence of verification schemes, questions of sustainability and legitimacy of such actions on the ground prevail and remain a topic for further research.

Proceedings Article
21 Mar 2012
TL;DR: The proposed Gaussian process inference scheme is compared to the standard approach using the sparse Cholesky decomposition and it is shown to be much faster and computationally feasible for 100–1000 times larger datasets.
Abstract: This paper presents an efficient Gaussian process inference scheme for modeling short-scale phenomena in spatio-temporal datasets. Our model uses a sum of separable, compactly supported covariance functions, which yields a full covariance matrix represented in terms of small sparse matrices operating either on the spatial or temporal domain. The proposed inference procedure is based on Gibbs sampling, in which samples from the conditional distribution of the latent function values are obtained by applying a simple linear transformation to samples drawn from the joint distribution of the function values and the observations. We make use of the proposed model structure and the conjugate gradient method to compute the required transformation. In the experimental part, the proposed algorithm is compared to the standard approach using the sparse Cholesky decomposition and it is shown to be much faster and computationally feasible for 100–1000 times larger datasets. We demonstrate the advantages of the proposed method in the problem of reconstructing sea surface temperature, which requires processing of a real-world dataset with 10⁶ observations.
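
A Python sketch of one ingredient of the approach (not the full Gibbs sampler): a compactly supported Wendland covariance yields a sparse covariance matrix, and the linear system needed for the GP predictive mean can be solved with conjugate gradients; the data and hyperparameters below are synthetic.

```python
# Sketch: compactly supported covariance -> sparse covariance matrix -> conjugate
# gradient solve for the GP predictive mean. Not the paper's full sampler.
import numpy as np
from scipy import sparse
from scipy.sparse.linalg import cg

def wendland_cov(x1, x2, support=0.3, sigma2=1.0):
    # Dense construction kept simple for the sketch; zeros outside the support make it sparse.
    r = np.abs(x1[:, None] - x2[None, :]) / support
    k = sigma2 * np.where(r < 1.0, (1 - r) ** 4 * (4 * r + 1), 0.0)   # Wendland phi_{3,1}
    return sparse.csr_matrix(k)

rng = np.random.default_rng(3)
x = np.sort(rng.uniform(0, 10, 2000))
y = np.sin(x) + 0.2 * rng.standard_normal(x.size)

K = wendland_cov(x, x)
A = K + 0.04 * sparse.identity(x.size)          # K + noise variance * I, stays sparse
alpha, info = cg(A, y, atol=1e-8)               # conjugate gradient solve
x_star = np.linspace(0, 10, 5)
mean = wendland_cov(x_star, x) @ alpha          # predictive mean at a few test points
print(info, mean)
```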

Journal ArticleDOI
TL;DR: This paper proposes digital IIR filters to realize the required equalization and evaluates a real-time prototype ARA system, which enables several immersive audio applications, such as a virtual audio tourist guide and audio teleconferencing.
Abstract: Augmented reality audio (ARA) combines virtual sound sources with the real sonic environment of the user. An ARA system can be realized with a headset containing binaural microphones. Ideally, the ARA headset should be acoustically transparent, that is, it should not cause audible modification to the surrounding sound. A practical implementation of an ARA mixer requires a low-latency headphone reproduction system with additional equalization to compensate for the attenuation and the modified ear canal resonances caused by the headphones. This paper proposes digital IIR filters to realize the required equalization and evaluates a real-time prototype ARA system. Measurements show that the throughput latency of the digital prototype ARA system can be less than 1.4 ms, which is sufficiently small in practice. When the direct and processed sounds are combined in the ear, a comb filtering effect is brought about and appears as notches in the frequency response. The comb filter effect in speech and music signals was studied in a listening test and it was found to be inaudible when the attenuation is 20 dB. Insert ARA headphones have a sufficient attenuation at frequencies above about 1 kHz. The proposed digital ARA system enables several immersive audio applications, such as a virtual audio tourist guide and audio teleconferencing.
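
The comb filtering effect mentioned above can be illustrated with two lines of arithmetic (illustrative values, not measurements from the paper): the ear receives the direct sound plus an attenuated processed copy, so the magnitude response ripples between 20·log10(1+g) and 20·log10(1−g).

```python
# Illustration of the comb filtering effect: the summed response |1 + g*e^{-j*2*pi*f*tau}|
# ripples between (1+g) at the peaks and (1-g) at the notches.
import numpy as np

def comb_ripple_db(attenuation_db):
    """Peak/notch levels when the direct sound sums with a copy attenuated by attenuation_db."""
    g = 10 ** (-attenuation_db / 20)      # linear gain of the processed (leaked) path
    peak = 20 * np.log10(1 + g)           # constructive interference between the two paths
    notch = 20 * np.log10(1 - g)          # destructive interference: the comb notches
    return peak, notch

print(comb_ripple_db(6))    # comparable levels: roughly +3.5 dB peaks and -6 dB notches
print(comb_ripple_db(20))   # 20 dB attenuation: ripple within about +-1 dB, hard to hear
```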

Journal ArticleDOI
TL;DR: In this paper, the authors apply the ideas of co-creation of meaning, which derive from research into the narrative process of strategy and the practice turn of strategy, to a case example from a Finnish property rental company.
Abstract: Purpose – In this paper the authors aim to introduce the perspective of shared meanings as a prerequisite for the formation of market‐focused strategic flexibility.Design/methodology/approach – The authors apply the ideas of co‐creation of meaning, which derive from research into the narrative process of strategy and the practice turn of strategy. The authors' view is illustrated with a case example from a Finnish property rental company. Using action research methodology, data were collected through interviews and workshops from the company, from its clients and from its subcontractors.Findings – The case presented here shows that the lack of common understanding may lead to poor service quality even though the provider aims at meeting clients' needs. On the other hand, the results confirm that developing a shared understanding is possible in business practice. A common lexicon and the conscious use of human narrative capability facilitate the achievement of this goal.Research limitations/implications – ...

Journal ArticleDOI
TL;DR: In this article, a case study examined a productisation project in which four KIBS productised one of their services with the help of a consultant, and the results indicated that productisation contributes to the competitiveness and efficiency, and facilitates the development of customer understanding and business skills.
Abstract: This paper discusses productisation in small knowledge-intensive business service companies (KIBS), whose typical problem is the inefficient production of services, starting from scratch for each client. The paper reviews literature on different approaches to developing services more ‘product-like’. It points out specific challenges linked to productisation of knowledge offerings and services that are co-produced with the customer. The case study examined a productisation project in which four KIBS productised one of their services with the help of a consultant. The results indicate that productisation contributes to the competitiveness and efficiency, and facilitates the development of customer understanding and business skills. An externally supported project is a good way to promote productisation in small KIBS where scarce resources often delay the adoption of this practice.

Journal ArticleDOI
TL;DR: In this article, an 8-element LTCC aperture-coupled patch antenna feed array with a switching network is used to electrically steer the main beam in the H-plane, and a 100-mm diameter Rexolite (εr = 2.53) lens is simulated and tested.
Abstract: Design and measurement results of a beam-steering integrated lens antenna at 77 GHz are presented. An 8-element LTCC aperture-coupled patch antenna feed array with a switching network is used to electrically steer the main beam in the H-plane. A 100-mm diameter Rexolite (εr = 2.53) lens is simulated and tested. The eccentricity of the lens is optimized in an earlier work with ray-tracing simulations for improved beam-steering properties compared to the conventional extended hemispherical and elliptical lenses. The beam-steering properties, including scan loss, main-beam width and direction, side-lobe levels, directivity, and cross-polarization, are analyzed in detail with both simulations and radiation pattern measurements. As expected, the results show that the side-lobe and cross-polarization levels are not predicted accurately with large feed offsets using the ray-tracing simulations. Nevertheless, it is shown that the lens shape can be successfully optimized with the simple and fast ray-tracing simulations. The measured half-power beamwidth at 77 GHz is 2.5° ± 0.2° up to the largest tested beam-steering angle of 30°. The optimized-eccentricity, low-permittivity lens results in smaller scan loss than the conventional lenses.
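
A quick sanity check of the measured beamwidth using the common aperture rule of thumb HPBW ≈ 70°·λ/D (an approximation of mine, not a figure from the paper):

```python
# Rule-of-thumb half-power beamwidth for a circular aperture, compared with the
# measured 2.5 deg of the 100 mm lens at 77 GHz.
c = 299_792_458.0        # speed of light, m/s
f = 77e9                 # operating frequency, Hz
diameter = 0.1           # lens aperture diameter, m
wavelength = c / f       # ~3.9 mm at 77 GHz
print(70 * wavelength / diameter)   # ~2.7 deg, consistent with the measurement
```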

Proceedings ArticleDOI
01 Jun 2012
TL;DR: This paper proposes using a survey method combined with qualitative analysis to investigate the way in which people use mobiles musically, and presents as an area of future research the PDplayer, which provides a completely self-contained end application in the mobile device, potentially making the mobile a more viable and expressive tool for musicians.
Abstract: Mobile devices represent a growing research field within NIME, and a growing area for commercial music software. They present unique design challenges and opportunities, which are yet to be fully explored and exploited. In this paper, we propose using a survey method combined with qualitative analysis to investigate the way in which people use mobiles musically. We subsequently present as an area of future research our own PDplayer, which provides a completely self-contained end application in the mobile device, potentially making the mobile a more viable and expressive tool for musicians.

Proceedings ArticleDOI
09 Sep 2012
TL;DR: This study focuses on handling high-dimensional classification problems by means of feature selection and gives results that approach or exceed the challenge baselines using a k-nearest-neighbor classifier.
Abstract: This study focuses on handling high-dimensional classification problems by means of feature selection. The data sets used are provided by the organizers of the Interspeech 2012 Speaker Trait Challenge. A combination of two feature selection approaches gives results that approach or exceed the challenge baselines using a k-nearest-neighbor classifier. One of the feature selection methods is based on covering the data set with correct unsupervised or supervised classifications according to individual features. The other selection method applies a measure of statistical dependence between discretized features and class labels. Index Terms: pattern recognition, feature selection, high-dimensional data, speaker characteristics
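
An illustrative Python pipeline in the spirit of the second selection method described above (scikit-learn API assumed; the synthetic data merely stands in for the high-dimensional challenge features): features are ranked by mutual information with the class labels, the top ones kept, and a k-nearest-neighbour classifier applied.

```python
# Illustrative pipeline in the spirit of the second selection method: mutual-information
# feature ranking followed by a k-nearest-neighbour classifier.
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for the high-dimensional challenge feature set.
X, y = make_classification(n_samples=300, n_features=500, n_informative=20,
                           random_state=0)
clf = make_pipeline(StandardScaler(),
                    SelectKBest(mutual_info_classif, k=50),   # keep the 50 best features
                    KNeighborsClassifier(n_neighbors=5))
print(cross_val_score(clf, X, y, cv=5).mean())
```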

Journal ArticleDOI
TL;DR: The results showed that the temperature can have a significant effect on the lifetimes of component boards under mechanical shock loading but that the effect varied according to the structures of the component boards.