Showing papers by "Helsinki University of Technology" published in 2014


Journal ArticleDOI
TL;DR: This article describes the scenarios identified for the purpose of driving the 5G research direction and gives initial directions for the technology components that will allow the fulfillment of the requirements of the identified 5G scenarios.
Abstract: METIS is the EU flagship 5G project with the objective of laying the foundation for 5G systems and building consensus prior to standardization. The METIS overall approach toward 5G builds on the evolution of existing technologies complemented by new radio concepts that are designed to meet the new and challenging requirements of use cases today's radio access networks cannot support. The integration of these new radio concepts, such as massive MIMO, ultra dense networks, moving networks, and device-to-device, ultra reliable, and massive machine communications, will allow 5G to support the expected increase in mobile data volume while broadening the range of application domains that mobile communications can support beyond 2020. In this article, we describe the scenarios identified for the purpose of driving the 5G research direction. Furthermore, we give initial directions for the technology components (e.g., link level components, multinode/multiantenna, multi-RAT, and multi-layer networks and spectrum handling) that will allow the fulfillment of the requirements of the identified 5G scenarios.

1,934 citations


Journal ArticleDOI
24 Mar 2014
Abstract: ToDIGRA publications are licensed under a Creative Commons Attribution-NonCommercial-NoDerivs 2.5 Generic License (http://creativecommons.org/licenses/by-nc-nd/2.5/).

248 citations


Journal ArticleDOI
TL;DR: In this article, the authors proposed a new combination of properties for stiff and self-healing hydrogel materials from renewable sources, such as cellulose nanocrystals (CNCs).
Abstract: Nanocomposite hydrogels are prepared combining polymer brush-modified ‘hard’ cellulose nanocrystals (CNC) and ‘soft’ polymeric domains, and bound together by cucurbit[8]uril (CB[8]) supramolecular crosslinks, which allow dynamic host–guest interactions as well as selective and simultaneous binding of two guests, i.e., methyl viologen (the first guest) and naphthyl units (the second guest). CNCs are mechanically strong colloidal rods with nanometer-scale lateral dimensions, which are functionalized by surface-initiated atom transfer radical polymerization to yield a dense set of methacrylate polymer brushes bearing naphthyl units. They can then be non-covalently cross-linked through simple addition of poly(vinyl alcohol) polymers containing pendant viologen units as well as CB[8]s in aqueous media. The resulting supramolecular nanocomposite hydrogels combine three important criteria: high storage modulus (G′ > 10 kPa), rapid sol–gel transition (<6 s), and rapid self-healing even upon aging for several months, as driven by balanced colloidal reinforcement as well as the selectivity and dynamics of the CB[8] three-component supramolecular interactions. Such a new combination of properties for stiff and self-healing hydrogel materials suggests new approaches for advanced dynamic materials from renewable sources.

218 citations


Journal Article
TL;DR: It is argued that process indicators should be further developed to complement the picture given by traditional input–output indicators of technology transfer, and found to be phase, interface and component dependent.
Abstract: The locus of industrial innovation is shifting toward industrial networks, in which parallel development processes in individual interconnected actors frequently dominate. This development presents new challenges for the measurement and evaluation of technology transfer. In this paper, various technology transfer mechanisms and indicators are classified and discussed. Technology transfer mechanisms and indicators are found to be phase, interface and component dependent. It is argued that process indicators should be further developed to complement the picture given by traditional input–output indicators of technology transfer.

202 citations


Journal ArticleDOI
TL;DR: This paper aims to bridge recent work on Service Logic with practice and research in the Design for Service to explore whether and how human-centered collaborative design approaches could provide a source for interpreting existing service systems and proposing new ones and thus realize a Service Logic in organizations.
Abstract: This paper aims to bridge recent work on Service Logic with practice and research in the Design for Service to explore whether and how human-centered collaborative design approaches could provide a source for interpreting existing service systems and proposing new ones and thus realize a Service Logic in organizations. A comparison is made of existing theoretical backgrounds and frameworks from Service Logic and Design for Service studies that conceptualize core concepts for value co-creation: actors, resources, resource integration, service systems, participation, context, and experience. We find that Service Logic provides a framework for understanding service systems in action by focusing on how actors integrate resources to co-create value for themselves and others, whereas Design for Service provides an approach and tools to explore current service systems as a context to imagine future service systems and how innovation may develop as a result of reconfigurations of resources and actors. Design for Service also provides approaches, competences, and tools that enable involved actors to participate in and be a part of the service system redesign. Design for value co-creation is presented using this model. The paper builds on and extends the Service Logic research first by repositioning service design from a phase of development to Design for Service as an approach to service innovation, centered on understanding and engaging with customers' own value-creating practices. Second, it builds on and extends through discussing the meaning of value co-creation and identifying and distinguishing collaborative approaches for the generation of new resource constellations. In doing so, the collaborative approaches allow for achieving value co-creation in designing.

143 citations


Proceedings ArticleDOI
12 Jul 2014
TL;DR: This paper revisits batch state estimation through the lens of Gaussian process (GP) regression, and shows that this class of prior results in an inverse kernel matrix that is exactly sparse (block-tridiagonal) and that this can be exploited to carry out GP regression (and interpolation) very efficiently.
Abstract: In this paper, we revisit batch state estimation through the lens of Gaussian process (GP) regression. We consider continuous-discrete estimation problems wherein a trajectory is viewed as a one-dimensional GP, with time as the independent variable. Our continuous-time prior can be defined by any linear, time-varying stochastic differential equation driven by white noise; this allows the possibility of smoothing our trajectory estimates using a variety of vehicle dynamics models (e.g., ‘constant-velocity’). We show that this class of prior results in an inverse kernel matrix (i.e., covariance matrix between all pairs of measurement times) that is exactly sparse (block-tridiagonal) and that this can be exploited to carry out GP regression (and interpolation) very efficiently. Though the prior is continuous, we consider measurements to occur at discrete times. When the measurement model is also linear, this GP approach is equivalent to classical, discrete-time smoothing (at the measurement times). When the measurement model is nonlinear, we iterate over the whole trajectory (as is common in vision and robotics) to maximize accuracy. We test the approach experimentally on a simultaneous trajectory estimation and mapping problem using a mobile robot dataset.
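The block-tridiagonal structure is what makes the batch approach tractable. As a minimal numerical sketch (our illustration, not the authors' code), the snippet below builds the exactly sparse prior precision for a 'constant-velocity' prior in lifted form and recovers the smoothed trajectory from noisy position measurements with a single sparse solve; all dimensions and noise levels are arbitrary choices.

```python
# Sketch (not the authors' code): a 'constant-velocity' GP prior over K
# time steps has a block-tridiagonal precision (inverse kernel) matrix,
# so linear-Gaussian batch smoothing reduces to one sparse solve.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

K, dt, qc, r = 100, 0.1, 1.0, 0.05           # steps, step, spectral density, meas. noise
A = np.array([[1.0, dt], [0.0, 1.0]])        # transition for [position, velocity]
Q = qc * np.array([[dt**3 / 3, dt**2 / 2],   # process-noise covariance of the SDE prior
                   [dt**2 / 2, dt]])

# Lifted form x = F w: F^{-1} is block-bidiagonal (identity blocks with -A
# below), so the prior precision F^{-T} Qblk^{-1} F^{-1} is block-tridiagonal.
# For brevity the initial-state covariance is taken equal to Q.
Finv = sp.block_diag([np.eye(2)] * K).tolil()
for k in range(1, K):
    Finv[2*k:2*k+2, 2*(k-1):2*k] = -A
Finv = Finv.tocsc()
Qblk_inv = sp.block_diag([np.linalg.inv(Q)] * K).tocsc()
prior_prec = (Finv.T @ Qblk_inv @ Finv).tocsc()

# Noisy position measurements at every step.
H = sp.kron(sp.eye(K), np.array([[1.0, 0.0]])).tocsc()
t = dt * np.arange(K)
y = np.sin(t) + np.sqrt(r) * np.random.randn(K)

# Posterior mean solves (prior_prec + H^T H / r) x = H^T y / r.
x_hat = spla.spsolve(prior_prec + (H.T @ H) / r, (H.T @ y) / r)
print("first smoothed positions:", x_hat[::2][:5])
```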

122 citations


Proceedings ArticleDOI
01 Apr 2014
TL;DR: Morfessor 2.0 is a rewrite of the original, widely-used Morfessor 1.0 software, with well documented command-line tools and library interface, that includes new features such as semi-supervised learning, online training, and integrated evaluation code.
Abstract: Morfessor is a family of probabilistic machine learning methods for finding the morphological segmentation from raw text data. Recent developments include semi-supervised methods for utilizing annotated data. Morfessor 2.0 is a rewrite of the original, widely-used Morfessor 1.0 software, with well documented command-line tools and library interface. It includes new features such as semi-supervised learning, online training, and integrated evaluation code.
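For readers who want to try it, the sketch below shows the Morfessor 2.0 library interface as documented for the Python package; the corpus file name is hypothetical, and the exact API should be verified against the installed version.

```python
# Sketch of the Morfessor 2.0 library interface (pip install morfessor);
# 'corpus.txt' is a hypothetical raw-text file, and the exact API should
# be checked against the installed package version.
import morfessor

io = morfessor.MorfessorIO()
train_data = list(io.read_corpus_file('corpus.txt'))

model = morfessor.BaselineModel()
model.load_data(train_data)
model.train_batch()                      # unsupervised batch training

segments, cost = model.viterbi_segment('uncomplicated')
print(segments)                          # e.g. something like ['un', 'complicate', 'd']
```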

104 citations


Proceedings Article
02 Apr 2014
TL;DR: This paper shows how periodic covariance functions in Gaussian process regression can be reformulated as state space models, which can be solved with classical Kalman filtering theory, and the expansion is shown to uniformly converge to the exact covariance function with a known convergence rate.
Abstract: This paper shows how periodic covariance functions in Gaussian process regression can be reformulated as state space models, which can be solved with classical Kalman filtering theory. This reduces the problematic cubic complexity of Gaussian process regression in the number of time steps to linear time complexity. The representation is based on expanding periodic covariance functions into a series of stochastic resonators. The explicit representation of the canonical periodic covariance function is written out and the expansion is shown to uniformly converge to the exact covariance function with a known convergence rate. The framework is generalized to quasi-periodic covariance functions by introducing damping terms in the system and applied to two sets of real data. The approach could be easily extended to nonstationary and spatio-temporal variants.
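The expansion of the canonical periodic covariance function has closed-form coefficients involving modified Bessel functions, which is easy to verify numerically. The sketch below (our illustration, with arbitrary hyperparameters) compares the truncated cosine series, whose terms correspond to the stochastic resonators, against the exact covariance.

```python
# Sketch: the canonical periodic covariance
#   k(tau) = s2 * exp(-2 * sin^2(w0*tau/2) / ell^2)
# expands into a cosine series sum_j q_j^2 cos(j*w0*tau), whose terms map
# to the stochastic resonators of the state-space model. Coefficients use
# exponentially scaled Bessel functions: q_j^2 = s2*(2-[j==0])*ive(j, 1/ell^2).
import numpy as np
from scipy.special import ive

s2, ell, w0, J = 1.0, 0.7, 2 * np.pi, 10
tau = np.linspace(0.0, 2.0, 500)

k_exact = s2 * np.exp(-2 * np.sin(w0 * tau / 2) ** 2 / ell ** 2)

a = 1.0 / ell ** 2
q2 = [s2 * (2.0 - (j == 0)) * ive(j, a) for j in range(J + 1)]
k_series = sum(q2[j] * np.cos(j * w0 * tau) for j in range(J + 1))

print("max truncation error at order J=10:", np.abs(k_exact - k_series).max())
```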

84 citations


Proceedings Article
01 Aug 2014
TL;DR: A new variant of Morfessor, FlatCat, is introduced that applies a hidden Markov model structure that builds on previous work on MorFessor, sharing model components with the popularMorfessor Baseline and Categories-MAP variants.
Abstract: Morfessor is a family of methods for learning morphological segmentations of words based on unannotated data. We introduce a new variant of Morfessor, FlatCat, that applies a hidden Markov model structure. It builds on previous work on Morfessor, sharing model components with the popular Morfessor Baseline and Categories-MAP variants. Our experiments show that while unsupervised FlatCat does not reach the accuracy of Categories-MAP, with semisupervised learning it provides state-of-the-art results in the Morpho Challenge 2010 tasks for English, Finnish, and Turkish.

78 citations


Journal ArticleDOI
TL;DR: In this article, an approximate series expansion of covariance function in terms of an eigenfunction expansion of the Laplace operator is proposed for reduced-rank Gaussian process regression.
Abstract: This paper proposes a novel scheme for reduced-rank Gaussian process regression. The method is based on an approximate series expansion of the covariance function in terms of an eigenfunction expansion of the Laplace operator in a compact subset of $\mathbb{R}^d$. On this approximate eigenbasis the eigenvalues of the covariance function can be expressed as simple functions of the spectral density of the Gaussian process, which allows the GP inference to be solved under a computational cost scaling as $\mathcal{O}(nm^2)$ (initial) and $\mathcal{O}(m^3)$ (hyperparameter learning) with $m$ basis functions and $n$ data points. Furthermore, the basis functions are independent of the parameters of the covariance function, which allows for very fast hyperparameter learning. The approach also allows for rigorous error analysis with Hilbert space theory, and we show that the approximation becomes exact when the size of the compact subset and the number of eigenfunctions go to infinity. We also show that the convergence rate of the truncation error is independent of the input dimensionality provided that the differentiability order of the covariance function increases appropriately, and for the squared exponential covariance function it is always bounded by ${\sim}1/m$ regardless of the input dimensionality. The expansion generalizes to Hilbert spaces with an inner product which is defined as an integral over a specified input density. The method is compared to previously proposed methods theoretically and through empirical tests with simulated and real data.
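In one dimension the construction is compact enough to sketch directly. The snippet below (our illustration, not the paper's code) approximates a squared exponential GP with m Laplacian eigenfunctions on [-L, L] and computes the posterior mean at the stated cost; data and hyperparameters are arbitrary.

```python
# 1D sketch (not the paper's code): approximate a squared exponential GP
# on [-L, L] with m Laplacian eigenfunctions; regression then costs
# O(n m^2) to form the m x m system and O(m^3) to solve it.
import numpy as np

def phi(x, j, L):                  # Dirichlet Laplacian eigenfunctions
    return np.sqrt(1.0 / L) * np.sin(np.pi * j * (x + L) / (2 * L))

def S_se(w, sigma2, ell):          # spectral density of the SE covariance
    return sigma2 * np.sqrt(2 * np.pi) * ell * np.exp(-0.5 * (ell * w) ** 2)

L, m, sigma2, ell, noise = 3.0, 32, 1.0, 0.5, 0.1 ** 2
x = np.random.uniform(-2, 2, 200)
y = np.sin(3 * x) + np.sqrt(noise) * np.random.randn(x.size)

js = np.arange(1, m + 1)
Phi = phi(x[:, None], js[None, :], L)              # n x m basis matrix
lam = S_se(np.pi * js / (2 * L), sigma2, ell)      # approximate eigenvalues

B = Phi.T @ Phi + noise * np.diag(1.0 / lam)       # m x m system
xs = np.linspace(-2, 2, 5)
mean = phi(xs[:, None], js[None, :], L) @ np.linalg.solve(B, Phi.T @ y)
print("posterior mean at test points:", mean)
```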

74 citations


Journal ArticleDOI
TL;DR: This work reports on exploring multivalent interactions with the CNC surface and shows that dendronized polymers (DenPols) with maltose-based sugar groups on the periphery of lysine dendrons and a poly(ethylene-alt-maleimide) polymer backbone interact with CNCs.
Abstract: Cellulose nanocrystals (CNCs) are high aspect ratio colloidal rods with nanoscale dimensions, attracting considerable interest recently due to their high mechanical properties, chirality, sustainability, and availability. In order to exploit them for advanced functions in new materials, novel supracolloidal concepts are needed to manipulate their self-assemblies. We report on exploring multivalent interactions with the CNC surface and show that dendronized polymers (DenPols) with maltose-based sugar groups on the periphery of lysine dendrons and a poly(ethylene-alt-maleimide) polymer backbone interact with CNCs. The interactions can be manipulated by the dendron generation, suggesting multivalent interactions. The complexation of the third generation DenPol (G3) with CNCs allows aqueous colloidal stability and shows wrapping around CNCs, as directly visualized by cryo high-resolution transmission electron microscopy and electron tomography. More generally, as the dimensions of G3 are in the colloidal range due to ...

Journal ArticleDOI
TL;DR: In this article, the surface and volume integral equation methods for finding time-harmonic full-wave solutions of Maxwell's equations are discussed, and the main focus is on advanced techniques that would enable accurate, stable and scalable solutions on a wide range of material parameters, frequencies and applications.
Abstract: During the last two to three decades the importance of computer simulations based on numerical full-wave solutions of Maxwell's equations has continuously increased in electrical engineering. Software products based on integral equation methods have an unquestionable importance in the frequency domain electromagnetic analysis and design of open-region problems. This paper deals with the surface and volume integral equation methods for finding time-harmonic solutions of Maxwell's equations. First a review of classical integral equation representations and formulations is given. Thereafter we briefly overview the mathematical background of integral operators and equations and their discretization with the method of moments. The main focus is on advanced techniques that would enable accurate, stable, and scalable solutions on a wide range of material parameters, frequencies and applications. Finally, future perspectives of the integral equation methods for solving Maxwell's equations are discussed.

Journal ArticleDOI
TL;DR: Polymeric self-assemblies where nanoscale organization guides the macroscopic alignment up to millimetre scale are shown where halogen bonding mesogenic 1-iodoperfluoroalkanes to a star-shaped ethyleneglycol-based polymer, having chloride end-groups is shown.
Abstract: Aligning polymeric nanostructures up to macroscale in facile ways remains a challenge in materials science and technology. Here we show polymeric self-assemblies where nanoscale organization guides the macroscopic alignment up to millimetre scale. The concept is shown by halogen bonding mesogenic 1-iodoperfluoroalkanes to a star-shaped ethyleneglycol-based polymer, having chloride end-groups. The mesogens segregate and stack parallel into aligned domains. This leads to layers at ~10 nm periodicity. Combination of directionality of halogen bonding, mesogen parallel stacking and minimization of interfacial curvature translates into an overall alignment in bulk and films up to millimetre scale. Upon heating, novel supramolecular halogen-bonded polymeric liquid crystallinity is also shown. As many polymers present sites capable of receiving halogen bonding, we suggest generic potential of this strategy for aligning polymer self-assemblies.

Proceedings Article
01 Jan 2014
TL;DR: This paper proposes RQ-tree, a novel index which is based on a hierarchical clustering of the nodes in the graph, and further optimized using a balanced-minimum-cut criterion, and defines a fast filtering-and-verification online query-evaluation phase that relies on a maximum-flow-based candidate-generation phase and returns no incorrect nodes, thus guaranteeing perfect precis ion.
Abstract: Uncertain, or probabilistic, graphs have been increasingl y used to represent noisy linked data in many emerging application scenarios, and have recently attracted the attention of the databa se research community. A fundamental problem on uncertain graphs is reliability, which deals with the probability of nodes being reachable one from another. Existing literature has exclusively focused on reliability detection, which asks to compute the probability that two given nodes are connected. In this paper we study reliability search on uncertain graphs, which we define as the problem of computing all nodes reachable from a set of query nodes with probability no less than a given threshold. Existing reliability-detection approac hes are not well-suited to efficiently handle the reliability-search p roblem. We propose RQ-tree, a novel index which is based on a hierarchical clustering of the nodes in the graph, and further optimized using a balanced-minimum-cut criterion. Based on RQ-tree, we define a fast filtering-and-verification online query-evaluation s trategy that relies on a maximum-flow-based candidate-generation phase , followed by a verification phase consisting of either a lower-bo unding method or a sampling technique. The first verification method returns no incorrect nodes, thus guaranteeing perfect precis ion, completely avoids sampling, and is more efficient. The second ve rification method ensures instead better recall. Extensive experiments on real-world uncertain graphs show that our methods are very efficient—over state-of-the-art relia bilitydetection methods, we obtain a speed-up up to five orders of ma gnitude; as well as accurate—our techniques achieve precision > 0.95 and recall usually higher than 0.75.
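To make the problem statement concrete, the sketch below estimates reliability search by brute-force sampling of possible worlds; this is the kind of baseline the RQ-tree index is designed to outperform, not the proposed method itself.

```python
# Sketch of the problem definition (a sampling baseline, not the RQ-tree
# index): find nodes reachable from query set S with probability >= eta.
import random
from collections import defaultdict, deque

def reliability_search(edges, S, eta=0.5, samples=2000):
    """edges: list of (u, v, p) directed edges with existence probability p."""
    hits = defaultdict(int)
    for _ in range(samples):
        adj = defaultdict(list)                # sample one possible world
        for u, v, p in edges:
            if random.random() < p:
                adj[u].append(v)
        seen, dq = set(S), deque(S)            # BFS from all query nodes
        while dq:
            for v in adj[dq.popleft()]:
                if v not in seen:
                    seen.add(v)
                    dq.append(v)
        for v in seen:
            hits[v] += 1
    return {v for v, c in hits.items() if c / samples >= eta}

edges = [(0, 1, 0.9), (1, 2, 0.5), (2, 3, 0.2)]
print(reliability_search(edges, S={0}))        # typically {0, 1} at eta = 0.5
```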

Journal ArticleDOI
16 Sep 2014-Water
TL;DR: In this paper, the authors introduce a method to assess the financial feasibility of MAR under uncertainty by identifying cross-over points in break-even analysis, where MAR and surface storage have equal financial returns.
Abstract: Additional storage of water is a potential option to meet future water supply goals. Financial comparisons are needed to improve decision making about whether to store water in surface reservoirs or below ground, using managed aquifer recharge (MAR). In some places, the results of cost-benefit analysis show that MAR is financially superior to surface storage. However, uncertainty often exists as to whether MAR systems will remain operationally effective and profitable in the future, because the profitability of MAR is dependent on many uncertain technical and financial variables. This paper introduces a method to assess the financial feasibility of MAR under uncertainty. We assess such uncertainties by identification of cross-over points in break-even analysis. Cross-over points are the thresholds where MAR and surface storage have equal financial returns. Such thresholds can be interpreted as a set of minimum requirements beyond which an investment in MAR may no longer be worthwhile. Checking that these thresholds are satisfied can improve confidence in decision making. Our suggested approach can also be used to identify areas that may not be suitable for MAR, thereby avoiding expensive hydrogeological and geophysical investigations.
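As an illustration of the cross-over idea (with made-up cost figures, not values from the paper), the sketch below finds the water price at which the net present values of MAR and surface storage are equal.

```python
# Illustration of the cross-over idea with made-up cost figures (not values
# from the paper): find the water price where the NPVs of MAR and surface
# storage are equal.
from scipy.optimize import brentq

def npv(price, capex, opex, volume_ml, rate=0.07, years=30):
    cash = price * volume_ml - opex                      # annual net cash flow
    disc = sum(1.0 / (1 + rate) ** t for t in range(1, years + 1))
    return -capex + cash * disc

npv_mar = lambda p: npv(p, capex=8e6,  opex=4e5, volume_ml=3650)   # 10 ML/day
npv_dam = lambda p: npv(p, capex=20e6, opex=2e5, volume_ml=4380)   # 12 ML/day

crossover = brentq(lambda p: npv_mar(p) - npv_dam(p), 1e-3, 5000.0)
print(f"break-even water price: ~{crossover:.0f} $/ML")
```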

Journal ArticleDOI
TL;DR: In this paper, an approximate Bayesian inference for logistic Gaussian process (LGP) density estimation in a grid using Laplace's method to integrate over the non-Gaussian posterior distribution of latent function values and to determine the covariance function parameters with type-II maximum a posteriori (MAP) estimation is presented.
Abstract: Logistic Gaussian process (LGP) priors provide a flexible alternative for modelling unknown densities. The smoothness properties of the density estimates can be controlled through the prior covariance structure of the LGP, but the challenge is the analytically intractable inference. In this paper, we present approximate Bayesian inference for LGP density estimation in a grid using Laplace's method to integrate over the non-Gaussian posterior distribution of latent function values and to determine the covariance function parameters with type-II maximum a posteriori (MAP) estimation. We demonstrate that Laplace's method with MAP is sufficiently fast for practical interactive visualisation of 1D and 2D densities. Our experiments with simulated and real 1D data sets show that the estimation accuracy is close to a Markov chain Monte Carlo approximation and state-of-the-art hierarchical infinite Gaussian mixture models. We also construct a reduced-rank approximation to speed up the computations for dense 2D grids, and demonstrate density regression with the proposed Laplace approach.
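A compact sketch of the core computation (ours, with hypothetical data and hyperparameters): on a 1D grid, the MAP latent function under a GP prior and multinomial likelihood can be found with a few Newton iterations, which is the inner step of the Laplace approximation described above.

```python
# Sketch (hypothetical data and hyperparameters): MAP estimate of the
# latent function in LGP density estimation on a 1D grid via Newton
# iterations, the inner step of the Laplace approximation.
import numpy as np

x = np.concatenate([0.5 * np.random.randn(300), 3 + 0.7 * np.random.randn(200)])
edges = np.linspace(-3, 6, 81)
grid = 0.5 * (edges[:-1] + edges[1:])
counts, _ = np.histogram(x, edges)
N = counts.sum()

ell, s2 = 1.0, 4.0                                   # SE prior on the grid
D = grid[:, None] - grid[None, :]
K = s2 * np.exp(-0.5 * (D / ell) ** 2) + 1e-6 * np.eye(grid.size)
Kinv = np.linalg.inv(K)

f = np.zeros(grid.size)
for _ in range(50):                                  # Newton iterations
    p = np.exp(f - f.max()); p /= p.sum()            # softmax density
    grad = counts - N * p - Kinv @ f                 # d log posterior / df
    W = N * (np.diag(p) - np.outer(p, p))            # -Hessian of log lik.
    f += np.linalg.solve(W + Kinv, grad)

p = np.exp(f - f.max()); p /= p.sum()
print("mass near the two modes:", p[np.abs(grid) < 1].sum(), p[np.abs(grid - 3) < 1].sum())
```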

Proceedings Article
01 Jan 2014
TL;DR: Two indexes based on the idea of landmarks are devised: distances from all vertices of the graph to a selected subset of landmark vertices are pre-computed and then used at query time to efficiently approximate distance queries; experiments show that these indexes can efficiently and accurately estimate label-constrained distance queries.
Abstract: A fundamental operation over edge-labeled graphs is the computation of shortest-path distances subject to a constraint on the set of permissible edge labels. Applying exact algorithms for such an operation is not a viable option, especially for massive graphs, or in scenarios where the distance computation is used as a primitive for more complex computations. In this paper we study the problem of efficient approximation of shortest-path queries with edge-label constraints, for which we devise two indexes based on the idea of landmarks: distances from all vertices of the graph to a selected subset of landmark vertices are pre-computed and then used at query time to efficiently approximate distance queries. The major challenge to face is that, in principle, an exponential number of constraint label sets needs to be stored for each vertex-landmark pair, which makes the index pre-computation and storage far from trivial. We tackle this challenge from two different perspectives, which lead to indexes with different characteristics: one index is faster and more accurate, but it requires more space than the other. We extensively evaluate our techniques on real and synthetic datasets, showing that our indexes can efficiently and accurately estimate label-constrained distance queries.
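The plain landmark idea, stripped of the label-constraint machinery that is the paper's actual contribution, can be sketched in a few lines: precompute BFS distances from each landmark, then answer queries with a triangle-inequality bound.

```python
# Sketch of the plain landmark idea (label constraints omitted): precompute
# BFS distances from each landmark, then answer queries with the
# triangle-inequality upper bound min_l d(u, l) + d(l, v).
from collections import deque

def bfs(adj, src):
    dist, dq = {src: 0}, deque([src])
    while dq:
        u = dq.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                dq.append(v)
    return dist

def build_index(adj, landmarks):
    return {l: bfs(adj, l) for l in landmarks}      # offline precomputation

def query(index, u, v):
    ests = [d[u] + d[v] for d in index.values() if u in d and v in d]
    return min(ests) if ests else float('inf')      # upper bound on d(u, v)

adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
index = build_index(adj, landmarks=[2])
print(query(index, 0, 4))                           # 4, exact here
```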

01 Jan 2014
TL;DR: This document investigates how these sensing capabilities could be used to derive information about available parking places in the absence of parking sensor infrastructure, and tests the method with a dedicated mobile client connected to a network server.
Abstract: New mobile phones come with an increasing array of sensors. Recently mobile operating systems have started to incorporate means to offer contextual information derived from measurements of multiple sensors. A phone can be aware of whether it is transported in a vehicle or carried on foot. We investigated how these sensing capabilities could be used to derive information about available parking places in the absence of parking sensor infrastructure and tested the method with a dedicated mobile client connected to a network server. In this document we present the sensing method and the application, and qualitatively analyze the pros and cons of the approach.

Journal Article
TL;DR: In this paper, the authors define the attributes of partnering relations and identify the key factors that help a relationship to succeed from the theoretical point of view, and analyse the attributes and success factors.
Abstract: Traditionally, relationships between facility service providers and clients have been based on an adversarial approach. The expansion of existing outsource contracts and outsourcing of strategically more important services have created a need to develop relationships based on a more collaborative approach. These relationships can be called partnering relations or partnerships, and currently they are also widely applied in many other industries. In the real estate industry the term partnering is used rather loosely and thus there is a need to define the elements that characterize partnering more exactly. The aim of this article is to define the attributes of partnering relations and to identify the key factors that help a relationship to succeed. These attributes and success factors are analysed from the theoretical point of view. From the literature review conducted, conclusions are drawn from the point of view of facility services. A partnering relation is selected when companies outsource strategically more important functions or when a property owner bundles outsourced services together or moves from single-site sourcing to multiple-site sourcing. Partnering relations are based on mutual trust, commitment, openness, involvement of different organisational levels, continuous development and sharing of benefits and risks. The success of the relationship is based on two-way information-sharing, joint problem-solving, the partners’ ability to meet performance expectations, clearly-defined and mutually-agreed goals, and mutual involvement in relationship development and planning.

Journal ArticleDOI
TL;DR: A simple one-pot procedure is demonstrated to synthesize fluorescent magic number Au25 clusters carrying controlled amounts of bulky calix[4]arene functionalities, resulting in clusters carrying one to eight calixarene moieties.
Abstract: Although various complex, bulky ligands have been used to functionalize plasmonic gold nanoparticles, introducing them to small, atomically precise gold clusters is not trivial. Here, we demonstrate a simple one-pot procedure to synthesize fluorescent magic number Au25 clusters carrying controlled amounts of bulky calix[4]arene functionalities. These clusters are obtained from a synthesis feed containing binary mixtures of tetrathiolated calix[4]arene and 1-butanethiol. By systematic variation of the molar ratio of ligands, clusters carrying one to eight calixarene moieties were obtained. Structural characterization reveals unexpected binding of the calix[4]arenes to the Au25 cluster surface with two or four thiolates per moiety.

Journal ArticleDOI
TL;DR: In this article, a tentative typology of dominant dynamic complementarities is developed and used to hypothesize differences in size distributions of technology-based firms in different industries, markets, and technological systems.
Abstract: The borderlines between a firm and its environment are becoming increasingly blurred. Small firms and large firms can be viewed as constituting innovation networks where dynamic complementarities between small and large firms are exploited. Network structures are different in different industries, markets, and in different technological systems. The characteristics of the industry, market, or technological system can be expected to affect the configuration of innovation networks. In addition, it is expected that network transactions, which provide a basic mechanism of change in networks, are different in different industries, markets, and technological systems. Studies on industry evolution in Finland and the conceptual analysis in the present paper indicate that some interrelationships exist between systemic determinants and the patterns of acquisition of new technology-based firms. The concept of dominant dynamic complementarities is developed and used to hypothesize differences in size distributions of technology-based firms in different industries. A tentative typology of dominant dynamic complementarities is developed.

Proceedings ArticleDOI
22 Jun 2014
TL;DR: Sub-label dependencies are incorporated into the CRF model via a (relatively) straightforward feature extraction scheme, and experiments on five languages show that the approach can yield significant improvements in tagging accuracy when the labels have sufficiently rich inner structure.
Abstract: We discuss part-of-speech (POS) tagging in presence of large, fine-grained label sets using conditional random fields (CRFs). We propose improving tagging accuracy by utilizing dependencies within sub-components of the fine-grained labels. These sub-label dependencies are incorporated into the CRF model via a (relatively) straightforward feature extraction scheme. Experiments on five languages show that the approach can yield significant improvement in tagging accuracy in case the labels have sufficiently rich inner structure.
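The feature extraction idea can be illustrated with a toy function (the tag format below is made up for illustration): each fine-grained label contributes its full form, its main POS, and each sub-component as separate features, letting the CRF share statistics across related tags.

```python
# Sketch of the idea with a made-up tag format ("POS|Feature=Value|..."):
# each fine-grained label contributes its full form, its main POS, and
# every sub-component as separate features.
def sublabel_features(tag):
    parts = tag.split('|')
    feats = ['full=' + tag, 'pos=' + parts[0]]
    feats += ['sub=' + p for p in parts[1:]]
    return feats

print(sublabel_features('NOUN|Case=Gen|Num=Sg'))
# ['full=NOUN|Case=Gen|Num=Sg', 'pos=NOUN', 'sub=Case=Gen', 'sub=Num=Sg']
```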

Proceedings ArticleDOI
14 Sep 2014
TL;DR: The proposed voice source model is compared to a robust and high-quality excitation modelling method based on manually selected mean glottal flow pulses for each vocal effort level and using a spectral matching filter to correctly match the voice source spectrum to a desired style.
Abstract: This paper studies a deep neural network (DNN) based voice source modelling method in the synthesis of speech with varying vocal effort. The new trainable voice source model learns a mapping between the acoustic features and the time-domain pitch-synchronous glottal flow waveform using a DNN. The voice source model is trained with various speech material from breathy, normal, and Lombard speech. In synthesis, a normal voice is first adapted to a desired style, and using the flexible DNN-based voice source model, a style-specific excitation waveform is automatically generated based on the adapted acoustic features. The proposed voice source model is compared to a robust and high-quality excitation modelling method based on manually selected mean glottal flow pulses for each vocal effort level and using a spectral matching filter to correctly match the voice source spectrum to a desired style. Subjective evaluations show that the proposed DNN-based method is rated comparable to the baseline method, but avoids the manual selection of the pulses and is computationally faster than a system using a spectral matching filter.

Journal ArticleDOI
24 Dec 2014-Energies
TL;DR: In this article, 16 combined heat and power (CHP) units representing different technologies are taken into account for multicriteria evaluation with respect to the end users' requirements, specifically including the criteria of efficiency, investment cost, electricity cost, heat cost, CO2 production and footprint.
Abstract: Combined heat and power (CHP) is a promising technology that can contribute to energy efficiency and environmental protection. More CHP-based energy systems are planned for the future. This makes the evaluation and selection of CHP systems very important. In this paper, 16 CHP units representing different technologies are taken into account for multicriteria evaluation with respect to the end users' requirements. These CHP technologies cover a wide range of power outputs and fuel types. They are evaluated from the energy, economy and environment (3E) points of view, specifically including the criteria of efficiency, investment cost, electricity cost, heat cost, CO2 production and footprint. Uncertainties and imprecision are common in both criteria measurements and weights, therefore the stochastic multicriteria acceptability analysis (SMAA) model is used to aid this decision-making problem. These uncertainties are treated using probability distribution functions and Monte Carlo simulation in the model. Moreover, the idea of a “feasible weight space” (FWS), which represents the union of all preference information from decision makers (DMs), is proposed. A complementary judgment matrix (CJM) is introduced to determine the FWS. The idea of FWS plus CJM is well compatible with SMAA and thus makes the evaluation reliable.
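The core SMAA loop is easy to sketch (our illustration with random data, not the paper's 16-unit evaluation): sample uncertain criterion values and uniformly distributed weights, and accumulate how often each alternative attains each rank.

```python
# Sketch of the SMAA core loop on random data (not the paper's 16-unit
# study): sample uncertain criterion values and uniform weights, then
# accumulate rank acceptability indices.
import numpy as np

rng = np.random.default_rng(0)
n_alt, n_crit, n_iter = 4, 3, 10_000
mean = rng.random((n_alt, n_crit))           # illustrative criterion means
std = 0.05                                   # illustrative measurement noise

acceptability = np.zeros((n_alt, n_alt))     # rows: alternatives, cols: ranks
for _ in range(n_iter):
    x = mean + std * rng.standard_normal((n_alt, n_crit))
    w = rng.dirichlet(np.ones(n_crit))       # uniform weights on the simplex
    ranks = np.argsort(np.argsort(-(x @ w))) # 0 = best under additive value
    acceptability[np.arange(n_alt), ranks] += 1

print(acceptability / n_iter)                # row i: P(alt i attains each rank)
```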

Proceedings Article
01 May 2014
TL;DR: The annotations of 297 documents and over 9000 sentences can be used for research purposes when developing methods for detecting topic-wise sentiment in financial text and are evaluated using a number of inter-annotator agreement metrics.
Abstract: Public opinion, as measured by media sentiment, can be an important indicator in the financial and economic context. These are domains where traditional sentiment estimation techniques often struggle, and existing annotated sentiment text collections are of less use. Though considerable progress has been made in analyzing sentiments at sentence-level, performing topic-dependent sentiment analysis is still a relatively uncharted territory. The computation of topic-specific sentiments has commonly relied on naive aggregation methods without much consideration to the relevance of the sentences to the given topic. Clearly, the use of such methods leads to a substantial increase in noise-to-signal ratio. To foster development of methods for measuring topic-specific sentiments in documents, we have collected and annotated a corpus of financial news that have been sampled from Thomson Reuters newswire. In this paper, we describe the annotation process and evaluate the quality of the dataset using a number of inter-annotator agreement metrics. The annotations of 297 documents and over 9000 sentences can be used for research purposes when developing methods for detecting topic-wise sentiment in financial text.
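As an example of the kind of agreement metric used, the sketch below computes Cohen's kappa for two annotators over sentence-level sentiment labels; the labels are made up.

```python
# Sketch of one standard inter-annotator agreement metric, Cohen's kappa,
# over made-up sentence-level sentiment labels from two annotators.
from collections import Counter

def cohens_kappa(a, b):
    n = len(a)
    p_obs = sum(x == y for x, y in zip(a, b)) / n
    ca, cb = Counter(a), Counter(b)
    p_exp = sum(ca[l] * cb[l] for l in ca) / n ** 2   # chance agreement
    return (p_obs - p_exp) / (1 - p_exp)

ann1 = ['pos', 'neg', 'neu', 'pos', 'neg']
ann2 = ['pos', 'neg', 'pos', 'pos', 'neu']
print(round(cohens_kappa(ann1, ann2), 3))             # 0.375
```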

Journal ArticleDOI
TL;DR: This paper derives a more general class of portfolio value functions that deploy symmetric multilinear functions to capture nonlinearities in the criterion-specific portfolio values and develops novel techniques for eliciting these value functions.
Abstract: Portfolio decision analysis models support selection of a portfolio of projects with multiple objectives and limited resources. These models often rely on the additive-linear portfolio value function, although empirical evidence suggests that the underlying preference assumptions do not always hold. In this paper we relax these assumptions and derive a more general class of portfolio value functions that deploy symmetric multilinear functions to capture nonlinearities in the criterion-specific portfolio values. These values can be aggregated with an additive or a multilinear function, allowing a rich representation of preferences among the multiple objectives. We develop novel techniques for eliciting these value functions and also discuss the use of existing techniques that are often applied in practice. Furthermore, we demonstrate that the value functions can be maximized for problem sizes of practical relevance using an implicit enumeration algorithm or an approximate mixed-integer linear programming model.
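A toy two-criterion example (with illustrative weights, not elicited ones) shows how a multilinear aggregation differs from the additive one: the interaction term makes the portfolio value depend jointly on both criteria.

```python
# Toy two-criterion example with illustrative (not elicited) weights:
# the multilinear form adds an interaction term, so portfolio value
# depends jointly on both criterion-specific values.
def additive(v, w):
    return w[0] * v[0] + w[1] * v[1]

def multilinear(v, w, w12):
    return w[0] * v[0] + w[1] * v[1] + w12 * v[0] * v[1]

v = (0.8, 0.3)                            # criterion-specific portfolio values
print(additive(v, (0.5, 0.5)))            # 0.55
print(multilinear(v, (0.4, 0.4), 0.2))    # 0.488; interaction rewards balance
```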

16 May 2014
TL;DR: The problem of reverberation in speech recognition is addressed in this study by extending a noise-robust feature enhancement method based on non-negative matrix factorization to account for the effects of room reverberation.
Abstract: The problem of reverberation in speech recognition is addressed in this study by extending a noise-robust feature enhancement method based on non-negative matrix factorization. The signal model of the observation as a linear combination of sample spectrograms is augmented by a mel-spectral feature domain convolution to account for the effects of room reverberation. The proposed method is contrasted with missing data techniques for reverberant speech, and evaluated for speech recognition performance using the REVERB challenge corpus. Our results indicate consistent gains in recognition performance compared to the baseline system, with a relative improvement in word error rate of 42.6% for the optimal case.
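The sketch below implements the plain building block, KL-divergence NMF with multiplicative updates, on a stand-in spectrogram; the paper's contribution is the additional mel-domain convolution for reverberation, which is not reproduced here.

```python
# Sketch of the underlying building block: plain KL-divergence NMF with
# multiplicative updates on a stand-in spectrogram. The paper augments
# this with a mel-domain convolution for reverberation (not shown here).
import numpy as np

def kl_nmf(V, rank, n_iter=200, eps=1e-9):
    rng = np.random.default_rng(0)
    W = rng.random((V.shape[0], rank)) + eps     # dictionary (sample spectra)
    H = rng.random((rank, V.shape[1])) + eps     # activations
    for _ in range(n_iter):
        H *= (W.T @ (V / (W @ H + eps))) / (W.sum(axis=0)[:, None] + eps)
        W *= ((V / (W @ H + eps)) @ H.T) / (H.sum(axis=1)[None, :] + eps)
    return W, H

V = np.abs(np.random.randn(40, 100))             # stand-in mel spectrogram
W, H = kl_nmf(V, rank=8)
print("relative error:", np.linalg.norm(V - W @ H) / np.linalg.norm(V))
```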

Journal Article
TL;DR: This technique is revisited in detail and the effect of different parameters is examined to ultimately achieve optimal quality and perception in all situations and it is suggested that the revised method is viable in many cases.
Abstract: Synthesis of volumetric virtual sources is a useful technique for auditory displays and virtual worlds. This task can be simplified into synthesis of perceived spatial extent. Previous research in virtual-world Directional Audio Coding has shown that spatial extent can be synthesized with monophonic sources by applying a time-frequency-space decomposition, i.e., randomly distributing time-frequency bins of the source signal. However, although this technique often achieved perception of spatial extent, it was not guaranteed and the timbre could degrade. In this article this technique is revisited in detail and the effect of different parameters is examined to ultimately achieve optimal quality and perception in all situations. The results of a series of informal and formal experiments are presented here, and they suggest that the revised method is viable in many cases. There is some dependency on the signal content that requires proper tuning of parameters. Furthermore, it is shown that different distribution widths can be produced with the method as well. From a psychoacoustical perspective, it is interesting that distributed narrow frequency bands form a spatially extended auditory event with no apparent directional focus.
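The decomposition itself is simple to sketch (our illustration, without the parameter tuning the article investigates): compute an STFT, assign each time-frequency bin to one of N output channels at random, and resynthesize each channel.

```python
# Sketch of the time-frequency-space decomposition (without the parameter
# tuning the article studies): route each STFT bin of a monophonic source
# to one of N output channels at random, then resynthesize each channel.
import numpy as np
from scipy.signal import stft, istft

fs = 48_000
x = np.random.randn(fs)                        # 1 s of noise as a test signal
_, _, S = stft(x, fs, nperseg=1024)

N = 4                                          # number of loudspeaker channels
assign = np.random.default_rng(0).integers(0, N, size=S.shape)

channels = []
for ch in range(N):
    Sc = np.where(assign == ch, S, 0.0)        # keep only this channel's bins
    _, xc = istft(Sc, fs, nperseg=1024)
    channels.append(xc)

# The channels partition the bins, so their sum reconstructs the source.
recon = sum(channels)
n = min(len(x), len(recon))
print("max reconstruction error:", np.max(np.abs(recon[:n] - x[:n])))
```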

Journal ArticleDOI
TL;DR: A modular service architecture framework to develop tailored service solutions is suggested and a real-life example of how modular service design can be adopted when developing new and modified service encounter processes in the context of the less studied, small e-stores' order-delivery process is provided.
Abstract: Servitization and productization lead to offerings and solutions that combine tangible products, standardized base services, and customized services. These tailored service offerings or solutions call for new modular service architectures and service process design approaches to complement traditional service and product design methods. In this paper we suggest a modular service architecture framework to develop tailored service solutions. We use the framework as a lens to analyze three Finnish small- and medium-sized enterprises in the retail industry offering an e-store to their customers. We identify and define the order-delivery process of the case e-stores' supply chain; the modularity and modularization principles of the order-delivery process; and constructs such as service process modularization, modular reuse, and modular variation as well as their interrelationships. For practitioners, we provide a real-life example of how modular service design can be adopted when developing new and modified service encounter processes.

Book ChapterDOI
01 Jan 2014
TL;DR: The technological developments in the field starting from the first expert systems of the late 1970s down to today’s configuration solutions are outlined.
Abstract: Nearly 40 years of configuration technologies motivated us to give this brief overview of the main configuration technological streams. We outline the technological developments in the field starting from the first expert systems of the late 1970s down to today’s configuration solutions. The vastness and intricacies of the configuration field make it impossible to cover concisely all of its relevant aspects. For the purpose of this overview, we decided to focus on the following four different yet overlapping technological developments: (1) rule-based configurators, (2) early model-based configurators, (3) mainstream configuration environments, and (4) mass customization toolkits.