
Showing papers by "Konstantinos G. Margaritis published in 2005"


Proceedings ArticleDOI
08 Sep 2005
TL;DR: The results show that reducing the dimension of the item neighborhood is promising, since it not only tackles some of the recorded problems of recommender systems but also helps increase the accuracy of systems that employ it.
Abstract: In this paper we examine the use of a matrix factorization technique called singular value decomposition (SVD) in item-based collaborative filtering. After a brief introduction to SVD and some of its previous applications in recommender systems, we give a full description of our algorithm, which uses SVD to reduce the dimension of the active item's neighborhood. The experimental part of this work first locates the ideal parameter settings for the algorithm and concludes by contrasting it with plain item-based filtering, which uses the original, high-dimensional neighborhood. The results show that reducing the dimension of the item neighborhood is promising, since it not only tackles some of the recorded problems of recommender systems but also helps increase the accuracy of systems that employ it.
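The abstract includes no code, so the following is only a minimal sketch of the idea it describes, assuming a small dense ratings matrix and cosine similarity in the SVD-reduced item space; the example matrix, the chosen dimension k, and the prediction rule are illustrative, not the authors' settings.

```python
# Minimal sketch of SVD-reduced item-based collaborative filtering.
import numpy as np

# Tiny stand-in ratings matrix (users x items); 0 marks a missing rating.
R = np.array([[5, 3, 0, 1],
              [4, 0, 0, 1],
              [1, 1, 0, 5],
              [0, 0, 5, 4],
              [0, 1, 5, 4]], dtype=float)

k = 2  # reduced dimension of the item neighborhood (a tunable parameter)

# SVD of the ratings matrix; items are represented by the rows of Vt.T.
U, s, Vt = np.linalg.svd(R, full_matrices=False)
item_factors = Vt.T[:, :k] * s[:k]        # items in k-dimensional space

def item_similarities(active):
    # Cosine similarity between the active item and all items,
    # computed in the reduced space rather than the original one.
    a = item_factors[active]
    norms = np.linalg.norm(item_factors, axis=1) * np.linalg.norm(a)
    return item_factors @ a / np.where(norms == 0, 1, norms)

def predict(user, active):
    # Similarity-weighted average over items the user has already rated.
    sims = item_similarities(active)
    rated = R[user] > 0
    rated[active] = False
    w = np.abs(sims[rated]).sum()
    return float(sims[rated] @ R[user, rated] / w) if w > 0 else 0.0

print(predict(user=1, active=1))
```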

49 citations


Proceedings ArticleDOI
04 Sep 2005
TL;DR: This work maps the acoustic speech signal, parametrized using Mel Frequency Cepstral Analysis, onto electromagnetic articulography trajectories from the MOCHA database, using the machine learning technique of Support Vector Regression.
Abstract: We report work on mapping the acoustic speech signal, parametrized using Mel Frequency Cepstral Analysis, onto electromagnetic articulography trajectories from the MOCHA database. We employ the machine learning technique of Support Vector Regression, in contrast to previous works that applied Neural Networks to the same task. Our results are comparable to those of the older attempts, even though, due to training-time considerations, we use a much smaller training set, derived by clustering the acoustic data.
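A minimal sketch of the regression step described above, assuming MFCC frames are already extracted; scikit-learn's SVR stands in for whatever implementation the authors used, the random arrays stand in for MOCHA acoustic/EMA data, and the clustering-based reduction of the training set is not shown.

```python
# One SVR per articulatory channel, mapping MFCC frames to an EMA trajectory.
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X_train = rng.normal(size=(500, 13))   # 13 MFCCs per frame (stand-in data)
y_train = rng.normal(size=500)         # one EMA channel, e.g. tongue-tip height
X_test = rng.normal(size=(50, 13))

# An RBF kernel is a common choice; C and epsilon are illustrative values.
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=1.0, epsilon=0.1))
model.fit(X_train, y_train)
trajectory = model.predict(X_test)     # predicted channel, frame by frame
print(trajectory[:5])
```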

21 citations


Journal ArticleDOI
TL;DR: This paper presents a new linear processor array for solving the LCS problem, based on parallelization of a recent LCS algorithm which consists of two phases, i.e. preprocessing and computation.
Abstract: A longest common subsequence (LCS) of two strings is a common subsequence of maximal length. The LCS problem is to find an LCS of two given strings and the length of the LCS (LLCS). In this paper, we present a new linear processor array for solving the LCS problem. The array is based on the parallelization of a recent LCS algorithm which consists of two phases, namely preprocessing and computation. The computation phase is based on a bit-level dynamic programming approach. Implementations of the preprocessing and computation phases are discussed on the same processor array architecture for the LCS problem. Further, we propose a block processor array architecture which reduces the overall communication and time requirements. Finally, we develop a performance model for estimating the performance of the processor array architecture on Pentium processors.
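As a concrete reference point for the computation phase, here is a sequential sketch of a bit-level dynamic programming LLCS algorithm of the kind such an array parallelizes: a preprocessing phase that builds per-symbol bit masks, then a bit-vector recurrence over the second string. It is not the processor array itself, and the exact recurrence may differ from the algorithm the paper parallelizes.

```python
def llcs_bit_parallel(x, y):
    """Length of the LCS of x and y via bit-vector dynamic programming."""
    m = len(x)
    mask = (1 << m) - 1

    # Preprocessing: for each symbol, a bit mask of its positions in x.
    pm = {}
    for i, c in enumerate(x):
        pm[c] = pm.get(c, 0) | (1 << i)

    # Computation: one bit-vector update per character of y;
    # zero bits of v mark matched positions of x.
    v = mask
    for c in y:
        u = v & pm.get(c, 0)
        v = ((v + u) | (v - u)) & mask
    return m - bin(v).count("1")       # number of zero bits = LLCS

print(llcs_bit_parallel("thisisatest", "testing123testing"))  # -> 7
```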

8 citations


Proceedings ArticleDOI
14 Jun 2005
TL;DR: This paper proposes a generic programmable array processor architecture that maximizes the strength of VLSI in terms of intensive and pipelined computing and yet circumvents the limitation on communication.
Abstract: This paper proposes a generic programmable array processor architecture for a wide variety of approximate string matching algorithms. We describe the architecture of the array and of its cells in detail, so that both the preprocessing and searching phases of most string matching algorithms can be implemented efficiently. The architecture also performs approximate string matching for complex patterns that contain don't-care, complement, and class symbols. It maximizes the strength of VLSI in terms of intensive and pipelined computing while circumventing the limitation on communication, and may be adopted as the basic structure of a universal, flexible string-matching engine.
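For orientation, the following is a sequential sketch of the kind of dynamic programming such an array would pipeline: Sellers-style approximate matching of a pattern against a text, with '?' treated as a don't-care symbol. The complement and class symbols mentioned above, and the mapping onto the array cells, are omitted.

```python
def approx_match(pattern, text, max_errors):
    """Report text positions where the pattern matches with <= max_errors edits."""
    m = len(pattern)
    # col[i] = minimum edit distance between pattern[:i] and some suffix
    # of the text read so far (one DP column per text character).
    col = list(range(m + 1))
    hits = []
    for j, tc in enumerate(text):
        prev_diag = col[0]          # diagonal value from the previous column
        col[0] = 0                  # a match may start at any text position
        for i in range(1, m + 1):
            pc = pattern[i - 1]
            cost = 0 if (pc == "?" or pc == tc) else 1
            cur = min(col[i] + 1,          # skip a text character
                      col[i - 1] + 1,      # skip a pattern character
                      prev_diag + cost)    # substitution or match
            prev_diag, col[i] = col[i], cur
        if col[m] <= max_errors:
            hits.append((j, col[m]))        # (end position, number of errors)
    return hits

print(approx_match("surv?y", "a survey of surveys", max_errors=1))
```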

8 citations


Book ChapterDOI
19 Apr 2005
TL;DR: This work presents results on experiments using Support Vector Classification and a combination of Principal Component Analysis and Support Vector Regression on the mapping from the acoustic signal to electropalatographic information.
Abstract: Electropalatography is a well established technique for recording information on the patterns of contact between the tongue and the hard palate during speech. It leads to a stream of binary vectors, called electropalatograms. We are interested in the mapping from the acoustic signal to electropalatographic information. We present results on experiments using Support Vector Classification and a combination of Principal Component Analysis and Support Vector Regression.
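A minimal sketch of the classification approach, assuming one binary Support Vector Classifier per palate contact; the 62-contact layout, the feature dimension, and the random stand-in data are assumptions, not the authors' setup.

```python
# One SVC per electropalatogram bit, predicted from an acoustic feature vector.
import numpy as np
from sklearn.multioutput import MultiOutputClassifier
from sklearn.svm import SVC

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 13))                       # acoustic features per frame
Y = (rng.random(size=(300, 62)) > 0.7).astype(int)   # 62 binary contacts

# MultiOutputClassifier fits one binary SVC per contact.
clf = MultiOutputClassifier(SVC(kernel="rbf", C=1.0))
clf.fit(X[:250], Y[:250])
predicted_epg = clf.predict(X[250:])                 # rows are binary vectors
print(predicted_epg.shape)                           # (50, 62)
```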

3 citations


Book ChapterDOI
12 Sep 2005
TL;DR: Support Vector Regression is employed as the main tool, and Principal Component Analysis is used as an auxiliary one for the mapping between the speech signal and articulatory trajectories from the MOCHA database.
Abstract: We report work on the mapping between the speech signal and articulatory trajectories from the MOCHA database. In contrast to previous works that used Neural Networks for the same task, we employ Support Vector Regression as our main tool and Principal Component Analysis as an auxiliary one. Our results are comparable, even though, due to training-time considerations, we use only a small portion of the available data.
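A minimal sketch of the PCA-plus-SVR combination, assuming PCA compresses the articulatory channels and one SVR is trained per principal component; the channel count, component count, and random stand-in data are assumptions.

```python
# Compress the EMA channels with PCA, regress each component, invert the PCA.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.svm import SVR

rng = np.random.default_rng(2)
X = rng.normal(size=(400, 13))     # acoustic features (e.g. MFCCs), stand-in
Y = rng.normal(size=(400, 14))     # 14 EMA channels (7 coils x 2 coordinates)

pca = PCA(n_components=4)          # a handful of components, assumed
Z = pca.fit_transform(Y)           # trajectories in PCA space

models = [SVR(kernel="rbf").fit(X, Z[:, k]) for k in range(Z.shape[1])]

# Predict the components for new frames, then map back to the channels.
X_new = rng.normal(size=(10, 13))
Z_pred = np.column_stack([m.predict(X_new) for m in models])
Y_pred = pca.inverse_transform(Z_pred)   # back to the 14 EMA channels
print(Y_pred.shape)                      # (10, 14)
```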

2 citations


Book ChapterDOI
11 Nov 2005
TL;DR: A performance model is presented for estimating the performance of two proposed matrix-vector implementations on a cluster of heterogeneous workstations; the cost of reading data from disk and the communication cost are identified as the primary factors limiting the performance of the basic parallel matrix-vector implementation.
Abstract: This paper presents a basic parallel implementation of matrix-vector multiplication and a variation of it. We evaluated and compared the performance of the two implementations on a cluster of workstations using the Message Passing Interface (MPI) library. The experimental results demonstrate that the basic implementation achieves lower performance than the variation. Further, we analyzed the several classes of overhead that contribute to the lower performance of the basic implementation; these analyses identified the cost of reading data from disk and the communication cost as the primary factors affecting the performance of the basic parallel matrix-vector implementation. Finally, we present a performance model for estimating the performance of the two proposed matrix-vector implementations on a cluster of heterogeneous workstations.
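A minimal mpi4py sketch of a basic row-block parallel matrix-vector multiplication of the kind evaluated above (not the authors' code); it assumes the matrix dimension is divisible by the number of processes and ignores the disk-reading stage the paper analyses.

```python
# Run with, e.g.:  mpiexec -n 4 python matvec.py
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

n = 8                                  # matrix dimension (assumed n % size == 0)
rows = n // size

if rank == 0:
    A = np.arange(n * n, dtype="d").reshape(n, n)
    x = np.ones(n, dtype="d")
else:
    A, x = None, np.empty(n, dtype="d")

local_A = np.empty((rows, n), dtype="d")
comm.Scatter(A, local_A, root=0)       # distribute row blocks of A
comm.Bcast(x, root=0)                  # every process needs the full vector

local_y = local_A @ x                  # local partial product

y = np.empty(n, dtype="d") if rank == 0 else None
comm.Gather(local_y, y, root=0)        # collect the result on the root
if rank == 0:
    print(y)
```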

1 citation


Journal ArticleDOI
01 Feb 2005
TL;DR: A hybrid parallel implementation is proposed that combines the advantages of static and dynamic parallel methods in order to reduce the load imbalance and communication overhead and a performance prediction model of the four implementations agrees well with experimental measurements.
Abstract: In this paper, we propose four text searching implementations on a cluster of heterogeneous workstations using the MPI message passing library. The first three parallel implementations are based on the static and dynamic master-worker methods. Further, we propose a hybrid parallel implementation that combines the advantages of the static and dynamic parallel methods in order to reduce load imbalance and communication overhead. We test these parallel implementations and present experimental results for different text sizes and numbers of workstations. We also propose a performance prediction model for the four implementations that agrees well with our experimental measurements.
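A minimal mpi4py sketch of a dynamic master-worker text search of the kind compared above (not the authors' code); the chunking, pattern, and stop-message protocol are illustrative, chunk-boundary overlaps are ignored, and at least two processes are assumed.

```python
# Run with, e.g.:  mpiexec -n 4 python search.py
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()
PATTERN, TAG_WORK, TAG_STOP = "abra", 1, 2

if rank == 0:                                        # master
    chunks = ["abracadabra" * 100 for _ in range(20)]   # stand-in text chunks
    total, next_chunk, status = 0, 0, MPI.Status()
    # Prime each worker with one chunk (or a stop message if none are left).
    for w in range(1, size):
        if next_chunk < len(chunks):
            comm.send(chunks[next_chunk], dest=w, tag=TAG_WORK)
            next_chunk += 1
        else:
            comm.send(None, dest=w, tag=TAG_STOP)
    # Reassign chunks on demand as workers finish.
    for _ in range(next_chunk if next_chunk < len(chunks) else len(chunks)):
        count = comm.recv(source=MPI.ANY_SOURCE, status=status)
        total += count
        w = status.Get_source()
        if next_chunk < len(chunks):
            comm.send(chunks[next_chunk], dest=w, tag=TAG_WORK)
            next_chunk += 1
        else:
            comm.send(None, dest=w, tag=TAG_STOP)
    print("matches:", total)
else:                                                # worker
    status = MPI.Status()
    while True:
        chunk = comm.recv(source=0, status=status)
        if status.Get_tag() == TAG_STOP:
            break
        comm.send(chunk.count(PATTERN), dest=0)      # report the match count
```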

1 citation


Book ChapterDOI
01 Jan 2005
TL;DR: The news collection process, the techniques adopted for structuring the news archive, the creation, maintenance and update of the user model and the generation of the personalized web pages are described.
Abstract: PENA (Personalized News Access) is an adaptive system for personalized access to news. The aims of the system are to collect news from predefined news sites, to select the sections and news items on the server that are most relevant to each user, and to present the selected news. In this paper we describe the news collection process, the techniques adopted for structuring the news archive, the creation, maintenance, and update of the user model, and the generation of the personalized web pages. This is preliminary work based on the system described in [1].
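The abstract gives no implementation details, so the following is purely an illustrative sketch of a content-based user model of the kind such a system might maintain: term weights accumulated from the articles a user reads, used to rank incoming news. None of the names or the scoring rule come from the paper.

```python
# Illustrative keyword-weight user model for ranking incoming news items.
from collections import Counter

def tokenize(text):
    return [w.lower() for w in text.split() if w.isalpha()]

class UserModel:
    def __init__(self):
        self.weights = Counter()

    def update(self, read_article):        # user-model maintenance/update step
        self.weights.update(tokenize(read_article))

    def score(self, article):              # relevance used for news selection
        tokens = tokenize(article)
        return sum(self.weights[t] for t in tokens) / (len(tokens) or 1)

model = UserModel()
model.update("Parallel algorithms for string matching on clusters")
news = ["New parallel matrix algorithms announced",
        "Local football results from the weekend"]
print(sorted(news, key=model.score, reverse=True)[0])
```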