Author

Ángel Cuevas

Bio: Ángel Cuevas is an academic researcher from Charles III University of Madrid. The author has contributed to research in topics including the Internet and online advertising. The author has an h-index of 18 and has co-authored 105 publications receiving 1,210 citations. Previous affiliations of Ángel Cuevas include Telecom SudParis and Carlos III Health Institute.


Papers
Journal ArticleDOI
TL;DR: IMS is examined from a mobile operator's perspective and its possible adaptation to next-generation networks is analysed.
Abstract: As third-generation mobile networks (3G networks) become a commercial reality, strong movements are emerging in the direction of a common infrastructure based on the Internet protocol (IP). The users' mobile devices are like any other IP host connected to the Internet. In such a scenario, the network operator infrastructure is degraded to bit pipes. To avoid this, the 3G Partnership Project (3GPP) and ETSI TISPAN have designed the IP Multimedia Subsystem (IMS), a service platform that aims to place the network operator again in the central role of service provisioning. In this article we examine IMS from a mobile operator's perspective and analyse its possible adaptation to next-generation networks.

146 citations

Posted Content
TL;DR: In this article, the authors identify the content publishers of more than 55K torrents in two major BitTorrent portals and examine their behavior, showing that a small fraction of publishers is responsible for 66% of the published content and 75% of the downloads.
Abstract: BitTorrent is the most popular P2P content delivery application, where individual users share various types of content with tens of thousands of other users. The growing popularity of BitTorrent is primarily due to the availability of valuable content at no cost to the consumers. However, apart from the required resources, publishing (sharing) valuable (and often copyrighted) content has serious legal implications for the users who publish the material (publishers). This raises the question of whether (at least the major) content publishers behave in an altruistic fashion or have other incentives, such as financial ones. In this study, we identify the content publishers of more than 55K torrents in two major BitTorrent portals and examine their behavior. We demonstrate that a small fraction of publishers is responsible for 66% of the published content and 75% of the downloads. Our investigations reveal that these major publishers fall into two different profiles. On the one hand, antipiracy agencies and malicious publishers publish a large number of fake files to protect copyrighted content and to spread malware, respectively. On the other hand, content publishing in BitTorrent is largely driven by companies with financial incentives. Therefore, if these companies lose their interest or are unable to publish content, BitTorrent traffic/portals may disappear, or at least their associated traffic will be significantly reduced.

78 citations

Proceedings ArticleDOI
30 Nov 2010
TL;DR: This study identifies the content publishers of more than 55K torrents in two major BitTorrent portals and examines their behavior, demonstrating that a small fraction of publishers is responsible for 67% of the published content and 75% of the downloads.
Abstract: BitTorrent is the most popular P2P content delivery application, where individual users share various types of content with tens of thousands of other users. The growing popularity of BitTorrent is primarily due to the availability of valuable content at no cost to the consumers. However, apart from the required resources, publishing (sharing) valuable (and often copyrighted) content has serious legal implications for the users who publish the material (publishers). This raises the question of whether (at least the major) content publishers behave in an altruistic fashion or have other incentives, such as financial ones. In this study, we identify the content publishers of more than 55K torrents in two major BitTorrent portals and examine their behavior. We demonstrate that a small fraction of publishers is responsible for 67% of the published content and 75% of the downloads. Our investigations reveal that these major publishers fall into two different profiles. On the one hand, antipiracy agencies and malicious publishers publish a large number of fake files to protect copyrighted content and to spread malware, respectively. On the other hand, content publishing in BitTorrent is largely driven by companies with financial incentives. Therefore, if these companies lose their interest or are unable to publish content, BitTorrent traffic/portals may disappear, or at least their associated traffic will be significantly reduced.

68 citations
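
A minimal sketch of the publisher-concentration analysis the two papers above describe: given a table mapping torrents to their publishers, aggregate per publisher and compute the cumulative share of published content and downloads captured by the top publishers. The column names and toy data below are hypothetical; the papers' crawled dataset of ~55K torrents is not reproduced here.

```python
# A minimal sketch of publisher-concentration analysis (hypothetical data).
import pandas as pd

torrents = pd.DataFrame({
    "publisher": ["eztv", "eztv", "scene-grp", "uploader42", "eztv", "scene-grp"],
    "downloads": [120_000, 95_000, 40_000, 300, 80_000, 25_000],
})

# Aggregate per publisher: number of published torrents and total downloads.
per_pub = torrents.groupby("publisher").agg(
    published=("publisher", "size"),
    downloads=("downloads", "sum"),
).sort_values("downloads", ascending=False)

# Cumulative share of content and downloads attributable to the top-k publishers,
# i.e. how few publishers account for most of the published content and traffic.
share = per_pub.cumsum() / per_pub.sum()
print(share)
```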

Journal ArticleDOI
01 Jan 2015
TL;DR: This paper investigates how users' interest similarity relates to various social features and proposes an interest similarity prediction model based on the learned social features, which reveals that people tend to exhibit more similar tastes if they have similar demographic information, or if they are friends.
Abstract: Understanding how much two individuals are alike in their interests (i.e., interest similarity) has become virtually essential for many applications and services in Online Social Networks (OSNs). Since users do not always explicitly elaborate their interests in OSNs like Facebook, how to determine users' interest similarity without fully knowing their interests is a practical problem. In this paper, we investigate how users' interest similarity relates to various social features (e.g., geographic distance), and accordingly infer whether the interests of two users are alike or unalike when one user's interests are unknown. Relying on a large Facebook dataset, which contains 479,048 users and 5,263,351 user-generated interests, we present comprehensive empirical studies and verify the homophily of interest similarity across three interest domains (movies, music and TV shows). The homophily reveals that people tend to exhibit more similar tastes if they have similar demographic information (e.g., age, location), or if they are friends. It also shows that individuals with a higher interest entropy usually share more interests with others. Based on these results, we provide a practical prediction model under a real OSN environment. For a given user with no interest information, this model can select individuals who not only exhibit many interests but also probably achieve high interest similarities with the given user. Eventually, we illustrate a use case to demonstrate that the proposed prediction model can facilitate decision-making for OSN applications and services.
Highlights: (1) Pose a practical research problem: how to infer the interest similarity of two users when one user's interests are unknown. (2) Reveal the homophily of interest similarity with respect to various social features, relying on a large Facebook dataset (479,048 users and 5,263,351 user-generated interests). (3) Devise an interest similarity prediction model based on the learned social features. (4) Illustrate a recommendation system for new users to show the practicality of the proposed interest similarity prediction model.

64 citations
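
A minimal sketch in the spirit of the interest-similarity prediction above, assuming interests are represented as per-user sets, Jaccard overlap as the similarity measure, and a simple logistic regression over a few social features (age difference, geographic distance, friendship). The feature set, threshold and classifier are illustrative assumptions, not the authors' exact model.

```python
# A minimal sketch: predict whether two users' interests are alike from
# social features only (hypothetical pairs and thresholds).
import numpy as np
from sklearn.linear_model import LogisticRegression

def jaccard(a: set, b: set) -> float:
    """Overlap of two users' interest sets (movies, music, TV shows, ...)."""
    return len(a & b) / len(a | b) if (a or b) else 0.0

# Hypothetical training pairs: social features and interest sets per pair.
# Features per pair: [age difference, geographic distance in km, are_friends].
features = np.array([[2, 5, 1], [30, 4000, 0], [1, 10, 1], [25, 900, 0]])
interest_sets = [({"jazz", "scifi"}, {"jazz", "hiking"}),
                 ({"jazz"}, {"football"}),
                 ({"scifi", "hiking"}, {"scifi", "jazz"}),
                 ({"pop"}, {"metal", "football"})]

# Binary label: are the two users' interests "alike" (Jaccard above a threshold)?
y = np.array([int(jaccard(a, b) > 0.3) for a, b in interest_sets])

model = LogisticRegression().fit(features, y)
# For a user whose interests are unknown, rank candidates by the predicted
# probability that their interests are alike, using social features alone.
print(model.predict_proba([[3, 20, 1]])[:, 1])
```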

Posted Content
TL;DR: This study offers the most comprehensive characterization of G+ based on the largest collected data sets and shows that, despite the dramatic growth in the size of G+, the relative size of the LCC has been decreasing and its connectivity has become less clustered.
Abstract: In the era when Facebook and Twitter dominate the market for social media, Google has introduced Google+ (G+) and reported significant growth in its size, while others called it a ghost town. This begs the question of whether G+ can really attract a significant number of connected and active users despite the dominance of Facebook and Twitter. This paper tackles the above question by presenting a detailed characterization of G+ based on large-scale measurements. We identify the main components of the G+ structure and characterize the key features of their users and their evolution over time. We then conduct a detailed analysis of the evolution of connectivity and activity among users in the largest connected component (LCC) of the G+ structure, and compare their characteristics with other major OSNs. We show that despite the dramatic growth in the size of G+, the relative size of the LCC has been decreasing and its connectivity has become less clustered. While the aggregate user activity has gradually increased, only a very small fraction of users exhibit any type of activity. To our knowledge, our study offers the most comprehensive characterization of G+ based on the largest collected data sets.

52 citations
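
A minimal sketch of the LCC measurements discussed above: the relative size of the largest connected component and its average clustering, computed over graph snapshots. It uses networkx on toy random graphs; the actual study relies on large-scale crawls of the G+ structure.

```python
# A minimal sketch of LCC-share and clustering metrics on toy snapshots.
import networkx as nx

def lcc_stats(g: nx.Graph) -> tuple[float, float]:
    """Relative size of the largest connected component and its clustering."""
    lcc_nodes = max(nx.connected_components(g), key=len)
    lcc = g.subgraph(lcc_nodes)
    return len(lcc) / g.number_of_nodes(), nx.average_clustering(lcc)

# Two toy "snapshots" standing in for crawls taken at different points in time.
snapshots = [nx.erdos_renyi_graph(1000, 0.01, seed=1),
             nx.erdos_renyi_graph(2000, 0.002, seed=1)]
for i, g in enumerate(snapshots):
    rel_size, clustering = lcc_stats(g)
    print(f"snapshot {i}: LCC share={rel_size:.2f}, clustering={clustering:.3f}")
```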


Cited by
Proceedings ArticleDOI
22 Jan 2006
TL;DR: Some of the major results in random graphs and some of the more challenging open problems are reviewed, including those related to the WWW.
Abstract: We will review some of the major results in random graphs and some of the more challenging open problems. We will cover algorithmic and structural questions. We will touch on newer models, including those related to the WWW.

7,116 citations

Posted Content
TL;DR: In this paper, the authors provide a unified and comprehensive theory of structural time series models, including a detailed treatment of the Kalman filter for modeling economic and social time series, and address the special problems which the treatment of such series poses.
Abstract: In this book, Andrew Harvey sets out to provide a unified and comprehensive theory of structural time series models. Unlike the traditional ARIMA models, structural time series models consist explicitly of unobserved components, such as trends and seasonals, which have a direct interpretation. As a result, the model selection methodology associated with structural models is much closer to econometric methodology. The link with econometrics is made even closer by the natural way in which the models can be extended to include explanatory variables and to cope with multivariate time series. From the technical point of view, state space models and the Kalman filter play a key role in the statistical treatment of structural time series models. The book includes a detailed treatment of the Kalman filter. This technique was originally developed in control engineering, but is becoming increasingly important in fields such as economics and operations research. This book is concerned primarily with modelling economic and social time series, and with addressing the special problems which the treatment of such series poses. The properties of the models and the methodological techniques used to select them are illustrated with various applications. These range from the modelling of trends and cycles in US macroeconomic time series to an evaluation of the effects of seat belt legislation in the UK.

4,252 citations
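
A minimal sketch of the simplest structural time series model treated in the book, the local level model (a random walk level observed with noise), filtered with a scalar Kalman filter. The variances below are fixed, illustrative values rather than maximum likelihood estimates.

```python
# Local level model: y_t = mu_t + eps_t,  mu_t = mu_{t-1} + eta_t.
# A scalar Kalman filter recovers the unobserved level mu_t from y_t.
import numpy as np

def local_level_filter(y, sigma2_eps=1.0, sigma2_eta=0.1):
    """Return filtered estimates of the unobserved level mu_t."""
    n = len(y)
    mu = np.zeros(n)
    level, p = y[0], 1e6          # diffuse initialization of the state
    for t in range(n):
        p_pred = p + sigma2_eta           # prediction: level is a random walk
        k = p_pred / (p_pred + sigma2_eps)  # Kalman gain
        level = level + k * (y[t] - level)  # update with observation y[t]
        p = (1 - k) * p_pred
        mu[t] = level
    return mu

rng = np.random.default_rng(0)
true_level = np.cumsum(rng.normal(0, 0.3, 200))   # simulated random walk level
y = true_level + rng.normal(0, 1.0, 200)          # noisy observations
print(local_level_filter(y)[:5])
```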

25 Apr 2017
TL;DR: This presentation is a case study taken from the travel and holiday industry and describes the effectiveness of various techniques as well as the performance of Python-based libraries such as the Python Data Analysis Library (Pandas) and Scikit-learn (built on NumPy, SciPy and matplotlib).
Abstract: This presentation is a case study taken from the travel and holiday industry. Paxport/Multicom, based in the UK and Sweden, have recently adopted a recommendation system for holiday accommodation bookings. Machine learning techniques such as Collaborative Filtering have been applied using Python (3.5.1), with Jupyter (4.0.6) as the main framework. Data scale and sparsity present significant challenges in the case study, so the effectiveness of various techniques is described, as well as the performance of Python-based libraries such as the Python Data Analysis Library (Pandas) and Scikit-learn (built on NumPy, SciPy and matplotlib). The presentation is suitable for all levels of programmers.

1,338 citations
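
A minimal sketch of the kind of item-based collaborative filtering the presentation describes, using Pandas for the user-item matrix and scikit-learn's cosine similarity for item-item scores. The booking data and column names are made up; the Paxport/Multicom dataset is not public.

```python
# Item-based collaborative filtering on a tiny, hypothetical bookings table.
import pandas as pd
from sklearn.metrics.pairwise import cosine_similarity

bookings = pd.DataFrame({
    "user":   ["u1", "u1", "u2", "u2", "u3", "u3"],
    "hotel":  ["h1", "h2", "h1", "h3", "h2", "h3"],
    "rating": [5, 3, 4, 5, 2, 4],
})

# User x hotel rating matrix; zeros stand in for unknown ratings (sparsity).
matrix = bookings.pivot_table(index="user", columns="hotel",
                              values="rating", fill_value=0)

# Item-item cosine similarity between hotels.
item_sim = pd.DataFrame(cosine_similarity(matrix.T),
                        index=matrix.columns, columns=matrix.columns)

# Score unseen hotels for a user as a similarity-weighted sum of their ratings.
user = matrix.loc["u1"]
scores = item_sim.dot(user)
print(scores[user == 0].sort_values(ascending=False))  # recommendations for u1
```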