
Showing papers on "User modeling published in 2016"


Journal ArticleDOI
TL;DR: Several actions could improve the research landscape: developing a common evaluation framework, agreement on the information to include in research papers, a stronger focus on non-accuracy aspects and user modeling, a platform for researchers to exchange information, and an open-source framework that bundles the available recommendation approaches.
Abstract: In the last 16 years, more than 200 research articles were published about research-paper recommender systems. We reviewed these articles and present some descriptive statistics in this paper, as well as a discussion about the major advancements and shortcomings and an overview of the most common recommendation concepts and approaches. We found that more than half of the recommendation approaches applied content-based filtering (55 %). Collaborative filtering was applied by only 18 % of the reviewed approaches, and graph-based recommendations by 16 %. Other recommendation concepts included stereotyping, item-centric recommendations, and hybrid recommendations. The content-based filtering approaches mainly utilized papers that the users had authored, tagged, browsed, or downloaded. TF-IDF was the most frequently applied weighting scheme. In addition to simple terms, n-grams, topics, and citations were utilized to model users' information needs. Our review revealed some shortcomings of the current research. First, it remains unclear which recommendation concepts and approaches are the most promising. For instance, researchers reported different results on the performance of content-based and collaborative filtering. Sometimes content-based filtering performed better than collaborative filtering and sometimes it performed worse. We identified three potential reasons for the ambiguity of the results. (A) Several evaluations had limitations. They were based on strongly pruned datasets, few participants in user studies, or did not use appropriate baselines. (B) Some authors provided little information about their algorithms, which makes it difficult to re-implement the approaches. Consequently, researchers use different implementations of the same recommendation approaches, which might lead to variations in the results. (C) We speculated that minor variations in datasets, algorithms, or user populations inevitably lead to strong variations in the performance of the approaches. Hence, finding the most promising approaches is a challenge. As a second limitation, we noted that many authors neglected to take into account factors other than accuracy, for example overall user satisfaction. In addition, most approaches (81 %) neglected the user-modeling process and did not infer information automatically but let users provide keywords, text snippets, or a single paper as input. Information on runtime was provided for 10 % of the approaches. Finally, few research papers had an impact on research-paper recommender systems in practice. We also identified a lack of authority and long-term research interest in the field: 73 % of the authors published no more than one paper on research-paper recommender systems, and there was little cooperation among different co-author groups. We concluded that several actions could improve the research landscape: developing a common evaluation framework, agreement on the information to include in research papers, a stronger focus on non-accuracy aspects and user modeling, a platform for researchers to exchange information, and an open-source framework that bundles the available recommendation approaches.
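As a concrete illustration of the dominant approach in this survey (TF-IDF content-based filtering over papers the user has authored, tagged, or downloaded), here is a minimal sketch; the toy corpus and the helper name recommend_papers are illustrative assumptions, not code from any surveyed system.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def recommend_papers(user_papers, candidate_papers, top_k=3):
    """Rank candidate papers by TF-IDF cosine similarity to a profile built
    from papers the user authored, tagged, or downloaded (illustrative sketch)."""
    vectorizer = TfidfVectorizer(stop_words="english")
    tfidf = vectorizer.fit_transform(user_papers + candidate_papers)
    profile = np.asarray(tfidf[:len(user_papers)].mean(axis=0))   # aggregate user model
    candidates = tfidf[len(user_papers):]
    scores = cosine_similarity(profile, candidates).ravel()
    order = np.argsort(-scores)
    return [(candidate_papers[i], float(scores[i])) for i in order[:top_k]]

user_papers = ["collaborative filtering for citation recommendation",
               "content-based research paper recommender systems"]
candidates = ["graph-based recommendation of scholarly articles",
              "tf-idf weighting for document retrieval",
              "deep learning for image segmentation"]
print(recommend_papers(user_papers, candidates))
```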

648 citations


Proceedings ArticleDOI
11 Apr 2016
TL;DR: This paper proposes a new probabilistic approach that directly incorporates user exposure to items into collaborative filtering, and recovers one of the most successful state-of-the-art approaches as a special case of the model.
Abstract: Collaborative filtering analyzes user preferences for items (e.g., books, movies, restaurants, academic papers) by exploiting the similarity patterns across users. In implicit feedback settings, all the items, including the ones that a user did not consume, are taken into consideration. But this assumption does not accord with the common sense understanding that users have a limited scope and awareness of items. For example, a user might not have heard of a certain paper, or might live too far away from a restaurant to experience it. In the language of causal analysis (Imbens & Rubin, 2015), the assignment mechanism (i.e., the items that a user is exposed to) is a latent variable that may change for various user/item combinations. In this paper, we propose a new probabilistic approach that directly incorporates user exposure to items into collaborative filtering. The exposure is modeled as a latent variable and the model infers its value from data. In doing so, we recover one of the most successful state-of-the-art approaches as a special case of our model (Hu et al. 2008), and provide a plug-in method for conditioning exposure on various forms of exposure covariates (e.g., topics in text, venue locations). We show that our scalable inference algorithm outperforms existing benchmarks in four different domains both with and without exposure covariates.
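The exposure idea can be illustrated with a toy posterior computation under a Gaussian matrix-factorization likelihood: if an item was not clicked, how likely is it that the user was even exposed to it? All numbers below are invented and this is not the paper's full inference procedure.

```python
import numpy as np
from scipy.stats import norm

# Toy illustration: exposure a_ui is a latent Bernoulli variable. For an
# unconsumed item (y_ui = 0), how likely is it the user was even exposed?
# Values are invented; this is not the paper's full inference algorithm.
mu = 0.3                          # assumed prior probability of exposure
theta_u = np.array([0.8, 0.1])    # user factors (made up)
beta_i = np.array([0.7, 0.2])     # item factors (made up)
lam_y = 1.0                       # precision of the click likelihood

pred = theta_u @ beta_i
lik_if_exposed = norm.pdf(0.0, loc=pred, scale=lam_y ** -0.5)  # p(y=0 | exposed)
# p(y=0 | not exposed) = 1 by construction, so:
posterior_exposed = mu * lik_if_exposed / (mu * lik_if_exposed + (1 - mu))
print(f"P(exposed | no click) = {posterior_exposed:.3f}")
```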

329 citations


Journal ArticleDOI
TL;DR: The principles and system components for navigation and manipulation in domestic environments, the interaction paradigm and its implementation in a multimodal user interface, the core robot tasks, as well as the results from the user studies are described.

263 citations


Journal ArticleDOI
TL;DR: This paper proposes a latent class probabilistic generative model Spatial-Temporal LDA (ST-LDA) to learn region-dependent personal interests according to the contents of users' checked-in POIs in each region, and designs an effective attribute pruning algorithm to overcome the curse of dimensionality and support fast online recommendation for large-scale POI data.
Abstract: Point-of-Interest recommendation is an essential means to help people discover attractive locations, especially when people travel out of town or to unfamiliar regions. While a growing line of research has focused on modeling user geographical preferences for POI recommendation, they ignore the phenomenon of user interest drift across geographical regions, i.e., users tend to have different interests when they travel in different regions, which discounts the recommendation quality of existing methods, especially for out-of-town users. In this paper, we propose a latent class probabilistic generative model Spatial-Temporal LDA (ST-LDA) to learn region-dependent personal interests according to the contents of their checked-in POIs at each region. As the users’ check-in records left in the out-of-town regions are extremely sparse, ST-LDA incorporates the crowd’s preferences by considering the public’s visiting behaviors at the target region. To further alleviate the issue of data sparsity, a social-spatial collective inference framework is built on ST-LDA to enhance the inference of region-dependent personal interests by effectively exploiting the social and spatial correlation information. Besides, based on ST-LDA, we design an effective attribute pruning (AP) algorithm to overcome the curse of dimensionality and support fast online recommendation for large-scale POI data. Extensive experiments have been conducted to evaluate the performance of our ST-LDA model on two real-world and large-scale datasets. The experimental results demonstrate the superiority of ST-LDA and AP, compared with the state-of-the-art competing methods, by making more effective and efficient mobile recommendations.
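ST-LDA itself is a full generative model; the sketch below only illustrates the underlying intuition that a user with few check-ins in the target region should be scored mostly by the crowd's preferences there. The blending rule, the constant k, and the scores are assumptions for illustration.

```python
# Not ST-LDA itself: a much simpler sketch of the intuition that sparse
# out-of-town check-ins should shift the model toward the crowd's
# preferences at the target region. All names and numbers are illustrative.

def blended_interest(personal_score, crowd_score, n_checkins_in_region, k=5.0):
    """Convex combination: the fewer check-ins the user has in this region,
    the more weight the regional crowd preference receives."""
    alpha = n_checkins_in_region / (n_checkins_in_region + k)
    return alpha * personal_score + (1 - alpha) * crowd_score

# Out-of-town user with 1 check-in vs. a local with 40 check-ins.
print(blended_interest(personal_score=0.9, crowd_score=0.3, n_checkins_in_region=1))
print(blended_interest(personal_score=0.9, crowd_score=0.3, n_checkins_in_region=40))
```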

201 citations


Proceedings ArticleDOI
13 Mar 2016
TL;DR: A user study designed to measure user satisfaction over a range of typical scenarios of use is described, finding that the notion of satisfaction varies across different scenarios, and that overall task-level satisfaction cannot be reduced to query-level satisfaction alone.
Abstract: Voice-controlled intelligent personal assistants, such as Cortana, Google Now, Siri and Alexa, are increasingly becoming a part of users' daily lives, especially on mobile devices. They introduce a significant change in information access, not only by introducing voice control and touch gestures but also by enabling dialogues where the context is preserved. This raises the need for evaluation of their effectiveness in assisting users with their tasks. However, in order to understand which type of user interactions reflect different degrees of user satisfaction we need explicit judgements. In this paper, we describe a user study that was designed to measure user satisfaction over a range of typical scenarios of use: controlling a device, web search, and structured search dialogue. Using this data, we study how user satisfaction varied with different usage scenarios and what signals can be used for modeling satisfaction in the different scenarios. We find that the notion of satisfaction varies across different scenarios, and show that, in some scenarios (e.g. making a phone call), task completion is very important while for others (e.g. planning a night out), the amount of effort spent is key. We also study how the nature and complexity of the task at hand affects user satisfaction, and find that preserving the conversation context is essential and that overall task-level satisfaction cannot be reduced to query-level satisfaction alone. Finally, we shed light on the relative effectiveness and usefulness of voice-controlled intelligent agents, explaining their increasing popularity and uptake relative to the traditional query-response interaction.

167 citations


Proceedings ArticleDOI
13 Jul 2016
TL;DR: How the field has evolved, novel work being pursued on applying user modeling and adaptation to information retrieval, insights into where the field is headed and the hottest topics for exploration, and some thoughts on the conflict between the benefits of user modeling and its intrusion on people's lives are discussed.
Abstract: User modeling and adaptation had its inception as a field at a workshop in Maria Laach, Germany in 1986. Most of the work at that time focused on applications in natural language processing, such as adapting explanations to the user's level of expertise. Since then, the field has grown tremendously and new applications are arising each year. As appropriate for the 30th anniversary of the first workshop, this talk will discuss how the field has evolved, novel work that we are pursuing on applying user modeling and adaptation to information retrieval, insights into where the field is headed and the hottest topics for exploration, and some thoughts on the conflict between the benefits of user modeling and its intrusion on people's lives.

162 citations


Proceedings ArticleDOI
13 Aug 2016
TL;DR: This work explores a new concept of "Latent User Space" to more naturally model the relationship between the underlying real users and their observed projections onto the varied social platforms, such that the more similar the real users, the closer their profiles in the latent user space.
Abstract: User identity linkage across social platforms is an important problem of great research challenge and practical value. In real applications, the task often assumes an extra degree of difficulty by requiring linkage across multiple platforms. While pair-wise user linkage between two platforms, which has been the focus of most existing solutions, provides reasonably convincing linkage, the result depends by nature on the order of platform pairs in execution with no theoretical guarantee on its stability. In this paper, we explore a new concept of "Latent User Space" to more naturally model the relationship between the underlying real users and their observed projections onto the varied social platforms, such that the more similar the real users, the closer their profiles in the latent user space. We propose two effective algorithms, a batch model (ULink) and an online model (ULink-On), based on latent user space modelling. Two simple yet effective optimization methods are used for optimizing the objective function: the first one based on the constrained concave-convex procedure (CCCP) and the second on accelerated proximal gradient. To the best of our knowledge, this is the first work to propose a unified framework to address the following two important aspects of the multi-platform user identity linkage problem: (I) the platform multiplicity and (II) online data generation. We present experimental evaluations on real-world data sets for not only traditional pairwise-platform linkage but also multi-platform linkage. The results demonstrate the superiority of our proposed method over the state-of-the-art ones.
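A toy stand-in for the latent-user-space idea, assuming per-platform profiles have already been projected into a common space (the part ULink actually learns via CCCP or accelerated proximal gradient): link accounts by nearest-neighbour cosine similarity. All account names and vectors are invented.

```python
import numpy as np

# Toy stand-in for the latent-user-space idea: assume per-platform profiles
# have already been projected into a common space (the hard part, which the
# paper's models learn); then link accounts by cosine similarity. Data invented.
def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

platform_a = {"alice@A": np.array([0.9, 0.1, 0.2]),
              "bob@A":   np.array([0.1, 0.8, 0.3])}
platform_b = {"al1ce@B": np.array([0.85, 0.15, 0.25]),
              "bobby@B": np.array([0.2, 0.9, 0.2])}

for name_a, vec_a in platform_a.items():
    best = max(platform_b.items(), key=lambda kv: cosine(vec_a, kv[1]))
    print(f"{name_a} -> {best[0]} (sim={cosine(vec_a, best[1]):.3f})")
```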

139 citations


Patent
23 Mar 2016
TL;DR: Control circuitry analyzes verbal data to automatically identify a media asset referred to during an interaction by at least one of the user and the person with whom the user is interacting.
Abstract: Methods and systems are provided for generating automatic program recommendations based on user interactions. In some embodiments, control circuitry processes verbal data received during an interaction between a user of a user device and a person with whom the user is interacting. The control circuitry analyzes the verbal data to automatically identify a media asset referred to during the interaction by at least one of the user and the person with whom the user is interacting. The control circuitry adds the identified media asset to a list of media assets associated with the user of the user device. The list of media assets is transmitted to a second user device of the user.

124 citations


Proceedings ArticleDOI
07 Jul 2016
TL;DR: This paper proposes an automatic method to predict user satisfaction with intelligent assistants that exploits all the interaction signals, including voice commands and physical touch gestures on the device, and finds that interaction signals that capture users' reading patterns have a high impact.
Abstract: There is a rapid growth in the use of voice-controlled intelligent personal assistants on mobile devices, such as Microsoft's Cortana, Google Now, and Apple's Siri. They significantly change the way users interact with search systems, not only because of the voice control use and touch gestures, but also due to the dialogue-style nature of the interactions and their ability to preserve context across different queries. Predicting success and failure of such search dialogues is a new problem, and an important one for evaluating and further improving intelligent assistants. While clicks in web search have been extensively used to infer user satisfaction, their significance in search dialogues is lower due to the partial replacement of clicks with voice control, direct and voice answers, and touch gestures. In this paper, we propose an automatic method to predict user satisfaction with intelligent assistants that exploits all the interaction signals, including voice commands and physical touch gestures on the device. First, we conduct an extensive user study to measure user satisfaction with intelligent assistants, and simultaneously record all user interactions. Second, we show that the dialogue style of interaction makes it necessary to evaluate the user experience at the overall task level as opposed to the query level. Third, we train a model to predict user satisfaction, and find that interaction signals that capture users' reading patterns have a high impact: when including all available interaction signals, we are able to improve the prediction accuracy of user satisfaction from 71% to 81% over a baseline that utilizes only click and query features.
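A sketch of the prediction setup described above: a supervised classifier over session-level interaction features such as clicks, queries or voice commands, touch gestures, and dwell time. The features, labels, and data below are synthetic; the paper's actual feature set and model are richer.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

# Sketch of task-level satisfaction prediction from interaction signals.
# Feature columns (clicks, queries/voice commands, touch gestures, dwell)
# and the synthetic labels are illustrative assumptions, not the paper's data.
rng = np.random.default_rng(0)
n = 500
X = np.column_stack([
    rng.poisson(2, n),       # clicks in the task
    rng.poisson(3, n),       # queries / voice commands issued
    rng.poisson(5, n),       # touch gestures (scrolls, swipes)
    rng.exponential(30, n),  # dwell time on answers (seconds)
])
# Synthetic label: longer reading and fewer reformulations -> "satisfied".
y = ((X[:, 3] > 20) & (X[:, 1] < 5)).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
print("held-out accuracy:", round(model.score(X_te, y_te), 3))
```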

114 citations


Proceedings ArticleDOI
Dmitry Lagun, Mounia Lalmas
08 Feb 2016
TL;DR: The proposed user engagement classes provide a clear and interpretable taxonomy of user engagement with online news, and are defined based on the amount of time a user spends on the page, the proportion of the article the user actually reads, and the amount of interaction the user performs with the comments.
Abstract: Prior work on user engagement with online media identified web page dwell time as a key metric reflecting the level of user engagement with online news articles. While, on average, dwell time gives a reasonable estimate of user experience with a news article, it is not able to capture important aspects of user interaction with the page, such as how much time a user spends reading the article vs. viewing the comments posted by other users, or the actual proportion of the article read by the user. In this paper, we propose a set of user engagement classes along with new user engagement metrics that, unlike dwell time, more accurately reflect user experience with the content. Our user engagement classes provide a clear and interpretable taxonomy of user engagement with online news, and are defined based on the amount of time a user spends on the page, the proportion of the article the user actually reads, and the amount of interaction the user performs with the comments. Moreover, we demonstrate that our metrics are easier to predict from the news article content than dwell time, making optimization of user engagement a more attainable goal.
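The class definitions can be illustrated with a small rule-based sketch; the thresholds and class names below are hypothetical, showing only how dwell time, proportion read, and comment interaction can be combined (the paper's exact taxonomy may differ).

```python
# Hypothetical thresholds and class names, illustrating how engagement
# classes can be derived from dwell time, proportion of the article read,
# and interaction with the comments (the paper's exact taxonomy may differ).
def engagement_class(dwell_seconds, fraction_read, comment_interactions):
    if dwell_seconds < 10:
        return "bounce"
    if fraction_read < 0.5:
        return "shallow read"
    if comment_interactions > 0:
        return "read and engaged with comments"
    return "complete read"

print(engagement_class(dwell_seconds=5,  fraction_read=0.1, comment_interactions=0))
print(engagement_class(dwell_seconds=90, fraction_read=0.9, comment_interactions=3))
```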

93 citations


Proceedings ArticleDOI
07 May 2016
TL;DR: The UBS is a 20-item scale with 6 individual sub-scales representing each construct of user burden, which has good overall inter-item reliability, convergent validity with similar scales, and concurrent validity when compared to systems abandoned vs. those still in use.
Abstract: Computing systems that place a high level of burden on their users can have a negative effect on initial adoption, retention, and overall user experience. Through an iterative process, we have developed a model for user burden that consists of six constructs: 1) difficulty of use, 2) physical, 3) time and social, 4) mental and emotional, 5) privacy, and 6) financial. If researchers and practitioners have an understanding of the overall level of burden a system may be placing on the user, they can have a better sense of whether and where to target future design efforts that can reduce those burdens. To help assist with understanding and measuring user burden, we have also developed and validated a measure of user burden in computing systems called the User Burden Scale (UBS), which is a 20-item scale with 6 individual sub-scales representing each construct. This paper presents the process we followed to develop and validate this scale for use in evaluating user burden in computing systems. Results indicate that the User Burden Scale has good overall inter-item reliability, convergent validity with similar scales, and concurrent validity when compared to systems abandoned vs. those still in use.
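A sketch of the scoring mechanics only: turning 20 item responses into six sub-scale scores and an overall score. The item-to-sub-scale mapping and response coding below are hypothetical, not the published instrument.

```python
# Hypothetical item-to-sub-scale mapping and response coding: the sketch
# only shows the mechanics of aggregating 20 item responses into the six
# User Burden Scale constructs, not the published instrument itself.
SUBSCALES = {
    "difficulty_of_use":    [0, 1, 2, 3],
    "physical":             [4, 5, 6],
    "time_and_social":      [7, 8, 9, 10],
    "mental_and_emotional": [11, 12, 13],
    "privacy":              [14, 15, 16],
    "financial":            [17, 18, 19],
}

def score_ubs(responses):
    """responses: 20 item ratings (e.g., 0-4). Returns sub-scale means and an overall mean."""
    assert len(responses) == 20
    subscale_scores = {name: sum(responses[i] for i in items) / len(items)
                       for name, items in SUBSCALES.items()}
    overall = sum(responses) / len(responses)
    return subscale_scores, overall

scores, overall = score_ubs([2, 3, 1, 2, 0, 1, 0, 3, 2, 2, 1, 2, 3, 1, 0, 1, 0, 1, 0, 1])
print(scores, overall)
```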

Journal ArticleDOI
01 Aug 2016
TL;DR: More recent attempts to support users, primarily in the private-life context (on mobile devices), are becoming more sophisticated and have been met with a more favorable response (e.g., Apple's Siri and Google’s Google Now).
Abstract: Information technology (IT) capabilities are increasing at an impressive pace, but users' cognitive abilities are not developing at the same speed. Thus, there is a gap between users' abilities and available IT. Handbooks or online help functions such as "F1 help" try to close this gap by providing explanatory information for the IT capabilities at hand. However, there is strong empirical evidence that traditional support structures are not as effective as intended (Sykes 2015); on the contrary, they distract users from their work (Barrett et al. 2004), which results in decreased efficiency and effectiveness as well as lower job satisfaction. Initial attempts to support users with more comprehensive integrated assistance functions failed miserably. A well-known example of such a dismal failure is "Clippy, the paperclip", a cartoon character developed by Microsoft that automatically popped up to assist users of Microsoft Office. However, instead of supporting the user with clear and precise guidance, studies show that Clippy "was considered to be annoying, impolite, and disruptive of a user's workflow" (Veletsianos 2007, p. 374). In the end, Clippy, the "non-intelligent artificial intelligence assistant", was so despised that even Microsoft made fun of it. However, more recent attempts to support users, primarily in the private-life context (on mobile devices), are becoming more sophisticated and have been met with a more favorable response (e.g., Apple's Siri and Google's Google Now). Moreover, Microsoft has integrated its personal assistant, Cortana, into the latest version of the operating system Windows 10, which is available for private and business environments. One domain that is far more mature with regard to "user" support is the automotive sector. For more than 30 years there has been research into assistance systems that proactively support drivers (Bengler et al. 2014). Early driver assistance systems (DAS) only measured the parameters inside the car, for example with regard to vehicle stabilization (electronic stability control). Later on, sensors also captured the car's external environment. The use of the collected data, navigation systems, adaptive cruise control, and parking assistance can assist drivers in avoiding hazardous situations and increasing driver comfort. Advanced DAS, considered to be the third phase of DAS evolution, are about to become commercialized.

Journal ArticleDOI
TL;DR: The recommender-system community needs to survey other research fields and learn from them, find a common understanding of reproducibility, identify and understand the determinants that affect reproducibility, conduct more comprehensive experiments, and establish best-practice guidelines for recommender-systems research.
Abstract: Numerous recommendation approaches are in use today. However, comparing their effectiveness is a challenging task because evaluation results are rarely reproducible. In this article, we examine the challenge of reproducibility in recommender-system research. We conduct experiments using Plista's news recommender system and Docear's research-paper recommender system. The experiments show that there are large discrepancies in the effectiveness of identical recommendation approaches in only slightly different scenarios, as well as large discrepancies for slightly different approaches in identical scenarios. For example, in one news-recommendation scenario, the performance of a content-based filtering approach was twice as high as that of the second-best approach, while in another scenario the same content-based filtering approach was the worst-performing approach. We found several determinants that may contribute to the large discrepancies observed in recommendation effectiveness. Determinants we examined include user characteristics (gender and age), datasets, weighting schemes, the time at which recommendations were shown, and user-model size. Some of the determinants have interdependencies. For instance, the optimal size of an algorithm's user model depended on users' age. Since minor variations in approaches and scenarios can lead to significant changes in a recommendation approach's performance, ensuring reproducibility of experimental results is difficult. We discuss these findings and conclude that to ensure reproducibility, the recommender-system community needs to (1) survey other research fields and learn from them, (2) find a common understanding of reproducibility, (3) identify and understand the determinants that affect reproducibility, (4) conduct more comprehensive experiments, (5) modernize publication practices, (6) foster the development and use of recommendation frameworks, and (7) establish best-practice guidelines for recommender-systems research.

Proceedings ArticleDOI
07 Sep 2016
TL;DR: A framework is presented which exploits the information available in the Linked Open Data cloud to generate a natural language explanation of the suggestions produced by a recommendation algorithm; preliminary results provided encouraging findings.
Abstract: In this paper we present ExpLOD, a framework which exploits the information available in the Linked Open Data (LOD) cloud to generate a natural language explanation of the suggestions produced by a recommendation algorithm. The methodology is based on building a graph in which the items liked by a user are connected to the items recommended through the properties available in the LOD cloud. Next, given this graph, we implemented some techniques to rank those properties and we used the most relevant ones to feed a module for generating explanations in natural language. In the experimental evaluation we performed a user study with 308 subjects aiming to investigate to what extent our explanation framework can lead to more transparent, trustful and engaging recommendations. The preliminary results provided us with encouraging findings, since our algorithm performed better than both a non-personalized explanation baseline and a popularity-based one.
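A toy version of the property-ranking step: count how often a LOD-style (property, value) pair connects liked items to a recommended item, then fill a sentence template with the top pair. The real ExpLOD ranking over the LOD graph is more elaborate; the data below is invented.

```python
from collections import Counter

# Toy version of the property-ranking step: count how often a LOD-style
# (property, value) pair connects liked items to the recommended item and
# use the top pair in a sentence template. All data below is invented.
liked = {
    "The Matrix":   {("genre", "Science Fiction"), ("starring", "Keanu Reeves")},
    "Blade Runner": {("genre", "Science Fiction"), ("director", "Ridley Scott")},
}
recommended = {"Inception": {("genre", "Science Fiction"), ("director", "Christopher Nolan")}}

for item, props in recommended.items():
    overlap = Counter()
    for liked_props in liked.values():
        for pair in props & liked_props:
            overlap[pair] += 1
    (prop, value), count = overlap.most_common(1)[0]
    print(f"I recommend {item} because, like {count} item(s) you liked, "
          f"its {prop} is {value}.")
```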

Patent
11 Feb 2016
TL;DR: In this paper, a system and method for providing various user interfaces for machine learning systems is described, which include a series of user interfaces that guide a user through the machine learning process.
Abstract: A system and method for providing various user interfaces is disclosed. In one embodiment, the various user interfaces include a series of user interfaces that guide a user through the machine learning process. In one embodiment, the various user interfaces are associated with a unified, project-based data scientist workspace to visually prepare, build, deploy, visualize and manage models, their results and datasets.

Journal ArticleDOI
TL;DR: A method is proposed that facilitates better understanding of execution order and integration dependencies of user stories by making use of business process models and contributes to the discipline of conceptual modeling in agile development.
Abstract: Context: Agile software development projects often manage user requirements with models that are called user stories. Every good user story has to be independent, negotiable, valuable, estimable, small, and testable. A proper understanding of a user story also requires an understanding of its dependencies. The lack of explicit representation of such dependencies presumably leads to missing information regarding the context of a user story. Objective: We propose a method that facilitates better understanding of execution order and integration dependencies of user stories by making use of business process models. The method associates user stories with the corresponding business process model activity element. Method: We adopted a situational method engineering approach to define our proposed method. In order to provide an understanding of the proposed method's constructs, we used ontological concepts. Our method associates a user story to an activity element. In this way, the business process model can be used to infer information about the execution order and integration dependencies of the user story. We defined three levels of association granularity: a user story can be more abstract, approximately equal to, or more detailed than its associated business process model activity element. In our experiment we evaluate each of these three levels. Results: Our experiment uses a between-subject design. We applied comprehension, problem-solving and recall tasks to evaluate the hypotheses. The statistical results provide support for all of the hypotheses. Accordingly, there appears to be significantly greater understanding of the execution order and integration dependencies of user stories when associated business process models are available. Conclusions: We addressed a problem that arises from managing user stories in software development projects, namely the missing context of a user story. Our method contributes to the discipline of conceptual modeling in agile development. Our experiment provides empirical insight into requirement dependencies.
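A minimal data-structure sketch of the proposed association: each user story points to a process-model activity at one of the three granularity levels, and execution-order dependencies are read off the sequence flow. Class and activity names are illustrative.

```python
from dataclasses import dataclass
from enum import Enum

# Minimal data-structure sketch of associating user stories with business
# process model activities; the granularity levels follow the three levels
# named in the abstract, everything else (names, the sample process) is illustrative.
class Granularity(Enum):
    MORE_ABSTRACT = 1
    APPROX_EQUAL = 2
    MORE_DETAILED = 3

@dataclass
class UserStory:
    text: str
    activity: str            # associated process-model activity element
    granularity: Granularity

# Sequence flow of a toy process model: activity -> next activity.
sequence_flow = {"Receive order": "Check stock", "Check stock": "Ship order"}

stories = [
    UserStory("As a clerk I want to record incoming orders", "Receive order", Granularity.APPROX_EQUAL),
    UserStory("As a clerk I want to reserve items in stock", "Check stock", Granularity.MORE_DETAILED),
]

def must_precede(a: UserStory, b: UserStory) -> bool:
    """Infer an execution-order dependency from the process model's sequence flow."""
    return sequence_flow.get(a.activity) == b.activity

print(must_precede(stories[0], stories[1]))   # True
```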

Proceedings ArticleDOI
14 Feb 2016
TL;DR: The paper concludes that shape-changing interfaces tend to assign the control to either the user or the underlying system, while few (e.g. [16,28]) consider sharing the control between the user and the system.
Abstract: Despite an increasing number of examples of shape-changing interfaces, the relation between users' actions and product movements has not gained a great deal of attention, nor been very well articulated. This paper presents a framework articulating the level of control offered to the user over the shape change. The framework considers whether the shape change is: 1) directly controlled by the user's explicit interactions; 2) negotiated with the user; 3) indirectly controlled by the user's actions; 4) fully controlled by the system. The four types are described through design examples using ReFlex, a shape-changing interface in the form of a smartphone. The paper concludes that shape-changing interfaces tend to assign the control to either the user or the underlying system, while few (e.g. [16,28]) consider sharing the control between the user and the system.

Journal ArticleDOI
TL;DR: An ISB look-up table is proposed that allows users to search the table for a network receiver of their own type and select the corresponding ISBs, thus effectively realizing their own ISB-corrected user model and maximizing the number of integer-estimable user ambiguities.
Abstract: PPP-RTK has the potential of benefiting enormously from the integration of multiple GNSS/RNSS systems. However, since unaccounted inter-system biases (ISBs) have a direct impact on the integer ambiguity resolution performance, the PPP-RTK network and user models need to be flexible enough to accommodate the occurrence of system-specific receiver biases. In this contribution we present such undifferenced, multi-system PPP-RTK full-rank models for both network and users. By an application of S-system theory, the multi-system estimable parameters are presented, thereby identifying how each of the three PPP-RTK components is affected by the presence of the system-specific biases. As a result, different scenarios are described of how these biases can be taken into account. To have users benefit the most, we propose the construction of an ISB look-up table. It allows users to search the table for a network receiver of their own type and select the corresponding ISBs, thus effectively realizing their own ISB-corrected user model. By applying such corrections, the user model is strengthened and the number of integer-estimable user ambiguities is maximized.
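A schematic sketch of the look-up-table idea only: select inter-system biases by the user's receiver type and remove them from the secondary system's observations before ambiguity resolution. The bias values and observation handling are invented and this is not a working PPP-RTK user model.

```python
# Schematic sketch of the ISB look-up-table idea only: pick the inter-system
# biases for the user's receiver type and remove them from the secondary
# system's observations before ambiguity resolution. Bias values (metres for
# code, cycles for phase) are invented; this is not a working PPP-RTK model.
ISB_TABLE = {
    # receiver type: (code ISB [m], phase ISB [cycles]) w.r.t. the pivot system
    "TRIMBLE NETR9": (1.23, 0.41),
    "SEPT POLARX5":  (0.87, 0.12),
}

def correct_observations(receiver_type, code_obs_m, phase_obs_cyc):
    code_isb, phase_isb = ISB_TABLE[receiver_type]
    return code_obs_m - code_isb, phase_obs_cyc - phase_isb

print(correct_observations("TRIMBLE NETR9", code_obs_m=22_134_567.89, phase_obs_cyc=116_345_678.25))
```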

Book
Deepak Agarwal, Bee-Chung Chen
01 Feb 2016
TL;DR: This comprehensive treatment of the statistical issues that arise in recommender systems includes detailed, in-depth discussions of current state-of-the-art methods such as adaptive sequential designs (multi-armed bandit methods), bilinear random-effects models (matrix factorization), and scalable model fitting using modern computing paradigms like MapReduce.
Abstract: Designing algorithms to recommend items such as news articles and movies to users is a challenging task in numerous web applications. The crux of the problem is to rank items based on users' responses to different items to optimize for multiple objectives. Major technical challenges are high dimensional prediction with sparse data and constructing high dimensional sequential designs to collect data for user modeling and system design. This comprehensive treatment of the statistical issues that arise in recommender systems includes detailed, in-depth discussions of current state-of-the-art methods such as adaptive sequential designs (multi-armed bandit methods), bilinear random-effects models (matrix factorization) and scalable model fitting using modern computing paradigms like MapReduce. The authors draw upon their vast experience working with such large-scale systems at Yahoo! and LinkedIn, and bridge the gap between theory and practice by illustrating complex concepts with examples from applications they are directly involved with.
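One of the adaptive sequential designs the book covers can be shown in a few lines: a tiny epsilon-greedy bandit for choosing which article to display. The click probabilities are simulated; in practice rewards come from logged user responses.

```python
import random

# Tiny epsilon-greedy bandit, a simple instance of the adaptive sequential
# designs (multi-armed bandits) discussed above. True click probabilities
# are simulated; in practice rewards come from logged user responses.
random.seed(0)
true_ctr = {"article_a": 0.05, "article_b": 0.12, "article_c": 0.08}
counts = {a: 0 for a in true_ctr}
value = {a: 0.0 for a in true_ctr}
epsilon = 0.1

for _ in range(10_000):
    if random.random() < epsilon:
        arm = random.choice(list(true_ctr))            # explore
    else:
        arm = max(value, key=value.get)                # exploit current estimate
    reward = 1 if random.random() < true_ctr[arm] else 0
    counts[arm] += 1
    value[arm] += (reward - value[arm]) / counts[arm]  # incremental mean update

print({a: round(v, 3) for a, v in value.items()})
```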

Journal ArticleDOI
22 Mar 2016
TL;DR: This paper discusses flexible but powerful methods for usability and user experience engineering in the context of Industrie 4.0, which stands for functional integration, dynamic reorganization, and resource efficiency.
Abstract: Industrie 4.0 (English translation: Industry 4.0) stands for functional integration, dynamic reorganization, and resource efficiency. Technical advances in control and communication create infrastructures that handle more and more tasks automatically. As a result, the complexity of today's and future technical systems is hidden from the user. These advances, however, come with distinct challenges for user interface design. A central question is: how to empower users to understand, monitor, and control the automated processes of Industrie 4.0? Addressing these design challenges requires a full integration of user-centered design (UCD) processes into the development process. This paper discusses flexible but powerful methods for usability and user experience engineering in the context of Industrie 4.0.

01 Jan 2016
TL;DR: A conceptual framework and evaluation model of LMS is built through the lens of User Experience (UX) research and practice, an epistemology that is quite important but currently neglected in the e-learning domain.
Abstract: Learning Management Systems (LMS) have been the main vehicle for delivering and managing e-learning courses in educational, business, governmental and vocational learning settings. Since the mid-nineties there has been a plethora of LMS on the market with a vast array of features. The increasing complexity of these platforms makes LMS evaluation a hard and demanding process that requires a lot of knowledge, time, and effort. Nearly 50% of respondents in recent surveys have indicated they seek to change their existing LMS primarily due to user experience issues. Yet the vast majority of the extant literature focuses only on LMS capabilities in relation to administration and management of teaching and learning processes. In this study the authors try to build a conceptual framework and evaluation model of LMS through the lens of User Experience (UX) research and practice, an epistemology that is quite important but currently neglected in the e-learning domain. They conducted an online survey with 446 learning professionals, and from the results, developed a new UX-oriented evaluation model with four dimensions: pragmatic quality, authentic learning, motivation and engagement, and autonomy and relatedness. Their discussion on findings includes some ideas for future research.


Posted Content
TL;DR: This paper presents an anomalous user behavior detection framework that applies an extended version of the Isolation Forest algorithm, which is fast and scalable and does not require example anomalies in the training data set.
Abstract: Anomalous user behavior detection is the core component of many information security systems, such as intrusion detection, insider threat detection and authentication systems. Anomalous behavior will raise an alarm to the system administrator and can be further combined with other information to determine whether it constitutes an unauthorised or malicious use of a resource. This paper presents an anomalous user behavior detection framework that applies an extended version of the Isolation Forest algorithm. Our method is fast and scalable and does not require example anomalies in the training data set. We apply our method to an enterprise dataset. The experimental results show that the system is able to isolate anomalous instances from the baseline user model using a single feature or combined features.
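The paper applies an extended version of Isolation Forest; the sketch below uses the standard scikit-learn implementation on synthetic per-user activity features simply to show the unsupervised setup, which needs no labelled anomalies.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# The paper applies an *extended* Isolation Forest; this sketch uses the
# standard scikit-learn implementation on synthetic per-user activity
# features simply to show the unsupervised setup (no labelled anomalies).
rng = np.random.default_rng(0)
baseline = rng.normal(loc=[20, 5, 2], scale=[5, 2, 1], size=(500, 3))   # logins, hosts, failed logins
anomalies = rng.normal(loc=[60, 25, 15], scale=[5, 3, 3], size=(5, 3))  # unusual sessions
X = np.vstack([baseline, anomalies])

clf = IsolationForest(n_estimators=100, contamination=0.01, random_state=0).fit(baseline)
scores = clf.decision_function(X)      # lower = more anomalous
flags = clf.predict(X)                 # -1 = anomaly, 1 = normal
print("flagged as anomalous:", int((flags == -1).sum()))
```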

Journal ArticleDOI
TL;DR: This study analyzed user experiences (UX) of cross-platform services with a mixed methods (quantitative and qualitative) approach and proposed the idea of inter-usability for designing user-centered systems.
Abstract: Cross-platform services provide unified entertainment experiences across multiple devices between which users can toggle when watching content using televisions, tablets, personal computers, and smartphones. The software automatically adapts the programming to fit the diverse formats. This study analyzed user experiences (UX) of cross-platform services with a mixed methods (quantitative and qualitative) approach. It used a multi-state analytical approach, in which the user model was tested in a statistical model and accompanying experiment. A variety of methods were used to best understand the complexities of UX. Heuristic results revealed the ways that UX of cross-platform services are formed, moderated, and improved, and the ways that users’ intentions are determined through the relationships among factors. The results revealed that the key elements of cross-platform UX include access, mobility, and coherence, which imply the importance of seamless UX of cross-platform services. Based on those k...

Patent
04 Feb 2016
TL;DR: In this article, the authors present a system for monitoring user authenticity during user activities in a user session on an application server, which is carried out in a distributed manner by a distributed server system.
Abstract: Systems and methods for monitoring user authenticity during user activities in a user session on an application server are provided. The method is carried out in a distributed manner by a distributed server system. The method comprises a user-modeling process and a user-verification process. The user-modeling process is performed on a user-model server in which a user model is adapted session-by-session to user activity data received from the application server. The user-verification process is performed on the application server on the basis of the user model adapted on the user-model server. The user-verification process comprises comparing the user model with features extracted from user activity in the user session on the application server and determining a total risk-score value based on the comparison. If the total risk-score value is greater than a given threshold, a corrective action is performed.
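A schematic of the scoring step: weight and combine feature-specific risk scores into a total risk-score value and trigger a corrective action above a threshold. The weights, feature names, and threshold are invented for illustration.

```python
# Schematic of the scoring step described above: weight and combine
# feature-specific risk scores into a total risk-score value and trigger a
# corrective action above a threshold. Weights, feature names, and the
# threshold are invented for illustration.
FEATURE_WEIGHTS = {"typing_rhythm": 0.4, "login_geolocation": 0.35, "navigation_pattern": 0.25}
THRESHOLD = 0.6

def total_risk(feature_risks):
    return sum(FEATURE_WEIGHTS[f] * r for f, r in feature_risks.items())

session = {"typing_rhythm": 0.3, "login_geolocation": 0.9, "navigation_pattern": 0.8}
score = total_risk(session)
if score > THRESHOLD:
    print(f"risk {score:.2f}: corrective action (e.g., step-up authentication)")
else:
    print(f"risk {score:.2f}: session continues")
```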

Proceedings ArticleDOI
12 Sep 2016
TL;DR: This paper proposes user modeling strategies which use Concept Frequency - Inverse Document Frequency (CF-IDF) as a weighting scheme and incorporate either or both of the dynamics and semantics of user interests and results show that these strategies outperform two baseline strategies significantly in the context of link recommendations on Twitter.
Abstract: User modeling for individual users on the Social Web plays an important role and is a fundamental step for personalization as well as recommendations. Recent studies have proposed different user modeling strategies considering various dimensions such as temporal dynamics and semantics of user interests. Although previous work proposed different user modeling strategies considering the temporal dynamics of user interests, there is a lack of comparative studies on those methods and therefore their relative performance is unknown. In terms of semantics of user interests, background knowledge from DBpedia has been explored to enrich user interest profiles so as to reveal more information about users. However, it is still unclear to what extent different types of information from DBpedia contribute to the enrichment of user interest profiles. In this paper, we propose user modeling strategies which use Concept Frequency - Inverse Document Frequency (CF-IDF) as a weighting scheme and incorporate either or both of the dynamics and semantics of user interests. To this end, we first provide a comparative study on different user modeling strategies considering the dynamics of user interests in previous literature to present their comparative performance. In addition, we investigate different types of information (i.e., categories, classes and connected entities via various properties) for entities from DBpedia and the combination of them for extending user interest profiles. Finally, we build our user modeling strategies incorporating either or both of the best-performing methods in each dimension. Results show that our strategies outperform two baseline strategies significantly in the context of link recommendations on Twitter.
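A minimal CF-IDF sketch: weight each concept (e.g., a DBpedia entity already extracted from a user's posts) by its frequency in the user's documents times its inverse document frequency across the collection. The sample concepts are invented and entity extraction is assumed to have happened upstream.

```python
import math
from collections import Counter

# Minimal CF-IDF sketch: weight each concept (e.g., a DBpedia entity already
# extracted from a user's tweets) by its frequency in the user's documents
# times the inverse document frequency across the whole collection.
# The sample concepts are invented.
user_docs = [
    ["Machine_learning", "Twitter", "Recommender_system"],
    ["Machine_learning", "DBpedia"],
]
collection = user_docs + [
    ["Twitter", "Football"],
    ["Football", "Olympics"],
    ["DBpedia", "Semantic_Web"],
]

def cf_idf(user_docs, collection):
    cf = Counter(c for doc in user_docs for c in doc)
    n_docs = len(collection)
    df = Counter(c for doc in collection for c in set(doc))
    return {c: cf[c] * math.log(n_docs / df[c]) for c in cf}

profile = cf_idf(user_docs, collection)
print(sorted(profile.items(), key=lambda kv: -kv[1]))
```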

Proceedings ArticleDOI
11 Apr 2016
TL;DR: By analyzing user behavioral logs from Bing Now news recommendation, it is found that user fatigue is a severe problem that greatly affects the user experience and experimental results indicate that significant gains can be achieved by introducing features that reflect users' interaction with previously seen recommendations.
Abstract: Many aspects and properties of Recommender Systems have been well studied in the past decade, however, the impact of User Fatigue has been mostly ignored in the literature. User fatigue represents the phenomenon that a user quickly loses the interest on the recommended item if the same item has been presented to this user multiple times before. The direct impact caused by the user fatigue is the dramatic decrease of the Click Through Rate (CTR, i.e., the ratio of clicks to impressions). In this paper, we present a comprehensive study on the research of the user fatigue in online recommender systems. By analyzing user behavioral logs from Bing Now news recommendation, we find that user fatigue is a severe problem that greatly affects the user experience. We also notice that different users engage differently with repeated recommendations. Depending on the previous users' interaction with repeated recommendations, we illustrate that under certain condition the previously seen items should be demoted, while some other times they should be promoted. We demonstrate how statistics about the analysis of the user fatigue can be incorporated into ranking algorithms for personalized recommendations. Our experimental results indicate that significant gains can be achieved by introducing features that reflect users' interaction with previously seen recommendations (up to 15% enhancement on all users and 34% improvement on heavy users).
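A toy ranking adjustment reflecting the fatigue observation: demote an item exponentially in the number of times it was shown without a click, and leave previously clicked items alone. The decay rate and the rule itself are illustrative, not the paper's learned ranking features.

```python
import math

# Toy ranking adjustment reflecting the fatigue observation: demote an item
# exponentially in the number of times it was already shown without a click;
# previously clicked items are not demoted. The decay rate and the rule
# itself are illustrative, not the paper's learned ranking features.
def fatigue_adjusted_score(base_score, prior_impressions, prior_clicks, decay=0.5):
    if prior_clicks > 0:
        return base_score                      # engaged before: do not demote
    return base_score * math.exp(-decay * prior_impressions)

print(fatigue_adjusted_score(0.8, prior_impressions=0, prior_clicks=0))  # fresh item
print(fatigue_adjusted_score(0.8, prior_impressions=3, prior_clicks=0))  # fatigued item
```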

01 Nov 2016
TL;DR: This thesis presents four assistants to help data explorers interrogate a database to discover its content: Claude, Blaeu, Ziggy and Raimond, which are an attempt to generalize semi-automatic exploration to text data.
Abstract: Data explorers interrogate a database to discover its content. Their aim is to get an overview of the data and discover interesting new facts. They have little to no knowledge of the data, and their requirements are often vague and abstract. How can such users write database queries? This thesis presents four assistants to help them through this task: Claude, Blaeu, Ziggy and Raimond. Each assistant focuses on a specific exploration task. Claude helps users analyze data warehouses, by highlighting the combinations of variables which influence a predefined measure of interest. Blaeu helps users build and refine queries, by allowing them to select and project clusters of tuples. Ziggy is a tuple characterization engine: its aim is to show what makes a selection of tuples unique, by highlighting the differences between those and the rest of the database. Finally, Raimond is an attempt to generalize semi-automatic exploration to text data, inspired by an industrial use case. For each system, we present a user model, that is, a formalized set of assumptions about the users’ goals. We then present practical methods to make recommendations. We either adapt existing algorithms from the machine learning literature or present our own. Next, we validate our approaches with experiments. We present use cases in which our systems led to discoveries, and we benchmark their speed, quality and robustness.

Book ChapterDOI
19 Sep 2016
TL;DR: This work contributes to new ways to mine and infer personality-based user models, and shows how these models can be implemented in a music recommender system to positively contribute to the user experience.
Abstract: Applications are getting increasingly interconnected. Although the interconnectedness provides new ways to gather information about the user, not all user information is ready to be directly implemented in order to provide a personalized experience to the user. Therefore, a general model is needed to which users' behavior, preferences, and needs can be connected. In this paper we present our work on a personality-based music recommender system in which we use users' personality traits as a general model. We identified relationships between users' personality and their behavior, preferences, and needs, and also investigated different ways to infer users' personality traits from user-generated data of social networking sites (i.e., Facebook, Twitter, and Instagram). Our work contributes to new ways to mine and infer personality-based user models, and shows how these models can be implemented in a music recommender system to positively contribute to the user experience.

Patent
04 Feb 2016
TL;DR: In this article, a user model is adapted session-by-session to user activities, in which the user model includes a plurality of adaptive feature-specific user-behavior models.
Abstract: Systems and methods for monitoring user authenticity according to user activities on an application server. A user-modeling process and a user-verification process are performed. In the user-modeling process, a user model is adapted session-by-session to user activities in which the user model includes a plurality of adaptive feature-specific user-behavior models. The user-verification process includes determining a plurality of feature-specific risk-score values, comparing the at least one of the adaptive feature-specific user-behavior models with a respective feature extracted from user activity in the user session on the application server, and determining a total risk-score value indicative of user authenticity by weighting and combining the plurality of feature-specific risk-score values. If the total risk-score value is greater than a given threshold, a corrective action is performed.