Author

Ruijing Zhao

Bio: Ruijing Zhao is an academic researcher. The author has contributed to research in the topics of Transparency (behavior) and Computer science. The author has an h-index of 2 and has co-authored 3 publications receiving 13 citations.

Papers
Proceedings Article

[...]

01 Jan 2019
TL;DR: It is argued that providing information regarding how AGSs work can enhance users’ trust only when users have enough time and ability to process and understand the information, and that providing excessively detailed information may even reduce users’ perceived understanding of AGSs, and thus hurt users’ trust.
Abstract: Users’ adoption of online-shopping advice-giving systems (AGSs) is crucial for e-commerce websites to attract users and increase profits. Users’ trust in AGSs influences them to adopt AGSs. While previous studies have demonstrated that AGS transparency increases users’ trust by enhancing users’ understanding of AGSs’ reasoning, hardly any attention has been paid to the possible inconsistency between the level of AGS transparency and the extent to which users feel they understand the logic of AGSs’ inner workings. We argue that the relationship between them may not always be positive. Specifically, we posit that providing information regarding how AGSs work can enhance users’ trust only when users have enough time and ability to process and understand the information. Moreover, providing excessively detailed information may even reduce users’ perceived understanding of AGSs, and thus hurt users’ trust. In this research, we will use a lab experiment to explore how providing information with different levels of detail will influence users’ perceived understanding of and trust in AGSs. Our study would contribute to the literature by exploring the potential inverted U-shaped relationship among AGS transparency, users’ perceived understanding of AGSs, and users’ trust in AGSs, and contribute to practice by offering suggestions for designing trustworthy AGSs.
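The paper’s core prediction is an inverted U-shaped relationship between the level of transparency detail and perceived understanding or trust. As a purely illustrative note on how such a shape is commonly tested, a regression with a quadratic term can be fit and the sign of the squared coefficient examined. The sketch below uses simulated data and invented variable names; it is not the authors’ experimental design or analysis.

# Hypothetical sketch: testing an inverted U-shaped relationship with a quadratic
# regression term. Data are simulated and variable names are invented; this is
# not the paper's design or analysis.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
transparency = rng.uniform(0, 10, 200)   # level of detail shown to users
# Simulated inverted-U response plus noise, for illustration only.
understanding = 2 + 1.5 * transparency - 0.15 * transparency**2 + rng.normal(0, 1, 200)

df = pd.DataFrame({"transparency": transparency, "understanding": understanding})
# An inverted U is supported when the coefficient on the squared term is significantly negative.
model = smf.ols("understanding ~ transparency + I(transparency**2)", data=df).fit()
print(model.params)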

7 citations

[...]

01 Jan 2019
TL;DR: It is argued that instead of setting a uniform rule of providing AGS transparency, optimal transparency provision strategies for different types of AGSs and users based on their unique features should be developed.
Abstract: Advice-giving systems (AGSs) provide recommendations based on users’ unique preferences or needs. Maximizing users’ adoptions of AGSs is an effective way for ecommerce websites to attract users and increase profits. AGS transparency, defined as the extent to which information of a system’s reasoning is provided and made available to users, has been proved to be effective in increasing users’ adoptions of AGSs. While previous studies have identified providing explanations as an effective way of enhancing AGS transparency, most of them failed to further explore the optimal transparency provision strategy of AGSs. We argue that instead of setting a uniform rule of providing AGS transparency, we should develop optimal transparency provision strategies for different types of AGSs and users based on their unique features. In this paper, we first developed a framework of AGS transparency provision and identified six components of AGS transparency provision strategies. We then developed a research model of AGS transparency provision strategy with a set of propositions. We hope that based on this model, researchers could evaluate how to effect transparency for AGSs and users with different characteristics. Our work would contribute to the existing knowledge by exploring how AGS and user characteristics will influence the optimal strategy of providing AGS transparency. Our work would also contribute to the practice by offering design suggestions for AGS explanation interfaces.

4 citations

DOI

[...]

TL;DR: In this article, the authors take the currently most popular blind box as an example to discuss its marketing strategies, including uncertainty, hunger marketing, joint names and the fan effect of IP, and demand driven by social communication, and analyze how to better convey the advantages and value of the product in its communication.
Abstract: With the increasing variety of products, the gap between them is getting smaller and smaller, and people pay more and more attention to marketing in order to make their products stand out. Based on this, this paper takes the currently most popular blind box as an example to discuss its marketing strategies, including uncertainty, hunger marketing, joint names and the fan effect of IP, and demand driven by social communication, and analyzes how to better convey the advantages and value of the product in its communication. At the same time, as a new product, the blind box also has some shortcomings, such as over-marketing and inadequate supervision.

3 citations

[...]

01 Jan 2018
TL;DR: The trust antecedents of AGSs are summarized; for example, some researchers have proposed user-based collaborative filtering models that take the trust relationships among users into consideration (Zhou et al., 2012) to provide users with advice that is liked by other users whom they trust more.
Abstract: Advice-giving systems (AGSs), sometimes also called recommendation agents or recommender systems, are decision-aid software that provide users with personalized recommendations based on their unique preferences or needs (Xiao and Benbasat, 2007; 2014). Due to their effectiveness in reducing users’ information overload (Komiak and Benbasat, 2007) and facilitating users’ decision-making processes (Wang and Benbasat, 2008), AGSs have been considered key influential factors in the success of online shopping websites, facilitating product customization and increasing revenue in e-commerce (Komiak and Benbasat, 2006). First-generation AGSs generate advice by asking users to explicitly indicate their product attribute preferences or needs. Such systems are usually labelled content-filtering recommendation agents (Wang and Benbasat, 2005) and provide users with recommendations that best meet their preferences. Users who rely on such advice-giving systems to make decisions need to have a clear idea of their needs, spend effort identifying them, and then express or convey them to the AGS. In recent years, another kind of AGS, which we will label second-generation AGSs, has become increasingly popular. Examples of second-generation AGSs include the recommendations that appear on the homepages of websites such as Amazon, eBay, and Netflix, and content/ad pushes on websites like Facebook and Twitter. Unlike first-generation AGSs, which directly ask users to provide their needs as inputs, second-generation AGSs implicitly collect and identify users’ information, such as users’ demographic information, past browsing behaviors, purchase behaviors, relationships with other users (Briggs and Smyth, 2006; Zhou et al., 2012), etc., and use this information as the input for their advice-generating process. In addition, compared to first-generation AGSs, second-generation AGSs employ more complex techniques to analyze data from a diverse set of input sources and generate advice for their users accordingly. Item-based collaborative filtering, an algorithm that generates advice similar to what users have adopted/bought before, and user-based collaborative filtering, an algorithm that offers users advice liked by other users who are similar to them, are the basic techniques that support second-generation AGSs (Konstan and Riedl, 2012; Zhou et al., 2012). Based on these techniques, more advanced AGS models have already been suggested (Briggs and Smyth, 2006; O'Donovan and Smyth, 2005; Walter et al., 2008). For example, some researchers have proposed user-based collaborative filtering models that take the trust relationships among users into consideration (Zhou et al., 2012) to provide users with advice that is liked by other users whom they trust more. Given the effective decision support AGSs bring to website users, it is important for website managers to know how to maximize user adoption of their AGSs in order to attract more users and increase website profits. Trust, as a crucial factor in IT adoption, has been shown to influence users’ adoption of AGSs (Al-Natour et al., 2008; Komiak and Benbasat, 2006; Wang and Benbasat, 2005; 2008) and product purchase intentions (McKnight et al., 2002; Wang and Benbasat, 2007).
Users’ trust in AGSs can be influenced by a number of antecedents (for a summary, see Söllner, Benbasat, Gefen et al., 2016; Söllner and Leimeister, 2013). Following the framework developed by Wang and Benbasat (2008), we summarized the trust antecedents of AGSs, for both first- and second-generation ones, that have already been studied in the existing literature into six categories, namely dispositional reasons, institutional reasons, heuristic reasons, calculative reasons, interactive reasons, and knowledge-based reasons. Dispositional reasons include users’ general predispositions to trust other parties. Institutional reasons include societal structures (e.g., legislation, rules, and third-party assurances) that people believe will make an environment trustworthy. Heuristic reasons include users’ impressions of the website/e-vendor and users’ past experiences with the system. Calculative reasons include users’ perceived intelligence/efficiency/personalization of systems, users’ privacy concerns, and users’ perceived possibility/solutions of systems’ mistakes/opportunistic behaviors. Interactive reasons include users’ perceived control over systems, users’ social presence, users’ perceived ease of use, users’ perceived similarity with systems, users’ perceived adaptiveness of systems, users’ decision confidence, etc. Knowledge-based reasons include explanations of how advice is generated/why AGSs ask certain questions. Over the past decade, users’ trust in first-generation AGSs has been thoroughly studied (Al-Natour et al., 2006; 2008; Komiak and Benbasat, 2006; Wang and Benbasat, 2005; 2007; etc.). However, our understanding of users’ trust in second-generation AGSs is in its infancy. Most of the existing research about second-generation AGSs is conducted from a technical perspective, focusing on how to design better algorithms in order to generate higher-quality advice. The few studies that touch on users’ perceptions of such systems only roughly mention the potential trust risks arising from the unique features of such systems and remain at the theoretical level; hardly any empirical studies can be found in the literature. We argue for the necessity of studying users’ trust in second-generation AGSs because users may feel that second-generation AGSs are less controllable and less transparent than first-generation ones due to the implicit elicitation of user needs and the high complexity of advice-generating algorithms in second-generation AGSs. Accordingly, the influential factors of users’ trust in second-generation AGSs may also differ from those in first-generation ones. Based on the literature, we picked out the trust antecedents that have been studied in the context of second-generation AGSs. We found that researchers who studied the trust antecedents of second-generation AGSs mainly focused on calculative reasons, interactive reasons, and knowledge-based reasons. The trust antecedents they studied are either unique to second-generation AGSs or more important in the context of second-generation AGSs than in that of first-generation AGSs. Accordingly, we proposed design suggestions for trustworthy second-generation AGSs.
In order to increase users’ trust by affecting calculative reasons, we suggest that second-generation AGSs be designed with intelligence high enough to keep bringing users “pleasant surprises”: recommendations that they have never thought of but will fall in love with at first glimpse. We also suggest that second-generation AGS designers provide clear information about: a) what kind of user information they collect; b) when, how, and why they collect such information from users; c) how they will use the collected information; and d) the structural assurances that ensure the privacy and security of users’ input data. In order to increase users’ trust by affecting interactive reasons, we suggest designers create sufficient opportunities for users to provide feedback on previously generated advice to AGSs (e.g., whether users like the advice, why users like/dislike the advice, etc.). In addition, we suggest designers create interfaces for human intervention when developing second-generation AGSs. In order to increase users’ trust by affecting knowledge-based reasons, we suggest designers indicate the inputs used in the advice-generating process, use plain words to explain the complex advice-generating techniques, and avoid giving non-specific explanations such as “Here are recommendations for you”. Our research makes contributions in both the academic and the practical field. Unlike existing research focusing on the design of AGS algorithms, we studied trust, a crucial factor in the successful adoption of such AGS technologies. To the best of our knowledge, our research is one of the first to systematically study trust issues in second-generation AGSs in the IS field. As for practical contributions, this paper helps system designers better develop second-generation AGSs by proposing detailed design suggestions.
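The abstract above describes item-based and user-based collaborative filtering and mentions trust-aware variants that weight advice by inter-user trust (Zhou et al., 2012). The following is a minimal illustrative sketch of user-based collaborative filtering with a trust-weighted neighborhood; the toy ratings, the trust matrix, and the way similarity is blended with trust are assumptions for illustration, not the cited models.

# Minimal illustrative sketch of user-based collaborative filtering with a
# trust-weighted neighborhood, in the spirit of the models discussed above.
# The toy data and the similarity/trust blending are assumptions for illustration.
import numpy as np

# Rows = users, columns = items; 0 means "not yet rated".
ratings = np.array([
    [5, 3, 0, 1],
    [4, 0, 0, 1],
    [1, 1, 0, 5],
    [0, 1, 5, 4],
], dtype=float)

# Hypothetical pairwise trust scores between users (0 = no trust, 1 = full trust).
trust = np.array([
    [1.0, 0.8, 0.1, 0.0],
    [0.8, 1.0, 0.2, 0.1],
    [0.1, 0.2, 1.0, 0.9],
    [0.0, 0.1, 0.9, 1.0],
])

def cosine_sim(a, b):
    mask = (a > 0) & (b > 0)   # compare only co-rated items
    if not mask.any():
        return 0.0
    return float(np.dot(a[mask], b[mask]) /
                 (np.linalg.norm(a[mask]) * np.linalg.norm(b[mask]) + 1e-9))

def predict(user, item, alpha=0.5):
    """Predict a rating by weighting neighbors with similarity blended with trust."""
    scores, weights = 0.0, 0.0
    for other in range(ratings.shape[0]):
        if other == user or ratings[other, item] == 0:
            continue
        w = alpha * cosine_sim(ratings[user], ratings[other]) + (1 - alpha) * trust[user, other]
        scores += w * ratings[other, item]
        weights += abs(w)
    return scores / weights if weights else 0.0

print(predict(user=0, item=2))   # estimate user 0's rating for the unrated item 2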

2 citations

Journal ArticleDOI

[...]

TL;DR: This work proposes a novel frequency-aware MLP architecture, in which the domain-specific features are filtered out in the transformed frequency domain, augmenting the invariant descriptor for label prediction, and is the first to propose an MLP-like backbone for domain generalization.
Abstract: MLP-like models built entirely upon multi-layer perceptrons have recently been revisited, exhibiting performance comparable with transformers. They are among the most promising architectures due to the excellent trade-off between network capability and efficiency in large-scale recognition tasks. However, their generalization performance on heterogeneous tasks is inferior to that of other architectures (e.g., CNNs and transformers) due to the extensive retention of domain information. To address this problem, we propose a novel frequency-aware MLP architecture, in which the domain-specific features are filtered out in the transformed frequency domain, augmenting the invariant descriptor for label prediction. Specifically, we design an adaptive Fourier filter layer, in which a learnable frequency filter is utilized to adjust the amplitude distribution by optimizing both the real and imaginary parts. A low-rank enhancement module is further proposed to rectify the filtered features by adding the low-frequency components from SVD. Finally, a momentum update strategy is utilized to stabilize the optimization against fluctuations of model parameters and inputs via output distillation with weighted historical states. To the best of our knowledge, we are the first to propose an MLP-like backbone for domain generalization. Extensive experiments on three benchmarks demonstrate significant generalization performance, outperforming the state-of-the-art methods by margins of 3%, 4%, and 9%, respectively.
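The abstract describes an adaptive Fourier filter layer in which a learnable frequency filter adjusts the spectrum by optimizing real and imaginary parts. The sketch below illustrates the general idea (FFT, a learnable complex-valued mask, inverse FFT) in PyTorch; the shapes, per-channel parameterization, and normalization are assumptions, and this is not the authors' implementation.

# Minimal sketch of a learnable frequency-domain filter applied to feature maps,
# illustrating the general idea of an "adaptive Fourier filter layer".
# This is NOT the paper's implementation; shapes and details are assumptions.
import torch
import torch.nn as nn

class FourierFilterLayer(nn.Module):
    def __init__(self, channels: int, height: int, width: int):
        super().__init__()
        # One learnable complex-valued mask per channel over the rFFT grid,
        # stored as separate real and imaginary parts.
        w_freq = width // 2 + 1
        self.real = nn.Parameter(torch.ones(channels, height, w_freq))
        self.imag = nn.Parameter(torch.zeros(channels, height, w_freq))

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (B, C, H, W)
        freq = torch.fft.rfft2(x, norm="ortho")       # complex spectrum
        mask = torch.complex(self.real, self.imag)    # learnable filter
        filtered = freq * mask                        # adjust amplitude/phase
        return torch.fft.irfft2(filtered, s=x.shape[-2:], norm="ortho")

x = torch.randn(2, 8, 32, 32)
layer = FourierFilterLayer(channels=8, height=32, width=32)
print(layer(x).shape)   # torch.Size([2, 8, 32, 32])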

2 citations


Cited by
Proceedings ArticleDOI

[...]

17 Mar 2020
TL;DR: It is found that including transparency features in an AutoML tool increased user trust in and understandability of the tool, and that out of all proposed features, model performance metrics and visualizations are the most important information to data scientists when establishing their trust in an AutoML tool.
Abstract: We explore trust in a relatively new area of data science: Automated Machine Learning (AutoML). In AutoML, AI methods are used to generate and optimize machine learning models by automatically engineering features, selecting models, and optimizing hyperparameters. In this paper, we seek to understand what kinds of information influence data scientists' trust in the models produced by AutoML. We operationalize trust as a willingness to deploy a model produced using automated methods. We report results from three studies - qualitative interviews, a controlled experiment, and a card-sorting task - to understand the information needs of data scientists for establishing trust in AutoML systems. We find that including transparency features in an AutoML tool increased user trust in and understandability of the tool, and that out of all proposed features, model performance metrics and visualizations are the most important information to data scientists when establishing their trust in an AutoML tool.
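To make concrete the kind of automation the abstract refers to (automatic model selection and hyperparameter optimization), here is a generic scikit-learn sketch; it is not the AutoML tool studied in the paper, and the candidate models and grids are arbitrary.

# Generic illustration of automated model selection and hyperparameter search,
# the kind of process the abstract describes; not the AutoML tool from the study.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV, train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

candidates = [
    (LogisticRegression(max_iter=5000), {"C": [0.1, 1.0, 10.0]}),
    (RandomForestClassifier(random_state=0), {"n_estimators": [50, 200], "max_depth": [3, None]}),
]

best_score, best_model = -1.0, None
for estimator, grid in candidates:
    search = GridSearchCV(estimator, grid, cv=5)   # automated hyperparameter search
    search.fit(X_train, y_train)
    if search.best_score_ > best_score:            # automated model selection
        best_score, best_model = search.best_score_, search.best_estimator_

# Performance metrics like these were what raised data scientists' trust in the study.
print(type(best_model).__name__, round(best_model.score(X_test, y_test), 3))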

36 citations

Journal ArticleDOI

[...]

TL;DR: The results of a behavioural experiment, in which subjects were able to draw on the support of an ML-based decision support tool for text classification, are reported and show that transparency can actually have a negative impact on trust.
Abstract: Assistive technology featuring artificial intelligence (AI) to support human decision-making has become ubiquitous. Assistive AI achieves accuracy comparable to or even surpassing that of human exp...

15 citations

Book ChapterDOI

[...]

19 Jul 2020
TL;DR: This paper looks at computer science (CS) community research to identify the main research themes about AI explainability, or “explainable AI”, and focuses on Human-Computer Interaction (HCI) research, trying to answer three questions about the selected publications.
Abstract: Explainability is a hot topic nowadays for artificial intelligence (AI) systems. The role of machine learning (ML) models in influencing human decisions has shed light on the black box of computing systems. AI-based systems are more than just ML models. ML models are one element of AI explainability design and need to be combined with other elements so that explanations can have significant meaning for people using AI systems. There are different goals and motivations for AI explainability. Regardless of the goal, there is more to AI explanation than just ML models or algorithms. The explainability of an AI system’s behavior needs to consider different dimensions: 1) who is the receiver of the explanation, 2) why that explanation is needed, and 3) in which context and with which other situated information the explanation is presented. Considering those three dimensions, the explanation can be effective by fitting the user’s needs and expectations at the right moment and in the right format. The design of an AI explanation user experience is central to the pressing need of people and society to understand how an AI system may impact human decisions. In this paper, we present a literature review of AI explainability research and practices. We first looked at computer science (CS) community research to identify the main research themes about AI explainability, or “explainable AI”. Then, we focus on Human-Computer Interaction (HCI) research, trying to answer three questions about the selected publications: whom the AI explainability is for (who), what the purpose of the AI explanation is (why), and in which context the AI explanation is presented (what + when).
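As a purely hypothetical illustration of the three dimensions named in the abstract (who receives the explanation, why it is needed, and in which context it is presented), an explanation request could be represented as a small data structure like the one below; all field names and the selection rule are invented for illustration.

# Hypothetical sketch of structuring an explanation request along the three
# dimensions named above (who / why / context). Field names are assumptions.
from dataclasses import dataclass

@dataclass
class ExplanationRequest:
    audience: str   # who: e.g. "end user", "data scientist", "regulator"
    goal: str       # why: e.g. "build trust", "debug model", "contest decision"
    context: str    # what + when: e.g. "product page, after recommendation shown"

def choose_presentation(req: ExplanationRequest) -> str:
    """Pick a presentation format to fit the receiver and situation (illustrative rule)."""
    if req.audience == "data scientist":
        return "feature attributions with performance metrics"
    if req.goal == "contest decision":
        return "counterfactual explanation with appeal instructions"
    return "short natural-language rationale"

print(choose_presentation(ExplanationRequest("end user", "build trust", "checkout page")))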

13 citations

Journal ArticleDOI

[...]

25 Jul 2022
TL;DR: It is argued that domain-invariant features should originate from both internal and mutual sides: the internally-invariant features capture the intrinsic semantics of the data, while the mutually-invariant features learn cross-domain transferable knowledge.
Abstract: Deep learning has achieved great success in the past few years. However, the performance of deep learning is likely to be impeded in the face of non-IID situations. Domain generalization (DG) enables a model to generalize to an unseen test distribution, i.e., to learn domain-invariant representations. In this paper, we argue that domain-invariant features should originate from both internal and mutual sides. Internal invariance means that the features can be learned within a single domain and capture the intrinsic semantics of the data, i.e., the property within a domain, which is agnostic to other domains. Mutual invariance means that the features can be learned across multiple domains (cross-domain) and contain common information, i.e., the transferable features w.r.t. other domains. We then propose DIFEX for Domain-Invariant Feature EXploration. DIFEX employs a knowledge distillation framework to capture the high-level Fourier phase as the internally-invariant features and learns cross-domain correlation alignment as the mutually-invariant features. We further design an exploration loss to increase feature diversity for better generalization. Extensive experiments on both time-series and visual benchmarks demonstrate that the proposed DIFEX achieves state-of-the-art performance.
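The abstract names two concrete ingredients: the high-level Fourier phase as internally-invariant features and cross-domain correlation alignment for mutually-invariant features. The sketch below illustrates those two ideas in isolation (phase extraction and a CORAL-style covariance alignment loss); it is a generic illustration under assumed shapes, not the DIFEX implementation.

# Minimal sketch of the two ingredients the abstract names: Fourier phase as an
# "internally-invariant" descriptor and a CORAL-style correlation alignment loss
# for "mutually-invariant" features. Illustrative only, not the DIFEX implementation.
import torch

def fourier_phase(x: torch.Tensor) -> torch.Tensor:
    """Return the phase spectrum of a batch of 2D inputs (B, H, W)."""
    freq = torch.fft.fft2(x)
    return torch.angle(freq)

def coral_loss(source: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
    """Align second-order statistics (feature covariances) of two domains."""
    def covariance(f):   # f: (N, D)
        f = f - f.mean(dim=0, keepdim=True)
        return f.t() @ f / (f.shape[0] - 1)
    d = source.shape[1]
    return ((covariance(source) - covariance(target)) ** 2).sum() / (4 * d * d)

x = torch.randn(4, 32, 32)
print(fourier_phase(x).shape)                                # torch.Size([4, 32, 32])
print(coral_loss(torch.randn(16, 64), torch.randn(16, 64)))  # scalar alignment loss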

10 citations

Proceedings ArticleDOI

[...]

04 Jul 2022
TL;DR: The results show that the perception of explainable RS with different levels of detail is affected to different degrees by the explanation goal and user type, and guidelines are suggested to support the systematic design of explanatory interfaces in RS tailored to the user’s context.
Abstract: Despite the acknowledgment that the perception of explanations may vary considerably between end-users, explainable recommender systems (RS) have traditionally followed a one-size-fits-all model, whereby the same explanation level of detail is provided to each user, without taking into consideration individual user’s context, i.e., goals and personal characteristics. To fill this research gap, we aim in this paper at a shift from a one-size-fits-all to a personalized approach to explainable recommendation by giving users agency in deciding which explanation they would like to see. We developed a transparent Recommendation and Interest Modeling Application (RIMA) that provides on-demand personalized explanations of the recommendations, with three levels of detail (basic, intermediate, advanced) to meet the demands of different types of end-users. We conducted a within-subject study (N=31) to investigate the relationship between user’s personal characteristics and the explanation level of detail, and the effects of these two variables on the perception of the explainable RS with regard to different explanation goals. Our results show that the perception of explainable RS with different levels of detail is affected to different degrees by the explanation goal and user type. Consequently, we suggested some theoretical and design guidelines to support the systematic design of explanatory interfaces in RS tailored to the user’s context.
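As a hypothetical illustration of the on-demand explanation levels described in the abstract (basic, intermediate, advanced), a selector might look like the sketch below; the explanation texts and the function name are invented and do not come from RIMA.

# Hypothetical sketch of on-demand explanation levels of detail, mirroring the
# basic / intermediate / advanced levels described above. Content is invented.
EXPLANATION_LEVELS = {
    "basic": "Recommended because it matches your interest in 'machine learning'.",
    "intermediate": "Your interest profile (machine learning: 0.8, HCI: 0.4) overlaps "
                    "with this item's keywords.",
    "advanced": "Per-interest keyword overlap scores, the similarity metric used, and "
                "how your interest profile was constructed.",
}

def explain(level: str = "basic") -> str:
    """Return the explanation at the level of detail the user asked for."""
    return EXPLANATION_LEVELS.get(level, EXPLANATION_LEVELS["basic"])

print(explain("intermediate"))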

7 citations