Author

Eduardo M. Pereira

Other affiliations: Deloitte, University of Porto
Bio: Eduardo M. Pereira is an academic researcher from the State University of Campinas. The author has contributed to research in topics: Context (language use) & Motion estimation. The author has an h-index of 5 and has co-authored 19 publications receiving 365 citations. Previous affiliations of Eduardo M. Pereira include Deloitte & University of Porto.

Papers
Journal ArticleDOI
TL;DR: A review of the current state of the research field on machine learning interpretability while focusing on the societal impact and on the developed methods and metrics is provided.
Abstract: Machine learning systems are becoming increasingly ubiquitous. Their adoption has been expanding, accelerating the shift towards a more algorithmic society, meaning that algorithmically informed decisions have greater potential for significant social impact. However, most of these accurate decision support systems remain complex black boxes, meaning their internal logic and inner workings are hidden from the user, and even experts cannot fully understand the rationale behind their predictions. Moreover, new regulations and highly regulated domains have made the audit and verifiability of decisions mandatory, increasing the demand for the ability to question, understand, and trust machine learning systems, for which interpretability is indispensable. The research community has recognized this interpretability problem and has focused on developing both interpretable models and explanation methods over the past few years. However, the emergence of these methods shows there is no consensus on how to assess explanation quality. What are the most suitable metrics for assessing the quality of an explanation? The aim of this article is to review the current state of the research field on machine learning interpretability, focusing on its societal impact and on the methods and metrics that have been developed. Furthermore, a complete literature review is presented in order to identify future directions of work in this field.
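
One commonly used proxy for explanation quality in this literature is fidelity: how closely an interpretable surrogate reproduces the black-box model's predictions. The sketch below is a minimal illustration of that idea under assumed models and data (a random forest as the black box, a shallow decision tree as the surrogate); it is not a method proposed in the article.

```python
# Minimal sketch of a fidelity metric for explanations: train a black box,
# fit an interpretable surrogate on its predictions, and measure agreement.
# Models and dataset are illustrative assumptions, not the article's methods.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The "black box" whose behaviour we want to explain.
black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# Global surrogate: a shallow tree trained to mimic the black box's labels.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X_train, black_box.predict(X_train))

# Fidelity = agreement between surrogate and black box on held-out data.
fidelity = accuracy_score(black_box.predict(X_test), surrogate.predict(X_test))
print(f"surrogate fidelity to the black box: {fidelity:.3f}")
```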

813 citations

Journal ArticleDOI
TL;DR: This paper addresses the topic of social semantic meaning in a well-defined surveillance scenario, namely a shopping mall, and proposes new definitions of individual and group behaviour that consider environment context, a relational descriptor that emphasises position and attention-based characteristics, and a new classification approach based on mini-batches.
Abstract: The increasing demand for human activity analysis in surveillance scenarios has been triggered by the emergence of new features and concepts that help identify activities of interest. However, the characterisation of individual and group behaviours is not well studied in the video surveillance community, not only because of its intrinsic difficulty and the large variety of topics involved, but also because of the lack of valid semantic concepts relating human activity to social context. In this paper, we address the topic of social semantic meaning in a well-defined surveillance scenario, namely a shopping mall, and propose new definitions of individual and group behaviour that consider environment context, a relational descriptor that emphasises position and attention-based characteristics, and a new classification approach based on mini-batches. We also present a wide evaluation process that analyses the sociological meaning of the individual features and outlines the impact of automatic feature extraction processes on our classification framework. We verify the discriminative value of the selected features, establish the descriptor's performance and robustness under different stress conditions, confirm the advantage of the proposed mini-batch classification approach, which obtains promising results, and outline future research lines to improve our novel social behavioural analysis framework.
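
As a rough illustration of mini-batch classification in this setting, the sketch below trains a linear classifier incrementally over batches of descriptor vectors. The features, labels, and batch size are placeholder assumptions; this is a generic mini-batch training loop, not the specific approach proposed in the paper.

```python
# Minimal, generic sketch of mini-batch training for behaviour classification.
# The descriptors and labels are synthetic placeholders, not the paper's
# relational descriptor or its mini-batch scheme.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)
n_samples, n_features, n_classes = 5000, 32, 4

# Placeholder descriptors (e.g. position/attention features per individual).
X = rng.normal(size=(n_samples, n_features))
y = rng.integers(0, n_classes, size=n_samples)

clf = SGDClassifier(loss="log_loss", random_state=0)
classes = np.arange(n_classes)

batch_size = 256
for start in range(0, n_samples, batch_size):
    xb = X[start:start + batch_size]
    yb = y[start:start + batch_size]
    # partial_fit updates the model one mini-batch at a time.
    clf.partial_fit(xb, yb, classes=classes)

print("training accuracy:", clf.score(X, y))
```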

9 citations

Journal ArticleDOI
TL;DR: The proposed KRAV method is an extension of the conventional CKA to mitigate the imbalance effect of unusual human behaviors and outperforms state-of-the-art results concerning the classification performance and number of employed features.
Abstract: This paper presents a kernel-based relevance analysis for video data to support social behavior recognition. Our approach, termed KRAV, is twofold: (i) a feature ranking based on centered kernel alignment (CKA) is carried out to match social semantic features with the output labels (individual and group behaviors); the employed method extends conventional CKA to mitigate the imbalance effect of unusual human behaviors. (ii) A classification stage performs the behavior prediction. For concrete testing, the Israel Institute of Technology social behavior database is employed to assess KRAV under a tenfold cross-validation scheme. The attained results show that the proposed approach obtains an F1 measure of 0.5925 for the individual recognition task using 50 relevant features, and an F1 measure of 0.8094 for the group recognition task using 12 relevant features, in both cases outperforming state-of-the-art results in terms of classification performance and number of employed features. The selected features would also assist further social behavior analysis regarding the recognition of individual profiles and group behaviors.
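
The sketch below shows plain centered kernel alignment used to rank features against the labels, the idea KRAV builds on; the paper's imbalance correction and its specific feature set are not reproduced here, and the per-feature RBF kernels and data are illustrative assumptions.

```python
# Minimal sketch of CKA-based feature ranking: score each feature by how well
# its kernel aligns with a label kernel, then sort by that score.
import numpy as np

def centered_kernel_alignment(K, L):
    """CKA between two kernel matrices (higher = better alignment)."""
    n = K.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n           # centering matrix
    Kc, Lc = H @ K @ H, H @ L @ H
    return np.sum(Kc * Lc) / (np.linalg.norm(Kc) * np.linalg.norm(Lc))

def rank_features(X, y, gamma=1.0):
    """Rank features by the alignment of their RBF kernel with the labels."""
    L = (y[:, None] == y[None, :]).astype(float)  # label kernel: 1 if same class
    scores = []
    for j in range(X.shape[1]):
        d = X[:, j:j + 1] - X[:, j:j + 1].T       # pairwise differences for feature j
        K = np.exp(-gamma * d ** 2)               # per-feature RBF kernel
        scores.append(centered_kernel_alignment(K, L))
    return np.argsort(scores)[::-1]               # most relevant feature first

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
y = (X[:, 3] + 0.1 * rng.normal(size=200) > 0).astype(int)  # feature 3 is informative
print(rank_features(X, y)[:3])
```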

8 citations

Book ChapterDOI
05 Jun 2013
TL;DR: Two motion tracking algorithms that combine features from crowd motion detection and multiple tracking are presented to build motion patterns and understand customers' behavior under unconstrained video conditions.
Abstract: We present a complete and modular framework that extracts trajectories in a real and complex retail scenario, under unconstrained video conditions. Two motion tracking algorithms that combine features from crowd motion detection and multiple tracking are presented to build motion patterns and understand customers' behavior. Their evaluation across several datasets shows promising results.
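
One basic ingredient of crowd motion analysis is accumulating dense optical flow over a video into a coarse motion pattern map. The sketch below does only that, under assumptions (OpenCV available, a hypothetical input file "retail.mp4"); it is not the chapter's framework or either of its two tracking algorithms.

```python
# Minimal sketch: accumulate Farneback dense optical flow magnitude over a
# video to obtain a coarse "motion pattern" map of where movement occurs.
# "retail.mp4" is a hypothetical input file name.
import cv2
import numpy as np

cap = cv2.VideoCapture("retail.mp4")
ok, frame = cap.read()
prev_gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
motion_energy = np.zeros(prev_gray.shape, dtype=np.float64)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Dense optical flow between consecutive frames.
    flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    motion_energy += np.linalg.norm(flow, axis=2)  # accumulate flow magnitude
    prev_gray = gray

cap.release()
# High values indicate regions where customers move most often.
np.save("motion_pattern.npy", motion_energy)
```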

8 citations

Journal ArticleDOI
TL;DR: A methodology is proposed and implemented for estimating anthropometric measures considering data provided by low-cost sensors, such as the Microsoft Kinect, and a more complete characterization of the whole body structure was achieved.
Abstract: Anthropometry has been widely used in different fields, providing relevant information for medicine, ergonomics and biometric applications. However, existing solutions have marked disadvantages, which limits the use of this type of evaluation. Studies have been conducted to determine anthropometric measures easily from data provided by low-cost sensors, such as the Microsoft Kinect. In this work, a methodology is proposed and implemented for estimating anthropometric measures from the information acquired with this sensor. The measures obtained with this method were compared with those from a validation system, Qualisys. Comparing the relative errors against state-of-the-art references, lower errors were verified for some of the estimated measures, and a more complete characterization of the whole body structure was achieved.
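
A simple building block of this kind of pipeline is deriving segment lengths from 3D skeleton joints such as those returned by the Kinect. The sketch below does that with hypothetical joint coordinates; the joint names, values, and measure definitions are illustrative assumptions, not the methodology validated against Qualisys in the paper.

```python
# Minimal sketch: compute body-segment lengths as Euclidean distances between
# 3D joint positions. The joint coordinates below are invented placeholders.
import numpy as np

# Hypothetical joint positions in metres (camera coordinates).
joints = {
    "shoulder_left": np.array([-0.18, 1.40, 2.00]),
    "elbow_left":    np.array([-0.22, 1.12, 2.02]),
    "wrist_left":    np.array([-0.24, 0.86, 2.05]),
    "hip_left":      np.array([-0.10, 0.95, 2.01]),
    "knee_left":     np.array([-0.11, 0.52, 2.03]),
    "ankle_left":    np.array([-0.12, 0.10, 2.04]),
}

def segment_length(a, b):
    """Euclidean distance between two joints."""
    return float(np.linalg.norm(joints[a] - joints[b]))

upper_arm = segment_length("shoulder_left", "elbow_left")
forearm = segment_length("elbow_left", "wrist_left")
thigh = segment_length("hip_left", "knee_left")
shank = segment_length("knee_left", "ankle_left")

print(f"upper arm {upper_arm:.3f} m, forearm {forearm:.3f} m")
print(f"thigh {thigh:.3f} m, shank {shank:.3f} m")
```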

7 citations


Cited by
Journal Article
TL;DR: An independence criterion based on the eigen-spectrum of covariance operators in reproducing kernel Hilbert spaces (RKHSs), consisting of an empirical estimate of the Hilbert-Schmidt norm of the cross-covariance operator, or HSIC, is proposed.
Abstract: We propose an independence criterion based on the eigen-spectrum of covariance operators in reproducing kernel Hilbert spaces (RKHSs), consisting of an empirical estimate of the Hilbert-Schmidt norm of the cross-covariance operator (we term this a Hilbert-Schmidt Independence Criterion, or HSIC). This approach has several advantages, compared with previous kernel-based independence criteria. First, the empirical estimate is simpler than any other kernel dependence test, and requires no user-defined regularisation. Second, there is a clearly defined population quantity which the empirical estimate approaches in the large sample limit, with exponential convergence guaranteed between the two: this ensures that independence tests based on HSIC do not suffer from slow learning rates. Finally, we show in the context of independent component analysis (ICA) that the performance of HSIC is competitive with that of previously published kernel-based criteria, and of other recently published ICA methods.
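
The empirical HSIC statistic described above reduces to a short computation, HSIC(X, Y) = trace(K H L H) / (n - 1)^2 with centered kernel matrices. The sketch below implements that biased estimate with RBF kernels; the bandwidths and data are illustrative assumptions, not the paper's experimental setup.

```python
# Minimal sketch of the biased empirical HSIC estimate with RBF kernels.
import numpy as np

def rbf_kernel(X, gamma=1.0):
    sq = np.sum(X ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2 * X @ X.T   # squared pairwise distances
    return np.exp(-gamma * d2)

def hsic(X, Y, gamma=1.0):
    n = X.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n            # centering matrix
    K, L = rbf_kernel(X, gamma), rbf_kernel(Y, gamma)
    return np.trace(K @ H @ L @ H) / (n - 1) ** 2

rng = np.random.default_rng(0)
x = rng.normal(size=(300, 1))
y_dep = x + 0.1 * rng.normal(size=(300, 1))        # dependent on x
y_ind = rng.normal(size=(300, 1))                  # independent of x

print("HSIC(x, dependent y):  ", hsic(x, y_dep))   # noticeably larger
print("HSIC(x, independent y):", hsic(x, y_ind))   # close to zero
```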

1,134 citations

01 Jan 2018
TL;DR: Part of the conference "Les politiques d'Open Data / Open Acces: Implicacions a la recerca", aimed at researchers and managers of European projects, which took place on 20 September 2018 at the Universitat Autonoma de Barcelona.
Abstract: Presentation on the Personal Data Protection Office (Oficina de Proteccio de Dades Personals) of the UAB and its Open Science policy. It formed part of the conference "Les politiques d'Open Data / Open Acces: Implicacions a la recerca", aimed at researchers and managers of European projects, which took place on 20 September 2018 at the Universitat Autonoma de Barcelona.

665 citations

01 Jan 2016
Perturbation Analysis of Optimization Problems.

461 citations

Proceedings ArticleDOI
21 Apr 2020
TL;DR: An algorithm-informed XAI question bank is developed in which user needs for explainability are represented as prototypical questions users might ask about the AI, and used as a study probe to identify gaps between current XAI algorithmic work and practices to create explainable AI products.
Abstract: A surge of interest in explainable AI (XAI) has led to a vast collection of algorithmic work on the topic. While many recognize the necessity to incorporate explainability features in AI systems, how to address real-world user needs for understanding AI remains an open question. By interviewing 20 UX and design practitioners working on various AI products, we seek to identify gaps between the current XAI algorithmic work and practices to create explainable AI products. To do so, we develop an algorithm-informed XAI question bank in which user needs for explainability are represented as prototypical questions users might ask about the AI, and use it as a study probe. Our work contributes insights into the design space of XAI, informs efforts to support design practices in this space, and identifies opportunities for future XAI work. We also provide an extended XAI question bank and discuss how it can be used for creating user-centered XAI.

371 citations

Proceedings ArticleDOI
27 Jan 2020
TL;DR: It is shown that confidence score can help calibrate people's trust in an AI model, but trust calibration alone is not sufficient to improve AI-assisted decision making, which may also depend on whether the human can bring in enough unique knowledge to complement the AI's errors.
Abstract: Today, AI is being increasingly used to help human experts make decisions in high-stakes scenarios. In these scenarios, full automation is often undesirable, not only due to the significance of the outcome, but also because human experts can draw on their domain knowledge complementary to the model's to ensure task success. We refer to these scenarios as AI-assisted decision making, where the individual strengths of the human and the AI come together to optimize the joint decision outcome. A key to their success is to appropriately calibrate human trust in the AI on a case-by-case basis; knowing when to trust or distrust the AI allows the human expert to appropriately apply their knowledge, improving decision outcomes in cases where the model is likely to perform poorly. This research conducts a case study of AI-assisted decision making in which humans and AI have comparable performance alone, and explores whether features that reveal case-specific model information can calibrate trust and improve the joint performance of the human and AI. Specifically, we study the effect of showing confidence score and local explanation for a particular prediction. Through two human experiments, we show that confidence score can help calibrate people's trust in an AI model, but trust calibration alone is not sufficient to improve AI-assisted decision making, which may also depend on whether the human can bring in enough unique knowledge to complement the AI's errors. We also highlight the problems in using local explanation for AI-assisted decision making scenarios and invite the research community to explore new approaches to explainability for calibrating human trust in AI.

287 citations