Author

Thomas Marquenie

Bio: Thomas Marquenie is an academic researcher from Katholieke Universiteit Leuven. The author has contributed to research in the topics of Law enforcement and the Data Protection Act 1998. The author has an h-index of 2 and has co-authored 2 publications receiving 14 citations.

Papers
Proceedings ArticleDOI

20 Jun 2022
TL;DR: This work reveals issues of overrepresentation of minority subjects in violent situations that limit the external validity of the dataset for real-time crime detection systems, and proposes data augmentation techniques to rebalance the dataset.
Abstract: Researchers and practitioners in the fairness community have highlighted the ethical and legal challenges of using biased datasets in data-driven systems, with algorithmic bias being a major concern. Despite the rapidly growing body of literature on fairness in algorithmic decision-making, there remains a paucity of fairness scholarship on machine learning algorithms for the real-time detection of crime. This contribution presents an approach for fairness-aware machine learning to mitigate the algorithmic bias and discrimination issues posed by the reliance on biased data when building law enforcement technology. Our analysis is based on RWF-2000, which has served as the basis for violent activity recognition tasks in data-driven law enforcement projects. We reveal issues of overrepresentation of minority subjects in violent situations that limit the external validity of the dataset for real-time crime detection systems, and we propose data augmentation techniques to rebalance the dataset. Experiments on real-world data show the potential to create more balanced datasets through synthetically generated samples, thus mitigating bias and discrimination concerns in law enforcement applications.
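
The rebalancing strategy summarized above can be illustrated with a minimal, hypothetical sketch. It is not the authors' implementation: the clip representation, the group labels and the specific augmentations (horizontal flip, brightness jitter) are placeholder assumptions, and the paper's actual augmentation pipeline for RWF-2000 may differ.

```python
# Minimal sketch of group-aware rebalancing via data augmentation.
# Hypothetical helpers and labels; not the paper's actual pipeline.
import random
import numpy as np

def augment_clip(clip: np.ndarray) -> np.ndarray:
    """Return a synthetically altered copy of a video clip shaped (T, H, W, C)."""
    out = clip.copy()
    if random.random() < 0.5:
        out = out[:, :, ::-1, :]             # horizontal flip
    gain = random.uniform(0.8, 1.2)          # mild brightness jitter
    return np.clip(out.astype(np.float32) * gain, 0, 255).astype(clip.dtype)

def rebalance(clips, groups):
    """Oversample under-represented subject groups with augmented copies
    until every group appears as often as the largest one."""
    counts = {g: groups.count(g) for g in set(groups)}
    target = max(counts.values())
    balanced_clips, balanced_groups = list(clips), list(groups)
    for group, count in counts.items():
        pool = [c for c, g in zip(clips, groups) if g == group]
        for _ in range(target - count):
            balanced_clips.append(augment_clip(random.choice(pool)))
            balanced_groups.append(group)
    return balanced_clips, balanced_groups
```

In practice the same idea would be applied per (group, class) cell, so that, for example, minority subjects are no longer over-represented specifically in the violent class.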

10 citations

Journal ArticleDOI

TL;DR: While a considerable improvement and a major step forward for the protection of personal data in its field, the Directive is unlikely to mend the fragmented legal framework or achieve the intended high level of data protection standards consistent across European Union member states.
Abstract: This article presents a two-sided analysis of the recently adopted Police and Criminal Justice Authorities Directive. First, it examines the impact of the Directive on the current legal framework and considers to what extent it is capable of overcoming existing obstacles to a consistent and comprehensive data protection scheme in the area of police and criminal justice. Second, it delivers a brief outline and review of the provisions of the Directive itself and explores whether the instrument improves upon the current legislation and sets out adequate data protection rules and standards. Analyzing the Directive from these angles, this article finds that, while a considerable improvement and a major step forward for the protection of personal data in its field, the Directive is unlikely to mend the fragmented legal framework or achieve the intended high level of data protection standards consistent across European Union member states.

8 citations

Proceedings ArticleDOI

01 Sep 2017
TL;DR: It is argued that while the law regulates both algorithms and their discriminatory effects, the framework is insufficient in addressing the complex interactions that must take place between system developers, users, oversight and profiled individuals to fully guarantee algorithmic transparency and accountability.
Abstract: In the hopes of making law enforcement more effective and efficient, police and intelligence analysts are increasingly relying on algorithms underpinning technology-based and data-driven policing. To achieve these objectives, algorithms must also be accurate, unbiased and just. In this paper, we examine how European data protection law regulates automated profiling and how this regulation impacts police and intelligence algorithms and algorithmic discrimination. In particular, we assess to what extent the regulatory frameworks address the challenges of algorithmic transparency and accountability. We argue that while the law regulates both algorithms and their discriminatory effects, the framework is insufficient to address the complex interactions that must take place between system developers, users, oversight and profiled individuals to fully guarantee algorithmic transparency and accountability.

6 citations

Proceedings ArticleDOI

01 Jan 2022
TL;DR: In this article, the authors propose a service design curriculum at the University of Antwerp to support the development of the field of service design within Belgium, where it has yet to become a focus of study in its own right.
Abstract: The Belgian design scene is not unfamiliar with the concept of service design and has hosted some leading-edge companies throughout the years, pushing the field forward. The next step in the development of the field is the expansion of the academic aspect of service design within Belgium. The topic of service design is already addressed in existing design programmes, though it has yet to become a focus of study in its own right. This paper helps to expand the range of study programmes at the University of Antwerp and contributes to shaping the service design landscape in Belgium. The Department of Design Sciences at the University of Antwerp provides students with extended interdisciplinary skills and knowledge, leading them into the world of design. The department now seeks to create an additional master’s programme with a curriculum catered to service design, providing graduates of this programme with all the necessary skills to be at the top of their game when they start their careers in service design. To ensure continuity between what the university delivers and what the job market expects and desires, the programme will be developed from the ground up. For this study, the university has worked in collaboration with current leaders in the field, who provided their expertise and requirements for the future of service design. Insights were obtained by means of workshops and collaborative projects with students and experts in the service design scene, as well as by building on existing literature and current educational programmes across the world.

Cited by
Proceedings Article

01 Jan 2019
TL;DR: It is argued that providing information regarding how AGSs work can enhance users’ trust only when users have enough time and ability to process and understand the information, and that providing excessively detailed information may even reduce users’ perceived understanding of AGSs and thus hurt users’ trust.
Abstract: Users’ adoptions of online-shopping advice-giving systems (AGSs) are crucial for e-commerce websites to attract users and increase profits. Users’ trust in AGSs influences them to adopt AGSs. While previous studies have demonstrated that AGS transparency increases users’ trust through enhancing users’ understanding of AGSs’ reasoning, hardly any attention has been paid to the possible inconsistency between the level of AGS transparency and the extent to which users feel they understand the logic of AGSs’ inner workings. We argue that the relationship between them may not always be positive. Specifically, we posit that providing information regarding how AGSs work can enhance users’ trust only when users have enough time and ability to process and understand the information. Moreover, providing excessively detailed information may even reduce users’ perceived understanding of AGSs, and thus hurt users’ trust. In this research, we will use a lab experiment to explore how providing information with different levels of detail will influence users’ perceived understanding of and trust in AGSs. Our study would contribute to the literature by exploring the potential inverted U-shaped relationship among AGS transparency, users’ perceived understanding of and trust in AGSs, and contribute to the practice by offering suggestions for designing trustworthy AGSs.

7 citations

Journal ArticleDOI

TL;DR: It is discovered that GA has unusual permission requirements and sensitive Application Programming Interface (API) usage, and that its privacy requirements are not transparent to smartphone users, which makes the risk assessment and accountability of GA difficult, posing risks to establishing private and secure personal spaces in a smart city.
Abstract: Smart Assistants have rapidly emerged in smartphones, vehicles, and many smart home devices. Establishing comfortable personal spaces in smart cities requires that these smart assistants are transparent in design and implementation—a fundamental trait required for their validation and accountability. In this article, we take the case of Google Assistant (GA), a state-of-the-art smart assistant, and perform its diagnostic analysis from the transparency and accountability perspectives. We compare our discoveries from the analysis of GA with those of four leading smart assistants. We use two online user studies (N = 100 and N = 210) conducted with students from four universities in three countries (China, Italy, and Pakistan) to learn whether risk communication in GA is transparent to its potential users and how it affects them. Our research discovered that GA has unusual permission requirements and sensitive Application Programming Interface (API) usage, and that its privacy requirements are not transparent to smartphone users. The findings suggest that this lack of transparency makes the risk assessment and accountability of GA difficult, posing risks to establishing private and secure personal spaces in a smart city. Following the separation of concerns principle, we suggest that autonomous bodies should develop standards for the design and development of smart city products and services.
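
As an illustration of one step of such a diagnostic analysis, the sketch below lists the permissions declared in an app's (decoded) AndroidManifest.xml and flags a few sensitive ones. The manifest path and the sensitive-permission list are assumptions for the example; this is not the paper's actual methodology, nor GA's real permission set.

```python
# Illustrative permission audit for a decoded AndroidManifest.xml.
# The file path and SENSITIVE set are placeholder assumptions.
import xml.etree.ElementTree as ET

ANDROID_NS = "{http://schemas.android.com/apk/res/android}"

SENSITIVE = {
    "android.permission.RECORD_AUDIO",
    "android.permission.CAMERA",
    "android.permission.ACCESS_FINE_LOCATION",
    "android.permission.READ_CONTACTS",
}

def declared_permissions(manifest_path):
    """Return the permission names declared via <uses-permission> elements."""
    root = ET.parse(manifest_path).getroot()
    return [el.attrib.get(ANDROID_NS + "name", "")
            for el in root.iter("uses-permission")]

if __name__ == "__main__":
    for perm in declared_permissions("AndroidManifest.xml"):
        label = "SENSITIVE" if perm in SENSITIVE else "normal"
        print(f"{label:9s} {perm}")
```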

6 citations

01 Jan 2014
TL;DR: In this article, the authors identify data protection shortcomings in the inter-agency cooperation regime in the EU criminal justice and law enforcement area and outline, under six possible scenarios, the interplay among the data protection legal instruments in the law-making process in the field today, as well as the response each could provide to such shortcomings.
Abstract: This study aims, first, at identifying data protection shortcomings in the inter-agency cooperation regime in the EU criminal justice and law enforcement area and, second, at outlining, under six possible scenarios, the interplay among the data protection legal instruments in the law-making process in the field today, as well as the response each could provide to such shortcomings.

4 citations

01 Jan 2019
TL;DR: It is argued that, instead of setting a uniform rule for providing AGS transparency, optimal transparency provision strategies should be developed for different types of AGSs and users based on their unique features.
Abstract: Advice-giving systems (AGSs) provide recommendations based on users’ unique preferences or needs. Maximizing users’ adoptions of AGSs is an effective way for e-commerce websites to attract users and increase profits. AGS transparency, defined as the extent to which information of a system’s reasoning is provided and made available to users, has been proven effective in increasing users’ adoptions of AGSs. While previous studies have identified providing explanations as an effective way of enhancing AGS transparency, most of them failed to further explore the optimal transparency provision strategy of AGSs. We argue that instead of setting a uniform rule of providing AGS transparency, we should develop optimal transparency provision strategies for different types of AGSs and users based on their unique features. In this paper, we first developed a framework of AGS transparency provision and identified six components of AGS transparency provision strategies. We then developed a research model of AGS transparency provision strategy with a set of propositions. We hope that based on this model, researchers could evaluate how to effect transparency for AGSs and users with different characteristics. Our work would contribute to the existing knowledge by exploring how AGS and user characteristics will influence the optimal strategy of providing AGS transparency. Our work would also contribute to the practice by offering design suggestions for AGS explanation interfaces.

4 citations

Journal ArticleDOI

TL;DR: In this paper, the authors provide an overview of machine learning techniques utilized in prior research, with a specific focus on model generalization when using public datasets as training data, and shed light on the challenges and opportunities that machine learning-enabled stress monitoring and detection face.
Abstract: Wearable sensors have shown promise as a non-intrusive method for collecting biomarkers that may correlate with levels of elevated stress. Stressors cause a variety of biological responses, and these physiological reactions can be measured using biomarkers including Heart Rate Variability (HRV), Electrodermal Activity (EDA) and Heart Rate (HR) that represent the stress response from the Hypothalamic-Pituitary-Adrenal (HPA) axis, the Autonomic Nervous System (ANS), and the immune system. While cortisol response magnitude remains the gold standard indicator for stress assessment [1], recent advances in wearable technologies have resulted in the availability of a number of consumer devices capable of recording HRV, EDA and HR sensor biomarkers, amongst other signals. At the same time, researchers have been applying machine learning techniques to the recorded biomarkers in order to build models that may be able to predict elevated levels of stress.

The aim of this review is to provide an overview of machine learning techniques utilized in prior research, with a specific focus on model generalization when using these public datasets as training data. We also shed light on the challenges and opportunities that machine learning-enabled stress monitoring and detection face.

This study reviewed published works contributing and/or using public datasets designed for detecting stress and their associated machine learning methods. The electronic databases of Google Scholar, Crossref, DOAJ and PubMed were searched for relevant articles, and a total of 33 articles were identified and included in the final analysis. The reviewed works were synthesized into three categories: publicly available stress datasets, machine learning techniques applied using those datasets, and future research directions. For the machine learning studies reviewed, we provide an analysis of their approach to results validation and model generalization. The quality assessment of the included studies was conducted in accordance with the IJMEDI checklist [2].

A number of public datasets were identified that are labeled for stress detection. These datasets were most commonly produced from sensor biomarker data recorded using the Empatica E4 device, a well-studied, medical-grade wrist-worn wearable that provides the sensor biomarkers most noted to correlate with elevated levels of stress. Most of the reviewed datasets contain less than twenty-four hours of data, and their varied experimental conditions and labeling methodologies potentially limit their ability to generalize to unseen data. In addition, we discuss how previous works show shortcomings in areas such as their labeling protocols, lack of statistical power, validity of stress biomarkers, and model generalization ability.

Health tracking and monitoring using wearable devices is growing in popularity, while the generalization of existing machine learning models still requires further study; research in this area will continue to provide improvements as newer and more substantial datasets become available.
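
The subject-wise evaluation concern the review highlights can be illustrated with a small, hypothetical sketch: leave-one-subject-out cross-validation keeps every subject's windows out of the folds used to train the model that scores them, giving a more honest estimate of generalization to unseen people. The feature matrix below is random placeholder data standing in for HRV/EDA/HR features; it is not drawn from any of the reviewed datasets.

```python
# Sketch of subject-wise (leave-one-subject-out) evaluation for a stress
# classifier. The data is random placeholder content, not a real dataset.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

rng = np.random.default_rng(0)
n_subjects, windows_per_subject, n_features = 10, 60, 8  # e.g. HRV/EDA/HR statistics

X = rng.normal(size=(n_subjects * windows_per_subject, n_features))
y = rng.integers(0, 2, size=n_subjects * windows_per_subject)   # stress / no-stress labels
groups = np.repeat(np.arange(n_subjects), windows_per_subject)  # subject ID per window

clf = RandomForestClassifier(n_estimators=100, random_state=0)
scores = cross_val_score(clf, X, y, groups=groups, cv=LeaveOneGroupOut())
print(f"Held-out-subject accuracy: mean={scores.mean():.2f}, std={scores.std():.2f}")
```

A naive random split over windows would typically score higher because windows from the same subject leak across train and test; the gap between the two numbers is one way to quantify the generalization issue discussed above.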

3 citations