Author

Thomas Marquenie

Bio: Thomas Marquenie is an academic researcher from Katholieke Universiteit Leuven. The author has contributed to research in the topics of law enforcement and the Data Protection Act 1998. The author has an h-index of 2 and has co-authored 2 publications receiving 14 citations.

Papers
Journal ArticleDOI
TL;DR: While a considerable improvement and a major step forward for the protection of personal data in its field, the Directive is unlikely to mend the fragmented legal framework and achieve the intended high level of data protection standards consistently across European Union member states.

10 citations

Proceedings ArticleDOI
20 Jun 2022
TL;DR: This work reveals an overrepresentation of minority subjects in violent situations that limits the external validity of the dataset for real-time crime detection systems, and proposes data augmentation techniques to rebalance the dataset.
Abstract: Researchers and practitioners in the fairness community have highlighted the ethical and legal challenges of using biased datasets in data-driven systems, with algorithmic bias being a major concern. Despite the rapidly growing body of literature on fairness in algorithmic decision-making, there remains a paucity of fairness scholarship on machine learning algorithms for the real-time detection of crime. This contribution presents an approach for fairness-aware machine learning to mitigate the algorithmic bias and discrimination issues posed by the reliance on biased data when building law enforcement technology. Our analysis is based on RWF-2000, which has served as the basis for violent activity recognition tasks in data-driven law enforcement projects. We reveal an overrepresentation of minority subjects in violent situations that limits the external validity of the dataset for real-time crime detection systems, and we propose data augmentation techniques to rebalance the dataset. The experiments on real-world data show the potential to create more balanced datasets with synthetically generated samples, thus mitigating bias and discrimination concerns in law enforcement applications.
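
The rebalancing step lends itself to a brief sketch. The following minimal Python example is a hypothetical illustration rather than the authors' actual RWF-2000 pipeline: it oversamples underrepresented (subgroup, label) cells with label-preserving augmentations, and the field names, transform list, and toy data are all assumptions.

```python
import random
from collections import defaultdict

# Hypothetical label-preserving transforms; a real pipeline would apply
# these to the video frames themselves (e.g. with OpenCV or torchvision).
TRANSFORMS = ["horizontal_flip", "brightness_jitter", "temporal_crop"]

def rebalance(samples, seed=0):
    """Oversample every (subgroup, label) cell up to the size of the
    largest cell, tagging each synthetic copy with its transform."""
    rng = random.Random(seed)
    cells = defaultdict(list)
    for s in samples:
        cells[(s["subgroup"], s["label"])].append(s)
    target = max(len(group) for group in cells.values())
    balanced = []
    for group in cells.values():
        balanced.extend(group)
        for _ in range(target - len(group)):
            copy = dict(rng.choice(group))      # duplicate an original clip
            copy["transform"] = rng.choice(TRANSFORMS)
            copy["synthetic"] = True
            balanced.append(copy)
    return balanced

# Toy example: subgroup A is overrepresented in the "violent" class (8 vs. 2).
clips = ([{"clip": f"a{i}.avi", "subgroup": "A", "label": "violent"} for i in range(8)]
         + [{"clip": f"b{i}.avi", "subgroup": "B", "label": "violent"} for i in range(2)])
print(len(rebalance(clips)))  # 16: both cells padded to 8 clips
```

Balancing at the (subgroup, label) level rather than the label level is what addresses the overrepresentation concern: every demographic cell ends up equally represented within each class.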

10 citations

Proceedings ArticleDOI
01 Sep 2017
TL;DR: It is argued that while the law regulates both algorithms and their discriminatory effects, the framework is insufficient in addressing the complex interactions that must take place between system developers, users, oversight bodies and profiled individuals to fully guarantee algorithmic transparency and accountability.
Abstract: In the hopes of making law enforcement more effective and efficient, police and intelligence analysts are increasingly relying on algorithms underpinning technology-based and data-driven policing. To achieve these objectives, algorithms must also be accurate, unbiased and just. In this paper, we examine how European data protection law regulates automated profiling and how this regulation impacts police and intelligence algorithms and algorithmic discrimination. In particular, we assess to what extent the regulatory frameworks address the challenges of algorithmic transparency and accountability. We argue that while the law regulates both algorithms and their discriminatory effects, the framework is insufficient in addressing the complex interactions that must take place between system developers, users, oversight bodies and profiled individuals to fully guarantee algorithmic transparency and accountability.

8 citations

Proceedings ArticleDOI
01 Jan 2022
TL;DR: In this paper, the authors propose a service design curriculum at the University of Antwerp to support the development of service design within Belgium as a field of study in its own right.
Abstract: The Belgian design scene is not unfamiliar with the concept of service design and has hosted some leading-edge companies throughout the years, pushing the field forward. The next step in the development of the field is the expansion of the academic aspect of service design within Belgium. The topic of service design is already addressed in existing design programmes, though it is yet to be a focus of study on its own. This paper helps to expand the range of study programmes at the University of Antwerp and contributes to shaping the service design landscape in Belgium. The Department of Design Sciences at the University of Antwerp provides students with extended interdisciplinary skills and knowledge, leading them into the world of design. The department now seeks to create an additional master’s programme with a curriculum catered to service design, providing its graduates with all the necessary skills to be at the top of their game when they start their careers in service design. To ensure continuity between what the university delivers and what the job market expects and desires, the programme will be developed from the ground up. For this study, the university has worked in collaboration with current leaders in the field, who provided their expertise and requirements for the future of service design. Insights were obtained by means of workshops and collaborative projects with students and experts in the service design scene, as well as by building on existing literature and current educational programmes across the world.

Cited by
Proceedings Article
01 Jan 2019
TL;DR: It is argued that providing information regarding how AGSs work can enhance users’ trust only when users have enough time and ability to process and understand the information; providing excessively detailed information may even reduce users’ perceived understanding of AGSs, and thus hurt users’ trust.
Abstract: Users’ adoption of online-shopping advice-giving systems (AGSs) is crucial for e-commerce websites to attract users and increase profits. Users’ trust in AGSs influences them to adopt AGSs. While previous studies have demonstrated that AGS transparency increases users’ trust through enhancing users’ understanding of AGSs’ reasoning, hardly any attention has been paid to the possible inconsistency between the level of AGS transparency and the extent to which users feel they understand the logic of AGSs’ inner workings. We argue that the relationship between them may not always be positive. Specifically, we posit that providing information regarding how AGSs work can enhance users’ trust only when users have enough time and ability to process and understand the information. Moreover, providing excessively detailed information may even reduce users’ perceived understanding of AGSs, and thus hurt users’ trust. In this research, we will use a lab experiment to explore how providing information with different levels of detail will influence users’ perceived understanding of and trust in AGSs. Our study would contribute to the literature by exploring the potential inverted U-shaped relationship among AGS transparency, users’ perceived understanding of and trust in AGSs, and contribute to the practice by offering suggestions for designing trustworthy AGSs.

17 citations

Journal ArticleDOI
TL;DR: It is discovered that GA has unusual permission requirements and sensitive Application Programming Interface (API) usage, and that its privacy requirements are not transparent to smartphone users, which makes the risk assessment and accountability of GA difficult and poses risks to establishing private and secure personal spaces in a smart city.
Abstract: Smart Assistants have rapidly emerged in smartphones, vehicles, and many smart home devices. Establishing comfortable personal spaces in smart cities requires that these smart assistants are transparent in design and implementation—a fundamental trait required for their validation and accountability. In this article, we take the case of Google Assistant (GA), a state-of-the-art smart assistant, and perform a diagnostic analysis of it from the transparency and accountability perspectives. We compare our discoveries from the analysis of GA with those of four leading smart assistants. We use two online user studies (N = 100 and N = 210) conducted with students from four universities in three countries (China, Italy, and Pakistan) to learn whether risk communication in GA is transparent to its potential users and how it affects them. Our research discovered that GA has unusual permission requirements and sensitive Application Programming Interface (API) usage, and that its privacy requirements are not transparent to smartphone users. The findings suggest that this lack of transparency makes the risk assessment and accountability of GA difficult, posing risks to the establishment of private and secure personal spaces in a smart city. Following the separation of concerns principle, we suggest that autonomous bodies should develop standards for the design and development of smart city products and services.
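
One ingredient of a diagnostic analysis like this, listing the permissions an app requests, can be sketched with the Python standard library. This is an illustrative sketch only: the manifest path and the abridged dangerous-permission set are assumptions, and the article's full analysis also covers API usage and risk communication.

```python
import xml.etree.ElementTree as ET

# Attribute namespace used in Android manifests.
ANDROID_NS = "{http://schemas.android.com/apk/res/android}"

# Abridged, illustrative subset of Android's "dangerous" permission class.
DANGEROUS = {
    "android.permission.RECORD_AUDIO",
    "android.permission.ACCESS_FINE_LOCATION",
    "android.permission.READ_CONTACTS",
    "android.permission.CAMERA",
}

def requested_permissions(manifest_path):
    """Return the permission names declared via <uses-permission> elements."""
    root = ET.parse(manifest_path).getroot()
    return [el.get(ANDROID_NS + "name") for el in root.iter("uses-permission")]

# Hypothetical path to a decoded manifest (e.g. produced by apktool).
for perm in requested_permissions("AndroidManifest.xml"):
    print("DANGEROUS" if perm in DANGEROUS else "normal   ", perm)
```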

11 citations

Journal ArticleDOI
18 Mar 2020
TL;DR: This study answers the question of how a PIA should be carried out for large-scale digital forensic operations, describes the privacy risks and threats involved, and articulates concrete privacy measures to demonstrate compliance with the Police Directive.
Abstract: The large increase in the collection of location, communication, and health data from seized digital devices such as mobile phones, tablets, IoT devices, and laptops often poses serious privacy risks. To measure privacy risks, privacy impact assessments (PIAs) are particularly useful tools, and the Directive EU 2016/680 (Police Directive) requires their use. While much has been said about PIA methods pursuant to the Regulation EU 2016/679 (GDPR), less has been said about PIA methods pursuant to the Police Directive. Moreover, little research has been done to explore and measure privacy risks that are specific to law enforcement activities which necessitate the processing of large amounts of data. This study tries to fill this gap by conducting a PIA on a big data forensic platform as a case study. It also answers the question of how a PIA should be carried out for large-scale digital forensic operations and describes the privacy risks and threats we identified while conducting it. Finally, it articulates concrete privacy measures to demonstrate compliance with the Police Directive.
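
As background on how such an assessment scores risks, many PIA methodologies (e.g. the CNIL's) rate each identified risk on likelihood and severity scales. The sketch below is purely illustrative: the risk entries and the four-point scales are invented for this example and are not the study's actual assessment.

```python
# Four-point scales of the kind used in common PIA methodologies.
SCALE = {1: "negligible", 2: "limited", 3: "significant", 4: "maximum"}

# Hypothetical (risk, likelihood, severity) entries for a forensic platform.
RISKS = [
    ("unauthorised access to seized location data", 3, 4),
    ("excessive retention of extracted communications", 2, 3),
    ("linkage of health data across unrelated case files", 2, 4),
]

# Rank risks by a simple likelihood x severity score, highest first.
for name, likelihood, severity in sorted(RISKS, key=lambda r: r[1] * r[2], reverse=True):
    print(f"score {likelihood * severity:2d} "
          f"[{SCALE[likelihood]} likelihood, {SCALE[severity]} severity] {name}")
```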

7 citations

01 Jan 2019
TL;DR: It is argued that instead of setting a uniform rule for providing AGS transparency, optimal transparency provision strategies should be developed for different types of AGSs and users based on their unique features.
Abstract: Advice-giving systems (AGSs) provide recommendations based on users’ unique preferences or needs. Maximizing users’ adoption of AGSs is an effective way for e-commerce websites to attract users and increase profits. AGS transparency, defined as the extent to which information about a system’s reasoning is provided and made available to users, has been shown to be effective in increasing users’ adoption of AGSs. While previous studies have identified providing explanations as an effective way of enhancing AGS transparency, most of them failed to further explore the optimal transparency provision strategy of AGSs. We argue that instead of setting a uniform rule for providing AGS transparency, we should develop optimal transparency provision strategies for different types of AGSs and users based on their unique features. In this paper, we first developed a framework of AGS transparency provision and identified six components of AGS transparency provision strategies. We then developed a research model of AGS transparency provision strategy with a set of propositions. We hope that, based on this model, researchers can evaluate how best to provide transparency for AGSs and users with different characteristics. Our work would contribute to the existing knowledge by exploring how AGS and user characteristics influence the optimal strategy of providing AGS transparency. Our work would also contribute to the practice by offering design suggestions for AGS explanation interfaces.

7 citations

Book ChapterDOI
22 Mar 2019
TL;DR: The use of big data in the law enforcement sector turns the traditional practices of profiling to search for suspects or to determine the threat level of a suspect into a data-driven process, as discussed by the authors.
Abstract: The use of Big Data in the law enforcement sector turns the traditional practices of profiling to search for suspects or to determine the threat level of a suspect into a data-driven process. Risk profiling is frequently used in the USA and is becoming more prominent in national law enforcement practices in Member States of the European Union. While risk profiling creates challenges that differ per jurisdiction in which it is used and vary with the purpose for which the profiling is deployed, this technological development brings fundamental changes that are quite universal. Risk profiling of suspects, or of large parts of the population to detect suspects, brings challenges of transparency and discrimination, and challenges procedural safeguards. After exploring the concept of risk profiling, this chapter discusses those fundamental challenges. To illustrate them, the chapter uses two main examples of risk profiling: COMPAS and SyRI.

5 citations