Author

Delmar Karlen

Bio: Delmar Karlen is an academic researcher. The author has contributed to research on the topics of Court of equity & Suspect classification. The author has an h-index of 1 and has co-authored 1 publication receiving 275 citations.

Papers
Journal ArticleDOI
TL;DR: In 1995 Congress amended §43 of the Trademark Act of 1946, 15 U. S. C. §1125, to provide a remedy for the “dilution of famous marks.”
Abstract: JUSTICE STEVENS delivered the opinion of the Court. In 1995 Congress amended §43 of the Trademark Act of 1946, 15 U. S. C. §1125, to provide a remedy for the “dilution of famous marks.” 109 Stat. 985–986. That amendment, known as the Federal Trademark Dilution Act (FTDA), describes the factors that determine whether a mark is “distinctive and famous,” and defines the term “dilution” as “the lessening of the capacity of a famous mark to identify and distinguish goods or services.” …

319 citations


Cited by
Posted Content
TL;DR: In this paper, the authors propose a test for disparate impact based on analyzing the information leakage of the protected class from the other data attributes, and present empirical evidence supporting the effectiveness of their test and their approach for masking bias and preserving relevant information in the data.
Abstract: What does it mean for an algorithm to be biased? In U.S. law, unintentional bias is encoded via disparate impact, which occurs when a selection process has widely different outcomes for different groups, even as it appears to be neutral. This legal determination hinges on a definition of a protected class (ethnicity, gender, religious practice) and an explicit description of the process. When the process is implemented using computers, determining disparate impact (and hence bias) is harder. It might not be possible to disclose the process. In addition, even if the process is open, it might be hard to elucidate in a legal setting how the algorithm makes its decisions. Instead of requiring access to the algorithm, we propose making inferences based on the data the algorithm uses. We make four contributions to this problem. First, we link the legal notion of disparate impact to a measure of classification accuracy that, while known, has received relatively little attention. Second, we propose a test for disparate impact based on analyzing the information leakage of the protected class from the other data attributes. Third, we describe methods by which data might be made unbiased. Finally, we present empirical evidence supporting the effectiveness of our test for disparate impact and our approach for both masking bias and preserving relevant information in the data. Interestingly, our approach resembles some actual selection practices that have recently received legal scrutiny.
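
The sketch below is a rough illustration of the leakage idea described in the abstract, not the authors' code: if a simple classifier can recover the protected class from the remaining attributes, the data effectively encodes class membership, and a facially neutral decision process built on that data can still produce disparate impact. The dataset, feature layout, and choice of classifier here are hypothetical placeholders; the paper's own test is formulated in terms of balanced error rate.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import balanced_accuracy_score
from sklearn.model_selection import train_test_split

def leakage_score(X, protected):
    """Balanced accuracy of predicting the protected class from the other
    attributes: ~0.5 suggests little leakage, values near 1.0 mean the
    data effectively encodes class membership."""
    X_tr, X_te, p_tr, p_te = train_test_split(
        X, protected, test_size=0.3, random_state=0, stratify=protected)
    clf = LogisticRegression(max_iter=1000).fit(X_tr, p_tr)
    return balanced_accuracy_score(p_te, clf.predict(X_te))

# Hypothetical data: one attribute is strongly correlated with the protected class.
rng = np.random.default_rng(0)
protected = rng.integers(0, 2, size=1000)
X = rng.normal(size=(1000, 4))
X[:, 0] += 1.5 * protected  # correlated attribute that leaks class membership
print(f"leakage score: {leakage_score(X, protected):.2f}")
```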

679 citations

Journal ArticleDOI
TL;DR: In this article, the authors summarize what is known about adolescent brain development and what remains unknown, as well as what neuroscience can and cannot tell us about the adolescent brain and behavior.

510 citations

Proceedings ArticleDOI
29 Jan 2019
TL;DR: It is found that fairness-preserving algorithms tend to be sensitive to fluctuations in dataset composition and to different forms of preprocessing, indicating that fairness interventions might be more brittle than previously thought.
Abstract: Computers are increasingly used to make decisions that have significant impact on people's lives. Often, these predictions can affect different population subgroups disproportionately. As a result, the issue of fairness has received much recent interest, and a number of fairness-enhanced classifiers have appeared in the literature. This paper seeks to study the following questions: how do these different techniques fundamentally compare to one another, and what accounts for the differences? Specifically, we seek to bring attention to many under-appreciated aspects of such fairness-enhancing interventions that require investigation for these algorithms to receive broad adoption. We present the results of an open benchmark we have developed that lets us compare a number of different algorithms under a variety of fairness measures and existing datasets. We find that although different algorithms tend to prefer specific formulations of fairness preservation, many of these measures strongly correlate with one another. In addition, we find that fairness-preserving algorithms tend to be sensitive to fluctuations in dataset composition (simulated in our benchmark by varying training-test splits) and to different forms of preprocessing, indicating that fairness interventions might be more brittle than previously thought.
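
As a rough illustration of the kind of sensitivity check the abstract describes (not the paper's benchmark), the sketch below measures how a single fairness metric, the demographic parity gap, fluctuates across random training-test splits while everything else stays fixed. The data, group variable, and classifier are synthetic placeholders.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

def demographic_parity_gap(y_pred, group):
    """Absolute difference in positive-prediction rates between the two groups."""
    return abs(y_pred[group == 1].mean() - y_pred[group == 0].mean())

# Synthetic data in which the label is mildly correlated with group membership.
rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 4))
group = rng.integers(0, 2, size=2000)
y = (X[:, 0] + 0.5 * group + rng.normal(scale=0.5, size=2000) > 0).astype(int)

gaps = []
for seed in range(10):  # vary only the train-test split
    X_tr, X_te, y_tr, y_te, g_tr, g_te = train_test_split(
        X, y, group, test_size=0.3, random_state=seed)
    clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    gaps.append(demographic_parity_gap(clf.predict(X_te), g_te))

print(f"demographic parity gap over 10 splits: "
      f"mean={np.mean(gaps):.3f}, std={np.std(gaps):.3f}")
```

A large spread across splits would be the kind of brittleness the benchmark is designed to expose.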

476 citations

Posted Content
TL;DR: This document is a response to some of the privacy characteristics of direct contact tracing apps like TraceTogether, and an early-stage Request for Comments intended to encourage community efforts to develop alternative, effective solutions with stronger privacy protection for users.
Abstract: Contact tracing is an essential tool for public health officials and local communities to fight the spread of novel diseases, such as for the COVID-19 pandemic. The Singaporean government just released a mobile phone app, TraceTogether, that is designed to assist health officials in tracking down exposures after an infected individual is identified. However, there are important privacy implications of the existence of such tracking apps. Here, we analyze some of those implications and discuss ways of ameliorating the privacy concerns without decreasing usefulness to public health. We hope in writing this document to ensure that privacy is a central feature of conversations surrounding mobile contact tracing apps and to encourage community efforts to develop alternative effective solutions with stronger privacy protection for the users. Importantly, though we discuss potential modifications, this document is not meant as a formal research paper, but instead is a response to some of the privacy characteristics of direct contact tracing apps like TraceTogether and an early-stage Request for Comments to the community. Date written: 2020-03-24. Minor correction: 2020-03-30.

344 citations