
Showing papers in "Fordham Urban Law Journal in 2019"


Journal Article
TL;DR: This Article explores how algorithms in the housing arena operate, or have the potential to operate, in a manner that perpetuates previous eras of discrimination and segregation by focusing on algorithms used in housing finance, marketing, and tenancy selection.
Abstract: Modern algorithms are capable of processing gargantuan amounts of data; with them, decision-making is faster and more efficient than ever. This massive amount of data, termed "big data," is compiled from innumerable sources and, due to decades of discrimination, often leads algorithms to arrive at biased results that disadvantage people of color and people from low- and moderate-income communities. Moreover, the decision-making procedures of modern algorithms are often structured by a homogeneous group of people, who develop algorithms without transparency, auditing, or oversight. This lack of accountability is particularly worrisome because algorithms are being deployed more rapidly and more expansively by public and private actors. Recent scholarship has raised concerns about how algorithms perpetuate discrimination and stereotypes in practically all areas, from casual internet searches to criminal justice. This Article explores how algorithms in the housing arena operate, or have the potential to operate, in a manner that perpetuates previous eras of discrimination and segregation. By specifically concentrating on algorithms used in housing finance, marketing, and tenancy selection, this Article provides a research agenda for exploring whether housing stakeholders are creating an era of algorithmic redlining.

16 citations




Journal Article
TL;DR: In this paper, the authors discuss the most problematic aspects of governmental use of big data and artificial intelligence, including issues of governmental malfeasance, system capacity for masking encoded bias, technological alteration of policy, the ceding of political decisions to private developers, and systemic data error.
Abstract: This Article posits that governments deploy algorithms as social control mechanisms to contain and criminalize marginalized populations. Though recognition of the dangers inherent in misuse of big data and predictive analytics is growing, governments and scholars alike have not paid sufficient attention to how these systems inevitably target the poor, the disabled, and communities of color. As the criminal justice and social welfare systems have become fused, big data analytics increases the breadth of government control over those caught within these overlapping systems. Challenging governmental use of algorithms as instruments of social control requires understanding the fallibility of the technology, the historical and political forces driving adoption of the technology, and the strategies that have been most effective in advocating against it. It also requires recognizing that the technological capacity to control and punish includes, but also expands far beyond, uses by law enforcement. This Article discusses the most problematic aspects of governmental use of big data and artificial intelligence. These include issues of governmental malfeasance, system capacity for masking encoded bias, technological alteration of policy, the ceding of political decisions to private developers, and systemic data error. It then examines the social and political forces driving governmental deployment of data analytics. It concludes by examining litigation, regulatory, and organizing strategies that can be used to challenge governmental employment of algorithmic social control mechanisms.

5 citations


Journal Article
TL;DR: This paper argued that the United States is a present-day settler colonial society whose laws and policies function to support an ongoing structure of invasion called "settler colonialism", which operates through the processes of Indigenous elimination and the subordination of racialized outsiders.
Abstract: This Article flows from the premise that the United States is a present-day settler colonial society whose laws and policies function to support an ongoing structure of invasion called “settler colonialism,” which operates through the processes of Indigenous elimination and the subordination of racialized outsiders. At a time when U.S. immigration laws continue to be used to oppress, exclude, subordinate, racialize, and dehumanize, this Article seeks to broaden the understanding of the U.S. immigration system using a settler colonialism lens. The Article analyzes contemporary U.S. immigration laws and policies such as the National Security Entry-Exit Registration System (NSEERS) and Trump’s immigration policies within a settler colonialism framework in order to locate the U.S. immigration system at the heart of settler colonialism’s ongoing project of elimination and subordination. The Article showcases solidarity movements between Indigenous and immigrant communities that protest the enduring structures of settler colonialism and engender transformative visions that defy the boundaries of the U.S. immigration legal system. Finally, the Article offers pedagogies that disrupt traditional immigration law pedagogy and that are designed to increase awareness of settler colonialism in the immigration law classroom.

5 citations



Journal Article
TL;DR: A new AI Data Transparency Model is proposed that focuses on disclosure of data rather than on the initial software program and programmers, and follows already existing legal frameworks of data transparency, such as the ones being implemented by the FDA and the SEC.
Abstract: Artificial Intelligence and Machine Learning (AI) are often described as technological breakthroughs that will completely transform our society and economy. AI systems have been implemented everywhere, from medicine, transportation, finance, art, and the legal and social spheres, to weapons development. In many sectors, AI systems have already started making decisions previously made by humans. Promising as AI systems may be, they also pose urgent challenges to our everyday life. While much attention has concerned AI's legal implications, the literature suffers from a lack of solutions that account for both legal and engineering practices and constraints. This leaves technology firms without guidelines and increases the risk of societal harm. It also means that policymakers and judges operate without a regulatory regime to turn to when addressing these novel and unpredictable outcomes. This Article tries to fill the void by focusing on data rather than on the software and programmers. It suggests a new model that stems from a recognition of the significant role that data plays in the development and functioning of AI systems. Data is the most important aspect of teaching AI systems to operate. AI algorithms begin with a massive preexisting dataset, which data providers use to train the system. But the data that AI systems "swallow" can be illegal, discriminatory, altered, unreliable, or simply incomplete. Thus, the more data fed to AI systems, the higher the likelihood that they could produce biased, discriminatory decisions and violate privacy rights. The Article discusses how discrimination can arise, even inadvertently, from the operation of "trusted" and "objective" AI systems. To address this problem, this Article proposes a new AI Data Transparency Model that focuses on disclosure of data rather than, as some scholars argue, on the initial software program and programmers. The Model includes an auditing regime and a certification program, run either by a governmental body or, in the absence of such an entity, by private institutions. This Model will encourage the industry to take proactive steps to ensure and publicize that datasets are trustworthy. The suggested Model includes a safe harbor, which incentivizes firms to implement transparency recommendations even without massive regulatory oversight. From an engineering point of view, the Model recognizes data providers and big data as the most important components in the process of creating, training, and operating AI systems. Even more importantly, the Model is technologically feasible because data can be easily absorbed and kept by a technological tool.
Further, this Model is also practically feasible because it follows already existing legal frameworks of data transparency, such as those being implemented by the FDA and the SEC. Improving transparency in data systems would result in less harmful AI systems, better protect societal rights and norms, and produce improved outcomes in this emerging field, especially for minority communities that often lack the resources or representation to challenge AI systems. Increased transparency of the data used while developing, training, or operating AI systems would mitigate and reduce these harms. Additionally, to better identify the risks of faulty data, industry players must conduct critical evaluations and audits of the data used to train AI systems; one way to incentivize this is a certification system to publicize good-faith efforts to reduce the possibility of discriminatory outcomes and privacy violations in AI systems. This Article strives to incentivize the creation of new standards, which the industry could implement from the genesis of AI systems to mitigate the possibility of harm, rather than relying on post-hoc assignments of liability.

4 citations




Journal Article
TL;DR: This Note examines the statutory responsibility placed on law enforcement to make the initial determination of what is "reasonable" under Stand Your Ground laws, and argues that because immunity determinations rest largely on police discretion, human nature and implicit biases cause that discretion to be exercised in a biased way.
Abstract: Twenty-five states across the country have enacted some form of "Stand Your Ground" (SYG) laws, undercutting the traditional notion of a duty to retreat when faced with a perceived threat. Proponents of SYG argue that these laws derive from a fundamental right of self-defense and are intended to safeguard all citizens from imminent threats of bodily harm. However, the application and enforcement of SYG laws do not offer the same protections to black shooters as they do to white shooters. Of these twenty-five SYG jurisdictions, six provide immunity from arrest for those who stand their ground in "reasonable" self-defense. While various factors contribute to the unequal enforcement of SYG laws, this Note examines the statutory responsibility placed on law enforcement to make the initial determination of what is "reasonable." Because immunity determinations are largely formed on the basis of police discretion, human nature and implicit biases inform the reality that this discretion is being exercised in a biased way. As a result, immunity determinations reinforce a presumption that black shooters are inherently unreasonable, leading to a disparate impact of increased arrest and prosecution for black shooters.

2 citations



Journal Article
TL;DR: This study finds that significant percentages of applicants with criminal backgrounds who navigate the application process do become licensed barbers and nursing assistants, according to officials and available state data, yet people with criminal histories remain marked and open to surveillance and control in the extended American carceral state.
Abstract: It is commonly assumed that people with criminal backgrounds are ineligible for licensed employment in the United States. This study, based on more than one hundred interviews with occupational-certification officials in states across the country, demonstrates that people with conviction histories seeking professional credentials confront an unpredictable process that resurrects and amplifies their records and often requires them to perform their rehabilitation, good character, and governability. State laws are extremely varied, complex, and sometimes opaque; application procedures expose would-be licensees to inspection and judgment by a variety of public and private actors. People with criminal backgrounds are not flatly excluded from occupational certification. Indeed, significant percentages of those who manage to navigate the application process do become licensed barbers and nursing assistants, according to officials and available state data. But neither are they restored to full and equal standing. They are in a kind of liminal state, one that is uncertain and precarious. Even when they succeed, people with criminal records seeking licensure often need to navigate a process that reinforces their diminished status and their vulnerability to state authority and private power. These findings yield new insight into the civic status created by American collateral-consequences laws. While not cast out or condemned to permanent exclusion, people with criminal histories remain marked and open to surveillance and control in the extended American carceral state. They are, in effect, disciplinary subjects. Such civil barriers are more porous than absolute, but licensure practices raise serious problems of transparency, consistency, and fairness.





Journal Article
TL;DR: This article uses Intersectionality theory to propose reconceptualizing crime-victimhood not as an axis of weakness and oppression, but as an axis of strength that affords victims personal and social empowerment and influence over the design of social institutions.
Abstract: This article offers an intellectual exercise dealing with a change in the social perception of crime-victimhood. Today, victimhood is seen as an identity characteristic that establishes weakness and positions this group low in the social hierarchy. Through Intersectionality theory, which discusses social stratification deriving from the existence of identity axes and intersections, we suggest a different reading of the victimhood axis: not as an axis of weakness and oppression that grounds exclusion and discrimination, but as an axis of strength that provides personal and social empowerment for those located on it. This reading is made possible, among other ways, through "positive theories" that present narratives breaking the stereotypes accompanying the victimhood axis and instead empowering the victim. The change in how the axis is perceived is not merely symbolic; it carries important practical implications. Those located on axes of strength enjoy flexibility in shaping social institutions and can more easily introduce their worldview and needs into them. Thus, identifying crime-victimhood as an identity characteristic located on a higher axis in the social hierarchy will allow victims to influence the design of social tools and institutions, to develop empowerment pathways, and to create support mechanisms that socially adapt to the experience of victimhood.