
Showing papers by "Wai Chee Dimock" published in 2020


Book
10 Nov 2020

3 citations


Journal ArticleDOI
TL;DR: A machine-learning algorithm using natural language processing combs through a vast database every day, looking for syndromes of over 150 diseases in sixty-five languages. Local news stories, medical bulletins, even livestock and animal disease data are all on the algorithm's reading lists. When an outbreak is identified, its location is linked to global air-travel data to pinpoint other cities at risk, as mentioned in this paper.
Abstract: Dimock reflects on languages in the time of the COVID-19 pandemic. On December 30, 2019, before the World Health Organization (WHO) issued warnings about the coronavirus outbreak that would be known as COVID-19, a Canadian start-up, BlueDot, noticed a cluster of unusual pneumonia cases in Wuhan and flagged it. BlueDot doesn't just accept official statistics. Kamran Khan, its founder and CEO, said that they know that governments may not always be relied upon to provide information in a timely fashion. Instead, a machine-learning algorithm using natural language processing combs through a vast database every day, looking for syndromes of over 150 diseases in sixty-five languages. Local news stories, medical bulletins, even livestock and animal disease data are all on the algorithm's reading lists. When an outbreak is identified, its location is linked to global air-travel data to pinpoint other cities at risk.
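
The pipeline the abstract describes (multilingual keyword surveillance over news and health bulletins, followed by an air-travel lookup to rank exposed cities) can be sketched in a few lines. The keyword lists, threshold, and data shapes below are hypothetical illustrations, not BlueDot's actual system.

```python
# Toy sketch of the kind of pipeline described above: scan multilingual
# reports for syndrome-related terms, flag locations where mentions cluster,
# then rank connected cities by outbound air-travel volume.
# All terms, thresholds, and data structures are hypothetical.
from collections import Counter

SYNDROME_TERMS = {
    "atypical pneumonia": {"pneumonia", "肺炎", "neumonía"},  # hypothetical multilingual keywords
}

def flag_outbreaks(reports, threshold=3):
    """reports: iterable of (location, text) pairs. Returns locations whose
    reports mention a tracked syndrome at least `threshold` times."""
    counts = Counter()
    for location, text in reports:
        lowered = text.lower()
        for terms in SYNDROME_TERMS.values():
            if any(term.lower() in lowered for term in terms):
                counts[location] += 1
    return [loc for loc, n in counts.items() if n >= threshold]

def cities_at_risk(origin, travel_volume):
    """travel_volume: dict mapping (origin, destination) -> passenger count.
    Returns destinations ranked by traffic out of the flagged origin."""
    outbound = {dst: n for (src, dst), n in travel_volume.items() if src == origin}
    return sorted(outbound, key=outbound.get, reverse=True)
```

A real system would replace the keyword match with trained NLP models and live data feeds; the sketch only shows the two-stage shape (detect, then project along travel routes) that the abstract outlines.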

1 citation


Journal ArticleDOI
TL;DR: MLA session 388, "Being Human, Seeming Human," arranged by the Office of the Executive Director and featuring four speakers from Microsoft, was the first of its kind, as discussed by the authors.
Abstract: At first glance it seemed no different from any other MLA session: in a midsize room at the Washington State Convention Center, well attended but not quite filled to capacity, with people leafing through their programs, checking their phones, drifting in and out. It was session 388, "Being Human, Seeming Human." Arranged by the Office of the Executive Director, it was the first of its kind. Four of the six speakers were from Microsoft, expressly invited to start a conversation about what it means for those who self-identify as human to share the planet with those who seem to be. The Microsoft representatives talked about OpenAI's GPT-2, a widely used text generator. Can we always tell that the writer is an algorithm? And should we object if it happens to write like us? These playful conundrums were exactly right for the occasion, but just the tip of the iceberg. Indeed, "seeming human" might turn out to be one of the less scary things AI can do. Replacing, supplanting, and eliminating human beings are also in the cards. From self-driving cars to facial recognition biometrics to drones carrying out remote assassinations, AI is poised to transform the fabric of life and the future of work. The Brookings Institution, drawing on a study by the Stanford graduate student Michael Webb, reported on 20 November 2019 that, unlike the automation enabled by robots such as those in Amazon warehouses (Edwards), which mostly affects low-paying jobs, the predictive and decision-making powers of AI, in the form of machine-learning algorithms, will affect every employment sector, hitting educated workers the hardest (Muro et al.). Those "with graduate or professional degrees will be almost four times as exposed to AI as workers with just a high school degree" (Muro et al.). This point was reiterated in a 29 January 2020 update from the Brookings Institution, further underscoring the ...
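
The session's premise, a text generator whose output can pass as human prose, is easy to reproduce with the publicly released GPT-2 checkpoint. A minimal sketch follows, assuming the Hugging Face transformers library; the prompt and settings are illustrative, not anything demonstrated at the session.

```python
# Minimal sketch: sampling continuations from the public GPT-2 checkpoint.
# Assumes the Hugging Face `transformers` package (with a backend such as
# PyTorch) is installed; prompt and generation settings are illustrative.
from transformers import pipeline, set_seed

set_seed(42)  # make the sampled text reproducible
generator = pipeline("text-generation", model="gpt2")

prompt = "To self-identify as human is to"
for sample in generator(prompt, max_length=60, num_return_sequences=2):
    print(sample["generated_text"])
    print("---")
```

Whether a reader can reliably tell such output from human writing is precisely the question the panel posed.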

1 citation