
Showing papers in "IEEE Technology and Society Magazine in 2021"


Journal ArticleDOI
TL;DR: In this article, the authors highlight the potential of AI to address global challenges and help achieve targets like the United Nations’ sustainable development goals (SDGs), but note that many barriers stand between aspirations to be responsible and the translation of these aspirations into concrete practicalities.
Abstract: As artificial intelligence (AI) permeates social and economic life, its ethical and governance implications have come to the forefront. Active debates surround AI’s role in labor displacement, autonomous vehicles, the military, misinformation, healthcare, education, and more. As societies collectively grapple with these challenges, new opportunities for AI to proactively contribute to social good (AI4SG) and equity (AI4Eq) have also been proposed [1], [2], such as Microsoft’s AI for Earth program. These efforts highlight the potential of AI to address global challenges and help achieve targets like the United Nations’ sustainable development goals (SDGs) [3]. Yet, whether AI efforts are directed explicitly at social good and equity or not, many barriers stand between aspirations to be responsible and the translation of these aspirations into concrete practicalities.

20 citations


Journal ArticleDOI
TL;DR: In this article, the authors proposed the concept of AI for social good (AI4SG), which is a set of AI-powered systems and capabilities applied to improve public welfare.
Abstract: We live in an era where problems are global in scale (e.g., climate change) and solutions must also be coordinated on a global scale in the international context. The 2030 agenda for sustainable development goals (SDGs) was ratified in 2015 as a continuation of the millennium development goals (MDGs). In this sense, the SDGs, as the MDGs were in their day, are a global mechanism that urges governments to coordinate to address global problems. At the core of the SDGs is “to achieve a better and more sustainable future for all.” The SDGs consist of a series of 17 goals with 169 targets which, for the first time, identify the fight against poverty as a necessity for sustainable development. The SDGs treat the ecological, social, and economic dimensions as interdependent for sustainable development. With the progress and advances of artificial intelligence (AI) technologies, many researchers are exploring the possibility of using them to tackle societal problems. This is what many people nowadays call “AI for social good” (AI4SG). The concept behind AI4SG is very simple: AI-powered systems and capabilities applied to improve public welfare [1]. Although AI4SG initiatives can be classified in different ways (in terms of data, modeling, or decision-making), “projects addressing AI4SG vary significantly” [2], and AI designed for “good” may, in practice, end up going “bad.” More importantly, not everyone would agree on what is a good result. The main motivation for any application of AI4SG is to solve social problems.

13 citations


Journal ArticleDOI
TL;DR: The increasing trend of mental health issues, especially among younger people, is not a new phenomenon as mentioned in this paper, and decision-makers are recognizing its devastating effect, resulting in mental health well-being appearing in goal 3 of the 17 UN sustainable development goals (SDGs) [1].
Abstract: The increasing trend of mental health issues, especially among younger people, is not a new phenomenon. World organizations, leaders, and decision-makers are recognizing its devastating effect, resulting in mental health well-being appearing in goal 3 of the 17 UN sustainable development goals (SDGs) [1]. Among mental health issues, stress, anxiety, and depression (SAD) seem to be at the forefront—reaching 74% for disabling stress [2], 28% for anxiety disorder [3], and 48% for depression [4], in some groups. What is more, between 76% and 85% of people in low- and middle-income countries receive no treatment for their disorder [5], whereas treatment coverage for depression, for example, is only 33% in high-income countries [6]. Mental health issues have large, multifaceted effects on the patient, on their immediate surroundings (family or caretakers), and on the wider society [7]. Individuals face decreased quality of life, poor educational outcomes, lowered productivity and potential poverty, social problems, vulnerability to abuse, and additional health problems. Caretakers face increased emotional and physical challenges as well as decreased household income and increased financial costs. Society faces the loss of several gross domestic product (GDP) percentage points and billions of dollars annually, along with exacerbated public health issues and the erosion of social cohesion. All of this feeds an ever stronger positive feedback loop: SAD perpetuates SAD. Too often, mental health issues directly result in the worst possible outcome, the loss of human life, as many countries struggle with high suicide rates [8]. It has been recognized that the reasons for increasing SAD include a severe lack of mental health professionals and regulations [9] as well as unequal access to mental health care [10]. These factors make the field ripe for technological and other scientific therapy-based interventions, as individuals with mental health issues prefer therapies to medication [11].

12 citations


Journal ArticleDOI
TL;DR: The COVID-19 pandemic has exposed and exacerbated existing global inequalities, and the gap between the privileged and the vulnerable is growing wider, resulting in a broad increase in inequality across all dimensions of society.
Abstract: As secretary general of the United Nations, Antonio Guterres said during the 2020 Nelson Mandela Annual Lecture, “COVID-19 has been likened to an X-ray, revealing fractures in the fragile skeleton of the societies we have built.” Without a doubt, the COVID-19 pandemic has exposed and exacerbated existing global inequalities. Whether at the local, national, or international scale, the gap between the privileged and the vulnerable is growing wider, resulting in a broad increase in inequality across all dimensions of society. The disease has strained health systems, social support programs, and the economy as a whole, drawing an ever-widening distinction between those with access to treatment, services, and job opportunities and those without. Global lockdown restrictions have led to increases in childcare and housework responsibilities, and most of the burden has fallen on women, further increasing existing gender inequality [1] , [2] . Indigenous populations worldwide find themselves more vulnerable to infection, many times with less access to health services or hygiene measures and limited updated scientific information about the virus and measures that can be taken to mitigate it [3] . Inequality has also pervaded the education sector, with only a subset of students able to attend safe in-person schooling or access online education when needed.

10 citations


Journal ArticleDOI
TL;DR: In this article, the authors present a first and provisional analysis of the proposed regulation, and observe that it prioritizes fundamental rights and incorporates some human rights principles, such as accountability, and the inclusion of governance through supervisory authorities to implement and enforce the regulation.
Abstract: Editor’s note: This article was written before the publication by the EU Commission of its proposal for an artificial intelligence (AI) regulation [29]. In a first and provisional analysis of the proposed regulation, we observe that it incorporates some of the basic principles laid down in our article: it prioritizes fundamental rights, incorporates some human rights principles, such as accountability, and includes governance through supervisory authorities to implement and enforce the regulation. Nevertheless, we still feel that many of the suggestions present in our article, which would help to operationalize the regulation, are not addressed. One example is the reduced scope of the regulation to a list of “high-risk applications,” leaving all other AI applications without a legal framework. We believe that the principles that inspire the regulation should also be applied to “lower-risk applications.” Defining only the compliance process for AI developers, while leaving open the specific technical requirements that these high-risk applications must meet, leaves untouched the existing gap between legal language and engineering practice. There are no described mechanisms by which stakeholders (other than developers and implementers) can influence AI development, monitor performance, or claim redress if harmed. These shortcomings and other issues presented in our article leave the door open to loopholes that we hope the European Parliament can fix during the legislative process.

10 citations


Journal ArticleDOI
TL;DR: In this article, a special issue dedicated to the theme of public interest technology (PIT) is presented, which highlights the importance of PIT practitioners serving as transdisciplinary intermediaries between the community and the STEM disciplines and technical teams.
Abstract: This special issue is dedicated to the theme of public interest technology (PIT) [1]. PIT acknowledges that technological potential can be harnessed to satisfy the needs of civil society. In other words, technology can be seen as a public good that can benefit all, through an open democratic system of governance, with open data initiatives, open technologies, and open systems/ecosystems designed for the collective good, as defined by the respective communities that will be utilizing them. Just as with the established fields of public interest law (PIL) [2], [3] and public interest journalism (PIJ) [4], we can envision potential fields around the idea of PIT [5], [6], such as public interest co-design (PITco), public interest engagement (PITengage), or public interest consulting (PIC). For decades, public interest engineers (PIEs) have volunteered their time to collaborate in meaningful participative engagements. These engineers have self-organized impressive collectives, including Engineers Without Borders, the ASCE Disaster Assistance Volunteer Program, the Appropriate Infrastructure Development Group, Architecture for Humanity, Bridges to Prosperity, Bridging the Gap Africa, Engineers for a Sustainable World, GISCorps, Habitat for Humanity, and National Engineering Projects in Community Service, to name a few. These collectives and initiatives call attention to the primary role of a PIT practitioner: serving as a transdisciplinary intermediary between the community and the STEM disciplines and technical teams, emphasizing the importance of justice, equity, and inclusion in the design and deployment of new technologies [7] that allow for positive social transformation and empowerment [8].

9 citations


Journal ArticleDOI
TL;DR: The European Union (EU) is committed to the 2030 Agenda and the sustainable development goals (SDGs), which the UN itself has recognized cannot be achieved without a people-focused, science-based, digital revolution.
Abstract: The United Nations (UN) 2030 Agenda and other movements toward setting global goals such as the Paris Agreement and the European Green Deal/U.S. Green New Deal are laying the groundwork for a transformation beyond purely market-based economics toward sustainability and inclusiveness [1] , in which technological innovation and, in particular, artificial intelligence (AI) can play a central role. The European Union (EU) is committed to the 2030 Agenda and the sustainable development goals (SDGs), which the UN itself has recognized cannot be achieved without a people-focused, science-based, digital revolution [2] . This commitment to the 2030 Agenda should entail promoting an inclusive and sustainable AI strategy, rather than a strategy with a narrow focus on competitiveness [3] , [4] . In order for AI to contribute to achieving the SDGs, a systemic approach to the development of AI solutions is required [5] – [9] . Conversely, the SDGs provide an ideal framework to test the desirability of AI solutions [10] . Europe’s multicultural character and its framework of international collaboration give it a head start toward becoming a global reference in the promotion of an inclusive and sustainable AI. Sharing the experiences and practices of such a European AI could make a significant contribution to achieving the SDGs.

7 citations


Journal ArticleDOI
TL;DR: IEEE 7000 as discussed by the authors is a long-awaited standard promising organizations the “ethical specs” that seem to be overdue in engineering roadmaps; it guides developers in making their products and services compatible with the ethical values of the communities in which technical products and services are placed and used.
Abstract: IEEE 7000™ is a long-awaited standard promising organizations the “ethical specs” [1] that seem to be overdue in engineering roadmaps. One hundred and fifty-four experts were, at some point, involved in IEEE 7000, 34 of them as Workgroup (WG) members. Seventy-seven experts balloted for its publication in 2021 (93% acceptance rate). In this article, I want to give a short overview of what to expect from IEEE 7000 and how it came about. The 79-page normative standard involved hundreds of online discussions over five years, engaging self-selected individuals stemming from many cultures (Europe, the Middle East, the United States, Australia, and Latin America) and a broad set of professional backgrounds (engineers, philosophers, theologians, consultants, etc.). There was one core question to be answered: How can tech organizations of whatever size and industry build more ethical systems? But what is “ethical?” When laypeople hear the word “ethics,” they easily confuse it with “morals.” This is not how the IEEE 7000 WG refers to it. Instead, IEEE 7000 guides developers in making their products and services compatible with the ethical values of the communities in which technical products and services are placed and used. The standard gives step-by-step guidance to organizations on how to care for stakeholder values from the early conception of a system all through its development (and, depending on its reading, during later deployment). Organizations that envision building technology for humanity (instead of only profit) get an answer to the Kantian question “How should I act?” I, the engineer, and I, the manager. How should the organization prepare for an IT project? How should it elicit and prioritize values? How can it ensure that prioritized values end up in the system-of-interest (SOI)? And how can the organizational and technical engineers transparently share their value mission and effort?
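The questions the abstract lists (eliciting and prioritizing values, then ensuring they end up in the SOI) amount to a value-traceability exercise. The following is a minimal, hypothetical sketch of that idea only; the class and field names are invented for illustration, since IEEE 7000 is a process standard and prescribes no data model or code:

```python
# Hypothetical sketch of the value-traceability idea behind IEEE 7000:
# elicited stakeholder values are refined into value requirements, which
# must trace to concrete system requirements of the system-of-interest
# (SOI). All names are illustrative, not taken from the standard.

from dataclasses import dataclass, field

@dataclass
class ValueRequirement:
    value: str                    # prioritized ethical value, e.g., "privacy"
    requirement: str              # what the SOI must do to honor it
    system_requirements: list = field(default_factory=list)

@dataclass
class ValueRegister:
    entries: list = field(default_factory=list)

    def untraced(self):
        """Value requirements not yet linked to any system requirement:
        prioritized values at risk of being lost during development."""
        return [e for e in self.entries if not e.system_requirements]

register = ValueRegister([
    ValueRequirement("privacy", "Minimize personal data collected",
                     ["REQ-12: store only hashed user identifiers"]),
    ValueRequirement("transparency", "Explain automated decisions to users"),
])

print([e.value for e in register.untraced()])  # -> ['transparency']
```

A register like this makes the gap visible: any value requirement with an empty trace list is a prioritized value that has not yet reached the system-of-interest.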

7 citations


Journal ArticleDOI
TL;DR: In this paper, the authors argue that adopting a human rights framework for AI-DDD, far preferable to an ethics framework, offers the potential for a robust and enforceable set of guidelines for the pursuit of AI4Eq.
Abstract: We are all aware of the huge potential for artificial intelligence (AI) to bring massive benefits to under-served populations, advancing equal access to public services such as health, education, social assistance, or public transportation. We are equally aware that AI can drive inequality, concentrating wealth, resources, and decision-making power in the hands of a few countries, companies, or citizens. Artificial intelligence for equity (AI4Eq) [1], as presented in this magazine, calls upon academics, AI developers, civil society, and government policy-makers to work collaboratively toward a technological transformation that increases the benefits to society, reduces inequality, and aims to leave no one behind. A call for equity rests on the human rights principle of equality and nondiscrimination. AI design, development, and deployment (AI-DDD) can and should be harnessed to reduce inequality and increase the share of the world’s population that is able to live in dignity and fully realize their human potential. This commentary argues, first, that adopting a human rights framework for AI-DDD, far preferable to an ethics framework, offers the potential for a robust and enforceable set of guidelines for the pursuit of AI4Eq. Second, the commentary introduces the work of IEEE in proposing practical recommendations for AI4Eq, so that people living in high-income countries (HICs) and low- and middle-income countries (LMICs) alike share in AI applications’ widespread benefit to humanity.

6 citations


Journal ArticleDOI
TL;DR: The call center sector is, at least anecdotally, characterized by pressurized, target-driven completion of routine tasks often performed by temporary, over-qualified personnel with little personal control, investment, and engagement as mentioned in this paper.
Abstract: The observation that the COVID-19 pandemic has disrupted workplace relationships and working practices is trite; it is nonetheless true. One significant change has been that the massive increase in call-center employment in the past 20 years has been mirrored during the pandemic by a corresponding increase in remote working or working-from-home. However, the call-center sector is, at least anecdotally, characterized by pressurized, target-driven completion of routine tasks often performed by temporary, over-qualified personnel with little personal control, investment, and engagement. Consequently, manager–employee relationships are often strained and, in particular, mistrustful.

6 citations


Journal ArticleDOI
TL;DR: The last few years have seen a large number of initiatives on artificial intelligence (AI) ethics: intergovernmental-institution initiatives such as “Ethics Guidelines for Trustworthy AI” from the high-level expert group on AI of the European Commission and the Organisation for Economic Cooperation and Development (OECD) Council Recommendation on Artificial Intelligence as mentioned in this paper.
Abstract: The last few years have seen a large number of initiatives on artificial intelligence (AI) ethics: intergovernmental-institution initiatives such as “Ethics Guidelines for Trustworthy AI” from the high-level expert group on AI of the European Commission [1] or the Organisation for Economic Cooperation and Development (OECD) Council Recommendation on Artificial Intelligence [2] , government initiatives such as that of the U.K. Parliament Select Committee on Artificial Intelligence [3] , industry initiatives on AI ethical codes such as those of Google, IBM, Microsoft, and Intel, academic initiatives such as the Montreal declaration for the responsible development of AI [4] , the Stanford University 100 Year Study on AI [5] or the Alan Turing Institute’s “Understanding Artificial Intelligence Ethics and Safety” [6] , and finally professional body initiatives such as the IEEE Global Initiative on Ethics of Autonomous/Intelligent Systems (A/IS) [7] . These initiatives, while acknowledging the potential of A/IS technologies to contribute to global socioeconomic solutions, highlight the increasing challenges posed by these technologies in the ethical, moral, legal, humanitarian, and sociopolitical domains.

Journal ArticleDOI
TL;DR: There are many domains of human endeavor which invoke the public interest, for example, environmental sustainability, law, journalism, and, perhaps most pointedly in 2020-2021, health.
Abstract: There are many domains of human endeavor which invoke the “public interest,” for example, environmental sustainability, law, journalism, and, perhaps most pointedly in 2020–2021, health. All of these domains require some sort of tradeoff between different and potentially competing stakeholder priorities. For example, the public interest in environmental sustainability, with respect to air quality, potable water, and arable land, can be in contention with the requirements of manufacturing, transport, and consumer demand. Advocacy for social justice through public interest law might set a disadvantaged or disempowered group against a privileged or powerful one [1], [2]. Similarly, journalistic reporting in the public interest must weigh holding the powerful accountable for their actions and decisions, and the potential impact on society, against basic rights to privacy and ethical practices in investigative journalism. Choices in public health sometimes appeal to the concept of procedural justice and can involve a multiperspective tradeoff between individual risk and collective benefit, personal preference and state mandate, financial costs and effectiveness of treatments, and speed and caution (and note these are tradeoffs, not false dichotomies, as some would have it).

Journal ArticleDOI
TL;DR: In this article, the authors define digital discrimination as the unfair or unequal treatment of an individual (or group) based on certain protected characteristics (also known as protected attributes) such as income, education, gender, or ethnicity.
Abstract: Operating at a large scale and impacting large groups of people, automated systems can make consequential and sometimes contestable decisions. Automated decisions can impact a range of phenomena, from credit scores to insurance payouts to health evaluations. These forms of automation can become problematic when they place certain groups or people at a systematic disadvantage. These are cases of discrimination—which is legally defined as the unfair or unequal treatment of an individual (or group) based on certain protected characteristics (also known as protected attributes) such as income, education, gender, or ethnicity. When the unfair treatment is caused by automated decisions, usually taken by intelligent agents or other AI-based systems, the topic of digital discrimination arises. Digital discrimination is prevalent in a diverse range of fields, such as in risk assessment systems for policing and credit scores [1] , [2] .
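The definition above can be made operational with a simple group-fairness check. The following is a minimal, hypothetical sketch, not the authors’ method: the toy decision data, the group labels, and the use of a disparate-impact ratio (with the common “four-fifths” rule of thumb) are all illustrative assumptions.

```python
# Hypothetical sketch: quantifying unequal treatment by an automated
# decision system across a protected attribute. Data and names are
# invented for illustration.

def selection_rates(decisions, groups):
    """Fraction of positive (e.g., approved) decisions per group."""
    rates = {}
    for g in set(groups):
        picked = [d for d, grp in zip(decisions, groups) if grp == g]
        rates[g] = sum(picked) / len(picked)
    return rates

def disparate_impact(rates):
    """Ratio of the lowest to the highest group selection rate.
    Ratios well below 1.0 (a common rule of thumb flags < 0.8)
    suggest one group is at a systematic disadvantage."""
    return min(rates.values()) / max(rates.values())

# Toy credit-scoring outcomes: 1 = approved, 0 = denied
decisions = [1, 1, 0, 1, 0, 0, 0, 1, 1, 0]
groups    = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

rates = selection_rates(decisions, groups)
print(rates)                    # approval rate per group: A 0.6, B 0.4
print(disparate_impact(rates))  # 0.4 / 0.6, below 0.8 -> worth auditing
```

A check like this only surfaces a statistical disparity; whether that disparity constitutes discrimination in the legal sense still requires contextual and legal judgment.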

Journal ArticleDOI
TL;DR: In this paper, the authors argue that while the negative effects of technological innovations may be unprecedented, they can be foreseen, and, more importantly, mitigated through more intentional and skillful engineering.
Abstract: Daily headlines stress the ways modern technologies disclose their most dystopian possibilities; this magazine is replete with examples of innovative technologies that prompt considerations of their unethical applications. Numerous approaches have already been proposed to advance critical thinking about the social, cultural, environmental, and economic implications of tech innovation, such as tech literacy and philosophy of technology. What these intellectual traditions have shown is that while the negative effects of technological innovations may be unprecedented, they can be foreseen, and, more importantly, mitigated through more intentional and skillful engineering. Nevertheless, systematic efforts to address these impacts remain peripheral to the engineering profession, with technological artifacts deemed value-neutral, and intervention often seen as luddite and unenforceable [1] . While this situation suggests a need for systemic changes across academic and industry contexts, it also points to an immediate need to address the uptake of critical thinking about the implications of tech innovation within the engineering community.

Journal ArticleDOI
TL;DR: The U.S. intelligence workforce includes a variety of IT-related workers as mentioned in this paper, and the interplay of intrinsic qualities and human choices is further shaped by social, political, and economic interests that inscribe the situation with intended and unintended opportunities and limitations.
Abstract: Science and Technology studies (STS) scholars have long argued for a deeper appreciation of the way technologies embody political, moral, and social choices along with their specific technical capabilities. In particular, research on information technology (IT) workers has pointed out that: 1) technology is not neutral but embodies intrinsic characteristics that enable new human experiences and foreclose others; 2) within these new “horizons of the possible,” individuals and groups construct meaning and make choices, further shaping the situation; and 3) the interplay of intrinsic qualities and human choices is further shaped by social, political, and economic interests that inscribe the situation with their intended and unintended opportunities and limitations [1] . The U.S. intelligence workforce includes a variety of IT-related workers. Although attention has been devoted to discussing the social and ethical implications of artificial intelligence (AI) and use of big-data platforms for the public and private sector workforces [2] – [6] , few research studies have looked at the social implications of these technologies for the future of the defense and security workforce [7] , especially within the U.S. intelligence community (IC), which constitutes a unique and varied set of employees, actors, institutions, missions, and policies.

Journal ArticleDOI
TL;DR: In the early 1970s, we started to dream of the leisure society in which, thanks to technological progress and consequent increase in productivity, working hours would be minimized and we would all live in abundance as discussed by the authors.
Abstract: From the 1970s onward, we started to dream of the leisure society in which, thanks to technological progress and the consequent increase in productivity, working hours would be minimized and we would all live in abundance. We could all devote our time almost exclusively to personal relationships, contact with nature, the sciences, the arts, playful activities, and so on. Today, this utopia seems more unattainable than it did then. Since the turn of the 21st century, inequalities have become increasingly accentuated: of the increase in wealth in the United States between 2006 and 2018, adjusted for inflation and population growth, more than 87% went to the richest 10% of the population, and the poorest 50% lost wealth [1]. Following the crisis of 2008, social inequalities, rights violations, planetary degradation, and the climate emergency worsened (see [2]). In 2019, the world’s 2153 billionaires had more wealth than 4.6 billion people [3]. The World Bank estimates that COVID-19 will push up to 150 million people into extreme poverty [4].

Journal ArticleDOI
TL;DR: In this article, mental health is described as consisting of emotional, social, and psychological well-being, affecting how a person thinks, feels, and acts; addressing issues surrounding mental health is critical in ensuring one’s happiness, productivity, and, ultimately, safety.
Abstract: As enrollment in computer science (CS) programs increases and technical jobs become more available, the mental health of computer scientists and engineers requires wider discussion. Mental health consists of emotional, social, and psychological well-being, affecting how a person thinks, feels, and acts. Addressing issues surrounding mental health is critical in ensuring one’s happiness, productivity, and, ultimately, safety.

Journal ArticleDOI
TL;DR: In this article, the authors used the electric power sector in India as a case study to evaluate the suitability of the sharing economy for public utility and service delivery in the developing world.
Abstract: It is an open question whether or not the sharing economy model—utilizing unused for-profit private resources by sharing them for public use through a digital intermediary—could be adopted for public utility and service delivery in the developing world. This article addresses the question using the electric power sector in India as a case study. Six major components of the sharing economy are taken up for analysis: suppliers/service providers; infrastructure facilities; customers; public policy/regulations; platform; and pricing structure. This article discusses the status, challenges, and possible future directions.

Journal ArticleDOI
TL;DR: The inception of artificial intelligence (AI) dates back to the 1950s, with the latest wave of AI featuring unprecedented machine learning capabilities, including deep learning as discussed by the authors; this wave is disrupting society in both beneficial and disempowering ways, exemplified by medical and scientific advances and by oppressive surveillance.
Abstract: The inception of artificial intelligence (AI) dates back to the 1950s, with the latest wave of AI featuring unprecedented “machine learning” capabilities, including “deep learning.” With these increased capacities, AI is disrupting society in both beneficial and disempowering ways, exemplified by medical and scientific advances and by oppressive surveillance [1]. The rate of adoption and potential of this wave of AI are novel, as are the socio-technical problems developing with emerging technologies powered by AI.

Journal ArticleDOI
TL;DR: In the last decade, there has been an explosion in the progress and applications of artificial intelligence (AI) in our society as discussed by the authors, which has raised numerous questions about the potential of AI in the future and its implications in our lives.
Abstract: In the last decade, there has been an explosion in the progress and applications of artificial intelligence (AI) in our society. For the first time, the applications of AI have left the laboratory to reach society in a broad, visible, and relevant way. This fact has raised numerous questions about the potential of AI in the future and its implications for our lives. The benefits of AI do not come alone; they bring with them responsibilities that, if not considered properly, can turn into misuse, intentional or not.

Journal ArticleDOI
TL;DR: The fourth industrial revolution (4IR) is a vision of a future in which emerging technologies change "the very essence of our human experience" as discussed by the authors, and it was initially introduced in a book of the same name authored by the World Economic Forum's (WEF) founder Klaus Schwab and launched at the WEF’s 2016 Davos conference.
Abstract: The Fourth Industrial Revolution (4IR) is a vision of a future in which emerging technologies change “the very essence of our human experience” [1] . It was initially introduced in a book of the same name authored by the World Economic Forum’s (WEF) founder Klaus Schwab and launched at the WEF’s 2016 Davos conference. The idea has since been used to promote a wide range of emerging technologies and has informed discussion in many forums, including within IEEE publications.

Journal ArticleDOI
TL;DR: In this paper, the authors offer two statements, given in different historical times and in different epistemic contexts, which reveal two radically different visions of what a problem is and how we should engage with it.
Abstract: Our work starts by offering two statements, given in different historical times and in different epistemic contexts, which reveal two radically different visions of what a problem is and how we should engage with it.

Journal ArticleDOI
TL;DR: In this paper, a human-centered approach is proposed to appropriately address the needs, wants, opinions, behaviors, and psychosocial aspects of older adults desiring to age in place.
Abstract: Scientists and engineers are exploring technological solutions to place inside people’s homes to solve challenges associated with rising healthcare needs and an aging population. This trend in technology is known informally as smart living. The technologies being investigated may hold a promising future for the elderly population, allowing them to continue to live inside their home while aging. However, when novel devices are introduced to support aging in place, designs often fail to consider the struggles older individuals face in their everyday lives. Excluding such considerations means continuing to develop technologies that do not truly serve the needs of older adults. Earth is projected to be home to 9.3 billion people by 2050, 2 billion of whom are likely to be 65 or older [1]. Therefore, there is a pronounced need for a human-centered approach to appropriately address the needs, wants, opinions, behaviors, and psychosocial aspects of older adults desiring to age in place.

Journal ArticleDOI
TL;DR: In this paper, the authors focus on the potential impacts of AVs in rural areas, especially related to feasibility and accessibility, and propose a model to evaluate the feasibility of AV deployment in these areas.
Abstract: Although much research has been devoted to the effects of autonomous vehicles (AVs) on urban areas, little work has been dedicated to the potential impacts of AVs in rural areas, especially related to feasibility and accessibility [1] . Due to the lack of reliable public transportation, automobiles play a crucial role for rural residents commuting for work, shopping, and other reasons. According to the U.S. Bureau of Labor Statistics, rural households have on average more vehicles than urban households [2] . In 2015, the average rural household spent about 13.7% of its income on vehicle purchases, maintenance, and repairs, in comparison to 8.3% for urban households [2] . As the cost of vehicles is one of the top concerns for many rural residents [5] , there will be concerns about the affordability of AVs in these areas, as their initial prices might be high [1] . Given their current struggles with affording and maintaining vehicles, rural residents may not be able to afford or maintain personal AVs, at least not in the beginning. There is also concern about whether rural communities will have access to funding to build the necessary transportation infrastructure to deploy AVs.

Journal ArticleDOI
TL;DR: The COVID-19 pandemic has loomed over the world for the better part of a year now; yet, many still cannot shake the disbelief that it is here.
Abstract: The COVID-19 pandemic has loomed over the world for the better part of a year now; yet, many still cannot shake the disbelief that it is here. Nonetheless, countries around the world continue to be ravaged by death, and healthcare workers battle on. As vaccine distribution makes its way to the mainstream, I cannot help but wonder, will people even take the vaccine? In an already divided country, where many are refusing to wear masks due to disbelief, perceived violations of liberty, or mere quarantine fatigue, what will become of those who disobey if vaccination orders become mandatory? Public health emergencies may seem novel, but that is not the case. Even with modern technology and the most brilliant minds, certain diseases continue to baffle the scientific community [1, pp. 611–612] . Furthermore, new ones appear and seem to render the world at the same mercy as the diseases of centuries before [1, pp. 618–619] . On the other hand, the evolution of vaccination has been successful on many fronts as well. Vaccines for polio, measles, rubella, mumps, and varicella are just a few of those that have been used successfully for several decades [2, S5, S6] .

Journal ArticleDOI
TL;DR: The social implications of technology are a global concern, the authors argue; the unintended consequences of human innovation, including space junk and radio noise, are ubiquitous and not confined to particular cultures or jurisdictions.
Abstract: The Social Implications of technology are a global concern. Technological systems span all of the continents and connect nearly every human. The unintended consequences of human innovations are even more widespread, if you include space junk and radio noise. Associated ethical dilemmas are ubiquitous and are not confined to particular cultures or jurisdictions. We might expect that people all over the world would want to discuss technology and its social implications. But that is not the case. At least not with us.

Journal ArticleDOI
TL;DR: In this article, the author argues that three major technology elements contribute to Taiwan's success against COVID-19: a quarantine and cellphone tracking policy, a mask manufacturing management and distribution system, and a range of open-source software developed by civil society.
Abstract: Due to its proximity to China and millions of international travelers yearly, Taiwan was thought to be at great risk of being hit hard by COVID-19. However, it has so far managed to prevent massive community transmission among its 23 million citizens. What leads to this success? What kinds of technology are used? Building upon previous research and reports, I argue that three major technology elements contribute to this success. First, the quarantine and cellphone tracking policy blocks the transmission of imported cases. Second, the mask manufacturing management and distribution system helps slow the spread of the disease. Third, a range of open-source software developed by civil society keeps people informed and even increases their voluntary compliance.

Journal ArticleDOI
TL;DR: In this regard, the recently released United Nations (UN) report on extreme poverty and human rights warns of the risk of a digital dystopia driving a growing inequality that is facilitating the creation of a vast digital subclass as discussed by the authors.
Abstract: Intelligent technologies offer the potential to generate unprecedented levels of prosperity for all, while posing increasing challenges in the ethical, moral, legal, humanitarian, and sociopolitical spheres. In this regard, the recently released United Nations (UN) report on extreme poverty and human rights [1] warns of the risk of a digital dystopia driving a growing inequality that is facilitating the creation of a vast digital subclass. This report provides many well-documented examples in different countries of how dehumanized smart technologies are creating barriers to accessing a wide range of social rights for those who lack Internet access and digital skills.

Journal ArticleDOI
TL;DR: The challenge of how cities can be designed and developed in an inclusive and sustainable direction is monumental as discussed by the authors, but the impact of such solutions will be significantly reduced without long-term, widespread adoption by citizens.
Abstract: The challenge of how cities can be designed and developed in an inclusive and sustainable direction is monumental. Smart city technologies currently offer the most promising solution for long-term sustainability, but the impact of such solutions will be significantly reduced without long-term, widespread adoption by citizens.

Journal ArticleDOI
TL;DR: The pervasiveness of communication and information technology (CIT) systems in nearly every aspect of work and personal life gives the system providers influence over human thought, feeling, and behavior with a magnitude not widely foreseen until a few years ago as discussed by the authors.
Abstract: Over recent years, it has become increasingly apparent that engineering activities must proceed with consideration of the human values that they potentially affect. The pervasiveness of communication and information technology (CIT) systems in nearly every aspect of work and personal life gives the system providers influence over human thought, feeling, and behavior with a magnitude not widely foreseen until a few years ago. Increasingly complex software, including poorly explainable applications of artificial intelligence, is integrated with the physical world through sensors, actuators, control systems, and Internet of Things (IoT)-enabled products. Such systems can pose direct threats to the physical safety of human beings and can negatively impact the environment. The design, deployment, and operation of transportation systems and of chemical, nuclear, and other dangerous industrial plants have always required attention to human values such as safety in addition to economic values. However, recent events such as the Boeing 737 Max crashes and data breaches of financial information impacting broad swathes of nations’ populations have brought public attention to the importance of including human values such as safety and privacy in the design, test, and deployment of CIT and hybrid software/control/actuation systems. This is especially the case for autonomous systems that can operate without—or can override—human decisions and control inputs [1] .