
Showing papers on "Applications of artificial intelligence" published in 2018


Journal ArticleDOI
TL;DR: Recent breakthroughs in AI technologies and their biomedical applications are outlined, the challenges for further progress in medical AI systems are identified, and the economic, legal and social implications of AI in healthcare are summarized.
Abstract: Artificial intelligence (AI) is gradually changing medical practice. With recent progress in digitized data acquisition, machine learning and computing infrastructure, AI applications are expanding into areas that were previously thought to be only the province of human experts. In this Review Article, we outline recent breakthroughs in AI technologies and their biomedical applications, identify the challenges for further progress in medical AI systems, and summarize the economic, legal and social implications of AI in healthcare.

1,315 citations


Journal ArticleDOI
TL;DR: An intelligent learning model called "Brain Intelligence (BI)" is developed that generates new ideas about events it has not experienced by using artificial life with an imagine function; it will be tested on automatic driving, precision medical care, and industrial robots.
Abstract: Artificial intelligence (AI) is an important technology that supports daily social life and economic activities. It contributes greatly to the sustainable growth of Japan's economy and solves various social problems. In recent years, AI has attracted attention as a key for growth in developed countries such as Europe and the United States and developing countries such as China and India. The attention has been focused mainly on developing new artificial intelligence information communication technology (ICT) and robot technology (RT). Although recently developed AI technology certainly excels in extracting certain patterns, there are many limitations. Most ICT models are overly dependent on big data, lack a self-idea function, and are complicated. In this paper, rather than merely developing next-generation artificial intelligence technology, we aim to develop a new concept of general-purpose intelligence cognition technology called "Beyond AI". Specifically, we plan to develop an intelligent learning model called "Brain Intelligence (BI)" that generates new ideas about events without having experienced them by using artificial life with an imagine function. We will also conduct demonstrations of the developed BI intelligence learning model on automatic driving, precision medical care, and industrial robots.

880 citations


Proceedings ArticleDOI
08 Oct 2018
TL;DR: Ray, as presented in this paper, is a distributed system that implements a unified interface able to express both task-parallel and actor-based computations, supported by a single dynamic execution engine; it employs a distributed scheduler and a distributed, fault-tolerant store to manage the control state.
Abstract: The next generation of AI applications will continuously interact with the environment and learn from these interactions. These applications impose new and demanding systems requirements, both in terms of performance and flexibility. In this paper, we consider these requirements and present Ray--a distributed system to address them. Ray implements a unified interface that can express both task-parallel and actor-based computations, supported by a single dynamic execution engine. To meet the performance requirements, Ray employs a distributed scheduler and a distributed and fault-tolerant store to manage the system's control state. In our experiments, we demonstrate scaling beyond 1.8 million tasks per second and better performance than existing specialized systems for several challenging reinforcement learning applications.

600 citations
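The unified task/actor interface described in the abstract maps onto a small Python API. Below is a minimal, hedged sketch using Ray's public primitives (ray.init, @ray.remote, .remote() calls, ray.get); the square function and Counter actor are invented for illustration and are not from the paper.

```python
import ray

ray.init()  # start a local Ray runtime

# Task-parallel computation: a stateless function executed as a remote task.
@ray.remote
def square(x):
    return x * x

# Actor-based computation: a stateful worker process.
@ray.remote
class Counter:
    def __init__(self):
        self.total = 0

    def add(self, value):
        self.total += value
        return self.total

futures = [square.remote(i) for i in range(4)]   # submit tasks; returns object refs
counter = Counter.remote()                        # instantiate an actor
result = counter.add.remote(sum(ray.get(futures)))
print(ray.get(result))                            # 0 + 1 + 4 + 9 = 14
```

Both kinds of computation are scheduled by the same engine, which is the point of the unified interface the paper argues for.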


Journal ArticleDOI
TL;DR: Surgeons are well positioned to help integrate AI into modern practice and should partner with data scientists to capture data across phases of care and to provide clinical context, for AI has the potential to revolutionize the way surgery is taught and practiced.
Abstract: Objective:The aim of this review was to summarize major topics in artificial intelligence (AI), including their applications and limitations in surgery. This paper reviews the key capabilities of AI to help surgeons understand and critically evaluate new AI applications and to contribute to new deve

515 citations


Journal ArticleDOI
TL;DR: This white paper on AI in radiology will inform CAR members and policymakers on key terminology, educational needs of members, research and development, partnerships, potential clinical applications, implementation, structure and governance, role of radiologists, and potential impact of AI on radiology in Canada.
Abstract: Artificial intelligence (AI) is rapidly moving from an experimental phase to an implementation phase in many fields, including medicine. The combination of improved availability of large datasets, increasing computing power, and advances in learning algorithms has created major performance breakthroughs in the development of AI applications. In the last 5 years, AI techniques known as deep learning have delivered rapidly improving performance in image recognition, caption generation, and speech recognition. Radiology, in particular, is a prime candidate for early adoption of these techniques. It is anticipated that the implementation of AI in radiology over the next decade will significantly improve the quality, value, and depth of radiology's contribution to patient care and population health, and will revolutionize radiologists' workflows. The Canadian Association of Radiologists (CAR) is the national voice of radiology committed to promoting the highest standards in patient-centered imaging, lifelong learning, and research. The CAR has created an AI working group with the mandate to discuss and deliberate on practice, policy, and patient care issues related to the introduction and implementation of AI in imaging. This white paper provides recommendations for the CAR derived from deliberations between members of the AI working group. This white paper on AI in radiology will inform CAR members and policymakers on key terminology, educational needs of members, research and development, partnerships, potential clinical applications, implementation, structure and governance, role of radiologists, and potential impact of AI on radiology in Canada.

305 citations


Journal ArticleDOI
TL;DR: A general overview of AI and how it can be used to improve health outcomes in resource-poor settings is provided and some of the current ethical debates around patient safety and privacy are described.
Abstract: The field of artificial intelligence (AI) has evolved considerably in the last 60 years. While there are now many AI applications that have been deployed in high-income country contexts, use in resource-poor settings remains relatively nascent. With a few notable exceptions, there are limited examples of AI being used in such settings. However, there are signs that this is changing. Several high-profile meetings have been convened in recent years to discuss the development and deployment of AI applications to reduce poverty and deliver a broad range of critical public services. We provide a general overview of AI and how it can be used to improve health outcomes in resource-poor settings. We also describe some of the current ethical debates around patient safety and privacy. Despite current challenges, AI holds tremendous promise for transforming the provision of healthcare services in resource-poor settings. Many health system hurdles in such settings could be overcome with the use of AI and other complementary emerging technologies. Further research and investments in the development of AI tools tailored to resource-poor settings will accelerate the realisation of the full potential of AI for improving global health.

293 citations


Proceedings Article
13 Feb 2018
TL;DR: Wang et al. developed a new method termed "WAGE" to discretize both training and inference, where weights, activations, gradients, and errors among layers are shifted and linearly constrained to low-bitwidth integers.
Abstract: Research on deep neural networks with discrete parameters and their deployment in embedded systems has been an active and promising topic. Although previous works have successfully reduced precision in inference, transferring both training and inference processes to low-bitwidth integers has not been demonstrated simultaneously. In this work, we develop a new method termed "WAGE" to discretize both training and inference, where weights (W), activations (A), gradients (G) and errors (E) among layers are shifted and linearly constrained to low-bitwidth integers. To perform pure discrete dataflow for fixed-point devices, we further replace batch normalization by a constant scaling layer and simplify other components that are arduous for integer implementation. Improved accuracies can be obtained on multiple datasets, which indicates that WAGE somehow acts as a type of regularization. Empirically, we demonstrate the potential to deploy training in hardware systems such as integer-based deep learning accelerators and neuromorphic chips with comparable accuracy and higher energy efficiency, which is crucial to future AI applications in variable scenarios with transfer and continual learning demands.

233 citations
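As a rough illustration of shifting and linearly constraining tensors to low-bitwidth integers, the NumPy sketch below applies a generic power-of-two shift-and-clip quantizer; it is not the paper's exact WAGE scheme, which prescribes specific per-layer scaling rules for W, A, G and E and removes batch normalization.

```python
import numpy as np

def quantize(x, bits=8):
    """Map a float tensor onto signed low-bitwidth integers (illustrative only)."""
    qmax = 2 ** (bits - 1) - 1                                   # e.g. 127 for 8 bits
    shift = 2.0 ** np.ceil(np.log2(np.max(np.abs(x)) + 1e-12))   # power-of-two scale
    q = np.clip(np.round(x / shift * qmax), -qmax, qmax)
    return q.astype(np.int32), shift

weights = np.random.randn(4, 4).astype(np.float32)
q_weights, scale = quantize(weights, bits=2)   # 2-bit weights take values in {-1, 0, 1}
print(q_weights)
```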


Journal ArticleDOI
Hani Hagras
TL;DR: The author introduces XAI concepts, and gives an overview of areas in need of further exploration—such as type-2 fuzzy logic systems—to ensure such systems can be fully understood and analyzed by the lay user.
Abstract: Recent increases in computing power, coupled with rapid growth in the availability and quantity of data have rekindled our interest in the theory and applications of artificial intelligence (AI). However, for AI to be confidently rolled out by industries and governments, users want greater transparency through explainable AI (XAI) systems. The author introduces XAI concepts, and gives an overview of areas in need of further exploration—such as type-2 fuzzy logic systems—to ensure such systems can be fully understood and analyzed by the lay user.

226 citations


Journal ArticleDOI
TL;DR: The legal framework regulating medical devices and data protection in Europe and in the United States is analyzed, assessing developments that are currently taking place and issues of accountability, both legal and ethical are considered.
Abstract: Worldwide interest in artificial intelligence (AI) applications is growing rapidly. In medicine, devices based on machine/deep learning have proliferated, especially for image analysis, presaging new significant challenges for the utility of AI in healthcare. This inevitably raises numerous legal and ethical questions. In this paper we analyse the state of AI regulation in the context of medical device development, and strategies to make AI applications safe and useful in the future. We analyse the legal framework regulating medical devices and data protection in Europe and in the United States, assessing developments that are currently taking place. The European Union (EU) is reforming these fields with new legislation (General Data Protection Regulation [GDPR], Cybersecurity Directive, Medical Devices Regulation, In Vitro Diagnostic Medical Device Regulation). This reform is gradual, but it has now made its first impact, with the GDPR and the Cybersecurity Directive having taken effect in May, 2018. As regards the United States (U.S.), the regulatory scene is predominantly controlled by the Food and Drug Administration. This paper considers issues of accountability, both legal and ethical. The processes of medical device decision-making are largely unpredictable, therefore holding the creators accountable for it clearly raises concerns. There is a lot that can be done in order to regulate AI applications. If this is done properly and timely, the potentiality of AI based technology, in radiology as well as in other fields, will be invaluable. TEACHING POINTS: • AI applications are medical devices supporting detection/diagnosis, work-flow, cost-effectiveness. • Regulations for safety, privacy protection, and ethical use of sensitive information are needed. • EU and U.S. have different approaches for approving and regulating new medical devices. • EU laws consider cyberattacks, incidents (notification and minimisation), and service continuity. • U.S. laws ask for opt-in data processing and use as well as for clear consumer consent.

222 citations


Journal ArticleDOI
14 Mar 2018
TL;DR: This work believes that AI can eliminate many repetitive tasks to clear the way for human-to-human bonding and the application of emotional intelligence and judgment in healthcare.
Abstract: Artificial intelligence (AI) has recently surpassed human performance in several domains, and there is great hope that in healthcare, AI may allow for better prevention, detection, diagnosis, and treatment of disease. While many fear that AI will disrupt jobs and the physician–patient relationship, we believe that AI can eliminate many repetitive tasks to clear the way for human-to-human bonding and the application of emotional intelligence and judgment. We review several recent studies of AI applications in healthcare that provide a view of a future where healthcare delivery is a more unified, human experience.

222 citations


Posted Content
TL;DR: This work reduces the computational cost of generative replay by integrating the generative model into the main model, equipping it with generative feedback or backward connections; the authors believe this to be an important first step towards making the powerful technique of generative replay scalable to real-world continual learning applications.
Abstract: A major obstacle to developing artificial intelligence applications capable of true lifelong learning is that artificial neural networks quickly or catastrophically forget previously learned tasks when trained on a new one. Numerous methods for alleviating catastrophic forgetting are currently being proposed, but differences in evaluation protocols make it difficult to directly compare their performance. To enable more meaningful comparisons, here we identified three distinct scenarios for continual learning based on whether task identity is known and, if it is not, whether it needs to be inferred. Performing the split and permuted MNIST task protocols according to each of these scenarios, we found that regularization-based approaches (e.g., elastic weight consolidation) failed when task identity needed to be inferred. In contrast, generative replay combined with distillation (i.e., using class probabilities as "soft targets") achieved superior performance in all three scenarios. Addressing the issue of efficiency, we reduced the computational cost of generative replay by integrating the generative model into the main model by equipping it with generative feedback or backward connections. This Replay-through-Feedback approach substantially shortened training time with no or negligible loss in performance. We believe this to be an important first step towards making the powerful technique of generative replay scalable to real-world continual learning applications.
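To make the "soft targets" idea concrete, here is a hedged PyTorch-style sketch of a combined objective for replay with distillation: a hard-label loss on current-task data plus a distillation loss on generated replay samples scored by the previous model. The loss weighting, temperature, and the Replay-through-Feedback architecture in the paper may differ from this sketch.

```python
import torch.nn.functional as F

def replay_distillation_loss(model, new_x, new_y, replay_x, replay_soft_targets, T=2.0):
    # Hard-label loss on the current task's data.
    current_loss = F.cross_entropy(model(new_x), new_y)
    # Distillation loss on replayed samples: match the previous model's
    # class probabilities ("soft targets") at temperature T.
    log_probs = F.log_softmax(model(replay_x) / T, dim=1)
    replay_loss = F.kl_div(log_probs, replay_soft_targets, reduction="batchmean") * T * T
    return current_loss + replay_loss
```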

Journal ArticleDOI
TL;DR: The computational complexity of LARS formulas and programs, their relationship to Linear Temporal Logic (LTL) and the well-known Continuous Query Language (CQL), and the potential of LARS as a stream reasoning language itself are studied.

Posted Content
TL;DR: This document starts with a discussion of model-based reasoning and explains why conditioning as a foundational computation is central to the fields of probabilistic machine learning and artificial intelligence, and introduces a simple first-order probabilistic programming language whose programs define static-computation-graph, finite-variable-cardinality models.
Abstract: This document is designed to be a first-year graduate-level introduction to probabilistic programming. It not only provides a thorough background for anyone wishing to use a probabilistic programming system, but also introduces the techniques needed to design and build these systems. It is aimed at people who have an undergraduate-level understanding of either or, ideally, both probabilistic machine learning and programming languages. We start with a discussion of model-based reasoning and explain why conditioning as a foundational computation is central to the fields of probabilistic machine learning and artificial intelligence. We then introduce a simple first-order probabilistic programming language (PPL) whose programs define static-computation-graph, finite-variable-cardinality models. In the context of this restricted PPL we introduce fundamental inference algorithms and describe how they can be implemented in the context of models denoted by probabilistic programs. In the second part of this document, we introduce a higher-order probabilistic programming language, with a functionality analogous to that of established programming languages. This affords the opportunity to define models with dynamic computation graphs, at the cost of requiring inference methods that generate samples by repeatedly executing the program. Foundational inference algorithms for this kind of probabilistic programming language are explained in the context of an interface between program executions and an inference controller. This document closes with a chapter on advanced topics which we believe to be, at the time of writing, interesting directions for probabilistic programming research; directions that point towards a tight integration with deep neural network research and the development of systems for next-generation artificial intelligence applications.
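As a toy illustration of "conditioning as a foundational computation", the sketch below performs inference by likelihood weighting (a basic importance sampler) on a one-variable Gaussian model; the model and numbers are invented for the example and are not taken from the document.

```python
import numpy as np

rng = np.random.default_rng(0)

def posterior_mean_mu(observed_y, n_samples=200_000):
    """Toy model: mu ~ Normal(0, 1); y ~ Normal(mu, 0.5). Condition on observing y."""
    mu = rng.normal(0.0, 1.0, size=n_samples)                # sample mu from the prior
    weights = np.exp(-0.5 * ((observed_y - mu) / 0.5) ** 2)  # likelihood of the observation
    return np.sum(weights * mu) / np.sum(weights)            # weighted posterior expectation

print(posterior_mean_mu(1.0))   # close to the analytic posterior mean of 0.8
```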

Journal ArticleDOI
TL;DR: In this article, the authors review the applications of artificial intelligence (AI) in the hiring process and its practical implications, and highlight the strategic shift in the recruitment industry caused by the adoption of AI in the recruitment process.
Abstract: This paper aims to review the applications of artificial intelligence (AI) in the hiring process and its practical implications, and highlights the strategic shift in the recruitment industry caused by the adoption of AI in the recruitment process. The paper is prepared by independent academicians who have synthesized their views through a review of the latest reports, articles, research papers and other relevant literature. It describes the impact of developments in the field of AI on the hiring process and the recruitment industry; the application of AI for managing the recruitment process is leading to efficiency as well as qualitative gains for both clients and candidates. The paper offers strategic insights into automation of the recruitment process, presents practical ideas for implementation of AI in the recruitment industry, and discusses the strategic implications of the usage of AI in the recruitment industry. Finally, it describes the role of technological advancements in AI and their application in creating value for the recruitment industry as well as its clients, saving the valuable reading time of practitioners and researchers by highlighting AI applications in the recruitment industry in a concise and simple format.

Journal ArticleDOI
TL;DR: Future work in the field should concentrate on creating seamless integration of AI systems with current endoscopy platforms and electronic medical records, developing training modules to teach clinicians how to use AI tools, and determining the best means for regulation and approval of new AI technology.
Abstract: Artificial intelligence (AI) enables machines to provide unparalleled value in a myriad of industries and applications. In recent years, researchers have harnessed artificial intelligence to analyze large-volume, unstructured medical data and perform clinical tasks, such as the identification of diabetic retinopathy or the diagnosis of cutaneous malignancies. Applications of artificial intelligence techniques, specifically machine learning and more recently deep learning, are beginning to emerge in gastrointestinal endoscopy. The most promising of these efforts have been in computer-aided detection and computer-aided diagnosis of colorectal polyps, with recent systems demonstrating high sensitivity and accuracy even when compared to expert human endoscopists. AI has also been utilized to identify gastrointestinal bleeding, to detect areas of inflammation, and even to diagnose certain gastrointestinal infections. Future work in the field should concentrate on creating seamless integration of AI systems with current endoscopy platforms and electronic medical records, developing training modules to teach clinicians how to use AI tools, and determining the best means for regulation and approval of new AI technology.

Journal ArticleDOI
TL;DR: The necessity and difficulty of describing tasks for intelligence tests, checking all the tasks that may be encountered in an intelligence test, designing simulation-based tests, and setting appropriate test performance evaluation indices are explained.
Abstract: To meet the urgent requirement of reliable artificial intelligence applications, we discuss the tight link between artificial intelligence and intelligence testing in this paper. We highlight the role of tasks in intelligence testing for all kinds of artificial intelligence. We explain the necessity and difficulty of describing tasks for an intelligence test, checking all the tasks that may be encountered in an intelligence test, designing simulation-based tests, and setting appropriate test performance evaluation indices. As an example, we present how to design a reliable intelligence test for intelligent vehicles. Finally, we discuss future research directions for intelligence testing.

01 Jan 2018
TL;DR: This paper discusses possible ways to avoid being forced to build systems based on many ad-hoc rules when it comes to moral decision making for AI applications with a moral component.
Abstract: The generality of decision and game theory has enabled domain-independent progress in AI research. For example, a better algorithm for finding good policies in (PO)MDPs can be instantly used in a variety of applications. But such a general theory is lacking when it comes to moral decision making. For AI applications with a moral component, are we then forced to build systems based on many ad-hoc rules? In this paper we discuss possible ways to avoid this conclusion.

Journal ArticleDOI
TL;DR: A review of existing artificial intelligence techniques and their applications in drilling fluid engineering is given. ANN was found to meet all the listed criteria except for its slow speed of convergence, while ANN, GA, SVM and fuzzy logic were all found to be robust against noise.

Posted Content
TL;DR: This article shows how the concepts of anti-discrimination law may be combined with algorithmic audits and data protection impact assessments in an effort to unlock the algorithmic black box.
Abstract: Empirical evidence is mounting that artificial intelligence applications driven by machine learning threaten to discriminate against legally protected groups. As ever more decisions are subjected to algorithmic processes, discrimination by algorithms is rightly recognized by policymakers around the world as a key challenge for contemporary societies. This article suggests that algorithmic bias raises intricate questions for EU law. The existing categories of EU anti-discrimination law do not provide an easy fit for algorithmic decision making, and the statistical basis of machine learning generally offers companies a fast-track to justification. Furthermore, victims won’t be able to prove their case without access to the data and the algorithms, which they generally lack. To remedy these problems, this article suggests an integrated vision of anti-discrimination and data protection law to enforce fairness in the digital age. More precisely, it shows how the concepts of anti-discrimination law may be combined with the enforcement tools of the GDPR to unlock the algorithmic black box. In doing so, the law should harness a growing literature in computer science on algorithmic fairness that seeks to ensure equal protection at the data and code level. The interplay of anti-discrimination law, data protection law and algorithmic fairness therefore facilitates “equal protection by design”. In the end, however, recourse to technology does not prevent the law from making hard normative choices about the implementation of formal or substantive concepts of equality. Understood in this way, the deployment of artificial intelligence not only raises novel risks, but also harbors novel opportunities for consciously designing fair market exchange.

Journal ArticleDOI
Wei Lu, Yan Tong, Yue Yu, Yiqiao Xing, Changzheng Chen, Yin Shen
TL;DR: The basic workflow for building an AI model is presented and applications of AI in the diagnosis of eye diseases are reviewed; future work should focus on setting up systematic AI platforms to diagnose general eye diseases based on multimodal data in the real world.
Abstract: With the emergence of unmanned planes, autonomous vehicles, face recognition, and language processing, artificial intelligence (AI) has remarkably revolutionized our lifestyle. Recent studies indicate that AI has astounding potential to perform much better than human beings in some tasks, especially in the image recognition field. As the amount of image data in ophthalmology imaging centers is increasing dramatically, there is an urgent need to analyze and process these data. AI has been applied to decipher medical data and has made extraordinary progress in intelligent diagnosis. In this paper, we present the basic workflow for building an AI model and systematically review applications of AI in the diagnosis of eye diseases. Future work should focus on setting up systematic AI platforms to diagnose general eye diseases based on multimodal data in the real world.
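The "basic workflow" mentioned above generally consists of splitting labeled data, fitting a model, and evaluating it on held-out cases. The sketch below shows that skeleton with scikit-learn on placeholder features and labels; it is a generic illustration under those assumptions, not the paper's actual model or data.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Placeholder: image-derived feature vectors and binary disease labels.
X = np.random.rand(500, 128)
y = np.random.randint(0, 2, 500)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
print("AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
```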

MonographDOI
TL;DR: This paper argues in favour of learning from successful Internet companies, opening access to data and developing interactivity with the users rather than just broadcasting data, and identifies a window of opportunity for Europe to invest in the emerging new paradigm of computing distributed towards the edges of the network.
Abstract: We are only at the beginning of a rapid period of transformation of our economy and society due to the convergence of many digital technologies. Artificial Intelligence (AI) is central to this change and offers major opportunities to improve our lives. The recent developments in AI are the result of increased processing power, improvements in algorithms and the exponential growth in the volume and variety of digital data. Many applications of AI have started entering into our every-day lives, from machine translations, to image recognition, and music generation, and are increasingly deployed in industry, government, and commerce. Connected and autonomous vehicles, and AI-supported medical diagnostics are areas of application that will soon be commonplace. There is strong global competition on AI among the US, China, and Europe. The US leads for now but China is catching up fast and aims to lead by 2030. For the EU, it is not so much a question of winning or losing a race but of finding the way of embracing the opportunities offered by AI in a way that is human-centred, ethical, secure, and true to our core values. The EU Member States and the European Commission are developing coordinated national and European strategies, recognising that only together we can succeed. We can build on our areas of strength including excellent research, leadership in some industrial sectors like automotive and robotics, a solid legal and regulatory framework, and very rich cultural diversity also at regional and sub-regional levels. It is generally recognised that AI can flourish only if supported by a robust computing infrastructure and good quality data: • With respect to computing, we identified a window of opportunity for Europe to invest in the emerging new paradigm of computing distributed towards the edges of the network, in addition to centralised facilities. This will support also the future deployment of 5G and the Internet of Things. • With respect to data, we argue in favour of learning from successful Internet companies, opening access to data and developing interactivity with the users rather than just broadcasting data. In this way, we can develop ecosystems of public administrations, firms, and civil society enriching the data to make it fit for AI applications responding to European needs. We should embrace the opportunities afforded by AI but not uncritically. The black box characteristics of most leading AI techniques make them opaque even to specialists. AI systems are currently limited to narrow and well-defined tasks, and their technologies inherit imperfections from their human creators, such as the well-recognised bias effect present in data. We should challenge the shortcomings of AI and work towards strong evaluation strategies, transparent and reliable systems, and good human-AI interactions. Ethical and secure-by-design algorithms are crucial to build trust in this disruptive technology, but we also need a broader engagement of civil society on the values to be embedded in AI and the directions for future development. This social engagement should be part of the effort to strengthen our resilience at all levels from local, to national and European, across institutions, industry and civil society. Developing local ecosystems of skills, computing, data, and applications can foster the engagement of local communities, respond to their needs, harness local creativity and knowledge, and build a human-centred, diverse, and socially driven AI. 
We still know very little about how AI will impact the way we think, make decisions, relate to each other, and how it will affect our jobs. This uncertainty can be a source of concern but is also a sign of opportunity. The future is not yet written. We can shape it based on our collective vision of what future we would like to have. But we need to act together and act fast.

Journal ArticleDOI
TL;DR: Zhang et al. proposed a wearable affective robot that integrates the affective robot, social robot, brain wearable, and Wearable 2.0 devices, and can improve human health on the spirit level while meeting fashion requirements at the same time.
Abstract: With the development of artificial intelligence (AI), AI applications have greatly influenced and changed people's daily life. Here, a wearable affective robot that integrates the affective robot, social robot, brain wearable, and Wearable 2.0 is proposed for the first time. The proposed wearable affective robot is intended for a wide population, and we believe that it can improve human health on the spirit level while meeting fashion requirements at the same time. In this paper, the architecture and design of an innovative wearable affective robot, dubbed Fitbot, are introduced from hardware and algorithmic perspectives. In addition, the robot's key functional component, the brain wearable device, is introduced in terms of hardware design, EEG data acquisition and analysis, user behavior perception, and algorithm deployment. Then, EEG-based cognition of the user's behavior is realized. Through the continuous acquisition of in-depth, in-breadth data, the Fitbot we present can gradually enrich the user's life modeling and enable the wearable robot to recognize the user's intention and further understand the behavioral motivation behind the user's emotion. The learning algorithm for the life modeling embedded in Fitbot can achieve a better user experience of affective social interaction. Finally, the application service scenarios and some challenging issues of a wearable affective robot are discussed.
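For the EEG analysis step mentioned above, a common (though here only assumed) approach is to extract band-power features from the raw signal and feed them to a classifier of the user's state. The sketch below computes theta/alpha/beta band power with a Welch spectrum; the sampling rate and signal are placeholders, and the paper's actual feature set and learning algorithm may differ.

```python
import numpy as np
from scipy.signal import welch

def bandpower(eeg, fs, band):
    """Average spectral power of one EEG channel within a frequency band."""
    freqs, psd = welch(eeg, fs=fs, nperseg=fs * 2)
    lo, hi = band
    return psd[(freqs >= lo) & (freqs <= hi)].mean()

fs = 256                                    # assumed sampling rate in Hz
eeg = np.random.randn(fs * 10)              # placeholder 10-second recording
features = [bandpower(eeg, fs, b) for b in [(4, 8), (8, 13), (13, 30)]]  # theta, alpha, beta
print(features)   # such features would feed a classifier of user behavior or emotion
```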

Posted Content
TL;DR: Empirically, this work demonstrates the potential to deploy training in hardware systems such as integer-based deep learning accelerators and neuromorphic chips with comparable accuracy and higher energy efficiency, which is crucial to future AI applications in variable scenarios with transfer and continual learning demands.
Abstract: Research on deep neural networks with discrete parameters and their deployment in embedded systems has been an active and promising topic. Although previous works have successfully reduced precision in inference, transferring both training and inference processes to low-bitwidth integers has not been demonstrated simultaneously. In this work, we develop a new method termed "WAGE" to discretize both training and inference, where weights (W), activations (A), gradients (G) and errors (E) among layers are shifted and linearly constrained to low-bitwidth integers. To perform pure discrete dataflow for fixed-point devices, we further replace batch normalization by a constant scaling layer and simplify other components that are arduous for integer implementation. Improved accuracies can be obtained on multiple datasets, which indicates that WAGE somehow acts as a type of regularization. Empirically, we demonstrate the potential to deploy training in hardware systems such as integer-based deep learning accelerators and neuromorphic chips with comparable accuracy and higher energy efficiency, which is crucial to future AI applications in variable scenarios with transfer and continual learning demands.

Journal ArticleDOI
TL;DR: Different applications of artificial intelligence technologies in several domains of business administration, including finance, the retail industry, the manufacturing industry, and enterprise management, are summarized, and it is concluded that the rapid development of artificial intelligence will have a great impact on more fields.

Journal ArticleDOI
TL;DR: An artificial intelligence atomic force microscope (AI-AFM) is demonstrated that is capable not only of pattern recognition and feature identification in ferroelectric materials and electrochemical systems, but also of responding to classification via adaptive experimentation with additional probing at critical domain walls and grain boundaries, all in real time, on the fly, without human interference.
Abstract: Artificial intelligence (AI) and machine learning have promised to revolutionize the way we live and work, and one of the particularly promising areas for AI is image analysis. Nevertheless, many current AI applications focus on the post-processing of data, while in both materials sciences and medicine, it is often critical to respond to the data acquired on the fly. Here we demonstrate an artificial intelligence atomic force microscope (AI-AFM) that is capable of not only pattern recognition and feature identification in ferroelectric materials and electrochemical systems, but can also respond to classification via adaptive experimentation with additional probing at critical domain walls and grain boundaries, all in real time on the fly without human interference. Key to our success is a highly efficient machine learning strategy based on a support vector machine (SVM) algorithm capable of high fidelity pixel-by-pixel recognition instead of relying on the data from full mapping, making real time classification and control possible during scanning, with which complex electromechanical couplings at the nanoscale in different material systems can be resolved by AI. For AFM experiments that are often tedious, elusive, and heavily rely on human insight for execution and analysis, this is a major disruption in methodology, and we believe that such a strategy empowered by machine learning is applicable to a wide range of instrumentations and broader physical machineries.
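A minimal sketch of the pixel-by-pixel SVM classification described in the abstract, using scikit-learn; the per-pixel response features and labels here are invented placeholders rather than the authors' AFM signals.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 6))           # per-pixel response features (placeholder)
y_train = (X_train[:, 0] > 0).astype(int)     # placeholder labels, e.g. domain wall vs. bulk

clf = SVC(kernel="rbf").fit(X_train, y_train)

# During scanning, each newly acquired pixel is classified on the fly; pixels of
# interest can then trigger adaptive follow-up probing at that location.
new_pixel = rng.normal(size=(1, 6))
if clf.predict(new_pixel)[0] == 1:
    print("flag pixel for additional probing")
```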

Journal ArticleDOI
TL;DR: The feasibility of using AI to extract valuable information during a humanitarian health crisis has been demonstrated in many cases, but there is a lack of research on how to integrate the use of AI into the workflow and large-scale deployments of humanitarian aid during a health crisis.

Proceedings ArticleDOI
27 Jun 2018
TL;DR: In this article, the authors compared the performance of the Deep Neural Network (DNN) with the classical Random Forest (RF) machine learning algorithm for malware classification, and found that the classical RF accuracy outperformed the DNN accuracy.
Abstract: Recently, deep learning has been showing promising results in various artificial intelligence applications like image recognition, natural language processing, language modeling, neural machine translation, etc. Although it is, in general, computationally more expensive than classical machine learning techniques, its results are found to be more effective in some cases. Therefore, in this paper, we investigated and compared one of the deep learning architectures, the Deep Neural Network (DNN), with the classical Random Forest (RF) machine learning algorithm for malware classification. We studied the performance of the classical RF and of DNNs with 2-, 4- and 7-layer architectures on four different feature sets, and found that, irrespective of the feature inputs, the classical RF outperforms the DNN in accuracy.
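A hedged sketch of the RF-versus-DNN comparison described above, using scikit-learn stand-ins (RandomForestClassifier and a small MLPClassifier); the feature matrix and labels are random placeholders, not the paper's malware feature sets.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X = np.random.rand(1000, 64)                  # placeholder malware feature vectors
y = np.random.randint(0, 2, 1000)             # placeholder benign/malicious labels
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
dnn = MLPClassifier(hidden_layer_sizes=(64, 64, 64, 64), max_iter=300,
                    random_state=0).fit(X_tr, y_tr)   # a small 4-hidden-layer network

print("RF accuracy: ", accuracy_score(y_te, rf.predict(X_te)))
print("DNN accuracy:", accuracy_score(y_te, dnn.predict(X_te)))
```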

Journal ArticleDOI
TL;DR: The history of developing AI for reservoir inflow forecasting and for prediction of evaporation from a reservoir, as the major components of reservoir simulation, is explored, and a new mathematical procedure to accomplish a realistic evaluation of overall optimization model performance (reliability, resilience, and vulnerability indices) is recommended.
Abstract: Efficacious operation of dam and reservoir systems can guarantee not only a defence policy against natural hazards but also rules to meet water demand. Successful operation of dam and reservoir systems to ensure optimal use of water resources could be unattainable without accurate and reliable simulation models. Owing to the highly stochastic nature of hydrologic parameters, developing accurate predictive models that efficiently mimic such complex patterns is a growing domain of research. During the last two decades, artificial intelligence (AI) techniques have been significantly utilized for attaining robust modeling of different stochastic hydrological parameters. AI techniques have also shown considerable progress in finding optimal rules for reservoir operation. This review explores the history of developing AI for reservoir inflow forecasting and for prediction of evaporation from a reservoir, the major components of reservoir simulation. In addition, a critical assessment of the advantages and disadvantages of AI simulation methods integrated with optimization methods is reported. Future research on the potential of utilizing new innovative AI-based methods for reservoir simulation and optimization models is also discussed. Finally, a new mathematical procedure to accomplish a realistic evaluation of overall optimization model performance (reliability, resilience, and vulnerability indices) is recommended.
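The reliability, resilience, and vulnerability indices mentioned at the end of the abstract are commonly computed from a supply/demand deficit time series, in the spirit of the classic Hashimoto-style definitions; the sketch below uses those common definitions for illustration and may differ from the specific procedure the authors propose.

```python
import numpy as np

def rrv_indices(supply, demand):
    """Reliability, resilience and vulnerability of a simulated reservoir operation."""
    supply = np.asarray(supply, dtype=float)
    demand = np.asarray(demand, dtype=float)
    deficit = np.maximum(demand - supply, 0.0)
    failure = deficit > 0

    reliability = 1.0 - failure.mean()                    # share of periods where demand is met
    recoveries = np.sum(failure[:-1] & ~failure[1:])      # failure -> success transitions
    resilience = recoveries / failure.sum() if failure.any() else 1.0
    vulnerability = deficit[failure].mean() if failure.any() else 0.0
    return reliability, resilience, vulnerability

print(rrv_indices(supply=[9, 7, 10, 6, 10], demand=[8, 8, 8, 8, 8]))  # (0.6, 1.0, 1.5)
```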

Journal ArticleDOI
TL;DR: This review proposes a comprehensive framework for addressing the challenge of characterising novel complex threats and relevant counter-measures, covering both intrusion detection, which is typically performed online, and security investigation, which is performed offline.
Abstract: Behind firewalls, more and more cybersecurity attacks are specifically targeted to the very network where they are taking place. This review proposes a comprehensive framework for addressing the challenge of characterising novel complex threats and relevant counter-measures. Two kinds of attacks are particularly representative of this issue: zero-day attacks that are not publicly disclosed and multi-step attacks that are built of several individual steps, some malicious and some benign. Two main approaches are developed in the artificial intelligence field to track these attacks: statistics and machine learning. Statistical approaches include rule-based and outlier-detection-based solutions. Machine learning includes the detection of behavioural anomalies and event sequence tracking. Applications of artificial intelligence cover the field of intrusion detection, which is typically performed online, and security investigation, performed offline.
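As a small sketch of the outlier-detection family of approaches the review surveys, the example below fits scikit-learn's IsolationForest to placeholder network-event features (bytes transferred, connection duration, distinct ports contacted) and flags an event that deviates from the learned baseline; the features and contamination setting are assumptions for illustration, not taken from the review.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Placeholder baseline traffic: [bytes transferred, duration (s), distinct ports].
normal_traffic = rng.normal(loc=[500.0, 2.0, 3.0], scale=[100.0, 0.5, 1.0], size=(1000, 3))
suspect_event = np.array([[50_000.0, 30.0, 120.0]])       # an event unlike the baseline

detector = IsolationForest(contamination=0.01, random_state=0).fit(normal_traffic)
print(detector.predict(suspect_event))                    # -1 marks the event as anomalous
```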