
Showing papers on "Architecture published in 2019"


Proceedings ArticleDOI
16 Apr 2019
TL;DR: Neural Architecture Search is adopted to discover a new feature pyramid architecture in a novel scalable search space covering all cross-scale connections; the discovered architecture, named NAS-FPN, achieves a better accuracy and latency tradeoff than state-of-the-art object detection models.
Abstract: Current state-of-the-art convolutional architectures for object detection are manually designed. Here we aim to learn a better architecture of feature pyramid network for object detection. We adopt Neural Architecture Search and discover a new feature pyramid architecture in a novel scalable search space covering all cross-scale connections. The discovered architecture, named NAS-FPN, consists of a combination of top-down and bottom-up connections to fuse features across scales. NAS-FPN, combined with various backbone models in the RetinaNet framework, achieves better accuracy and latency tradeoff compared to state-of-the-art object detection models. NAS-FPN improves mobile detection accuracy by 2 AP compared to state-of-the-art SSDLite with MobileNetV2 model in [32] and achieves 48.3 AP which surpasses Mask R-CNN [10] detection accuracy with less computation time.
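The merging pattern described in the abstract can be sketched in miniature. The toy below is not the paper's implementation (1-D lists stand in for feature maps, and the only merge is nearest-neighbour resize plus elementwise sum); it only illustrates how one top-down and one bottom-up connection fuse features across scales:

```python
def resize(feat, target_len):
    """Nearest-neighbour resize of a 1-D 'feature map' to target_len."""
    n = len(feat)
    return [feat[min(i * n // target_len, n - 1)] for i in range(target_len)]

def merge_cell(a, b, target_len):
    """Binary merge: resize both inputs to the target scale, then sum."""
    ra, rb = resize(a, target_len), resize(b, target_len)
    return [x + y for x, y in zip(ra, rb)]

# Toy pyramid: three scales, coarse (p5) to fine (p3).
p3, p4, p5 = [1.0] * 8, [2.0] * 4, [3.0] * 2

# One top-down connection (P5 -> P4) and one bottom-up (P3 -> P4).
td = merge_cell(p5, p4, 4)
bu = merge_cell(p3, td, 4)
```

A searched FPN is then a sequence of such binary merges whose input scales and output scales are the search space.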

900 citations


Proceedings ArticleDOI
15 Jun 2019
TL;DR: The authors propose searching the network-level structure in addition to the cell-level structure, forming a hierarchical architecture search space; the resulting architecture attains state-of-the-art performance without any ImageNet pretraining.
Abstract: Recently, Neural Architecture Search (NAS) has successfully identified neural network architectures that exceed human designed ones on large-scale image classification. In this paper, we study NAS for semantic image segmentation. Existing works often focus on searching the repeatable cell structure, while hand-designing the outer network structure that controls the spatial resolution changes. This choice simplifies the search space, but becomes increasingly problematic for dense image prediction which exhibits a lot more network level architectural variations. Therefore, we propose to search the network level structure in addition to the cell level structure, which forms a hierarchical architecture search space. We present a network level search space that includes many popular designs, and develop a formulation that allows efficient gradient-based architecture search (3 P100 GPU days on Cityscapes images). We demonstrate the effectiveness of the proposed method on the challenging Cityscapes, PASCAL VOC 2012, and ADE20K datasets. Auto-DeepLab, our architecture searched specifically for semantic image segmentation, attains state-of-the-art performance without any ImageNet pretraining.
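The gradient-based search mentioned above rests on a continuous relaxation of the architecture choice, which DARTS introduced at the cell level and Auto-DeepLab extends to the network level. A minimal sketch of the relaxation, with scalar stand-ins for feature maps and illustrative candidate operations:

```python
import math

def softmax(xs):
    m = max(xs)
    e = [math.exp(x - m) for x in xs]
    s = sum(e)
    return [v / s for v in e]

def mixed_op(x, alphas, ops):
    """Continuous relaxation: a softmax-weighted sum of candidate ops,
    so the architecture parameters `alphas` are differentiable."""
    w = softmax(alphas)
    return sum(wi * op(x) for wi, op in zip(w, ops))

ops = [lambda x: x,        # identity / skip connection
       lambda x: 2 * x,    # stand-in for a convolution
       lambda x: 0.0]      # zero op

y = mixed_op(1.0, [0.0, 0.0, 0.0], ops)  # equal alphas -> (1 + 2 + 0) / 3
```

After search, the relaxation is discretized by keeping the highest-weighted op on each edge.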

863 citations


Posted Content
TL;DR: NAS-FPN consists of a combination of top-down and bottom-up connections to fuse features across scales and achieves a better accuracy and latency tradeoff than state-of-the-art object detection models.
Abstract: Current state-of-the-art convolutional architectures for object detection are manually designed. Here we aim to learn a better architecture of feature pyramid network for object detection. We adopt Neural Architecture Search and discover a new feature pyramid architecture in a novel scalable search space covering all cross-scale connections. The discovered architecture, named NAS-FPN, consists of a combination of top-down and bottom-up connections to fuse features across scales. NAS-FPN, combined with various backbone models in the RetinaNet framework, achieves better accuracy and latency tradeoff compared to state-of-the-art object detection models. NAS-FPN improves mobile detection accuracy by 2 AP compared to state-of-the-art SSDLite with MobileNetV2 model in [32] and achieves 48.3 AP which surpasses Mask R-CNN [10] detection accuracy with less computation time.

666 citations


Posted Content
TL;DR: This paper presents an efficient algorithm that allows the depth of searched architectures to grow gradually during training; the two resulting issues, heavier computational overheads and weaker search stability, are solved using search space approximation and regularization, respectively.
Abstract: Recently, differentiable search methods have made major progress in reducing the computational costs of neural architecture search. However, these approaches often report lower accuracy in evaluating the searched architecture or transferring it to another dataset. This is arguably due to the large gap between the architecture depths in search and evaluation scenarios. In this paper, we present an efficient algorithm which allows the depth of searched architectures to grow gradually during the training procedure. This brings two issues, namely, heavier computational overheads and weaker search stability, which we solve using search space approximation and regularization, respectively. With a significantly reduced search time (~7 hours on a single GPU), our approach achieves state-of-the-art performance on both the proxy dataset (CIFAR10 or CIFAR100) and the target dataset (ImageNet). Code is available at this https URL.
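The search-space approximation can be pictured as a simple pruning rule: as the searched network deepens across stages, only the highest-weighted candidate operations on each edge survive into the next stage. The operation names, weights, and stage schedule below are hypothetical:

```python
def top_k_ops(op_weights, k):
    """Search-space approximation: keep the k highest-weighted candidates."""
    ranked = sorted(op_weights, key=op_weights.get, reverse=True)
    return ranked[:k]

# Hypothetical architecture weights for one edge after a search stage.
weights = {"skip": 0.40, "sep_conv_3x3": 0.35, "max_pool": 0.15, "zero": 0.10}

stage_depths = [5, 11, 17]   # the searched network deepens across stages...
stage_k      = [4, 3, 2]     # ...while the candidate set per edge shrinks

kept = top_k_ops(weights, stage_k[-1])
```

Shrinking the candidate set is what keeps memory and compute roughly constant as depth grows.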

370 citations


Posted Content
TL;DR: This paper proves that the biased evaluation of candidate models within a predefined search space is due to inherent unfairness in the supernet training, and proposes two levels of constraints: expectation fairness and strict fairness.
Abstract: One of the most critical problems in weight-sharing neural architecture search is the evaluation of candidate models within a predefined search space. In practice, a one-shot supernet is trained to serve as an evaluator. A faithful ranking certainly leads to more accurate searching results. However, current methods are prone to making misjudgments. In this paper, we prove that their biased evaluation is due to inherent unfairness in the supernet training. In view of this, we propose two levels of constraints: expectation fairness and strict fairness. Particularly, strict fairness ensures equal optimization opportunities for all choice blocks throughout the training, which neither overestimates nor underestimates their capacity. We demonstrate that this is crucial for improving the confidence of models' ranking. Incorporating the one-shot supernet trained under the proposed fairness constraints with a multi-objective evolutionary search algorithm, we obtain various state-of-the-art models, e.g., FairNAS-A attains 77.5% top-1 validation accuracy on ImageNet. The models and their evaluation codes are made publicly available online this http URL .

304 citations


Journal ArticleDOI
TL;DR: The Blockchain architecture, an emerging scheme for constructing distributed networks, is introduced to reshape the traditional IIoT architecture into a new multicenter, partially decentralized architecture that provides better security and privacy protection than the traditional one.
Abstract: Through the Industrial Internet of Things (IIoT), a smart factory has entered the booming period. However, as the number of nodes and network size become larger, the traditional IIoT architecture can no longer provide effective support for such enormous system. Therefore, we introduce the Blockchain architecture, which is an emerging scheme for constructing the distributed networks, to reshape the traditional IIoT architecture. First, the major problems of the traditional IIoT architecture are analyzed, and the existing improvements are summarized. Second, we introduce a security and privacy model to help design the Blockchain-based architecture. On this basis, we decompose and reorganize the original IIoT architecture to form a new multicenter partially decentralized architecture. Then, we introduce some relative security technologies to improve and optimize the new architecture. After that we design the data interaction process and the algorithms of the architecture. Finally, we use an automatic production platform to discuss the specific implementation. The experimental results show that the proposed architecture provides better security and privacy protection than the traditional architecture. Thus, the proposed architecture represents a significant improvement of the original architecture, which provides a new direction for the IIoT development.
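The core primitive such an architecture builds on is the hash-chained block. The sketch below shows only that primitive, with hypothetical IIoT sensor payloads; the paper's actual multicenter architecture, consensus, and privacy mechanisms are not modeled:

```python
import hashlib
import json

def make_block(data, prev_hash):
    """Minimal block: a payload plus a hash link to its predecessor."""
    block = {"data": data, "prev": prev_hash, "ts": 0}  # fixed ts for determinism
    serialized = json.dumps(block, sort_keys=True).encode()
    block["hash"] = hashlib.sha256(serialized).hexdigest()
    return block

genesis = make_block({"device": "plc-01", "reading": 42}, prev_hash="0" * 64)
nxt = make_block({"device": "plc-02", "reading": 17}, prev_hash=genesis["hash"])

# Each block commits to its predecessor, so tampering with an earlier
# block invalidates every later hash link.
assert nxt["prev"] == genesis["hash"]
```

This chaining is what lets the decentralized nodes detect modification of recorded production data without a central authority.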

242 citations


Journal ArticleDOI
TL;DR: An end-to-end deep learning architecture, termed Deep Passenger Flow (DeepPF), forecasts metro inbound/outbound passenger flow and achieves high prediction accuracy owing to the ease of integrating multi-source data.
Abstract: This study aims to combine the modeling skills of deep learning and the domain knowledge in transportation into prediction of metro passenger flow. We present an end-to-end deep learning architecture, termed as Deep Passenger Flow (DeepPF), to forecast the metro inbound/outbound passenger flow. The architecture of the model is highly flexible and extendable; thus, enabling the integration and modeling of external environmental factors, temporal dependencies, spatial characteristics, and metro operational properties in short-term metro passenger flow prediction. Furthermore, the proposed framework achieves a high prediction accuracy due to the ease of integrating multi-source data. Numerical experiments demonstrate that the proposed DeepPF model can be extended to general conditions to fit the diverse constraints that exist in the transportation domain.

229 citations


Posted Content
TL;DR: This survey provides a formalism which unifies and categorizes the landscape of existing methods along with a detailed analysis that compares and contrasts the different approaches.
Abstract: The growing interest in both the automation of machine learning and deep learning has inevitably led to the development of a wide variety of automated methods for neural architecture search. The choice of the network architecture has proven to be critical, and many advances in deep learning spring from its immediate improvements. However, deep learning techniques are computationally intensive and their application requires a high level of domain knowledge. Therefore, even partial automation of this process helps to make deep learning more accessible to both researchers and practitioners. With this survey, we provide a formalism which unifies and categorizes the landscape of existing methods along with a detailed analysis that compares and contrasts the different approaches. We achieve this via a comprehensive discussion of the commonly adopted architecture search spaces and architecture optimization algorithms based on principles of reinforcement learning and evolutionary algorithms along with approaches that incorporate surrogate and one-shot models. Additionally, we address the new research directions which include constrained and multi-objective architecture search as well as automated data augmentation, optimizer and activation function search.

183 citations


Book ChapterDOI
01 Jan 2019
TL;DR: An overview of existing work in this field of research is provided and neural architecture search methods are categorized according to three dimensions: search space, search strategy, and performance estimation strategy.
Abstract: Deep Learning has enabled remarkable progress over the last years on a variety of tasks, such as image recognition, speech recognition, and machine translation. One crucial aspect for this progress are novel neural architectures. Currently employed architectures have mostly been developed manually by human experts, which is a time-consuming and error-prone process. Because of this, there is growing interest in automated neural architecture search methods. We provide an overview of existing work in this field of research and categorize them according to three dimensions: search space, search strategy, and performance estimation strategy.

178 citations



01 Dec 2019
TL;DR: In this article, the authors use advances made in kinematic restoration software in the last decade with a systematic reconstruction protocol for developing a more quantitative restoration of the Mediterranean region for the last 240 million years.
Abstract: Abstract The basins and orogens of the Mediterranean region ultimately result from the opening of oceans during the early break-up of Pangea since the Triassic, and their subsequent destruction by subduction accommodating convergence between the African and Eurasian Plates since the Jurassic. The region has been the cradle for the development of geodynamic concepts that link crustal evolution to continental break-up, oceanic and continental subduction, and mantle dynamics in general. The development of such concepts requires a first-order understanding of the kinematic evolution of the region for which a multitude of reconstructions have previously been proposed. In this paper, we use advances made in kinematic restoration software in the last decade with a systematic reconstruction protocol for developing a more quantitative restoration of the Mediterranean region for the last 240 million years. This restoration is constructed for the first time with the GPlates plate reconstruction software and uses a systematic reconstruction protocol that limits input data to marine magnetic anomaly reconstructions of ocean basins, structural geological constraints quantifying timing, direction, and magnitude of tectonic motion, and tests and iterations against paleomagnetic data. This approach leads to a reconstruction that is reproducible, and updatable with future constraints. We first review constraints on the opening history of the Atlantic (and Red Sea) oceans and the Bay of Biscay. We then provide a comprehensive overview of the architecture of the Mediterranean orogens, from the Pyrenees and Betic-Rif orogen in the west to the Caucasus in the east and identify structural geological constraints on tectonic motions. We subsequently analyze a newly constructed database of some 2300 published paleomagnetic sites from the Mediterranean region and test the reconstruction against these constraints. 
We provide the reconstruction in the form of 12 maps being snapshots from 240 to 0 Ma, outline the main features in each time-slice, and identify differences from previous reconstructions, which are discussed in the final section.

Posted Content
TL;DR: A simple and one-shot solution to set channel numbers in a neural network to achieve better accuracy under constrained resources (e.g., FLOPs, latency, memory footprint or model size) is presented.
Abstract: We study how to set channel numbers in a neural network to achieve better accuracy under constrained resources (e.g., FLOPs, latency, memory footprint or model size). A simple and one-shot solution, named AutoSlim, is presented. Instead of training many network samples and searching with reinforcement learning, we train a single slimmable network to approximate the network accuracy of different channel configurations. We then iteratively evaluate the trained slimmable model and greedily slim the layer with minimal accuracy drop. By this single pass, we can obtain the optimized channel configurations under different resource constraints. We present experiments with MobileNet v1, MobileNet v2, ResNet-50 and RL-searched MNasNet on ImageNet classification. We show significant improvements over their default channel configurations. We also achieve better accuracy than recent channel pruning methods and neural architecture search methods. Notably, by setting optimized channel numbers, our AutoSlim-MobileNet-v2 at 305M FLOPs achieves 74.2% top-1 accuracy, 2.4% better than default MobileNet-v2 (301M FLOPs), and even 0.2% better than RL-searched MNasNet (317M FLOPs). Our AutoSlim-ResNet-50 at 570M FLOPs, without depthwise convolutions, achieves 1.3% better accuracy than MobileNet-v1 (569M FLOPs). Code and models will be available at: this https URL
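The iterative evaluate-and-slim loop can be sketched with stand-in accuracy and FLOPs functions (both hypothetical; in AutoSlim the accuracy estimate comes from evaluating the trained slimmable network at each channel configuration):

```python
def greedy_slim(channels, estimate_acc, flops, budget, step=8):
    """Iteratively shrink the layer whose slimming costs the least
    accuracy until the FLOPs budget is met."""
    channels = list(channels)
    while flops(channels) > budget:
        best = None
        for i in range(len(channels)):
            if channels[i] <= step:
                continue  # do not slim a layer away entirely
            trial = channels[:]
            trial[i] -= step
            acc = estimate_acc(trial)
            if best is None or acc > best[0]:
                best = (acc, trial)
        channels = best[1]
    return channels

# Toy stand-ins: the accuracy proxy prefers keeping the later layer wide;
# FLOPs is a weighted sum of channel counts.
est = lambda ch: ch[0] * 0.1 + ch[1] * 0.3
flops = lambda ch: 2 * ch[0] + 3 * ch[1]

cfg = greedy_slim([32, 32], est, flops, budget=140)
```

Running the loop once per resource budget yields a family of configurations, as in the MobileNet and ResNet-50 experiments.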

Journal ArticleDOI
TL;DR: This article strives to present a systematic introduction of DAO, covering its concept and characteristics, research framework, typical implementations, challenges, and future trends, and proposes a novel reference model for DAO that employs a five-layer architecture.
Abstract: Decentralized autonomy is a long-standing research topic in information sciences and social sciences. The self-organization phenomenon in natural ecosystems, the Cyber Movement Organizations (CMOs) on the Internet, the Distributed Artificial Intelligence (DAI), and so on, can all be regarded as its early manifestations. In recent years, the rapid development of blockchain technology has spawned the emergence of the so-called Decentralized Autonomous Organization [DAO, sometimes labeled as Decentralized Autonomous Corporation (DAC)], a new organizational form whose management and operational rules are typically encoded on blockchain in the form of smart contracts and which can operate autonomously without centralized control or third-party intervention. DAO is expected to overturn the traditional hierarchical management model and significantly reduce organizations’ costs of communication, management, and collaboration. However, DAO still faces many challenges, such as security and privacy issues, unclear legal status, and so on. In this article, we strive to present a systematic introduction of DAO, including its concept and characteristics, research framework, typical implementations, challenges, and future trends. In particular, a novel reference model for DAO which employs a five-layer architecture is proposed. This article is aimed at providing helpful guidance and reference for future research efforts.

Journal ArticleDOI
TL;DR: In this paper, a mid-range theory is developed to support the self-motivation of gig-workers through the thoughtful design of their digital labor platforms and the integration of two social media tools (namely, social networking and social badging); the proposed theory provides novel insight into how gig-organizations can leverage widely accessible social media technology to motivate platform workers in the absence of human supervision and support.
Abstract: High-quality employee motivation can contribute to an organization’s long-term success by supporting employees’ well-being and performance. Nevertheless, there is a paucity of research concerning how organizations motivate workers in non-traditional work contexts. In the algocratic context of the gig-economy, the purpose of this paper is to understand the role that technology can play in motivating workers. Drawing on the self-determination theory, job-characteristic theory and enterprise social media research, this conceptual paper explores how the architecture of the digital labor platforms underlying the gig-economy (and the characteristics of jobs mediated through these IT artifacts) can impact key antecedents of self-motivation. Combining theory and empirical evidence, this paper develops a mid-range theory demonstrating how organizations can support the self-motivation of gig-workers through the thoughtful design of their digital labor platforms and the integration of two social media tools (namely, social networking and social badging). This paper answers calls for psychologically-based research exploring the consequences of gig-work as well as research studying the impacts of advanced technologies in interaction with work contexts on motivation. In theorizing around a large set of social-contextual variables operating at different levels of analysis, this paper demonstrates that individual-level motivation can be influenced by both task-based and organizational-level factors, in addition to individual-level factors. The proposed theory provides novel insight into how gig-organizations can leverage widely accessible social media technology to motivate platform workers in the absence of human supervision and support. Theoretical and practical implications are discussed.

Journal ArticleDOI
TL;DR: An approach to construct similarity networks from high-throughput antibody repertoire sequencing data is introduced, and it is shown that the networks are redundant, robust and reproducible across individuals.
Abstract: The architecture of mouse and human antibody repertoires is defined by the sequence similarity networks of the clones that compose them. The major principles that define the architecture of antibody repertoires have remained largely unknown. Here, we establish a high-performance computing platform to construct large-scale networks from comprehensive human and murine antibody repertoire sequencing datasets (>100,000 unique sequences). Leveraging a network-based statistical framework, we identify three fundamental principles of antibody repertoire architecture: reproducibility, robustness and redundancy. Antibody repertoire networks are highly reproducible across individuals despite high antibody sequence dissimilarity. The architecture of antibody repertoires is robust to the removal of up to 50-90% of randomly selected clones, but fragile to the removal of public clones shared among individuals. Finally, repertoire architecture is intrinsically redundant. Our analysis provides guidelines for the large-scale network analysis of immune repertoires and may be used in the future to define disease-associated and synthetic repertoires.
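A similarity network of this kind can be sketched by linking clones whose sequences differ by at most a fixed number of mismatches. Hamming distance on equal-length toy sequences is used here for brevity (repertoire studies typically use an edit-distance or similarity threshold on CDR3 sequences); the sequences and threshold are illustrative:

```python
from itertools import combinations

def hamming(a, b):
    """Number of mismatched positions between two equal-length sequences."""
    return sum(x != y for x, y in zip(a, b))

def similarity_network(clones, max_dist=1):
    """Nodes are clone sequences; edges link clones within max_dist mismatches."""
    return [(a, b) for a, b in combinations(clones, 2)
            if len(a) == len(b) and hamming(a, b) <= max_dist]

# Toy clones: "CARDY" is a hub linked to every one-mismatch neighbour.
clones = ["CARDY", "CARDF", "CARGY", "CTRDY"]
edges = similarity_network(clones)
```

Network-level statistics (degree distributions, robustness to node removal) are then computed on the resulting graph.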


Journal ArticleDOI
TL;DR: A three-layered fog architecture is discussed and potential security threats and solutions at each layer are highlighted and open research issues are discussed at all three layers of the fog hierarchy.
Abstract: The perception of fog computing is to bring a virtual presence into day-to-day objects. The lowest layer of the fog architecture is the Internet of Things (IoT), which created a revolution by changing ordinary objects into smart objects that automatically sense and process data. In the IoT, smart objects connected over the Internet communicate with each other and exchange data with the fog server to improve services to customers. There are some challenges to achieving the benefits of the IoT. This article discusses a three-layered fog architecture and highlights potential security threats and solutions at each layer. Finally, open research issues are discussed at all three layers of the fog hierarchy.

Journal ArticleDOI
TL;DR: Students in this study were likely to report satisfaction with elements of their experience when any concerns about posting on Facebook were well managed, a sense of community was experienced by students through Facebook and they felt encouraged to learn through their Facebook engagement.
Abstract: The study was carried out in collaboration with the coordinator of an undergraduate architecture course at a Western Australian university. The course had recently been re-designed to increase participation of students in on-campus lectures and improve the depth of engagement of students in the learning program. The major focus of the redesign was the incorporation of a Facebook group to encourage active engagement of students. The study aimed to examine the response of students to this re-design, with a view to identifying elements of their learning experience that could be the focus of improvement in future iterations of the learning design. Elements of the student learning experience explored were access to information and resources, support and motivation, participation and collaboration, assessment and feedback, as well as reflection and knowledge construction. In addition to examining student responses to the redesign, the study aimed to identify aspects of their Facebook experience that may influence their perception about their overall course experience. Multiple regression analyses showed that students in this study were likely to report satisfaction with elements of their experience when any concerns about posting on Facebook were well managed, a sense of community was experienced by students through Facebook and they felt encouraged to learn through their Facebook engagement.

Journal ArticleDOI
TL;DR: It is argued that smart sustainable cities are becoming knowable, controllable, and tractable in new dynamic ways thanks to urban science, responsive to the data generated about their systems and domains by reacting to the analytical outcome of many aspects of urbanity.
Abstract: We are moving into an era where instrumentation, datafication, and computerization are routinely pervading the very fabric of cities, coupled with the interlinking, integration, and coordination of their systems and domains. As a result, vast troves of data are generated and exploited to operate, manage, organize, and regulate urban life, or a deluge of contextual and actionable data is produced, analyzed, and acted upon in real time in relation to various urban processes and practices. This data-driven approach to urbanism is increasingly becoming the mode of production for smart sustainable cities. In other words, a new era is presently unfolding wherein smart sustainable urbanism is increasingly becoming data-driven. However, topical studies tend to deal mostly with data-driven smart urbanism while barely exploring how this approach can improve and advance sustainable urbanism under what is labeled ‘data-driven smart sustainable cities.’ Having a threefold aim, this paper first examines how data-driven smart sustainable cities are being instrumented, datafied, and computerized so as to improve, advance, and maintain their contribution to the goals of sustainable development through more optimized processes and enhanced practices. Second, it highlights and substantiates the great potential of big data technology for enabling such contribution by identifying, synthesizing, distilling, and enumerating the key practical and analytical applications of this advanced technology in relation to multiple urban systems and domains with respect to operations, functions, services, designs, strategies, and policies. Third, it proposes, illustrates, and describes a novel architecture and typology of data-driven smart sustainable cities. The overall aim of this study suits thematic analysis as a research approach. 
I argue that smart sustainable cities are becoming knowable, controllable, and tractable in new dynamic ways thanks to urban science, responsive to the data generated about their systems and domains by reacting to the analytical outcome of many aspects of urbanity in terms of optimizing and enhancing operational functioning, management, planning, design, development, and governance in line with the goals of sustainable development. The proposed architecture, which can be replicated, tested, and evaluated in empirical research, will add additional depth to studies in the field. This study intervenes in the existing scholarly conversation by bringing new insights to and informing the ongoing debate on smart sustainable urbanism in light of big data science and analytics. This work serves to inform city stakeholders about the pivotal role of data-driven analytic thinking in smart sustainable urbanism practices, as well as draws special attention to the enormous benefits of the emerging paradigm of big data computing as to transforming the future form of such urbanism.

Journal ArticleDOI
TL;DR: In this article, the authors examined whether perceptions of naturalness in architecture are linked to objective visual patterns, and investigated how natural patterns influence aesthetic evaluations of architectural scenes, finding that natural patterns explained over half of the variance in scene naturalness ratings, while aesthetic preference ratings were found to relate closely to natural patterns in architecture.

Journal ArticleDOI
TL;DR: The primary aim of this article is to specify the mission, vision, and scope of research in HBI, an interdisciplinary domain of research interfacing Human-Computer Interaction with Architecture and Urban Design.
Abstract: Buildings and urban spaces increasingly incorporate artificial intelligence and new forms of interactivity, raising a wide span of research questions about the future of human experiences with, and within, built environments. We call this emerging area Human-Building Interaction (HBI) and introduce it as an interdisciplinary domain of research interfacing Human-Computer Interaction (HCI) with Architecture and Urban Design. HBI seeks to examine the involvement of HCI in studying and steering the evolution of built environments. Therefore, we need to ask foundational questions such as the following: what are the specific attributes of built environments that HCI researchers should take into account when shifting attention and scale from “artefacts” to “environments”? Are architecture and interaction design methods and processes compatible? Concretely, how can a team of interaction designers bring their tools to an architectural project, and collaborate with other stakeholders? Can and will architecture change the theory and practice of HCI? Furthermore, research in HBI should produce knowledge and practical guidelines by experimenting with novel design instances that combine architecture and digital interaction. The primary aim of this article is to specify the mission, vision, and scope of research in HBI. As the introductory article to the TOCHI special issue, it also provides a summary of published manuscripts and describes their collective contribution to the development of this field.

Proceedings ArticleDOI
Haoran Miao1, Gaofeng Cheng1, Pengyuan Zhang1, Ta Li1, Yonghong Yan1 
15 Sep 2019
TL;DR: A stable monotonic chunkwise attention (sMoChA) is proposed to stream the attention branch and a truncated CTC prefix probability (T-CTC) to stream the CTC branch, allowing the hybrid CTC/attention ASR system to be streamed without much word error rate degradation.
Abstract: The hybrid CTC/attention end-to-end automatic speech recognition (ASR) combines the CTC ASR system and the attention ASR system into a single neural network. Although the hybrid CTC/attention ASR system takes advantage of both CTC and attention architectures in training and decoding, it remains challenging to use for streaming speech recognition because of its attention mechanism, CTC prefix probability, and bidirectional encoder. In this paper, we propose a stable monotonic chunkwise attention (sMoChA) to stream its attention branch and a truncated CTC prefix probability (T-CTC) to stream its CTC branch. On the acoustic model side, we utilize the latency-controlled bidirectional long short-term memory (LC-BLSTM) to stream its encoder. On the joint CTC/attention decoding side, we propose the dynamic waiting joint decoding (DWDJ) algorithm to collect the decoding hypotheses from the CTC and attention branches. Through the combination of the above methods, we stream the hybrid CTC/attention ASR system without much word error rate degradation.
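The chunkwise idea behind sMoChA, confining attention weights to a fixed-width window ending at a stopping point so that latency stays bounded, can be sketched as follows (window only; the learned monotonic halting model that picks the stopping point is omitted):

```python
import math

def chunk_attention(scores, stop_idx, chunk=3):
    """Softmax attention restricted to a fixed-width chunk ending at
    stop_idx; weights outside the chunk are exactly zero, so the decoder
    never waits for encoder frames beyond the stopping point."""
    lo = max(0, stop_idx - chunk + 1)
    window = scores[lo:stop_idx + 1]
    m = max(window)
    e = [math.exp(s - m) for s in window]
    z = sum(e)
    return ([0.0] * lo
            + [v / z for v in e]
            + [0.0] * (len(scores) - stop_idx - 1))

# Hypothetical energies over five encoder frames, stopping at frame 3.
w = chunk_attention([0.1, 0.5, 2.0, 1.0, 0.3], stop_idx=3, chunk=3)
```

Full (offline) attention would instead normalize over all frames, which is what prevents streaming.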

Journal ArticleDOI
TL;DR: This paper identifies the most used types of every component and highlights usage trends in the established analysis period and determines a guide of complementary features from the type of components that should be considered when implementing a smart parking solution.
Abstract: The development and high growth of the Internet of Things (IoT) have improved quality of life and strengthened different areas in society. Many cities worldwide are looking forward to becoming smart. One of the most popular use cases in smart cities is the implementation of smart parking solutions, as they allow people to optimize time, reduce fuel consumption, and carbon dioxide emissions. Smart parking solutions have a defined architecture with particular components (sensors, communication protocols, and software solutions). Although there are only three components that compose a smart parking solution, it is important to mention that each component has many types that can be used in the deployment of these solutions. This paper identifies the most used types of every component and highlights usage trends in the established analysis period. It provides a complementary perspective and represents a very useful source of information. The scientific community could use this information to decide regarding the selection of types of components to implement a smart parking solution. For this purpose, herein we review several works related to smart parking solutions deployment. To achieve this goal, a semi-cyclic adaptation of the action research methodology combined with a systematic review is used to select papers related to the subject of study. The most relevant papers were reviewed to identify subcategories for each component; these classifications are presented in tables to mark the relevance of each paper accordingly. Trends of usage in terms of sensors, protocols and software solutions are analyzed and discussed in every section. In addition to the trends of usage, this paper determines a guide of complementary features from the type of components that should be considered when implementing a smart parking solution.

Journal ArticleDOI
TL;DR: The implementation results show that EvacuSafe is a valuable tool for evacuation design and planning, providing a more comprehensive evaluation of evacuation performance than the existing indices and safety measures used in the industry.

Journal ArticleDOI
TL;DR: This paper presents a high‐throughput field‐based robotic phenotyping system that performs side‐view stereo imaging of dense sorghum plants across a wide range of plant heights throughout the growing season, and demonstrates the suitability of stereo vision for field‐based three‐dimensional plant phenotyping when recent advances in stereo matching algorithms are incorporated.
Abstract: (Funding: National Institute of Food and Agriculture, Grant/Award Number 2012‐67009‐19713; United States Department of Agriculture.) Sorghum (Sorghum bicolor) is known as a major feedstock for biofuel production. To improve its biomass yield through genetic research, manually measuring yield component traits (e.g., plant height, stem diameter, leaf angle, leaf area, leaf number, and panicle size) in the field is the current best practice. However, such laborious and time‐consuming tasks have become a bottleneck limiting experiment scale and data acquisition frequency. This paper presents a high‐throughput field‐based robotic phenotyping system that performed side‐view stereo imaging of dense sorghum plants with a wide range of plant heights throughout the growing season. Our study demonstrated the suitability of stereo vision for field‐based three‐dimensional plant phenotyping when recent advances in stereo matching algorithms were incorporated. A robust data processing pipeline was developed to quantify the variations of morphological traits in plant architecture, including plot‐based plant height, plot‐based plant width, convex hull volume, plant surface area, and stem diameter (semiautomated). These image‐derived measurements were highly repeatable and showed high correlations with in‐field manual measurements. Meanwhile, manually collecting the same traits required far more manpower and time than the robotic system. The results demonstrated that the proposed system could be a promising tool for large‐scale field‐based high‐throughput plant phenotyping of bioenergy crops.
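The plot-based traits mentioned above can be illustrated with a minimal sketch: given a stereo-reconstructed point cloud, plant height is often taken as the gap between a robust ground estimate and a robust canopy-top estimate. This is not the authors' pipeline; the percentile choices and the synthetic point cloud are illustrative assumptions.

```python
# Sketch: plot-based plant height and width from a 3D point cloud.
# Points are (x, y, z) tuples in metres; percentiles are assumptions.

def percentile(values, p):
    s = sorted(values)
    idx = min(len(s) - 1, int(round(p / 100 * (len(s) - 1))))
    return s[idx]

def plot_traits(points):
    xs = [p[0] for p in points]
    zs = [p[2] for p in points]
    ground = percentile(zs, 5)    # robust ground-level estimate
    canopy = percentile(zs, 99)   # robust canopy-top estimate
    return {
        "plant_height": canopy - ground,
        "plant_width": max(xs) - min(xs),
    }

# Synthetic "plot": a flat ground plane plus one plant reaching ~1.5 m.
cloud = [(x / 10, 0.0, 0.0) for x in range(50)] + \
        [(2.0, 0.0, z / 100) for z in range(150)]
traits = plot_traits(cloud)
print(traits)
```

Using high and low percentiles rather than the raw min/max makes the estimates robust to stray points from stereo matching noise, which is one reason plot-level traits from stereo clouds can be highly repeatable.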

Posted Content
27 Mar 2019
TL;DR: A simple and one-shot solution to set channel numbers in a neural network to achieve better accuracy under constrained resources (e.g., FLOPs, latency, memory footprint or model size) is presented.
Abstract: We study how to set channel numbers in a neural network to achieve better accuracy under constrained resources (e.g., FLOPs, latency, memory footprint or model size). A simple and one-shot solution, named AutoSlim, is presented. Instead of training many network samples and searching with reinforcement learning, we train a single slimmable network to approximate the network accuracy of different channel configurations. We then iteratively evaluate the trained slimmable model and greedily slim the layer with minimal accuracy drop. By this single pass, we can obtain the optimized channel configurations under different resource constraints. We present experiments with MobileNet v1, MobileNet v2, ResNet-50 and RL-searched MNasNet on ImageNet classification. We show significant improvements over their default channel configurations. We also achieve better accuracy than recent channel pruning methods and neural architecture search methods. Notably, by setting optimized channel numbers, our AutoSlim-MobileNet-v2 at 305M FLOPs achieves 74.2% top-1 accuracy, 2.4% better than default MobileNet-v2 (301M FLOPs), and even 0.2% better than RL-searched MNasNet (317M FLOPs). Our AutoSlim-ResNet-50 at 570M FLOPs, without depthwise convolutions, achieves 1.3% better accuracy than MobileNet-v1 (569M FLOPs). Code and models will be made available.
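The greedy slimming loop described in the abstract can be sketched in a few lines: repeatedly shrink whichever layer costs the least estimated accuracy until the FLOPs budget is met. This is an illustrative toy, not the authors' code; the accuracy estimator and FLOPs model below are stand-ins for evaluating the trained slimmable network.

```python
# Sketch of greedy one-shot channel slimming (toy stand-in models).

def flops(channels):
    # Toy FLOPs model: product of adjacent layer widths, summed.
    return sum(a * b for a, b in zip(channels, channels[1:]))

def estimated_accuracy(channels):
    # Stand-in for evaluating the slimmable network at this width
    # configuration; wider layers help with diminishing returns.
    return sum(c ** 0.5 for c in channels)

def greedy_slim(channels, flops_budget, step=8, min_channels=8):
    """Iteratively shrink the layer whose reduction costs the least accuracy."""
    channels = list(channels)
    while flops(channels) > flops_budget:
        best = None
        for i, c in enumerate(channels):
            if c - step < min_channels:
                continue
            trial = channels[:i] + [c - step] + channels[i + 1:]
            drop = estimated_accuracy(channels) - estimated_accuracy(trial)
            if best is None or drop < best[0]:
                best = (drop, i)
        if best is None:
            break  # every layer is already at the minimum width
        channels[best[1]] -= step
    return channels

slimmed = greedy_slim([64, 128, 256, 512], flops_budget=100_000)
print(slimmed, flops(slimmed))
```

Because a single slimmable network approximates accuracy at every width, each greedy step is just an evaluation rather than a retraining run, which is what makes the search a single pass.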

Journal ArticleDOI
TL;DR: The main part of the article presents a detailed proposal of the architecture framework, partial views of the architecture (sub-architectures), and a methodology, including proposed steps, solutions, and the responsibilities for each.

Journal ArticleDOI
TL;DR: A parallel and multi-layer deep recognition architecture (PMDRA) is discussed, which could have more powerful and ubiquitous feature extraction ability because of the hierarchical utilization of the deep learning network.
Abstract: Human behavior recognition (HBR), as a critical link for further intelligent and real-time smart city application design, has attracted much more attention in recent years. Although the related technologies have developed rapidly and many solid achievements have already been obtained, there is still considerable room to advance the research, including recognition structures and algorithms, to meet the increasing requirements of smart city construction. In this article, we first review the conventional HBR structure and analyze the problems and challenges for future smart city applications. We then discuss a parallel and multi-layer deep recognition architecture (PMDRA), which offers more powerful and ubiquitous feature extraction through the hierarchical use of deep learning networks. Meanwhile, a quantity adjustment mechanism for DRUs and DLNUs helps tailor the architecture to the requirements of real scenarios.

Journal ArticleDOI
21 Feb 2019
TL;DR: This work proposes that ethically critical machine reasoning should be proactive, transparent, and verifiable, and describes an architecture where the ethical reasoning is handled by a separate layer, augmenting a typical layered control architecture, ethically moderating the robot actions.
Abstract: Previous work on ethical machine reasoning has largely been theoretical, and where such systems have been implemented, it has, in general, been only initial proofs of principle. Here, we address the question of desirable attributes for such systems to improve their real world utility, and how controllers with these attributes might be implemented. We propose that ethically critical machine reasoning should be proactive, transparent, and verifiable. We describe an architecture where the ethical reasoning is handled by a separate layer, augmenting a typical layered control architecture, ethically moderating the robot actions. It makes use of a simulation-based internal model and supports proactive, transparent, and verifiable ethical reasoning. To do so, the reasoning component of the ethical layer uses our Python-based belief–desire–intention (BDI) implementation. The declarative logic structure of BDI facilitates both transparency, through logging of the reasoning cycle, and formal verification methods. To prove the principles of our approach, we use a case study implementation to experimentally demonstrate its operation. Importantly, it is the first such robot controller where the ethical machine reasoning has been formally verified.
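The separate ethical layer described above can be sketched schematically: proposed actions pass through a moderator that consults a simulation-based internal model and logs each reasoning step for transparency. The class, rule, and action names below are invented for illustration and are not taken from the authors' BDI implementation.

```python
# Schematic sketch of an ethical layer moderating a layered controller.
# All names here are illustrative assumptions.

class EthicalLayer:
    def __init__(self, simulate):
        self.simulate = simulate   # internal model: action -> predicted outcome
        self.log = []              # reasoning trace, kept for transparency

    def moderate(self, proposed_actions):
        """Return the first proposed action whose simulated outcome is safe."""
        for action in proposed_actions:
            outcome = self.simulate(action)
            verdict = "veto" if outcome["human_at_risk"] else "permit"
            self.log.append((action, outcome, verdict))
            if verdict == "permit":
                return action
        return "stop"  # safe fallback when every action is vetoed

# Toy internal model: moving ahead endangers a human, detouring does not.
def simulate(action):
    return {"human_at_risk": action == "move_ahead"}

layer = EthicalLayer(simulate)
chosen = layer.moderate(["move_ahead", "detour_left"])
print(chosen, layer.log)
```

Keeping the moderation in a distinct layer with an explicit, declarative reasoning trace is what makes this style of controller amenable to both transparency (the log can be inspected) and formal verification (the decision rule is small and separable from the rest of the control stack).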

Proceedings ArticleDOI
18 Mar 2019
TL;DR: This research study aimed to assess the current position of IIOT and to provide research recommendations for future researchers.
Abstract: For the past few years, the internet of things (IOT) has been of great interest for both academic and industrial research. The term industrial IOT arose when IT was integrated with industrial automation and control systems. The review in this paper provides insights into an overview of the industrial internet of things (IIOT), the evolution of IOT, the transformation of IOT to IIOT, the architecture of IIOT, different case studies, and IIOT in business. This research study aimed to assess the current position of IIOT and to provide research recommendations for future researchers.