
Showing papers on "Redundancy (engineering)" published in 2015


Journal ArticleDOI
TL;DR: A review of the latest achievements of modular multilevel converters covering new circuit configurations, converter models, control schemes, and modulation strategies, along with new applications and future trends; the topology's attractive features include a modular structure, transformer-less operation, easy scalability in voltage and current, low expense for redundancy and fault-tolerant operation, high availability, use of standard components, and excellent output waveform quality.
Abstract: Modular multilevel converters have several attractive features such as a modular structure, the capability of transformer-less operation, easy scalability in terms of voltage and current, low expense for redundancy and fault tolerant operation, high availability, utilization of standard components, and excellent quality of the output waveforms. These features have increased the interest of industry and research in this topology, resulting in the development of new circuit configurations, converter models, control schemes, and modulation strategies. This paper presents a review of the latest achievements of modular multilevel converters regarding the mentioned research topics, new applications, and future trends.

1,123 citations


Posted Content
TL;DR: This work presents a novel network architecture, HashedNets, that exploits inherent redundancy in neural networks to achieve drastic reductions in model size, and demonstrates on several benchmark data sets that HashedNets shrink the storage requirements of neural networks substantially while mostly preserving generalization performance.
Abstract: As deep nets are increasingly used in applications suited for mobile devices, a fundamental dilemma becomes apparent: the trend in deep learning is to grow models to absorb ever-increasing data set sizes; however mobile devices are designed with very little memory and cannot store such large models. We present a novel network architecture, HashedNets, that exploits inherent redundancy in neural networks to achieve drastic reductions in model sizes. HashedNets uses a low-cost hash function to randomly group connection weights into hash buckets, and all connections within the same hash bucket share a single parameter value. These parameters are tuned to adjust to the HashedNets weight sharing architecture with standard backprop during training. Our hashing procedure introduces no additional memory overhead, and we demonstrate on several benchmark data sets that HashedNets shrink the storage requirements of neural networks substantially while mostly preserving generalization performance.

1,039 citations
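
The weight-sharing trick at the core of HashedNets is compact enough to sketch. Below is a minimal illustrative version (the names are mine; the paper uses a fast hash such as xxHash and recomputes bucket indices on the fly rather than storing them, which is what keeps the memory overhead at zero; the precomputed index table here is purely for readability):

```python
import hashlib

import numpy as np

def bucket(layer_id: int, i: int, j: int, K: int) -> int:
    """Map the virtual connection (i, j) of a layer to one of K shared buckets."""
    key = f"{layer_id}:{i}:{j}".encode()
    return int.from_bytes(hashlib.md5(key).digest()[:8], "big") % K

class HashedLayer:
    """A 'virtual' n_in x n_out weight matrix backed by only K real parameters
    (K << n_in * n_out), in the spirit of HashedNets weight sharing."""
    def __init__(self, layer_id, n_in, n_out, K, seed=0):
        rng = np.random.default_rng(seed)
        self.params = 0.01 * rng.standard_normal(K)  # the only stored weights
        # Bucket index of every virtual connection (precomputed for clarity;
        # the real method recomputes the hash on the fly to save memory).
        self.idx = np.array([[bucket(layer_id, i, j, K) for j in range(n_out)]
                             for i in range(n_in)])

    def forward(self, x):
        W = self.params[self.idx]  # expand the virtual matrix on demand
        return x @ W               # backprop sums gradients over all
                                   # connections sharing a bucket

layer = HashedLayer(layer_id=0, n_in=4, n_out=3, K=5)
print(layer.forward(np.ones(4)))   # 3 outputs from just 5 real parameters
```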


Journal ArticleDOI
TL;DR: In this paper, a distributed controller for secondary frequency and voltage control in islanded microgrids is proposed, which uses localized information and nearest-neighbor communication to collectively perform secondary control actions.
Abstract: In this paper, we present new distributed controllers for secondary frequency and voltage control in islanded microgrids. Inspired by techniques from cooperative control, the proposed controllers use localized information and nearest-neighbor communication to collectively perform secondary control actions. The frequency controller rapidly regulates the microgrid frequency to its nominal value while maintaining active power sharing among the distributed generators. Tuning of the voltage controller provides a simple and intuitive tradeoff between the conflicting goals of voltage regulation and reactive power sharing. Our designs require no knowledge of the microgrid topology, impedances, or loads. The distributed architecture allows for flexibility and redundancy, eliminating the need for a central microgrid controller. We provide a voltage stability analysis and present extensive experimental results validating our designs, verifying robust performance under communication failure and during plug-and-play operation.

600 citations


Journal ArticleDOI
TL;DR: In this paper, a simplified nearest level control balancing method for the modular multilevel converter is presented, which requires neither individual sorting of the submodule voltages nor redundancy of the switching states.
Abstract: In this paper, a simplified nearest level control balancing method for the modular multilevel converter is presented. The proposed method requires neither individual sorting of the submodule voltages nor redundancy of the switching states. Once the submodules are sorted on the basis of the number of submodules to be switched on, their identification can be carried out throughout the stages of the method's implementation. The proposed method also does not require the individual submodule status in the gate-pulse generation stage; the gate logic can be implemented from the switching states of the voltage levels. These simplifications, and the removal of some stages by the proposed balancing method, ease implementation and can reduce processor time. A pictorial presentation further helps in consolidating the understanding of the different stages of the method. Rigorous simulations are carried out for open-loop operation and for a prominent closed-loop application, modular multilevel converter-based high-voltage direct current transmission, to demonstrate the validity and effectiveness of the proposed simplified balancing method under normal and emergency conditions.

344 citations


Journal ArticleDOI
TL;DR: The experiments have shown that the redundancy, when used to ensure a decoupled apparent inertia at the end effector, allows enlarging the stability region in the impedance parameters space and improving the performance, and the variable impedance with a suitable modulation strategy for parameters' tuning outperforms the constant impedance.
Abstract: This paper presents an experimental study on human–robot comanipulation in the presence of kinematic redundancy. The objective of the work is to enhance the performance during human–robot physical interaction by combining Cartesian impedance modulation and redundancy resolution. Cartesian impedance control is employed to achieve a compliant behavior of the robot's end effector in response to forces exerted by the human operator. Different impedance modulation strategies, which take into account the human's behavior during the interaction, are selected with the support of a simulation study and then experimentally tested on a 7-degree-of-freedom KUKA LWR4. A comparative study to establish the most effective redundancy resolution strategy has been made by evaluating different solutions compatible with the considered task. The experiments have shown that the redundancy, when used to ensure a decoupled apparent inertia at the end effector, allows enlarging the stability region in the impedance parameters space and improving the performance. On the other hand, the variable impedance with a suitable modulation strategy for parameters’ tuning outperforms the constant impedance, in the sense that it enhances the comfort perceived by humans during manual guidance and allows reaching a favorable compromise between accuracy and execution time.

319 citations


Journal ArticleDOI
TL;DR: A survey of reliability protocols in WSNs is presented, reviewing several reliability schemes based on retransmission and redundancy techniques that combine packet- or event-level reliability with hop-by-hop or end-to-end mechanisms for recovering lost data.

305 citations


Proceedings ArticleDOI
07 Dec 2015
TL;DR: This work explores the redundancy of parameters in deep neural networks by replacing the conventional linear projection in fully-connected layers with the circulant projection, which substantially reduces memory footprint and enables the use of the Fast Fourier Transform to speed up the computation.
Abstract: We explore the redundancy of parameters in deep neural networks by replacing the conventional linear projection in fully-connected layers with the circulant projection. The circulant structure substantially reduces memory footprint and enables the use of the Fast Fourier Transform to speed up the computation. Considering a fully-connected neural network layer with d input nodes and d output nodes, this method improves the time complexity from O(d^2) to O(d log d) and space complexity from O(d^2) to O(d). The space savings are particularly important for modern deep convolutional neural network architectures, where fully-connected layers typically contain more than 90% of the network parameters. We further show that the gradient computation and optimization of the circulant projections can be performed very efficiently. Our experiments on three standard datasets show that the proposed approach achieves this significant gain in storage and efficiency with minimal increase in error rate compared to neural networks with unstructured projections.

299 citations
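
The complexity gain comes from the fact that multiplying by a circulant matrix is a circular convolution, which the FFT diagonalizes. A small self-contained check (generic NumPy, not the authors' code):

```python
import numpy as np

def circulant_project(r: np.ndarray, x: np.ndarray) -> np.ndarray:
    """Multiply x by the circulant matrix whose first column is r, in
    O(d log d) time via the FFT, without materializing the d x d matrix."""
    return np.real(np.fft.ifft(np.fft.fft(r) * np.fft.fft(x)))

d = 8
rng = np.random.default_rng(0)
r, x = rng.standard_normal(d), rng.standard_normal(d)

# Compare against the explicit O(d^2) dense product.
C = np.stack([np.roll(r, k) for k in range(d)], axis=1)  # columns: shifts of r
assert np.allclose(C @ x, circulant_project(r, x))
```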


Journal ArticleDOI
01 Jan 2015-Database
TL;DR: A combination of hierarchical clustering and nearest neighbor graph representation is exercised, with judiciously selected cutoff values, thereby consolidating 3215 human pathways from 12 sources into a set of 1073 SuperPaths, showing a substantial enhancement of the SuperPaths' capacity to infer gene-to-gene relationships when compared with individual pathway sources, separately or taken together.
Abstract: The study of biological pathways is key to a large number of systems analyses. However, many relevant tools consider a limited number of pathway sources, missing out on many genes and gene-to-gene connections. Simply pooling several pathway sources would result in redundancy and the lack of systematic pathway interrelations. To address this, we exercised a combination of hierarchical clustering and nearest neighbor graph representation, with judiciously selected cutoff values, thereby consolidating 3215 human pathways from 12 sources into a set of 1073 SuperPaths. Our unification algorithm finds a balance between reducing redundancy and optimizing the level of pathway-related informativeness for individual genes. We show a substantial enhancement of the SuperPaths' capacity to infer gene-to-gene relationships when compared with individual pathway sources, separately or taken together. Further, we demonstrate that the chosen 12 sources entail nearly exhaustive gene coverage. The computed SuperPaths are presented in a new online database, PathCards, showing each SuperPath, its constituent network of pathways, and its contained genes. This provides researchers with a rich, searchable systems analysis resource. Database URL: http://pathcards.genecards.org/

206 citations
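
The consolidation step can be pictured with a toy version: cluster pathways by the overlap of their gene sets and merge each cluster. This sketch uses plain hierarchical clustering on Jaccard distance (the actual SuperPath construction also uses a nearest-neighbor graph and carefully tuned cutoffs; the `consolidate` function and its cutoff are illustrative):

```python
from itertools import combinations

import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import squareform

def consolidate(pathways, cutoff=0.7):
    """Merge redundant pathways by average-linkage clustering on the Jaccard
    distance between their gene sets, then union each cluster's genes."""
    sets = [set(genes) for genes in pathways.values()]
    m = len(sets)
    D = np.zeros((m, m))
    for i, j in combinations(range(m), 2):
        D[i, j] = D[j, i] = 1 - len(sets[i] & sets[j]) / len(sets[i] | sets[j])
    labels = fcluster(linkage(squareform(D), method="average"),
                      t=cutoff, criterion="distance")
    merged = {}
    for lab, s in zip(labels, sets):
        merged.setdefault(lab, set()).update(s)
    return list(merged.values())

demo = {"wnt_a": {"WNT1", "FZD1", "LRP5"},
        "wnt_b": {"WNT1", "FZD1", "LRP6"},
        "p53":   {"TP53", "MDM2"}}
print([sorted(s) for s in consolidate(demo)])  # the two Wnt variants merge
```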


Proceedings ArticleDOI
15 Jun 2015
TL;DR: This paper presents the first exact analysis of systems with redundancy, and finds that, in many cases, redundancy outperforms JSQ and Opt-Split with respect to overall response time, making it an attractive solution.
Abstract: Recent computer systems research has proposed using redundant requests to reduce latency. The idea is to run a request on multiple servers and wait for the first completion (discarding all remaining copies of the request). However, there has been no exact analysis of systems with redundancy. This paper presents the first such analysis. We allow for any number of classes of redundant requests, any number of classes of non-redundant requests, any degree of redundancy, and any number of heterogeneous servers. In all cases we derive the limiting distribution on the state of the system. In small (two or three server) systems, we derive simple forms for the distribution of response time of both the redundant classes and non-redundant classes, and we quantify the "gain" to redundant classes and "pain" to non-redundant classes caused by redundancy. We find some surprising results. First, the response time of a fully redundant class follows a simple Exponential distribution, and that of the non-redundant class follows a Generalized Hyperexponential. Second, fully redundant classes are "immune" to any pain caused by other classes becoming redundant. We also compare redundancy with other approaches for reducing latency, such as optimal probabilistic splitting of a class among servers (Opt-Split) and Join-the-Shortest-Queue (JSQ) routing of a class. We find that, in many cases, redundancy outperforms JSQ and Opt-Split with respect to overall response time, making it an attractive solution.

175 citations
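
The Exponential result for a fully redundant class is easiest to see through its basic building block: the minimum of independent exponential completion times is again exponential with the summed rates. A quick Monte Carlo check (illustrative only; the paper's contribution is the exact analysis with queueing, which this does not model):

```python
import numpy as np

rng = np.random.default_rng(1)
mu1, mu2, n = 1.0, 2.0, 1_000_000   # server rates and number of samples

# A replicated request on two idle servers finishes at the minimum of two
# exponential service times, which is Exponential(mu1 + mu2).
t_red = np.minimum(rng.exponential(1 / mu1, n), rng.exponential(1 / mu2, n))
print(t_red.mean(), 1 / (mu1 + mu2))   # both approximately 0.333
```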


Journal ArticleDOI
TL;DR: A novel coordinated power controller design framework is proposed to optimize the active power output of multiple generators in a distributed network and the distributed control and management strategies enhance the redundancy and the plug-and-play capability in microgrids.
Abstract: A novel coordinated power controller design framework is proposed to optimize the active power output of multiple generators in a distributed network. Each bus in the distributed generation systems includes two function modules: a distributed economic dispatch (DED) module and a cooperative control (CC) module. By virtue of the distributed consensus theory, a DED algorithm is proposed and utilized to calculate the optimal active power generation references for each generator. The CC module receives and tracks the active power generation references such that the generation–demand balance is guaranteed at minimum operating cost while satisfying all generation constraints. The distributed control and management strategies enhance the redundancy and the plug-and-play capability in microgrids. Optimal properties and convergence rates of the proposed distributed algorithms are strictly proved. Simulation studies further demonstrate the effectiveness of the proposed approach.
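
To make the DED module concrete, here is a generic textbook-style "incremental cost consensus" iteration for quadratic generation costs a_i*P_i + b_i*P_i^2. The naming is mine and this is only a sketch of the idea; the paper's algorithm and its convergence guarantees differ in detail:

```python
import numpy as np

def ded_step(lam, A, a, b, p_min, p_max, mismatch, eps=0.05):
    """One lambda-consensus iteration: each generator averages its neighbors'
    incremental costs through the row-stochastic matrix A (the communication
    graph), corrects lambda with its local estimate of the generation-demand
    mismatch, and sets output where marginal cost a + 2*b*P equals lambda,
    clipped to the generation limits."""
    lam = A @ lam + eps * mismatch                 # consensus + feedback
    P = np.clip((lam - a) / (2 * b), p_min, p_max) # cost-minimizing output
    return lam, P
```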

Journal ArticleDOI
TL;DR: An overview of the most practical and frequently used torque control solutions based on null space projections is given; the weighting matrix from the classical operational space approach is generalized, and it is shown that an infinite number of weighting matrices exist to obtain dynamic consistency.
Abstract: One step on the way to approach human performance in robotics is to provide joint torque sensing and control for better interaction capabilities with the environment, and a large number of actuated degrees of freedom (DOFs) for improved versatility. However, the increasing complexity also raises the question of how to resolve the kinematic redundancy which is a direct consequence of the large number of DOFs. Here we give an overview of the most practical and frequently used torque control solutions based on null space projections. Two fundamental structures of task hierarchies are reviewed and compared, namely the successive and the augmented method. Then the projector itself is investigated in terms of its consistency. We analyze static, dynamic, and the new concept of stiffness consistency. In the latter case, stiffness information is used in the pseudoinversion instead of the inertia matrix. In terms of dynamic consistency, we generalize the weighting matrix from the classical operational space approach and show that an infinite number of weighting matrices exist to obtain dynamic consistency. In this context we also analyze another dynamically consistent null space projector with slightly different structure and properties. The redundancy resolutions are finally compared in several simulations and experiments. A thorough discussion of the theoretical and empirical results completes this survey.
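
The dynamically consistent projector discussed in the survey has a standard closed form from the operational space formulation. A compact sketch (J is the task Jacobian, M the joint-space inertia matrix; a minimal illustration, not the survey's generalized-weighting variants):

```python
import numpy as np

def dyn_consistent_projector(J, M):
    """Null space projector N = I - J^T Jbar^T with the inertia-weighted
    pseudoinverse Jbar = M^{-1} J^T (J M^{-1} J^T)^{-1}: torques filtered
    through N produce zero acceleration at the end effector."""
    Minv = np.linalg.inv(M)
    Lam = np.linalg.inv(J @ Minv @ J.T)   # task-space inertia
    Jbar = Minv @ J.T @ Lam               # dynamically consistent pseudoinverse
    return np.eye(M.shape[0]) - J.T @ Jbar.T

# Usage: tau = J.T @ F_task + dyn_consistent_projector(J, M) @ tau_posture
```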

Proceedings ArticleDOI
27 May 2015
TL;DR: A new approach named factorized learning is introduced that pushes ML computations through joins and avoids redundancy in both I/O and computations and is often substantially faster than the alternatives, but is not always the fastest, necessitating a cost-based approach.
Abstract: Enterprise data analytics is a booming area in the data management industry. Many companies are racing to develop toolkits that closely integrate statistical and machine learning techniques with data management systems. Almost all such toolkits assume that the input to a learning algorithm is a single table. However, most relational datasets are not stored as single tables due to normalization. Thus, analysts often perform key-foreign key joins before learning on the join output. This strategy of learning after joins introduces redundancy avoided by normalization, which could lead to poorer end-to-end performance and maintenance overheads due to data duplication. In this work, we take a step towards enabling and optimizing learning over joins for a common class of machine learning techniques called generalized linear models that are solved using gradient descent algorithms in an RDBMS setting. We present alternative approaches to learn over a join that are easy to implement over existing RDBMSs. We introduce a new approach named factorized learning that pushes ML computations through joins and avoids redundancy in both I/O and computations. We study the tradeoff space for all our approaches both analytically and empirically. Our results show that factorized learning is often substantially faster than the alternatives, but is not always the fastest, necessitating a cost-based approach. We also discuss extensions of all our approaches to multi-table joins as well as to Hive.
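
The redundancy the paper avoids is easy to see for squared loss: every tuple of the dimension table reappears once per matching fact-table row, so its inner product can be computed once and its gradient contribution aggregated by join key. A single-join sketch (names are illustrative; the paper handles general GLMs inside an RDBMS):

```python
import numpy as np

def factorized_ls_gradient(S, R, fk, y, w):
    """Squared-loss gradient over the key-foreign-key join of S and R without
    materializing it. S: (nS x dS) fact table whose row i joins with R[fk[i]];
    R: (nR x dR) dimension table; a joined row's features are [S[i], R[fk[i]]]."""
    dS = S.shape[1]
    wS, wR = w[:dS], w[dS:]
    partial = R @ wR                     # one inner product per R tuple, reused
    resid = S @ wS + partial[fk] - y     # residual per joined (= per S) row
    gS = S.T @ resid
    agg = np.bincount(fk, weights=resid, minlength=R.shape[0])
    gR = R.T @ agg                       # each R tuple processed exactly once
    return np.concatenate([gS, gR])
```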

Journal ArticleDOI
TL;DR: A new feature selection framework to globally minimize the feature redundancy with maximizing the given feature ranking scores, which can come from any supervised or unsupervised methods is proposed.
Abstract: Feature selection has been an important research topic in data mining, because real data sets often have high-dimensional features, as in bioinformatics and text mining applications. Many existing filter feature selection methods rank features by optimizing certain feature ranking criteria, such that correlated features often have similar rankings. These correlated features are redundant and don't provide large mutual information to help data mining. Thus, when we select a limited number of features, we hope to select the top non-redundant features such that the useful mutual information can be maximized. In previous research, Ding et al. recognized this important issue and proposed the minimum Redundancy Maximum Relevance Feature Selection (mRMR) model to minimize the redundancy between sequentially selected features. However, this method used greedy search; thus the global feature redundancy wasn't considered and the results are not optimal. In this paper, we propose a new feature selection framework to globally minimize the feature redundancy while maximizing the given feature ranking scores, which can come from any supervised or unsupervised method. Our new model has no parameters, so it is especially suitable for practical data mining applications. Experimental results on benchmark data sets show that the proposed method consistently improves the feature selection results compared to the original methods. Meanwhile, we introduce a new unsupervised global and local discriminative feature selection method which can be unified with the global feature redundancy minimization framework and shows superior performance.
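
For contrast with the paper's global formulation, the greedy mRMR-style baseline it improves on fits in a few lines. A sketch using absolute correlation as the redundancy measure; `scores` stands for any relevance ranking:

```python
import numpy as np

def greedy_mrmr(X, scores, k):
    """Greedy selection trading a per-feature relevance score against
    redundancy (absolute correlation) with already-selected features.
    X: samples x features; scores: relevance score per feature."""
    scores = np.asarray(scores, dtype=float)
    corr = np.abs(np.corrcoef(X, rowvar=False))
    selected = [int(np.argmax(scores))]
    while len(selected) < k:
        red = corr[:, selected].mean(axis=1)  # mean redundancy to chosen set
        crit = scores - red
        crit[selected] = -np.inf              # never re-pick a feature
        selected.append(int(np.argmax(crit)))
    return selected
```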

Journal ArticleDOI
TL;DR: In this paper, a modular multilevel converter control system, based on converter energy storage, is proposed for two different control modes: active power and dc voltage, which decouples the submodule (SM) capacitor voltages from the dc bus voltage.
Abstract: A modular multilevel converter control system, based on converter energy storage, is proposed in this paper for two different control modes: active power and dc voltage. The proposed control system decouples the submodule (SM) capacitor voltages from the dc bus voltage. One of the practical applications is the management of active redundant SMs. A practical HVDC system with 401-level MMCs, including 10% redundancy in MMC SMs, is used for validating and demonstrating the advantages of the proposed control system. This paper also presents a novel capacitor voltage balancing control based on max–min functions. It is used to drastically reduce the number of switchings for each SM and enhances computational efficiency.
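
For context, the conventional sorting-based balancer that such schemes aim to beat (in switching count and computation) looks like the sketch below. The paper's max–min control is a different, cheaper rule, so treat this only as the baseline idea:

```python
import numpy as np

def select_submodules(v_cap, n_on, i_arm):
    """Classic balancing rule: when the arm current charges the capacitors
    (i_arm > 0), insert the n_on lowest-voltage submodules; when it
    discharges them, insert the n_on highest. Keeps capacitor voltages
    clustered at the cost of frequent switching."""
    order = np.argsort(v_cap)
    return order[:n_on] if i_arm > 0 else order[-n_on:]
```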

Proceedings ArticleDOI
24 Aug 2015
TL;DR: The concept of the ε-dominant dataset is defined, which is only a small data set yet can represent the vast information carried by big sensory data with an information loss rate of less than ε, where ε can be arbitrarily small.
Abstract: The amount of sensory data manifests an explosive growth due to the increasing popularity of Wireless Sensor Networks. The scale of the sensory data in many applications already exceeds several petabytes annually, which is beyond the computation and transmission capabilities of conventional WSNs. On the other hand, the information carried by big sensory data has high redundancy because of strong correlation among sensory data. In this paper, we define the concept of the ε-dominant dataset, which is only a small data set yet can represent the vast information carried by big sensory data with an information loss rate of less than ε, where ε can be arbitrarily small. We prove that drawing the minimum ε-dominant dataset is polynomial time solvable and provide a centralized algorithm with O(n^3) time complexity. Furthermore, a distributed algorithm with constant complexity O(1) is also designed. It is shown that the result returned by the distributed algorithm can satisfy the ε requirement with a near-optimal size. Finally, extensive real-world experiments and simulations are carried out. The results indicate that all the proposed algorithms have high performance in terms of accuracy and energy efficiency.
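
One simple way to picture an ε-dominant dataset in one dimension: keep a reading only when it differs from the last kept one by more than ε, so every dropped reading has a kept representative within ε. This greedy sketch is mine and only illustrates the definition; the paper's algorithms achieve (near-)optimal sizes:

```python
import numpy as np

def epsilon_dominant_1d(values, eps):
    """Greedy 1-D illustration: keep a reading only if it is more than eps
    away from the last kept one; every dropped reading then lies within eps
    of a kept representative."""
    kept = []
    for v in np.sort(values):
        if not kept or v - kept[-1] > eps:
            kept.append(float(v))
    return kept

readings = np.array([20.1, 20.15, 20.2, 23.0, 23.05, 29.9])
print(epsilon_dominant_1d(readings, eps=0.2))   # [20.1, 23.0, 29.9]
```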

Journal ArticleDOI
Harish Garg
TL;DR: The objective of this paper is to solve the reliability-redundancy allocation problems of series–parallel systems under various nonlinear resource constraints using penalty-guided biogeography-based optimization.
Abstract: The objective of this paper is to solve the reliability-redundancy allocation problems of series–parallel systems under various nonlinear resource constraints using penalty-guided biogeography-based optimization. In this type of problem, both the number of redundant components and the corresponding component reliability in each subsystem are to be decided simultaneously so as to maximize the reliability of the system. A parameter-free penalty function is used that encourages the algorithm to explore the feasible region and the near-feasible region, and discourages infeasible solutions. Four benchmark reliability-redundancy allocation problems are taken to demonstrate the approach, and it is shown by comparison that its solutions are better than the solutions available in the literature. Finally, statistical simulations have been performed to establish the supremacy of the approach.
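
The objective being maximized is the standard series–parallel system reliability: subsystems in series, each containing n_i parallel redundant components of reliability r_i. In code:

```python
import numpy as np

def system_reliability(r, n):
    """Series-parallel RRAP objective: R_s = prod_i (1 - (1 - r_i)**n_i),
    where subsystem i has n_i parallel components of reliability r_i."""
    r, n = np.asarray(r), np.asarray(n)
    return float(np.prod(1.0 - (1.0 - r) ** n))

print(system_reliability([0.9, 0.8], [2, 3]))  # 0.99 * 0.992 = 0.98208
```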

Proceedings ArticleDOI
27 Aug 2015
TL;DR: The sensor-independent fusion scheme allows for an efficient sensor replacement and realizes redundancy by using probabilistic and generic interfaces, and the performance of the experimental vehicle that was realized during the project is presented along with its software modules.
Abstract: The project “Autonomous Driving” at Ulm University aims at advancing highly-automated driving with close-to-market sensors while ensuring easy exchangeability of the particular components. In this contribution, the experimental vehicle that was realized during the project is presented along with its software modules. To achieve the mentioned goals, a sophisticated fusion approach for robust environment perception is essential. Apart from the necessary motion planning algorithms, this paper thus focuses on the sensor-independent fusion scheme. It allows for an efficient sensor replacement and realizes redundancy by using probabilistic and generic interfaces. Redundancy is ensured by utilizing multiple sensors of different types in crucial modules like grid mapping, localization and tracking. Furthermore, the combination of the module outputs to a consistent environment model is achieved by employing their probabilistic representation. The performance of the vehicle is discussed using the experience from numerous autonomous driving tests on public roads.

Proceedings Article
25 Jul 2015
TL;DR: This paper attempts to build a strong summarizer DivSelect+CNNLM by presenting new algorithms to optimize each of them, and proposes CNNLM, a novel neural network language model (NNLM) based on convolutional neural network (CNN), to project sentences into dense distributed representations, then models sentence redundancy by cosine similarity.
Abstract: Extractive document summarization aims to conclude given documents by extracting some salient sentences. Often, it faces two challenges: 1) how to model the information redundancy among candidate sentences; 2) how to select the most appropriate sentences. This paper attempts to build a strong summarizer, DivSelect+CNNLM, by presenting new algorithms to optimize each of them. Concretely, it proposes CNNLM, a novel neural network language model (NNLM) based on convolutional neural network (CNN), to project sentences into dense distributed representations, then models sentence redundancy by cosine similarity. Afterwards, it formulates the selection process as an optimization problem, constructing a diversified selection process (DivSelect) with the aim of selecting sentences which have high prestige and are, at the same time, dissimilar from each other. Experimental results on DUC2002 and DUC2004 benchmark data sets demonstrate the effectiveness of our approach.
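
The two ingredients (cosine redundancy over sentence embeddings, diversified selection) can be caricatured with a greedy MMR-style loop. The embeddings are assumed given (the paper derives them from its CNN language model), and this greedy stand-in is not the paper's DivSelect optimization:

```python
import numpy as np

def select_diverse(E, prestige, k, lam=0.7):
    """Greedy diversified selection: E (n x d) holds sentence embeddings,
    prestige scores each sentence, and each step picks the sentence with high
    prestige but low cosine similarity to those already chosen."""
    E = E / np.linalg.norm(E, axis=1, keepdims=True)
    sim = E @ E.T                         # pairwise cosine redundancy
    prestige = np.asarray(prestige, dtype=float)
    chosen = [int(np.argmax(prestige))]
    while len(chosen) < k:
        score = lam * prestige - (1 - lam) * sim[:, chosen].max(axis=1)
        score[chosen] = -np.inf
        chosen.append(int(np.argmax(score)))
    return chosen
```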

Journal ArticleDOI
TL;DR: Four RRAP benchmarks are used to demonstrate the applicability of the proposed PSSO, which combines the strengths of both PSO and SSO to optimize the RRAP, a mixed-integer nonlinear programming problem.

Proceedings ArticleDOI
09 Mar 2015
TL;DR: This work proposes a novel family of single-tier ECC mechanisms called Bamboo ECC to simultaneously address the conflicting requirements of increasing reliability while maintaining or decreasing error protection overheads, and shows the significant error coverage and memory lifespan improvements of Bamboo ECC relative to existing SEC-DED, chipkill-correct, and double-chipkill-correct schemes.
Abstract: Growing computer system sizes and levels of integration have made memory reliability a primary concern, necessitating strong memory error protection. As such, large-scale systems typically employ error checking and correcting codes to trade redundant storage and bandwidth for increased reliability. While stronger memory protection will be needed to meet reliability targets in the future, it is undesirable to further increase the amount of storage and bandwidth spent on redundancy. We propose a novel family of single-tier ECC mechanisms called Bamboo ECC to simultaneously address the conflicting requirements of increasing reliability while maintaining or decreasing error protection overheads. Relative to the state-of-the-art single-tier error protection, Bamboo ECC codes have superior correction capabilities, all but eliminate the risk of silent data corruption, and can also increase redundancy at a fine granularity, enabling more adaptive graceful downgrade schemes. These strength, safety, and flexibility advantages translate to a significantly more reliable memory system. To demonstrate this, we evaluate a family of Bamboo ECC organizations in the context of conventional 72b and 144b DRAM channels and show the significant error coverage and memory lifespan improvements of Bamboo ECC relative to existing SEC-DED, chipkill-correct and double-chipkill-correct schemes.

Journal ArticleDOI
TL;DR: Considering the combined effects of the ac system and the dc system, this article proposes two indexes, the dynamic redundancy and the utilization ratio of the submodules, and applies the nearest level control as the modulation method.
Abstract: Considering the combined effects of the ac system and the dc system, this paper proposes two novel indexes, the dynamic redundancy and the utilization ratio of the submodules. On this basis, the nearest level control is applied as the modulation method and an optimized control strategy based on the dynamic redundancy for the modular multilevel converter (MMC) is proposed. One of the main innovations is that the reference value of the capacitor voltage is derived according to the maximum output voltage of each converter arm and a safety margin which can be adjusted artificially. Unlike previous strategies, the redundancy can be adjusted dynamically, and the utilization ratio of the submodules can be effectively improved. In addition, the capacitor voltage and the inner stress are reduced, and the fault ride-through capability of the system can be enhanced. In particular, the strategy under abnormal operating conditions is also detailed in this paper. A model of a two-terminal MMC-based high-voltage direct current system is built in PSCAD/EMTDC, and the simulation results prove the validity and the feasibility of the proposed strategy.

Proceedings Article
04 May 2015
TL;DR: CosTLO is designed to satisfy any application's goals for latency variance by estimating the latency variance offered by any particular configuration, efficiently searching through the configuration space to select a cost-effective configuration among the ones that can offer the desired latency variance.
Abstract: We present CosTLO, a system that reduces the high latency variance associated with cloud storage services by augmenting GET/PUT requests issued by end-hosts with redundant requests, so that the earliest response can be considered. To reduce the cost overhead imposed by redundancy, unlike prior efforts that have used this approach, CosTLO combines the use of multiple forms of redundancy. Since this results in a large number of configurations in which CosTLO can issue redundant requests, we conduct a comprehensive measurement study on S3 and Azure to identify the configurations that are viable in practice. Informed by this study, we design CosTLO to satisfy any application's goals for latency variance by 1) estimating the latency variance offered by any particular configuration, 2) efficiently searching through the configuration space to select a cost-effective configuration among the ones that can offer the desired latency variance, and 3) preserving data consistency despite CosTLO's use of redundant requests. We show that, for the median PlanetLab node, CosTLO can halve the latency variance associated with fetching content from Amazon S3, with only a 25% increase in cost.
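
The core client-side mechanism, issuing redundant copies of a GET and taking the earliest response, takes only a few lines with a thread pool. A minimal sketch: `urls` stands for whatever distinct redundant paths to the storage service are configured, and cancellation of stragglers is best-effort:

```python
import urllib.request
from concurrent.futures import FIRST_COMPLETED, ThreadPoolExecutor, wait

def redundant_get(urls, timeout=5.0):
    """Issue the same GET along several redundant paths (e.g. different
    front-end endpoints or object replicas) and return the first response."""
    def fetch(u):
        with urllib.request.urlopen(u, timeout=timeout) as resp:
            return resp.read()

    pool = ThreadPoolExecutor(max_workers=len(urls))
    try:
        futures = [pool.submit(fetch, u) for u in urls]
        done, not_done = wait(futures, return_when=FIRST_COMPLETED)
        for f in not_done:
            f.cancel()              # best effort; running copies still finish
        return next(iter(done)).result()
    finally:
        pool.shutdown(wait=False)   # do not block on the slower copies
```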

Journal ArticleDOI
TL;DR: It is shown that the singularities of this type of mechanism are governed by the orientation of a passive link connecting the redundant leg to the platform and that the latter orientation is easily controlled using the kinematic redundancy, thereby alleviating all direct kinematic singularities.
Abstract: This paper introduces a novel family of singularity-free kinematically redundant planar parallel mechanisms that have unlimited rotational capabilities. The proposed mechanisms are akin to conventional three-degree-of-freedom planar parallel mechanisms. By introducing a novel kinematically redundant arrangement, four-degree-of-freedom parallel mechanisms are obtained that can completely alleviate singularities and provide unlimited rotational capabilities. The kinematics of the mechanisms are derived, and the Jacobian matrices are obtained. It is shown that the singularities of this type of mechanism are governed by the orientation of a passive link connecting the redundant leg to the platform and that the latter orientation is easily controlled using the kinematic redundancy, thereby alleviating all direct kinematic singularities. An example mechanism is proposed, and a prototype is demonstrated. Example trajectories that include full cycle rotations are shown. The prototype also illustrates the use of the kinematic redundancy for an auxiliary task, namely grasping.

Journal ArticleDOI
TL;DR: In this letter, direction-of-arrival (DOA) estimation of a mixture of coherent and uncorrelated targets is performed using sparse reconstruction and active nonuniform arrays for performance evaluation of the proposed sparsity-based active sensing approach.
Abstract: In this letter, direction-of-arrival (DOA) estimation of a mixture of coherent and uncorrelated targets is performed using sparse reconstruction and active nonuniform arrays. The data measurements from multiple transmit and receive elements can be considered as observations from the sum coarray corresponding to the physical transmit/receive arrays. The vectorized covariance matrix of the sum coarray observations emulates the received data at a virtual array whose elements are given by the difference coarray of the sum coarray (DCSC). Sparse reconstruction is used to fully exploit the significantly enhanced degrees-of-freedom offered by the DCSC for DOA estimation. Simulated data from multiple-input multiple-output minimum redundancy arrays and transmit/receive co-prime arrays are used for performance evaluation of the proposed sparsity-based active sensing approach.
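
The virtual array geometry is purely combinatorial and easy to compute: form the sum coarray of the transmit and receive positions, then take its difference coarray. A sketch in half-wavelength units (generic illustration of the coarray construction, not the letter's estimator):

```python
import numpy as np

def dcsc(tx, rx):
    """Difference coarray of the sum coarray (DCSC): the virtual element
    positions exploited by the sparsity-based DOA estimator."""
    sum_coarray = np.unique([t + r for t in tx for r in rx])
    return np.unique([a - b for a in sum_coarray for b in sum_coarray])

# Example: a 4-element minimum redundancy array used for both Tx and Rx.
mra = [0, 1, 4, 6]
print(len(mra), len(dcsc(mra, mra)))  # far more virtual lags than elements
```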

Journal ArticleDOI
TL;DR: This paper investigates fast and coordinated data backup in geographically distributed (geo-distributed) optical inter-DC networks and proposes heuristics based on adaptive reconfiguration (AR), finding that AR-TwoStep-ILP achieves the best tradeoff between DC-B-Wnd and operational complexity and is also the most time-efficient one.
Abstract: In an optical inter-datacenter (inter-DC) network, for preventing data loss, a cloud system usually leverages multiple DCs for obtaining sufficient data redundancy. In order to improve the data-transfer efficiency of the regular DC backup, this paper investigates fast and coordinated data backup in geographically distributed (geo-distributed) optical inter-DC networks. By considering a mutual backup model, in which DCs can serve as the backup sites of each other, we study how to finish the regular DC backup within the shortest time duration (i.e., DC backup window (DC-B-Wnd)). Specifically, we try to minimize DC-B-Wnd with joint optimization of the backup site selection and the data-transfer paths. An integer linear programming (ILP) model is first formulated, and then we propose several heuristics to reduce the time complexity. Moreover, in order to explore the tradeoff between DC-B-Wnd and operational complexity, we propose heuristics based on adaptive reconfiguration (AR). Extensive simulations indicate that among all the proposed heuristics, AR-TwoStep-ILP achieves the best tradeoff between DC-B-Wnd and operational complexity and it is also the most time-efficient one.

Journal ArticleDOI
TL;DR: Hardware and software techniques to facilitate reliable docking of elements in the presence of estimation and actuation errors are described, and how these local variable stiffness connections may be used to control the structural properties of the larger assembly are considered.
Abstract: We present the methodology, algorithms, system design, and experiments addressing the self-assembly of large teams of autonomous robotic boats into floating platforms. Identical self-propelled robotic boats autonomously dock together and form connected structures with controllable variable stiffness. These structures can self-reconfigure into arbitrary shapes limited only by the number of rectangular elements assembled in brick-like patterns. An O(m^2) complexity algorithm automatically generates assembly plans which maximize opportunities for parallelism while constructing operator-specified target configurations with m components. The system further features an O(n^3) complexity algorithm for the concurrent assignment and planning of trajectories from n free robots to the growing structure. Such peer-to-peer assembly among modular robots compares favorably to a single active element assembling passive components in terms of both construction rate and potential robustness through redundancy. We describe hardware and software techniques to facilitate reliable docking of elements in the presence of estimation and actuation errors, and we consider how these local variable stiffness connections may be used to control the structural properties of the larger assembly. Assembly experiments validate these ideas in a fleet of 0.5 m long modular robotic boats with onboard thrusters, active connectors, and embedded computers.

Journal ArticleDOI
TL;DR: A reinforcement learning based mechanism to perform value-redundancy filtering and load-balancing routing according to the values and distribution of data flows in order to improve the energy efficiency and self-adaptability to environmental changes for WSNs.
Abstract: Software defined wireless networks (SDWNs) present an innovative framework for virtualized network control and flexible architecture design of wireless sensor networks (WSNs). However, the decoupled control and data planes and the logically centralized control in SDWNs may cause high energy consumption and resource waste during system operation, hindering their application in WSNs. In this paper, we propose a software defined WSN (SDWSN) prototype to improve the energy efficiency and adaptability of WSNs for environmental monitoring applications, taking into account the constraints of WSNs in terms of energy, radio resources, and computational capabilities, and the value redundancy and distributed nature of data flows in periodic transmissions for monitoring applications. Particularly, we design a reinforcement learning based mechanism to perform value-redundancy filtering and load-balancing routing according to the values and distribution of data flows, respectively, in order to improve the energy efficiency and self-adaptability to environmental changes for WSNs. The optimal matching rules in flow table are designed to curb the control signaling overhead and balance the distribution of data flows for achieving in-network fusion in data plane with guaranteed quality of service (QoS). Experiment results show that the proposed SDWSN prototype can effectively improve the energy efficiency and self-adaptability of environmental monitoring WSNs with QoS.
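
As a rough picture of the learning mechanism, a classic Q-routing update for next-hop selection is shown below. The state/action naming is mine, and the paper's reward additionally encodes value redundancy and load balance:

```python
def q_routing_update(Q, node, dest, nbr, cost, alpha=0.1, gamma=0.9):
    """One Q-learning update for next-hop choice: Q[node][dest][nbr] estimates
    the cost-to-go (e.g. delay or energy) of forwarding traffic for dest via
    neighbor nbr; cost is the locally observed one-hop cost."""
    best_downstream = min(Q[nbr][dest].values())   # nbr's best estimate
    Q[node][dest][nbr] += alpha * (cost + gamma * best_downstream
                                   - Q[node][dest][nbr])

# Q maps node -> destination -> neighbor -> estimated cost-to-go.
Q = {"A": {"D": {"B": 2.0, "C": 3.0}},
     "B": {"D": {"D": 1.0}},
     "C": {"D": {"D": 1.5}}}
q_routing_update(Q, "A", "D", "B", cost=0.5)
print(Q["A"]["D"]["B"])   # 1.94: moves toward cost + gamma * 1.0
```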

Journal ArticleDOI
TL;DR: A general scheme for dealing with feature selection with “controlled redundancy” (CoR) and a new more effective training scheme named mFSMLP-CoR, which not only improves the performance of the system, but also significantly reduces the dependency of the network's behavior on the initialization of connection weights.
Abstract: We first present a feature selection method based on a multilayer perceptron (MLP) neural network, called feature selection MLP (FSMLP). We explain how FSMLP can select essential features and discard derogatory and indifferent features. Such a method may pick up some useful but dependent (say, correlated) features, all of which may not be needed. We then propose a general scheme for dealing with feature selection with “controlled redundancy” (CoR). The proposed scheme, named FSMLP-CoR, can select features with a controlled redundancy both for classification and function approximation/prediction type problems. We have also proposed a new, more effective training scheme named mFSMLP-CoR. The idea is general in nature and can be used with other learning schemes as well. We demonstrate the effectiveness of the algorithms using several data sets, including a synthetic data set. We also show that the selected features are adequate to solve the problem at hand. Here, we have considered a measure of linear dependency to control the redundancy; the use of nonlinear measures of dependency, such as mutual information, is straightforward. The proposed schemes have some advantages: they do not require explicit evaluation of feature subsets, and feature selection is integrated into the design of the decision-making system, so the method can look at all features together and pick up whatever is necessary. Our methods can account for possible subtle nonlinear interactions between features, as well as those between features, the tool, and the problem being solved. They can also control the level of redundancy in the selected features. Of the two learning schemes, mFSMLP-CoR not only improves the performance of the system but also significantly reduces the dependency of the network's behavior on the initialization of connection weights.

Posted Content
TL;DR: A general redundancy strategy is designed that achieves a good latency-cost trade-off for an arbitrary service time distribution and generalizes and extends some results in the analysis of fork-join queues.
Abstract: In cloud computing systems, assigning a task to multiple servers and waiting for the earliest copy to finish is an effective method to combat the variability in response time of individual servers, and reduce latency. But adding redundancy may result in higher cost of computing resources, as well as an increase in queueing delay due to higher traffic load. This work helps understand when and how redundancy gives a cost-efficient reduction in latency. For a general task service time distribution, we compare different redundancy strategies in terms of the number of redundant tasks, and time when they are issued and canceled. We get the insight that the log-concavity of the task service time creates a dichotomy of when adding redundancy helps. If the service time distribution is log-convex (i.e. log of the tail probability is convex) then adding maximum redundancy reduces both latency and cost. And if it is log-concave (i.e. log of the tail probability is concave), then less redundancy, and early cancellation of redundant tasks is more effective. Using these insights, we design a general redundancy strategy that achieves a good latency-cost trade-off for an arbitrary service time distribution. This work also generalizes and extends some results in the analysis of fork-join queues.
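
The dichotomy can be sampled directly for the no-queueing case: with k redundant copies on idle servers, latency is the minimum of k service draws, while cost grows with k. For a log-convex service time (e.g. a hyperexponential) the minimum improves dramatically; for a log-concave one (e.g. a shifted exponential) each extra copy buys little. A Monte Carlo sketch with illustrative parameters:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

def mean_min_of(sampler, k):
    """E[min of k i.i.d. copies]: latency with k redundant copies on idle
    servers (queueing effects, central to the paper, are ignored here)."""
    return sampler((n, k)).min(axis=1).mean()

# Log-convex: hyperexponential mix of fast and slow service.
hyper = lambda size: np.where(rng.random(size) < 0.9,
                              rng.exponential(0.5, size),
                              rng.exponential(5.0, size))
# Log-concave: shifted exponential; the constant 1.0 is paid by every copy.
shifted = lambda size: 1.0 + rng.exponential(0.5, size)

for k in (1, 2, 4):
    print(k, round(mean_min_of(hyper, k), 3), round(mean_min_of(shifted, k), 3))
```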