
Showing papers on "Redundancy (engineering) published in 2021"


Journal ArticleDOI
Alex Bateman, Maria Jesus Martin, Sandra Orchard, Michele Magrane, Rahat Agivetova, Shadab Ahmad, Emanuele Alpi, Emily H Bowler-Barnett, Ramona Britto, Borisas Bursteinas, Hema Bye-A-Jee, Ray Coetzee, Austra Cukura, Alan Wilter Sousa da Silva, Paul Denny, Tunca Doğan, ThankGod Ebenezer, Jun Fan, Leyla Jael Garcia Castro, Penelope Garmiri, George Georghiou, Leonardo Gonzales, Emma Hatton-Ellis, Abdulrahman Hussein, Alexandr Ignatchenko, Giuseppe Insana, Rizwan Ishtiaq, Petteri Jokinen, Vishal Joshi, Dushyanth Jyothi, Antonia Lock, Rodrigo Lopez, Aurelien Luciani, Jie Luo, Yvonne Lussi, Alistair MacDougall, Fábio Madeira, Mahdi Mahmoudy, Manuela Menchi, Alok Mishra, Katie Moulang, Andrew Nightingale, Carla Susana Oliveira, Sangya Pundir, Guoying Qi, Shriya Raj, Daniel Rice, Milagros Rodriguez Lopez, Rabie Saidi, Joseph Sampson, Tony Sawford, Elena Speretta, Edward Turner, Nidhi Tyagi, Preethi Vasudev, Vladimir Volynkin, Kate Warner, Xavier Watkins, Rossana Zaru, Hermann Zellner, Alan Bridge, Sylvain Poux, Nicole Redaschi, Lucila Aimo, Ghislaine Argoud-Puy, Andrea H. Auchincloss, Kristian B. Axelsen, Parit Bansal, Delphine Baratin, Marie-Claude Blatter, Jerven Bolleman, Emmanuel Boutet, Lionel Breuza, Cristina Casals-Casas, Edouard de Castro, Kamal Chikh Echioukh, Elisabeth Coudert, Béatrice A. Cuche, M Doche, Dolnide Dornevil, Anne Estreicher, Maria Livia Famiglietti, Marc Feuermann, Elisabeth Gasteiger, Sebastien Gehant, Vivienne Baillie Gerritsen, Arnaud Gos, Nadine Gruaz-Gumowski, Ursula Hinz, Chantal Hulo, Nevila Hyka-Nouspikel, Florence Jungo, Guillaume Keller, Arnaud Kerhornou, Vicente Lara, Philippe Le Mercier, Damien Lieberherr, Thierry Lombardot, Xavier D. Martin, Patrick Masson, Anne Morgat, Teresa Batista Neto, Salvo Paesano, Ivo Pedruzzi, Sandrine Pilbout, Lucille Pourcel, Monica Pozzato, Manuela Pruess, Catherine Rivoire, Christian J. A. Sigrist, K Sonesson, Andre Stutz, Shyamala Sundaram, Michael Tognolli, Laure Verbregue, Cathy H. Wu, Cecilia N. 
Arighi, Leslie Arminski, Chuming Chen, Yongxing Chen, John S. Garavelli, Hongzhan Huang, Kati Laiho, Peter B. McGarvey, Darren A. Natale, Karen E. Ross, C. R. Vinayaka, Qinghua Wang, Yuqi Wang, Lai-Su L. Yeh, Jian Zhang, Patrick Ruch, Douglas Teodoro 
TL;DR: UniProtKB responded to the COVID-19 pandemic through expert curation of relevant entries that were rapidly made available to the research community through a dedicated portal; a credit-based publication submission interface was also developed.
Abstract: The aim of the UniProt Knowledgebase is to provide users with a comprehensive, high-quality and freely accessible set of protein sequences annotated with functional information. In this article, we describe significant updates that we have made over the last two years to the resource. The number of sequences in UniProtKB has risen to approximately 190 million, despite continued work to reduce sequence redundancy at the proteome level. We have adopted new methods of assessing proteome completeness and quality. We continue to extract detailed annotations from the literature to add to reviewed entries and supplement these in unreviewed entries with annotations provided by automated systems such as the newly implemented Association-Rule-Based Annotator (ARBA). We have developed a credit-based publication submission interface to allow the community to contribute publications and annotations to UniProt entries. We describe how UniProtKB responded to the COVID-19 pandemic through expert curation of relevant entries that were rapidly made available to the research community through a dedicated portal. UniProt resources are available under a CC-BY (4.0) license via the web at https://www.uniprot.org/.

4,001 citations


Proceedings ArticleDOI
20 Jun 2021
TL;DR: In this article, the authors statistically model network pruning from a redundancy-reduction perspective and, based on this analysis, propose an approach that identifies the structural redundancy of a CNN and prunes filters in the layer(s) with the most redundancy, significantly outperforming the previous state-of-the-art.
Abstract: Convolutional neural network (CNN) pruning has become one of the most successful network compression approaches in recent years. Existing works on network pruning usually focus on removing the least important filters in the network to achieve compact architectures. In this study, we claim that identifying structural redundancy plays a more essential role than finding unimportant filters, theoretically and empirically. We first statistically model the network pruning problem in a redundancy reduction perspective and find that pruning in the layer(s) with the most structural redundancy outperforms pruning the least important filters across all layers. Based on this finding, we then propose a network pruning approach that identifies structural redundancy of a CNN and prunes filters in the selected layer(s) with the most redundancy. Experiments on various benchmark network architectures and datasets show that our proposed approach significantly outperforms the previous state-of-the-art.
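The layer-selection idea in the abstract above can be sketched in a few lines. The paper derives structural redundancy from a statistical model; as a simplifying assumption, the sketch below uses the mean pairwise cosine similarity between a layer's flattened filters as the redundancy score, and L1 norm as the per-filter importance proxy.

```python
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    denom = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / denom if denom else 0.0

def layer_redundancy(filters):
    """Proxy redundancy score: mean pairwise |cosine similarity|
    among a layer's flattened filters."""
    n = len(filters)
    if n < 2:
        return 0.0
    sims = [abs(cosine(filters[i], filters[j]))
            for i in range(n) for j in range(i + 1, n)]
    return sum(sims) / len(sims)

def prune_most_redundant_layer(layers, keep_ratio=0.5):
    """Select the layer with the highest redundancy score, then keep
    only its largest-L1-norm filters (a common importance proxy)."""
    scores = [layer_redundancy(f) for f in layers]
    target = max(range(len(layers)), key=lambda i: scores[i])
    keep = max(1, int(len(layers[target]) * keep_ratio))
    ranked = sorted(layers[target], key=lambda f: sum(abs(x) for x in f),
                    reverse=True)
    layers[target] = ranked[:keep]
    return target, layers
```

A layer of near-duplicate filters is selected for pruning before a layer of orthogonal filters, which is the behavior the redundancy-first argument predicts.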

115 citations


Posted Content
Jure Zbontar1, Li Jing1, Ishan Misra1, Yann LeCun1, Stéphane Deny1 
TL;DR: Barlow Twins as mentioned in this paper proposes to measure the cross-correlation matrix between the outputs of two identical networks fed with distorted versions of a sample, and make it as close to the identity matrix as possible.
Abstract: Self-supervised learning (SSL) is rapidly closing the gap with supervised methods on large computer vision benchmarks. A successful approach to SSL is to learn embeddings which are invariant to distortions of the input sample. However, a recurring issue with this approach is the existence of trivial constant solutions. Most current methods avoid such solutions by careful implementation details. We propose an objective function that naturally avoids collapse by measuring the cross-correlation matrix between the outputs of two identical networks fed with distorted versions of a sample, and making it as close to the identity matrix as possible. This causes the embedding vectors of distorted versions of a sample to be similar, while minimizing the redundancy between the components of these vectors. The method is called Barlow Twins, owing to neuroscientist H. Barlow's redundancy-reduction principle applied to a pair of identical networks. Barlow Twins does not require large batches nor asymmetry between the network twins such as a predictor network, gradient stopping, or a moving average on the weight updates. Intriguingly it benefits from very high-dimensional output vectors. Barlow Twins outperforms previous methods on ImageNet for semi-supervised classification in the low-data regime, and is on par with current state of the art for ImageNet classification with a linear classifier head, and for transfer tasks of classification and object detection.
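The objective described above is compact enough to sketch directly: standardize each embedding dimension across the batch, form the cross-correlation matrix of the two views, and penalize its distance from the identity. This is a minimal pure-Python rendering of the published loss; the trade-off weight `lam` is a typical but illustrative value.

```python
import math

def normalize_columns(z):
    """Standardize each embedding dimension to zero mean and unit std
    across the batch (constant dimensions are left at zero)."""
    n, d = len(z), len(z[0])
    cols = []
    for j in range(d):
        col = [row[j] for row in z]
        mu = sum(col) / n
        sd = math.sqrt(sum((x - mu) ** 2 for x in col) / n) or 1.0
        cols.append([(x - mu) / sd for x in col])
    return [[cols[j][i] for j in range(d)] for i in range(n)]

def barlow_twins_loss(za, zb, lam=5e-3):
    """Cross-correlate the two views' normalized embeddings and penalize
    the matrix's distance from the identity: diagonal terms enforce
    invariance, off-diagonal terms reduce redundancy."""
    za, zb = normalize_columns(za), normalize_columns(zb)
    n, d = len(za), len(za[0])
    loss = 0.0
    for i in range(d):
        for j in range(d):
            c = sum(za[k][i] * zb[k][j] for k in range(n)) / n
            loss += (c - 1.0) ** 2 if i == j else lam * c * c
    return loss
```

Identical twin embeddings with decorrelated dimensions yield zero loss, while a collapsed (constant) embedding is heavily penalized, illustrating how the objective avoids trivial solutions.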

99 citations


Journal ArticleDOI
TL;DR: This work uses an idea of error-correcting feedback structure to capture the local features of point clouds comprehensively and applies CNN-based structures in high-level feature spaces to learn local geometric context implicitly.
Abstract: As the basic task of point cloud analysis, classification is fundamental but always challenging. To address some unsolved problems of existing methods, we propose a network that captures geometric features of point clouds for better representations. To achieve this, on the one hand, we enrich the geometric information of points in low-level 3D space explicitly. On the other hand, we apply CNN-based structures in high-level feature spaces to learn local geometric context implicitly. Specifically, we leverage an idea of error-correcting feedback structure to capture the local features of point clouds comprehensively. Furthermore, an attention module based on channel affinity assists the feature map to avoid possible redundancy by emphasizing its distinct channels. The performance on both synthetic and real-world point cloud datasets demonstrates the superiority and applicability of our network. Compared with other state-of-the-art methods, our approach balances accuracy and efficiency.

95 citations


Journal ArticleDOI
TL;DR: This work proposes a novel coded computing framework, CodedFedL, that injects structured coding redundancy into federated learning for mitigating stragglers and speeding up the training procedure.
Abstract: Federated learning enables training a global model from data located at the client nodes, without data sharing and moving client data to a centralized server. Performance of federated learning in a multi-access edge computing (MEC) network suffers from slow convergence due to heterogeneity and stochastic fluctuations in compute power and communication link qualities across clients. We propose a novel coded computing framework, CodedFedL, that injects structured coding redundancy into federated learning for mitigating stragglers and speeding up the training procedure. CodedFedL enables coded computing for non-linear federated learning by efficiently exploiting distributed kernel embedding via random Fourier features that transforms the training task into computationally favourable distributed linear regression. Furthermore, clients generate local parity datasets by coding over their local datasets, while the server combines them to obtain the global parity dataset. Gradient from the global parity dataset compensates for straggling gradients during training, and thereby speeds up convergence. For minimizing the epoch deadline time at the MEC server, we provide a tractable approach for finding the amount of coding redundancy and the number of local data points that a client processes during training, by exploiting the statistical properties of compute as well as communication delays. We also characterize the leakage in data privacy when clients share their local parity datasets with the server. Additionally, we analyze the convergence rate and iteration complexity of CodedFedL under simplifying assumptions, by treating CodedFedL as a stochastic gradient descent algorithm. Finally, for demonstrating gains that CodedFedL can achieve in practice, we conduct numerical experiments using practical network parameters and benchmark datasets, in which CodedFedL speeds up the overall training time by up to 15× in comparison to the benchmark schemes.
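The parity-coding step described above can be illustrated with a toy sketch: each client multiplies its local dataset by a random generator matrix to produce parity points, and the server sums the clients' parities into a global parity dataset. This omits the kernel embedding and the load-allocation optimization that CodedFedL builds on, and the Gaussian generator matrix is an illustrative assumption.

```python
import random

def matmul(a, b):
    """Plain nested-list matrix product."""
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

def encode_local_parity(x, y, num_parity, rng):
    """Client side: code local features x (n x d) and labels y (n x 1)
    into num_parity parity points via a random generator matrix."""
    g = [[rng.gauss(0.0, 1.0) for _ in range(len(x))]
         for _ in range(num_parity)]
    return matmul(g, x), matmul(g, y)

def global_parity(parts):
    """Server side: element-wise sum of the clients' parity matrices."""
    out = [[0.0] * len(parts[0][0]) for _ in range(len(parts[0]))]
    for p in parts:
        for i, row in enumerate(p):
            for j, v in enumerate(row):
                out[i][j] += v
    return out
```

During training, a gradient computed on the global parity dataset can stand in for the gradients of straggling clients, which is the source of the speedup the abstract reports.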

80 citations


Proceedings ArticleDOI
20 Jun 2021
TL;DR: Hu et al. as mentioned in this paper proposed a new paradigm (dubbed ManiDP) that dynamically removes redundant filters by embedding the manifold information of all instances into the space of pruned networks and aligning the manifold relationship between instances and the pruned sub-networks.
Abstract: Neural network pruning is an essential approach for reducing the computational complexity of deep models so that they can be well deployed on resource-limited devices. Compared with conventional methods, the recently developed dynamic pruning methods determine redundant filters variant to each input instance which achieves higher acceleration. Most of the existing methods discover effective subnetworks for each instance independently and do not utilize the relationship between different inputs. To maximally excavate redundancy in the given network architecture, this paper proposes a new paradigm that dynamically removes redundant filters by embedding the manifold information of all instances into the space of pruned networks (dubbed as ManiDP). We first investigate the recognition complexity and feature similarity between images in the training set. Then, the manifold relationship between instances and the pruned sub-networks will be aligned in the training procedure. The effectiveness of the proposed method is verified on several benchmarks, which shows better performance in terms of both accuracy and computational cost compared to the state-of-the-art methods. For example, our method can reduce 55.3% FLOPs of ResNet-34 with only 0.57% top-1 accuracy degradation on ImageNet. The code will be available at https://github.com/huawei-noah/Pruning/tree/master/ManiDP.

69 citations


Journal ArticleDOI
TL;DR: This article proposes a data-driven CMG scheme and a corresponding novel dynamic neural network (DNN), which exploits learning and control simultaneously to complete the kinematic control of manipulators with unknown models.
Abstract: Redundant manipulators are indispensable devices in industrial production. There are various works on the redundancy resolution of redundant manipulators performing a given task with the manipulator model information known. However, it remains difficult to precisely control redundant manipulators with unknown models to complete a cyclic-motion generation (CMG) task. Inspired by this problem, this article proposes a data-driven CMG scheme and a corresponding novel dynamic neural network (DNN), which exploits learning and control simultaneously to complete the kinematic control of manipulators with unknown models. Notably, the proposed method is capable of accurately estimating the Jacobian matrix to obtain the structure information of the manipulator and theoretically eliminates the tracking errors. Theoretical analyses prove the convergence of the learning and control parts under the necessary noise conditions. Computer simulation results and comparisons of different controllers illustrate the reliability and superior performance of the proposed method, with strong learning and control abilities. This work is of great significance for the redundancy resolution of redundant manipulators with unknown models or unknown loads in practice.

67 citations


Journal ArticleDOI
TL;DR: The feedback attention modules are developed for the first time to enhance the attention map with the semantic knowledge from the high-level layer of the dense model, and the spatial attention module is strengthened by considering multiscale spatial information.
Abstract: Hyperspectral image classification (HSIC) methods based on convolutional neural network (CNN) continue to progress in recent years. However, high complexity, information redundancy, and inefficient description still are the main barriers to the current HSIC networks. To address the mentioned problems, we present a spatial-spectral dense CNN framework with a feedback attention mechanism called FADCNN for HSIC in this article. The proposed architecture assembles the spectral-spatial feature in a compact connection style to extract sufficient information independently with two separate dense CNN networks. Specifically, the feedback attention modules are developed for the first time to enhance the attention map with the semantic knowledge from the high-level layer of the dense model, and we strengthen the spatial attention module by considering multiscale spatial information. To further improve the computation efficiency and the discrimination of the feature representation, the band attention module is designed to emphasize the weight of the bands that participated in the classification training. Besides, the spatial-spectral features are integrated and mined intensely for better refinement in the feature mining network. The extensive experimental results on real hyperspectral images (HSI) demonstrate that the proposed FADCNN architecture has significant advantages compared with other state-of-the-art methods.

64 citations


Journal ArticleDOI
TL;DR: The results indicate that the proposed adaptive variational mode decomposition (AVMD) method outperforms existing approaches in separating impulsive multi-fault signals, making it an efficient method for multi-fault diagnosis of rotating machines.
Abstract: Vibration-based feature extraction of multiple transient fault signals is a challenge in the field of rotating machinery fault diagnosis. Variational mode decomposition (VMD) has great potential for decoupling multiple faults because of its equivalent filtering characteristics. However, the two key hyper-parameters of VMD, i.e., the number of modes and the balancing parameter, must be predefined, resulting in sub-optimal decomposition performance. Although some studies have focused on adaptive parameter determination, problems in these improved methods, such as mode redundancy and sensitivity to random impacts, still need to be solved. To overcome these drawbacks, an adaptive variational mode decomposition (AVMD) method is developed in this paper. In the proposed method, a novel index called the syncretic impact index (SII) is first introduced for better evaluation of the complex impulsive fault components of signals. It can exclude the effects of interference terms and concentrate on the fault impacts effectively. The optimal parameters of VMD are selected based on the SII through the artificial bee colony (ABC) algorithm. The envelope power spectrum, proved to be more capable of fault feature extraction than the envelope spectrum, is applied in this study. Analysis of simulated signals and two experimental applications demonstrates the method's effectiveness over other existing methods. The results indicate that the proposed method excels at separating impulsive multi-fault signals, making it an efficient method for multi-fault diagnosis of rotating machines.

59 citations



Journal ArticleDOI
TL;DR: This paper formally establishes filter pruning as a multiobjective optimization problem, and proposes a knee-guided evolutionary algorithm (KGEA) that can automatically search for the solution with quality tradeoff between the scale of parameters and performance, in which both conflicting objectives can be optimized simultaneously.
Abstract: Deep neural networks (DNNs) have been regarded as fundamental tools for many disciplines. Meanwhile, they are known for their large-scale parameters, high redundancy in weights, and extensive computing resource consumptions, which pose a tremendous challenge to the deployment in real-time applications or on resource-constrained devices. To cope with this issue, compressing DNNs to accelerate their inference has drawn extensive interest recently. The basic idea is to prune parameters with little performance degradation. However, the overparameterized nature and the conflict between parameters reduction and performance maintenance make it prohibitive to manually search the pruning parameter space. In this paper, we formally establish filter pruning as a multiobjective optimization problem, and propose a knee-guided evolutionary algorithm (KGEA) that can automatically search for the solution with quality tradeoff between the scale of parameters and performance, in which both conflicting objectives can be optimized simultaneously. In particular, by incorporating a minimum Manhattan distance approach, the search effort in the proposed KGEA is explicitly guided toward the knee area, which greatly facilitates the manual search for a good tradeoff solution. Moreover, the parameter importance is directly estimated on the criterion of performance loss, which can robustly identify the redundancy. In addition to the knee solution, a performance-improved model can also be found in a fine-tuning-free fashion. The experiments on compressing fully convolutional LeNet and VGG-19 networks validate the superiority of the proposed algorithm over the state-of-the-art competing methods.
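The knee-guided idea above, applied to a finished Pareto front, reduces to picking the solution nearest the ideal point. A minimal sketch using the minimum Manhattan distance criterion the abstract names, assuming both objectives (e.g., parameter scale and performance loss) are already on comparable scales:

```python
def knee_point(front):
    """Choose the knee of a 2-D Pareto front: the solution with the
    minimum Manhattan distance to the ideal point, i.e. the (usually
    unattainable) combination of the best value of each objective."""
    ideal = (min(p[0] for p in front), min(p[1] for p in front))
    return min(front, key=lambda p: abs(p[0] - ideal[0]) + abs(p[1] - ideal[1]))
```

On a front with two extreme solutions and one balanced one, the balanced solution is returned, which is the trade-off the KGEA steers its search effort toward.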

Journal ArticleDOI
TL;DR: Aiming at the tough requirements for safety and maintainability in mobile hydraulic systems, an active fault-tolerant control (FTC) system is proposed against the valve faults of an independent metering valve.
Abstract: Aiming at the tough requirements for safety and maintainability in mobile hydraulic systems, in this article, an active fault-tolerant control (FTC) system is proposed against the valve faults of an independent metering valve. Not only moderate faults of activated valves, but also significant faults and unactivated valve faults are considered. Without additional hardware redundancy, analytical redundancy is derived by the coordinated control of other fault-free valves. Accordingly, an FTC system in parallel with a normal controller is designed based on pressure feedback. It consists of a set of reconfigurable controllers and a decision mechanism. The control signals, control loops, and operating modes can all be precisely reconfigured to adaptively match unmodeled fault dynamics. A bumpless transfer controller based on a latent tracking loop is designed to smooth the switching between the normal controller and the FTC. Consequently, random valve faults can be tolerated with minor degradations in motion tracking and energy-saving performance. The feasibility of the FTC system is evaluated on a 2-ton excavator.

Journal ArticleDOI
TL;DR: Three acceleration-level joint-drift-free (ALJDF) schemes for kinematic control of redundant manipulators are proposed and analyzed from the perspectives of dynamics and kinematics, with corresponding tracking error analyses, to enhance product quality and production efficiency in industrial production.
Abstract: In this article, three acceleration-level joint-drift-free (ALJDF) schemes for kinematic control of redundant manipulators are proposed and analyzed from perspectives of dynamics and kinematics with the corresponding tracking error analyses. First, the existing ALJDF schemes for kinematic control of redundant manipulators are systematized into a generalized acceleration-level joint-drift-free scheme with a paradox pointing out the theoretical existence of the velocity error related to joint drift. Second, to remedy the deficiency of the existing solutions, a novel acceleration-level joint-drift-free (NALJDF) scheme is proposed to decouple Cartesian space error from joint space with the tracking error theoretically eliminated. Third, in consideration of the uncertainty at the dynamics level, a multi-index optimization acceleration-level joint-drift-free scheme is presented to reveal the influence of dynamics factors on the redundant manipulator control. Afterwards, theoretical analyses are provided to prove the stability and feasibility of the corresponding dynamic neural network with the tracking error deduced. Then, computer simulations, performance comparisons, and physical experiments on different redundant manipulators synthesized by the proposed schemes are conducted to demonstrate the high performance and superiority of the NALJDF scheme and the influence of dynamics parameters on robot control. This work is of great significance to enhance the product quality and production efficiency in industrial production.

Journal ArticleDOI
TL;DR: In this article, a self-attention ConvLSTM (SA-ConvLSTM) neural network is proposed for wind farm forecasting, in which convolution operators replace the fully connected layers inside the network structure to reduce the redundancy of the network and enhance its nonlinear modeling capability.
Abstract: Traditional long short-term memory (LSTM) neural networks generally face the challenge of low training efficiency and poor prediction accuracy for remaining useful life (RUL) prediction due to their structure. In this study, a novel model called the self-attention ConvLSTM (SA-ConvLSTM) neural network is proposed, derived from ConvLSTM and a self-attention (SA) mechanism. First, convolution operators replace the fully connected layers inside the network structure to reduce the redundancy of the network and enhance its nonlinear modeling capability. Subsequently, an SA module is designed and embedded into the interior of the model, adaptively employing the corresponding important information to improve prediction performance. Extensive experiments on the test rig and the actual wind farm confirmed that the developed SA-ConvLSTM has advantages over other conventional prediction methods in terms of convergence speed and prediction precision.

Journal ArticleDOI
TL;DR: A cost model is formulated as an optimization problem, the objective of which is to prompt the MEC server to judiciously allocate computing tasks to nearby MEC servers with the goal of achieving the minimal cost while the latency of tasks is guaranteed.
Abstract: Multiaccess edge computing (MEC) enables autonomous vehicles to handle time-critical and data-intensive computational tasks for emerging Internet-of-Vehicles (IoV) applications via computation offloading. However, a massive amount of data generated by colocated vehicles is typically redundant, introducing a critical issue due to limited network bandwidth. Moreover, on the edge server side, these computation-intensive tasks further impose severe pressure on the resource-finite MEC server, resulting in low-performance efficiency of applications. To solve these challenges, we model the data redundancy and collaborative task computing scheme to efficiently reduce the redundant data and utilize the idle resources in nearby MEC servers. First, the data redundancy problem is formulated as a set-covering problem according to the spatiotemporal coverage of captured images. Next, we exploit the submodular optimization technique to design an efficient algorithm to minimize the number of images transferred to the MEC servers without degrading the quality of IoV applications. To facilitate the task execution in the MEC server, we then propose a collaborative task computing scheme, where an MEC server intentionally encourages nearby resource-rich MEC servers to participate in a collaborative computing group. Accordingly, a cost model is formulated as an optimization problem, the objective of which is to prompt the MEC server to judiciously allocate computing tasks to nearby MEC servers with the goal of achieving the minimal cost while the latency of tasks is guaranteed. Experimental results show that the proposed scheme can efficiently mitigate data redundancy, conserve network bandwidth consumption, and achieve the lowest cost for processing tasks.
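The image-selection step above maps naturally onto greedy set cover, the standard approximation for such submodular coverage objectives. A minimal sketch, where `coverage` maps each candidate image to the set of spatiotemporal regions it captures (the exact region model is an assumption here):

```python
def greedy_cover(regions, coverage):
    """Greedy set cover: repeatedly pick the image covering the most
    still-uncovered regions until everything is covered (or no image
    adds coverage). coverage maps image id -> set of region ids."""
    uncovered = set(regions)
    chosen = []
    while uncovered:
        best = max(coverage, key=lambda img: len(coverage[img] & uncovered))
        gain = coverage[best] & uncovered
        if not gain:
            break
        chosen.append(best)
        uncovered -= gain
    return chosen
```

Only the chosen images need to be transferred to the MEC server, which is how the scheme conserves network bandwidth without degrading application quality.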

Journal ArticleDOI
TL;DR: Wang et al. as discussed by the authors proposed a multiview subspace clustering (MSC) algorithm that groups samples and removes data redundancy concurrently, in which eigendecomposition is employed to obtain a robust, low-redundancy data representation for later clustering.
Abstract: Taking the assumption that data samples are able to be reconstructed with the dictionary formed by themselves, recent multiview subspace clustering (MSC) algorithms aim to find a consensus reconstruction matrix via exploring complementary information across multiple views. Most of them directly operate on the original data observations without preprocessing, while others operate on the corresponding kernel matrices. However, they both ignore that the collected features may be designed arbitrarily and hard guaranteed to be independent and nonoverlapping. As a result, original data observations and kernel matrices would contain a large number of redundant details. To address this issue, we propose an MSC algorithm that groups samples and removes data redundancy concurrently. In specific, eigendecomposition is employed to obtain the robust data representation of low redundancy for later clustering. By utilizing the two processes into a unified model, clustering results will guide eigendecomposition to generate more discriminative data representation, which, as feedback, helps obtain better clustering results. In addition, an alternate and convergent algorithm is designed to solve the optimization problem. Extensive experiments are conducted on eight benchmarks, and the proposed algorithm outperforms comparative ones in recent literature by a large margin, verifying its superiority. At the same time, its effectiveness, computational efficiency, and robustness to noise are validated experimentally.

Journal ArticleDOI
TL;DR: This is the first study that explores a deep reinforcement learning model for hyperspectral image analysis, thus opening a new door for future research and showcasing the great potential of deep reinforcement learning in remote sensing applications.
Abstract: Band selection refers to the process of choosing the most relevant bands in a hyperspectral image. By selecting a limited number of optimal bands, we aim at speeding up model training, improving accuracy, or both. It reduces redundancy among spectral bands while trying to preserve the original information of the image. By now, many efforts have been made to develop unsupervised band selection approaches, of which the majorities are heuristic algorithms devised by trial and error. In this article, we are interested in training an intelligent agent that, given a hyperspectral image, is capable of automatically learning policy to select an optimal band subset without any hand-engineered reasoning. To this end, we frame the problem of unsupervised band selection as a Markov decision process, propose an effective method to parameterize it, and finally solve the problem by deep reinforcement learning. Once the agent is trained, it learns a band-selection policy that guides the agent to sequentially select bands by fully exploiting the hyperspectral image and previously picked bands. Furthermore, we propose two different reward schemes for the environment simulation of deep reinforcement learning and compare them in experiments. This, to the best of our knowledge, is the first study that explores a deep reinforcement learning model for hyperspectral image analysis, thus opening a new door for future research and showcasing the great potential of deep reinforcement learning in remote sensing applications. Extensive experiments are carried out on four hyperspectral data sets, and experimental results demonstrate the effectiveness of the proposed method. The code is publicly available.
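The sequential-selection loop described above can be sketched without the learned policy: at each step the agent picks the band with the highest immediate reward. The reward below (band variance minus the maximum absolute correlation with already-picked bands) is a hand-crafted stand-in for the paper's learned reward schemes, not the published method.

```python
import math

def _stats(v):
    mu = sum(v) / len(v)
    return mu, sum((x - mu) ** 2 for x in v) / len(v)

def _corr(u, v):
    (mu_u, var_u), (mu_v, var_v) = _stats(u), _stats(v)
    if var_u == 0.0 or var_v == 0.0:
        return 0.0
    cov = sum((a - mu_u) * (b - mu_v) for a, b in zip(u, v)) / len(u)
    return cov / math.sqrt(var_u * var_v)

def select_bands(bands, k):
    """Greedy sequential selection: at each step take the band with the
    highest immediate reward (variance minus maximum absolute
    correlation with the bands picked so far)."""
    picked = []
    for _ in range(k):
        best, best_r = None, -math.inf
        for b, vals in bands.items():
            if b in picked:
                continue
            redundancy = max((abs(_corr(vals, bands[p])) for p in picked),
                             default=0.0)
            reward = _stats(vals)[1] - redundancy
            if reward > best_r:
                best, best_r = b, reward
        picked.append(best)
    return picked
```

A band perfectly correlated with an already-picked one is skipped in favor of a less redundant band, mirroring the redundancy-reduction goal of band selection.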

Journal ArticleDOI
TL;DR: It is shown that neglecting either the effects of dynamic environments or the correlation among component lifetimes would underestimate the reliability of series systems and overestimateThe reliability of parallel systems.
Abstract: The working conditions of multicomponent systems are usually dynamic and stochastic. Reliability evaluation of such systems is challenging, since the components are generally positively correlated. Based on the cumulative exposure principle, we model the effects of the dynamic environments on the component lifetimes by a common stochastic time scale, and exponential dispersion process is utilized to describe the stochastic time scale. Then, the component lifetimes are shown to be positively quadrant dependent, and the joint survival function of the component lifetimes is derived, which includes the results of [1] as special cases. In this article, we show that neglecting either the effects of dynamic environments or the correlation among component lifetimes would underestimate the reliability of series systems and overestimate the reliability of parallel systems. We also investigate the problem of parameter redundancy of the model, and give some suggestions for data analysis. Simulation studies show that the unified model is flexible and useful for suggesting an optimal model given observed data.
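The series/parallel claim above is easy to check numerically. In the Monte Carlo sketch below, two exponential lifetimes share a random environment scale, making them positively quadrant dependent as in the article; an analyst who multiplies the marginal survival probabilities as if the components were independent underestimates series reliability and overestimates parallel reliability. The two-point environment distribution is an illustrative assumption, not the article's exponential dispersion process.

```python
import random

def simulate(n_trials, t, rng):
    """Monte Carlo: two exponential component lifetimes sharing a random
    environment scale, which makes them positively correlated. Returns
    (series, parallel) reliability at time t, followed by the values an
    independence assumption would predict from the same marginals."""
    series = parallel = s1 = s2 = 0
    for _ in range(n_trials):
        s = rng.choice([0.5, 2.0])        # harsh vs. mild environment
        t1 = s * rng.expovariate(1.0)
        t2 = s * rng.expovariate(1.0)
        s1 += t1 > t
        s2 += t2 > t
        series += (t1 > t) and (t2 > t)
        parallel += (t1 > t) or (t2 > t)
    n = float(n_trials)
    p1, p2 = s1 / n, s2 / n
    return series / n, parallel / n, p1 * p2, 1 - (1 - p1) * (1 - p2)
```

With these parameters the true series reliability is roughly 0.19 against an independence-assumed 0.14, and the true parallel reliability roughly 0.55 against 0.60, matching the direction of the biases the article proves.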

Journal ArticleDOI
TL;DR: This paper examines the relative impact on supply chain responsiveness of adding flexibility and redundancy and aims to investigate the effectiveness of flexibility and redundancy in terms of minimality.
Abstract: This paper examines the relative impact on supply chain responsiveness of adding flexibility and redundancy. We seek to investigate the effectiveness of flexibility and redundancy in terms of minim...

Journal ArticleDOI
TL;DR: This paper presents a quantitative study of the influence of the corrosion effect on the redundancy of RC structures and investigates the time-dependent reliability and redundancy of the structure.

Journal ArticleDOI
TL;DR: A hybrid fault diagnosis method based on ReliefF, principal component analysis (PCA), and a deep neural network (DNN) diagnoses wind turbine (WT) faults accurately, with accuracies much higher than those of the comparison methods.
Abstract: A large amount of data is generated during the operation of a wind turbine (WT), which can easily lead to the curse of dimensionality, and when more than one WT fault occurs, multiple sensors raise alarms. To address the problems of big data and inaccurate, untimely fault diagnosis, a hybrid fault diagnosis method based on ReliefF, principal component analysis (PCA), and a deep neural network (DNN) is developed in this paper. First, the ReliefF method selects the fault features and reduces the data dimensionality. Second, the PCA algorithm further reduces the dimensionality, mainly to remove redundancy among the data and improve the accuracy of fault diagnosis. Finally, the ReliefF-PCA-DNN model is constructed, optimized, and applied to a fault case from a wind farm in Jilin Province. The experimental results show that, for single faults, the accuracies of the proposed hybrid models all exceed 98.5%, and for multiple faults the accuracy exceeds 96%; both are much higher than those of the comparison methods. The method can therefore diagnose WT faults well.
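The filter-then-project-then-classify pipeline can be sketched with scikit-learn. Note the hedges: ReliefF is not in scikit-learn (the skrebate package provides an implementation), so mutual information stands in for the filter stage here, the data are synthetic, and every dimension and hyperparameter below is an illustrative assumption rather than the paper's configuration.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import Pipeline

# Synthetic stand-in for SCADA-style monitoring data.
X, y = make_classification(n_samples=1500, n_features=40, n_informative=8,
                           n_redundant=10, random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

pipe = Pipeline([
    # Stage 1: relevance filter (mutual information as a ReliefF surrogate).
    ("filter", SelectKBest(mutual_info_classif, k=20)),
    # Stage 2: PCA to remove redundancy among the retained features.
    ("pca", PCA(n_components=10)),
    # Stage 3: neural-network classifier on the compressed features.
    ("dnn", MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500,
                          random_state=0)),
])
acc = pipe.fit(Xtr, ytr).score(Xte, yte)
print(f"test accuracy: {acc:.3f}")
```

The two-stage reduction (relevance filter, then decorrelating projection) is the structural point; swapping the filter for a true ReliefF scorer changes only the first pipeline step.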

Journal ArticleDOI
TL;DR: A semi-supervised multi-view deep discriminant representation learning approach that incorporates the orthogonality and adversarial similarity constraints to reduce the redundancy of learned representations and to exploit the information contained in unlabeled data is proposed.
Abstract: Learning an expressive representation from multi-view data is a key step in various real-world applications. In this paper, we propose a semi-supervised multi-view deep discriminant representation learning (SMDDRL) approach. Unlike existing joint or alignment multi-view representation learning methods, which cannot simultaneously exploit the consensus and complementary properties of multi-view data, SMDDRL comprehensively exploits both properties and learns inter-view shared as well as intra-view specific representations through its shared-and-specific representation learning network. Unlike existing shared-and-specific multi-view representation learning methods, which ignore the redundancy problem in representation learning, SMDDRL incorporates orthogonality and adversarial similarity constraints to reduce the redundancy of the learned representations. Moreover, to exploit the information contained in unlabeled data, we design a semi-supervised learning framework combining deep metric learning and density clustering. Experimental results on three typical multi-view learning tasks, i.e., webpage classification, image classification, and document classification, demonstrate the effectiveness of the proposed approach.
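The orthogonality constraint used to decorrelate shared and specific representations can be illustrated in isolation. The matrices and dimensions below are synthetic assumptions; in SMDDRL a penalty of this form would be minimized during training rather than enforced by an explicit projection as done here.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 32, 16
shared   = rng.normal(size=(n, d))   # inter-view shared representation (synthetic)
specific = rng.normal(size=(n, d))   # intra-view specific representation (synthetic)

def ortho_penalty(a, b):
    """||a^T b||_F^2: small when the two representations carry
    non-redundant (mutually orthogonal) information."""
    return np.sum((a.T @ b) ** 2)

print(f"penalty before: {ortho_penalty(shared, specific):.2f}")

# Projecting `specific` onto the orthogonal complement of `shared`'s
# column space drives the penalty to (numerically) zero.
q, _ = np.linalg.qr(shared)
specific_orth = specific - q @ (q.T @ specific)
print(f"penalty after:  {ortho_penalty(shared, specific_orth):.2e}")
```

A gradient-based learner would instead add `ortho_penalty` to its loss, letting both representations adapt jointly; the projection above just shows what the penalty's minimizer looks like.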

Journal ArticleDOI
TL;DR: The optimal mission abort threshold in each attempt is investigated to minimize the expected total cost of mission failure and system failure.

Journal ArticleDOI
TL;DR: The results show the superiority of the presented mixed redundancy strategy in comparison with the traditional strategies, and the proposed heuristic algorithm performs better in terms of computational time compared to the full enumeration techniques.

Journal ArticleDOI
TL;DR: Two lower bounds with very simple forms are introduced for feature redundancy and complementarity, and they are verified to be closer to the optima than the existing lower bounds used by some state-of-the-art information-theoretic methods.

Journal ArticleDOI
TL;DR: Empirical results explicitly demonstrate the ability of the proposed FS scheme and its effectiveness in controlling redundancy, and the empirical simulations are observed to be consistent with the theoretical results.
Abstract: We propose a neural network-based feature selection (FS) scheme that can control the level of redundancy in the selected features by integrating two penalties into a single objective function. The Group Lasso penalty aims to produce sparsity in features in a grouped manner. The redundancy-control penalty, which is defined based on a measure of dependence among features, is utilized to control the level of redundancy among the selected features. Both the penalty terms involve the $L_{2,1}$ -norm of weight matrix between the input and hidden layers. These penalty terms are nonsmooth at the origin, and hence, one simple but efficient smoothing technique is employed to overcome this issue. The monotonicity and convergence of the proposed algorithm are specified and proved under suitable assumptions. Then, extensive experiments are conducted on both artificial and real data sets. Empirical results explicitly demonstrate the ability of the proposed FS scheme and its effectiveness in controlling redundancy. The empirical simulations are observed to be consistent with the theoretical results.
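The Group Lasso idea behind the sparsity penalty can be illustrated outside a neural network: proximal gradient descent on a linear multi-output regression with a row-wise $L_{2,1}$ penalty zeroes out entire feature groups at once. The data, λ, and step size below are illustrative assumptions; the paper's method additionally smooths the penalty and adds a redundancy-control term, neither of which is reproduced here.

```python
import numpy as np

def l21_prox(W, t):
    """Row-wise soft thresholding: the proximal operator of t * sum_i ||W[i]||_2."""
    norms = np.linalg.norm(W, axis=1, keepdims=True)
    return W * np.maximum(1.0 - t / np.maximum(norms, 1e-12), 0.0)

rng = np.random.default_rng(0)
n, d, h = 200, 10, 3
X = rng.normal(size=(n, d))
# Only the first 3 input features drive the multi-output target.
W_true = np.zeros((d, h)); W_true[:3] = rng.normal(size=(3, h))
Y = X @ W_true + 0.01 * rng.normal(size=(n, h))

lam = 2.0
step = 1.0 / np.linalg.norm(X, 2) ** 2   # 1 / Lipschitz constant of the gradient
W = np.zeros((d, h))
for _ in range(2000):  # minimize 0.5*||XW - Y||_F^2 + lam * ||W||_{2,1}
    W = l21_prox(W - step * X.T @ (X @ W - Y), step * lam)

selected = np.flatnonzero(np.linalg.norm(W, axis=1) > 1e-6)
print("selected features:", selected)
```

Rows of W correspond to input features (as the input-to-hidden weights do in the paper's network), so a zeroed row means the feature is dropped entirely rather than merely down-weighted.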

Journal ArticleDOI
TL;DR: A network combining multi-scale hierarchical feature fusion and mixed convolution attention is proposed to progressively and adaptively enhance dehazing performance, and experiments show that it outperforms state-of-the-art haze removal algorithms.
Abstract: Single image dehazing, which aims at restoring a haze-free image from its corresponding unconstrained hazy scene, is highly challenging and has gained immense popularity in recent years. However, the images generated by existing haze-removal methods often contain residual haze, artifacts, and color distortions, which severely degrade the visual quality of the final images. To this end, we propose a network combining multi-scale hierarchical feature fusion and mixed convolution attention to progressively and adaptively enhance the dehazing performance. The haze levels and image structure information are accurately estimated by fusing multi-scale hierarchical features, so the model restores images with less remaining haze. The proposed attention mechanism is capable of reducing feature redundancy, learning compact internal representations, and highlighting task-relevant features, further helping the model to estimate images with sharper textural details and more vivid colors. Therefore, with the application of multi-scale features extracted from both diverse layers and filters, the dehazing performance is significantly improved. Furthermore, a deep semantic loss function is proposed to highlight more semantic information in deep features. The experimental results show that the proposed method outperforms state-of-the-art haze removal algorithms.

Journal ArticleDOI
TL;DR: In this article, the authors introduce the effective graph, a weighted graph that captures the nonlinear logical redundancy present in biochemical network regulation, signaling, and control, and demonstrate that redundant pathways are prevalent in biological models of biochemical regulation.
Abstract: The ability to map causal interactions underlying genetic control and cellular signaling has led to increasingly accurate models of the complex biochemical networks that regulate cellular function. These network models provide deep insights into the organization, dynamics, and function of biochemical systems: for example, by revealing genetic control pathways involved in disease. However, the traditional representation of biochemical networks as binary interaction graphs fails to accurately represent an important dynamical feature of these multivariate systems: some pathways propagate control signals much more effectively than do others. Such heterogeneity of interactions reflects canalization: the system is robust to dynamical interventions in redundant pathways but responsive to interventions in effective pathways. Here, we introduce the effective graph, a weighted graph that captures the nonlinear logical redundancy present in biochemical network regulation, signaling, and control. Using 78 experimentally validated models derived from systems biology, we demonstrate that 1) redundant pathways are prevalent in biological models of biochemical regulation, 2) the effective graph provides a probabilistic but precise characterization of multivariate dynamics in a causal graph form, and 3) the effective graph provides an accurate explanation of how dynamical perturbation and control signals, such as those induced by cancer drug therapies, propagate in biochemical pathways. Overall, our results indicate that the effective graph provides an enriched description of the structure and dynamics of networked multivariate causal interactions. We demonstrate that it improves explainability, prediction, and control of complex dynamical systems in general and biochemical regulation in particular.
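A toy illustration of dynamically redundant edges in Boolean regulation: for each input of a Boolean update function, measure the fraction of states in which flipping that input flips the output. This sensitivity-style proxy is not the paper's exact canalization-based effectiveness measure, but it shows how an edge that is present in the binary interaction graph can carry no dynamical influence at all.

```python
from itertools import product

def edge_effectiveness(f, k):
    """For a Boolean function f of k inputs, return for each input the
    fraction of the 2^k states in which toggling that input toggles f.
    0.0 means the edge is structurally present but dynamically redundant."""
    eff = []
    for i in range(k):
        flips = 0
        for s in product((0, 1), repeat=k):
            s2 = list(s)
            s2[i] ^= 1
            flips += f(*s) != f(*s2)
        eff.append(flips / 2 ** k)
    return eff

# AND: each input only matters when the other is 1.
print(edge_effectiveness(lambda a, b: a & b, 2))
# A node whose rule ignores its third regulator: that edge is fully redundant.
print(edge_effectiveness(lambda a, b, c: a | b, 3))
```

In an effective-graph-style analysis, such per-input scores would replace the uniform 0/1 edges of the interaction graph with weights reflecting actual dynamical influence.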

Journal ArticleDOI
TL;DR: In this article, an improved social spider optimization (SSO) algorithm is proposed to reduce the energy consumption and improve the network coverage in heterogeneous wireless sensor networks (HWSNs).
Abstract: To overcome the problems of coverage blind areas and coverage redundancy when sensor nodes are deployed randomly in heterogeneous wireless sensor networks (HWSNs), an optimal coverage method based on an improved social spider optimization (SSO) algorithm is proposed, which reduces energy consumption and improves network coverage. First, a mathematical model of HWSN coverage is established, which is a complex combinatorial optimization problem. To improve the global convergence speed of the proposed algorithm, a chaotic initialization method is used to generate the initial population. In addition, the SSO algorithm has poor convergence speed and search ability, which are enhanced by improving the neighborhood search, global search, and matching radius. In the iterative optimization process, the optimal solution is ultimately obtained by simulating the movement law of the spider colony, i.e., the cooperation, mutual attraction, and mating process of female and male spiders. The resulting chaos-based SSO algorithm, named the CSSO algorithm, is applied to the optimal deployment of sensor nodes in HWSNs, with the goals of improving network coverage and reducing network costs. The optimal deployment plan of nodes is found by the proposed CSSO algorithm, which effectively prevents coverage blind spots and coverage redundancy in the network.
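The coverage objective being optimized can be sketched with a Boolean disc-sensing model evaluated on a sampled grid. The field size, sensing radius, and node count below are invented for illustration, and a trivial best-of-k random restart stands in for the actual CSSO search.

```python
import numpy as np

rng = np.random.default_rng(1)
area, r, n_nodes = 100.0, 15.0, 20   # field side, sensing radius, nodes (assumed)

# Grid of sample points used to estimate the covered fraction of the field.
g = np.linspace(0.0, area, 101)
pts = np.stack(np.meshgrid(g, g), axis=-1).reshape(-1, 2)

def coverage(nodes):
    """Fraction of grid points within distance r of at least one node."""
    d = np.linalg.norm(pts[:, None, :] - nodes[None, :, :], axis=-1)
    return (d.min(axis=1) <= r).mean()

random_nodes = rng.uniform(0, area, (n_nodes, 2))
# Trivial stand-in for the CSSO search: keep the best of 20 candidate layouts
# (the random baseline is included so the "optimized" layout can never be worse).
candidates = [random_nodes] + [rng.uniform(0, area, (n_nodes, 2)) for _ in range(19)]
best_nodes = max(candidates, key=coverage)
print(f"random: {coverage(random_nodes):.3f}  best-of-20: {coverage(best_nodes):.3f}")
```

Any population-based metaheuristic, CSSO included, ultimately just searches this coverage landscape; the model is where blind spots (uncovered grid points) and redundancy (points covered by many discs) become measurable.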

Proceedings ArticleDOI
17 Oct 2021
TL;DR: In this article, a cross-modal consensus network (CO2-Net) is proposed to reduce the task-irrelevant information redundancy in weakly-supervised temporal action localization.
Abstract: Weakly supervised temporal action localization (WS-TAL) is a challenging task that aims to localize action instances in a given video with only video-level categorical supervision. Previous works use the appearance and motion features extracted from a pre-trained feature encoder directly, e.g., via feature concatenation or score-level fusion. In this work, we argue that features from pre-trained extractors, e.g., I3D, which are trained for trimmed-video action classification and not specifically for the WS-TAL task, lead to inevitable redundancy and sub-optimization. Therefore, feature re-calibration is needed to reduce the task-irrelevant information redundancy. Here, we propose a cross-modal consensus network (CO2-Net) to tackle this problem. In CO2-Net, we introduce two identical cross-modal consensus modules (CCM) that apply a cross-modal attention mechanism to filter out task-irrelevant information redundancy using the global information from the main modality and the local information from the auxiliary modality. Moreover, we further exploit inter-modality consistency: we treat the attention weights derived from each CCM as pseudo targets for the attention weights derived from the other CCM, maintaining consistency between the predictions of the two CCMs in a mutual learning manner. Finally, we conduct extensive experiments on two commonly used temporal action localization datasets, THUMOS14 and ActivityNet1.2, on which we achieve state-of-the-art results. The experimental results show that the proposed cross-modal consensus module produces more representative features for temporal action localization.
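The channel re-calibration idea can be sketched in a few lines of numpy: the main modality's global context and the auxiliary modality's local features jointly produce a per-channel sigmoid gate that suppresses task-irrelevant channels. All shapes, the random weight matrix, and the gating form are assumptions for illustration; this is not the paper's exact CCM architecture.

```python
import numpy as np

rng = np.random.default_rng(0)
T, d = 50, 64                     # snippets per video, feature dim (assumed)
rgb  = rng.normal(size=(T, d))    # main-modality snippet features (e.g., I3D RGB)
flow = rng.normal(size=(T, d))    # auxiliary-modality features (e.g., optical flow)

# Hypothetical re-calibration: global main-modality context + local auxiliary
# features feed a learned projection whose sigmoid output gates each channel.
W = rng.normal(size=(2 * d, d)) / np.sqrt(2 * d)         # stand-in for learned weights
ctx = np.broadcast_to(rgb.mean(axis=0), (T, d))          # global main-modality info
gate = 1.0 / (1.0 + np.exp(-np.concatenate([ctx, flow], axis=1) @ W))
rgb_recal = rgb * gate            # channels the gate deems irrelevant are damped

print(rgb_recal.shape)
```

A symmetric module with the roles of the two modalities swapped would re-calibrate the flow stream, and the two gates' attention weights could then supervise each other as in the mutual-learning scheme described above.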