About: Node (circuits) is a research topic. Over the lifetime of the topic, 27,440 publications have been published, receiving 221,080 citations.
[Chart: papers published on a yearly basis]
TL;DR: This paper shows that while retaining the same simplicity, the convergence rate of I-ELM can be further improved by recalculating the output weights of the existing nodes based on a convex optimization method when a new hidden node is randomly added.
Abstract: Unlike conventional neural network theories and implementations, Huang et al. [Universal approximation using incremental constructive feedforward networks with random hidden nodes, IEEE Transactions on Neural Networks 17(4) (2006) 879-892] have recently proposed a new theory showing that single-hidden-layer feedforward networks (SLFNs) with randomly generated additive or radial basis function (RBF) hidden nodes (according to any continuous sampling distribution) can work as universal approximators, and that the resulting incremental extreme learning machine (I-ELM) outperforms many popular learning algorithms. I-ELM randomly generates the hidden nodes and analytically calculates the output weights of SLFNs; however, I-ELM does not recalculate the output weights of all the existing nodes when a new node is added. This paper shows that, while retaining the same simplicity, the convergence rate of I-ELM can be further improved by recalculating the output weights of the existing nodes based on a convex optimization method whenever a new hidden node is randomly added. Furthermore, we show that given a type of piecewise continuous computational hidden nodes (possibly not neuron-like nodes), if SLFNs $f_n(x) = \sum_{i=1}^{n} \beta_i G(x, a_i, b_i)$ can work as universal approximators with adjustable hidden node parameters, then from a function approximation point of view the hidden node parameters of such ''generalized'' SLFNs (including sigmoid networks, RBF networks, trigonometric networks, threshold networks, fuzzy inference systems, fully complex neural networks, high-order networks, ridge polynomial networks, wavelet networks, etc.) can actually be randomly generated according to any continuous sampling distribution. In theory, the parameters of these SLFNs can be analytically determined by ELM instead of being tuned.
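The recalculation step described above can be sketched as follows. This is a minimal illustration, not the authors' exact algorithm: the sigmoid activation, random-weight distribution, and use of a plain least-squares solve (one convex choice for refitting the output weights) are assumptions made here for concreteness.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def incremental_elm(X, y, n_nodes=30, seed=0):
    """Incremental SLFN sketch: hidden nodes are generated at random;
    after each addition, ALL output weights are refit by least squares
    (a convex problem), instead of keeping earlier weights fixed as
    plain I-ELM does."""
    rng = np.random.default_rng(seed)
    n_samples, n_features = X.shape
    H = np.empty((n_samples, 0))      # hidden-layer output matrix
    errors = []
    for _ in range(n_nodes):
        a = rng.standard_normal(n_features)   # random input weights
        b = rng.standard_normal()             # random bias
        h = sigmoid(X @ a + b)                # new hidden node's output
        H = np.column_stack([H, h])
        # refit every output weight beta_i, not just the new one
        beta, *_ = np.linalg.lstsq(H, y, rcond=None)
        errors.append(np.linalg.norm(H @ beta - y))
    return beta, errors

# toy regression: approximate sin on [0, pi]
X = np.linspace(0, np.pi, 100).reshape(-1, 1)
y = np.sin(X).ravel()
beta, errors = incremental_elm(X, y)
```

Because each refit is a least-squares projection onto a growing column space, the training residual is non-increasing as nodes are added, which is the mechanism behind the improved convergence rate.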
TL;DR: It is shown that several known properties of A* retain their form; it is also shown that no optimal algorithm exists over the class of admissible algorithms, but if the performance tests are confined to cases in which the estimates are also consistent, then A* is indeed optimal.
Abstract: This paper reports several properties of heuristic best-first search strategies whose scoring functions ƒ depend on all the information available from each candidate path, not merely on the current cost g and the estimated completion cost h. It is shown that several known properties of A* retain their form (with the minmax of f playing the role of the optimal cost), which helps establish general tests of admissibility and general conditions for node expansion for these strategies. On the basis of this framework the computational optimality of A*, in the sense of never expanding a node that can be skipped by some other algorithm having access to the same heuristic information that A* uses, is examined. A hierarchy of four optimality types is defined, and three classes of algorithms and four domains of problem instances are considered. Computational performances relative to these algorithms and domains are appraised. For each class-domain combination, we then identify the strongest type of optimality that exists and the algorithm for achieving it. The main results of this paper relate to the class of algorithms that, like A*, return optimal solutions (i.e., are admissible) when all cost estimates are optimistic (i.e., h ≤ h*). On this class, A* is shown to be not optimal and it is also shown that no optimal algorithm exists, but if the performance tests are confined to cases in which the estimates are also consistent, then A* is indeed optimal. Additionally, A* is also shown to be optimal over a subset of the latter class containing all best-first algorithms that are guided by path-dependent evaluation functions.
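For reference, the baseline algorithm the abstract analyzes can be sketched as textbook A* on a small grid. The 5×5 grid, unit edge costs, and Manhattan heuristic below are illustrative assumptions; Manhattan distance is consistent on this grid, which is the condition under which the paper's optimality result for A* holds.

```python
import heapq

def a_star(start, goal, neighbors, h):
    """Textbook A*: expand the open node with the smallest
    f(n) = g(n) + h(n). With a consistent heuristic
    (h(n) <= cost(n, n') + h(n')), each node is expanded at most once."""
    open_heap = [(h(start), 0, start, [start])]
    closed = set()
    while open_heap:
        f, g, node, path = heapq.heappop(open_heap)
        if node == goal:
            return g, path
        if node in closed:
            continue
        closed.add(node)
        for nxt, cost in neighbors(node):
            if nxt not in closed:
                heapq.heappush(open_heap,
                               (g + cost + h(nxt), g + cost, nxt, path + [nxt]))
    return None

# 4-connected 5x5 grid with unit step costs
def grid_neighbors(p):
    x, y = p
    for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
        if 0 <= nx < 5 and 0 <= ny < 5:
            yield (nx, ny), 1

manhattan = lambda p: abs(p[0] - 4) + abs(p[1] - 4)  # consistent here
cost, path = a_star((0, 0), (4, 4), grid_neighbors, manhattan)
# cost == 8: the optimal number of unit steps from (0,0) to (4,4)
```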
TL;DR: In this paper, it is shown that a singularity occurs in isoparametric finite elements if the mid-side nodes are moved sufficiently far from their normal position; by placing the singularity at an element corner, a more accurate solution to the problem of determining the stress intensity at the tip of a crack can be obtained.
Abstract: It is shown that a singularity occurs in isoparametric finite elements if the mid-side nodes are moved sufficiently from their normal position. By choosing the mid-side node positions on standard isoparametric elements so that the singularity occurs exactly at the corner of an element it is possible to obtain quite accurate solutions to the problem of determining the stress intensity at the tip of a crack. The solutions compare favourably with those obtained using some types of special crack tip elements, but are not as accurate as those given by a crack tip element based on the hybrid principle. However, the hybrid elements are more difficult to use.
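The mechanism can be checked on the simplest case, a one-dimensional quadratic isoparametric element. This is a worked illustration under stated assumptions (nodes at x = 0, x_mid, L on a unit element), not the paper's 2-D crack analysis: moving the mid-side node to the quarter point makes the Jacobian dx/dξ vanish at the corner node, so strains, which scale with dξ/dx, blow up like 1/√x there.

```python
import numpy as np

# Quadratic 1D isoparametric element with nodes at x = 0, x_mid, L.
# Standard shape functions on the parent coordinate xi in [-1, 1]:
def dshape(xi):
    """Derivatives dN_i/dxi of the three quadratic shape functions."""
    return np.array([xi - 0.5, -2.0 * xi, xi + 0.5])

L = 1.0
def jacobian(xi, x_mid):
    """dx/dxi for the element map x(xi) = sum_i N_i(xi) * x_i."""
    nodes = np.array([0.0, x_mid, L])
    return dshape(xi) @ nodes

# Mid-side node at the normal position L/2: the Jacobian is the
# constant L/2 everywhere, so the map is regular.
# Mid-side node at the quarter point L/4: x(xi) = L*(1 + xi)**2 / 4,
# so the Jacobian L*(1 + xi)/2 vanishes at xi = -1, and
# dxi/dx = 1/sqrt(x*L) near x = 0 -- the desired 1/sqrt(r) strain
# singularity at the crack tip.
```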
TL;DR: This paper proposes a decentralized event-triggering mechanism that guarantees stability and performance for event-triggered controllers with larger minimum inter-event times than the existing results in the literature.
Abstract: Most event-triggered controllers available nowadays are based on static state-feedback controllers. As in many control applications full state measurements are not available for feedback, it is the objective of this paper to propose event-triggered dynamical output-based controllers. The fact that the controller is based on output feedback instead of state feedback does not allow for straightforward extensions of existing event-triggering mechanisms if a minimum time between two subsequent events has to be guaranteed. Furthermore, since sensor and actuator nodes can be physically distributed, centralized event-triggering mechanisms are often prohibitive and, therefore, we will propose a decentralized event-triggering mechanism. This event-triggering mechanism invokes transmission of the outputs in a node when the difference between the current values of the outputs in the node and their previously transmitted values becomes “large” compared to the current values and an additional threshold. For such event-triggering mechanisms, we will study closed-loop stability and L∞-performance and provide bounds on the minimum time between two subsequent events generated by each node, the so-called inter-event time of a node. This enables us to make tradeoffs between closed-loop performance on the one hand and communication load on the other hand, or even between the communication load of individual nodes. In addition, we will model the event-triggered control system using an impulsive model, which truly describes the behavior of the event-triggered control system. As a result, we will be able to guarantee stability and performance for event-triggered controllers with larger minimum inter-event times than the existing results in the literature. We illustrate the developed theory using three numerical examples.
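The per-node triggering rule described above can be sketched as follows. The relative threshold σ, the absolute threshold ε, and the precomputed output trajectories are illustrative assumptions, not the paper's tuned design; the ε term is what the abstract's minimum inter-event time guarantee relies on.

```python
import numpy as np

def simulate_triggering(y_traj, sigma=0.05, eps=1e-4):
    """Decentralized event triggering sketch: node i retransmits its
    output at step k when the gap to its last transmitted value becomes
    'large' compared to the current value plus an absolute threshold:
        err_i**2 > sigma * y_i**2 + eps
    Returns, per node, the list of time steps at which it transmitted."""
    n_steps, n_nodes = y_traj.shape
    y_hat = y_traj[0].copy()                 # last transmitted values
    events = [[] for _ in range(n_nodes)]
    for k in range(1, n_steps):
        for i in range(n_nodes):
            err = y_traj[k, i] - y_hat[i]
            if err ** 2 > sigma * y_traj[k, i] ** 2 + eps:
                y_hat[i] = y_traj[k, i]      # transmit: reset the gap
                events[i].append(k)
    return events

# two decaying outputs sampled at dt = 0.01 (a stand-in for the
# closed-loop response of a stable plant)
t = np.arange(0.0, 5.0, 0.01)
y = np.column_stack([np.exp(-t), 0.5 * np.exp(-0.5 * t)])
events = simulate_triggering(y)
```

Each node transmits far fewer times than the number of sample steps, which is the communication-versus-performance tradeoff the paper quantifies.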
19 Oct 2003
TL;DR: The goal is to design tools that enable modestly-skilled programmers to isolate performance bottlenecks in distributed systems composed of black-box nodes by developing two very different algorithms for inferring the dominant causal paths through a distributed system from these traces.
Abstract: Many interesting large-scale systems are distributed systems of multiple communicating components. Such systems can be very hard to debug, especially when they exhibit poor performance. The problem becomes much harder when systems are composed of "black-box" components: software from many different (perhaps competing) vendors, usually without source code available. Typical solutions-provider employees are not always skilled or experienced enough to debug these systems efficiently. Our goal is to design tools that enable modestly-skilled programmers (and experts, too) to isolate performance bottlenecks in distributed systems composed of black-box nodes.We approach this problem by obtaining message-level traces of system activity, as passively as possible and without any knowledge of node internals or message semantics. We have developed two very different algorithms for inferring the dominant causal paths through a distributed system from these traces. One uses timing information from RPC messages to infer inter-call causality; the other uses signal-processing techniques. Our algorithms can ascribe delay to specific nodes on specific causal paths. Unlike previous approaches to similar problems, our approach requires no modifications to applications, middleware, or messages.
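The RPC-timing idea from the abstract can be sketched as a nesting test: an RPC issued by a node is inferred to be caused by an enclosing RPC into that node whose call/return interval contains it. The trace format below, (caller, callee, t_call, t_return) tuples, is a hypothetical simplification, not the paper's message-trace format, and this sketch omits the paper's signal-processing algorithm entirely.

```python
def infer_causal_children(rpcs):
    """Timing-based ('nesting') causality sketch: RPC j is inferred to
    be a child of RPC i when j is issued by i's callee and j's whole
    call/return interval lies inside i's interval.
    Each record is a (caller, callee, t_call, t_return) tuple."""
    children = {i: [] for i in range(len(rpcs))}
    for i, (caller, callee, t0, t1) in enumerate(rpcs):
        for j, (caller2, _callee2, s0, s1) in enumerate(rpcs):
            if i != j and caller2 == callee and t0 < s0 and s1 < t1:
                children[i].append(j)
    return children

# hypothetical three-tier trace: client -> web -> db
trace = [
    ("client", "web", 0.0, 10.0),
    ("web",    "db",  2.0,  4.0),
    ("web",    "db",  5.0,  7.0),
]
children = infer_causal_children(trace)
# children[0] == [1, 2]: both db calls nest inside the client->web call
```

Delay can then be ascribed to a node by subtracting the time spent in its children's intervals from its own, which is how such traces localize bottlenecks without any knowledge of node internals.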