
Showing papers on "Katz centrality published in 2018"


Journal ArticleDOI
TL;DR: It is concluded that undertaking data reduction using unsupervised machine learning methods helps to choose appropriate variables (centrality measures) and identify the contribution proportions of the centrality measures with PCA as a prerequisite step of network analysis before inferring functional consequences, e.g., essentiality of a node.
Abstract: Numerous centrality measures have been introduced to identify “central” nodes in large networks. The availability of a wide range of measures for ranking influential nodes leaves the user to decide which measure may best suit the analysis of a given network. The choice of a suitable measure is furthermore complicated by the impact of the network topology on ranking influential nodes by centrality measures. To approach this problem systematically, we examined the centrality profile of nodes of yeast protein-protein interaction networks (PPINs) in order to detect which centrality measure succeeds in predicting influential proteins. We studied how different topological network features are reflected in a large set of commonly used centrality measures. We used yeast PPINs to compare 27 common centrality measures. The measures characterize and assort influential nodes of the networks. We applied principal component analysis (PCA) and hierarchical clustering and found that the most informative measures depend on the network’s topology. Interestingly, some measures had a high level of contribution in comparison to others in all PPINs, namely Latora closeness, Decay, Lin, Freeman closeness, Diffusion, Residual closeness and Average distance centralities. The choice of a suitable set of centrality measures is crucial for inferring important functional properties of a network. We concluded that undertaking data reduction using unsupervised machine learning methods helps to choose appropriate variables (centrality measures). Hence, we proposed identifying the contribution proportions of the centrality measures with PCA as a prerequisite step of network analysis before inferring functional consequences, e.g., essentiality of a node.

127 citations
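The pipeline above (compute many centrality measures per node, then use PCA to find the informative ones) can be illustrated in miniature. The sketch below uses numpy only, four measures instead of the paper's 27, and a toy five-node graph; all variable names and parameter choices are my own, not the authors'.

```python
import numpy as np

# Toy undirected graph (adjacency matrix); edges: 0-1, 0-2, 0-3, 1-2, 3-4.
A = np.array([
    [0, 1, 1, 1, 0],
    [1, 0, 1, 0, 0],
    [1, 1, 0, 0, 0],
    [1, 0, 0, 0, 1],
    [0, 0, 0, 1, 0],
], dtype=float)
n = A.shape[0]

# Four centrality measures standing in for the paper's 27.
degree = A.sum(axis=1)
alpha = 0.9 / np.max(np.abs(np.linalg.eigvals(A)))      # below 1/rho(A)
katz = np.linalg.solve(np.eye(n) - alpha * A, np.ones(n))
w, V = np.linalg.eigh(A)
eig = np.abs(V[:, np.argmax(w)])                        # eigenvector centrality

# All-pairs shortest-path distances via repeated matrix products
# (adequate for a small unweighted graph), then closeness centrality.
dist = np.full((n, n), np.inf)
np.fill_diagonal(dist, 0.0)
reach = np.eye(n, dtype=bool)
frontier = A.copy()
for d in range(1, n):
    newly = (frontier > 0) & ~reach
    dist[newly] = d
    reach |= newly
    frontier = frontier @ A
closeness = (n - 1) / dist.sum(axis=1)

# Node-by-measure profile matrix, standardised, then PCA via SVD.
X = np.column_stack([degree, katz, eig, closeness])
Z = (X - X.mean(axis=0)) / X.std(axis=0)
U, s, Vt = np.linalg.svd(Z, full_matrices=False)
explained = s**2 / np.sum(s**2)   # variance captured by each component
```

Here `explained` gives the variance captured by each principal component, and the rows of `Vt` show how strongly each measure contributes to it, which is the role played by the contribution proportions discussed in the abstract.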


Journal ArticleDOI
TL;DR: The possibilities of the linear threshold model for the definition of centrality measures to be used on weighted and labeled social networks are explored and a new centrality measure to rank the users of the network, the Linear Threshold Rank (LTR), and a centralization measure to determine to what extent the entire network has a centralized structure are explored.
Abstract: Centrality and influence spread are two of the most studied concepts in social network analysis. In recent years, centrality measures have attracted the attention of many researchers, generating a large and varied number of new studies about social network analysis and its applications. However, as far as we know, traditional models of influence spread have not yet been exhaustively used to define centrality measures according to the influence criteria. Most of the considered work in this topic is based on the independent cascade model. In this paper we explore the possibilities of the linear threshold model for the definition of centrality measures to be used on weighted and labeled social networks. We propose a new centrality measure to rank the users of the network, the Linear Threshold Rank (LTR), and a centralization measure to determine to what extent the entire network has a centralized structure, the Linear Threshold Centralization (LTC). We appraise the viability of the approach through several case studies. We consider four different social networks to compare our new measures with two centrality measures based on relevance criteria and another centrality measure based on the independent cascade model. Our results show that our measures are useful for ranking actors and networks in a distinguishable way.

50 citations
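As a concrete illustration of ranking by linear-threshold spread, here is a minimal numpy sketch. The abstract does not spell out LTR's initial-activation rule, so seeding a node together with its neighbours is an assumption here, as are the uniform thresholds and row-normalised weights.

```python
import numpy as np

def linear_threshold_spread(W, seeds, theta):
    """Run the linear threshold process to fixation.

    W[i, j] is the influence weight of node j on node i; an inactive node
    activates once the summed weight of its active neighbours reaches its
    threshold theta[i].  Returns a boolean vector of active nodes.
    """
    n = W.shape[0]
    active = np.zeros(n, dtype=bool)
    active[list(seeds)] = True
    while True:
        newly = (W @ active >= theta) & ~active
        if not newly.any():
            return active
        active |= newly

def ltr(W, theta):
    """Linear Threshold Rank sketch: score each node by how many nodes end
    up active when it is seeded together with its neighbours (one reading
    of the paper's initial-activation rule; treat it as an assumption)."""
    n = W.shape[0]
    scores = np.zeros(n, dtype=int)
    for v in range(n):
        neigh = {u for u in range(n) if W[v, u] > 0 or W[u, v] > 0}
        scores[v] = linear_threshold_spread(W, {v} | neigh, theta).sum()
    return scores

# Toy weighted network: row-normalised adjacency, uniform thresholds.
A = np.zeros((6, 6))
for i, j in [(0, 1), (0, 2), (0, 3), (2, 3), (3, 4), (4, 5)]:
    A[i, j] = A[j, i] = 1.0
W = A / A.sum(axis=1, keepdims=True)   # incoming weights sum to 1
theta = np.full(6, 0.5)
scores = ltr(W, theta)
```

Setting every threshold to zero makes any seed activate the whole network, which is a handy sanity check on the spread routine.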


Journal ArticleDOI
TL;DR: The concept of systemic risk is applied to show that centrality metrics can be used for complex supply network risk assessment and indicate that these metrics are successful in identifying vulnerabilities in network structure even in simplified cases.
Abstract: The growth in size and complexity of supply chains has led to compounded risk exposure, which is hard to measure with existing risk management approaches. In this study, we apply the concept of systemic risk to show that centrality metrics can be used for complex supply network risk assessment. We review and select metrics, and set up an exemplary case applied to the material flow and contractual networks of Honda Acura. In the exemplary case study, geographical risk information is incorporated into selected systemic risk assessment metrics, and the results are compared to assessment without risk indicators in order to draw conclusions on how additional information can enhance systemic risk assessment in supply networks. Katz centrality is used to measure the node's risk spread using the World Risk Index. Authority and hub centralities are applied to measure the link risk spread using distances between geographical locations. Closeness is used to measure speed of disruption spread. Betweenness centrality is used to identify high-risk middlemen. Our results indicate that these metrics are successful in identifying vulnerabilities in network structure even in simplified cases, which risk practitioners can extend with historical data to gain more accurate insights into systemic risk exposure.

45 citations
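The node-level construction the authors describe (Katz centrality seeded with World Risk Index scores) can be imitated by replacing the usual all-ones vector in the Katz linear system with a per-node risk vector. A hedged numpy sketch on a hypothetical four-node supply chain; the risk values and the transpose convention (risk flowing downstream) are my assumptions, not the paper's exact setup.

```python
import numpy as np

# Hypothetical supply network: A[i, j] = 1 means i supplies j, so risk
# propagates along i -> j; the network is a small DAG here.
A = np.array([
    [0, 1, 1, 0],
    [0, 0, 1, 1],
    [0, 0, 0, 1],
    [0, 0, 0, 0],
], dtype=float)
n = A.shape[0]

# Stand-ins for the World Risk Index scores of each node's location.
risk = np.array([0.8, 0.2, 0.5, 0.1])

# Risk-weighted Katz: replace the usual all-ones exogenous vector with the
# risk scores, so x = risk + alpha * A^T x accumulates upstream risk.
alpha = 0.3  # any alpha works on a DAG (spectral radius is zero)
x = np.linalg.solve(np.eye(n) - alpha * A.T, risk)
```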


Proceedings ArticleDOI
15 Apr 2018
TL;DR: If stakeholder groups agree on the central factors (per Katz centrality), they also tend to agree on simulation outcomes and thus share a paradigm, and this work suggests that fishery management is a case in point.
Abstract: Modeling approaches can support policy coherence by capturing the logistics of an intervention involving multiple individuals, or by identifying goals and preferences of each individual. An important intermediate step is to identify agreement among individuals. This may be achieved through intensive qualitative methods such as interviews, or by automatically comparing models. Current comparisons are limited as they either assess whether individuals think of the same factors, or see the same causal connections between factors. Systems science suggests that, to test whether individuals really share a paradigm, we should mobilize their whole models. Instead of comparing their whole models through multiple simulation scenarios, we suggested using network centrality. We performed experiments on mental models from 264 participants in the context of fishery management. Our results suggest that if stakeholder groups agree on the central factors (per Katz centrality), they also tend to agree on simulation outcomes and thus share a paradigm.

37 citations


Journal ArticleDOI
TL;DR: A new network centrality measure based on the concept of nonbacktracking walks, that is, walks not containing subsequences of the form uvu where u and v are any distinct connected vertices of the underlying graph, is introduced and studied.
Abstract: We introduce and study a new network centrality measure based on the concept of nonbacktracking walks, that is, walks not containing subsequences of the form uvu where u and v are any distinct connected vertices of the underlying graph. We argue that this feature can yield more meaningful rankings than traditional walk-based centrality measures. We show that the resulting Katz-style centrality measure may be computed via the so-called deformed graph Laplacian---a quadratic matrix polynomial that can be associated with any graph. By proving a range of new results about this matrix polynomial, we gain insights into the behavior of the algorithm with respect to its Katz-like parameter. The results also inform implementation issues. In particular we show that, in an appropriate limit, the new measure coincides with the nonbacktracking version of eigenvector centrality introduced by Martin, Zhang, and Newman in 2014. Rigorous analysis on star and star-like networks illustrates the benefits of the new approach,...

34 citations
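The deformed graph Laplacian mentioned in the abstract makes the computation concrete: with M(t) = I − tA + t²(D − I), the generating function of nonbacktracking walk counts is (1 − t²)M(t)⁻¹, so a Katz-style nonbacktracking centrality follows from a single linear solve. The numpy sketch below compares it with standard Katz on a toy graph; the formula is taken from this line of literature, but the graph and parameter value are illustrative only.

```python
import numpy as np

# Undirected example: a star with four leaves plus one edge between leaves.
A = np.zeros((5, 5))
for i, j in [(0, 1), (0, 2), (0, 3), (0, 4), (1, 2)]:
    A[i, j] = A[j, i] = 1.0
n = A.shape[0]
D = np.diag(A.sum(axis=1))
I = np.eye(n)

t = 0.2  # Katz-like attenuation parameter

# Deformed graph Laplacian M(t) = I - t*A + t^2*(D - I); the generating
# function of nonbacktracking walk counts is (1 - t^2) * M(t)^{-1}, so a
# Katz-style nonbacktracking centrality is its row sum.
M = I - t * A + t**2 * (D - I)
x_nbt = (1 - t**2) * np.linalg.solve(M, np.ones(n))

# Standard Katz for comparison (counts all walks, backtracking included).
x_katz = np.linalg.solve(I - t * A, np.ones(n))
```

Because nonbacktracking walks are a subset of all walks, the nonbacktracking scores never exceed the standard Katz scores at the same attenuation value.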


Journal ArticleDOI
TL;DR: The theory of zeta functions provides an expression for the generating function of nonbacktracking walk counts on a directed network and it is shown how this expression can be used to produce a centrality measure that eliminates backtracking walks at no cost.
Abstract: The theory of zeta functions provides an expression for the generating function of nonbacktracking walk counts on a directed network. We show how this expression can be used to produce a centrality measure that eliminates backtracking walks at no cost. We also show that the radius of convergence of the generating function is determined by the spectrum of a three-by-three block matrix involving the original adjacency matrix. This gives a means to choose appropriate values of the attenuation parameter. We find that three important additional benefits arise when we use this technique to eliminate traversals around the network that are unlikely to be of relevance. First, we obtain a larger range of choices for the attenuation parameter. Second, because the radius of convergence of the generating function is invariant under the removal of certain types of nodes, we can gain computational efficiencies through reducing the dimension of the resulting eigenvalue problem. Third, the dimension of the linear system defining the centrality measures may be reduced in the same manner. We show that the new centrality measure may be interpreted as standard Katz on a modified network, where self loops are added, and where nonreciprocal edges are augmented with negative weights. We also give a multilayer interpretation, where negatively weighted walks between layers compensate for backtracking walks on the only non-empty layer. Studying the limit as the attenuation parameter approaches its upper bound allows us to propose an eigenvector-based nonbacktracking centrality measure in this directed network setting. We find that the two-by-two block matrix arising in previous studies focused on undirected networks must be extended to a new three-by-three block structure to allow for directed edges. We illustrate the centrality measure on a synthetic network, where it is shown to eliminate a localization effect present in standard Katz centrality. Finally, we give results for real networks.

31 citations


Journal ArticleDOI
TL;DR: Wang et al., as discussed by the authors, investigated the systemic importance of Chinese financial institutions and its influential factors via complex network modeling, and found that the larger a node's comprehensive network centrality index, the higher its ranking and the greater the institution's systemic importance.
Abstract: From the perspective of volatility spillover, this paper investigates the systemic importance of Chinese financial institutions and its influential factors using a complex network modeling method. We first construct the volatility spillover networks by vector autoregressive models-multivariate generalized autoregressive conditional heteroscedastic models (VAR-MGARCH) in a BEKK form, and then construct a comprehensive network centrality index based on five network centralities (degree centrality, closeness centrality, betweenness centrality, modified Katz centrality and information centrality) to measure the financial institutions’ systemic importance. The results indicate that the larger a node's comprehensive network centrality index, the higher its ranking in the networks and the greater the systemic importance of the corresponding financial institution. Finally, we identify the major factors which affect systemic importance of the financial institutions with panel data regression analysis. We find that compared with the market factors, the accounting factors are more advantageous for identifying important financial institutions. Specifically, financial institutions with larger size and higher assets growth rate tend to be associated with greater systemic importance.

20 citations


Journal ArticleDOI
TL;DR: In this article, the authors introduce the potential gain as a centrality measure that unifies many walk-based centrality metrics in graphs and captures the notion of node navigability, interpreted as the property of being reachable from anywhere else (in the graph) through short walks.
Abstract: Centrality metrics are a popular tool in Network Science to identify important nodes within a graph. We introduce the Potential Gain as a centrality measure that unifies many walk-based centrality metrics in graphs and captures the notion of node navigability, interpreted as the property of being reachable from anywhere else (in the graph) through short walks. Two instances of the Potential Gain (called the Geometric and the Exponential Potential Gain) are presented and we describe scalable algorithms for computing them on large graphs. We also give a proof of the relationship between the new measures and established centralities. The geometric potential gain of a node can thus be characterized as the product of its Degree centrality by its Katz centrality scores. At the same time, the exponential potential gain of a node is proved to be the product of Degree centrality by its Communicability index. These formal results connect potential gain to both the "popularity" and "similarity" properties that are captured by the above centralities.

20 citations
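The characterisation in the abstract (geometric potential gain as the product of degree and Katz scores) is easy to reproduce directly. A small numpy sketch, with graph, normalisation, and parameter choices of my own:

```python
import numpy as np

# Toy undirected graph: two triangles joined by the bridge edge 2-3.
A = np.zeros((6, 6))
for i, j in [(0, 1), (1, 2), (0, 2), (2, 3), (3, 4), (4, 5), (3, 5)]:
    A[i, j] = A[j, i] = 1.0
n = A.shape[0]

deg = A.sum(axis=1)
alpha = 0.9 / np.max(np.abs(np.linalg.eigvals(A)))   # below 1/rho(A)
katz = np.linalg.solve(np.eye(n) - alpha * A, np.ones(n))

# Geometric potential gain, following the abstract's characterisation as
# the product of degree and Katz scores (the normalisation is my choice).
gpg = deg * katz
```

On this symmetric toy graph, the two bridge endpoints receive identical and maximal scores, matching the "popularity times navigability" reading of the measure.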


Journal ArticleDOI
TL;DR: Wang et al. as mentioned in this paper analyzed the systemic importance of Chinese financial institutions and its influential factors based on return spillover network and found that financial institutions with larger tail risk of stock return, higher return on equity, lower turnover rate and lower assets growth rate tend to be associated with greater systemic importance.
Abstract: The objective of this study is to analyze systemic importance of Chinese financial institutions and its influential factors based on return spillover network. We first investigate the return spillover effects among financial institutions and construct the return spillover networks by Granger causality in vector autoregressive (VAR) models. Then we calculate six network centralities (degree centrality, closeness centrality, betweenness centrality, modified Katz centrality, eccentricity centrality and information centrality) to measure systemic importance of financial institutions. Because different centrality measures are correlated with each other, we use the principal component analysis method to obtain comprehensive information about systemic importance of financial institutions. Finally, we identify the major factors, including market and accounting variables, which affect systemic importance of the financial institutions with panel data regression analysis. We find that financial institutions with larger tail risk of stock return, higher return on equity, lower turnover rate and lower assets growth rate tend to be associated with greater systemic importance.

18 citations


Proceedings Article
25 Apr 2018
TL;DR: This paper proposes a novel axiomatization of the Eigenvector Centrality and the Katz Centrality based on six simple requirements, which highlights the similarities and differences between both centralities which may help in choosing the right centrality for a specific application.
Abstract: Feedback centralities are one of the key classes of centrality measures. They assess the importance of a vertex recursively, based on the importance of its neighbours. Feedback centralities include the Eigenvector Centrality, as well as its variants, such as the Katz Centrality or the PageRank, and are used in various AI applications, such as ranking the importance of websites on the Internet and the most influential users in the Twitter social network. In this paper, we study the theoretical underpinning of feedback centralities. Specifically, we propose a novel axiomatization of the Eigenvector Centrality and the Katz Centrality based on six simple requirements. Our approach highlights the similarities and differences between both centralities, which may help in choosing the right centrality for a specific application.

13 citations


Proceedings ArticleDOI
20 Apr 2018
TL;DR: A new algorithm “PrKatz” is proposed, based on Katz centrality and a propagation probability threshold that captures a node's ability to influence users successfully; experiments demonstrate the performance of the proposed algorithm compared with state-of-the-art algorithms in terms of influence spread.
Abstract: Influence maximization has attracted a lot of interest in recent decades due to its various applications, such as advertising, spotting suspicious users who may paralyze the functionality of a system, and rumor control. The main purpose of this paper is to select influential users, subject to the available budget, so as to maximize the influence coverage in the network. Existing work mainly focuses on designing methods based on centrality metrics, because of their low time complexity and acceptable influence spread, since approaches based on greedy algorithms suffer from high time complexity. In this paper, we propose a new algorithm, “PrKatz”, based on Katz centrality and a propagation probability threshold that captures a node's ability to influence users successfully. Experimental results on large datasets demonstrate the performance of our proposed algorithm compared with state-of-the-art algorithms in terms of influence spread.
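The abstract leaves PrKatz's internals unspecified, so the following sketch is only a guess at the general shape of such a method: rank nodes by Katz centrality and keep the k best whose propagation probability clears the threshold. The filtering rule and all parameter values below are assumptions, not the paper's algorithm.

```python
import numpy as np

def katz_scores(A, alpha):
    """Katz centrality via the linear system (I - alpha*A^T) x = 1."""
    n = A.shape[0]
    return np.linalg.solve(np.eye(n) - alpha * A.T, np.ones(n))

def select_seeds(A, prop_prob, k, alpha, threshold):
    """Hypothetical PrKatz-style selection: rank nodes by Katz centrality
    and keep the k best whose propagation probability clears the threshold
    (a guess at the filtering rule, not the paper's algorithm)."""
    order = np.argsort(-katz_scores(A, alpha))
    return [v for v in order if prop_prob[v] >= threshold][:k]

# Star graph: node 0 is the most central, but its propagation probability
# is below the threshold, so it is skipped.
A = np.zeros((5, 5))
for leaf in range(1, 5):
    A[0, leaf] = A[leaf, 0] = 1.0
prop_prob = np.array([0.2, 0.9, 0.9, 0.9, 0.9])
seeds = select_seeds(A, prop_prob, k=2, alpha=0.3, threshold=0.5)
```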

Proceedings ArticleDOI
TL;DR: In this paper, a clear understanding of the main vertex centrality measures available, unveiling their similarities and differences in a large number of distinct social networks, is provided by means of an empirical analysis.
Abstract: Measures of complex network analysis, such as vertex centrality, have the potential to unveil existing network patterns and behaviors. They contribute to the understanding of networks and their components by analyzing their structural properties, which makes them useful in several computer science domains and applications. Unfortunately, there is a large number of distinct centrality measures and little is known about their common characteristics in practice. By means of an empirical analysis, we aim at a clear understanding of the main centrality measures available, unveiling their similarities and differences in a large number of distinct social networks. Our experiments show that the vertex centrality measures known as information, eigenvector, subgraph, walk betweenness and betweenness can distinguish vertices in all kinds of networks with a granularity performance at 95%, while other metrics achieved a considerably lower result. In addition, we demonstrate that several pairs of metrics evaluate the vertices in a very similar way, i.e. their correlation coefficient values are above 0.7. This was unexpected, considering that each metric presents a quite distinct theoretical and algorithmic foundation. Our work thus contributes towards the development of a methodology for principled network analysis and evaluation.
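The kind of pairwise comparison the paper performs can be reproduced with a correlation matrix over centrality vectors. A minimal numpy sketch on a toy graph with three measures (the paper uses many more measures, on real social networks):

```python
import numpy as np

# Toy graph: a clustered core (0-3) with a pendant tail (4-6).
A = np.zeros((7, 7))
for i, j in [(0, 1), (0, 2), (0, 3), (1, 2), (2, 3), (3, 4), (4, 5), (5, 6)]:
    A[i, j] = A[j, i] = 1.0
n = A.shape[0]

degree = A.sum(axis=1)
alpha = 0.9 / np.max(np.abs(np.linalg.eigvals(A)))   # below 1/rho(A)
katz = np.linalg.solve(np.eye(n) - alpha * A, np.ones(n))
w, V = np.linalg.eigh(A)
eig = np.abs(V[:, np.argmax(w)])                     # eigenvector centrality

# Pairwise Pearson correlations between the centrality vectors; entries
# above ~0.7, as in the paper's findings, mean two measures rank the
# nodes in a very similar way.
C = np.corrcoef(np.vstack([degree, katz, eig]))
```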

Proceedings ArticleDOI
01 Jan 2018
TL;DR: In this paper, the problem of computing rankings for Katz centrality is considered and an algorithm that iteratively improves the upper and lower bounds on the Katz score of a node is proposed.
Abstract: Network analysis defines a number of centrality measures to identify the most central nodes in a network. Fast computation of those measures is a major challenge in algorithmic network analysis. Aside from closeness and betweenness, Katz centrality is one of the established centrality measures. In this paper, we consider the problem of computing rankings for Katz centrality. In particular, we propose upper and lower bounds on the Katz score of a given node. While previous approaches relied on numerical approximation or heuristics to compute Katz centrality rankings, we construct an algorithm that iteratively improves those upper and lower bounds until a correct Katz ranking is obtained. We extend our algorithm to dynamic graphs while maintaining its correctness guarantees. Experiments demonstrate that our static graph algorithm outperforms both numerical approaches and heuristics with speedups between 1.5 x and 3.5 x, depending on the desired quality guarantees. Our dynamic graph algorithm improves upon the static algorithm for update batches of less than 10000 edges. We provide efficient parallel CPU and GPU implementations of our algorithms that enable near real-time Katz centrality computation for graphs with hundreds of millions of nodes in fractions of seconds.
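The flavour of the bounding approach can be conveyed with a much simpler bound than the paper's: partial sums of the Katz series give a lower bound, and the tail can be bounded geometrically through the maximum degree. This numpy sketch shows the idea under that crude bound; it is not the authors' algorithm.

```python
import numpy as np

def katz_bounds(A, alpha, r):
    """Lower/upper bounds on the Katz scores sum_{k>=1} alpha^k (A^k 1)_i
    after r terms.  The tail is bounded geometrically via the maximum
    degree -- a deliberately crude bound; the paper's bounds are sharper."""
    n = A.shape[0]
    dmax = A.sum(axis=1).max()
    assert alpha * dmax < 1, "need alpha * dmax < 1 for this tail bound"
    lower = np.zeros(n)
    walks = np.ones(n)
    for k in range(1, r + 1):
        walks = A @ walks              # (A^k 1)_i: walks of length k from i
        lower += alpha**k * walks
    # there are at most dmax^k walks of length k from any node
    tail = (alpha * dmax) ** (r + 1) / (1 - alpha * dmax)
    return lower, lower + tail

# Toy graph and parameters (illustrative only).
A = np.zeros((6, 6))
for i, j in [(0, 1), (0, 2), (0, 3), (2, 3), (3, 4), (4, 5)]:
    A[i, j] = A[j, i] = 1.0
alpha = 0.2
lower, upper = katz_bounds(A, alpha, r=4)
```

Ranking works by refining r until the [lower, upper] intervals of the nodes in question no longer overlap; the paper's sharper bounds make this separation happen much sooner.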

Journal ArticleDOI
TL;DR: This work presents an algorithm for updating Katz centrality scores in a dynamic graph that incrementally updates thecentrality scores as the underlying graph changes, and exploits properties of iterative solvers to obtain updated Katz scores in dynamic graphs.
Abstract: A variety of large datasets, such as social networks or biological data, can be represented as graphs. A common query in graph analysis is to identify the most important vertices in a graph. Centrality metrics are used to obtain numerical scores for each vertex in the graph. The scores are then translated to rankings identifying relative importance of vertices. In this work, we focus on Katz centrality, a linear algebra-based metric. In many real applications, since data are constantly being produced and changed, it is necessary to have a dynamic algorithm to update centrality scores with minimal computation when the graph changes. We present an algorithm for updating Katz centrality scores in a dynamic graph that incrementally updates the centrality scores as the underlying graph changes. Our proposed method exploits properties of iterative solvers to obtain updated Katz scores in dynamic graphs. Our dynamic algorithm improves performance and achieves speedups of over two orders of magnitude compared to a standard static algorithm while maintaining high quality of results.
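The key trick, reusing the old solution as the starting point of an iterative solver after the graph changes, can be sketched with a plain fixed-point iteration for the Katz linear system. The graph, parameters, and solver choice below are illustrative; the paper's incremental method is more refined.

```python
import numpy as np

def katz_iterate(A, alpha, x0, tol=1e-10, max_iter=10000):
    """Fixed-point iteration x <- 1 + alpha * A @ x for the Katz system
    (I - alpha*A) x = 1; converges when alpha < 1/spectral_radius(A)."""
    x = x0.copy()
    ones = np.ones(A.shape[0])
    for it in range(1, max_iter + 1):
        x_new = ones + alpha * (A @ x)
        if np.max(np.abs(x_new - x)) < tol:
            return x_new, it
        x = x_new
    return x, max_iter

# Static graph: a 20-node cycle.
n = 20
A = np.zeros((n, n))
for i in range(n):
    A[i, (i + 1) % n] = A[i, (i - 1) % n] = 1.0
alpha = 0.3
x_static, iters_cold = katz_iterate(A, alpha, np.ones(n))

# Dynamic update: insert one edge, then warm-start the solver from the old
# scores instead of recomputing from scratch.
A[0, 10] = A[10, 0] = 1.0
x_warm, iters_warm = katz_iterate(A, alpha, x_static)
```

Because the warm start `x_static` is already close to the post-update solution, the iteration typically needs far fewer sweeps than a cold start, which is the spirit of the paper's approach.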

Journal ArticleDOI
TL;DR: Simulations on real-world networks show that the proposed IM method has quite high accuracy on predicting both the preference of any normal agent and the final competition result, particularly in undirected networks.
Abstract: In this paper, we study the prediction problem for diffusion processes on complex networks in competitive circumstances. With this problem solved, competitors can intervene in the diffusion process in a timely manner if needed, so that an expected outcome can be obtained. We consider a model with two groups of competitors spreading opposite opinions on a network. A prediction method based on the mutual influences among the agents is proposed, called Influence Matrix (IM for short), and simulations on real-world networks show that the proposed IM method has quite high accuracy in predicting both the preference of any normal agent and the final competition result. For comparison purposes, classic centrality measures are also used to predict the competition result. It is shown that PageRank, Degree, Katz Centrality, and the IM method are suitable for predicting the competition result. More precisely, in undirected networks, the IM method performs better than these centrality measures when the competing group contains more than one agent; in directed networks, the IM method performs second only to PageRank.

Journal ArticleDOI
TL;DR: The alternative consensus protocol is applied to leader-following formation problems for discrete-time multi-agent systems with second-order dynamics and time-varying communication delay, offering advantages in robustness to time delay and H∞ performance.
Abstract: This paper aims at constructing an improved structure for the consensus protocol in discrete-time multi-agent systems. A consensus protocol is proposed using a centrality measure for each agent, determined by the information flow through the multi-agent system. In analyzing multi-agent systems, a graph representation is used. The concept of a centrality measure, introduced in the field of social network analysis, is used to find the most central node within the graph. This work focuses on Katz centrality, one of these centrality measures. The advantage of Katz centrality is that a node’s centrality depends not only on how many other nodes it is connected to (its degree), but also on their centrality. The alternative consensus protocol is applied to leader-following formation problems for discrete-time multi-agent systems with second-order dynamics and time-varying communication delay. To achieve this, sufficient conditions for the aforementioned problems are established in terms of linear matrix inequalities by utilizing the Lyapunov method and some mathematical techniques. Finally, discrete-time multi-agent systems modeled with point-mass aircraft dynamics and the corresponding simulation results are given to illustrate the advantages of the alternative consensus protocol in terms of robustness to time delay and H∞ performance.

Proceedings ArticleDOI
09 Jul 2018
TL;DR: The problem of optimally investing in nodes of a social network in a competitive setting, where two camps aim to maximize adoption of their opinions by the population, is studied and the existence of Nash equilibria under reasonable assumptions is shown.
Abstract: We study the problem of two competing camps aiming to maximize the adoption of their respective opinions, by optimally investing in nodes of a social network in multiple phases. The final opinion of a node in a phase acts as its biased opinion in the following phase. Using an extension of Friedkin-Johnsen model, we formulate the camps' utility functions, which we show to involve what can be interpreted as multiphase Katz centrality. We hence present optimal investment strategies of the camps, and the loss incurred if myopic strategy is employed. Simulations affirm that nodes attributing higher weightage to bias necessitate higher investment in initial phase. The extended version of this paper analyzes a setting where a camp's influence on a node depends on the node's bias; we show existence and polynomial time computability of Nash equilibrium.

Journal ArticleDOI
TL;DR: This work answers the question of which pairwise rankings are reliable given an approximate solution to the linear system, and shows how to guarantee accurate vertex rankings from approximate centrality scores faster than current methods.

Posted Content
TL;DR: In this article, the authors study the problem of optimally investing in nodes of a social network in a competitive setting, where two camps aim to maximize adoption of their opinions by the population.
Abstract: We study the problem of optimally investing in nodes of a social network in a competitive setting, where two camps aim to maximize adoption of their opinions by the population. In particular, we consider the possibility of campaigning in multiple phases, where the final opinion of a node in a phase acts as its initial biased opinion for the following phase. Using an extension of the popular DeGroot-Friedkin model, we formulate the utility functions of the camps, and show that they involve what can be interpreted as multiphase Katz centrality. Focusing on two phases, we analytically derive Nash equilibrium investment strategies, and the extent of loss that a camp would incur if it acted myopically. Our simulation study affirms that nodes attributing higher weightage to initial biases necessitate higher investment in the first phase, so as to influence these biases for the terminal phase. We then study the setting in which a camp's influence on a node depends on its initial bias. For single camp, we present a polynomial time algorithm for determining an optimal way to split the budget between the two phases. For competing camps, we show the existence of Nash equilibria under reasonable assumptions, and that they can be computed in polynomial time.

Book ChapterDOI
29 Jan 2018
TL;DR: This work studies the parameterized complexity of the NP-complete problems Closeness Improvement and Betweenness Improvement, in which we are asked to improve a given vertex's closeness or betweenness centrality by a given amount through adding a given number of edges to the network.
Abstract: The centrality of a vertex v in a network intuitively captures how important v is for communication in the network. The task of improving the centrality of a vertex has many applications, as a higher centrality often implies a larger impact on the network or less transportation or administration cost. In this work we study the parameterized complexity of the NP-complete problems Closeness Improvement and Betweenness Improvement, in which we ask to improve a given vertex's closeness or betweenness centrality by a given amount through adding a given number of edges to the network. Herein, the closeness of a vertex v sums the multiplicative inverses of distances of other vertices to v, and the betweenness sums for each pair of vertices the fraction of shortest paths going through v. Unfortunately, for the natural parameter “number of edges to add” we obtain hardness results, even in rather restricted cases. On the positive side, we also give an island of tractability for the parameter measuring the vertex deletion distance to cluster graphs.

Posted Content
TL;DR: This paper constructs an algorithm that iteratively improves upper and lower bounds on the Katz score of a given node until a correct Katz ranking is obtained, and provides efficient parallel CPU and GPU implementations of the algorithm that enable near real-time Katz centrality computation for graphs with hundreds of millions of nodes in fractions of seconds.
Abstract: Network analysis defines a number of centrality measures to identify the most central nodes in a network. Fast computation of those measures is a major challenge in algorithmic network analysis. Aside from closeness and betweenness, Katz centrality is one of the established centrality measures. In this paper, we consider the problem of computing rankings for Katz centrality. In particular, we propose upper and lower bounds on the Katz score of a given node. While previous approaches relied on numerical approximation or heuristics to compute Katz centrality rankings, we construct an algorithm that iteratively improves those upper and lower bounds until a correct Katz ranking is obtained. We extend our algorithm to dynamic graphs while maintaining its correctness guarantees. Experiments demonstrate that our static graph algorithm outperforms both numerical approaches and heuristics with speedups between 1.5x and 3.5x, depending on the desired quality guarantees. Our dynamic graph algorithm improves upon the static algorithm for update batches of less than 10000 edges. We provide efficient parallel CPU and GPU implementations of our algorithms that enable near real-time Katz centrality computation for graphs with hundreds of millions of nodes in fractions of seconds.

Book ChapterDOI
11 Dec 2018
TL;DR: This article showed that the solution of a standard clearing model commonly used in contagion analyses for financial systems can be expressed as a specific form of a generalized Katz centrality measure under conditions that correspond to a system-wide shock.
Abstract: I show that the solution of a standard clearing model commonly used in contagion analyses for financial systems can be expressed as a specific form of a generalized Katz centrality measure under conditions that correspond to a system-wide shock. This result provides a formal explanation for earlier empirical results which showed that Katz-type centrality measures are closely related to contagiousness. It also allows assessing the assumptions that one is making when using such centrality measures as systemic risk indicators. I conclude that these assumptions should be considered too strong and that, from a theoretical perspective, clearing models should be given preference over centrality measures in systemic risk analyses.

Posted Content
TL;DR: In this paper, the authors show that a version of the friendship paradox holds rigorously for eigenvector centrality, i.e., on average, our friends are more important than us.
Abstract: The friendship paradox states that, on average, our friends have more friends than we do. In network terms, the average degree over the nodes can never exceed the average degree over the neighbours of nodes. This effect, which is a classic example of sampling bias, has attracted much attention in the social science and network science literature, with variations and extensions of the paradox being defined, tested and interpreted. Here, we show that a version of the paradox holds rigorously for eigenvector centrality: on average, our friends are more important than us. We then consider general matrix-function centrality, including Katz centrality, and give sufficient conditions for the paradox to hold. We also discuss which results can be generalized to the cases of directed and weighted edges. In this way, we add theoretical support for a field that has largely been evolving through empirical testing.
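The eigenvector-centrality version of the paradox is easy to check numerically: compare the plain average of the centrality over nodes with the average over "friend slots" (each node weighted by how often it appears as a neighbour, which works out to λ·Σx/2m). A numpy sketch on a small non-regular graph of my own choosing:

```python
import numpy as np

# Small non-regular connected graph: a star with one extra edge and a tail.
A = np.zeros((6, 6))
for i, j in [(0, 1), (0, 2), (0, 3), (0, 4), (4, 5), (1, 2)]:
    A[i, j] = A[j, i] = 1.0
deg = A.sum(axis=1)

# Eigenvector centrality: leading eigenvector of A, nonnegative for a
# connected graph by Perron-Frobenius.
w, V = np.linalg.eigh(A)
x = np.abs(V[:, np.argmax(w)])

# "Us": plain average over nodes.  "Our friends": average over friend
# slots, weighting each node by how often it appears as a neighbour;
# algebraically this equals lambda * sum(x) / (2m).
mean_over_nodes = x.mean()
mean_over_friends = (deg @ x) / deg.sum()
```

The inequality `mean_over_friends >= mean_over_nodes` reduces to λ being at least the average degree, so it holds for every graph and is strict unless the graph is regular.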

Journal ArticleDOI
TL;DR: This work examines node centrality measures such as degree, closeness, eigenvector, Katz and subgraph centrality for undirected networks, and shows that the logarithmic function in particular has potential as a centrality measure.
Abstract: Networks arise naturally in a wide range of different contexts, such as biological systems and social relationships, as well as various technological scenarios. Investigation of the dynamic phenomena taking place in a network, determination of the structure of the network and its communities, and description of the interactions between various elements of the network are the key issues in network analysis. One of the major challenges in analyzing network structure is the identification of the node(s) with an outstanding structural position within the network. The popular method for doing this is to calculate a measure of centrality. We examine node centrality measures such as degree, closeness, eigenvector, Katz and subgraph centrality for undirected networks. We show how Katz centrality can be turned into degree and eigenvector centrality by considering limiting cases. Some existing centrality measures are linked to matrix functions. We extend this idea and examine centrality measures based on general matrix functions and, in particular, the logarithmic, cosine, sine, and hyperbolic functions. We also explore the concept of generalised Katz centrality. Various experiments are conducted for different networks generated using random graph models. The results show that the logarithmic function in particular has potential as a centrality measure. Similar results were obtained for real-world networks.