
Showing papers by "Yaniv Altshuler" published in 2022


Journal ArticleDOI
TL;DR: The authors demonstrate that preferential principles do not apply to the temporal changes in node ranking, and that the ranking dynamics follow a non-monotonic curve, suggesting an inherent partition of the nodes into qualitatively distinct stability categories.
Abstract: Numerous studies over the past decades have established that real-world networks typically follow preferential attachment and detachment principles. Consequently, degree fluctuations increase monotonically while rising up the ‘degree ladder’, making high-degree nodes prone both to the attachment of new edges and to the detachment of existing ones. Despite the extensive study of node degrees (absolute popularity), many domains regard node ranks (relative popularity) as of greater importance. This raises intriguing questions: what dynamics are expected to emerge when observing the ranking of network nodes over time? Does the ranking of nodes present the same monotonic patterns as the dynamics of their corresponding degrees? In this paper, we show that, surprisingly, the answer is not straightforward. By performing both theoretical and empirical analyses, we demonstrate that preferential principles do not apply to the temporal changes in node ranking. We show that the ranking dynamics follow a non-monotonic curve, suggesting an inherent partition of the nodes into qualitatively distinct stability categories. These findings provide plausible explanations for observed yet hitherto unexplained phenomena, such as how superstars fortify their ranks despite massive fluctuations in their degrees, and why stars are more prone to rank instability.
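As a rough illustration of the setting studied here (not the authors' model or code), the Python sketch below grows a standard preferential-attachment network and records each node's rank, i.e. its position when nodes are sorted by degree, at regular intervals. Comparing a node's degree trajectory with its rank trajectory is the contrast the paper analyses; all function and parameter names are hypothetical.

```python
# Illustrative sketch: preferential-attachment growth with rank snapshots.
import random
from collections import defaultdict

def grow_pa_network(n_nodes=1000, m=3, snapshot_every=100, seed=42):
    rng = random.Random(seed)
    degree = defaultdict(int)
    # seed network: m fully connected nodes
    for u in range(m):
        degree[u] = m - 1
    repeated = [u for u in range(m) for _ in range(m - 1)]  # degree-proportional sampling pool
    rank_history = []  # one {node: rank} dict per snapshot

    for new_node in range(m, n_nodes):
        # preferential attachment: sample m distinct targets proportionally to degree
        chosen = set()
        while len(chosen) < m:
            chosen.add(rng.choice(repeated))
        for v in chosen:
            degree[v] += 1
            degree[new_node] += 1
            repeated.extend([v, new_node])
        if new_node % snapshot_every == 0:
            ranked = sorted(degree, key=degree.get, reverse=True)
            rank_history.append({node: r for r, node in enumerate(ranked, start=1)})
    return degree, rank_history

degrees, ranks = grow_pa_network()
trajectory = [snap[0] for snap in ranks if 0 in snap]  # rank of node 0 over time
print("rank trajectory of node 0:", trajectory)
```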

6 citations


Journal ArticleDOI
TL;DR: In this article, the authors examine the dynamics of networks, focusing on stability characteristics of node popularity, and present results on various empirical datasets, finding that such temporal aspects are governed by a power-law regime and that these power-law regularities are equally likely across all node ages.
Abstract: The structure of networks has been a focal research topic over the past few decades. These research efforts have enabled the discovery of numerous structural patterns and regularities, bringing forth advancements in many fields. In particular, the ubiquitous power-law patterns evident in degree distributions, graph eigenvalues and human mobility patterns have provided the opportunity to model many different complex systems. However, regularities in the dynamical patterns of networks remain a considerably less explored terrain. In this study we examine the dynamics of networks, focusing on stability characteristics of node popularity, and present our results using various empirical datasets. Specifically, we address several intriguing questions: for how long are popular nodes expected to remain so? How much time is expected to pass between two consecutive popularity periods? What characterizes nodes which manage to maintain their popularity for long periods of time? Surprisingly, we find that such temporal aspects are governed by a power-law regime, and that these power-law regularities are equally likely across all node ages.
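As an illustration of the kind of measurement described (assumptions only, not the paper's methodology), the sketch below takes a sequence of "top-k" membership snapshots, extracts the lengths of each node's uninterrupted popularity periods, and fits a power-law exponent with the standard continuous maximum-likelihood estimator. The function names and the toy data are hypothetical.

```python
# Illustrative sketch: popularity-period durations and a power-law exponent fit.
import math
from itertools import groupby

def popularity_durations(top_k_history, node):
    """top_k_history: list of sets of currently 'popular' nodes, one per time step."""
    flags = [node in snapshot for snapshot in top_k_history]
    return [len(list(run)) for popular, run in groupby(flags) if popular]

def power_law_alpha(samples, x_min=1.0):
    """Continuous-MLE estimate: alpha = 1 + n / sum(ln(x_i / x_min)) over x_i >= x_min."""
    xs = [x for x in samples if x >= x_min]
    total = sum(math.log(x / x_min) for x in xs)
    return float("nan") if total == 0 else 1.0 + len(xs) / total

# toy usage with made-up snapshots, purely for illustration
history = [{1, 2}, {1, 2}, {2, 3}, {1, 3}, {1, 3}, {1, 2}]
durations = [d for node in (1, 2, 3) for d in popularity_durations(history, node)]
print("popularity durations:", durations)
print("estimated power-law exponent:", round(power_law_alpha(durations), 2))
```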

3 citations


Journal ArticleDOI
28 Feb 2022 - Entropy
TL;DR: All current neural networks are vulnerable to mimicking attacks, even if they do not divulge anything but the most basic required output, and the student model that mimics them cannot be easily detected using currently available techniques.
Abstract: As state-of-the-art deep neural networks are deployed at the core of increasingly many AI-based products and services, the incentive for “copying them” (i.e., their intellectual property, manifested through the knowledge encapsulated in them), whether by adversaries or commercial competitors, is expected to increase considerably over time. The most efficient way to extract or steal knowledge from such networks is to query them with a large dataset of random samples and record their outputs, and then train a student network that aims to mimic these outputs, without making any assumptions about the original networks. The most effective way to protect against such a mimicking attack is to answer queries with the classification result only, omitting the confidence values associated with the softmax layer. In this paper, we present a novel method for generating composite images for attacking a mentor neural network using a student model. Our method assumes no information regarding the mentor’s training dataset, architecture, or weights. Furthermore, assuming no information regarding the mentor’s softmax output values, our method successfully mimics the given neural network and is capable of stealing large portions (and sometimes all) of its encapsulated knowledge. Our student model achieved 99% relative accuracy with respect to the protected mentor model on the CIFAR-10 test set. In addition, we demonstrate that our student network (which copies the mentor) is impervious to watermarking protection methods and thus would evade detection as a stolen model by existing dedicated techniques. Our results imply that all current neural networks are vulnerable to mimicking attacks, even if they divulge nothing but the most basic required output, and that the student model that mimics them cannot be easily detected using currently available techniques.
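A minimal sketch, assuming a PyTorch setting, of the hard-label extraction scenario described above (not the paper's composite-image method): the attacker observes only the mentor's top-1 class for each query and trains a student on those labels. The mentor, student, and query_loader objects are placeholders the attacker would supply; in the paper's setting the queries would be generated composite images.

```python
# Illustrative sketch: hard-label model-mimicking (extraction) training loop.
import torch
import torch.nn as nn

def query_mentor(mentor: nn.Module, x: torch.Tensor) -> torch.Tensor:
    """Black-box query: only the predicted class index is returned, no softmax scores."""
    with torch.no_grad():
        return mentor(x).argmax(dim=1)

def mimic(mentor: nn.Module, student: nn.Module, query_loader, epochs=10, lr=1e-3):
    """Train the student on (query image, mentor's hard label) pairs only."""
    optimizer = torch.optim.Adam(student.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    mentor.eval()
    student.train()
    for _ in range(epochs):
        for (x,) in query_loader:              # unlabeled query images (e.g. synthetic/composite)
            labels = query_mentor(mentor, x)   # the only signal available to the attacker
            optimizer.zero_grad()
            loss = loss_fn(student(x), labels)
            loss.backward()
            optimizer.step()
    return student
```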

1 citation