
Showing papers on "Hyperlink published in 2013"


Book
10 May 2013
TL;DR: Richard Rogers proposes repurposing Web-native techniques for research into cultural change and societal conditions, introducing a new vision and method for Internet research and applying them to the Web's objects of study, from tiny particles to large masses.
Abstract: In Digital Methods, Richard Rogers proposes a methodological outlook for social and cultural scholarly research on the Web that seeks to move Internet research beyond the study of online culture. It is not a toolkit for Internet research, or operating instructions for a software package; it deals with broader questions. How can we study social media to learn something about society rather than about social media use? How can hyperlinks reveal not just the value of a Web site but the politics of association? Rogers proposes repurposing Web-native techniques for research into cultural change and societal conditions. We can learn to reapply such "methods of the medium" as crawling and crowd sourcing, PageRank and similar algorithms, tag clouds and other visualizations; we can learn how they handle hits, likes, tags, date stamps, and other Web-native objects. By "thinking along" with devices and the objects they handle, digital research methods can follow the evolving methods of the medium. Rogers uses this new methodological outlook to examine the findings of inquiries into 9/11 search results, the recognition of climate change skeptics by climate-change-related Web sites, the events surrounding the Srebrenica massacre according to Dutch, Serbian, Bosnian, and Croatian Wikipedias, presidential candidates' social media "friends," and the censorship of the Iranian Web. With Digital Methods, Rogers introduces a new vision and method for Internet research and at the same time applies them to the Web's objects of study, from tiny particles (hyperlinks) to large masses (social media).

534 citations


Journal ArticleDOI
TL;DR: This survey provides an in-depth analysis and classification of social networks existing on the Internet, together with studies of selected examples of different virtual communities.
Abstract: The rapid development and expansion of the Internet, and of the social services grouped under the common Web 2.0 idea, has given rise to a new area of research interest: social networks on the Internet, also called virtual or online communities. Social networks can either be maintained and presented by social networking sites such as MySpace and LinkedIn, or be extracted indirectly from data about user interaction, activities or achievements, such as emails, chats, blogs, homepages connected by hyperlinks, commented photos in multimedia sharing systems, etc. A social network is the set of human beings, or rather their digital representations, that refer to the registered users who are linked by relationships extracted from data about their activities, common communication or direct links gathered in internet-based systems. Both the digital representations, named in the paper internet identities, and their relationships can be characterized in many different ways. Such diversity calls for a comprehensive and coherent view of the concept of internet-based social networks. This survey provides an in-depth analysis and classification of social networks existing on the Internet, together with studies of selected examples of different virtual communities.
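As an illustration of the indirect extraction the survey describes, the following minimal sketch (with purely hypothetical interaction records and names) derives a weighted relationship graph of internet identities from pairwise interaction data such as emails or comments:

```python
from collections import Counter

# Hypothetical interaction records (sender, recipient) extracted from
# e-mails, chats, or comments; the names are illustrative only.
interactions = [
    ("alice", "bob"), ("alice", "bob"), ("bob", "carol"),
    ("carol", "alice"), ("dave", "alice"),
]

# Weight each undirected relationship by how often the two internet
# identities interacted, regardless of direction.
edge_weights = Counter(tuple(sorted(pair)) for pair in interactions)

for (u, v), weight in sorted(edge_weights.items()):
    print(f"{u} -- {v}: {weight} interaction(s)")
```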

171 citations


Patent
09 Jan 2013
TL;DR: A method is described for live-streaming online content, including video segments related to online information, in a single online platform; the method includes generating, using at least one processor, a series of videos having a sequential order to display in a web browser and publishing the videos in the web browser.
Abstract: Systems and methods are disclosed for providing live-streaming online content, including video segments related to online information in a single online platform. In accordance with one implementation, a method is provided that includes generating, using at least one processor, a series of videos having a sequential order to display in a web browser and publishing the videos in the web browser. The method further includes determining online data that relates to each video in the series and displaying the related data in the web browser. Further, the method includes playing the videos in the web browser in a sequential order and updating the data that relates to a video while the video is playing in real-time. The related data may include user comments, social media comments, pictures, videos, webpages, or hyperlinks.

106 citations


Journal ArticleDOI
TL;DR: It is argued that beyond the apparent diversity and ad hoc methodologies that the reviewed studies propose, a unified framework exists that combines quantitative link counts, qualitative inquiries and valuation of field expertise to support link interpretation.
Abstract: The hyperlink is a fundamental feature of the web. This paper investigates how hyperlinks have been used as research objects in social sciences. Reviewing a body of literature belonging to sociology, political sciences, information sciences, geography or media studies, it particularly reflects on the study of hyperlinks as indicators of other social phenomena. Why are links counted and hyperlink networks measured? How are links interpreted? The paper then focuses on barriers and limitations to the study of links. It addresses the issue of unobtrusiveness, the importance of interpreting links in context, and the possibilities of large-scale, automatic link studies. We finally argue that beyond the apparent diversity and ad hoc methodologies that the reviewed studies propose, a unified framework exists. It combines quantitative link counts, qualitative inquiries and valuation of field expertise to support link interpretation.

105 citations


Journal ArticleDOI
TL;DR: In this paper, the authors proposed a method for computing semantic relatedness between words or texts by using knowledge from hypertext encyclopedias such as Wikipedia, where two types of weighted links between concepts are considered: one based on hyperlinks between the texts of the articles, and another based on the lexical similarity between them.
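A minimal sketch of the general idea, not the authors' actual weighting scheme: blend a hyperlink-based similarity (shared link neighbours) with a lexical similarity (shared article words), using a hypothetical mixing parameter alpha.

```python
def jaccard(a, b):
    """Jaccard overlap between two sets; 0.0 if both are empty."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

def relatedness(links_a, links_b, words_a, words_b, alpha=0.5):
    """Blend hyperlink-based and lexical similarity with weight alpha.

    alpha is a hypothetical mixing parameter, not taken from the paper.
    """
    link_sim = jaccard(links_a, links_b)   # shared hyperlink neighbours
    text_sim = jaccard(words_a, words_b)   # shared vocabulary
    return alpha * link_sim + (1 - alpha) * text_sim

# Toy Wikipedia-style concepts with outgoing links and article words.
print(relatedness(
    links_a={"Graph_theory", "PageRank"}, links_b={"PageRank", "Web_search"},
    words_a={"node", "edge", "rank"},      words_b={"rank", "query", "web"},
))
```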

78 citations


22 May 2013
TL;DR: The methodology and findings provide valuable insights into modern traffic that can allow network administrators to better manage and protect their networks, traffic regulators to protect the rights of on-line users, and researchers to better understand the evolution of the traffic from modern websites.
Abstract: More and more applications and services move to the web and this has led to web traffic amounting to as much as 80% of all network traffic. At the same time, most traffic classification efforts stop once they correctly label a flow as web or HTTP. In this paper, we focus on understanding what happens “under the hood” of HTTP traffic. Our first contribution is ReSurf, a systematic approach to reconstruct web-surfing activity starting from raw network data with more than 91% recall and 95% precision over four real network traces. Our second contribution is an extensive analysis of web activity across these traces. By utilizing ReSurf, we study web-surfing behaviors in terms of user requests and transitions between websites (e.g. the click-through history of following hyperlinks). A surprising result is the prevalence of advertising and tracking services accessed during web-surfing without the user's explicit consent. In our traces, we found that with 90% probability a user will access such a service after just three user requests (or “clicks”). We believe that our methodology and findings provide valuable insights into modern traffic that can allow: (a) network administrators to better manage and protect their networks, (b) traffic regulators to protect the rights of on-line users, and (c) researchers to better understand the evolution of the traffic from modern websites.
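As a rough illustration of the reconstruction problem (ReSurf's actual heuristics, such as timing and content-type filters, are more elaborate and not reproduced here), the sketch below chains hypothetical HTTP log records into click-through trees via the Referer header:

```python
# Simplified sketch of reconstructing click-through chains from HTTP
# request logs using the Referer header; the log records are hypothetical.
requests = [
    {"url": "http://news.example/",         "referer": None},
    {"url": "http://news.example/story1",   "referer": "http://news.example/"},
    {"url": "http://tracker.example/pixel", "referer": "http://news.example/story1"},
    {"url": "http://blog.example/post",     "referer": "http://news.example/story1"},
]

children = {}
for req in requests:
    children.setdefault(req["referer"], []).append(req["url"])

def print_tree(url, depth=0):
    # Depth-first walk over the referer tree, one user "click" per level.
    print("  " * depth + url)
    for child in children.get(url, []):
        print_tree(child, depth + 1)

for root in children.get(None, []):   # requests with no referer start a chain
    print_tree(root)
```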

67 citations


Patent
03 Apr 2013
TL;DR: In this paper, the authors provide a commenting system for multiple users to provide and share comments to shared documents, where comments can scroll independently of the content in a content item or the comments can be linked to a location therein.
Abstract: Various embodiments provide a commenting system for multiple users to provide and share comments to shared documents. For example, users can share a web link to a collection of content items, such as documents, spreadsheets, photos, and any other media, with other users stored in an online content management system. The commenting system can provide a comment interface displayable alongside a respective content item and the comments can be saved for each user and the content item with associated comments can be synced across the multiple users. The comments can scroll independently of the content in a content item or the comments can be linked to a location therein and the scrolling of the comments can be linked to the scrolling of the content item such that corresponding comments are displayed.

55 citations


Proceedings ArticleDOI
16 Apr 2013
TL;DR: This paper describes research exploring integrated multimodal search and hyperlinking for multimedia data, based on the MediaEval 2012 Search and Hyperlinking task, where automatically created hyperlinks link each relevant item to related items within the collection.
Abstract: Searching for relevant webpages and following hyperlinks to related content is a widely accepted and effective approach to information seeking on the textual web. Existing work on multimedia information retrieval has focused on search for individual relevant items or on content linking without specific attention to search results. We describe our research exploring integrated multimodal search and hyperlinking for multimedia data. Our investigation is based on the MediaEval 2012 Search and Hyperlinking task. This includes a known-item search task using the Blip10000 internet video collection, where automatically created hyperlinks link each relevant item to related items within the collection. The search test queries and link assessments for this task were generated using the Amazon Mechanical Turk crowdsourcing platform. Our investigation examines a range of alternative methods which seek to address the challenges of search and hyperlinking using multimodal approaches. The results of our experiments are used to propose a research agenda for developing effective techniques for search and hyperlinking of multimedia content.

53 citations


Proceedings ArticleDOI
TL;DR: This paper combines classic ideas in topic modeling with a variant of the mixed-membership block model recently developed in the statistical physics community; the resulting model has the advantage that its parameters, including the mixture of topics of each document and the resulting overlapping communities, can be inferred with a simple and scalable expectation-maximization algorithm.
Abstract: Many data sets contain rich information about objects, as well as pairwise relations between them. For instance, in networks of websites, scientific papers, and other documents, each node has content consisting of a collection of words, as well as hyperlinks or citations to other nodes. In order to perform inference on such data sets, and make predictions and recommendations, it is useful to have models that are able to capture the processes which generate the text at each node and the links between them. In this paper, we combine classic ideas in topic modeling with a variant of the mixed-membership block model recently developed in the statistical physics community. The resulting model has the advantage that its parameters, including the mixture of topics of each document and the resulting overlapping communities, can be inferred with a simple and scalable expectation-maximization algorithm. We test our model on three data sets, performing unsupervised topic classification and link prediction. For both tasks, our model outperforms several existing state-of-the-art methods, achieving higher accuracy with significantly less computation, analyzing a data set with 1.3 million words and 44 thousand links in a few minutes.
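A toy generative sketch of the kind of model described, not the paper's exact formulation or its expectation-maximization inference: each document draws a topic mixture, words come from per-topic vocabularies, and a link between two documents becomes more likely the more their topic mixtures overlap. All sizes and hyperparameters below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

K, D, V, words_per_doc = 3, 6, 50, 20   # topics, documents, vocabulary, doc length
topic_word = rng.dirichlet(np.ones(V), size=K)         # per-topic word distributions
doc_topics = rng.dirichlet(np.ones(K) * 0.3, size=D)   # per-document topic mixtures

# Generate words: pick a topic from the document's mixture, then a word.
docs = []
for d in range(D):
    topics = rng.choice(K, size=words_per_doc, p=doc_topics[d])
    docs.append([rng.choice(V, p=topic_word[t]) for t in topics])

# Generate links: probability grows with the overlap of the topic mixtures.
links = []
for i in range(D):
    for j in range(i + 1, D):
        overlap = float(doc_topics[i] @ doc_topics[j])
        if rng.random() < overlap:
            links.append((i, j))

print("sample document word ids:", docs[0][:10])
print("generated links:", links)
```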

53 citations


Journal ArticleDOI
TL;DR: Significant correlations are found between Web traffic data and organizational performance measures, specifically academic quality for universities and financial variables for businesses.

52 citations


Patent
30 Aug 2013
TL;DR: In this paper, an Internet service is configured to provide information to a user of the service in ranked order according to demographic profile information provided by the user; such information might include advertising information and/or search results.
Abstract: An Internet service is configured to provide information to a user of the service in ranked order according to demographic profile information about the user provided by the user. Such information might include advertising information and/or search results (e.g., rendered as hyperlinks) to search queries posed by the user. The information may be returned in a ranked order according to reward credits offered by advertisers and/or content providers associated with the advertising information and/or web sites represented by the search results. A process for verifying whether or not an Internet operation (e.g., sending an e-mail message or accessing a web site) is being attempted by a human being or an automated process may be incorporated with the service by using a quiz process that requires user interaction.

Proceedings Article
01 Oct 2013
TL;DR: The system is based on first computing similarities between an input document and the texts of Wikipedia pages and then using a biased, hub-avoiding version of the Spreading Activation algorithm on the Wikipedia graph in order to associate the input document with skills.
Abstract: This paper presents a system that performs skill extraction from text documents. It outputs a list of professional skills that are relevant to a given input text. We argue that the system can be practical for hiring and management of personnel in an organization. We make use of the texts and the hyperlink graph of Wikipedia, as well as a list of professional skills obtained from the LinkedIn social network. The system is based on first computing similarities between an input document and the texts of Wikipedia pages and then using a biased, hub-avoiding version of the Spreading Activation algorithm on the Wikipedia graph in order to associate the input document with skills.
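A minimal spreading-activation sketch on a toy hyperlink graph; the paper's biased, hub-avoiding variant on the full Wikipedia graph is not reproduced, and the graph, seed page, and decay factor below are hypothetical.

```python
# Minimal spreading-activation sketch over a toy hyperlink graph.
graph = {
    "Python_(programming_language)": ["Machine_learning", "Software_engineering"],
    "Machine_learning": ["Statistics", "Data_science"],
    "Software_engineering": ["Project_management"],
    "Statistics": [], "Data_science": [], "Project_management": [],
}

activation = {"Python_(programming_language)": 1.0}   # seed from document similarity
decay = 0.5

for _ in range(2):                                    # two propagation steps
    new_activation = dict(activation)
    for node, value in activation.items():
        for neighbour in graph.get(node, []):
            new_activation[neighbour] = new_activation.get(neighbour, 0.0) + decay * value
    activation = new_activation

for page, score in sorted(activation.items(), key=lambda kv: -kv[1]):
    print(f"{score:.2f}  {page}")
```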

Journal ArticleDOI
Yajun Du, Yufeng Hai
TL;DR: A new method for measuring the similarity of formal concept analysis (FCA) concepts and a new notion of a web page's rank are proposed that use an information content approach based on users' web logs.

Patent
14 Nov 2013
TL;DR: In this article, the authors propose a commenting system for multiple users to provide and share comments to shared content items, such as documents, spreadsheets, photos, and any other media, with other users stored in an online content management system.
Abstract: Various embodiments provide a commenting system for multiple users to provide and share comments to shared content items. For example, users can share a web link to a collection of content items, such as documents, spreadsheets, photos, and any other media, with other users stored in an online content management system. To enable such functionality, the online content management system can expose an application programming interface to enable third-party service providers to develop and attach a comment interface to content items. Accordingly, such a commenting system can provide a comment interface for concurrent display alongside a respective content item in which users can provide comments to shared content items or to use as notes for their personal content items.

Proceedings ArticleDOI
04 Feb 2013
TL;DR: Experimental results show that NCDawareRank is more resistant to direct manipulation, alleviates the problems caused by the sparseness of the link graph and assigns more reasonable ranking scores to newly added pages, while maintaining the ability to be easily implemented on a large-scale and in a computationally efficient manner.
Abstract: Research about the topological characteristics of the hyperlink graph has shown that the Web possesses a nested block structure, indicative of its innate hierarchical organization. This crucial observation opens the way for new approaches that can usefully regard the Web as a Nearly Completely Decomposable (NCD) system. In recent years, such approaches gave birth to various efficient methods and algorithms that exploit NCD from a computational point of view and manage to considerably accelerate the extraction of the PageRank vector. However, very little has been done toward the qualitative exploitation of NCD. In this paper we propose NCDawareRank, a novel ranking method that uses the intuition behind NCD to generalize and refine PageRank. NCDawareRank considers both the link structure and the hierarchical nature of the Web in a way that preserves the mathematically attractive characteristics of PageRank and at the same time manages to successfully resolve many of its known problems, including Web Spamming Susceptibility and Biased Ranking of Newly Emerging Pages. Experimental results show that NCDawareRank is more resistant to direct manipulation, alleviates the problems caused by the sparseness of the link graph and assigns more reasonable ranking scores to newly added pages, while maintaining the ability to be easily implemented on a large scale and in a computationally efficient manner.
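For reference, a plain PageRank power-iteration sketch on a toy hyperlink graph, the baseline that NCDawareRank generalizes; the NCD-aware refinement itself is not reproduced here, and the adjacency and damping factor are hypothetical.

```python
import numpy as np

# Plain PageRank power iteration on a toy hyperlink graph.
links = {0: [1, 2], 1: [2], 2: [0], 3: [2]}   # hypothetical page -> outlinks
n, damping = 4, 0.85

# Column-stochastic transition matrix (every toy page here has outlinks,
# so no dangling-node handling is needed).
M = np.zeros((n, n))
for page, outlinks in links.items():
    for target in outlinks:
        M[target, page] = 1.0 / len(outlinks)

rank = np.full(n, 1.0 / n)
for _ in range(100):
    rank = (1 - damping) / n + damping * M @ rank

print("PageRank scores:", np.round(rank / rank.sum(), 3))
```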

Posted Content
TL;DR: This paper introduces the concepts related to web mining, presents an overview of different Web Content Mining tools, and offers a comparative table of these tools based on some pertinent criteria.
Abstract: Nowadays, the Web has become one of the most widespread platforms for information exchange and retrieval. As it becomes easier to publish documents, as the number of users, and thus publishers, increases and as the number of documents grows, searching for information is turning into a cumbersome and time-consuming operation. Due to the heterogeneity and unstructured nature of the data available on the WWW, Web mining uses various data mining techniques to discover useful knowledge from Web hyperlinks, page content and usage logs. The main uses of web content mining are to gather, categorize, organize and provide the best possible information available on the Web to the user requesting it. Mining tools are essential for scanning the many HTML documents, images, and texts; the results are then used by the search engines. In this paper, we first introduce the concepts related to web mining; we then present an overview of different Web Content Mining tools. We conclude by presenting a comparative table of these tools based on some pertinent criteria.

Patent
11 Mar 2013
TL;DR: In this paper, a computer system enables a business to reduce risks from phishing electronic messages by replacing one or more original web links embedded in the electronic message with a replacement web link.
Abstract: A computer system enables a business to reduce risks from phishing electronic messages. One or more original web links embedded in the electronic message may be replaced with a replacement web link. If the determined risk score for the original webpage is large enough and the user clicks on the embedded web link, the user is directed to an intermediate webpage rather than to the original webpage. The intermediate webpage may provide details about the original webpage so that the user can make an informed choice whether to proceed to the original website. For example, the intermediate webpage may provide pertinent information to a user such as the actual domain of the remote site, the country the site is hosted in, how long the site has been online, a rendered screen capture of the remote website, and/or a confidence score.
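A toy sketch of the link-replacement step (the checking endpoint, risk scoring, and e-mail text are hypothetical, not taken from the patent): each URL in the message body is rewritten to route the click through an intermediate checking page.

```python
import re
from urllib.parse import quote

# Hypothetical intermediate checking endpoint.
CHECK_ENDPOINT = "https://linkcheck.example/redirect?target="

def rewrite_links(body: str) -> str:
    # Replace every http(s) URL in the message body with a link that
    # first passes through the checking endpoint.
    return re.sub(
        r"https?://[^\s\"'<>]+",
        lambda m: CHECK_ENDPOINT + quote(m.group(0), safe=""),
        body,
    )

email_body = "Please verify your account at http://suspicious.example/login now."
print(rewrite_links(email_body))
```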

Proceedings ArticleDOI
11 Aug 2013
TL;DR: The authors combine classic ideas in topic modeling with a variant of the mixed-membership block model, which has the advantage that its parameters, including the mixture of topics of each document and the resulting overlapping communities, can be inferred with a simple and scalable expectation-maximization algorithm.
Abstract: Many data sets contain rich information about objects, as well as pairwise relations between them. For instance, in networks of websites, scientific papers, and other documents, each node has content consisting of a collection of words, as well as hyperlinks or citations to other nodes. In order to perform inference on such data sets, and make predictions and recommendations, it is useful to have models that are able to capture the processes which generate the text at each node and the links between them. In this paper, we combine classic ideas in topic modeling with a variant of the mixed-membership block model recently developed in the statistical physics community. The resulting model has the advantage that its parameters, including the mixture of topics of each document and the resulting overlapping communities, can be inferred with a simple and scalable expectation-maximization algorithm. We test our model on three data sets, performing unsupervised topic classification and link prediction. For both tasks, our model outperforms several existing state-of-the-art methods, achieving higher accuracy with significantly less computation, analyzing a data set with 1.3 million words and 44 thousand links in a few minutes.

Journal ArticleDOI
TL;DR: In this paper, the authors analyze the distribution of eigenvalues in the complex plane and show that eigenstates with significant eigenvalue modulus are located on well defined network communities.
Abstract: We study the properties of eigenvalues and eigenvectors of the Google matrix of the Wikipedia articles hyperlink network and other real networks. With the help of the Arnoldi method, we analyze the distribution of eigenvalues in the complex plane and show that eigenstates with significant eigenvalue modulus are located on well defined network communities. We also show that the correlator between PageRank and CheiRank vectors distinguishes different organizations of information flow on BBC and Le Monde web sites.
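A small sketch of the computation described, using a toy hyperlink network in place of the Wikipedia graph: build the Google matrix and obtain a few largest-modulus eigenvalues with SciPy's Arnoldi-based solver. The damping factor and adjacency are hypothetical.

```python
import numpy as np
from scipy.sparse.linalg import eigs   # Arnoldi/ARPACK eigenvalue solver

# Build the Google matrix G = alpha * S + (1 - alpha) / N of a toy
# hyperlink network (the paper studies the Wikipedia graph instead).
links = {0: [1, 2], 1: [2], 2: [0, 3], 3: [4], 4: [0, 5], 5: []}  # page 5 is dangling
N, alpha = 6, 0.85

S = np.zeros((N, N))
for page, outlinks in links.items():
    if outlinks:
        for target in outlinks:
            S[target, page] = 1.0 / len(outlinks)
    else:
        S[:, page] = 1.0 / N          # dangling page: jump anywhere uniformly

G = alpha * S + (1 - alpha) / N

# A few largest-modulus eigenvalues via the Arnoldi method; the leading
# eigenvalue of a Google matrix is 1 and its eigenvector is PageRank.
values, vectors = eigs(G, k=3, which="LM")
print(np.round(values, 3))
```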

Book ChapterDOI
06 Oct 2013
TL;DR: This paper proposes HPLSF (Hyperlink Prediction using Latent Social Features), a hyperlink prediction algorithm for hypernetworks that exploits the homophily property of social networks and generalizes a structural SVM to learn using both observed features and latent features.
Abstract: Predicting the existence of links between pairwise objects in networks is a key problem in the study of social networks. However, relationships among objects are often more complex than simple pairwise relations. By restricting attention to dyads, it is possible that information valuable for many learning tasks can be lost. The hypernetwork relaxes the assumption that only two nodes can participate in a link, permitting instead an arbitrary number of nodes to participate in so-called hyperlinks or hyperedges, which is a more natural representation for complex, multi-party relations. However, the hyperlink prediction problem has yet to be studied. In this paper, we propose HPLSF (Hyperlink Prediction using Latent Social Features), a hyperlink prediction algorithm for hypernetworks. By exploiting the homophily property of social networks, HPLSF explores social features for hyperlink prediction. To handle the problem that social features are not always observable, a latent social feature learning scheme is developed. To cope with the arbitrary cardinality hyperlink issue in hypernetworks, we design a feature-embedding scheme to map the a priori arbitrarily-sized feature set associated with each hyperlink into a uniformly-sized auxiliary space. To address the fact that observed features and latent features may not be independent, we generalize a structural SVM to learn using both observed features and latent features. In experiments, we evaluate the proposed HPLSF framework on three large-scale hypernetwork datasets. Our results on the three diverse datasets demonstrate the effectiveness of the HPLSF algorithm. Although developed in the context of social networks, HPLSF is a general methodology and applies to arbitrary hypernetworks.
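HPLSF's embedding and structural SVM are not reproduced here; as one simple illustration of mapping an arbitrary-cardinality hyperlink to a fixed-size feature vector, the sketch below pools the participants' (hypothetical) feature vectors with element-wise mean and max.

```python
import numpy as np

# Hypothetical per-node social feature vectors (same dimensionality per node).
node_features = {
    "u1": np.array([0.2, 1.0, 0.0]),
    "u2": np.array([0.9, 0.1, 0.5]),
    "u3": np.array([0.4, 0.4, 0.8]),
    "u4": np.array([0.0, 0.7, 0.3]),
}

def embed_hyperlink(members):
    """Map an arbitrary-size set of participants to a fixed-size vector.

    Element-wise mean and max pooling is one simple choice; HPLSF's own
    embedding scheme is not reproduced here.
    """
    feats = np.stack([node_features[m] for m in members])
    return np.concatenate([feats.mean(axis=0), feats.max(axis=0)])

# Hyperlinks (hyperedges) of different cardinality map to equal-length vectors.
print(embed_hyperlink({"u1", "u2"}).shape)              # (6,)
print(embed_hyperlink({"u1", "u2", "u3", "u4"}).shape)  # (6,)
```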

Journal ArticleDOI
TL;DR: Dalton proposes a multimodal strategy for engaging students in close reading: using an original text as the base, students compose a hypertext version with hyperlinks from selected pieces of text to their own multimodal commentary.
Abstract: Understanding relies on the reader's ability to ‘read between the lines’ and connect textual evidence with their own experience, knowledge and beliefs. The Common Core State Standards highlight the importance of using text-based evidence to develop arguments and support interpretations, including analysis and appreciation of author's craft. In this article, Dalton suggests a multimodal strategy for engaging students in close reading. Using an original text as the base, students compose a multimodal hypertext version with hyperlinks from selected pieces of text to their multimodal commentary. Illustrations are remixed with the addition of speech and thought balloons. An example hypertext is presented, along with suggestions for teaching.

Journal ArticleDOI
TL;DR: This paper analyzed hyperlink data from over 100 political parties in six countries to show how political actors are using links to engage in a new form of "networked communication" to promote themselves to an online audience.
Abstract: This study analyses hyperlink data from over 100 political parties in six countries to show how political actors are using links to engage in a new form of ‘networked communication’ to promote themselves to an online audience. We specify three types of networked communication – identity reinforcement, force multiplication and opponent dismissal – and hypothesize variance in their performance based on key party variables of size and ideological outlook. We test our hypotheses using an original comparative hyperlink dataset. The findings support expectations that hyperlinks are being used for networked communication by parties, with identity reinforcement and force multiplication being more common than opponent dismissal. The results are important in demonstrating the wider communicative significance of hyperlinks, in addition to their structural properties as linkage devices for websites.

Posted Content
TL;DR: Posting URLs in disaster-related tweets increased rumor-spreading behavior even though the URLs lacked the hyperlink function, and some psychological factors that could explain this effect were identified.
Abstract: Twitter is an example of social media, which allows its users to post text messages, known as “tweets,” of up to 140 characters. A tweet can include a shortened URL that provides further information that cannot be included in the tweet. Does including URLs in tweets influence the forwarding of the tweets during disasters, in which social media is flooded with unverified information? We conducted an experiment to answer this question. The results showed that posting URLs in disaster-related tweets increased rumor-spreading behavior even though the URLs lacked the hyperlink function. We identified some psychological factors that could explain this effect. We conclude by discussing the vulnerability of social media to rumor transmission in light of our results.

Journal ArticleDOI
TL;DR: The dynamics of hyperlinks are expected to feed back on the system of indexing, referencing, and retrieval at the level of research practices; citations are a codified form of referencing.
Abstract: Scientific literature is expected to contain a body of knowledge that can be indexed and retrieved using references and citations. References are subtexts which refer to a supertext, that is, the body of scientific literature. The Science Citation Index has provided an electronic representation of science at the supertextual level by aggregating the subtextual citations. As the supertext, however, becomes independently available in virtual reality (as a "hypertext"), subtext and supertext become increasingly different contexts. The dynamics of hyperlinks are expected to feed back on the system of indexing, referencing, and retrieval at the level of research practices. References can be considered as part of the retention mechanism of this evolving system of scientific communication, and citations are a codified form of referencing.

Patent
12 Dec 2013
TL;DR: In this paper, a system allows just-in-time checking of information about an email in which a hyperlink is embedded, by modifying the resource locator of the hyperlink.
Abstract: A system allows just-in-time checking of information about an email in which a hyperlink is embedded. Upon receipt of the email containing the hyperlink, the resource locator of the hyperlink is modified to allow checking the reputation of the email upon traversal of the hyperlink. At traversal of the hyperlink, the current reputation of the resource locator and the current reputation of the email are both determined, and one or more actions are performed responsive to the determination.

Journal ArticleDOI
TL;DR: In this paper, the authors study the time evolution of ranking and spectral properties of the Google matrix of the English Wikipedia hyperlink network during the years 2003-2011 and show that PageRank selection is dominated by politicians while 2DRank, which combines PageRank and CheiRank, gives more weight to personalities of the arts.
Abstract: We study the time evolution of ranking and spectral properties of the Google matrix of the English Wikipedia hyperlink network during the years 2003–2011. The statistical properties of the ranking of Wikipedia articles via PageRank and CheiRank probabilities, as well as the matrix spectrum, are shown to be stabilized for 2007–2011. A special emphasis is placed on the ranking of Wikipedia personalities and universities. We show that PageRank selection is dominated by politicians while 2DRank, which combines PageRank and CheiRank, gives more weight to personalities of the arts. The Wikipedia PageRank of universities recovers 80% of the top universities in the Shanghai ranking during the considered time period.
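A toy sketch of the two rank vectors involved, using networkx on a small directed graph in place of the Wikipedia network: CheiRank is simply the PageRank of the graph with all links reversed. The graph below is hypothetical.

```python
import networkx as nx

# Toy directed hyperlink network standing in for the Wikipedia graph.
G = nx.DiGraph([(0, 1), (0, 2), (1, 2), (2, 0), (3, 0), (3, 2)])

pagerank = nx.pagerank(G, alpha=0.85)            # importance via incoming links
cheirank = nx.pagerank(G.reverse(), alpha=0.85)  # CheiRank: PageRank of the reversed graph

for node in G:
    print(f"node {node}: PageRank={pagerank[node]:.3f}  CheiRank={cheirank[node]:.3f}")
```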

Journal ArticleDOI
TL;DR: It is shown that PageRank selection is dominated by politicians while 2DRank, which combines PageRank and CheiRank, gives more weight to personalities of the arts.
Abstract: We study the time evolution of ranking and spectral properties of the Google matrix of the English Wikipedia hyperlink network during the years 2003-2011. The statistical properties of the ranking of Wikipedia articles via PageRank and CheiRank probabilities, as well as the matrix spectrum, are shown to be stabilized for 2007-2011. A special emphasis is placed on the ranking of Wikipedia personalities and universities. We show that PageRank selection is dominated by politicians while 2DRank, which combines PageRank and CheiRank, gives more weight to personalities of the arts. The Wikipedia PageRank of universities recovers 80% of the top universities in the Shanghai ranking during the considered time period.

Patent
30 Jan 2013
TL;DR: In this article, a method for realizing hyperlinks in electronic books is proposed, in which each page entry in the pagination link directory is linked to its corresponding page, and keywords in the electronic books are selected and given hyperlinks so that they are linked to associated link targets.
Abstract: The invention relates to a method for realizing hyperlinks in electronic books, comprising the following steps: building pagination link directories for the electronic books, wherein each page entry in the pagination link directories is linked to its corresponding page; selecting keywords in the electronic books and setting hyperlinks for the keywords such that the keywords are linked to associated link targets; and storing the hyperlink setting information of the electronic books. By implementing the method disclosed by the invention, pagination link directories can be built automatically for electronic books and users can freely set hyperlinks within the contents of the books; users can thus jump quickly while reading, and their requirements for free extended reading are satisfied.

Patent
15 Mar 2013
TL;DR: In this article, a method for content ranking for recommending content to a community of users is provided by identifying data sources associated with a user of the recommendation system, each data source comprises content items.
Abstract: The present application provides a method for content ranking for recommending content to a community of users. Recommending content to a community of users is provided by identifying data sources associated with a user of the recommendation system. Each data source comprises content items. Hyperlinks embedded in the content items from the data sources associated with the user are extracted. The hyperlinks and a set of user preferences are used to rank new time-sensitive content items associated with a plurality of data sources. A set of ranked time sensitive customized content items are presented to the user as recommended content.
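One plausible reading of the extraction-and-ranking step, sketched with hypothetical content items and a made-up scoring rule (the patent does not specify this particular scheme): hyperlinks are pulled out of the user's content items and their frequency is used to score new items.

```python
import re
from collections import Counter

# Hypothetical content items from the user's data sources.
content_items = [
    "Great overview at https://news.example/ai and https://blog.example/ml",
    "Bookmarking https://news.example/ai again, plus https://papers.example/nlp",
]

URL_RE = re.compile(r"https?://[^\s]+")

# Count how often each hyperlink appears across the user's sources.
link_counts = Counter(url for item in content_items for url in URL_RE.findall(item))

# Score new time-sensitive items by the familiarity of the links they contain;
# this scoring rule is a hypothetical illustration, not the patent's method.
new_items = {
    "item-A": ["https://news.example/ai"],
    "item-B": ["https://unknown.example/x"],
}
ranked = sorted(new_items.items(),
                key=lambda kv: -sum(link_counts[u] for u in kv[1]))
print(ranked)
```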

Patent
11 Mar 2013
TL;DR: In this article, a method for composing and executing a plurality of hyperlink pipelines within a web browser is proposed, where the method comprises moving a first source hyperlink that corresponds to a first resource to a destination hyperlink corresponding to a second resource, merging the first source Hyperlink with the destination Hyperlink to create a first hyperlink pipeline, and executing the second Hyperlink pipeline such that the second hyperlink is invoked before the first resource and the third resource, and the first resources are invoked before each other.
Abstract: A method for composing and executing a plurality of hyperlink pipelines within a web browser, wherein the method comprises moving a first source hyperlink that corresponds to a first resource to a destination hyperlink that corresponds to a second resource, merging the first source hyperlink with the destination hyperlink to create a first hyperlink pipeline, moving a second source hyperlink that corresponds to a third resource to the first hyperlink pipeline, merging the second source hyperlink with the first hyperlink pipeline to create a second hyperlink pipeline, and executing the second hyperlink pipeline such that the second resource is invoked before the first resource and the third resource, and the first resource is invoked before the third resource.