
Showing papers in "Journal of the Association for Information Science and Technology in 2006"


Journal Issue DOI
Chaomei Chen1
TL;DR: This article describes the latest development of a generic approach to detecting and visualizing emerging trends and transient patterns in scientific literature, and makes substantial theoretical and methodological contributions to progressive knowledge domain visualization.
Abstract: This article describes the latest development of a generic approach to detecting and visualizing emerging trends and transient patterns in scientific literature. The work makes substantial theoretical and methodological contributions to progressive knowledge domain visualization. A specialty is conceptualized and visualized as a time-variant duality between two fundamental concepts in information science: research fronts and intellectual bases. A research front is defined as an emergent and transient grouping of concepts and underlying research issues. The intellectual base of a research front is its citation and co-citation footprint in scientific literature—an evolving network of scientific publications cited by research-front concepts. Kleinberg's (2002) burst-detection algorithm is adapted to identify emergent research-front concepts. Freeman's (1979) betweenness centrality metric is used to highlight potential pivotal points of paradigm shift over time. Two complementary visualization views are designed and implemented: cluster views and time-zone views. The contributions of the approach are that (a) the nature of an intellectual base is algorithmically and temporally identified by emergent research-front terms, (b) the value of a co-citation cluster is explicitly interpreted in terms of research-front concepts, and (c) visually prominent and algorithmically detected pivotal points substantially reduce the complexity of a visualized network. The modeling and visualization process is implemented in CiteSpace II, a Java application, and applied to the analysis of two research fields: mass extinction (1981–2004) and terrorism (1990–2003). Prominent trends and pivotal points in visualized networks were verified in collaboration with domain experts, who are the authors of pivotal-point articles. Practical implications of the work are discussed. A number of challenges and opportunities for future studies are identified. © 2006 Wiley Periodicals, Inc.
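Freeman's (1979) betweenness centrality, used above to flag pivotal points, can be illustrated with a brute-force sketch; this is suitable only for tiny graphs, is not CiteSpace's actual implementation, and the example chain graph is invented:

```python
from itertools import combinations

def simple_paths(adj, s, t, path=None):
    # enumerate all simple paths from s to t in an undirected graph
    path = path or [s]
    if s == t:
        yield path
        return
    for n in adj[s]:
        if n not in path:
            yield from simple_paths(adj, n, t, path + [n])

def betweenness(adj):
    # Freeman (1979): for each pair (s, t), add the fraction of
    # shortest s-t paths passing through v, for every other node v
    score = {v: 0.0 for v in adj}
    for s, t in combinations(adj, 2):
        paths = list(simple_paths(adj, s, t))
        if not paths:
            continue
        d = min(len(p) for p in paths)
        shortest = [p for p in paths if len(p) == d]
        for v in adj:
            if v not in (s, t):
                score[v] += sum(v in p for p in shortest) / len(shortest)
    return score

# a chain a - b - c: b sits on the only shortest a-c path
graph = {"a": ["b"], "b": ["a", "c"], "c": ["b"]}
```

A high score marks nodes that broker many shortest paths, which is why such articles stand out as candidate pivotal points in the visualized networks.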

2,521 citations


Journal Article DOI
TL;DR: It is concluded that while a mixed mode interviewing strategy should be considered when possible, e-mail interviewing can be in many cases a viable alternative to face-to-face and telephone interviewing.
Abstract: This article summarizes findings from studies that employed electronic mail (e-mail) for conducting indepth interviewing. It discusses the benefits of, and the challenges associated with, using e-mail interviewing in qualitative research. The article concludes that while a mixed mode interviewing strategy should be considered when possible, e-mail interviewing can be in many cases a viable alternative to face-to-face and telephone interviewing. A list of recommendations for carrying out effective e-mail interviews is presented.

721 citations


Journal Issue DOI
TL;DR: A framework for authorship identification of online messages to address the identity-tracing problem is developed and four types of writing-style features are extracted and inductive learning algorithms are used to build feature-based classification models to identify authorship ofonline messages.
Abstract: With the rapid proliferation of Internet technologies and applications, misuse of online messages for inappropriate or illegal purposes has become a major concern for society. The anonymous nature of online-message distribution makes identity tracing a critical problem. We developed a framework for authorship identification of online messages to address the identity-tracing problem. In this framework, four types of writing-style features (lexical, syntactic, structural, and content-specific features) are extracted and inductive learning algorithms are used to build feature-based classification models to identify authorship of online messages. To examine this framework, we conducted experiments on English and Chinese online-newsgroup messages. We compared the discriminating power of the four types of features and of three classification techniques: decision trees, backpropagation neural networks, and support vector machines. The experimental results showed that the proposed approach was able to identify authors of online messages with satisfactory accuracy of 70% to 95%. All four types of message features contributed to discriminating authors of online messages. Support vector machines outperformed the other two classification techniques in our experiments. The high performance we achieved for both the English and Chinese datasets showed the potential of this approach in a multiple-language context. © 2006 Wiley Periodicals, Inc.
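The lexical-feature idea can be sketched minimally; the three features below are illustrative stand-ins, not the authors' actual feature set:

```python
import re

def lexical_features(text):
    # a few style markers of the kind used in authorship attribution
    words = re.findall(r"[A-Za-z']+", text)
    if not words:
        return {"word_count": 0, "avg_word_len": 0.0, "vocab_richness": 0.0}
    return {
        "word_count": len(words),
        "avg_word_len": sum(len(w) for w in words) / len(words),
        "vocab_richness": len({w.lower() for w in words}) / len(words),
    }
```

Vectors like these, extended with syntactic, structural, and content-specific features, are what the classifiers (decision trees, backpropagation neural networks, SVMs) would be trained on.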

619 citations


Journal Article DOI
TL;DR: Short-term Web usage (download) impact is shown to predict medium-term citation impact; because the citation impact of an article can only be measured several years after publication, download counts offer an earlier signal. The physics e-print archive, arXiv.org, is used to test this.
Abstract: The use of citation counts to assess the impact of research articles is well established. However, the citation impact of an article can only be measured several years after it has been published. As research articles are increasingly accessed through the Web, the number of times an article is downloaded can be instantly recorded and counted. One would expect the number of times an article is read to be related both to the number of times it is cited and to how old the article is. This paper analyses how short-term Web usage impact predicts medium-term citation impact. The physics e-print archive -- arXiv.org -- is used to test this.
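One way to quantify the download-citation relationship is a rank correlation; a minimal Spearman sketch for tie-free data (the method shown is a generic rank statistic, not necessarily the paper's own analysis, and the data in the test are invented):

```python
def spearman(x, y):
    # Spearman rank correlation for tie-free data:
    # rho = 1 - 6 * sum(d_i^2) / (n * (n^2 - 1))
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0] * len(v)
        for rank, i in enumerate(order, start=1):
            r[i] = rank
        return r
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n * n - 1))
```

A rho near 1 would mean early download rankings largely anticipate later citation rankings.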

425 citations


Journal Issue DOI
TL;DR: The study extends the application of co-occurrence matrices to the Web environment, in which the nature of the available data and thus data collection methods are different from those of traditional databases such as the Science Citation Index.
Abstract: Co-occurrence matrices, such as cocitation, coword, and colink matrices, have been used widely in the information sciences. However, confusion and controversy have hindered the proper statistical analysis of these data. The underlying problem, in our opinion, involved understanding the nature of various types of matrices. This article discusses the difference between a symmetrical cocitation matrix and an asymmetrical citation matrix as well as the appropriate statistical techniques that can be applied to each of these matrices, respectively. Similarity measures (such as the Pearson correlation coefficient or the cosine) should not be applied to the symmetrical cocitation matrix but can be applied to the asymmetrical citation matrix to derive the proximity matrix. The argument is illustrated with examples. The study then extends the application of co-occurrence matrices to the Web environment, in which the nature of the available data and thus data collection methods are different from those of traditional databases such as the Science Citation Index. A set of data collected with the Google Scholar search engine is analyzed by using both the traditional methods of multivariate analysis and the new visualization software Pajek, which is based on social network analysis and graph theory. © 2006 Wiley Periodicals, Inc.
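The step of deriving a proximity matrix from the asymmetrical citation matrix can be sketched as follows; the tiny matrix is invented, with rows as citing units and columns as cited units:

```python
import math

def cosine(u, v):
    # cosine similarity between two row vectors
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def proximity_matrix(citation):
    # pairwise similarity between the rows of an asymmetrical
    # citation matrix yields a symmetrical proximity matrix
    n = len(citation)
    return [[cosine(citation[i], citation[j]) for j in range(n)]
            for i in range(n)]
```

Applying such a measure row-wise to the asymmetrical matrix, rather than directly to a symmetrical cocitation matrix, is the practice the article argues for.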

329 citations


Journal Article DOI
TL;DR: It is shown how the h-index can be used to express the broad impact of a scholar’s research output over time in more nuanced fashion than straight citation counts.
Abstract: The authors apply a new bibliometric measure, the h-index (Hirsch, 2005), to the literature of information science. Faculty rankings based on raw citation counts are compared with those based on h-counts. There is a strong positive correlation between the two sets of rankings. It is shown how the h-index can be used to express the broad impact of a scholar's research output over time in more nuanced fashion than straight citation counts.
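The h-index computation itself is simple; a sketch following Hirsch's (2005) definition, with invented citation counts in the test:

```python
def h_index(citations):
    # h is the largest number such that h of the author's papers
    # have at least h citations each
    counts = sorted(citations, reverse=True)
    h = 0
    for i, c in enumerate(counts, start=1):
        if c >= i:
            h = i
        else:
            break
    return h
```

Because h grows only when another sufficiently cited paper appears, it is less sensitive to a single blockbuster article than a raw citation sum.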

315 citations


Journal Article DOI
TL;DR: This second edition of Qualitative Research for the Information Professional: A Practical Handbook lives up to its title; it is indeed practical.
Abstract: Published just 7 years after the first edition was released, this second edition of Qualitative Research for the Information Professional: A Practical Handbook lives up to its title; it is indeed practical. Most general texts about qualitative research are long on theory and short on specific instructions (e.g., Denzin & Lincoln, 2000; Lincoln & Guba, 1985; Miles & Huberman, 1984). The opposite is true here. A qualitative research newcomer could conceivably read this text and then undertake a small-scale project on his or her own. The newcomer would be wise to supplement this highly pragmatic text with another text of greater theoretical value (such as the previously mentioned Denzin & Lincoln, 2000; Lincoln & Guba, 1985; or Miles & Huberman, 1984), because Gorman and Clayton's volume discusses specific qualitative methods within a largely decontextualized framework, divorcing the methods from the various philosophical and sociological perspectives that underlie them. The book is divided into 14 chapters. Each chapter discusses a step in the qualitative research process or details a particular qualitative research method. Each chapter begins with Focus Questions, which briefly summarize the main themes, and concludes with suggestions for further reading. Each chapter also includes one or more Research Scenarios (some fictional, some apparently drawn from the authors' own experiences, although citations to particular projects are not given) to illustrate the main points discussed in the chapter. For example, Chapter 6, Beginning Fieldwork, includes a research scenario entitled Gaining Entry by Fitting the Surroundings. It discusses various entry barriers met by a researcher who studied the status of librarians within the organizational culture of a theological college, and his search for a suitable key informant. Much of the book comprises very practical advice, such as:

295 citations


Journal Article DOI
TL;DR: The results show that journal literature is increasingly important in the natural and social sciences, but that its role in the humanities is stagnant and has even tended to diminish slightly in the 1990s.
Abstract: Journal articles constitute the core documents for the diffusion of knowledge in the natural sciences. It has been argued that the same is not true for the social sciences and humanities where knowledge is more often disseminated in monographs that are not indexed in the journal-based databases used for bibliometric analysis. Previous studies have made only partial assessments of the role played by both serials and other types of literature. The importance of journal literature in the various scientific fields has therefore not been systematically characterized. The authors address this issue by providing a systematic measurement of the role played by journal literature in the building of knowledge in both the natural sciences and engineering and the social sciences and humanities. Using citation data from the CD-ROM versions of the Science Citation Index (SCI), Social Science Citation Index (SSCI), and Arts and Humanities Citation Index (AHCI) databases from 1981 to 2000 (Thomson ISI, Philadelphia, PA), the authors quantify the share of citations to both serials and other types of literature. Variations in time and between fields are also analyzed. The results show that journal literature is increasingly important in the natural and social sciences, but that its role in the humanities is stagnant and has even tended to diminish slightly in the 1990s. Journal literature accounts for less than 50% of the citations in several disciplines of the social sciences and humanities; hence, special care should be used when using bibliometric indicators that rely only on journal literature.

267 citations


Journal Issue DOI
TL;DR: In this paper, a multidisciplinary approach was adopted to develop an integrative model of consumer trust in Internet shopping through synthesizing the three diverse trust literatures, including social psychological perspective, legal framework and third-party recognition.
Abstract: The importance of trust in building and maintaining consumer relationships in the online environment is widely accepted in the Information Systems literature. A key challenge for researchers is to identify antecedent variables that engender consumer trust in Internet shopping. This paper adopts a multidisciplinary approach and develops an integrative model of consumer trust in Internet shopping through synthesizing the three diverse trust literatures. The social psychological perspective guides us to include perceived trustworthiness of Internet merchants as the key determinant of consumer trust in Internet shopping. The sociological viewpoint suggests the inclusion of legal framework and third-party recognition in the research model. The views of personality theorists postulate a direct effect of propensity to trust on consumer trust in Internet shopping. The results of this study provide strong support for the research model and research hypotheses, and the high explanatory power illustrates the complementarity of the three streams of research on trust. This paper contributes to the conceptual and empirical understanding of consumer trust in Internet shopping. Implications of this study are noteworthy for both researchers and practitioners. © 2006 Wiley Periodicals, Inc.

223 citations


Journal Article DOI
TL;DR: The idea of domain transfer—genre classifiers should be reusable across multiple topics—which does not arise in standard text classification is introduced and different features for building genre classifiers and their ability to transfer across multiple-topic domains are investigated.
Abstract: Current document-retrieval tools succeed in locating large numbers of documents relevant to a given query. While search results may be relevant according to the topic of the documents, it is more difficult to identify which of the relevant documents are most suitable for a particular user. Automatic genre analysis (i.e., the ability to distinguish documents according to style) would be a useful tool for identifying documents that are most suitable for a particular user. We investigate the use of machine learning for automatic genre classification. We introduce the idea of domain transfer—genre classifiers should be reusable across multiple topics—which does not arise in standard text classification. We investigate different features for building genre classifiers and their ability to transfer across multiple-topic domains. We also show how different feature-sets can be used in conjunction with each other to improve performance and reduce the number of documents that need to be labeled.

221 citations


Journal Article DOI
TL;DR: Concepts of natural and represented information, encoded and embodied information, as well as experienced, enacted, expressed, embedded, recorded, and trace information are elaborated.
Abstract: Fundamental forms of information, as well as the term information itself, are defined and developed for the purposes of information science/studies. Concepts of natural and represented information (taking an unconventional sense of representation), encoded and embodied information, as well as experienced, enacted, expressed, embedded, recorded, and trace information are elaborated. The utility of these terms for the discipline is illustrated with examples from the study of information-seeking behavior and of information genres. Distinctions between the information and curatorial sciences with respect to their social (and informational) objects of study are briefly outlined.

Journal Issue DOI
TL;DR: The authors examine the current three interdisciplinary approaches to conceptualizing how humans have sought information including the everyday life information seeking–sense-making approach, the information foraging approach, and the problem–solution perspective on information seeking approach and propose an initial integrated model of these different approaches with information use.
Abstract: For millennia humans have sought, organized, and used information as they learned and evolved patterns of human information behaviors to resolve their human problems and survive. However, despite the current focus on living in an “information age,” we have a limited evolutionary understanding of human information behavior. In this article the authors examine the current three interdisciplinary approaches to conceptualizing how humans have sought information including (a) the everyday life information seeking–sense-making approach, (b) the information foraging approach, and (c) the problem–solution perspective on information seeking approach. In addition, due to the lack of clarity regarding the role of information use in information behavior, a fourth information approach is provided based on a theory of information use. The use theory proposed starts from an evolutionary psychology notion that humans are able to adapt to their environment and survive because of our modular cognitive architecture. Finally, the authors begin the process of conceptualizing these diverse approaches, and the various aspects or elements of these approaches, within an integrated model with consideration of information use. An initial integrated model of these different approaches with information use is proposed. © 2006 Wiley Periodicals, Inc.

Journal Issue DOI
TL;DR: The article provides an in-depth analysis of previous literature that led to the understanding of the four interactive components of “e” learning and how the authors can utilize these components to maximize the positive and minimize the negative results of ‘e’ learning.
Abstract: The article provides an in-depth analysis of previous literature that led to the understanding of the four interactive components of “e” learning and how we can utilize these components to maximize the positive and minimize the negative results of “e” learning. The four interactive dimensions of “e” learning are the following three originally described in Moore's editorial (1989): (1) interaction with the content, (2) interaction with the instructor, (3) interaction with the students, and an additional new fourth dimension, interaction with the system, which considered all of the new computer technology since his article. In our viewpoint we will highlight the impact that this fourth technological interactive dimension has on the results of “e” learning. The question then is not “to ‘e’ or not to ‘e’,” since “e” learning is already an essential factor of our contemporary learning environment. The question is how to “e”, based on the understanding of the four interactive components of “e” learning, and the understanding that these four types of interactions are different from the ones we are accustomed to in the traditional learning environment. © 2006 Wiley Periodicals, Inc.

Journal Issue DOI
TL;DR: The factor-analytic solutions allow us to test classifications against the structures contained in the database; in this article the process will be demonstrated for the delineation of a set of biochemistry journals.
Abstract: The aggregated citation relations among journals included in the Science Citation Index provide us with a huge matrix, which can be analyzed in various ways. By using principal component analysis or factor analysis, the factor scores can be employed as indicators of the position of the cited journals in the citing dimensions of the database. Unrotated factor scores are exact, and the extraction of principal components can be made stepwise because the principal components are independent. Rotation may be needed for the designation, but in the rotated solution a model is assumed. This assumption can be legitimated on pragmatic or theoretical grounds. Because the resulting outcomes remain sensitive to the assumptions in the model, an unambiguous classification is no longer possible in this case. However, the factor-analytic solutions allow us to test classifications against the structures contained in the database; in this article the process will be demonstrated for the delineation of a set of biochemistry journals. © 2006 Wiley Periodicals, Inc.

Journal Issue DOI
TL;DR: The authors propose a new framework for assessing the performance of relatedness measures and visualization algorithms that contains four factors: accuracy, coverage, scalability, and robustness.
Abstract: Measuring the relatedness between bibliometric units (journals, documents, authors, or words) is a central task in bibliometric analysis. Relatedness measures are used for many different tasks, among them the generating of maps, or visual pictures, showing the relationship between all items from these data. Despite the importance of these tasks, there has been little written on how to quantitatively evaluate the accuracy of relatedness measures or the resulting maps. The authors propose a new framework for assessing the performance of relatedness measures and visualization algorithms that contains four factors: accuracy, coverage, scalability, and robustness. This method was applied to 10 measures of journal–journal relatedness to determine the best measure. The 10 relatedness measures were then used as inputs to a visualization algorithm to create an additional 10 measures of journal–journal relatedness based on the distances between pairs of journals in two-dimensional space. This second step determines robustness (i.e., which measure remains best after dimension reduction). Results show that, for low coverage (under 50%), the Pearson correlation is the most accurate raw relatedness measure. However, the best overall measure, both at high coverage, and after dimension reduction, is the cosine index or a modified cosine index. Results also showed that the visualization algorithm increased local accuracy for most measures. Possible reasons for this counterintuitive finding are discussed. © 2006 Wiley Periodicals, Inc.
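The Pearson correlation named above as the best raw relatedness measure at low coverage can be sketched directly (the vectors in the test are invented):

```python
import math

def pearson(u, v):
    # Pearson correlation between two equal-length vectors:
    # covariance divided by the product of standard deviations
    n = len(u)
    mu, mv = sum(u) / n, sum(v) / n
    cov = sum((a - mu) * (b - mv) for a, b in zip(u, v))
    su = math.sqrt(sum((a - mu) ** 2 for a in u))
    sv = math.sqrt(sum((b - mv) ** 2 for b in v))
    return cov / (su * sv) if su and sv else 0.0
```

Unlike the cosine, Pearson centers each vector on its mean first, which is one reason the two measures can rank journal pairs differently.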

Journal Article DOI
TL;DR: Results show that cybermetric measures could be useful for reflecting the contribution of technologically oriented institutions, increasing the visibility of developing countries, and improving the rankings based on Science Citation Index (SCI) data with known biases.
Abstract: To test feasibility of cybermetric indicators for describing and ranking university activities as shown in their Web sites, a large set of 9,330 institutions worldwide was compiled and analyzed. Using search engines' advanced features, size (number of pages), visibility (number of external inlinks), and number of rich files (pdf, ps, doc, ppt, and xls formats) were obtained for each of the institutional domains of the universities. We found a statistically significant correlation between a Web ranking built on a combination of Webometric data and other university rankings based on bibliometric and other indicators. Results show that cybermetric measures could be useful for reflecting the contribution of technologically oriented institutions, increasing the visibility of developing countries, and improving the rankings based on Science Citation Index (SCI) data with known biases.
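A composite Web ranking of the kind described can be sketched as a weighted combination of max-normalized indicators; the weights and figures below are invented for illustration and are not the study's actual weighting:

```python
def composite_rank(metrics, weights):
    # metrics: {institution: {indicator: value}}; each indicator is
    # normalized to [0, 1] by its maximum, then combined with the
    # (hypothetical) weights; returns institutions best-first
    keys = list(next(iter(metrics.values())))
    maxima = {k: max(m[k] for m in metrics.values()) or 1 for k in keys}
    score = {
        inst: sum(weights[k] * m[k] / maxima[k] for k in keys)
        for inst, m in metrics.items()
    }
    return sorted(score, key=score.get, reverse=True)

# invented example: two institutions, three cybermetric indicators
metrics = {
    "U1": {"size": 1000, "visibility": 500, "rich_files": 50},
    "U2": {"size": 500, "visibility": 1000, "rich_files": 40},
}
weights = {"size": 0.25, "visibility": 0.5, "rich_files": 0.25}
```

Weighting visibility (inlinks) most heavily, as here, rewards institutions whose Web presence is widely linked rather than merely large.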

Journal Article DOI
TL;DR: An evaluation methodology based on fuzzy computing with words aimed at measuring the information quality of Web sites containing documents is presented and two new majority guided linguistic aggregation operators are introduced, the Majority guided Linguistic Induced Ordered Weighted Averaging and weighted MLIOWA.
Abstract: An evaluation methodology based on fuzzy computing with words aimed at measuring the information quality of Web sites containing documents is presented. This methodology is qualitative and user oriented because it generates linguistic recommendations on the information quality of the content-based Web sites based on users' perceptions. It is composed of two main components, an evaluation scheme to analyze the information quality of Web sites and a measurement method to generate the linguistic recommendations. The evaluation scheme is based on both technical criteria related to the Web site structure and criteria related to the content of information on the Web sites. It is user driven because the chosen criteria are easily understandable by the users, in such a way that Web visitors can assess them by means of linguistic evaluation judgments. The measurement method is user centered because it generates linguistic recommendations of the Web sites based on the visitors' linguistic evaluation judgments. To combine the linguistic evaluation judgments we introduce two new majority guided linguistic aggregation operators, the Majority guided Linguistic Induced Ordered Weighted Averaging (MLIOWA) and weighted MLIOWA operators, which generate the linguistic recommendations according to the majority of the evaluation judgments provided by different visitors. The use of this methodology could improve tasks such as information filtering and evaluation on the World Wide Web.
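The aggregation idea behind the MLIOWA operators can be illustrated with the basic numeric OWA operator they generalize; this is a simplified numeric stand-in, not the linguistic MLIOWA itself, and the scores and weights are invented:

```python
def owa(values, weights):
    # Ordered Weighted Averaging: weights attach to rank positions
    # (largest value first), not to particular arguments; the weight
    # vector is assumed to sum to 1
    assert abs(sum(weights) - 1.0) < 1e-9
    ordered = sorted(values, reverse=True)
    return sum(w * x for w, x in zip(weights, ordered))
```

Choosing weights concentrated on middle rank positions is one way such operators approximate a "majority" of the visitors' judgments rather than the extremes.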

Journal Issue DOI
TL;DR: A five-factor model of relevance is proposed on the basis of Grice's theory of communication: topicality, novelty, reliability, understandability, and scope, which finds topicality and novelty to be the two essential relevance criteria.
Abstract: How does an information user perceive a document as relevant? The literature on relevance has identified numerous factors affecting such a judgment. Taking a cognitive approach, this study focuses on the criteria users employ in making relevance judgments beyond topicality. On the basis of Grice's theory of communication, we propose a five-factor model of relevance: topicality, novelty, reliability, understandability, and scope. Data are collected from a semicontrolled survey and analyzed by following a psychometric procedure. Topicality and novelty are found to be the two essential relevance criteria. Understandability and reliability are also found to be significant, but scope is not. The theoretical and practical implications of this study are discussed. © 2006 Wiley Periodicals, Inc.

Journal Issue DOI
TL;DR: In this article, the invisible college concept is discussed with the intent of developing a consensus regarding its definition, and a new definition of the concept is introduced, including a proposed research model.
Abstract: This article addresses the invisible college concept with the intent of developing a consensus regarding its definition. Emphasis is placed on the term as it was defined and used in Derek de Solla Price's work (1963, 1986) and reviewed on the basis of its thematic progress in past research over the years. Special attention is given to Lievrouw's (1990) article concerning the structure versus social process problem to show that both conditions are essential to the invisible college and may be reconciled. A new definition of the invisible college is also introduced, including a proposed research model. With this model, researchers are encouraged to study the invisible college by focusing on three critical components—the subject specialty, the scientists as social actors, and the information use environment (IUE). © 2006 Wiley Periodicals, Inc.

Journal Article DOI
TL;DR: This book is recommended because it offers not only experience but also lessons of value to people from many societies.
Abstract: Where can you easily find The Digital Sublime: Myth, Power, and Cyberspace? In a bookstore? An online bookstore? Are you sure? Keep in mind that you will find the book on this site. This book is recommended because it offers not only experience but also lessons. Those lessons are valuable regardless of who is reading the book, and will benefit people from many societies.

Journal Article DOI
TL;DR: Fuzzy techniques are used in the design of reputation systems based on collecting and aggregating peers' opinions and the behavior of the proposed system is described by comparison with probabilistic approaches.
Abstract: Peer-to-peer (P2P) applications are rapidly gaining acceptance among users of Internet-based services, especially because of their capability of exchanging resources while preserving the anonymity of both requesters and providers. However, concerns have been raised about the possibility that malicious users can exploit the network to spread tampered-with resources (e.g., malicious programs and viruses). A considerable amount of research has thus focused on the development of trust and reputation models in P2P networks. In this article, we propose to use fuzzy techniques in the design of reputation systems based on collecting and aggregating peers' opinions. Fuzzy techniques are used in the evaluation and synthesis of all the opinions expressed by peers. The behavior of the proposed system is described by comparison with probabilistic approaches.

Journal Article DOI
TL;DR: The researchers conclude that the essence of teen everyday life information seeking (ELIS) is the gathering and processing of information to facilitate the teen-to-adulthood maturation process.
Abstract: This is the first part of a two-part article that offers a theoretical and an empirical model of the everyday life information needs of urban teenagers. The qualitative methodology used to gather data for the development of the models included written surveys, audio journals, written activity logs, photographs, and semistructured group interviews. Twenty-seven inner-city teens aged 14 through 17 participated in the study. Data analysis took the form of iterative pattern coding using QSR NVivo 2 software (QSR International, 2002). The resulting theoretical model includes seven areas of urban teen development: the social self, the emotional self, the reflective self, the physical self, the creative self, the cognitive self, and the sexual self. The researchers conclude that the essence of teen everyday life information seeking (ELIS) is the gathering and processing of information to facilitate the teen-to-adulthood maturation process. ELIS is self-exploration and world exploration that helps teens understand themselves and the social and physical worlds in which they live. This study shows the necessity of tying youth information-seeking research to developmental theory in order to examine the reasons why adolescents engage in various information behaviors.

Journal Issue DOI
TL;DR: Interview data gathered in the High Energy Physics (HEP) community is drawn on to address recent problems stemming from collaborative research activity that stretches the boundaries of the traditional scientific authorship model, suggesting that future work in this area draw on the emerging economics literature on “mechanism design” in considering how credit can be attributed.
Abstract: In this article, I draw on interview data gathered in the High Energy Physics (HEP) community to address recent problems stemming from collaborative research activity that stretches the boundaries of the traditional scientific authorship model. While authorship historically has been attributed to individuals and small groups, thereby making it relatively easy to tell who made major contributions to the work, recent collaborations have involved hundreds or thousands of individuals. Printing all of these names in the author list on articles can mean difficulties in discerning the nature or extent of individual contributions, which has significant implications for hiring and promotion procedures. This also can make collaborative research less attractive to scientists at the outset of a project. I discuss the issues that physicists are considering as they grapple with what it means to be “an author,” in addition to suggesting that future work in this area draw on the emerging economics literature on “mechanism design” in considering how credit can be attributed in ways that both ensure proper attribution and induce scientists to put forth their best effort. © 2006 Wiley Periodicals, Inc.

Journal IssueDOI
Traci Hong
TL;DR: This article explores the associations that message features and Web structural features have with perceptions of Web site credibility in a within-subjects experiment that actively located health-related Web sites on the basis of two tasks that differed in task specificity and complexity.
Abstract: This article explores the associations that message features and Web structural features have with perceptions of Web site credibility. In a within-subjects experiment, 84 participants actively located health-related Web sites on the basis of two tasks that differed in task specificity and complexity. Web sites that were deemed most credible were content analyzed for message features and structural features that have been found to be associated with perceptions of source credibility. Regression analyses indicated that message features predicted perceived Web site credibility for both searches when controlling for Internet experience and issue involvement. Advertisements and structural features had no significant effects on perceived Web site credibility. Institution-affiliated domain names (.gov, .org, .edu) predicted Web site credibility, but only in the general search, which was more difficult. Implications of results are discussed in terms of online credibility research and Web site design. © 2006 Wiley Periodicals, Inc.

Journal IssueDOI
TL;DR: The study shows that the citation counts of the publications correspond reasonably well with the authors' own assessments of scientific contribution, and confirms that review articles are cited more frequently than other publication types.
Abstract: In this study, scientists were asked about their own publication history and their citation counts. The study shows that the citation counts of the publications correspond reasonably well with the authors' own assessments of scientific contribution. Generally, citations proved to have the highest accuracy in identifying either major or minor contributions. Nevertheless, according to these judgments, citations are not a reliable indicator of scientific contribution at the level of the individual article. In the construction of relative citation indicators, the average citation rate of the subfield appears to be slightly more appropriate as a reference standard than the journal citation rate. The study confirms that review articles are cited more frequently than other publication types. Compared to the significance authors attach to these articles, they appear to be considerably "overcited." However, there were only marginal differences in the citation rates between empirical, methods, and theoretical contributions. © 2006 Wiley Periodicals, Inc.
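The relative citation indicator discussed in the abstract above can be sketched as a simple normalization: an article's raw citation count divided by the average citation rate of a chosen reference set (the subfield or the journal). The function name and all numbers below are invented for illustration; the abstract itself reports only that the subfield average appears to be the slightly better baseline.

```python
# Minimal sketch of a relative citation indicator: an article's citation
# count normalized by the mean of a reference set. Which reference set is
# used (subfield vs. journal) changes the resulting score.

def relative_citation_rate(citations, reference_citation_counts):
    """Ratio of an article's citations to the mean of a reference set."""
    mean = sum(reference_citation_counts) / len(reference_citation_counts)
    return citations / mean

# Hypothetical data: an article with 30 citations scores 2.0 against a
# subfield averaging 15 citations, but 3.0 against a journal averaging 10,
# illustrating why the choice of reference standard matters.
subfield_counts = [10, 20, 15, 15]  # invented subfield citation counts (mean 15)
journal_counts = [5, 10, 15]        # invented journal citation counts (mean 10)
print(relative_citation_rate(30, subfield_counts))  # 2.0
print(relative_citation_rate(30, journal_counts))   # 3.0
```

The same article can thus look twice as influential under one baseline as under another, which is why the abstract's finding about the appropriate reference standard is consequential for evaluation practice.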

Journal ArticleDOI
TL;DR: The conceptual issues of information use are discussed by reviewing the major ideas of sense-making methodology developed by Brenda Dervin by utilizing the ideas of metaphor analysis suggested by Lakoff and Johnson.
Abstract: The conceptual issues of information use are discussed by reviewing the major ideas of sense-making methodology developed by Brenda Dervin. Sense-making methodology approaches the phenomena of information use by drawing on the metaphor of gap-bridging. The nature of this metaphor is explored by utilizing the ideas of metaphor analysis suggested by Lakoff and Johnson. First, the source domain of the metaphor is characterized by utilizing the graphical illustrations of sense-making metaphors. Second, the target domain of the metaphor is analyzed by scrutinizing Dervin's key writings on information seeking and use. The metaphor of gap-bridging does not suggest a substantive conception of information use; the metaphor gives methodological and heuristic guidance to posit contextual questions as to how people interpret information to make sense of it. Specifically, these questions focus on the ways in which cognitive, affective, and other elements useful for the sense-making process are constructed and shaped to bridge the gap. Ultimately, the key question of information use studies is how people design information in context.

Journal IssueDOI
TL;DR: Results of the study indicate that there is a strong preference for nonsponsored links, with searchers viewing these results first more than 82% of the time, and the order of the result listing does not appear to affect searcher evaluation of sponsored links.
Abstract: In this article, we report results of an investigation into the effect of sponsored links on ecommerce information seeking on the Web. In this research, 56 participants each engaged in six ecommerce Web searching tasks. We extracted these tasks from the transaction log of a Web search engine, so they represent actual ecommerce searching information needs. Using 60 organic and 30 sponsored Web links, the quality of the Web search engine results was controlled by switching nonsponsored and sponsored links on half of the tasks for each participant. This allowed for investigating the bias toward sponsored links while controlling for quality of content. The study also investigated the relationship between searching self-efficacy, searching experience, types of ecommerce information needs, and the order of links on the viewing of sponsored links. Data included 2,453 interactions with links from result pages and 961 utterances evaluating these links. The results of the study indicate that there is a strong preference for nonsponsored links, with searchers viewing these results first more than 82% of the time. Searching self-efficacy and experience do not increase the likelihood of viewing sponsored links, and the order of the result listing does not appear to affect searcher evaluation of sponsored links. The implications for sponsored links as a long-term business model are discussed. © 2006 Wiley Periodicals, Inc.

Journal ArticleDOI
TL;DR: This study is among the first to apply communication theory to an exploration of relational (socioemotional) aspects of virtual reference service (VRS) and identify interpersonal communication dynamics present in the chat reference environment.
Abstract: Synchronous chat reference services have emerged as viable alternatives to the traditional face-to-face (FtF) library reference encounter. Research in virtual reference service (VRS) and client–librarian behavior is just beginning, with a primary focus on task issues of accuracy and efficiency. This study is among the first to apply communication theory to an exploration of relational (socioemotional) aspects of VRS. It reports results from a pilot study that analyzed 44 transcripts nominated for the LSSI Samuel Swett Green Award (Library Systems and Services, Germantown, MD) for Exemplary Virtual Reference, followed by an analysis of 245 randomly selected anonymous transcripts from Maryland AskUsNow! statewide chat reference service. Transcripts underwent in-depth qualitative content analysis. Results revealed that interpersonal skills important to FtF reference success are present (although modified) in VRS. These include techniques for rapport building, compensation for lack of nonverbal cues, strategies for relationship development, evidence of deference and respect, face-saving tactics, and greeting and closing rituals. Results also identified interpersonal communication dynamics present in the chat reference environment, differences in client versus librarian patterns, and compensation strategies for lack of nonverbal communication.

Journal ArticleDOI
TL;DR: Distributions of these relations show that there is more sharing of similar than different kinds of knowledge, suggesting that knowledge may flow across disciplinary boundaries along lines of practice.
Abstract: Interdisciplinary collaboration has become of particular interest as science and social science research increasingly crosses traditional boundaries, raising issues about what kinds of information and knowledge exchange occur, and thus what to support. Research on interdisciplinarity, learning, and knowledge management suggests the benefits of collaboration are achieved when individuals pool knowledge toward a common goal. Yet, it is not sufficient to say that knowledge exchange must take place; instead, we need to ask what kinds of exchanges form the basis of collaboration in these groups. To explore this, members of three distributed, interdisciplinary teams (one science and two social science teams) were asked what they learned from the five to eight others with whom they worked most closely, and what they thought those others learned from them. Results show the exchange of factual knowledge to be only one of a number of learning exchanges that support the team. Important exchanges also include learning the process of doing something, learning about methods, engaging jointly in research, learning about technology, generating new ideas, socialization into the profession, accessing a network of contacts, and administration work. Distributions of these relations show that there is more sharing of similar than different kinds of knowledge, suggesting that knowledge may flow across disciplinary boundaries along lines of practice.

Journal IssueDOI
TL;DR: It is claimed that although theoretical frameworks are appropriate for guiding research, a Theory of Link Analysis is not possible, and that the Web is incapable of giving definitive answers to large-scale link analysis research questions concerning social factors underlying link creation.
Abstract: Link analysis in various forms is now an established technique in many different subjects, reflecting the perceived importance of links and of the Web. A critical but very difficult issue is how to interpret the results of social science link analyses. It is argued that the dynamic nature of the Web, its lack of quality control, and the online proliferation of copying and imitation mean that methodologies operating within a highly positivist, quantitative framework are ineffective. Conversely, the sheer variety of the Web makes application of qualitative methodologies and pure reason very problematic to large-scale studies. Methodology triangulation is consequently advocated, in combination with a warning that the Web is incapable of giving definitive answers to large-scale link analysis research questions concerning social factors underlying link creation. Finally, it is claimed that although theoretical frameworks are appropriate for guiding research, a Theory of Link Analysis is not possible. © 2006 Wiley Periodicals, Inc.