
Showing papers on "The Internet published in 2008"


Book
12 Oct 2008
TL;DR: The classic survey design reference, updated for the digital age: for over two decades, Dillman's text has aided both students and professionals in effectively planning and conducting mail, telephone, and, more recently, Internet surveys.
Abstract: The classic survey design reference, updated for the digital age. For over two decades, Dillman's classic text on survey design has aided both students and professionals in effectively planning and conducting mail, telephone, and, more recently, Internet surveys. The new edition is thoroughly updated and revised, and covers all aspects of survey research. It features expanded coverage of mobile phones, tablets, and the use of do-it-yourself surveys, and Dillman's unique Tailored Design Method is also thoroughly explained. This invaluable resource is crucial for any researcher seeking to increase response rates and obtain high-quality feedback from survey questions. Consistent with the current emphasis on the visual and aural, the new edition is complemented by copious examples within the text and an accompanying website. This heavily revised Fourth Edition includes:
- Strategies and tactics for determining the needs of a given survey, how to design it, and how to effectively administer it
- How and when to use mail, telephone, and Internet surveys to maximum advantage
- Proven techniques to increase response rates
- Guidance on how to obtain high-quality feedback from mail, electronic, and other self-administered surveys
- Direction on how to construct effective questionnaires, including considerations of layout
- The effects of sponsorship on the response rates of surveys
- Use of capabilities provided by newly mass-used media: interactivity, presentation of aural and visual stimuli
The Fourth Edition also reintroduces the telephone, including coordinating landline and mobile surveys. Grounded in the best research, the book offers practical how-to guidelines and detailed examples for practitioners and students alike.

5,067 citations


Journal ArticleDOI
TL;DR: The Phylogeny.fr platform transparently chains programs to automatically perform phylogenetic analyses. It is designed primarily for biologists with no experience in phylogeny but can also meet the needs of specialists: the former will find up-to-date tools chained in a phylogeny pipeline to analyze their data in a simple and robust way, while specialists can easily build and run sophisticated analyses.
Abstract: Phylogenetic analyses are central to many research areas in biology and typically involve the identification of homologous sequences, their multiple alignment, the phylogenetic reconstruction and the graphical representation of the inferred tree. The Phylogeny.fr platform transparently chains programs to automatically perform these tasks. It is primarily designed for biologists with no experience in phylogeny, but can also meet the needs of specialists; the first ones will find up-to-date tools chained in a phylogeny pipeline to analyze their data in a simple and robust way, while the specialists will be able to easily build and run sophisticated analyses. Phylogeny.fr offers three main modes. The ‘One Click’ mode targets non-specialists and provides a ready-to-use pipeline chaining programs with recognized accuracy and speed: MUSCLE for multiple alignment, PhyML for tree building, and TreeDyn for tree rendering. All parameters are set up to suit most studies, and users only have to provide their input sequences to obtain a ready-to-print tree. The ‘Advanced’ mode uses the same pipeline but allows the parameters of each program to be customized by users. The ‘A la Carte’ mode offers more flexibility and sophistication, as users can build their own pipeline by selecting and setting up the required steps from a large choice of tools to suit their specific needs. Prior to phylogenetic analysis, users can also collect neighbors of a query sequence by running BLAST on general or specialized databases. A guide tree then helps to select neighbor sequences to be used as input for the phylogeny pipeline. Phylogeny.fr is available at: http://www.phylogeny.fr/

4,364 citations
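The 'One Click' mode described above is essentially a fixed chain of tools, each consuming the previous stage's output. A toy sketch of that chaining pattern, with stand-in Python functions in place of the real MUSCLE, PhyML, and TreeDyn programs (all stage logic here is illustrative, not the platform's actual behavior):

```python
# Toy sketch of the "chained pipeline" idea behind Phylogeny.fr's
# 'One Click' mode. The real pipeline runs MUSCLE, PhyML and TreeDyn;
# here each stage is a stand-in Python function so the chaining
# pattern itself is runnable.

def align(sequences):
    # Stand-in for MUSCLE: pad sequences to equal length with gaps.
    width = max(len(s) for s in sequences)
    return [s.ljust(width, "-") for s in sequences]

def build_tree(alignment):
    # Stand-in for PhyML: join taxa into a trivial Newick string.
    return "(" + ",".join(f"seq{i}" for i in range(len(alignment))) + ");"

def render(tree):
    # Stand-in for TreeDyn: return a printable description.
    return f"tree with {tree.count(',') + 1} leaves: {tree}"

def one_click(sequences, steps=(align, build_tree, render)):
    """Chain the steps, feeding each output to the next stage."""
    result = sequences
    for step in steps:
        result = step(result)
    return result

print(one_click(["ACGT", "ACG", "AC"]))
# → tree with 3 leaves: (seq0,seq1,seq2);
```

The 'A la Carte' mode corresponds to passing a user-chosen `steps` tuple instead of the default chain.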


Journal ArticleDOI
01 Jan 2008
TL;DR: A theoretical framework describing the trust-based decision-making process a consumer uses when making a purchase from a given site is developed and the proposed model is tested using a Structural Equation Modeling technique on Internet consumer purchasing behavior data collected via a Web survey.
Abstract: Are trust and risk important in consumers' electronic commerce purchasing decisions? What are the antecedents of trust and risk in this context? How do trust and risk affect an Internet consumer's purchasing decision? To answer these questions, we i) develop a theoretical framework describing the trust-based decision-making process a consumer uses when making a purchase from a given site, ii) test the proposed model using a Structural Equation Modeling technique on Internet consumer purchasing behavior data collected via a Web survey, and iii) consider the implications of the model. The results of the study show that Internet consumers' trust and perceived risk have strong impacts on their purchasing decisions. Consumer disposition to trust, reputation, privacy concerns, security concerns, the information quality of the Website, and the company's reputation, have strong effects on Internet consumers' trust in the Website. Interestingly, the presence of a third-party seal did not strongly influence consumers' trust.

2,650 citations


Book
28 Feb 2008
TL;DR: Shirky discusses how social tools support group organization and communication in an entirely new way, one that was previously impossible, drawing on anecdotes from users of social media sites like WordPress and Blogspot and how they used these tools to achieve a purpose.
Abstract: Shirky's book discusses how social tools support group organization and communication in an entirely new way, one that was previously impossible. He does so by including anecdotes from users of social media sites like WordPress and Blogspot, and how they used these social media tools to achieve a purpose; basically, his book looks to exemplify the tool part of "social media tools," even though we as a society now seem to take this for granted.

2,386 citations


Proceedings ArticleDOI
18 May 2008
TL;DR: This work applies the de-anonymization methodology to the Netflix Prize dataset, which contains anonymous movie ratings of 500,000 subscribers of Netflix, the world's largest online movie rental service, and demonstrates that an adversary who knows only a little bit about an individual subscriber can easily identify this subscriber's record in the dataset.
Abstract: We present a new class of statistical de-anonymization attacks against high-dimensional micro-data, such as individual preferences, recommendations, transaction records and so on. Our techniques are robust to perturbation in the data and tolerate some mistakes in the adversary's background knowledge. We apply our de-anonymization methodology to the Netflix Prize dataset, which contains anonymous movie ratings of 500,000 subscribers of Netflix, the world's largest online movie rental service. We demonstrate that an adversary who knows only a little bit about an individual subscriber can easily identify this subscriber's record in the dataset. Using the Internet Movie Database as the source of background knowledge, we successfully identified the Netflix records of known users, uncovering their apparent political preferences and other potentially sensitive information.

2,241 citations
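The core intuition of the attack is record linkage: score every anonymized record against a few approximate auxiliary ratings and pick the best match. A minimal sketch of that intuition (the paper's actual "scoreboard" algorithm additionally weights rare movies more heavily and checks the match's eccentricity; the data below is made up):

```python
# Toy sketch of the matching intuition behind the Netflix
# de-anonymization attack: an adversary with a few approximate
# (movie, rating) pairs scores every anonymized record and picks
# the best match. The real algorithm weights rare movies more and
# verifies that the top match stands out; this omits both.

def similarity(record, aux):
    """Count auxiliary ratings matched within a tolerance of 1 star."""
    return sum(
        1 for movie, rating in aux.items()
        if movie in record and abs(record[movie] - rating) <= 1
    )

def best_match(dataset, aux):
    """Return the id of the record that best fits the auxiliary info."""
    return max(dataset, key=lambda rid: similarity(dataset[rid], aux))

dataset = {                          # anonymized ratings (hypothetical)
    "user_a": {"m1": 5, "m2": 1, "m3": 4},
    "user_b": {"m1": 2, "m4": 5},
    "user_c": {"m2": 5, "m3": 5},
}
aux = {"m1": 4, "m3": 4}             # noisy background knowledge
print(best_match(dataset, aux))      # → user_a
```

The tolerance in `similarity` is what makes the attack robust to perturbation in the published data.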


Journal ArticleDOI
TL;DR: This paper presents structure-from-motion and image-based rendering algorithms that operate on hundreds of images downloaded as a result of keyword-based image search queries like "Notre Dame" or "Trevi Fountain," presented as a first step towards 3D modeling of well-photographed sites, cities, and landscapes from Internet imagery.
Abstract: There are billions of photographs on the Internet, comprising the largest and most diverse photo collection ever assembled. How can computer vision researchers exploit this imagery? This paper explores this question from the standpoint of 3D scene modeling and visualization. We present structure-from-motion and image-based rendering algorithms that operate on hundreds of images downloaded as a result of keyword-based image search queries like "Notre Dame" or "Trevi Fountain." This approach, which we call Photo Tourism, has enabled reconstructions of numerous well-known world sites. This paper presents these algorithms and results as a first step towards 3D modeling of the world's well-photographed sites, cities, and landscapes from Internet imagery, and discusses key open problems and challenges for the research community.

2,207 citations


Book
24 Nov 2008
TL;DR: This recent account presents a comprehensive explanation of the effect of complex connectivity patterns on dynamical phenomena in the vast number of everyday systems that can be represented as large complex networks.
Abstract: The availability of large data sets has allowed researchers to uncover complex properties, such as large-scale fluctuations and heterogeneities, in many networks, which have led to the breakdown of standard theoretical frameworks and models. Until recently these systems were considered as haphazard sets of points and connections. Recent advances have generated a vigorous research effort in understanding the effect of complex connectivity patterns on dynamical phenomena. For example, a vast number of everyday systems, from the brain to ecosystems, power grids and the Internet, can be represented as large complex networks. This recent account presents a comprehensive explanation of these effects.

1,694 citations
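The "large scale fluctuations and heterogeneities" that break standard frameworks can be seen in the degree distribution alone. A minimal, self-contained illustration comparing a homogeneous ring network with a hub-dominated star (graphs and the variance measure are illustrative choices, not from the book):

```python
# Minimal illustration of degree heterogeneity: a homogeneous
# network (ring) has uniform degrees, while a hub-dominated network
# (star) shows the large fluctuations that break mean-field
# assumptions. Graphs are plain adjacency dicts.

def degrees(adj):
    return [len(nbrs) for nbrs in adj.values()]

def ring(n):
    # each node connects to its two neighbors on a cycle
    return {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}

def star(n):
    # node 0 is the hub, connected to all n-1 leaves
    adj = {0: list(range(1, n))}
    adj.update({i: [0] for i in range(1, n)})
    return adj

def variance(xs):
    mean = sum(xs) / len(xs)
    return sum((x - mean) ** 2 for x in xs) / len(xs)

print(variance(degrees(ring(10))))  # 0.0 — every node has degree 2
print(variance(degrees(star(10))))  # large — one hub, many leaves
```

Real networks such as the Internet sit far closer to the heterogeneous end, with heavy-tailed degree distributions rather than a single hub.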


Journal ArticleDOI
TL;DR: This survey paper looks at emerging research into the application of Machine Learning techniques to IP traffic classification - an inter-disciplinary blend of IP networking and data mining techniques.
Abstract: The research community has begun looking for IP traffic classification techniques that do not rely on "well-known" TCP or UDP port numbers or on interpreting the contents of packet payloads. New work is emerging on the use of statistical traffic characteristics to assist in the identification and classification process. This survey paper looks at emerging research into the application of Machine Learning (ML) techniques to IP traffic classification - an inter-disciplinary blend of IP networking and data mining techniques. We provide context and motivation for the application of ML techniques to IP traffic classification, and review 18 significant works that cover the dominant period from 2004 to early 2007. These works are categorized and reviewed according to their choice of ML strategies and primary contributions to the literature. We also discuss a number of key requirements for the employment of ML-based traffic classifiers in operational IP networks, and qualitatively critique the extent to which the reviewed works meet these requirements. Open issues and challenges in the field are also discussed.

1,519 citations
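The surveyed approach replaces port numbers and payload inspection with statistical flow features. A hedged sketch of that idea using mean and variance of packet sizes with a nearest-centroid rule; the feature choice, class names, and centroid values are all illustrative stand-ins for the ML strategies the survey actually reviews:

```python
# Sketch of statistical traffic classification: assign a flow to a
# class from per-flow statistics (here, mean and variance of packet
# sizes) instead of port numbers or payload. A nearest-centroid rule
# stands in for the surveyed ML algorithms.

def features(packet_sizes):
    mean = sum(packet_sizes) / len(packet_sizes)
    var = sum((p - mean) ** 2 for p in packet_sizes) / len(packet_sizes)
    return (mean, var)

def classify(flow, centroids):
    """Assign the flow to the class with the nearest feature centroid."""
    f = features(flow)
    return min(
        centroids,
        key=lambda label: sum((a - b) ** 2 for a, b in zip(f, centroids[label])),
    )

# Illustrative centroids, as if learned offline from labeled flows.
centroids = {
    "bulk_transfer": (1400.0, 300.0),   # large, fairly uniform packets
    "interactive":   (80.0, 300.0),     # small packets
}
print(classify([1380, 1420, 1400], centroids))   # → bulk_transfer
print(classify([60, 90, 100], centroids))        # → interactive
```

An operational classifier would normalize features and use a trained model (Naive Bayes, clustering, etc., as in the reviewed works), but the interface is the same: features in, traffic class out.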


Journal ArticleDOI
TL;DR: The dplR package should make it easier for dendrochronologists to take advantage of R and use it as their primary analytic environment.

1,316 citations


01 Aug 2008
TL;DR: This specification describes a network-based mobility management protocol and is referred to as Proxy Mobile IPv6, which enables IP mobility for a host without requiring its participation in any mobility-related signaling.
Abstract: Network-based mobility management enables IP mobility for a host without requiring its participation in any mobility-related signaling. The network is responsible for managing IP mobility on behalf of the host. The mobility entities in the network are responsible for tracking the movements of the host and initiating the required mobility signaling on its behalf. This specification describes a network-based mobility management protocol and is referred to as Proxy Mobile IPv6. [STANDARDS-TRACK]

1,315 citations
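The defining property of Proxy Mobile IPv6 is that mobility signaling comes from network entities, not the host. A toy model of the binding-cache bookkeeping at the local mobility anchor (class and method names are illustrative; real PMIPv6 message formats and state from RFC 5213 are omitted):

```python
# Toy model of network-based mobility in Proxy Mobile IPv6: the host
# keeps one stable address while mobility access gateways (MAGs)
# update a binding cache at the local mobility anchor (LMA) on its
# behalf. The host itself sends no mobility signaling.

class LocalMobilityAnchor:
    def __init__(self):
        self.binding_cache = {}   # host id -> MAG it is attached under

    def proxy_binding_update(self, host_id, mag):
        # Sent by the MAG, not the host: the network tracks movement.
        self.binding_cache[host_id] = mag

    def route(self, host_id):
        """Tunnel endpoint for traffic addressed to the host."""
        return self.binding_cache[host_id]

lma = LocalMobilityAnchor()
lma.proxy_binding_update("host1", "MAG-A")   # host attaches under MAG-A
print(lma.route("host1"))                    # → MAG-A
lma.proxy_binding_update("host1", "MAG-B")   # host moves; MAG-B signals
print(lma.route("host1"))                    # → MAG-B
```

Traffic to the host is simply tunneled to whichever MAG the cache currently names, which is why the host's own IP address never changes.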


Book
27 Oct 2008
TL;DR: Hindman argues that, contrary to popular belief, the Internet has done little to broaden political discourse and in fact empowers a small set of elites, some new, but most familiar.
Abstract: Is the Internet democratizing American politics? Do political Web sites and blogs mobilize inactive citizens and make the public sphere more inclusive? The Myth of Digital Democracy reveals that, contrary to popular belief, the Internet has done little to broaden political discourse but in fact empowers a small set of elites--some new, but most familiar. Matthew Hindman argues that, though hundreds of thousands of Americans blog about politics, blogs receive only a minuscule portion of Web traffic, and most blog readership goes to a handful of mainstream, highly educated professionals. He shows how, despite the wealth of independent Web sites, online news audiences are concentrated on the top twenty outlets, and online organizing and fund-raising are dominated by a few powerful interest groups. Hindman tracks nearly three million Web pages, analyzing how their links are structured, how citizens search for political content, and how leading search engines like Google and Yahoo! funnel traffic to popular outlets. He finds that while the Internet has increased some forms of political participation and transformed the way interest groups and candidates organize, mobilize, and raise funds, elites still strongly shape how political material on the Web is presented and accessed. The Myth of Digital Democracy debunks popular notions about political discourse in the digital age, revealing how the Internet has neither diminished the audience share of corporate media nor given greater voice to ordinary citizens.

Journal ArticleDOI
10 Sep 2008-JAMA
TL;DR: Internet-based learning is associated with large positive effects compared with no intervention; compared with non-Internet instructional methods, effects are heterogeneous and generally small, suggesting effectiveness similar to traditional methods.
Abstract:
Context: The increasing use of Internet-based learning in health professions education may be informed by a timely, comprehensive synthesis of evidence of effectiveness.
Objectives: To summarize the effect of Internet-based instruction for health professions learners compared with no intervention and with non-Internet interventions.
Data Sources: Systematic search of MEDLINE, Scopus, CINAHL, EMBASE, ERIC, TimeLit, Web of Science, Dissertation Abstracts, and the University of Toronto Research and Development Resource Base from 1990 through 2007.
Study Selection: Studies in any language quantifying the association of Internet-based instruction and educational outcomes for practicing and student physicians, nurses, pharmacists, dentists, and other health care professionals compared with a no-intervention or non-Internet control group or a preintervention assessment.
Data Extraction: Two reviewers independently evaluated study quality and abstracted information including characteristics of learners, learning setting, and intervention (including level of interactivity, practice exercises, online discussion, and duration).
Data Synthesis: There were 201 eligible studies. Heterogeneity in results across studies was large (I2 ≥ 79%) in all analyses. Effect sizes were pooled using a random effects model. The pooled effect size in comparison to no intervention favored Internet-based interventions and was 1.00 (95% confidence interval [CI], 0.90-1.10; P
Conclusions: Internet-based learning is associated with large positive effects compared with no intervention. In contrast, effects compared with non-Internet instructional methods are heterogeneous and generally small, suggesting effectiveness similar to traditional methods. Future research should directly compare different Internet-based interventions.
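The pooled effect size reported above comes from combining per-study effects weighted by their precision. A sketch of the simpler fixed-effect (inverse-variance) step that random-effects pooling builds on; the study effects and standard errors below are made up, not taken from the review:

```python
# Sketch of effect-size pooling for a meta-analysis. The review used
# a random-effects model; this shows the fixed-effect inverse-variance
# step underlying it, with hypothetical inputs.

def pooled_effect(effects, std_errors):
    """Inverse-variance weighted mean effect and its standard error."""
    weights = [1 / se ** 2 for se in std_errors]       # precision weights
    total = sum(weights)
    mean = sum(w * e for w, e in zip(weights, effects)) / total
    return mean, (1 / total) ** 0.5

effects = [1.2, 0.8, 1.0]   # hypothetical standardized mean differences
ses = [0.2, 0.1, 0.2]       # hypothetical standard errors
mean, se = pooled_effect(effects, ses)
print(round(mean, 3), round(se, 3))   # → 0.9 0.082
```

A random-effects model would additionally estimate between-study variance (the large I² reported here is exactly why the authors chose it) and fold it into each weight.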

Journal ArticleDOI
TL;DR: In this paper, the authors found that participants often used the Internet, especially social networking sites, to connect and reconnect with friends and family members, and there was overlap between participants' online and offline networks.


Journal ArticleDOI
TL;DR: An information adoption model was developed to examine the factors affecting information adoption of online opinion seekers in online customer communities and found comprehensiveness and relevance to be the most effective components of the argument quality construct.
Abstract: Purpose – Web‐based technologies have created numerous opportunities for electronic word‐of‐mouth (eWOM) communication. This phenomenon impacts online retailers as this easily accessible information could greatly affect the online consumption decision. The purpose of this paper is to examine the extent to which opinion seekers are willing to accept and adopt online consumer reviews and which factors encourage adoption. Design/methodology/approach – Using dual‐process theories, an information adoption model was developed to examine the factors affecting information adoption of online opinion seekers in online customer communities. The model was tested empirically using a sample of 154 users who had experience within the online customer community, Openrice.com. Users were required to complete a survey regarding the online consumer reviews received from the virtual sharing platform. Findings – The paper found comprehensiveness and relevance to be the most effective components of the argument quality construct ...

Journal ArticleDOI
TL;DR: In a very significant development for eHealth, a broad adoption of Web 2.0 technologies and approaches coincides with the more recent emergence of Personal Health Application Platforms and Personally Controlled Health Records such as Google Health, Microsoft HealthVault, and Dossia.
Abstract: In a very significant development for eHealth, broad adoption of Web 2.0 technologies and approaches coincides with the more recent emergence of Personal Health Application Platforms and Personally Controlled Health Records such as Google Health, Microsoft HealthVault, and Dossia. "Medicine 2.0" applications, services and tools are defined as Web-based services for health care consumers, caregivers, patients, health professionals, and biomedical researchers, that use Web 2.0 technologies and/or semantic web and virtual reality approaches to enable and facilitate specifically 1) social networking, 2) participation, 3) apomediation, 4) openness and 5) collaboration, within and between these user groups. The Journal of Medical Internet Research (JMIR) publishes a Medicine 2.0 theme issue and sponsors a conference on "How Social Networking and Web 2.0 changes Health, Health Care, Medicine and Biomedical Research", to stimulate and encourage research in these five areas.

Journal ArticleDOI
TL;DR: This study estimates historical electricity use by data centers worldwide and regionally on the basis of more detailed data than were available for previous assessments, including electricity used by servers, data center communications, and storage equipment.
Abstract: The direct electricity used by data centers has become an important issue in recent years as demands for new Internet services (such as search, music downloads, video-on-demand, social networking, and telephony) have become more widespread. This study estimates historical electricity used by data centers worldwide and regionally on the basis of more detailed data than were available for previous assessments, including electricity used by servers, data center communications, and storage equipment. Aggregate electricity use for data centers doubled worldwide from 2000 to 2005. Three quarters of this growth was the result of growth in the number of the least expensive (volume) servers. Data center communications and storage equipment each contributed about 10% of the growth. Total electricity use grew at an average annual rate of 16.7% per year, with the Asia Pacific region (without Japan) being the only major world region with growth significantly exceeding that average. Direct electricity used by information technology equipment in data centers represented about 0.5% of total world electricity consumption in 2005. When electricity for cooling and power distribution is included, that figure is about 1%. Worldwide data center power demand in 2005 was equivalent (in capacity terms) to about seventeen 1000 MW power plants.
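The reported figures can be sanity-checked with simple compounding: growing at the stated 16.7% average annual rate over the five years 2000-2005 yields slightly more than a doubling, consistent with the "doubled worldwide" finding. A small arithmetic sketch:

```python
# Quick arithmetic check on the reported data center figures:
# compound the 16.7% average annual growth rate over 2000-2005
# and compare against the reported doubling.

def compound(rate, years):
    """Total growth multiplier after compounding `rate` for `years`."""
    return (1 + rate) ** years

multiplier = compound(0.167, 5)
print(round(multiplier, 2))   # ≈ 2.16, i.e. slightly more than doubled
```

The same one-liner inverted (`2 ** (1 / 5) - 1 ≈ 0.149`) shows that an exact doubling corresponds to about 14.9% per year, so the 16.7% figure implies somewhat more than doubling over the period.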

Journal ArticleDOI
TL;DR: This article examined how narcissism is manifested on a social networking Web site (i.e., Facebook) and found that narcissism predicted higher levels of social activity in the online community and more self-promoting content in several aspects of the social networking web pages.
Abstract: The present research examined how narcissism is manifested on a social networking Web site (i.e., Facebook.com). Narcissistic personality self-reports were collected from social networking Web page owners. Then their Web pages were coded for both objective and subjective content features. Finally, strangers viewed the Web pages and rated their impression of the owner on agentic traits, communal traits, and narcissism. Narcissism predicted (a) higher levels of social activity in the online community and (b) more self-promoting content in several aspects of the social networking Web pages. Strangers who viewed the Web pages judged more narcissistic Web page owners to be more narcissistic. Finally, mediational analyses revealed several Web page content features that were influential in raters' narcissistic impressions of the owners, including quantity of social interaction, main photo self-promotion, and main photo attractiveness. Implications of the expression of narcissism in social networking communities are discussed.

Journal ArticleDOI
TL;DR: Findings suggest that those with higher levels of education and of a more resource-rich background use the Web for more "capital-enhancing" activities and that online skill is an important mediating factor in the types of activities people pursue online.

Abstract: This article expands understanding of the digital divide to more nuanced measures of use by examining differences in young adults' online activities. Young adults are the most highly connected age group, but that does not mean that their Internet uses are homogenous. Analyzing data about the Web uses of 270 adults from across the United States, the article explores the differences in 18- to 26-year-olds' online activities and what social factors explain the variation. Findings suggest that those with higher levels of education and of a more resource-rich background use the Web for more “capital-enhancing” activities. Detailed analyses of user attributes also reveal that online skill is an important mediating factor in the types of activities people pursue online. The authors discuss the implications of these findings for a “second-level digital divide,” that is, differences among the population of young adult Internet users.

Journal ArticleDOI
TL;DR: A new public dataset based on manipulations and embellishments of a popular social network site, Facebook.com, is introduced and five distinctive features of this dataset are emphasized and its advantages and limitations vis-a-vis other kinds of network data are highlighted.

Journal ArticleDOI
TL;DR: This study assessed faculty's awareness of the benefits of Web 2.0 to supplement in-class learning and used the decomposed theory of planned behavior (DTPB) model to understand faculty's decisions to adopt these tools; it found that while some faculty members feel that some Web 2.0 technologies could improve students' learning, their interaction with faculty and with other peers, their writing abilities, and their satisfaction with the course, few choose to use them in the classroom.
Abstract: While students are increasing their use of emerging technologies such as text messaging, wikis, social networks, and other Web 2.0 applications, this is not the case with many university faculty. The purpose of this study was to assess faculty's awareness of the benefits of Web 2.0 to supplement in-class learning and better understand faculty's decisions to adopt these tools using the decomposed theory of planned behavior (DTPB) model. Findings indicated that while some faculty members feel that some Web 2.0 technologies could improve students' learning, their interaction with faculty and with other peers, their writing abilities, and their satisfaction with the course, few choose to use them in the classroom. Additional results indicated that faculty's attitude and their perceived behavioral control are strong indicators of their intention to use Web 2.0. A number of implications are drawn highlighting how the use of Web 2.0 could be useful in the classroom.

01 Jan 2008
TL;DR: Reading a book changes us forever, as readers return from the worlds they inhabit during their reading journeys with new insights about their surroundings and themselves.
Abstract: The essence of both reading and reading instruction is change. Reading a book changes us forever as we return from the worlds we inhabit during our reading journeys with new insights about our surroundings and ourselves. Teaching a student to read is also a transforming experience. It opens new windows to the world and creates a lifetime of opportunities. Change defines our work as both literacy educators and researchers — by teaching a student to read, we change the world.

Journal ArticleDOI
TL;DR: This study analyzes the impact of trust and risk perceptions on one's willingness to use e-government services and proposes a model of e-government trust composed of disposition to trust, trust of the Internet (TOI), trust of the government (TOG), and perceived risk.
Abstract: Citizen confidence in government and technology is imperative to the wide-spread adoption of e-government. This study analyzes the impact of trust and risk perceptions on one's willingness to use e-government services. We propose a model of e-government trust composed of disposition to trust, trust of the Internet (TOI), trust of the government (TOG) and perceived risk. Results from a citizen survey indicate that disposition to trust positively affects TOI and TOG, which in turn affect intentions to use an e-government service. TOG also negatively affects perceived risk, which in turn affects use intentions. Implications for practice and research are discussed.

Journal ArticleDOI
TL;DR: The prevailing paradigm in Internet privacy literature, which treats privacy merely within a context of rights and violations, is inadequate for studying the Internet as a social realm.
Abstract: The prevailing paradigm in Internet privacy literature, treating privacy within a context merely of rights and violations, is inadequate for studying the Internet as a social realm. Following Goffm...

Journal Article
TL;DR: Zittrain argues that the Internet is on a path to lockdown, ending its cycle of innovation and facilitating unsettling new kinds of control, and that its salvation lies in the hands of its millions of users.
Abstract: This extraordinary book explains the engine that has catapulted the Internet from backwater to ubiquity, and reveals that it is sputtering precisely because of its runaway success. With the unwitting help of its users, the generative Internet is on a path to a lockdown, ending its cycle of innovation and facilitating unsettling new kinds of control. iPods, iPhones, Xboxes, and TiVos represent the first wave of Internet-centered products that can't be easily modified by anyone except their vendors or selected partners. These tethered appliances have already been used in remarkable but little-known ways: car GPS systems have been reconfigured at the demand of law enforcement to eavesdrop on the occupants at all times, and digital video recorders have been ordered to self-destruct thanks to a lawsuit against the manufacturer thousands of miles away. New Web 2.0 platforms like Google mash-ups and Facebook are rightly touted, but their applications can be similarly monitored and eliminated from a central source. As tethered appliances and applications eclipse the PC, the very nature of the Internet, its generativity, or innovative character, is at risk. The Internet's current trajectory is one of lost opportunity. Its salvation, Zittrain argues, lies in the hands of its millions of users. Drawing on generative technologies like Wikipedia that have so far survived their own successes, this book shows how to develop new technologies and social structures that allow users to work creatively and collaboratively, participate in solutions, and become true netizens.

Proceedings ArticleDOI
02 Jun 2008
TL;DR: This work investigates the social networking in YouTube videos, a key driving force behind the site's success, and finds that the links to related videos generated by uploaders' choices have clear small-world characteristics, which indicates that the videos have strong correlations with each other and creates opportunities for developing novel techniques to enhance the service quality.
Abstract: YouTube has become the most successful Internet website providing a new generation of short video sharing service since its establishment in early 2005. YouTube has a great impact on Internet traffic nowadays, yet itself is suffering from a severe problem of scalability. Therefore, understanding the characteristics of YouTube and similar sites is essential to network traffic engineering and to their sustainable development. To this end, we have crawled the YouTube site for four months, collecting more than 3 million YouTube videos' data. In this paper, we present a systematic and in-depth measurement study on the statistics of YouTube videos. We have found that YouTube videos have noticeably different statistics compared to traditional streaming videos, ranging from length and access pattern, to their growth trend and active life span. We investigate the social networking in YouTube videos, as this is a key driving force toward its success. In particular, we find that the links to related videos generated by uploaders' choices have clear small-world characteristics. This indicates that the videos have strong correlations with each other, and creates opportunities for developing novel techniques to enhance the service quality.
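The "small-world" claim about the related-video graph rests on high clustering combined with short path lengths. A minimal sketch of the clustering-coefficient computation on a toy undirected graph of videos (the adjacency data is invented; the crawled YouTube graph is of course far larger):

```python
# Sketch of the local clustering coefficient, one of the statistics
# behind the "small-world" finding for YouTube's related-video links.
# Graphs are adjacency dicts of sets; edges are undirected.

def clustering(adj, node):
    """Fraction of a node's neighbor pairs that are themselves linked."""
    nbrs = list(adj[node])
    k = len(nbrs)
    if k < 2:
        return 0.0
    links = sum(
        1
        for i in range(k)
        for j in range(i + 1, k)
        if nbrs[j] in adj[nbrs[i]]
    )
    return 2 * links / (k * (k - 1))

# Toy graph: video "a" relates to b, c, d; b and c also relate.
adj = {
    "a": {"b", "c", "d"},
    "b": {"a", "c"},
    "c": {"a", "b"},
    "d": {"a"},
}
print(clustering(adj, "a"))   # 1 of 3 neighbor pairs linked → 0.333...
```

A small-world graph shows clustering far above that of a random graph with the same degrees, while average shortest paths stay short; the paper measures both on the crawled data.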

Journal ArticleDOI
17 Aug 2008
TL;DR: The experiments demonstrated that P4P either improves or maintains the same level of application performance of native P2P applications, while, at the same time, it substantially reduces network provider cost compared with either native or latency-based localized P2P applications.
Abstract: As peer-to-peer (P2P) emerges as a major paradigm for scalable network application design, it also exposes significant new challenges in achieving efficient and fair utilization of Internet network resources. Being largely network-oblivious, many P2P applications may lead to inefficient network resource usage and/or low application performance. In this paper, we propose a simple architecture called P4P to allow for more effective cooperative traffic control between applications and network providers. We conducted extensive simulations and real-life experiments on the Internet to demonstrate the feasibility and effectiveness of P4P. Our experiments demonstrated that P4P either improves or maintains the same level of application performance of native P2P applications, while, at the same time, it substantially reduces network provider cost compared with either native or latency-based localized P2P applications.
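The cooperation P4P enables boils down to giving P2P applications network-layer hints so they stop being network-oblivious. A hedged sketch of the resulting peer-selection behavior, with P4P's real iTracker/appTracker interfaces abstracted into a simple ISP label per peer (all names are illustrative):

```python
# Sketch of provider-aware peer selection, the behavior P4P's
# application/provider cooperation enables: prefer same-ISP peers
# before remote ones, reducing costly cross-provider traffic.
# The ISP labels stand in for P4P's richer network information.

def select_peers(me, candidates, isp_of, want):
    """Pick up to `want` peers, local (same-ISP) candidates first."""
    local = [p for p in candidates if isp_of[p] == isp_of[me]]
    remote = [p for p in candidates if isp_of[p] != isp_of[me]]
    return (local + remote)[:want]

isp_of = {"me": "ISP-A", "p1": "ISP-A", "p2": "ISP-B",
          "p3": "ISP-A", "p4": "ISP-C"}
chosen = select_peers("me", ["p1", "p2", "p3", "p4"], isp_of, want=3)
print(chosen)   # → ['p1', 'p3', 'p2']: both local peers, then one remote
```

A network-oblivious client would instead pick the three peers at random, so on average most of its traffic would cross provider boundaries; this is the cost gap the paper's experiments quantify.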

Patent
01 Feb 2008
TL;DR: In this patent, various methods, systems, and apparatus for displaying content associated with a point-of-interest (POI) in a digital mapping system are disclosed, including detecting a change in the zoom level of an electronic map displayed on a computing device.
Abstract: Various methods, systems and apparatus for displaying content associated with a point-of-interest (“POI”) in a digital mapping system are disclosed. One such method may include detecting a change in the zoom level of an electronic map displayed on a computing device, determining if the new zoom level is at a pre-determined zoom level (e.g. at maximum zoom), identifying a POI on the map, retrieving content associated with the POI (“POI content”) and displaying the POI content. The method may further include detecting a change in the zoom, or pan, of the digital map while POI content is displayed, and removing the POI content in response. One apparatus, according to aspects of the present invention, may include means of detecting a change in the zoom level in a digital map displayed through an application (e.g. a web browser, an application on web-enabled cellular phones, etc., displaying a map generated by a service such as Google Maps®, Yahoo! Maps®, Windows Live Search Maps®, MapQuest®, etc.) on a computing device (e.g. personal computer, workstation, thin client, PDA, cellular phone/smart phone, GPS device, etc.), means of identifying a POI at the pre-determined zoom level, means of obtaining content associated with the POI, and means of displaying the POI content. POI content may be retrieved from a database (e.g. an internet-based database); or, in an alternate embodiment, gathered by crawling websites associated with the POI. In one embodiment, POI content may be displayed as an image (e.g. a PNG file, GIF, Flash® component, etc.) superimposed on the digital map (e.g. as an overlay object on the map image). In alternate embodiments, POI content may replace the digital map and may contain links to other content.
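The zoom-triggered behaviour the patent abstract describes, show POI content at a pre-determined zoom level and remove it on any further zoom or pan change, can be sketched as a small state machine. This is a minimal illustration under assumed details: the zoom threshold, the POI lookup table, and the class/method names are all hypothetical placeholders, not the patented implementation.

```python
# Minimal sketch of zoom-triggered POI-content display/removal.
# Threshold, database contents, and all names are invented for illustration.

MAX_ZOOM = 18  # assumed "pre-determined" zoom level (e.g. maximum zoom)

# Stand-in for an internet-based POI content database.
poi_content_db = {"eiffel_tower": "<img src='eiffel.png'>"}

class MapView:
    def __init__(self):
        self.zoom = 10
        self.displayed_poi_content = None  # None means no overlay shown

    def on_zoom_change(self, new_zoom, visible_poi=None):
        """Handle a zoom-level change on the displayed map."""
        self.zoom = new_zoom
        if new_zoom >= MAX_ZOOM and visible_poi in poi_content_db:
            # At the pre-determined zoom level: retrieve and overlay content.
            self.displayed_poi_content = poi_content_db[visible_poi]
        else:
            # Any other zoom (or pan) change removes the POI content.
            self.displayed_poi_content = None

view = MapView()
view.on_zoom_change(18, visible_poi="eiffel_tower")
print(view.displayed_poi_content is not None)  # overlay shown at max zoom
view.on_zoom_change(15)
print(view.displayed_poi_content is None)      # overlay removed on zoom-out
```

In a real map application the same logic would hang off the map widget's zoom/pan event callbacks rather than explicit method calls, but the state transitions are the same.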

Journal ArticleDOI
TL;DR: It is found that the web site brand is a more important cue than web site quality in influencing consumers' trust and perceived risk, and in turn, consumer purchase intention.
Abstract: Purpose – The purpose of this paper is to investigate whether online environment cues (web site quality and web site brand) affect customer purchase intention towards an online retailer and whether this impact is mediated by customer trust and perceived risk. The study also aimed to assess the degree of reciprocity between consumers' trust and perceived risk in the context of an online shopping environment.Design/methodology/approach – The study proposed a research framework for testing the relationships among the constructs based on the stimulus‐organism‐response framework. In addition, this study developed a non‐recursive model. After the validation of measurement scales, empirical analyses were performed using structural equation modelling.Findings – The findings confirm that web site quality and web site brand affect consumers' trust and perceived risk, and in turn, consumer purchase intention. Notably, this study finds that the web site brand is a more important cue than web site quality in influencing...

Journal ArticleDOI
Midori A. Harris, Jennifer I. Deegan, Amelia Ireland, Jane Lomax, Michael Ashburner1, Susan Tweedie1, Seth Carbon2, Suzanna E. Lewis2, Christopher J. Mungall2, John Day Richter2, Karen Eilbeck, Judith A. Blake, Carol J. Bult, Alexander D. Diehl, Mary E. Dolan, Harold J. Drabkin, Janan T. Eppig, David P. Hill, Ni Li, Martin Ringwald, Rama Balakrishnan3, Gail Binkley3, J. Michael Cherry3, Karen R. Christie3, Maria C. Costanzo3, Qing Dong3, Stacia R. Engel3, Dianna G. Fisk3, Jodi E. Hirschman3, Benjamin C. Hitz3, Eurie L. Hong3, Cynthia J. Krieger3, Stuart R. Miyasato3, Robert S. Nash3, Julie Park3, Marek S. Skrzypek3, Shuai Weng3, Edith D. Wong3, Kathy K. Zhu3, David Botstein4, Kara Dolinski4, Michael S. Livstone4, Rose Oughtred4, Tanya Z. Berardini5, Li Donghui5, Seung Y. Rhee5, Rolf Apweiler6, Daniel Barrell6, Evelyn Camon6, Emily Dimmer6, Rachael P. Huntley, Nicola Mulder, Varsha K. Khodiyar, Ruth C. Lovering, Sue Povey, Rex L. Chisholm, Petra Fey, Pascale Gaudet, Warren A. Kibbe, Ranjana Kishore, Erich M. Schwarz, Paul W. Sternberg, Kimberly Van Auken, Michelle G. Giglio, Linda Hannick, Jennifer R. Wortman, Martin Aslett, Matthew Berriman, Valerie Wood, Howard J. Jacob, Stan Laulederkind, Victoria Petri, Mary Shimoyama, Jennifer L. Smith, Simon N. Twigger, Pankaj Jaiswal, Trent E. Seigfried, Doug Howe, Monte Westerfield, Candace Collmer, Trudy Torto Alalibo, Erika Feltrin, Giorgio Valle, Susan Bromberg, Shane C. Burgess, Fiona M. McCarthy 
TL;DR: The GO Consortium has launched a focused effort to provide comprehensive and detailed annotation of orthologous genes across a number of ‘reference’ genomes, including human and several key model organisms.
Abstract: The Gene Ontology (GO) project (http://www.geneontology.org) provides a set of structured, controlled vocabularies for community use in annotating genes, gene products and sequences (also see http://www.sequenceontology.org/). The ontologies have been extended and refined for several biological areas, and improvements to the structure of the ontologies have been implemented. To improve the quantity and quality of gene product annotations available from its public repository, the GO Consortium has launched a focused effort to provide comprehensive and detailed annotation of orthologous genes across a number of reference genomes, including human and several key model organisms. Software developments include two releases of the ontology-editing tool OBO-Edit, and improvements to the AmiGO browser interface.