scispace - formally typeset
Author

Ivan Herman

Other affiliations: Hungarian Academy of Sciences
Bio: Ivan Herman is an academic researcher at Centrum Wiskunde & Informatica. The author has contributed to research in the topics of information visualization and the Semantic Web. The author has an h-index of 22, has co-authored 101 publications, and has received 3823 citations. Previous affiliations of Ivan Herman include the Hungarian Academy of Sciences.


Papers
Journal ArticleDOI
TL;DR: This is a survey on graph visualization and navigation techniques, as used in information visualization, which approaches the results of traditional graph drawing from a different perspective.
Abstract: This is a survey on graph visualization and navigation techniques, as used in information visualization. Graphs appear in numerous applications such as Web browsing, state-transition diagrams, and data structures. The ability to visualize and to navigate in these potentially large, abstract graphs is often a crucial part of an application. Information visualization has specific requirements, which means that this survey approaches the results of traditional graph drawing from a different perspective.

1,648 citations

Journal ArticleDOI
TL;DR: A scenario that shows the value of the information environment the Semantic Web can support for aiding neuroscience researchers is presented, and several projects by members of the HCLSIG are reported, illustrating the range of Semantic Web technologies that have applications in areas of biomedicine.
Abstract: A fundamental goal of the U.S. National Institute of Health (NIH) "Roadmap" is to strengthen Translational Research, defined as the movement of discoveries in basic research to application at the clinical level. A significant barrier to translational research is the lack of uniformly structured data across related biomedical domains. The Semantic Web is an extension of the current Web that enables navigation and meaningful use of digital resources by automatic processes. It is based on common formats that support aggregation and integration of data drawn from diverse sources. A variety of technologies have been built on this foundation that, together, support identifying, representing, and reasoning across a wide range of biomedical data. The Semantic Web Health Care and Life Sciences Interest Group (HCLSIG), set up within the framework of the World Wide Web Consortium, was launched to explore the application of these technologies in a variety of areas. Subgroups focus on making biomedical data available in RDF, working with biomedical ontologies, prototyping clinical decision support systems, working on drug safety and efficacy communication, and supporting disease researchers navigating and annotating the large amount of potentially relevant literature.
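The abstract's core claim is that a single triple-based format lets data from diverse biomedical sources be aggregated and queried uniformly. A minimal sketch of that idea in plain Python (no RDF library; all identifiers below are hypothetical, chosen only for illustration):

```python
# Sketch of the Semantic Web idea described above: data from diverse
# sources expressed as subject-predicate-object triples in one common
# format, so they can be aggregated and queried uniformly.
# All names ("ex:BRCA1" etc.) are hypothetical, for illustration only.

gene_db = [
    ("ex:BRCA1", "rdf:type", "ex:Gene"),
    ("ex:BRCA1", "ex:located_on", "ex:chromosome17"),
]
clinical_db = [
    ("ex:BRCA1", "ex:associated_with", "ex:breast_cancer"),
]

# Once both sources share the triple format, aggregation is set union.
graph = set(gene_db) | set(clinical_db)

def query(graph, s=None, p=None, o=None):
    """Return triples matching a pattern; None acts as a wildcard."""
    return [t for t in graph
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)]

# Everything known about BRCA1, regardless of which source said it:
for triple in sorted(query(graph, s="ex:BRCA1")):
    print(triple)
```

Real systems use RDF serializations and SPARQL for the pattern matching, but the aggregation step works the same way: shared identifiers plus a shared triple shape.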

337 citations

Book ChapterDOI
23 Sep 2001
TL;DR: GraphML (Graph Markup Language), an XML format for graph structures, is presented, as an initial step towards this goal, which allows for extension modules for additional data, such as graph drawing information or data specific to a particular application.
Abstract: Following a workshop on graph data formats held with the 8th Symposium on Graph Drawing (GD 2000), a task group was formed to propose a format for graphs and graph drawings that meets current and projected requirements. On behalf of this task group, we here present GraphML (Graph Markup Language), an XML format for graph structures, as an initial step towards this goal. Its main characteristic is a unique mechanism that allows one to define extension modules for additional data, such as graph drawing information or data specific to a particular application. These modules can freely be combined or stripped without affecting the graph structure, so that information can be added (or omitted) in a well-defined way.
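To make the extension mechanism concrete, here is a minimal GraphML document parsed with Python's standard library. The `<key>`/`<data>` pair carries additional data (a node label) alongside the pure graph structure, and can be stripped without affecting the nodes and edges, which is the separation the abstract describes:

```python
# A minimal GraphML document, parsed with Python's standard library.
import xml.etree.ElementTree as ET

GRAPHML = """<?xml version="1.0" encoding="UTF-8"?>
<graphml xmlns="http://graphml.graphdrawing.org/xmlns">
  <key id="d0" for="node" attr.name="label" attr.type="string"/>
  <graph id="G" edgedefault="undirected">
    <node id="n0"><data key="d0">start</data></node>
    <node id="n1"><data key="d0">end</data></node>
    <edge source="n0" target="n1"/>
  </graph>
</graphml>"""

NS = {"g": "http://graphml.graphdrawing.org/xmlns"}
root = ET.fromstring(GRAPHML)

# The graph structure survives even if every <data> element is ignored.
nodes = [n.attrib["id"] for n in root.findall(".//g:node", NS)]
edges = [(e.attrib["source"], e.attrib["target"])
         for e in root.findall(".//g:edge", NS)]
print(nodes)   # ['n0', 'n1']
print(edges)   # [('n0', 'n1')]
```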

225 citations

Journal ArticleDOI
TL;DR: The Semantic Web makes it possible for any computer to access information by creating a common language between personal computers and the Internet at large.
Abstract: The article describes the Semantic Web, how it functions, how it grows, and what makes it different from the World Wide Web. The Semantic Web makes it possible for any computer to access information by creating a common language between personal computers and the Internet at large. This system would enhance the World Wide Web by creating one comprehensive format on which all programs would be based. INSETS: Key Concepts; Combining Concepts; Friend of a Friend; Which Genes Cause Heart Disease?

216 citations

Journal ArticleDOI
TL;DR: The concern in this paper is to show the expressiveness of MANIFOLD, the feasibility of its implementation and its usefulness in practice, and a series of small manifold programs which describe the skeletons of some adaptive recursive algorithms that are of particular interest in computer graphics.
Abstract: Management of the communications among a set of concurrent processes arises in many applications and is a central concern in parallel computing. In this paper we introduce MANIFOLD: a co-ordination language whose sole purpose is to describe and manage complex interconnections among independent, concurrent processes. In the underlying paradigm of this language the primary concern is not with what functionality the individual processes in a parallel system provide. Instead, the emphasis is on how these processes are interconnected and how their interaction patterns change during the execution life of the system. This paper also includes an overview of our implementation of MANIFOLD. As an example of the application of MANIFOLD, we present a series of small MANIFOLD programs which describe the skeletons of some adaptive recursive algorithms that are of particular interest in computer graphics. Our concern in this paper is to show the expressiveness of MANIFOLD, the feasibility of its implementation and its usefulness in practice. Issues regarding performance and optimization are beyond the scope of this paper.
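The separation MANIFOLD makes (processes that only know their own inputs and outputs, plus a coordinator that alone knows the interconnection topology) can be sketched in Python with threads and queues. This is not MANIFOLD syntax, only a rough analogue of the paradigm:

```python
# Rough analogue of the coordination idea behind MANIFOLD: worker
# threads know nothing about each other, only their input/output
# "ports" (queues); a separate coordinator wires the ports together.
import threading
import queue

def doubler(inp, out):
    """A process: read values from its input port, write results."""
    while True:
        x = inp.get()
        if x is None:          # sentinel: shut down and pass it along
            out.put(None)
            return
        out.put(x * 2)

def collector(inp, results):
    """A sink process: drain its input port into a result list."""
    while True:
        x = inp.get()
        if x is None:
            return
        results.append(x)

# The "coordinator": it alone knows how the processes are connected.
a, b = queue.Queue(), queue.Queue()
results = []
threads = [threading.Thread(target=doubler, args=(a, b)),
           threading.Thread(target=collector, args=(b, results))]
for t in threads:
    t.start()
for x in [1, 2, 3]:
    a.put(x)
a.put(None)                    # shutdown propagates through the pipeline
for t in threads:
    t.join()
print(results)                 # [2, 4, 6]
```

Changing the interaction pattern (say, inserting another stage between `a` and `b`) touches only the coordinator, not the workers, which is the point the abstract emphasizes.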

160 citations


Cited by
01 Jan 2006
TL;DR: Platform-independent and open source igraph aims to satisfy all the requirements of a graph package while possibly remaining easy to use in interactive mode as well.
Abstract: There is no other package around that satisfies all the following requirements:
- Ability to handle large graphs efficiently
- Embeddable into higher level environments (like R [6] or Python [7])
- Ability to be used for quick prototyping of new algorithms (impossible with "click & play" interfaces)
- Platform-independent and open source
igraph aims to satisfy all these requirements while possibly remaining easy to use in interactive mode as well.

8,850 citations

Journal ArticleDOI
TL;DR: The FAIR Data Principles as mentioned in this paper are a set of data reuse principles that focus on enhancing the ability of machines to automatically find and use the data, in addition to supporting its reuse by individuals.
Abstract: There is an urgent need to improve the infrastructure supporting the reuse of scholarly data. A diverse set of stakeholders—representing academia, industry, funding agencies, and scholarly publishers—have come together to design and jointly endorse a concise and measurable set of principles that we refer to as the FAIR Data Principles. The intent is that these may act as a guideline for those wishing to enhance the reusability of their data holdings. Distinct from peer initiatives that focus on the human scholar, the FAIR Principles put specific emphasis on enhancing the ability of machines to automatically find and use the data, in addition to supporting its reuse by individuals. This Comment is the first formal publication of the FAIR Principles, and includes the rationale behind them and some exemplar implementations in the community.

7,602 citations

01 Jan 1978
TL;DR: This ebook is the first authorized digital version of Kernighan and Ritchie's 1988 classic, The C Programming Language (2nd Ed.), and is a "must-have" reference for every serious programmer's digital library.
Abstract: This ebook is the first authorized digital version of Kernighan and Ritchie's 1988 classic, The C Programming Language (2nd Ed.). One of the best-selling programming books published in the last fifty years, "K&R" has been called everything from the "bible" to "a landmark in computer science," and it has influenced generations of programmers. Available now for all leading ebook platforms, this concise and beautifully written text is a "must-have" reference for every serious programmer's digital library. As modestly described by the authors in the Preface to the First Edition, this "is not an introductory programming manual; it assumes some familiarity with basic programming concepts like variables, assignment statements, loops, and functions. Nonetheless, a novice programmer should be able to read along and pick up the language, although access to a more knowledgeable colleague will help."

2,120 citations

Journal ArticleDOI
TL;DR: This is a survey on graph visualization and navigation techniques, as used in information visualization, which approaches the results of traditional graph drawing from a different perspective.
Abstract: This is a survey on graph visualization and navigation techniques, as used in information visualization. Graphs appear in numerous applications such as Web browsing, state-transition diagrams, and data structures. The ability to visualize and to navigate in these potentially large, abstract graphs is often a crucial part of an application. Information visualization has specific requirements, which means that this survey approaches the results of traditional graph drawing from a different perspective.

1,648 citations

Book
01 Dec 2006
TL;DR: Providing an in-depth examination of core text mining and link detection algorithms and operations, this text examines advanced pre-processing techniques, knowledge representation considerations, and visualization approaches.
Abstract: Contents:
1. Introduction to text mining
2. Core text mining operations
3. Text mining preprocessing techniques
4. Categorization
5. Clustering
6. Information extraction
7. Probabilistic models for information extraction
8. Preprocessing applications using probabilistic and hybrid approaches
9. Presentation-layer considerations for browsing and query refinement
10. Visualization approaches
11. Link analysis
12. Text mining applications
Appendix. Bibliography.

1,628 citations