Author

Seymour E. Goodman

Bio: Seymour E. Goodman is an academic researcher from Georgia Institute of Technology. The author has contributed to research in topics: The Internet & Information technology. The author has an h-index of 30 and has co-authored 112 publications receiving 3,019 citations. Previous affiliations of Seymour E. Goodman include Princeton University & Stanford University.


Papers
Journal ArticleDOI
TL;DR: This paper presents a comprehensive framework for describing the diffusion of the Internet in a country using six dimensions, and addresses how to apply the framework in practice, highlighting Internet diffusion determinants.
Abstract: This paper presents a comprehensive framework for describing the diffusion of the Internet in a country. It incorporates insights gained from in-depth studies of about 25 countries undertaken since 1997. The framework characterizes diffusion using six dimensions, defining them in detail, and examines how the six dimensions relate to underlying bodies of theory from the national systems of innovation and diffusion of innovations approaches. It addresses how to apply the framework in practice, highlighting Internet diffusion determinants. This framework is useful for business stakeholders wanting to make use of and invest in the Internet, for policy makers debating how to positively (or negatively) influence its use and development, and for researchers studying the large-scale diffusion of complex, interrelated technologies.
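The abstract does not name the six dimensions. As a purely illustrative sketch, a country's profile under such a framework could be held in a small data structure; the dimension names below are assumptions borrowed from the Mosaic Group's Global Diffusion of the Internet studies this line of work is associated with, and the 0-4 ordinal scale is likewise an assumption, not something stated in the abstract.

```python
# Hypothetical sketch of a six-dimension Internet diffusion profile.
# Dimension names and the 0-4 ordinal scale are assumptions drawn from
# the Mosaic Group's GDI studies, not from this abstract itself.
from dataclasses import dataclass, asdict

@dataclass
class InternetDiffusionProfile:
    country: str
    pervasiveness: int                  # 0 (non-existent) .. 4 (pervasive)
    geographic_dispersion: int          # 0 .. 4
    sectoral_absorption: int            # 0 .. 4
    connectivity_infrastructure: int    # 0 .. 4
    organizational_infrastructure: int  # 0 .. 4
    sophistication_of_use: int          # 0 .. 4

    def __post_init__(self):
        # Reject ratings outside the assumed ordinal scale.
        for name, value in asdict(self).items():
            if name != "country" and not 0 <= value <= 4:
                raise ValueError(f"{name} must be on the 0-4 scale")

# Example: a country in the early stages of diffusion.
print(InternetDiffusionProfile(
    country="Example",
    pervasiveness=1,
    geographic_dispersion=1,
    sectoral_absorption=2,
    connectivity_infrastructure=1,
    organizational_infrastructure=2,
    sophistication_of_use=1,
))
```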

232 citations

Journal ArticleDOI
TL;DR: It is argued that deficiencies in the use of IT are the least of the problems of a continent plagued by a history of exploitation, postcolonial political difficulties, bloody civil conflicts, and extensive health, educational, demographic and economic problems.
Abstract: Africa seems to be the "lost continent" of the information technologies (IT). The second largest continent is the least computerized, and its more than twoscore countries have an average telephone density that is an order of magnitude smaller than that of the European Community. A recent graphic on world computer densities used the map of Africa simply as a place to display the overflow data for Europe [7]. It may be argued that deficiencies in the use of IT are the least of the problems of a continent plagued by a history of exploitation, postcolonial political difficulties, bloody civil conflicts, and extensive health, educational, demographic and economic problems. Nevertheless, attention should be given to the fact that more than 500 million people have largely been left out of the "global information society." "International Perspectives" brings [...]

147 citations

Journal ArticleDOI
TL;DR: Over 70 countries have full TCP/IP Internet connectivity, and about 150 have at least e-mail services through IP or via more limited forms of connectivity (e.g., UUCP or Fidonet).
Abstract: The Internet may be considered a market phenomenon, with sustained double-digit growth and no apparent end in sight to the upward spiral. Recent Internet numbers are stunning: [...] an impressive 69% increase [7]. Over 70 countries have full TCP/IP Internet connectivity, and about 150 have at least e-mail services through IP or via more limited forms of connectivity (e.g., UUCP or Fidonet). Monthly traffic on the U.S. NSF backbone alone is about 10 terabytes [1]. But behind these statistics of overwhelming success there are some [...] this exciting technology. Surprisingly little is known about this diffusion beyond the basic macro-statistics. We do know that about one-fifth of the world's population, estimated only on the basis of the countries they live in, has access to far more capability than the rest. There is a direct correlation between measures of national development and the quality of network [...] Connectivity to many LDCs is often only a Fidonet link to a few PCs with fewer than a dozen regular users. Even within the most advanced, well-connected countries the majority of the populations have little or no participation. For example, despite the much proclaimed connectivity of U.S. universities, where access is almost a free good, it appears that only a small fraction of professors are users at most schools, and there may be significant differences in levels of use across academic disciplines. In spite of the magnitude of the phenomena, relatively little has been [...]

146 citations

Journal ArticleDOI
TL;DR: An introduction to the Design and Analysis of Algorithms and its applications to computer science.
Abstract: Introduction to the Design and Analysis of Algorithms. By S. E. Goodman and S. T. Hedetniemi. New York, McGraw‐Hill, 1977. xi, 371 p. 23·5 cm. £13·45.

124 citations


Cited by
01 May 1993
TL;DR: Comparing the results to the fastest reported vectorized Cray Y-MP and C90 algorithm shows that the current generation of parallel machines is competitive with conventional vector supercomputers even for small problems.
Abstract: Three parallel algorithms for classical molecular dynamics are presented. The first assigns each processor a fixed subset of atoms; the second assigns each a fixed subset of inter-atomic forces to compute; the third assigns each a fixed spatial region. The algorithms are suitable for molecular dynamics models which can be difficult to parallelize efficiently: those with short-range forces where the neighbors of each atom change rapidly. They can be implemented on any distributed-memory parallel machine which allows for message-passing of data between independently executing processors. The algorithms are tested on a standard Lennard-Jones benchmark problem for system sizes ranging from 500 to 100,000,000 atoms on several parallel supercomputers: the nCUBE 2, Intel iPSC/860 and Paragon, and Cray T3D. Comparing the results to the fastest reported vectorized Cray Y-MP and C90 algorithm shows that the current generation of parallel machines is competitive with conventional vector supercomputers even for small problems. For large problems, the spatial algorithm achieves parallel efficiencies of 90% and a 1840-node Intel Paragon performs up to 165 times faster than a single Cray C90 processor. Trade-offs between the three algorithms and guidelines for adapting them to more complex molecular dynamics simulations are also discussed.
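To make the first (atom-decomposition) strategy concrete, here is a minimal, hypothetical sketch of one Lennard-Jones force-computation step using mpi4py for the message passing. It is not the paper's code; it omits the neighbor lists, periodic boundaries, and time integration a real simulation would need.

```python
# Toy atom-decomposition force step (Lennard-Jones): each processor owns a
# fixed subset of atoms and computes the forces on them, which requires every
# processor to see all atom positions each step.
# Run with e.g.: mpirun -np 4 python lj_atom_decomp.py
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

N = 500                                   # smallest benchmark size in the paper
rng = np.random.default_rng(0)
pos = rng.uniform(0.0, 10.0, (N, 3)) if rank == 0 else None
pos = comm.bcast(pos, root=0)             # every rank needs all positions

lo, hi = rank * N // size, (rank + 1) * N // size   # this rank's atom subset

def lj_forces(my_slice, all_pos, eps=1.0, sigma=1.0, cutoff=2.5):
    """Forces on atoms in my_slice from all other atoms (no periodic images)."""
    f = np.zeros((my_slice.stop - my_slice.start, 3))
    for i in range(my_slice.start, my_slice.stop):
        d = all_pos[i] - all_pos                    # displacement to every atom
        r2 = np.einsum("ij,ij->i", d, d)
        r2[i] = np.inf                              # skip self-interaction
        mask = r2 < cutoff ** 2
        s6 = (sigma ** 2 / r2[mask]) ** 3           # (sigma/r)^6
        # Derivative of 4*eps*(s^12 - s^6), as a coefficient on d.
        coef = 24.0 * eps * (2.0 * s6 ** 2 - s6) / r2[mask]
        f[i - my_slice.start] = (coef[:, None] * d[mask]).sum(axis=0)
    return f

local_f = lj_forces(slice(lo, hi), pos)
forces = comm.gather(local_f, root=0)     # rank 0 assembles the full force array
if rank == 0:
    print("net force (should be ~0):", np.vstack(forces).sum(axis=0))
```

The spatial-decomposition algorithm the abstract favors for large problems would instead partition the simulation box among processors, so each rank only exchanges positions of atoms near its subdomain boundaries rather than broadcasting everything.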

29,323 citations

Journal ArticleDOI
TL;DR: An overview of pattern clustering methods from a statistical pattern recognition perspective is presented, with a goal of providing useful advice and references to fundamental concepts accessible to the broad community of clustering practitioners.
Abstract: Clustering is the unsupervised classification of patterns (observations, data items, or feature vectors) into groups (clusters). The clustering problem has been addressed in many contexts and by researchers in many disciplines; this reflects its broad appeal and usefulness as one of the steps in exploratory data analysis. However, clustering is a difficult problem combinatorially, and differences in assumptions and contexts in different communities has made the transfer of useful generic concepts and methodologies slow to occur. This paper presents an overview of pattern clustering methods from a statistical pattern recognition perspective, with a goal of providing useful advice and references to fundamental concepts accessible to the broad community of clustering practitioners. We present a taxonomy of clustering techniques, and identify cross-cutting themes and recent advances. We also describe some important applications of clustering algorithms such as image segmentation, object recognition, and information retrieval.
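As a concrete instance of the partitional techniques such a survey covers, the following is a minimal k-means sketch in NumPy; it is illustrative only and not drawn from the paper.

```python
# Minimal k-means, a classic partitional clustering algorithm of the kind
# covered by surveys like the one above. Illustrative sketch only.
import numpy as np

def kmeans(X, k, iters=100, seed=0):
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]  # init from data points
    for _ in range(iters):
        # Assignment step: each pattern goes to its nearest center.
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Update step: each center moves to the mean of its cluster.
        new_centers = np.array([
            X[labels == j].mean(axis=0) if np.any(labels == j) else centers[j]
            for j in range(k)
        ])
        if np.allclose(new_centers, centers):          # converged
            break
        centers = new_centers
    return labels, centers

# Smoke test: two well-separated Gaussian blobs.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.5, (50, 2)), rng.normal(5, 0.5, (50, 2))])
labels, centers = kmeans(X, k=2)
print(centers.round(2))   # one center near (0, 0), one near (5, 5)
```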

14,054 citations

Journal ArticleDOI
TL;DR: In this article, a Bayesian approach for learning Bayesian networks from a combination of prior knowledge and statistical data is presented, which is derived from a set of assumptions made previously as well as the assumption of likelihood equivalence, which says that data should not help to discriminate network structures that represent the same assertions of conditional independence.
Abstract: We describe a Bayesian approach for learning Bayesian networks from a combination of prior knowledge and statistical data. First and foremost, we develop a methodology for assessing informative priors needed for learning. Our approach is derived from a set of assumptions made previously as well as the assumption of likelihood equivalence, which says that data should not help to discriminate network structures that represent the same assertions of conditional independence. We show that likelihood equivalence when combined with previously made assumptions implies that the user's priors for network parameters can be encoded in a single Bayesian network for the next case to be seen (a prior network) and a single measure of confidence for that network. Second, using these priors, we show how to compute the relative posterior probabilities of network structures given data. Third, we describe search methods for identifying network structures with high posterior probabilities. We describe polynomial algorithms for finding the highest-scoring network structures in the special case where every node has at most k = 1 parent. For the general case (k > 1), which is NP-hard, we review heuristic search algorithms including local search, iterative local search, and simulated annealing. Finally, we describe a methodology for evaluating Bayesian-network learning algorithms, and apply this approach to a comparison of various approaches.
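To illustrate the flavor of the heuristic search the abstract describes for the general case, here is a hypothetical sketch of greedy hill-climbing over network structures on binary data. It substitutes a BIC score for the paper's Bayesian scoring, so it shows the search idea rather than the paper's actual method.

```python
# Hypothetical greedy structure search for a Bayesian network over binary
# variables, scored with BIC rather than the paper's Bayesian score.
import itertools
import numpy as np

def local_bic(data, child, parents):
    """BIC contribution of one node given its parent set (binary variables)."""
    n = len(data)
    parents = sorted(parents)                 # fix column order for configs
    cols = data[:, parents] if parents else np.zeros((n, 0), dtype=int)
    score = 0.0
    for config in itertools.product((0, 1), repeat=len(parents)):
        rows = np.all(cols == config, axis=1)
        n_ij = rows.sum()
        if n_ij == 0:
            continue
        for value in (0, 1):                  # log-likelihood term
            n_ijk = np.sum(data[rows, child] == value)
            if n_ijk > 0:
                score += n_ijk * np.log(n_ijk / n_ij)
    return score - 0.5 * np.log(n) * (2 ** len(parents))  # complexity penalty

def creates_cycle(parents, u, v):
    """Would adding edge u -> v create a cycle (is u reachable from v)?"""
    stack, seen = [v], set()
    while stack:
        node = stack.pop()
        if node == u:
            return True
        if node in seen:
            continue
        seen.add(node)
        stack.extend(w for w, ps in parents.items() if node in ps)
    return False

def hill_climb(data, max_parents=2):
    n_vars = data.shape[1]
    parents = {i: set() for i in range(n_vars)}
    scores = {i: local_bic(data, i, parents[i]) for i in range(n_vars)}
    while True:                               # take best improving move, if any
        best_gain, best_move = 1e-9, None
        for u, v in itertools.permutations(range(n_vars), 2):
            if u in parents[v]:                                  # try deletion
                cand = parents[v] - {u}
            elif len(parents[v]) < max_parents and not creates_cycle(parents, u, v):
                cand = parents[v] | {u}                          # try addition
            else:
                continue
            gain = local_bic(data, v, cand) - scores[v]
            if gain > best_gain:
                best_gain, best_move = gain, (v, cand)
        if best_move is None:
            return parents
        v, cand = best_move
        parents[v], scores[v] = cand, local_bic(data, v, cand)

# Smoke test: X1 copies X0 90% of the time, so one edge should be recovered.
rng = np.random.default_rng(0)
x0 = rng.integers(0, 2, 1000)
x1 = np.where(rng.random(1000) < 0.9, x0, 1 - x0)
print(hill_climb(np.column_stack([x0, x1])))
```

Note that the smoke test may return either X0 -> X1 or X1 -> X0: the two structures encode the same conditional independencies, which is exactly the likelihood-equivalence point the abstract makes.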

4,124 citations

Journal ArticleDOI
TL;DR: Detailed principles for making design choices during the process of selecting appropriate experts for the Delphi study are given and suggestions for theoretical applications are made.

3,510 citations