Bio: B. Marx is an academic researcher. The author has an h-index of 1, having co-authored 1 publication receiving 1778 citations.
01 Jan 1998
TL;DR: Information Architecture for the World Wide Web is a guide to how to design Web sites and intranets that support growth, management, and ease of use for Webmasters, designers, and anyone else involved in building a Web site.
Abstract: From the Publisher: Some Web sites "work" and some don't. Good Web site consultants know that you can't just jump in and start writing HTML, the same way you can't build a house by just pouring a foundation and putting up some walls. You need to know who will be using the site, and what they'll be using it for. You need some idea of what you'd like to draw their attention to during their visit. Overall, you need a strong, cohesive vision for the site that makes it both distinctive and usable. Information Architecture for the World Wide Web is about applying the principles of architecture and library science to Web site design. Each Web site is like a public building, available for tourists and regulars alike to breeze through at their leisure. The job of the architect is to set up the framework for the site to make it comfortable and inviting for people to visit, relax in, and perhaps even return to someday. Most books on Web development concentrate either on the aesthetics or the mechanics of the site. This book is about the framework that holds the two together. With this book, you learn how to design Web sites and intranets that support growth, management, and ease of use. Special attention is given to: the process behind architecting a large, complex site; Web site hierarchy design and organization; and techniques for making your site easier to search. Information Architecture for the World Wide Web is for Webmasters, designers, and anyone else involved in building a Web site. It's for novice Web designers who, from the start, want to avoid the traps that result in poorly designed sites. It's for experienced Web designers who have already created sites but realize that something "is missing" from their sites and want to improve them. It's for programmers and administrators who are comfortable with HTML, CGI, and Java but want to understand how to organize their Web pages into a cohesive site. The authors are two of the principals of Argus Associates, a Web consulting firm. At Argus, they have created information architectures for Web sites and intranets of some of the largest companies in the United States, including Chrysler Corporation, Barron's, and Dow Chemical.
22 Dec 2015
TL;DR: This book discusses Computational Statistics, a branch of Statistics, and its applications in medicine, education, and research.
Abstract: Contents:
Prefaces
Introduction: What Is Computational Statistics?; An Overview of the Book
Probability Concepts: Introduction; Probability; Conditional Probability and Independence; Expectation; Common Distributions
Sampling Concepts: Introduction; Sampling Terminology and Concepts; Sampling Distributions; Parameter Estimation; Empirical Distribution Function
Generating Random Variables: Introduction; General Techniques for Generating Random Variables; Generating Continuous Random Variables; Generating Discrete Random Variables
Exploratory Data Analysis: Introduction; Exploring Univariate Data; Exploring Bivariate and Trivariate Data; Exploring Multidimensional Data
Finding Structure: Introduction; Projecting Data; Principal Component Analysis; Projection Pursuit EDA; Independent Component Analysis; Grand Tour; Nonlinear Dimensionality Reduction
Monte Carlo Methods for Inferential Statistics: Introduction; Classical Inferential Statistics; Monte Carlo Methods for Inferential Statistics; Bootstrap Methods
Data Partitioning: Introduction; Cross-Validation; Jackknife; Better Bootstrap Confidence Intervals; Jackknife-after-Bootstrap
Probability Density Estimation: Introduction; Histograms; Kernel Density Estimation; Finite Mixtures; Generating Random Variables
Supervised Learning: Introduction; Bayes' Decision Theory; Evaluating the Classifier; Classification Trees; Combining Classifiers
Unsupervised Learning: Introduction; Measures of Distance; Hierarchical Clustering; K-Means Clustering; Model-Based Clustering; Assessing Cluster Results
Parametric Models: Introduction; Spline Regression Models; Logistic Regression; Generalized Linear Models
Nonparametric Models: Introduction; Some Smoothing Methods; Kernel Methods; Smoothing Splines; Nonparametric Regression - Other Details; Regression Trees; Additive Models
Markov Chain Monte Carlo Methods: Introduction; Background; Metropolis-Hastings Algorithms; The Gibbs Sampler; Convergence Monitoring
Spatial Statistics: Introduction; Visualizing Spatial Point Processes; Exploring First-Order and Second-Order Properties; Modeling Spatial Point Processes; Simulating Spatial Point Processes
Appendix A: Introduction to MATLAB (What Is MATLAB?; Getting Help in MATLAB; File and Workspace Management; Punctuation in MATLAB; Arithmetic Operators; Data Constructs in MATLAB; Script Files and Functions; Control Flow; Simple Plotting; Contact Information)
Appendix B: Projection Pursuit Indexes (Indexes; MATLAB Source Code)
Appendix C: MATLAB Statistics Toolbox
Appendix D: Computational Statistics Toolbox
Appendix E: Exploratory Data Analysis Toolboxes (Introduction; EDA Toolbox; EDA GUI Toolbox)
Appendix F: Data Sets
Appendix G: Notation
References
Index
MATLAB code, further reading, and exercises appear at the end of each chapter.
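Among the topics listed above, the Monte Carlo chapter covers bootstrap methods for inferential statistics. As an illustration only (in Python rather than the book's MATLAB, and not code from the book), a minimal percentile-bootstrap confidence interval for a sample mean might look like:

```python
import random

def bootstrap_mean_ci(data, n_boot=2000, alpha=0.05, seed=0):
    """Percentile bootstrap confidence interval for the mean.

    Resamples the data with replacement n_boot times and takes the
    alpha/2 and 1 - alpha/2 quantiles of the resampled means.
    """
    rng = random.Random(seed)
    n = len(data)
    means = []
    for _ in range(n_boot):
        resample = [data[rng.randrange(n)] for _ in range(n)]
        means.append(sum(resample) / n)
    means.sort()
    lo = means[int((alpha / 2) * n_boot)]
    hi = means[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi

data = [4.1, 5.2, 3.8, 6.0, 4.7, 5.5, 4.9, 5.1]
lo, hi = bootstrap_mean_ci(data)
```

The book discusses refinements (better bootstrap confidence intervals, jackknife-after-bootstrap) that improve on this simple percentile approach.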
TL;DR: The ThemeRiver visualization depicts thematic variations over time within a large collection of documents and uses a river metaphor to convey several key notions, allowing a user to discern patterns that suggest relationships or trends.
Abstract: The ThemeRiver visualization depicts thematic variations over time within a large collection of documents. The thematic changes are shown in the context of a time-line and corresponding external events. The focus on temporal thematic change within a context framework allows a user to discern patterns that suggest relationships or trends. For example, the sudden change of thematic strength following an external event may indicate a causal relationship. Such patterns are not readily accessible in other visualizations of the data. We use a river metaphor to convey several key notions. The document collection's time-line, selected thematic content and thematic strength are indicated by the river's directed flow, composition and changing width, respectively. The directed flow from left to right is interpreted as movement through time and the horizontal distance between two points on the river defines a time interval. At any point in time, the vertical distance, or width, of the river indicates the collective strength of the selected themes. Colored "currents" flowing within the river represent individual themes. A current's vertical width narrows or broadens to indicate decreases or increases in the strength of the individual theme.
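The mapping the abstract describes (time on the horizontal axis, collective theme strength as the river's width, individual themes as stacked "currents") amounts to a stacked-band layout. The following is an illustrative sketch, not the authors' implementation; `theme_river_bands` is a hypothetical helper that centres the river on y = 0:

```python
def theme_river_bands(strengths):
    """strengths[k][t] = strength of theme k at time step t.

    Returns one (bottom, top) pair of boundary curves per theme,
    stacked so that the whole river is centred on y = 0 and the
    river's total width at time t equals the sum of all strengths.
    """
    totals = [sum(col) for col in zip(*strengths)]  # river width at each t
    baseline = [-w / 2.0 for w in totals]           # bottom edge of the river
    bands = []
    for theme in strengths:
        top = [b + s for b, s in zip(baseline, theme)]
        bands.append((baseline, top))
        baseline = top                              # next current sits on top
    return bands

# Two themes over three time steps; total river width stays 4 throughout,
# while the first current broadens and the second narrows.
bands = theme_river_bands([[1, 2, 3], [3, 2, 1]])
```

Plotting each band as a filled region between its bottom and top curves reproduces the basic river shape; the paper's smooth, organic contours come from interpolating these boundaries rather than connecting them linearly.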
01 May 1995
TL;DR: This paper argues for making use of text structure when retrieving from full text documents, and presents a visualization paradigm, called TileBars, that demonstrates the usefulness of explicit term distribution information in Boolean-type queries.
Abstract: The field of information retrieval has traditionally focused on textbases consisting of titles and abstracts. As a consequence, many underlying assumptions must be altered for retrieval from full-length text collections. This paper argues for making use of text structure when retrieving from full text documents, and presents a visualization paradigm, called TileBars, that demonstrates the usefulness of explicit term distribution information in Boolean-type queries. TileBars simultaneously and compactly indicate relative document length, query term frequency, and query term distribution. The patterns in a column of TileBars can be quickly scanned and deciphered, aiding users in making judgments about the potential relevance of the retrieved documents.
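The encoding the abstract describes (one row per query term, one column per adjacent text segment, cell darkness proportional to in-segment term frequency) can be approximated in a few lines. This is a rough sketch, not the paper's implementation: segmentation into tiles is assumed to be given, and `render` substitutes ASCII shades for the greyscale squares of the real display.

```python
def tile_bar(doc_tiles, query_terms):
    """doc_tiles: list of tile texts (adjacent segments of one document).
    Returns one row per query term: the term's frequency in each tile."""
    grid = []
    for term in query_terms:
        row = [tile.lower().split().count(term.lower()) for tile in doc_tiles]
        grid.append(row)
    return grid

def render(grid, levels=" .:#"):
    """Crude ASCII rendering: darker characters mean higher frequency."""
    return ["".join(levels[min(f, len(levels) - 1)] for f in row)
            for row in grid]

tiles = ["osteoporosis risk in women",
         "treatment of osteoporosis",
         "unrelated text"]
grid = tile_bar(tiles, ["osteoporosis", "treatment"])
```

Scanning the rows shows at a glance where in the document each query term is concentrated and where the terms overlap, which is the relevance judgment TileBars is designed to support.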
TL;DR: Describing individual patients by the three components of the Glasgow Coma Scale (reserving the derived total for characterising groups), together with continuing education of health professionals, standardisation across different settings, and consensus on methods to address confounders, will maintain the scale's role in clinical practice and research in the future.
Abstract: Since 1974, the Glasgow Coma Scale has provided a practical method for bedside assessment of impairment of conscious level, the clinical hallmark of acute brain injury. The scale was designed to be easy to use in clinical practice in general and specialist units and to replace previous ill-defined and inconsistent methods. 40 years later, the Glasgow Coma Scale has become an integral part of clinical practice and research worldwide. Findings using the scale have shown strong associations with those obtained by use of other early indices of severity and outcome. However, predictive statements should only be made in combination with other variables in a multivariate model. Individual patients are best described by the three components of the coma scale; whereas the derived total coma score should be used to characterise groups. Adherence to this principle and enhancement of the reliable practical use of the scale through continuing education of health professionals, standardisation across different settings, and consensus on methods to address confounders will maintain its role in clinical practice and research in the future.
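The scale's fixed component structure (eye opening scored 1-4, verbal response 1-5, motor response 1-6, giving a derived total of 3-15) makes the total trivial to compute. A minimal helper, carrying the abstract's caveat that individual patients are best described by the three components rather than the sum, might look like:

```python
def glasgow_coma_score(eye, verbal, motor):
    """Derived total Glasgow Coma Scale score (range 3-15).

    Component ranges: eye opening 1-4, verbal response 1-5,
    motor response 1-6. Note: the total is suited to characterising
    groups; individual patients are best described by reporting the
    three components themselves.
    """
    if not 1 <= eye <= 4:
        raise ValueError("eye opening must be 1-4")
    if not 1 <= verbal <= 5:
        raise ValueError("verbal response must be 1-5")
    if not 1 <= motor <= 6:
        raise ValueError("motor response must be 1-6")
    return eye + verbal + motor
```

For example, a fully alert patient scores `glasgow_coma_score(4, 5, 6) == 15`, while the lowest possible score is 3, not 0.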