Institution

University of Nebraska–Lincoln

Education · Lincoln, Nebraska, United States
About: University of Nebraska–Lincoln is an education organization based in Lincoln, Nebraska, United States. It is known for research contributions in the topics of Population and Poison control. The organization has 28059 authors who have published 61544 publications receiving 2139104 citations. The organization is also known as Nebraska and UNL.


Papers
Book
01 Jan 1996
TL;DR: The book develops the Huffman coding algorithm, the techniques used in its implementation, and its applications to lossless image, text, and audio compression, and goes on to cover arithmetic coding (including the JBIG bi-level image compression standard), dictionary techniques, quantization, subband, transform, and video coding.
Abstract: Preface
1 Introduction 1.1 Compression Techniques 1.1.1 Lossless Compression 1.1.2 Lossy Compression 1.1.3 Measures of Performance 1.2 Modeling and Coding 1.3 Organization of This Book 1.4 Summary 1.5 Projects and Problems
2 Mathematical Preliminaries 2.1 Overview 2.2 A Brief Introduction to Information Theory 2.3 Models 2.3.1 Physical Models 2.3.2 Probability Models 2.3.3 Markov Models 2.3.4 Summary 2.5 Projects and Problems
3 Huffman Coding 3.1 Overview 3.2 "Good" Codes 3.3 The Huffman Coding Algorithm 3.3.1 Minimum Variance Huffman Codes 3.3.2 Length of Huffman Codes 3.3.3 Extended Huffman Codes 3.4 Nonbinary Huffman Codes 3.5 Adaptive Huffman Coding 3.5.1 Update Procedure 3.5.2 Encoding Procedure 3.5.3 Decoding Procedure 3.6 Applications of Huffman Coding 3.6.1 Lossless Image Compression 3.6.2 Text Compression 3.6.3 Audio Compression 3.7 Summary 3.8 Projects and Problems
4 Arithmetic Coding 4.1 Overview 4.2 Introduction 4.3 Coding a Sequence 4.3.1 Generating a Tag 4.3.2 Deciphering the Tag 4.4 Generating a Binary Code 4.4.1 Uniqueness and Efficiency of the Arithmetic Code 4.4.2 Algorithm Implementation 4.4.3 Integer Implementation 4.5 Comparison of Huffman and Arithmetic Coding 4.6 Applications 4.6.1 Bi-Level Image Compression-The JBIG Standard 4.6.2 Image Compression 4.7 Summary 4.8 Projects and Problems
5 Dictionary Techniques 5.1 Overview 5.2 Introduction 5.3 Static Dictionary 5.3.1 Digram Coding 5.4 Adaptive Dictionary 5.4.1 The LZ77 Approach 5.4.2 The LZ78 Approach 5.5 Applications 5.5.1 File Compression-UNIX COMPRESS 5.5.2 Image Compression-the Graphics Interchange Format (GIF) 5.5.3 Compression over Modems-V.42 bis 5.6 Summary 5.7 Projects and Problems
6 Lossless Image Compression 6.1 Overview 6.2 Introduction 6.3 Facsimile Encoding 6.3.1 Run-Length Coding 6.3.2 CCITT Group 3 and 4-Recommendations T.4 and T.6 6.3.3 Comparison of MH, MR, MMR, and JBIG 6.4 Progressive Image Transmission 6.5 Other Image Compression Approaches 6.5.1 Linear Prediction Models 6.5.2 Context Models 6.5.3 Multiresolution Models 6.5.4 Modeling Prediction Errors 6.6 Summary 6.7 Projects and Problems
7 Mathematical Preliminaries 7.1 Overview 7.2 Introduction 7.3 Distortion Criteria 7.3.1 The Human Visual System 7.3.2 Auditory Perception 7.4 Information Theory Revisited 7.4.1 Conditional Entropy 7.4.2 Average Mutual Information 7.4.3 Differential Entropy 7.5 Rate Distortion Theory 7.6 Models 7.6.1 Probability Models 7.6.2 Linear System Models 7.6.3 Physical Models 7.7 Summary 7.8 Projects and Problems
8 Scalar Quantization 8.1 Overview 8.2 Introduction 8.3 The Quantization Problem 8.4 Uniform Quantizer 8.5 Adaptive Quantization 8.5.1 Forward Adaptive Quantization 8.5.2 Backward Adaptive Quantization 8.6 Nonuniform Quantization 8.6.1 pdf-Optimized Quantization 8.6.2 Companded Quantization 8.7 Entropy-Coded Quantization 8.7.1 Entropy Coding of Lloyd-Max Quantizer Outputs 8.7.2 Entropy-Constrained Quantization 8.7.3 High-Rate Optimum Quantization 8.8 Summary 8.9 Projects and Problems
9 Vector Quantization 9.1 Overview 9.2 Introduction 9.3 Advantages of Vector Quantization over Scalar Quantization 9.4 The Linde-Buzo-Gray Algorithm 9.4.1 Initializing the LBG Algorithm 9.4.2 The Empty Cell Problem 9.4.3 Use of LBG for Image Compression 9.5 Tree-Structured Vector Quantizers 9.5.1 Design of Tree-Structured Vector Quantizers 9.6 Structured Vector Quantizers 9.6.1 Pyramid Vector Quantization 9.6.2 Polar and Spherical Vector Quantizers 9.6.3 Lattice Vector Quantizers 9.7 Variations on the Theme 9.7.1 Gain-Shape Vector Quantization 9.7.2 Mean-Removed Vector Quantization 9.7.3 Classified Vector Quantization 9.7.4 Multistage Vector Quantization 9.7.5 Adaptive Vector Quantization 9.8 Summary 9.9 Projects and Problems
10 Differential Encoding 10.1 Overview 10.2 Introduction 10.3 The Basic Algorithm 10.4 Prediction in DPCM 10.5 Adaptive DPCM (ADPCM) 10.5.1 Adaptive Quantization in DPCM 10.5.2 Adaptive Prediction in DPCM 10.6 Delta Modulation 10.6.1 Constant Factor Adaptive Delta Modulation (CFDM) 10.6.2 Continuously Variable Slope Delta Modulation 10.7 Speech Coding 10.7.1 G.726 10.8 Summary 10.9 Projects and Problems
11 Subband Coding 11.1 Overview 11.2 Introduction 11.3 The Frequency Domain and Filtering 11.3.1 Filters 11.4 The Basic Subband Coding Algorithm 11.4.1 Bit Allocation 11.5 Application to Speech Coding-G.722 11.6 Application to Audio Coding-MPEG Audio 11.7 Application to Image Compression 11.7.1 Decomposing an Image 11.7.2 Coding the Subbands 11.8 Wavelets 11.8.1 Families of Wavelets 11.8.2 Wavelets and Image Compression 11.9 Summary 11.10 Projects and Problems
12 Transform Coding 12.1 Overview 12.2 Introduction 12.3 The Transform 12.4 Transforms of Interest 12.4.1 Karhunen-Loeve Transform 12.4.2 Discrete Cosine Transform 12.4.3 Discrete Sine Transform 12.4.4 Discrete Walsh-Hadamard Transform 12.5 Quantization and Coding of Transform Coefficients 12.6 Application to Image Compression-JPEG 12.6.1 The Transform 12.6.2 Quantization 12.6.3 Coding 12.7 Application to Audio Compression 12.8 Summary 12.9 Projects and Problems
13 Analysis/Synthesis Schemes 13.1 Overview 13.2 Introduction 13.3 Speech Compression 13.3.1 The Channel Vocoder 13.3.2 The Linear Predictive Coder (Gov.Std.LPC-10) 13.3.3 Code Excited Linear Prediction (CELP) 13.3.4 Sinusoidal Coders 13.4 Image Compression 13.4.1 Fractal Compression 13.5 Summary 13.6 Projects and Problems
14 Video Compression 14.1 Overview 14.2 Introduction 14.3 Motion Compensation 14.4 Video Signal Representation 14.5 Algorithms for Videoconferencing and Videophones 14.5.1 ITU-T Recommendation H.261 14.5.2 Model-Based Coding 14.6 Asymmetric Applications 14.6.1 The MPEG Video Standard 14.7 Packet Video 14.7.1 ATM Networks 14.7.2 Compression Issues in ATM Networks 14.7.3 Compression Algorithms for Packet Video 14.8 Summary 14.9 Projects and Problems
A Probability and Random Processes A.1 Probability A.2 Random Variables A.3 Distribution Functions A.4 Expectation A.5 Types of Distribution A.6 Stochastic Process A.7 Projects and Problems
B A Brief Review of Matrix Concepts B.1 A Matrix B.2 Matrix Operations
C Codes for Facsimile Encoding
D The Root Lattices
Bibliography
Index
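Chapter 3 of the book is devoted to Huffman coding. As a rough companion illustration only (not code from the book), the Python sketch below builds a Huffman codeword table by repeatedly merging the two lowest-weight entries; the example string is made up for demonstration.

```python
import heapq
from collections import Counter


def huffman_code(text):
    """Build a Huffman codeword table for the symbols in `text`.

    Repeatedly merge the two least-frequent nodes; each merge prepends
    '0' to the codewords of one subtree and '1' to the other's.
    """
    freq = Counter(text)
    # Heap entries: (weight, tie_breaker, {symbol: partial codeword})
    heap = [(w, i, {sym: ""}) for i, (sym, w) in enumerate(freq.items())]
    heapq.heapify(heap)
    if len(heap) == 1:  # degenerate case: only one distinct symbol
        _, _, table = heap[0]
        return {sym: "0" for sym in table}
    counter = len(heap)
    while len(heap) > 1:
        w1, _, t1 = heapq.heappop(heap)
        w2, _, t2 = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in t1.items()}
        merged.update({s: "1" + c for s, c in t2.items()})
        heapq.heappush(heap, (w1 + w2, counter, merged))
        counter += 1
    return heap[0][2]


if __name__ == "__main__":
    code = huffman_code("abracadabra")
    print(code)  # codeword table, e.g. 'a' gets the shortest codeword
    encoded = "".join(code[s] for s in "abracadabra")
    print(encoded, len(encoded), "bits")
```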

2,311 citations

Journal ArticleDOI
TL;DR: SWAT is a semi-distributed river basin model whose large number of input parameters complicates parameterization and calibration; the SWAT-CUP tool discussed by the authors provides a semi-automated calibration framework (SUFI2) that combines manual and automated calibration with sensitivity and uncertainty analysis and goodness-of-fit statistics.
Abstract: SWAT (Soil and Water Assessment Tool) is a comprehensive, semi-distributed river basin model that requires a large number of input parameters, which complicates model parameterization and calibration. Several calibration techniques have been developed for SWAT, including manual calibration procedures and automated procedures using the shuffled complex evolution method and other common methods. In addition, SWAT-CUP was recently developed and provides a decision-making framework that incorporates a semi-automated approach (SUFI2) using both manual and automated calibration and incorporating sensitivity and uncertainty analysis. In SWAT-CUP, users can manually adjust parameters and ranges iteratively between autocalibration runs. Parameter sensitivity analysis helps focus the calibration and uncertainty analysis and is used to provide statistics for goodness-of-fit. The user interaction or manual component of the SWAT-CUP calibration forces the user to obtain a better understanding of the overall hydrologic processes (e.g., baseflow ratios, ET, sediment sources and sinks, crop yields, and nutrient balances) and of parameter sensitivity. It is important for future calibration developments to spatially account for hydrologic processes; improve model run time efficiency; include the impact of uncertainty in the conceptual model, model parameters, and measured variables used in calibration; and assist users in checking for model errors. When calibrating a physically based model like SWAT, it is important to remember that all model input parameters must be kept within a realistic uncertainty range and that no automatic procedure can substitute for actual physical knowledge of the watershed.
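Calibration frameworks like SWAT-CUP judge candidate parameter sets with goodness-of-fit statistics; the Nash-Sutcliffe efficiency is one statistic commonly reported in SWAT calibration studies. The sketch below is a generic Python illustration of that statistic only (it is not part of SWAT or SWAT-CUP), and the streamflow values are hypothetical.

```python
import numpy as np


def nash_sutcliffe(observed, simulated):
    """Nash-Sutcliffe efficiency: 1 is a perfect fit, 0 means the model
    predicts no better than the mean of the observations."""
    observed = np.asarray(observed, dtype=float)
    simulated = np.asarray(simulated, dtype=float)
    return 1.0 - np.sum((observed - simulated) ** 2) / np.sum(
        (observed - observed.mean()) ** 2
    )


# Hypothetical monthly streamflow values (m^3/s), not taken from the paper.
obs = [12.0, 18.5, 25.1, 30.2, 22.4, 15.0]
sim = [11.2, 19.9, 23.8, 28.5, 24.0, 16.1]
print(f"NSE = {nash_sutcliffe(obs, sim):.3f}")
```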

2,200 citations

Journal ArticleDOI
TL;DR: The fit of integration describes the extent to which the qualitative and quantitative findings cohere; understanding integration principles and practices can help health services researchers leverage the strengths of mixed methods.
Abstract: Mixed methods research offers powerful tools for investigating complex processes and systems in health and health care. This article describes integration principles and practices at three levels in mixed methods research and provides illustrative examples. Integration at the study design level occurs through three basic mixed method designs—exploratory sequential, explanatory sequential, and convergent—and through four advanced frameworks—multistage, intervention, case study, and participatory. Integration at the methods level occurs through four approaches. In connecting, one database links to the other through sampling. With building, one database informs the data collection approach of the other. When merging, the two databases are brought together for analysis. With embedding, data collection and analysis link at multiple points. Integration at the interpretation and reporting level occurs through narrative, data transformation, and joint display. The fit of integration describes the extent to which the qualitative and quantitative findings cohere. Understanding these principles and practices of integration can help health services researchers leverage the strengths of mixed methods.
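The "merging" approach above brings a quantitative and a qualitative database together for analysis, and joint displays are one way to report the result. Purely as a hypothetical sketch (not taken from the article), the snippet below merges made-up participant-level scores and interview themes into a simple joint display with pandas.

```python
import pandas as pd

# Toy example of "merging" integration: quantitative scores and qualitative
# themes for the same participants are combined into a joint display.
# All values below are invented for illustration.
quant = pd.DataFrame({
    "participant": ["P1", "P2", "P3"],
    "satisfaction_score": [4.2, 2.8, 3.9],
})
qual = pd.DataFrame({
    "participant": ["P1", "P2", "P3"],
    "interview_theme": ["trust in provider", "long wait times", "clear communication"],
})

joint_display = quant.merge(qual, on="participant")  # one row per participant
print(joint_display.to_string(index=False))
```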

2,165 citations

Journal ArticleDOI
TL;DR: The authors constructed a 52-item inventory to measure adults' metacognitive awareness and classified the items into eight subcomponents subsumed under two broader categories: knowledge of cognition and regulation of cognition.

2,131 citations

Journal ArticleDOI
TL;DR: The article provides a methodological overview of priority, implementation, and mixing in the sequential explanatory design and offers some practical guidance in addressing those issues.
Abstract: This article discusses some procedural issues related to the mixed-methods sequential explanatory design, which implies collecting and analyzing quantitative and then qualitative data in two consecutive phases within one study. Such issues include deciding on the priority or weight given to the quantitative and qualitative data collection and analysis in the study, the sequence of the data collection and analysis, and the stage/stages in the research process at which the quantitative and qualitative data are connected and the results are integrated. The article provides a methodological overview of priority, implementation, and mixing in the sequential explanatory design and offers some practical guidance in addressing those issues. It also outlines the steps for graphically representing the procedures in a mixed-methods study. A mixed-methods sequential explanatory study of doctoral students' persistence in a distance-learning program in educational leadership is used to illustrate the methodological discussion.

2,123 citations


Authors

Showing all 28272 results

Name | H-index | Papers | Citations
Donald P. Schneider | 242 | 1622 | 263641
Suvadeep Bose | 154 | 960 | 129071
David D'Enterria | 150 | 1592 | 116210
Aaron Dominguez | 147 | 1968 | 113224
Gregory R Snow | 147 | 1704 | 115677
J. S. Keller | 144 | 981 | 98249
Andrew Askew | 140 | 1496 | 99635
Mitchell Wayne | 139 | 1810 | 108776
Kenneth Bloom | 138 | 1958 | 110129
P. de Barbaro | 137 | 1657 | 102360
Randy Ruchti | 137 | 1832 | 107846
Ia Iashvili | 135 | 1676 | 99461
Yuichi Kubota | 133 | 1695 | 98570
Ilya Kravchenko | 132 | 1366 | 93639
Andrea Perrotta | 131 | 1380 | 85669
Network Information
Related Institutions (5)
University of Illinois at Urbana–Champaign
225.1K papers, 10.1M citations

95% related

Pennsylvania State University
196.8K papers, 8.3M citations

95% related

University of Minnesota
257.9K papers, 11.9M citations

94% related

University of California, Davis
180K papers, 8M citations

94% related

University of Wisconsin-Madison
237.5K papers, 11.8M citations

94% related

Performance Metrics
No. of papers from the Institution in previous years
Year | Papers
2023 | 93
2022 | 381
2021 | 2,809
2020 | 2,977
2019 | 2,846
2018 | 2,854