
Showing papers in "Statistical Science in 2002"


Journal ArticleDOI
TL;DR: This work describes the tools available for statistical fraud detection and the areas in which fraud detection technologies are most used, arguing that statistics and machine learning provide effective technologies for fraud detection.
Abstract: Fraud is increasing dramatically with the expansion of modern technology and the global superhighways of communication, resulting in the loss of billions of dollars worldwide each year. Although prevention technologies are the best way to reduce fraud, fraudsters are adaptive and, given time, will usually find ways to circumvent such measures. Methodologies for the detection of fraud are essential if we are to catch fraudsters once fraud prevention has failed. Statistics and machine learning provide effective technologies for fraud detection and have been applied successfully to detect activities such as money laundering, e-commerce credit card fraud, telecommunications fraud and computer intrusion, to name but a few. We describe the tools available for statistical fraud detection and the areas in which fraud detection technologies are most used.

1,209 citations


Journal ArticleDOI
TL;DR: In this paper, a method of exact permutation inference that is entirely free of distributional assumptions and uses the random assignment of treatments as the "reasoned basis for inference" is derived.
Abstract: By slightly reframing the concept of covariance adjustment in randomized experiments, a method of exact permutation inference is derived that is entirely free of distributional assumptions and uses the random assignment of treatments as the "reasoned basis for inference." This method of exact permutation inference may be used with many forms of covariance adjustment, including robust regression and locally weighted smoothers. The method is then generalized to observational studies where treatments were not randomly assigned, so that sensitivity to hidden biases must be examined. Adjustments using an instrumental variable are also discussed. The methods are illustrated using data from two observational studies.

459 citations
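
To make the idea concrete, here is a minimal sketch of covariance-adjusted permutation inference on simulated data; the completely randomized design, the variable names and the linear adjustment are illustrative assumptions, not the paper's examples. The outcome is regressed on the covariate ignoring treatment, and the treatment-control difference in residuals is compared with its distribution over random re-assignments.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: covariate x, binary randomized treatment z, outcome y.
n = 200
x = rng.normal(size=n)
z = rng.binomial(1, 0.5, size=n)
y = 2.0 * x + 1.0 * z + rng.normal(size=n)

# Covariance adjustment: residuals from a fit of y on x that ignores treatment.
beta = np.polyfit(x, y, 1)
resid = y - np.polyval(beta, x)

def mean_diff(r, z):
    return r[z == 1].mean() - r[z == 0].mean()

obs = mean_diff(resid, z)

# Permutation distribution of the statistic under the sharp null of no effect,
# using the random assignment itself as the basis for inference.
perm = np.array([mean_diff(resid, rng.permutation(z)) for _ in range(10000)])
p_value = np.mean(np.abs(perm) >= abs(obs))
print(f"adjusted difference = {obs:.3f}, two-sided permutation p = {p_value:.4f}")
```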


Journal ArticleDOI
TL;DR: It is argued that two types of sieves outperform the block method, each of them in its own important niche, namely linear and categorical processes, respectively.
Abstract: We review and compare block, sieve and local bootstraps for time series and thereby illuminate theoretical aspects of the procedures as well as their performance on finite-sample data. Our view is selective with the intention of providing a new and fair picture of some particular aspects of bootstrapping time series. The generality of the block bootstrap is contrasted with sieve bootstraps. We discuss implementational advantages and disadvantages. We argue that two types of sieve often outperform the block method, each of them in its own important niche, namely linear and categorical processes. Local bootstraps, designed for nonparametric smoothing problems, are easy to use and implement but exhibit in some cases low performance.

275 citations
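
As a concrete point of reference, the following is a minimal sketch of a moving-block bootstrap for the mean of a stationary series; the AR(1) toy data and the block length of 20 are illustrative choices, not recommendations from the review.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stationary series: an AR(1) process (illustrative only).
n, phi = 500, 0.6
eps = rng.normal(size=n)
x = np.empty(n)
x[0] = eps[0]
for t in range(1, n):
    x[t] = phi * x[t - 1] + eps[t]

def moving_block_bootstrap(series, block_len, n_boot, stat=np.mean, rng=rng):
    """Resample whole blocks to preserve short-range dependence."""
    n = len(series)
    starts = np.arange(n - block_len + 1)
    n_blocks = int(np.ceil(n / block_len))
    stats = np.empty(n_boot)
    for b in range(n_boot):
        chosen = rng.choice(starts, size=n_blocks, replace=True)
        resample = np.concatenate([series[s:s + block_len] for s in chosen])[:n]
        stats[b] = stat(resample)
    return stats

boot_means = moving_block_bootstrap(x, block_len=20, n_boot=2000)
print(f"sample mean = {x.mean():.3f}, block-bootstrap s.e. = {boot_means.std(ddof=1):.3f}")
```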


Journal ArticleDOI
TL;DR: The mixture transition distribution (MTD) model was introduced by Raftery in 1985 for the modeling of high-order Markov chains with a finite state space and has since been generalized and successfully applied to a range of situations, including the analysis of wind directions, DNA sequences and social behavior.
Abstract: The mixture transition distribution model (MTD) was introduced in 1985 by Raftery for the modeling of high-order Markov chains with a finite state space. Since then it has been generalized and successfully applied to a range of situations, including the analysis of wind directions, DNA sequences and social behavior. Here we review the MTD model and the developments since 1985. We first introduce the basic principle and then we present several extensions, including general state spaces and spatial statistics. Following that, we review methods for estimating the model parameters. Finally, a review of different types of applications shows the practical interest of the MTD model.

247 citations
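
For readers unfamiliar with the model, the basic MTD specification (in the notation commonly used for it) expresses the l-th order transition probability as a mixture of first-order contributions, one per lag, sharing a single transition matrix $Q = (q_{ij})$:

```latex
% Basic MTD model for an l-th order chain on a finite state space,
% with one shared transition matrix Q = (q_{ij}) and lag weights \lambda_g:
P(X_t = i_0 \mid X_{t-1} = i_1, \ldots, X_{t-l} = i_l)
    = \sum_{g=1}^{l} \lambda_g \, q_{i_g i_0},
\qquad \lambda_g \ge 0, \quad \sum_{g=1}^{l} \lambda_g = 1 .
```

Because only the lag weights $\lambda_g$ and the matrix $Q$ are estimated, the number of parameters grows linearly rather than exponentially in the order $l$.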


Journal ArticleDOI
TL;DR: In this article, the authors adopt a Bayesian approach to sample size determination in hierarchical models and provide theoretical tools for studying performance as a function of sample size, with a variety of illustrative results.
Abstract: Sample size determination (SSD) is a crucial aspect of experimental design. Two SSD problems are considered here. The first concerns how to select a sample size to achieve specified performance with regard to one or more features of a model. Adopting a Bayesian perspective, we move the Bayesian SSD problem from the rather elementary models addressed in the literature to date in the direction of the wide range of hierarchical models which dominate the current Bayesian landscape. Our approach is generic and thus, in principle, broadly applicable. However, it requires full model specification and computationally intensive simulation, perhaps limiting it practically to simple instances of such models. Still, insight from such cases is of useful design value. In addition, we present some theoretical tools for studying performance as a function of sample size, with a variety of illustrative results. Such results provide guidance with regard to what is achievable. We also offer two examples, a survival model with censoring and a logistic regression model. The second problem concerns how to select a sample size to achieve specified separation of two models. We approach this problem by adopting a screening criterion which in turn forms a model choice criterion. This criterion is set up to choose model 1 when the value is large, model 2 when the value is small. The SSD problem then requires choosing $n_{1}$ to make the probability of selecting model 1 when model 1 is true sufficiently large and choosing $n_{2}$ to make the probability of selecting model 2 when model 2 is true sufficiently large. The required n is $\max(n_{1}, n_{2})$. Here, we again provide two illustrations. One considers separating normal errors from t errors, the other separating a common growth curve model from a model with individual growth curves.

165 citations
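
A minimal sketch of the simulation-based flavor of the first SSD problem, under an assumed conjugate normal model rather than the hierarchical models treated in the paper: for each candidate sample size $n$, data are simulated at a hypothetical design value and one checks how often the resulting 95% posterior interval excludes zero.

```python
import numpy as np

rng = np.random.default_rng(2)

# Assumed toy model (not one of the paper's hierarchical examples):
# y_i ~ N(theta, sigma^2) with sigma known; analysis prior theta ~ N(0, tau^2).
# Design goal: when data are generated at a hypothetical value theta_d, the 95%
# posterior interval should exclude 0 with probability at least, say, 0.8.
sigma, tau, theta_d = 1.0, 10.0, 0.3

def success_probability(n, n_sim=4000):
    hits = 0
    for _ in range(n_sim):
        y = rng.normal(theta_d, sigma, size=n)
        post_var = 1.0 / (1.0 / tau**2 + n / sigma**2)      # conjugate normal posterior
        post_mean = post_var * (y.sum() / sigma**2)
        lo = post_mean - 1.96 * np.sqrt(post_var)
        hi = post_mean + 1.96 * np.sqrt(post_var)
        hits += (lo > 0.0) or (hi < 0.0)
    return hits / n_sim

for n in (20, 50, 100, 200):
    print(n, success_probability(n))
```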


Journal ArticleDOI
TL;DR: In this paper, the authors discuss empirical failings of the coin-flip model of voting and consider, first, the implications for voting power and, second, ways in which votes could be modeled more realistically.
Abstract: In an election, voting power—the probability that a single vote is decisive—is affected by the rule for aggregating votes into a single outcome. Voting power is important for studying political representation, fairness and strategy, and has been much discussed in political science. Although power indexes are often considered as mathematical definitions, they ultimately depend on statistical models of voting. Mathematical calculations of voting power usually have been performed under the model that votes are decided by coin flips. This simple model has interesting implications for weighted elections, two-stage elections (such as the U.S. Electoral College) and coalition structures. We discuss empirical failings of the coin-flip model of voting and consider, first, the implications for voting power and, second, ways in which votes could be modeled more realistically. Under the random voting model, the standard deviation of the average of n votes is proportional to $1/\sqrt{n}$, but under more general models it can have the form $cn^{-\alpha}$ or $\sqrt{a - b \log n}$. Voting power calculations under more realistic models present research challenges in modeling and computation.

104 citations
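
The $1/\sqrt{n}$ behavior quoted above follows from a direct binomial calculation: with the other $n - 1 = 2k$ votes decided by independent fair coin flips, a single vote is decisive exactly when those votes split evenly, so (using Stirling's approximation)

```latex
P(\text{decisive}) = \binom{2k}{k}\, 2^{-2k}
    \approx \frac{1}{\sqrt{\pi k}}
    \approx \sqrt{\frac{2}{\pi n}} .
```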


Journal ArticleDOI
TL;DR: In this paper, the authors compare methods for setting Gaussian and Poisson confidence intervals for cases in which the parameter to be estimated is bounded, and show that these procedures lead to substantially different intervals when a relatively improbable observation implies a parameter estimate beyond the bound.
Abstract: Setting confidence bounds is an essential part of the reporting of experimental results. Current physics experiments are often done to measure nonnegative parameters that are small and may be zero and to search for small signals in the presence of backgrounds. These are examples of experiments which offer the possibility of yielding a result, recognized a priori to be relatively improbable, of a negative estimate for a quantity known to be positive. The classical Neyman procedure for setting confidence bounds in this situation is arguably unsatisfactory and several alternatives have been recently proposed. We compare methods for setting Gaussian and Poisson confidence intervals for cases in which the parameter to be estimated is bounded. These procedures lead to substantially different intervals when a relatively improbable observation implies a parameter estimate beyond the bound.

92 citations
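
A minimal numerical illustration of the underlying difficulty (not of any of the specific constructions compared in the paper): for a measurement $x \sim N(\mu, 1)$ of a mean known to satisfy $\mu \ge 0$, the classical 90% central interval can lie partly or entirely below zero, and naive truncation at the bound can collapse it to a point.

```python
from scipy import stats

# Hypothetical measurement x ~ N(mu, 1) of a physically nonnegative mean mu >= 0.
z = stats.norm.ppf(0.95)                     # 90% central interval -> z ~ 1.645

for x in (2.0, 0.5, -0.5, -2.0):
    lo, hi = x - z, x + z                    # classical (unconstrained) interval
    lo_t, hi_t = max(lo, 0.0), max(hi, 0.0)  # naive truncation at the bound
    print(f"x = {x:+.1f}: classical [{lo:.2f}, {hi:.2f}], "
          f"truncated [{lo_t:.2f}, {hi_t:.2f}]")
```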


Journal ArticleDOI
TL;DR: In this article, the authors consider various alternatives to greedy, deterministic schemes, and present a Bayesian framework for studying adaptation in the context of an extended linear model (ELM).
Abstract: In many statistical applications, nonparametric modeling can provide insights into the features of a dataset that are not obtainable by other means. One successful approach involves the use of (univariate or multivariate) spline spaces. As a class, these methods have inherited much from classical tools for parametric modeling. For example, stepwise variable selection with spline basis terms is a simple scheme for locating knots (breakpoints) in regions where the data exhibit strong, local features. Similarly, candidate knot configurations (generated by this or some other search technique), are routinely evaluated with traditional selection criteria like AIC or BIC. In short, strategies typically applied in parametric model selection have proved useful in constructing flexible, low-dimensional models for nonparametric problems. Until recently, greedy, stepwise procedures were most frequently suggested in the literature. Research into Bayesian variable selection, however, has given rise to a number of new spline-based methods that primarily rely on some form of Markov chain Monte Carlo to identify promising knot locations. In this paper, we consider various alternatives to greedy, deterministic schemes, and present a Bayesian framework for studying adaptation in the context of an extended linear model (ELM). Our major test cases are Logspline density estimation and (bivariate) Triogram regression models. We selected these because they illustrate a number of computational and methodological issues concerning model adaptation that arise in ELMs.

88 citations
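
For contrast with the Bayesian schemes the paper develops, here is a minimal sketch of the greedy, stepwise alternative mentioned above: forward selection of knots for a truncated-linear spline basis, scored by BIC. The simulated data, the candidate grid and the basis are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

# Illustrative data with a local feature (a kink) at x = 0.6.
n = 300
x = rng.uniform(0, 1, n)
y = np.sin(2 * np.pi * x) + 2.0 * np.maximum(x - 0.6, 0.0) + rng.normal(0, 0.3, n)

def design(x, knots):
    """Truncated-linear spline basis: 1, x, (x - k)_+ for each knot k."""
    cols = [np.ones_like(x), x] + [np.maximum(x - k, 0.0) for k in knots]
    return np.column_stack(cols)

def bic(x, y, knots):
    X = design(x, knots)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = np.sum((y - X @ beta) ** 2)
    return len(y) * np.log(rss / len(y)) + X.shape[1] * np.log(len(y))

candidates = list(np.linspace(0.05, 0.95, 19))
knots, current = [], bic(x, y, [])
while candidates:
    best_score, best_k = min((bic(x, y, knots + [k]), k) for k in candidates)
    if best_score >= current:          # stop when no candidate improves BIC
        break
    knots.append(best_k)
    candidates.remove(best_k)
    current = best_score

print("selected knots:", sorted(knots))
```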


Journal ArticleDOI
TL;DR: The eponym Walker Circulation refers to a concept used by atmospheric scientists and oceanographers in providing a physical explanation for the El Nino-Southern Oscillation phenomenon, whereas the eponym Yule-Walker equations refers to properties satisfied by the autocorrelations of an autoregressive process; both honor the same individual, Sir Gilbert Thomas Walker.
Abstract: The eponym "Walker Circulation" refers to a concept used by atmospheric scientists and oceanographers in providing a physical explanation for the El Nino-Southern Oscillation phenomenon, whereas the eponym "Yule-Walker equations" refers to properties satisfied by the autocorrelations of an autoregressive process. But how many statisticians (or, for that matter, atmospheric scientists) are aware that the "Walker" in both terms refers to the same individual, Sir Gilbert Thomas Walker, and that these two appellations arose in conjunction with the same research on the statistical prediction of climate? Like George Udny Yule (the "Yule" in Yule- Walker), Walker's motivation was to devise a statistical model that exhibited quasiperiodic behavior. The original assessments of Walker's work, both in the meteorology and in statistics, were somewhat negative. With hindsight, it is argued that his research should be viewed as quite successful.

72 citations
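
For readers who know the "Walker" only through time series, the Yule-Walker equations relate the autocorrelations $\rho_k$ of a stationary AR($p$) process $X_t = \phi_1 X_{t-1} + \cdots + \phi_p X_{t-p} + \varepsilon_t$ to its coefficients:

```latex
\rho_k = \phi_1 \rho_{k-1} + \phi_2 \rho_{k-2} + \cdots + \phi_p \rho_{k-p},
\qquad k = 1, 2, \ldots, p,
```

with $\rho_0 = 1$ and $\rho_{-j} = \rho_j$; replacing the $\rho_k$ by sample autocorrelations and solving the resulting linear system gives the Yule-Walker estimates of $\phi_1, \ldots, \phi_p$.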


Journal ArticleDOI
TL;DR: In this paper, a literature review of inference for superpopulation parameters is given, with emphasis on why these findings have not been previously appreciated, and examples are provided for estimating superpopulation means, linear regression coefficients and logistic regression coefficients using U.S. data from the 1987 National Health Interview Survey, the third National Health and Nutrition Examination Survey and the 1986 National Hospital Discharge Survey.
Abstract: Sample survey inference is historically concerned with finite-population parameters, that is, functions (like means and totals) of the observations for the individuals in the population. In scientific applications, however, interest usually focuses on the “superpopulation” parameters associated with a stochastic mechanism hypothesized to generate the observations in the population rather than the finite-population parameters. Two relevant findings discussed in this paper are that (1) with stratified sampling, it is not sufficient to drop finite-population correction factors from standard design-based variance formulas to obtain appropriate variance formulas for superpopulation inference, and (2) with cluster sampling, standard design-based variance formulas can dramatically underestimate superpopulation variability, even with a small sampling fraction of the final units. A literature review of inference for superpopulation parameters is given, with emphasis on why these findings have not been previously appreciated. Examples are provided for estimating superpopulation means, linear regression coefficients and logistic regression coefficients using U.S. data from the 1987 National Health Interview Survey, the third National Health and Nutrition Examination Survey and the 1986 National Hospital Discharge Survey.

68 citations
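
A minimal simulation sketch of the kind of discrepancy at issue, under an assumed one-way random-effects superpopulation rather than the paper's survey datasets: when every cluster is sampled but only a small fraction of units within each, a variance estimate that conditions on the realized cluster populations can greatly understate the variability of the estimator across superpopulations.

```python
import numpy as np

rng = np.random.default_rng(4)

# Assumed superpopulation: C clusters with normal random effects, N units each.
C, N, m = 20, 2000, 20            # sample m of the N units in every cluster (1%)
sigma_cluster, sigma_unit = 1.0, 1.0

def one_superpopulation():
    effects = rng.normal(0, sigma_cluster, C)          # new cluster effects
    cluster_means, cond_vars = [], []
    for a in effects:
        units = a + rng.normal(0, sigma_unit, N)       # finite cluster population
        sample = rng.choice(units, m, replace=False)   # small within-cluster sample
        cluster_means.append(sample.mean())
        cond_vars.append(sample.var(ddof=1) / m * (1 - m / N))  # with fpc
    estimate = np.mean(cluster_means)
    conditional_var = np.sum(cond_vars) / C**2         # conditions on the clusters
    return estimate, conditional_var

results = np.array([one_superpopulation() for _ in range(1000)])
print("variance of the estimator across superpopulations:", results[:, 0].var(ddof=1))
print("average variance estimate conditioning on clusters:", results[:, 1].mean())
```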


Journal ArticleDOI
TL;DR: A theoretic model in which party leaders choose electoral declarations with an eye toward the expected policy outcome of the coalition bargaining game induced by the party declarations and the parties' beliefs about citizens' voting behavior is presented.
Abstract: Most theoretic models of multiparty electoral competition make the assumption that party leaders are motivated to maximize their vote share or seat share. In plurality-rule systems this is a sensible assumption. However, in proportional representation systems, this assumption is questionable since the ability to make public policy is not strictly increasing in vote shares or seat shares. We present a theoretic model in which party leaders choose electoral declarations with an eye toward the expected policy outcome of the coalition bargaining game induced by the party declarations and the parties' beliefs about citizens' voting behavior. To test this model, we turn to data from the 1989 Dutch parliamentary election. We use Markov chain Monte Carlo methods to estimate the parties' beliefs about mass voting behavior and to average over measurement uncertainty and missing data. Due to the complexity of the parties' objective functions and the uncertainty in objective function estimates, equilibria are found numerically. Unlike previous models of multiparty electoral competition, the equilibrium results are consistent with the empirical declarations of the four major Dutch parties.

Journal ArticleDOI
TL;DR: In this paper, the authors make the claim that causal failures are more deleterious to infrastructure reliability than cascading failures, which is contrary to a commonly held perception of network designers and operators.
Abstract: This paper is addressed to engineers and statisticians working on topics in reliability and survival analysis. It is also addressed to designers of network systems. The material here is prompted by problems of infrastructure assurance and protection. Infrastructure systems, like the internet and the power grid, comprise a web of interconnected components experiencing interacting (or dependent) failures. Such systems are prone to a paralyzing collapse caused by a succession of rapid failures; this phenomenon is referred to as "cascading failures." Assessing the reliability of an infrastructure system is a key step in its design. The purpose of this paper is to articulate on aspects of infrastructure reliability, in particular the notions of chance, interaction, cause and cascading. Following a commentary on how the term "reliability" is sometimes interpreted, the paper begins by making the argument that exchangeability is a meaningful setting for discussing interaction. We start by considering individual components and describe what it means to say that they are exchangeable. We then show how exchangeability leads us to distinguish between chance and probability. We then look at how entire networks can be exchangeable and how components within a network can be dependent. The above material, though expository, serves the useful purpose of enabling us to introduce and make precise the notions of causal and cascading failures. Classifying dependent failures as being either causal or cascading and characterizing these notions is a contribution of this paper. The others are a focus on networks and their setting in the context of exchangeability. A simple model for cascading failures closes the paper. A virtue of this model is that it enables us to make the important claim that causal failures are more deleterious to infrastructure reliability than cascading failures. This claim, being contrary to a commonly held perception of network designers and operators, is perhaps the key contribution of this paper.

Journal ArticleDOI
TL;DR: Summarizes a two-day workshop on the evaluation of complex computer models of the world, held in Santa Fe, New Mexico in December 1999, with a focus on evaluating the accuracy and utility of such models.
Abstract: As decision- and policy-makers come to rely increasingly on estimates and simulations produced by computerized models of the world, in areas as diverse as climate prediction, transportation planning, economic policy and civil engineering, the need for objective evaluation of the accuracy and utility of such models likewise becomes more urgent. This article summarizes a two-day workshop that took place in Santa Fe, New Mexico in December 1999, whose focus was the evaluation of complex computer models. Approximately half of the workshop was taken up with formal presentation of four computer models by their creators, each paired with an initial assessment by a statistician. These prepared papers are presented, in shortened form, in Section 3 of this paper. The remainder of the workshop was devoted to introductory and summary comments, short contributed descriptions of related models and a great deal of floor discussion, which was recorded by assigned rapporteurs. These are presented in Sections 2 and 4 in the paper. In the introductory and concluding sections we attempt to summarize the progress made by the workshop and suggest next steps.

Journal ArticleDOI
TL;DR: This article presents a sample from the panoply of formal theories on voting and elections for Statistical Science readers who have had limited exposure to such work, providing a general, but not overly simplified, review of these theories with practical examples.
Abstract: The purpose of this article is to present a sample from the panoply of formal theories on voting and elections to Statistical Science readers who have had limited exposure to such work. These abstract ideas provide a framework for understanding the context of the empirical articles that follow in this volume. The primary focus of this theoretical literature is on the use of mathematical formalism to describe electoral systems and outcomes by modeling both voting rules and human behavior. As with empirical models, these constructs are never perfect descriptors of reality, but instead form the basis for understanding fundamental characteristics of the studied system. Our focus is on providing a general, but not overly simplified, review of these theories with practical examples. We end the article with a thought experiment that applies different vote aggregation schemes to the 2000 presidential election count in Florida, and we find that alternative methods provide different results.
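
A minimal sketch of the kind of thought experiment described above, on hypothetical ranked ballots rather than the Florida count: the same preference profile can produce different winners under plurality, Borda count and pairwise (Condorcet) comparison.

```python
# Hypothetical ranked ballots: (count, ranking from most to least preferred).
profile = [
    (6, ["A", "B", "C"]),
    (5, ["C", "B", "A"]),
    (4, ["B", "C", "A"]),
]
candidates = ["A", "B", "C"]

# Plurality: count only first preferences.
plurality = {c: sum(n for n, r in profile if r[0] == c) for c in candidates}

# Borda count: with k candidates, a ballot gives k-1, k-2, ..., 0 points.
borda = {c: 0 for c in candidates}
for n, ranking in profile:
    for points, c in enumerate(reversed(ranking)):
        borda[c] += n * points

# Pairwise (Condorcet) comparisons.
def beats(a, b):
    return sum(n for n, r in profile if r.index(a) < r.index(b)) > sum(
        n for n, r in profile if r.index(b) < r.index(a))

condorcet = [c for c in candidates
             if all(beats(c, d) for d in candidates if d != c)]

print("plurality:", plurality)       # A wins on first preferences
print("Borda:    ", borda)           # B wins on points
print("Condorcet winner(s):", condorcet)
```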

Journal ArticleDOI
TL;DR: The 2000 presidential election was the most controversial U.S. election in recent history, mainly due to the disputed outcome in Florida; this article summarizes the voting-related issues that may have affected that outcome.
Abstract: The 2000 presidential election was the most controversial U.S. election in recent history, mainly due to the disputed outcome of the election in Florida. Elsewhere in this issue, Richard Smith analyzes the high vote for Pat Buchanan in Palm Beach county. As background for his article, we summarize this and other voting-related issues that may have affected the outcome of the election in Florida, such as the undervote in counties that used a punch-card ballot and the overvote in counties that used multicolumn and multipage ballots.

Journal ArticleDOI
TL;DR: This article uses multiple regression, with votes for the other candidates and demographic variables as covariates, to obtain point and interval predictions for Buchanan's vote in Palm Beach county, where the Reform party candidate recorded an unexpectedly large 3,407 votes.
Abstract: This article presents a statistical analysis of the results of the 2000 U.S. presidential election in the 67 counties of Florida, with particular attention to the result in Palm Beach county, where the Reform party candidate Pat Buchanan recorded an unexpectedly large 3,407 votes. It was alleged that the "butterfly ballot" had misled many voters into voting for Buchanan when they in fact intended to vote for Al Gore. We use multiple regression techniques, using votes for the other candidates and demographic variables as covariates, to obtain point and interval predictions for Buchanan's vote in Palm Beach based on the data in the other 66 counties of Florida. A typical result shows a point prediction of 371 and a 95% prediction interval of 219–534. Much of the discussion is concerned with technical aspects of applying multiple regression to this kind of data set, focussing on issues such as heteroskedasticity, overdispersion, data transformations and diagnostics. All the analyses point to Buchanan's actual vote as a clear and massive outlier.
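
A minimal sketch of the core computation, on made-up data rather than the Florida county dataset: fit a regression on all counties except the one of interest and form a 95% prediction interval for its response (shown here with statsmodels; the variables are stand-ins).

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)

# Made-up stand-in data: 67 "counties", one predictor (say, log total votes)
# and a response (say, log votes for a minor-party candidate).
n = 67
x = rng.normal(10, 1, n)
y = 0.8 * x - 4 + rng.normal(0, 0.3, n)

target = 49                               # index of the held-out county
mask = np.arange(n) != target

fit = sm.OLS(y[mask], sm.add_constant(x[mask])).fit()

X_new = np.column_stack([[1.0], [x[target]]])     # intercept plus the held-out predictor
pred = fit.get_prediction(X_new)
lo, hi = pred.conf_int(obs=True, alpha=0.05)[0]   # obs=True -> prediction interval
print(f"point prediction {pred.predicted_mean[0]:.2f}, "
      f"95% prediction interval ({lo:.2f}, {hi:.2f}), actual {y[target]:.2f}")
```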

Journal ArticleDOI
TL;DR: A biographical sketch of Emanuel Parzen, author or coauthor of over 100 papers and 6 books, who was awarded the Samuel S. Wilks Memorial Medal by the American Statistical Association in 1994.
Abstract: Emanuel Parzen was born in New York City on April 21, 1929. He attended the Bronx High School of Science, received an A.B. in Mathematics from Harvard University in 1949, an M.A. in Mathematics from the University of California at Berkeley in 1951 and his Ph.D. in Mathematics and Statistics in 1953, also at Berkeley. He was a research scientist at Hudson Labs, Physics Department of Columbia University, from 1953 to 1956 and an Assistant Professor of Mathematical Statistics at Columbia from 1955 to 1956. In 1956, he moved to Stanford University, where he stayed until 1970, at which time he joined the faculty at the State University of New York at Buffalo, where he served first as Leading Professor and Chairman of the Department of Statistics and then as Director of Statistical Science. In 1978 he moved to Texas A&M University as a Distinguished Professor, a post he currently holds. He has been a Fellow at Imperial College London, at IBM Systems Research Institute and at the Center for Advanced Study in the Behavioral Sciences at Stanford, as well as a Visiting Professor at the Sloan School of MIT, the Department of Statistics at Harvard and the Department of Biostatistics at Harvard. In 1959 he married Carol Tenowitz. They have two children and four grandchildren. Professor Parzen has authored or coauthored over 100 papers and 6 books. He has served on innumerable editorial boards and national committees, and has organized several influential conferences and workshops. He has directed the research of many graduate students and provided advice, encouragement and collaboration to students and colleagues around the world. To honor these contributions, he has been elected a Fellow of the American Statistical Association, of the Institute of Mathematical Statistics and of the American Association for the Advancement of Science. In 1994, he was awarded the prestigious Samuel S. Wilks Memorial Medal by the American Statistical Association.

Journal ArticleDOI
TL;DR: In this article, the authors discuss various technical problems that have arisen in attempting to implement the Telecommunications Act of 1996, the purpose of which was to ensure fair competition in the local telecommunications market.
Abstract: We discuss various technical problems that have arisen in attempting to implement the Telecommunications Act of 1996, the purpose of which was to ensure fair competition in the local telecommunications market. We treat the interpretation of the "parity" requirement, testing the parity hypothesis, the effect of correlation, disaggregation and reaggregation, "balancing," benchmarks, payment schedules and some computational problems. Also we discuss the difficulty of working in an adversarial (rather than scientific) environment.
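
A minimal sketch of one simple form a parity test can take, on assumed data rather than the procedures at issue in the actual proceedings: compare repair intervals for the incumbent's own customers with those it provides to a competitor's customers using a one-sided two-sample t statistic, where rejection indicates worse-than-parity service.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)

# Hypothetical repair times (hours): the incumbent's own customers (ILEC)
# versus customers of a competing local carrier (CLEC) served by the incumbent.
ilec = rng.exponential(scale=8.0, size=400)
clec = rng.exponential(scale=10.0, size=60)

# Parity hypothesis: the CLEC mean is no larger than the ILEC mean.
# One-sided Welch t test; a small p-value suggests a parity violation.
t_stat, p_value = stats.ttest_ind(clec, ilec, equal_var=False, alternative='greater')
print(f"t = {t_stat:.2f}, one-sided p = {p_value:.4f}")
```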

Journal ArticleDOI
TL;DR: Kotz is the senior coeditor-in-chief of the thirteen-volume Encyclopedia of Statistical Sciences, an author or coauthor of over one hundred and fifty articles on statistical methodology and theory, twelve books in the field of statistics and quality control and three Russian-English scientific dictionaries, and coauthor of the often-cited compendium of statistical distributions.
Abstract: Samuel Kotz was born in Harbin, China, on August 28, 1930. After graduating with honors in 1946 from the Russian School in Harbin, he studied electrical engineering at Harbin Institute of Technology during 1947-1949. In 1949 he emigrated to Israel, where, after two years of military service, he studied at the Hebrew University in Jerusalem, obtaining an M.A. with honors in Mathematics in 1956. Following two years at the Israeli Meteorological Service, he entered graduate school at Cornell University and obtained a Ph.D. degree in Mathematics in 1960. After research positions at the University of North Carolina, Chapel Hill, and the University of Toronto, he joined the latter institution as an Associate Professor in 1964. He moved to Temple University, Philadelphia, in 1967 as Professor of Mathematics and then to the University of Maryland, College Park, in 1979 as Professor in the College of Business and Management. He took early retirement and moved to George Washington University in 1997. Samuel Kotz has made substantial contributions in several areas of statistics--including systems of distributions, measures of dependence, multivariate analysis, characterizations, limit distributions, replacement theory, quality control, information theory and applications of statistics. He is the senior coeditor-in-chief of the thirteen-volume Encyclopedia of Statistical Sciences, an author or coauthor of over one hundred and fifty articles on statistical methodology and theory, twelve books in the field of statistics and quality control and three Russian-English scientific dictionaries and coauthor of the often-cited compendium of statistical distributions. His efforts, excellence and contributions were recognized by the award of honorary Doctor of Science degrees from Harbin Institute of Technology (China) in 1988, from the University of Athens (Greece) in 1995 and from Bowling Green State University (Ohio, USA) in 1997. In 1997 a volume containing thirty-eight essays was published in honor of his sixty-fifth birthday. He was awarded membership in the Washington Academy of Sciences in 1998. He is a Fellow of the Royal Statistical Society, Fellow of the American Statistical Association, Fellow of the Institute of Mathematical Statistics and an elected member of the International Statistical Institute.


Journal ArticleDOI
TL;DR: Professor Mardia has made pioneering contributions in many areas of statistics including multivariate analysis, directional data analysis, shape analysis, and spatial statistics and has been credited for path-breaking contributions in geostatistics, imaging, machine vision, tracking, and spatio-temporal modeling.
Abstract: Kantilal Vardichand Mardia was born on April 3, 1935, in Sirohi, Rajasthan, India. He earned his B.Sc. degree in mathematics from Ismail Yusuf College–University of Bombay, in 1955, M.Sc. degrees in statistics and in pure mathematics from University of Bombay in 1957 and University of Poona in 1961, respectively, and Ph.D. degrees in statistics from the University of Rajasthan and the University of Newcastle, respectively, in 1965 and 1967. For significant contributions in statistics, he was awarded a D.Sc. degree from the University of Newcastle in 1973. He started his career as an Assistant Lecturer in the Institute of Science, Bombay and went to Newcastle as a Commonwealth Scholar. After receiving the Ph.D. degree from Newcastle, he joined the University of Hull as a lecturer in statistics in 1967, later becoming a reader in statistics in 1971. He was appointed a Chair Professor in Applied Statistics at the University of Leeds in 1973 and was the Head of the Department of Statistics during 1976–1993, and again from 1997 to the present. Professor Mardia has made pioneering contributions in many areas of statistics including multivariate analysis, directional data analysis, shape analysis, and spatial statistics. He has been credited for path-breaking contributions in geostatistics, imaging, machine vision, tracking, and spatio-temporal modeling, to name a few. He was instrumental in the founding of the Center of Medical Imaging Research in Leeds and he holds the position of a joint director of this internationally eminent center. He has pushed hard in creating exchange programs between Leeds and other scholarly centers such as the University of Granada, Spain, and the Indian Statistical Institute, Calcutta. He has written several scholarly books and edited conference proceedings and other special volumes. But perhaps he is best known for his books: Multivariate Analysis (coauthored with John Kent and John Bibby, 1979, Academic Press), Statistics of Directional Data (second edition with Peter Jupp, 1999, Wiley) and Statistical Shape Analysis (coauthored with Ian Dryden, 1998, Wiley). The conferences and workshops he has been organizing in Leeds for a number of years have had significant impacts on statistics and its interface with IT (information technology). He is dynamic and his sense of humor is unmistakable. He is a world traveler. Among other places, he has visited Princeton University, the University of Michigan, Harvard University, the University of Granada, Penn State and the University of Connecticut. He has given keynote addresses and invited lectures in international conferences on numerous occasions. He has been on the editorial board of statistical, as well as image related, journals including the IEEE Transactions on Pattern Analysis and Machine Intelligence, Journal of Environmental and Ecological Statistics, Journal of Statistical Planning and Inference and Journal of Applied Statistics. He has been elected a Fellow of the American Statistical Association, a Fellow of the Institute of Mathematical Statistics, and a Fellow of the American Dermatoglyphic Association. He is also an elected member of the International Statistical Institute and a Senior Member of IEEE. Professor Mardia retired on September 30, 2000 to take a full-time post as Senior Research Professor at Leeds—a new position especially created for him.

Journal ArticleDOI
TL;DR: Godambe is the recipient of the 1987 Gold Medal of the Statistical Society of Canada and an Honorary Member of that society, as well as a Fellow of the American Statistical Association and of the Institute of Mathematical Statistics.
Abstract: Vidyadhar Prabhakar Godambe was born on June 1, 1926, in Pune, India. He received the M.Sc. degree in Statistics from Bombay University in 1950 and the Ph.D. from the University of London in 1958. Between periods of study, from 1951 to 1955, he was a Research Officer in the Bureau of Economics and Statistics of the Government of Bombay. Following a year as Visiting Lecturer at the University of California, Berkeley (1957–1958), and a year as Senior Research Fellow at the Indian Statistical Institute in Calcutta (1958–1959), he became Professor and Head of the Statistics Department at Science College in Nagpur. He was promoted to the position of Professor and Head of the Statistics Department in the Institute of Science, Bombay University, in 1962. In 1964, he left India for North America, becoming for one year a Research Statistician at the Dominion Bureau of Statistics in Ottawa. After subsequent Visiting Professorships at Johns Hopkins University and the University of Michigan, he joined the University of Waterloo Department of Statistics in 1967 and has been at Waterloo ever since. Professor Godambe is a Fellow of the American Statistical Association, a Fellow of the Institute of Mathematical Statistics, a Member of the International Statistical Institute, and an Honorary Fellow of the International Indian Statistical Association. He is the recipient of the 1987 Gold Medal of the Statistical Society of Canada and is an Honorary Member of that society. Upon his retirement in 1991 he was awarded the title of Distinguished Professor Emeritus at the University of Waterloo. The following conversation took place in Waterloo in August 2000, mainly by correspondence. Some of the responses are taken from "Briefly about myself," an autobiographical piece written by Professor Godambe in 1998–2000.