Topic

Credit card

About: Credit card is a research topic. Over its lifetime, 16,998 publications have been published within this topic, receiving 347,688 citations. The topic is also known as: 💳 & App-O-Rama.


Papers
Journal Article
TL;DR: Defection rates are not just a measure of service quality; they are also a guide for achieving it: by listening to the reasons why customers defect, managers learn exactly where the company is falling short and where to direct their resources.
Abstract: Companies that want to improve their service quality should take a cue from manufacturing and focus on their own kind of scrap heap: customers who won't come back. Because that scrap heap can be every bit as costly as broken parts and misfit components, service company managers should strive to reduce it. They should aim for "zero defections"--keeping every customer they can profitably serve. As companies reduce customer defection rates, amazing things happen to their financials. Although the magnitude of the change varies by company and industry, the pattern holds: profits rise sharply. Reducing the defection rate just 5% generates 85% more profits in one bank's branch system, 50% more in an insurance brokerage, and 30% more in an auto-service chain. And when MBNA America, a Delaware-based credit card company, cut its 10% defection rate in half, profits rose a whopping 125%. But defection rates are not just a measure of service quality; they are also a guide for achieving it. By listening to the reasons why customers defect, managers learn exactly where the company is falling short and where to direct their resources. Staples, the stationery supplies retailer, uses feedback from customers to pinpoint products that are priced too high. That way, the company avoids expensive broad-brush promotions that pitch everything to everyone. Like any important change, managing for zero defections requires training and reinforcement. Great-West Life Assurance Company pays a 50% premium to group health-insurance brokers that hit customer-retention targets, and MBNA America gives bonuses to departments that hit theirs.

5,915 citations

Journal Article (DOI)
01 May 1995
TL;DR: A critical survey of existing literature on human and machine recognition of faces is presented, followed by a brief overview of the literature on face recognition in the psychophysics community and a detailed overview of more than 20 years of research done in the engineering community.
Abstract: The goal of this paper is to present a critical survey of existing literature on human and machine recognition of faces. Machine recognition of faces has several applications, ranging from static matching of controlled photographs, as in mug shot matching and credit card verification, to surveillance video images. Such applications have different constraints in terms of complexity of processing requirements and thus present a wide range of different technical challenges. Over the last 20 years, researchers in psychophysics, neural sciences and engineering, image processing analysis and computer vision have investigated a number of issues related to face recognition by humans and machines. Ongoing research activities have been given a renewed emphasis over the last five years. Existing techniques and systems have been tested on different sets of images of varying complexities, but very little synergism exists between studies in psychophysics and the engineering literature. Most importantly, there exist no evaluation or benchmarking studies using large databases with the image quality that arises in commercial and law enforcement applications. In this paper, we first present different applications of face recognition in commercial and law enforcement sectors. This is followed by a brief overview of the literature on face recognition in the psychophysics community. We then present a detailed overview of more than 20 years of research done in the engineering community. Techniques for segmentation/location of the face, feature extraction and recognition are reviewed. Global transform and feature-based methods using statistical, structural and neural classifiers are summarized.
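One family of machine approaches the survey reviews is global-transform matching, of which eigenfaces-style principal component analysis is a well-known example. The sketch below is an illustrative numpy implementation of that idea, not a method prescribed by the survey: gallery faces are projected onto their principal components ("face space") and a probe is matched to the nearest gallery projection. The number of components and the nearest-neighbour rule are assumptions chosen for brevity.

# Minimal eigenfaces-style sketch of a "global transform" face matcher.
# Assumes images are already segmented, aligned, and flattened to vectors.
import numpy as np

def fit_eigenfaces(train_images, n_components=20):
    """train_images: (n_samples, n_pixels) array of flattened, aligned face images."""
    mean_face = train_images.mean(axis=0)
    centered = train_images - mean_face
    # Principal components of the gallery faces (the "eigenfaces").
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    components = vt[:n_components]
    train_codes = centered @ components.T  # project gallery faces into face space
    return mean_face, components, train_codes

def match(probe_image, mean_face, components, train_codes):
    """Return the index of the gallery face whose projection is nearest to the probe's."""
    code = (probe_image - mean_face) @ components.T
    distances = np.linalg.norm(train_codes - code, axis=1)
    return int(np.argmin(distances))

Real systems reviewed in the survey precede any such matching step with face segmentation/location and feature extraction, and are evaluated on far larger databases than this toy gallery assumes.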

2,727 citations

Proceedings Article
04 Aug 2001
TL;DR: It is argued that changing the balance of negative and positive training examples has little effect on the classifiers produced by standard Bayesian and decision tree learning methods, and the recommended way of applying one of these methods is to learn a classifier from the training set and then to compute optimal decisions explicitly using the probability estimates given by the classifier.
Abstract: This paper revisits the problem of optimal learning and decision-making when different misclassification errors incur different penalties. We characterize precisely but intuitively when a cost matrix is reasonable, and we show how to avoid the mistake of defining a cost matrix that is economically incoherent. For the two-class case, we prove a theorem that shows how to change the proportion of negative examples in a training set in order to make optimal cost-sensitive classification decisions using a classifier learned by a standard non-cost-sensitive learning method. However, we then argue that changing the balance of negative and positive training examples has little effect on the classifiers produced by standard Bayesian and decision tree learning methods. Accordingly, the recommended way of applying one of these methods in a domain with differing misclassification costs is to learn a classifier from the training set as given, and then to compute optimal decisions explicitly using the probability estimates given by the classifier.

1. Making decisions based on a cost matrix. Given a specification of costs for correct and incorrect predictions, an example should be predicted to have the class that leads to the lowest expected cost, where the expectation is computed using the conditional probability of each class given the example. Mathematically, let C(i, j) be the entry in a cost matrix giving the cost of predicting class i when the true class is j. If i = j the prediction is correct, while if i ≠ j the prediction is incorrect. The optimal prediction for an example x is the class i that minimizes

L(x, i) = Σ_j P(j | x) · C(i, j).    (1)

Costs are not necessarily monetary; a cost can also be a waste of time, or the severity of an illness, for example. For each i, L(x, i) is a sum over the alternative possibilities for the true class of x. In this framework, the role of a learning algorithm is to produce a classifier that, for any example x, can estimate the probability P(j | x) of each class j being the true class of x. For an example x, making the prediction i means acting as if i is the true class of x. The essence of cost-sensitive decision-making is that it can be optimal to act as if one class is true even when some other class is more probable. For example, it can be rational not to approve a large credit card transaction even if the transaction is most likely legitimate.

1.1 Cost matrix properties. A cost matrix C always has the following structure when there are only two classes:

                    actual negative     actual positive
predict negative    C(0,0) = c00        C(0,1) = c01
predict positive    C(1,0) = c10        C(1,1) = c11

Recent papers have followed the convention that cost matrix rows correspond to alternative predicted classes, while columns correspond to actual classes, i.e. row/column = i/j = predicted/actual. In this notation, the cost of a false positive is c10 while the cost of a false negative is c01. Conceptually, the cost of labeling an example incorrectly should always be greater than the cost of labeling it correctly; mathematically, it should always be the case that c10 > c00 and c01 > c11. We call these the "reasonableness" conditions. Suppose the first reasonableness condition is violated, so c00 ≥ c10 but still c01 > c11. In this case the optimal policy is to label all examples positive. Similarly, if c10 > c00 but c11 ≥ c01, it is optimal to label all examples negative. More generally, row m dominates row m' in a cost matrix if C(m, j) ≥ C(m', j) for all j. In this case the cost of predicting m' is no greater than the cost of predicting m, regardless of what the true class is, so it is optimal never to predict m. As a special case, the optimal prediction is always m' if row m' is dominated by all other rows in the cost matrix. The two reasonableness conditions for a two-class cost matrix imply that neither row in the matrix dominates the other. Given a cost matrix, the decisions that are optimal are unchanged if each entry in the matrix is multiplied by a positive constant; this scaling corresponds to changing the unit of account for costs. Similarly, the decisions that are optimal are unchanged if a constant is added to each entry in the matrix; this shifting corresponds to changing the baseline away from which costs are measured. By scaling and shifting entries, any two-class cost matrix that satisfies the reasonableness conditions can be transformed into a simpler matrix that always leads to the same decisions.
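To make equation (1) concrete, here is a minimal Python sketch of the decision rule described above; the class labels, cost values, and probability estimates are illustrative assumptions, not figures from the paper. It reproduces the credit card example: even when a transaction is most likely legitimate, the expected cost of approving it can exceed the expected cost of declining it.

# Cost-sensitive decision-making per equation (1):
# L(x, i) = sum_j P(j | x) * C(i, j); predict the class i that minimizes L(x, i).
# The cost matrix and probabilities below are illustrative assumptions.

def expected_costs(probs, cost_matrix):
    """probs[j] = P(j | x); cost_matrix[i][j] = cost of predicting i when the true class is j."""
    return [
        sum(p * c for p, c in zip(probs, row))  # L(x, i) for each candidate prediction i
        for row in cost_matrix
    ]

def optimal_prediction(probs, cost_matrix):
    costs = expected_costs(probs, cost_matrix)
    return min(range(len(costs)), key=costs.__getitem__)

# Two classes: 0 = legitimate transaction ("negative"), 1 = fraudulent ("positive").
# Rows = predicted class, columns = actual class, as in the abstract:
# the false positive cost c10 (decline a legitimate transaction) is small,
# the false negative cost c01 (approve a fraudulent transaction) is large.
cost_matrix = [
    [0.0, 500.0],   # predict legitimate: c00, c01
    [10.0, 0.0],    # predict fraudulent: c10, c11
]

# The classifier believes this large transaction is most likely legitimate...
probs = [0.95, 0.05]  # P(legitimate | x), P(fraudulent | x)

print(expected_costs(probs, cost_matrix))      # [25.0, 9.5]
print(optimal_prediction(probs, cost_matrix))  # 1 -> decline anyway: approving has higher expected cost

With c00 = c11 = 0 as in this toy matrix, minimizing L(x, i) reduces to predicting the positive (fraud) class whenever P(positive | x) exceeds c10 / (c10 + c01), here 10/510 ≈ 0.02, which is why even a 5% fraud probability is enough to decline.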

2,113 citations

Proceedings Article
01 Jan 2002
TL;DR: In this article, the authors integrate sociological and economic theories about institution-based trust to propose that the perceived effectiveness of three IT-enabled institutional mechanisms (feedback mechanisms, third-party escrow services, and credit card guarantees) engenders buyer trust in the community of online auction sellers.
Abstract: Institution-based trust is a buyer's perception that effective third-party institutional mechanisms are in place to facilitate transaction success. This paper integrates sociological and economic theories about institution-based trust to propose that the perceived effectiveness of three IT-enabled institutional mechanisms (specifically feedback mechanisms, third-party escrow services, and credit card guarantees) engenders buyer trust in the community of online auction sellers. Trust in the marketplace intermediary that provides the overarching institutional context also builds buyers' trust in the community of sellers. In addition, buyers' trust in the community of sellers (as a group) facilitates online transactions by reducing perceived risk. Data collected from 274 buyers in Amazon's online auction marketplace provide support for the proposed structural model. Longitudinal data collected a year later show that transaction intentions are correlated with actual and self-reported buyer behavior. The study shows that the perceived effectiveness of institutional mechanisms encompasses both "weak" (market-driven) and "strong" (legally binding) mechanisms. These mechanisms engender trust not only in a few reputable sellers but also in the entire community of sellers, which contributes to an effective online marketplace. The results thus help explain why, despite the inherent uncertainty that arises when buyers and sellers are separated in time and in space, online marketplaces are proliferating. Implications for theory are discussed, and suggestions for future research on improving IT-enabled trust-building mechanisms are offered.

2,044 citations

Journal Article (DOI)
TL;DR: In this article, the authors integrate sociological and economic theories about institution-based trust to propose that the perceived effectiveness of three IT-enabled institutional mechanisms (feedback mechanisms, third-party escrow services, and credit card guarantees) engenders buyer trust in the community of online auction sellers.
Abstract: Institution-based trust is a buyer's perception that effective third-party institutional mechanisms are in place to facilitate transaction success. This paper integrates sociological and economic theories about institution-based trust to propose that the perceived effectiveness of three IT-enabled institutional mechanisms (specifically feedback mechanisms, third-party escrow services, and credit card guarantees) engenders buyer trust in the community of online auction sellers. Trust in the marketplace intermediary that provides the overarching institutional context also builds buyers' trust in the community of sellers. In addition, buyers' trust in the community of sellers (as a group) facilitates online transactions by reducing perceived risk. Data collected from 274 buyers in Amazon's online auction marketplace provide support for the proposed structural model. Longitudinal data collected a year later show that transaction intentions are correlated with actual and self-reported buyer behavior. The study shows that the perceived effectiveness of institutional mechanisms encompasses both "weak" (market-driven) and "strong" (legally binding) mechanisms. These mechanisms engender trust not only in a few reputable sellers but also in the entire community of sellers, which contributes to an effective online marketplace. The results thus help explain why, despite the inherent uncertainty that arises when buyers and sellers are separated in time and in space, online marketplaces are proliferating. Implications for theory are discussed, and suggestions for future research on improving IT-enabled trust-building mechanisms are offered.

1,950 citations


Network Information
Related Topics (5)
Empirical research: 51.3K papers, 1.9M citations, 83% related
The Internet: 213.2K papers, 3.8M citations, 81% related
Competitive advantage: 46.6K papers, 1.5M citations, 79% related
Consumer behaviour: 24.6K papers, 992.9K citations, 78% related
Social media: 76K papers, 1.1M citations, 78% related
Performance Metrics
No. of papers in the topic in previous years:

Year    Papers
2023    337
2022    670
2021    410
2020    560
2019    550
2018    589