Journal ArticleDOI

Pattern Recognition and Machine Learning

01 Aug 2007-Technometrics (Taylor & Francis)-Vol. 49, Iss: 3, pp 366-366
TL;DR: This book covers a broad range of topics for regular factorial designs, presents all of the material in a very mathematical fashion, and will surely become an invaluable resource for researchers and graduate students doing research in the design of factorial experiments.
Abstract: (2007). Pattern Recognition and Machine Learning. Technometrics: Vol. 49, No. 3, pp. 366-366.
Citations
Journal ArticleDOI
TL;DR: A Modified Fisher Discriminant Function is proposed in this study which makes the traditional function more sensitive to the important instances, so that the profit that can be obtained from a fraud/legitimate classifier is maximized.
Abstract: We introduce Fisher Linear Discriminant Analysis (FLDA), modify it to be sensitive toward profitable instances, and apply both variants to the credit card fraud detection problem; compared in terms of total obtained profit with three well-known models, the modified Fisher discriminant attains the highest profit. In parallel with the increase in the number of credit card transactions, financial losses due to fraud have also increased, so credit card fraud detection has grown in popularity among both academics and banks. Many supervised learning methods have been introduced in the credit card fraud literature, some of which rely on quite complex algorithms. Compared to complex algorithms, which tend to over-fit the dataset they are built on, simpler algorithms can be expected to show more robust performance across a range of datasets. Although linear discriminant functions are less complex classifiers and can work on high-dimensional problems such as credit card fraud detection, they have received little attention so far. This study investigates a linear discriminant, the Fisher Discriminant Function, for the first time in the credit card fraud detection problem. In this and some other domains, the cost of a false negative is much higher than that of a false positive and differs for each transaction, so it is necessary to develop classification methods that are biased toward the most important instances. To cope with this, a Modified Fisher Discriminant Function is proposed in this study, which makes the traditional function more sensitive to the important instances; in this way, the profit that can be obtained from a fraud/legitimate classifier is maximized. Experimental results confirm that the Modified Fisher Discriminant yields more profit.

147 citations


Cites background or methods from "Pattern Recognition and Machine Lea..."

  • ...The Fisher Criterion (Christopher, 2006) is defined as the ratio of the between-class variance to the within-class variance: J(W) = (m0 − m1)² / (S0² + S1²), where m0 and m1 are the means of the classes while S0² and S1² are the scatters of the classes....

    [...]

  • ...Fisher Linear Discriminant or linear classifier (Christopher, 2006; Fisher, 1936; Fukunaga, 1990; McLachlan, 2004) utilizes dimension reduction method to find the best (D-1)-dimensional hyperplane(s) which can divide a D-dimensional space into two or more subspaces....

    [...]

  • ...The modification in the Fisher Criterion which makes it profitable is to apply weighted average for both classes where the weights are defined as total available usable limits on each credit card....

    [...]
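The quoted Fisher criterion, and the weighted modification described in the abstract, can be sketched in a few lines. This is an illustrative reading of the cited paper, not its actual code; in particular, using per-instance weights (e.g. usable credit limits) in the class means and scatters is an assumption drawn from the quoted modification.

```python
import numpy as np

def fisher_criterion(x0, x1):
    """Fisher criterion for 1-D projected samples of two classes:
    J = (m0 - m1)^2 / (S0^2 + S1^2), where S_k^2 is the class scatter."""
    m0, m1 = x0.mean(), x1.mean()
    s0 = ((x0 - m0) ** 2).sum()
    s1 = ((x1 - m1) ** 2).sum()
    return (m0 - m1) ** 2 / (s0 + s1)

def weighted_fisher_criterion(x0, w0, x1, w1):
    """Hypothetical weighted variant sketched from the abstract: means and
    scatters become weighted averages, with weights w (e.g. the usable
    credit limit of each card) emphasising high-profit instances."""
    m0 = np.average(x0, weights=w0)
    m1 = np.average(x1, weights=w1)
    s0 = np.average((x0 - m0) ** 2, weights=w0) * len(x0)
    s1 = np.average((x1 - m1) ** 2, weights=w1) * len(x1)
    return (m0 - m1) ** 2 / (s0 + s1)
```

With uniform weights the weighted variant reduces to the standard criterion; larger weights on costly instances pull the discriminant toward separating them correctly.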

Journal ArticleDOI
TL;DR: A component-specific feature descriptor is first produced for each monogenic component, and the resulting features are fed into a joint sparse representation model to exploit the intercorrelation among multiple tasks.
Abstract: In this paper, classification via sparse representation and multitask learning is presented for target recognition in SAR images. To capture the characteristics of SAR images, a multidimensional generalization of the analytic signal, namely the monogenic signal, is employed. The original signal can then be orthogonally decomposed into three components: 1) local amplitude; 2) local phase; and 3) local orientation. Since these components represent different kinds of information, it is beneficial to consider them jointly in a unifying framework. However, the components cannot be utilized directly because of their high dimension and redundancy. An intuitive solution is to define an augmented feature vector by concatenating the components, but this strategy usually incurs some information loss. To overcome this shortcoming, this paper treats the three components as different learning tasks among which common information can be shared. Specifically, a component-specific feature descriptor is first produced for each monogenic component. Inspired by the recent success of multitask learning, the resulting features are then fed into a joint sparse representation model to exploit the intercorrelation among multiple tasks. The inference is reached in terms of the total reconstruction error accumulated from all tasks. The novelty of this paper includes 1) the development of three component-specific feature descriptors; 2) the introduction of multitask learning into the sparse representation model; 3) the numerical implementation of the proposed method; and 4) extensive comparative experimental studies on the MSTAR SAR dataset, including target recognition under standard operating conditions as well as extended operating conditions, and the capability of outlier rejection.

147 citations

Journal ArticleDOI
TL;DR: A derivation of AP that is much simpler than the original one and is based on a quite different graphical model is presented, which allows easy derivations of message updates for extensions and modifications of the standard AP algorithm.
Abstract: Affinity propagation (AP) was recently introduced as an unsupervised learning algorithm for exemplar-based clustering. We present a derivation of AP that is much simpler than the original one and is based on a quite different graphical model. The new model allows easy derivations of message updates for extensions and modifications of the standard AP algorithm. We demonstrate this by adjusting the new AP model to represent the capacitated clustering problem. For those wishing to investigate or extend the graphical model of the AP algorithm, we suggest using this new formulation since it allows a simpler and more intuitive model manipulation.

147 citations


Cites methods from "Pattern Recognition and Machine Lea..."

  • ...Recall the max-sum message update rules (Bishop, 2006): μ_{x→f}(x) = Σ_{l | f_l ∈ ne(x)\f} μ_{f_l→x}(x), (2.4) and μ_{f→x}(x) = max_{x_1,…,x_M} [ f(x, x_1, …, x_M) + Σ_{m | x_m ∈ ne(f)\x} μ_{x_m→f}(x_m) ], (2.5) where the notation ne(x)\f is used to indicate the set of variable node x's neighbors excluding…...

    [...]

  • ...We compare the CAP algorithm to capacitated k-medoids (CKM)—a variant of k-medoids (KM; Bishop, 2006) that was adapted to greedily account for the cluster size limit. The measure we use to compare the algorithms is the…

    [...]

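The max-sum updates quoted above underlie affinity propagation's responsibility and availability messages. As a hedged sketch, the standard AP iteration in Frey and Dueck's original formulation (not the paper's new graphical-model derivation) can be written in a few lines of NumPy:

```python
import numpy as np

def affinity_propagation(S, max_iter=200, damping=0.5):
    """Minimal affinity propagation: exemplar-based clustering by message
    passing on a similarity matrix S (diagonal = exemplar preferences).
    R[i, k] is the responsibility, A[i, k] the availability."""
    n = S.shape[0]
    R = np.zeros((n, n))
    A = np.zeros((n, n))
    for _ in range(max_iter):
        # Responsibilities: r(i,k) = s(i,k) - max_{k' != k} (a(i,k') + s(i,k'))
        AS = A + S
        idx = np.argmax(AS, axis=1)
        first = AS[np.arange(n), idx].copy()
        AS[np.arange(n), idx] = -np.inf
        second = AS.max(axis=1)
        R_new = S - first[:, None]
        R_new[np.arange(n), idx] = S[np.arange(n), idx] - second
        R = damping * R + (1 - damping) * R_new
        # Availabilities: a(i,k) = min(0, r(k,k) + sum_{i' not in {i,k}} max(0, r(i',k)))
        Rp = np.maximum(R, 0)
        np.fill_diagonal(Rp, R.diagonal())
        A_new = Rp.sum(axis=0)[None, :] - Rp
        dA = A_new.diagonal().copy()  # a(k,k) = sum_{i' != k} max(0, r(i',k))
        A_new = np.minimum(A_new, 0)
        np.fill_diagonal(A_new, dA)
        A = damping * A + (1 - damping) * A_new
    # Each point's exemplar is the k maximising a(i,k) + r(i,k).
    return np.argmax(A + R, axis=1)
```

Lowering the diagonal preferences yields fewer exemplars; damping (0.5 here) is the usual guard against oscillation.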
Book ChapterDOI
01 Jan 2013
TL;DR: OPEM, a hybrid unknown-malware detector that combines the frequency of occurrence of operational codes with information from the execution trace of an executable (dynamically obtained), is proposed for the first time; this hybrid approach is shown to enhance the performance of both approaches when run separately.
Abstract: Malware is any computer software potentially harmful to computers and networks. The amount of malware grows every year and poses a serious global security threat. Signature-based detection is the most widely used method in commercial antivirus software; however, it consistently fails to detect new malware. Supervised machine learning has been adopted to address this issue. Supervised malware detectors use two types of features: (i) static features and (ii) dynamic features. Static features are extracted without executing the sample, whereas dynamic ones require an execution. Both approaches have their advantages and disadvantages. In this paper, we propose for the first time OPEM, a hybrid unknown-malware detector which combines the frequency of occurrence of operational codes (statically obtained) with information from the execution trace of an executable (dynamically obtained). We show that this hybrid approach enhances the performance of both approaches when run separately.
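OPEM's static side rests on opcode-frequency features concatenated with dynamic trace features. A minimal sketch of such a hybrid feature vector, with hypothetical vocabularies and event names (the actual OPEM feature sets are not given here), could be:

```python
from collections import Counter

def frequency_vector(tokens, vocabulary):
    """Normalised frequency of each vocabulary item in a token sequence;
    used both for statically extracted opcodes and dynamic trace events."""
    counts = Counter(tokens)
    total = len(tokens) or 1
    return [counts[t] / total for t in vocabulary]

def hybrid_features(opcodes, trace_events, op_vocab, event_vocab):
    """Concatenate static opcode frequencies with dynamic execution-trace
    event frequencies, mirroring OPEM's hybrid static+dynamic idea."""
    return (frequency_vector(opcodes, op_vocab)
            + frequency_vector(trace_events, event_vocab))
```

The resulting vector would then be fed to any supervised classifier; the opcode and event vocabularies here are illustrative placeholders.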

147 citations


Cites methods from "Pattern Recognition and Machine Lea..."

  • ...– Cross validation: To evaluate the performance of machine-learning classifiers, k-fold cross validation is usually used in machine-learning experiments [17]....

    [...]
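The k-fold cross validation mentioned in the quote can be sketched as follows; this index-splitting helper is illustrative, not the evaluation code used in the cited paper.

```python
import random

def k_fold_indices(n, k, seed=0):
    """Yield (train, test) index lists for k-fold cross validation:
    shuffle the n indices once, split them into k disjoint folds, and
    let each fold serve as the held-out test set exactly once."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    folds = [idx[i::k] for i in range(k)]
    for i in range(k):
        test = sorted(folds[i])
        train = sorted(j for f in folds[:i] + folds[i + 1:] for j in f)
        yield train, test
```

Each sample appears in exactly one test fold, so averaging the classifier's score over the k splits estimates generalisation performance on unseen data.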

Journal ArticleDOI
20 Nov 2013-PLOS ONE
TL;DR: A theory is presented that describes signaling as a combination of a pragmatic and a communicative action, explains how it simplifies coordination in online social interactions, and casts signaling within a “joint action optimization” framework in which co-actors optimize the success of their interaction and joint goals rather than only their part of the joint action.
Abstract: Although the importance of communication is recognized in several disciplines, it is rarely studied in the context of online social interactions and joint actions. During online joint actions, language and gesture are often insufficient and humans typically use non-verbal, sensorimotor forms of communication to send coordination signals. For example, when playing volleyball, an athlete can exaggerate her movements to signal her intentions to her teammates (say, a pass to the right) or to feint an adversary. Similarly, a person who is transporting a table together with a co-actor can push the table in a certain direction to signal where and when he intends to place it. Other examples of “signaling” are over-articulating in noisy environments and over-emphasizing vowels in child-directed speech. In all these examples, humans intentionally modify their action kinematics to make their goals easier to disambiguate. At the moment no formal theory exists of these forms of sensorimotor communication and signaling. We present one such theory that describes signaling as a combination of a pragmatic and a communicative action, and explains how it simplifies coordination in online social interactions. We cast signaling within a “joint action optimization” framework in which co-actors optimize the success of their interaction and joint goals rather than only their part of the joint action. The decision of whether and how much to signal requires solving a trade-off between the costs of modifying one’s behavior and the benefits in terms of interaction success. Signaling is thus an intentional strategy that supports social interactions; it acts in concert with automatic mechanisms of resonance, prediction, and imitation, especially when the context makes actions and intentions ambiguous and difficult to read. 
Our theory suggests that communication dynamics should be studied within theories of coordination and interaction rather than only in terms of the maximization of information transmission.

147 citations