
Showing papers by "Chris J. Mitchell published in 2019"


Journal ArticleDOI
TL;DR: In this paper, the authors examined the effect of generating errors versus studying on item recognition, cued recall, associative recognition, two-alternative forced choice and multiple-choice performance.

19 citations


Journal ArticleDOI
TL;DR: When baseline response choice was equated and only one outcome was primed per test trial, PIT was sensitive to outcome devaluation; the data therefore support goal-directed models of PIT.
Abstract: The current article concerns human outcome-selective Pavlovian-instrumental transfer (PIT), where Pavlovian cues selectively invigorate instrumental responses that predict common rewarding outcomes. Several recent experiments have observed PIT effects that were insensitive to outcome devaluation manipulations, which has been taken as evidence of an automatic "associative" mechanism. Other similar studies observed PIT effects that were sensitive to devaluation, which suggests a more controlled, goal-directed process. Studies supporting the automatic approach have been criticized for using a biased baseline, whereas studies supporting the goal-directed approach have been criticized for priming multiple outcomes at test. The current experiment addressed both of these issues. Participants first learned to perform two instrumental responses to earn two outcomes each (R1-O1/O3, R2-O2/O4), before four Pavlovian stimuli (S1-S4) were trained to predict each outcome. One outcome that was paired with each instrumental response (O3 and O4) was then devalued, so that baseline response choice at test would be balanced. Instrumental responding was then assessed in the presence of each individual Pavlovian stimulus, so that only one outcome was primed per trial. PIT effects were observed for the valued outcomes (ts > 3.96, ps < .001) but not for the devalued outcomes (F < 1, Bayes factor BF10 = .29). Hence, when baseline response choice was equated and only one outcome was primed per test trial, PIT was sensitive to outcome devaluation. The data therefore support goal-directed models of PIT. (PsycINFO Database Record (c) 2019 APA, all rights reserved).
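
The design is compact but intricate; laid out as data, the goal-directed prediction the abstract tests becomes easy to read off. The sketch below is purely illustrative (the labels are the abstract's own; the code is not the authors' materials):

```python
# The training contingencies from the abstract, written out as data.
training = {
    "instrumental": {"R1": ["O1", "O3"], "R2": ["O2", "O4"]},
    "pavlovian":    {"S1": "O1", "S2": "O2", "S3": "O3", "S4": "O4"},
}
devalued = {"O3", "O4"}  # one outcome per response, balancing baseline choice

def goal_directed_prediction(stimulus: str) -> str:
    """A cue should invigorate the response that earns its outcome
    only while that outcome is still valued."""
    outcome = training["pavlovian"][stimulus]
    if outcome in devalued:
        return "no PIT effect"
    for response, outcomes in training["instrumental"].items():
        if outcome in outcomes:
            return "elevated " + response

for s in ("S1", "S2", "S3", "S4"):
    print(s, "->", goal_directed_prediction(s))
# S1 -> elevated R1, S2 -> elevated R2, S3 and S4 -> no PIT effect
```

This matches the reported result: PIT effects for the valued outcomes only.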

14 citations


Posted Content
TL;DR: OAuthGuard, an OAuth 2.0 and OpenID Connect vulnerability scanner and protector that works with RPs using Google OAuth 2.0 and OpenID Connect services, was developed; it protected user security and privacy for 56 of the 69 vulnerable RPs found, and warned users of the other 13 that they were using an insecure implementation.
Abstract: Millions of users routinely use Google to log in to websites supporting OAuth 2.0 or OpenID Connect; the security of OAuth 2.0 and OpenID Connect is therefore of critical importance. As revealed in previous studies, in practice RPs often implement OAuth 2.0 incorrectly, and so many real-world OAuth 2.0 and OpenID Connect systems are vulnerable to attack. However, users of such flawed systems are typically unaware of these issues, and so are at risk of attacks which could result in unauthorised access to the victim user's account at an RP. In order to address this threat, we have developed OAuthGuard, an OAuth 2.0 and OpenID Connect vulnerability scanner and protector, that works with RPs using Google OAuth 2.0 and OpenID Connect services. It protects user security and privacy even when RPs do not implement OAuth 2.0 or OpenID Connect correctly. We used OAuthGuard to survey the 1000 top-ranked websites supporting Google sign-in for the possible presence of five OAuth 2.0 or OpenID Connect security and privacy vulnerabilities, of which one has not previously been described in the literature. Of the 137 sites in our study that employ Google Sign-in, 69 were found to suffer from at least one serious vulnerability. OAuthGuard was able to protect user security and privacy for 56 of these 69 RPs, and for the other 13 was able to warn users that they were using an insecure implementation.
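
One concrete flavour of such a check, sketched below, is testing whether a Google authorization request carries a CSRF-protecting state parameter, a well-known OAuth 2.0 weakness. The sketch is an illustration rather than OAuthGuard's actual code (OAuthGuard is a browser extension, and its real checks cover five vulnerability classes):

```python
from urllib.parse import urlparse, parse_qs

GOOGLE_AUTH_HOST = "accounts.google.com"

def missing_state(auth_request_url: str) -> bool:
    """Return True if a Google authorization request lacks a 'state'
    parameter, leaving the RP open to cross-site request forgery."""
    parsed = urlparse(auth_request_url)
    if parsed.hostname != GOOGLE_AUTH_HOST:
        return False  # not a Google OAuth 2.0 / OpenID Connect request
    params = parse_qs(parsed.query)
    return "response_type" in params and "state" not in params

url = ("https://accounts.google.com/o/oauth2/v2/auth"
       "?client_id=example&response_type=code"
       "&redirect_uri=https://rp.example/cb&scope=openid")
print(missing_state(url))  # True: no state parameter in this request
```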

12 citations


Proceedings ArticleDOI
11 Nov 2019
TL;DR: OAuthGuard as mentioned in this paper is an OAuth 2.0 and OpenID Connect vulnerability scanner and protector, which works with RPs using Google OAuth 2.0 and OpenID Connect services.
Abstract: Millions of users routinely use Google to log in to websites supporting the standardised protocols OAuth 2.0 or OpenID Connect; the security of OAuth 2.0 and OpenID Connect is therefore of critical importance. As revealed in previous studies, in practice RPs often implement OAuth 2.0 incorrectly, and so many real-world OAuth 2.0 and OpenID Connect systems are vulnerable to attack. However, users of such flawed systems are typically unaware of these issues, and so are at risk of attacks which could result in unauthorised access to the victim user's account at an RP. In order to address this threat, we have developed OAuthGuard, an OAuth 2.0 and OpenID Connect vulnerability scanner and protector, that works with RPs using Google OAuth 2.0 and OpenID Connect services. It protects user security and privacy even when RPs do not implement OAuth 2.0 or OpenID Connect correctly. We used OAuthGuard to survey the 1000 top-ranked websites supporting Google sign-in for the possible presence of five OAuth 2.0 or OpenID Connect security and privacy vulnerabilities, of which one has not previously been described in the literature. Of the 137 sites in our study that employ Google Sign-in, 69 were found to suffer from at least one serious vulnerability. OAuthGuard was able to protect user security and privacy for 56 of these 69 RPs, and for the other 13 was able to warn users that they were using an insecure implementation.

11 citations


Journal ArticleDOI
TL;DR: Variations in the ratings of the blocked cue as a result of manipulating the outcome base rate can be explained if participants are uncertain about the status of the blocks, and data are consistent with the inferential account, but are more challenging for the associative analysis.
Abstract: The blocking phenomenon is one of the most enduring issues in the study of learning. Numerous explanations have been proposed, which fall into two main categories. An associative analysis states that, following A+/AX+ training, Cue A prevents an associative link from forming between X and the outcome. In contrast, an inferential explanation is that A+/AX+ training does not permit an inference that X causes the outcome. More specifically, the trials on which X is presented (AX+) are often argued to be uninformative with respect to the causal status of X because the outcome would have resulted on AX trials whether X was causal or not. If participants are uncertain about X, their ratings on test might be particularly sensitive to the overall base rate of the outcome. That is, a blocked cue, about which one is uncertain, should be rated as a more likely cause when most cues lead to the outcome than when most cues do not. This hypothesis was supported in 2 experiments. Experiment 1 used an overshadowing control and Experiment 2 used an uncorrelated control (to demonstrate a redundancy effect). Variations in the ratings of the blocked cue as a result of manipulating the outcome base rate can be explained if participants are uncertain about the status of the blocked cue. Experiment 3 showed that participants are uncertain about blocked cues by using a direct self-report measure of certainty. These data are consistent with the inferential account, but are more challenging for the associative analysis. (PsycINFO Database Record (c) 2019 APA, all rights reserved).
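
The associative analysis contrasted here is standardly formalised with the Rescorla-Wagner model, under which A+ pretraining leaves almost no prediction error to drive learning about X on AX+ trials. A minimal simulation (illustrative parameter values; not the authors' code) reproduces that logic alongside the overshadowing control of Experiment 1:

```python
ALPHA_BETA = 0.3   # combined learning-rate parameter for every cue
LAMBDA = 1.0       # asymptote of learning on reinforced (+) trials

def train(trials, V):
    """Rescorla-Wagner updates: all cues present on a trial share the
    same prediction error (LAMBDA minus summed associative strength)."""
    for cues, reinforced in trials:
        error = (LAMBDA if reinforced else 0.0) - sum(V[c] for c in cues)
        for c in cues:
            V[c] += ALPHA_BETA * error
    return V

# Blocking group: A+ pretraining, then AX+ compound trials.
V_block = train([(("A",), True)] * 20 + [(("A", "X"), True)] * 20,
                {"A": 0.0, "X": 0.0})

# Overshadowing control: BX+ compound trials only.
V_ctrl = train([(("B", "X"), True)] * 20, {"B": 0.0, "X": 0.0})

print(round(V_block["X"], 2))  # ~0.00: A absorbs the error, X learns little
print(round(V_ctrl["X"], 2))   # ~0.50: X shares the asymptote with B
```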

9 citations


Journal ArticleDOI
01 Aug 2019 - Memory
TL;DR: It is suggested that errorful generation improves memory specifically for the guessed fact, and this may be linked to an increase in motivation to learn that fact.
Abstract: The current research examined the effects of errorful generation on memory, focusing particularly on the roles of motivation and surprise. In two experiments, participants were first presen...

9 citations


BookDOI
TL;DR: The authors crawled the 10,000 most popular websites to gain insights into how many websites use browser fingerprinting, which websites are collecting fingerprinting information, and exactly what information is being retrieved.
Abstract: Browser fingerprinting is a relatively new method of uniquely identifying browsers that can be used to track web users. In some ways it is more privacy-threatening than tracking via cookies, as users have no direct control over it. A number of authors have considered the wide variety of techniques that can be used to fingerprint browsers; however, relatively little information is available on how widespread browser fingerprinting is, and what information is collected to create these fingerprints in the real world. To help address this gap, we crawled the 10,000 most popular websites; this gave insights into the number of websites that are using the technique, which websites are collecting fingerprinting information, and exactly what information is being retrieved. We found that approximately 69% of websites are, potentially, involved in first-party or third-party browser fingerprinting. We further found that third-party browser fingerprinting, which is potentially more privacy-damaging, appears to be predominant in practice. We also describe FingerprintAlert, a freely available browser extension we developed that detects and, optionally, blocks fingerprinting attempts by visited websites.
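
As a rough illustration of how a crawl can flag fingerprinting (a heuristic sketch; the API list and threshold are illustrative assumptions, and this is neither the authors' crawler nor the FingerprintAlert extension), a script that touches several fingerprinting-related browser APIs at once is a plausible candidate for detection:

```python
import re

# Browser APIs commonly combined when building a fingerprint.
FINGERPRINT_APIS = [
    r"toDataURL",                         # canvas fingerprinting
    r"getImageData",
    r"navigator\.plugins",
    r"navigator\.userAgent",
    r"screen\.(width|height|colorDepth)",
    r"AudioContext",                      # audio fingerprinting
    r"WebGLRenderingContext",
]

def looks_like_fingerprinting(script_source: str, threshold: int = 3) -> bool:
    """Flag scripts that reference several fingerprinting-related APIs."""
    hits = sum(bool(re.search(p, script_source)) for p in FINGERPRINT_APIS)
    return hits >= threshold

snippet = """
var c = document.createElement('canvas');
var data = c.toDataURL();
var f = [data, navigator.userAgent, screen.width + 'x' + screen.height];
"""
print(looks_like_fingerprinting(snippet))  # True: three distinct API hits
```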

8 citations


Journal ArticleDOI
TL;DR: Cognitive load abolished the sensitivity to outcome devaluation that was otherwise seen when multiple outcomes and responses were cued on test, and demonstrated that complex O-R priming effects are sensitive to cognitive load, whereas the very simple, standard O-R priming effect is more robust.
Abstract: The extent to which human outcome-response (O-R) priming effects are automatic or under cognitive control is currently unclear. Two experiments tested the effect of cognitive load on O-R priming to shed further light on the debate. In Experiment 1, two instrumental responses earned beer and chocolate points in an instrumental training phase. Instrumental response choice was then tested in the presence of beer, chocolate, and neutral stimuli. On test, a Reversal instruction group was told that the stimuli signalled which response would not be rewarded. The transfer test was also conducted under either minimal (No Load) or considerable (Load) cognitive load. The Non-Reversal groups showed O-R priming effects, where the reward cues increased the instrumental responses that had previously produced those outcomes, relative to the neutral stimulus. This effect was observed even under cognitive load. The Reversal No Load group demonstrated a reversed effect, where response choice was biased towards the response that was most likely to be rewarded according to the instruction. Most importantly, response choice was at chance in the Reversal Load condition. In Experiment 2, cognitive load abolished the sensitivity to outcome devaluation that was otherwise seen when multiple outcomes and responses were cued on test. Collectively, the results demonstrate that complex O-R priming effects are sensitive to cognitive load, whereas the very simple, standard O-R priming effect is more robust.

7 citations


Journal ArticleDOI
TL;DR: In this article, major shortcomings in a recently published group key establishment protocol are described; these shortcomings are sufficiently serious that the protocol should not be used.
Abstract: Major shortcomings in a recently published group key establishment protocol are described. These shortcomings are sufficiently serious that the protocol should not be used.

4 citations


Posted Content
TL;DR: In this paper, the authors provide a detailed analysis of the impact of quantum computing on the security of 5G mobile telecommunications and propose a multi-phase approach to upgrading security that allows for a simple and smooth migration to a post-quantum-secure system.
Abstract: This paper provides a detailed analysis of the impact of quantum computing on the security of 5G mobile telecommunications. This involves considering how cryptography is used in 5G, and how the security of the system would be affected by the advent of quantum computing. This leads naturally to the specification of a series of simple, phased, recommended changes intended to ensure that the security of 5G (as well as 3G and 4G) is not badly damaged if and when large scale quantum computing becomes a practical reality. By exploiting backwards-compatibility features of the 5G security system design, we are able to propose a novel multi-phase approach to upgrading security that allows for a simple and smooth migration to a post-quantum-secure system.
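
The arithmetic behind such phased recommendations is worth spelling out (a sketch of the standard argument, not figures quoted from the paper): Grover's algorithm searches a k-bit key space in roughly 2^(k/2) steps, so symmetric keys retain only about half their nominal strength against a quantum adversary, and key lengths must double to preserve the security margin.

```python
def post_quantum_strength(symmetric_key_bits: int) -> int:
    """Effective bits of security against Grover's quadratic speedup."""
    return symmetric_key_bits // 2

for bits in (128, 256):
    print(f"{bits}-bit key -> ~{post_quantum_strength(bits)}-bit "
          "post-quantum security")
# 128-bit key -> ~64-bit post-quantum security   (current 3G/4G/5G key size)
# 256-bit key -> ~128-bit post-quantum security  (a natural migration target)
```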

2 citations


Posted Content
TL;DR: A recently proposed authenticated key agreement protocol is shown to be insecure, allowing an active man-in-the-middle opponent to replay old messages and have them accepted.
Abstract: A recently proposed authenticated key agreement protocol is shown to be insecure. In particular, one of the two parties is not authenticated, allowing an active man-in-the-middle opponent to replay old messages. The protocol is essentially an authenticated Diffie-Hellman key agreement scheme, and the lack of authentication allows an attacker to replay old messages and have them accepted. Moreover, if the ephemeral key used to compute a protocol message is ever compromised, then the key established using the replayed message will also be compromised. Fixing the problem is simple: there are many provably secure and standardised protocols which are just as efficient as the flawed scheme.
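
To see why the missing authentication matters, consider a toy unauthenticated Diffie-Hellman exchange (deliberately tiny, insecure parameters, purely for illustration; the flawed protocol itself is not reproduced here). Nothing binds B's message to the current session, so a recorded message can be replayed, and a later compromise of B's old ephemeral key exposes the replayed session, exactly as described above:

```python
import secrets

# Toy public group parameters -- far too small to be secure.
P, G = 2_147_483_647, 5

def dh_message():
    """Generate an ephemeral (secret exponent, public value) pair."""
    exp = secrets.randbelow(P - 2) + 1
    return exp, pow(G, exp, P)

# Session 1: the attacker records B's unauthenticated message Y1.
y1, Y1 = dh_message()            # B's ephemeral pair
recorded_Y = Y1                  # attacker's transcript

# Session 2: the attacker replays Y1 to A; A cannot tell it is stale.
x2, X2 = dh_message()            # A's fresh ephemeral pair
k2 = pow(recorded_Y, x2, P)      # A's derived key; B never took part

# If B's old ephemeral key y1 is ever compromised, so is session 2's key:
print(k2 == pow(X2, y1, P))      # True
```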