scispace - formally typeset
Topic: Task analysis

About: Task analysis is a research topic. Over its lifetime, 10,432 publications have been published within this topic, receiving 283,481 citations.


Papers
Journal ArticleDOI
Abstract: While much has been written about task-based language teaching (TBLT), research examining teachers’ understandings of what TBLT means remains limited. This article explores the understandings of TB...

161 citations

Journal ArticleDOI
TL;DR: In this article, the authors identify the areas of mismatch between the assessment problems teachers face and the type of assessment training they receive, and how the gaps can be addressed in testing courses.
Abstract: What are the areas of mismatch between the assessment problems teachers face and the type of assessment training they receive? How can the gaps be addressed in testing courses?

161 citations

Journal ArticleDOI
TL;DR: In this paper, a study on L2 proficiency in writing, conducted among 84 Dutch university students of Italian and 75 students of French, showed that manipulation of task complexity led in the complex task to a significant decrease of errors, while at the same time a trend for a lexically more varied text was observed.
Abstract: In a study on L2 proficiency in writing, conducted among 84 Dutch university students of Italian and 75 students of French, manipulation of task complexity led in the complex task to a significant decrease of errors, while at the same time a trend for a lexically more varied text was observed (Kuiken and Vedder 2005, 2007, in press). Based on this first analysis in which some global performance measures were used, a more specific analysis was carried out. In the latter analysis, which is reported in this article, accuracy was investigated in more detail according to the type of errors in the L2 texts, while lexical variation was analysed further by distinguishing frequent words from infrequent ones. Results showed that the effect of task complexity could mainly be attributed to lower ratios of lexical errors in the more complex task. With respect to the use of frequent versus infrequent words mixed results were found. On the basis of these findings a number of implications with regard to the operationalisation of task complexity and linguistic performance are discussed.
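The measures described in this abstract — error ratios for accuracy and a frequent/infrequent word split for lexical variation — can be illustrated with a toy computation. This is a hedged sketch: the function names, word list, and numbers below are invented for illustration and are not the study's actual instruments or data.

```python
def lexical_error_ratio(n_lexical_errors, n_words):
    """Lexical errors per word: a simple text-level accuracy measure."""
    return n_lexical_errors / n_words

def frequent_vs_infrequent(tokens, frequent_words):
    """Split tokens into frequent and infrequent words, a rough way to
    gauge lexical variation in a learner text."""
    n_frequent = sum(1 for t in tokens if t in frequent_words)
    return n_frequent, len(tokens) - n_frequent

# Toy comparison: the more complex task shows a lower lexical error ratio.
simple_task = lexical_error_ratio(12, 200)    # 12 errors in 200 words
complex_task = lexical_error_ratio(8, 210)    # 8 errors in 210 words

# Toy Italian fragment with an invented frequent-word list.
tokens = ["la", "casa", "magnifica", "è", "splendida"]
frequent = {"la", "casa", "è"}
counts = frequent_vs_infrequent(tokens, frequent)  # (3, 2)
```

In this sketch a lower ratio in the complex task mirrors the study's finding that task complexity mainly lowered the rate of lexical errors.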

161 citations

Journal ArticleDOI
TL;DR: The results show that networks trained to regress to the ground truth targets for labeled data and to simultaneously learn to rank unlabeled data obtain significantly better, state-of-the-art results for both IQA and crowd counting.
Abstract: For many applications the collection of labeled data is expensive and laborious. Exploitation of unlabeled data during training is thus a long pursued objective of machine learning. Self-supervised learning addresses this by positing an auxiliary task (different, but related to the supervised task) for which data is abundantly available. In this paper, we show how ranking can be used as a proxy task for some regression problems. As another contribution, we propose an efficient backpropagation technique for Siamese networks which prevents the redundant computation introduced by the multi-branch network architecture. We apply our framework to two regression problems: Image Quality Assessment (IQA) and Crowd Counting. For both we show how to automatically generate ranked image sets from unlabeled data. Our results show that networks trained to regress to the ground truth targets for labeled data and to simultaneously learn to rank unlabeled data obtain significantly better, state-of-the-art results for both IQA and crowd counting. In addition, we show that measuring network uncertainty on the self-supervised proxy task is a good measure of the informativeness of unlabeled data. This can be used to drive an algorithm for active learning, and we show that this reduces labeling effort by up to 50 percent.
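The ranking-as-proxy idea above can be sketched with the pairwise margin ranking (hinge) loss commonly used with Siamese networks. This is a minimal generic illustration, not the paper's implementation; the scores below are invented, standing in for a network's outputs on original and degraded images.

```python
def margin_ranking_loss(s_high, s_low, margin=1.0):
    """Hinge loss pushing each 'higher-ranked' score above its
    'lower-ranked' partner by at least `margin`; zero once the
    gap is large enough."""
    return [max(0.0, margin - (h - l)) for h, l in zip(s_high, s_low)]

# For IQA, ranked pairs can be generated without human labels:
# a degraded (e.g. blurred) copy should score lower than its original.
scores_orig = [2.0, 1.0]       # hypothetical scores for original images
scores_degraded = [0.5, 0.8]   # hypothetical scores for degraded copies
losses = margin_ranking_loss(scores_orig, scores_degraded)
# First pair is already separated by more than the margin -> loss 0.0;
# second pair is not -> positive loss.
```

Minimizing this loss on unlabeled ranked pairs, alongside the supervised regression loss on labeled data, is the self-supervised setup the abstract describes.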

161 citations

Book ChapterDOI
05 Nov 2009
TL;DR: This chapter highlights the fundamental processes of teaching and testing, outlining a task-based approach to each and posing questions in need of inquiry, and forecasts several challenges that will condition the ultimate contribution of TBLT to language education.
Abstract: Task-based language teaching (TBLT) is an approach to second or foreign language education that integrates theoretical and empirical foundations for good pedagogy with a focus on tangible learning outcomes in the form of “tasks” – that is, what learners are able to do with the language. Task-based practice draws on diverse sources, including philosophy of education, theories of second language acquisition, and research-based evidence about effective instruction. Equally important, TBLT acts on the exigencies of language learning in human endeavors, and the often ineffectual responses of language education to date, by providing a framework within which educators can construct effective programs that meet the language use needs of learners and society. Though there is global interest in the value of TBLT to foster worthwhile language learning, there is also diversity in the educational scope, practical applications, and research associated with the name. Certainly, TBLT remains a contested domain of inquiry and practice, though much of the debate surrounding TBLT results from incomplete understandings of precisely what this educational approach comprises. In the following, I review key underpinnings of task-based instruction, its emergence within language education, and its component parts. I then highlight the fundamental processes of teaching and testing, outlining a task-based approach to each and posing questions in need of inquiry. I conclude by forecasting several challenges that will condition the ultimate contribution of TBLT to language education.

161 citations


Network Information
Related Topics (5)
- Feature extraction: 111.8K papers, 2.1M citations (78% related)
- Robustness (computer science): 94.7K papers, 1.6M citations (78% related)
- User interface: 85.4K papers, 1.7M citations (78% related)
- The Internet: 213.2K papers, 3.8M citations (77% related)
- Deep learning: 79.8K papers, 2.1M citations (77% related)
Performance Metrics
No. of papers in the topic in previous years:

Year    Papers
2023    28
2022    64
2021    665
2020    819
2019    737
2018    834