Author

Md. Mustafizur Rahman

Bio: Md. Mustafizur Rahman is an academic researcher from Bangladesh University of Engineering and Technology. The author has contributed to research on topics including Autism and Artificial neural networks. The author has an h-index of 11 and has co-authored 26 publications receiving 415 citations. Previous affiliations of Md. Mustafizur Rahman include the University of Virginia and the University of Texas at Austin.

Papers
Journal ArticleDOI
TL;DR: The successes of neural IR thus far are highlighted, obstacles to its wider adoption are cataloged, and potentially promising directions for future research are suggested.
Abstract: A recent “third wave” of neural network (NN) approaches now delivers state-of-the-art performance in many machine learning tasks, spanning speech recognition, computer vision, and natural language processing. Because these modern NNs often comprise multiple interconnected layers, work in this area is often referred to as deep learning. Recent years have witnessed an explosive growth of research into NN-based approaches to information retrieval (IR). A significant body of work has now been created. In this paper, we survey the current landscape of Neural IR research, paying special attention to the use of learned distributed representations of textual units. We highlight the successes of neural IR thus far, catalog obstacles to its wider adoption, and suggest potentially promising directions for future research.
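The abstract's central idea, ranking documents against a query using learned distributed representations, can be made concrete with a minimal sketch. The snippet below is an illustration only, not the survey's method: the embeddings are random stand-ins for vectors a neural text encoder would produce, and documents are ranked by cosine similarity to the query.

```python
# Minimal sketch of embedding-based ranking (illustration only):
# documents and the query are dense vectors, and documents are ranked
# by cosine similarity to the query vector.
import numpy as np

rng = np.random.default_rng(0)
dim = 128

# Stand-ins for learned distributed representations (e.g., the output of a
# neural text encoder); random here purely for illustration.
doc_embeddings = rng.normal(size=(1000, dim))   # one vector per document
query_embedding = rng.normal(size=dim)          # one vector for the query

def cosine_scores(query, docs):
    """Cosine similarity between a query vector and each document vector."""
    docs_norm = docs / np.linalg.norm(docs, axis=1, keepdims=True)
    query_norm = query / np.linalg.norm(query)
    return docs_norm @ query_norm

scores = cosine_scores(query_embedding, doc_embeddings)
top10 = np.argsort(-scores)[:10]   # indices of the 10 highest-scoring documents
print(top10, scores[top10])
```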

124 citations

Proceedings ArticleDOI
13 Dec 2010
TL;DR: This paper demonstrates a newly developed interactive computer game that can help increase the intelligibility of autistic children's speech and can be used as a therapy alongside traditional approaches.
Abstract: Autism is a disorder of neural development which affects about one in every 150 kids on average. One of the major complexities regarding autistic children in social communication is speech disorder. The problems related to speech disorder fall into different categories, and unintelligibility in speech is one of them. Although there is no definite medicine or treatment for autism, doctors, therapists, and special teachers can help kids with autism overcome many difficulties through different physical and psychological therapies. In this paper we demonstrate our newly developed interactive computer game, which will be helpful in increasing intelligibility in the speech of autistic children and can be used as a therapy alongside traditional approaches. During our five months of intervention with the autistic children of the Autism Welfare Foundation (AWF) in Dhaka, we checked the effectiveness of this therapy and obtained some encouraging results.

52 citations

Posted Content
TL;DR: The current landscape of Neural IR research is surveyed, with special attention to the use of learned representations of queries and documents (i.e., neural embeddings); the successes of neural IR thus far are highlighted, obstacles to its wider adoption are cataloged, and potentially promising directions for future research are suggested.
Abstract: A recent "third wave" of Neural Network (NN) approaches now delivers state-of-the-art performance in many machine learning tasks, spanning speech recognition, computer vision, and natural language processing. Because these modern NNs often comprise multiple interconnected layers, this new NN research is often referred to as deep learning. Stemming from this tide of NN work, a number of researchers have recently begun to investigate NN approaches to Information Retrieval (IR). While deep NNs have yet to achieve the same level of success in IR as seen in other areas, the recent surge of interest and work in NNs for IR suggest that this state of affairs may be quickly changing. In this work, we survey the current landscape of Neural IR research, paying special attention to the use of learned representations of queries and documents (i.e., neural embeddings). We highlight the successes of neural IR thus far, catalog obstacles to its wider adoption, and suggest potentially promising directions for future research.

45 citations

Proceedings ArticleDOI
05 Dec 2011
TL;DR: A personalized game based on the digital storytelling concept helps children with autism aged 9 to 14 years understand the use of money and teaches them the social behavior appropriate while shopping.
Abstract: Autism is a general term used to describe a group of complex developmental brain disorders known as Pervasive Developmental Disorders (PDD). It is a life-long disability that prevents people from understanding what they see, hear, and sense. This results in severe problems with social relationships, communication, and behavior. Autism is typically diagnosed between the ages of two and six, although variations of ASD (Autism Spectrum Disorders) can sometimes be diagnosed earlier or later [1]. Children with learning disabilities such as autism, who have serious impairments in social, emotional, and communication skills, require a high degree of personalization in the educational software developed for them. In this paper we present a personalized game based on the digital storytelling concept that helps children with autism aged 9 to 14 years understand the use of money. It also teaches autistic children the social behavior appropriate while shopping. The game is developed in BYOB (Build Your Own Blocks, an advanced offshoot of the game engine Scratch).

40 citations

Proceedings ArticleDOI
06 Jul 2011
TL;DR: An interactive computer game was developed to improve the speech fluency of autistic children; it produced encouraging results with a participant during three months of observation.
Abstract: Autism is a complex developmental disability that typically appears during the first three years of life. It is the result of a neurological disorder that affects the functioning of the human brain. Children diagnosed with autism are often self-absorbed and seem to exist in a private world where they are unable to successfully communicate and interact with others. Sometimes they have difficulties in developing speaking skills and understanding what others say to them. Lack of fluency in speech is one of the most frequently found problems among autistic children. The traditional methods for increasing fluency were found to be monotonous and hence not very successful during our three months of observation of the participants at the Autism Welfare Foundation (AWF) in Dhaka. We therefore developed an interactive computer game for autistic children to improve the fluency of their speech. Our game produced encouraging results with a participant during three months of observation. In this paper, we describe our project and its outcomes.

40 citations


Cited by
Posted Content
TL;DR: This work discusses core RL elements, including the value function (in particular, the Deep Q-Network, DQN), policy, reward, model, planning, and exploration, as well as important mechanisms for RL, including attention and memory, unsupervised learning, transfer learning, multi-agent RL, hierarchical RL, and learning to learn.
Abstract: We give an overview of recent exciting achievements of deep reinforcement learning (RL). We discuss six core elements, six important mechanisms, and twelve applications. We start with background of machine learning, deep learning and reinforcement learning. Next we discuss core RL elements, including value function, in particular, Deep Q-Network (DQN), policy, reward, model, planning, and exploration. After that, we discuss important mechanisms for RL, including attention and memory, unsupervised learning, transfer learning, multi-agent RL, hierarchical RL, and learning to learn. Then we discuss various applications of RL, including games, in particular, AlphaGo, robotics, natural language processing, including dialogue systems, machine translation, and text generation, computer vision, neural architecture design, business management, finance, healthcare, Industry 4.0, smart grid, intelligent transportation systems, and computer systems. We mention topics not reviewed yet, and list a collection of RL resources. After presenting a brief summary, we close with discussions. Please see Deep Reinforcement Learning, arXiv:1810.06339, for a significant update.
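Because the abstract singles out the Deep Q-Network (DQN) as a core element, a short hedged sketch of the standard DQN learning target may help readers place it. Everything below (batch size, rewards, Q-values) is a made-up placeholder; it only illustrates the one-step TD target and squared-error loss commonly used in DQN-style training, not anything specific to the surveyed work.

```python
# Sketch of the one-step TD target used in DQN-style training (placeholder
# values only): y = r + gamma * max_a Q_target(s', a) for non-terminal
# transitions; the loss is the squared error against Q_online(s, a).
import numpy as np

gamma = 0.99
batch, n_actions = 4, 3

rng = np.random.default_rng(1)
rewards = rng.normal(size=batch)                     # r for each transition
dones = np.array([0.0, 0.0, 1.0, 0.0])               # 1.0 marks a terminal s'
q_online = rng.normal(size=(batch, n_actions))       # Q(s, a) from the online net
q_next_target = rng.normal(size=(batch, n_actions))  # Q_target(s', a)
actions = np.array([0, 2, 1, 0])                     # actions actually taken

# TD target: bootstrap from the target network unless the episode ended.
targets = rewards + gamma * (1.0 - dones) * q_next_target.max(axis=1)

# Squared-error loss on the Q-values of the taken actions.
q_taken = q_online[np.arange(batch), actions]
loss = np.mean((q_taken - targets) ** 2)
print(loss)
```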

935 citations

Posted Content
TL;DR: A comprehensive review of 62 state-of-the-art robust training methods, all of which are categorized into five groups according to their methodological difference, followed by a systematic comparison of six properties used to evaluate their superiority.
Abstract: Deep learning has achieved remarkable success in numerous domains with help from large amounts of big data. However, the quality of data labels is a concern because of the lack of high-quality labels in many real-world scenarios. As noisy labels severely degrade the generalization performance of deep neural networks, learning from noisy labels (robust training) is becoming an important task in modern deep learning applications. In this survey, we first describe the problem of learning with label noise from a supervised learning perspective. Next, we provide a comprehensive review of 46 state-of-the-art robust training methods, all of which are categorized into seven groups according to their methodological difference, followed by a systematic comparison of six properties used to evaluate their superiority. Subsequently, we summarize the typically used evaluation methodology, including public noisy datasets and evaluation metrics. Finally, we present several promising research directions that can serve as a guideline for future studies.
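To make the problem setup concrete, here is a hedged sketch of symmetric (uniform) label noise, the simplest corruption model commonly studied in this literature. It illustrates only the problem the survey addresses, not any of the reviewed robust training methods; the function name and data are invented for the example.

```python
# Sketch of symmetric (uniform) label noise: with probability `noise_rate`,
# a clean label is replaced by a different class chosen uniformly at random.
# Illustrates the problem setup only, not any reviewed method.
import numpy as np

def inject_symmetric_noise(labels, num_classes, noise_rate, seed=0):
    rng = np.random.default_rng(seed)
    noisy = labels.copy()
    flip = rng.random(len(labels)) < noise_rate
    for i in np.flatnonzero(flip):
        # Pick a wrong class uniformly among the other classes.
        choices = [c for c in range(num_classes) if c != labels[i]]
        noisy[i] = rng.choice(choices)
    return noisy

clean = np.array([0, 1, 2, 1, 0, 2, 1, 0])
noisy = inject_symmetric_noise(clean, num_classes=3, noise_rate=0.4)
print(clean, noisy, (clean != noisy).mean())
```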

474 citations

Journal ArticleDOI
TL;DR: It is concluded that RNNs are capable of modelling seasonality directly if the series in the dataset possess homogeneous seasonal patterns; otherwise, a deseasonalisation step is recommended.
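The recommended deseasonalisation step can take several forms; the sketch below uses simple seasonal differencing as one possible pre-processing choice before fitting an RNN. The season length and the synthetic series are invented purely for illustration.

```python
# Sketch of deseasonalisation by seasonal differencing (one possible choice):
# subtract the value one full season back, model the differenced series, then
# add the seasonal component back when forecasting. Synthetic data only.
import numpy as np

season_length = 12                                  # e.g., monthly data
t = np.arange(120)
series = 10 + 0.1 * t + 5 * np.sin(2 * np.pi * t / season_length)

# Seasonal differencing: y'_t = y_t - y_{t-m}
deseasonalised = series[season_length:] - series[:-season_length]

# ... fit an RNN (or any other model) on `deseasonalised` here ...

# Recover a forecast on the original scale by adding back the value from one
# season earlier: y_hat_{T+1} = y'_hat_{T+1} + y_{T+1-m}
forecast_diff = deseasonalised[-1]                  # placeholder "forecast"
forecast = forecast_diff + series[len(series) - season_length]
print(forecast)
```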

450 citations

Posted Content
TL;DR: This tutorial provides an overview of text ranking with neural network architectures known as transformers, of which BERT (Bidirectional Encoder Representations from Transformers) is the best-known example, and covers a wide range of techniques.
Abstract: The goal of text ranking is to generate an ordered list of texts retrieved from a corpus in response to a query. Although the most common formulation of text ranking is search, instances of the task can also be found in many natural language processing applications. This survey provides an overview of text ranking with neural network architectures known as transformers, of which BERT is the best-known example. The combination of transformers and self-supervised pretraining has been responsible for a paradigm shift in natural language processing (NLP), information retrieval (IR), and beyond. In this survey, we provide a synthesis of existing work as a single point of entry for practitioners who wish to gain a better understanding of how to apply transformers to text ranking problems and researchers who wish to pursue work in this area. We cover a wide range of modern techniques, grouped into two high-level categories: transformer models that perform reranking in multi-stage architectures and dense retrieval techniques that perform ranking directly. There are two themes that pervade our survey: techniques for handling long documents, beyond typical sentence-by-sentence processing in NLP, and techniques for addressing the tradeoff between effectiveness (i.e., result quality) and efficiency (e.g., query latency, model and index size). Although transformer architectures and pretraining techniques are recent innovations, many aspects of how they are applied to text ranking are relatively well understood and represent mature techniques. However, there remain many open research questions, and thus in addition to laying out the foundations of pretrained transformers for text ranking, this survey also attempts to prognosticate where the field is heading.
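The survey's two high-level categories, multi-stage reranking and dense retrieval, can be sketched schematically. In the toy pipeline below, a cheap term-overlap retriever stands in for a first-stage ranker such as BM25, and `neural_score` is a placeholder for a transformer reranker (e.g., a BERT cross-encoder); none of these functions come from an actual library.

```python
# Schematic of a multi-stage ranking pipeline (illustration only): a cheap
# first-stage retriever selects candidates, then a more expensive scorer
# reranks them. `neural_score` is a placeholder for a learned reranker.
def first_stage(query, corpus, k=100):
    """Toy lexical retrieval: score documents by query-term overlap."""
    q_terms = set(query.lower().split())
    scored = [(sum(t in q_terms for t in doc.lower().split()), i)
              for i, doc in enumerate(corpus)]
    scored.sort(reverse=True)
    return [i for _, i in scored[:k]]

def neural_score(query, doc):
    """Placeholder for a learned relevance model (e.g., a cross-encoder)."""
    return len(set(query.lower().split()) & set(doc.lower().split()))

def rerank(query, corpus, candidate_ids):
    return sorted(candidate_ids,
                  key=lambda i: neural_score(query, corpus[i]),
                  reverse=True)

corpus = ["neural networks for text ranking",
          "bm25 is a strong lexical baseline",
          "transformers enable dense retrieval"]
candidates = first_stage("neural text ranking", corpus, k=2)
print(rerank("neural text ranking", corpus, candidates))
```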

315 citations