Author

Saeeda Naz

Bio: Saeeda Naz is an academic researcher from Hazara University. The author has contributed to research on the topics of cursive and Arabic script. The author has an h-index of 22 and has co-authored 68 publications receiving 1,860 citations. Previous affiliations of Saeeda Naz include Government Post Graduate College and King Saud bin Abdulaziz University for Health Sciences.


Papers
Book Chapter
01 Jan 2018
TL;DR: In this paper, the authors discuss state-of-the-art deep learning architectures and their optimization when used for medical image segmentation and classification, and discuss the challenges of deep learning methods with regard to medical imaging and open research issues.
Abstract: The health care sector is totally different from any other industry. It is a high-priority sector, and consumers expect the highest level of care and services regardless of cost. The health care sector has not achieved society's expectations, even though it consumes a huge percentage of national budgets. Most medical data are interpreted by medical experts. Image interpretation by a medical expert is quite limited due to its subjectivity and the complexity of the images; extensive variations exist between experts, and fatigue sets in under heavy workloads. Following the success of deep learning in other real-world applications, it is seen as providing exciting and accurate solutions for medical imaging and as a key method for future applications in the health care sector. In this chapter, we discuss state-of-the-art deep learning architectures and their optimization when used for medical image segmentation and classification. The chapter closes with a discussion of the challenges of deep learning methods with regard to medical imaging and open research issues.

679 citations
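As a concrete illustration of the kind of architecture the chapter surveys for segmentation, the sketch below builds a tiny encoder-decoder convolutional network that predicts a per-pixel class map. This is a minimal sketch assuming PyTorch; the layer sizes, channel counts, and class count are invented for illustration and are not an architecture from the chapter.

```python
# Illustrative only: a tiny encoder-decoder network of the kind surveyed
# for medical image segmentation. All sizes are invented for illustration.
import torch
import torch.nn as nn

class TinySegNet(nn.Module):
    def __init__(self, in_channels=1, num_classes=2):
        super().__init__()
        # Encoder: downsample and extract features.
        self.encoder = nn.Sequential(
            nn.Conv2d(in_channels, 16, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
        )
        # Decoder: upsample back to input resolution and emit a per-pixel
        # class map (the segmentation mask logits).
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 2, stride=2), nn.ReLU(),
            nn.Conv2d(16, num_classes, 1),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

# One grayscale 128x128 image in, one 2-class logit map of the same size out.
logits = TinySegNet()(torch.randn(1, 1, 128, 128))  # shape (1, 2, 128, 128)
```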

Posted Content
TL;DR: In this paper, state-of-the-art deep learning architectures and their optimization for medical image segmentation and classification are discussed, along with the challenges of deep learning-based methods for medical imaging and open research issues.
Abstract: The healthcare sector is totally different from other industries. It is a high-priority sector, and people expect the highest level of care and services regardless of cost. It has not achieved society's expectations, even though it consumes a huge percentage of the budget. Mostly, the interpretation of medical data is done by medical experts. Image interpretation by a human expert is quite limited due to its subjectivity, the complexity of the images, the extensive variations that exist across interpreters, and fatigue. After the success of deep learning in other real-world applications, it is also providing exciting solutions with good accuracy for medical imaging and is seen as a key method for future applications in the health sector. In this chapter, we discuss state-of-the-art deep learning architectures and their optimization for medical image segmentation and classification. In the last section, we discuss the challenges of deep learning-based methods for medical imaging and open research issues.

300 citations

Journal Article
TL;DR: The proposed framework conducts three studies using three convolutional neural network architectures (AlexNet, GoogLeNet, and VGGNet) to classify brain tumors such as meningioma, glioma, and pituitary tumors, and achieves an accuracy of up to 98.69% for classification and detection.
Abstract: Brain tumors are among the most destructive diseases, leading to very short life expectancy at their highest grade. The misdiagnosis of brain tumors results in incorrect medical intervention and reduces patients' chance of survival. An accurate diagnosis of brain tumors is key to proper treatment planning that can cure the disease and extend the lives of patients with brain tumors. Computer-aided tumor detection systems and convolutional neural networks have provided success stories and have made important strides in the field of machine learning. Deep convolutional layers extract important and robust features automatically from the input space, in contrast to the layers of traditional predecessor neural networks. In the proposed framework, we conduct three studies using three convolutional neural network architectures (AlexNet, GoogLeNet, and VGGNet) to classify brain tumors such as meningioma, glioma, and pituitary tumors. Each study then explores transfer learning techniques, i.e., fine-tuning and freezing, using MRI slices from the Figshare brain tumor dataset. Data augmentation techniques are applied to the MRI slices to generalize the results, increase the number of dataset samples, and reduce the chance of overfitting. In the proposed studies, the fine-tuned VGG16 architecture attained the highest accuracy, up to 98.69%, for classification and detection.

277 citations
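The abstract contrasts two transfer learning strategies, freezing and fine-tuning. Below is a minimal sketch of that contrast with a pretrained VGG16 in PyTorch/torchvision; the three-class output head follows the paper's tumor types, but the optimizer, learning rate, and weight choice are illustrative assumptions rather than the authors' exact configuration.

```python
# Sketch of "freeze" vs. "fine-tune" transfer learning with VGG16.
# The 3 classes follow the paper (meningioma, glioma, pituitary); the
# training hyperparameters below are assumptions for illustration.
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 3  # meningioma, glioma, pituitary

def build_vgg16(strategy: str) -> nn.Module:
    model = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
    if strategy == "freeze":
        # Freeze: keep the pretrained convolutional weights fixed and
        # train only the replaced classifier head.
        for param in model.features.parameters():
            param.requires_grad = False
    # Fine-tune: leave everything trainable so the pretrained filters can
    # adapt to the MRI domain (requires_grad is True by default).
    model.classifier[6] = nn.Linear(4096, NUM_CLASSES)  # new output head
    return model

model = build_vgg16("freeze")
optimizer = torch.optim.SGD(
    (p for p in model.parameters() if p.requires_grad), lr=1e-3, momentum=0.9
)
```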

Journal Article
01 Mar 2014
TL;DR: The Urdu, Pushto, and Sindhi languages are discussed, with the emphasis on the Nasta'liq and Naskh scripts, and the works are analyzed with respect to preprocessing, segmentation, feature extraction, classification, and recognition in OCR.
Abstract: We survey the optical character recognition (OCR) literature with reference to the Urdu-like cursive scripts. In particular, the Urdu, Pushto, and Sindhi languages are discussed, with the emphasis being on the Nasta'liq and Naskh scripts. Before detailing the OCR works, the peculiarities of the Urdu-like scripts are outlined, followed by a presentation of the available text image databases. For the sake of clarity, the various attempts are grouped into three parts, namely: (a) printed, (b) handwritten, and (c) online character recognition. Within each part, the works are analyzed with respect to a typical OCR pipeline, with an emphasis on preprocessing, segmentation, feature extraction, classification, and recognition.
Highlights:
- A literature review of Nasta'liq and Naskh cursive script OCR.
- The peculiarities and challenges are described a priori.
- Printed, handwritten, and online OCR efforts are explored.
- Analyses are based on the stages of a typical OCR pipeline.

121 citations
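The survey analyzes each work against the stages of a typical OCR pipeline. The skeleton below sketches those stages assuming OpenCV and NumPy, with common textbook choices (Otsu binarization, projection-profile line segmentation); it is not a method from any single surveyed paper, and the classification stage is left abstract.

```python
# Skeleton of a typical OCR pipeline: preprocessing -> segmentation ->
# feature extraction -> classification. The concrete operations here are
# generic textbook choices, used only to make the stages tangible.
import cv2
import numpy as np

def preprocess(image_gray: np.ndarray) -> np.ndarray:
    # Binarize with Otsu thresholding; ink becomes white (255) on black.
    _, binary = cv2.threshold(
        image_gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU
    )
    return binary

def segment_lines(binary: np.ndarray) -> list:
    # Horizontal projection profile: rows containing ink belong to a line;
    # blank gaps between inked row runs separate consecutive text lines.
    rows = np.where(binary.sum(axis=1) > 0)[0]
    lines, start = [], rows[0]
    for prev, cur in zip(rows, rows[1:]):
        if cur - prev > 1:  # blank gap between two text lines
            lines.append(binary[start:prev + 1])
            start = cur
    lines.append(binary[start:rows[-1] + 1])
    return lines

def extract_features(line: np.ndarray) -> np.ndarray:
    # Placeholder feature: a fixed-size, normalized raster of the line.
    return cv2.resize(line, (128, 32)).flatten() / 255.0

# The final stage, classify(features), would be a trained recognizer
# (HMM, SVM, or neural network in the surveyed works); it is left abstract.
```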

Journal Article
TL;DR: Experimental results reveal that the proposed deep convolutional neural network classifier with transfer learning and data augmentation techniques provides better detection of Parkinson's disease than state-of-the-art work.
Abstract: Parkinson's disease (PD), a multi-system neurodegenerative disorder that affects the brain slowly, is characterized by symptoms such as muscle stiffness, tremor in the limbs, and impaired balance, all of which tend to worsen with the passage of time. Available treatments target its symptoms, aiming to improve quality of life. However, automatic diagnosis at early stages remains a challenging medical task, since at the very early stage of the disease a patient may behave identically to a healthy individual. Parkinson's disease detection from handwriting data is therefore a significant classification problem for identifying PD at its infancy. In this paper, PD identification is realized with the help of handwriting images, which serve as one of the earliest indicators of PD. For this purpose, we propose a deep convolutional neural network classifier with transfer learning and data augmentation techniques to improve the identification. Two transfer learning approaches, freezing and fine-tuning, are investigated independently, using ImageNet and MNIST as source tasks. The trained network achieved 98.28% accuracy with the fine-tuning-based approach, using ImageNet as the source and the PaHaW dataset as the target. Experimental results on the benchmark dataset reveal that the proposed approach provides better detection of Parkinson's disease than state-of-the-art work.

100 citations
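As a small illustration of the data augmentation step the abstract mentions for handwriting images, the torchvision sketch below perturbs each training image before it reaches the network; the specific transforms and their ranges are assumptions for illustration, not the paper's exact settings.

```python
# Illustrative augmentation pipeline for handwriting images; the transforms
# and their ranges are assumptions, not the paper's exact settings.
from torchvision import transforms

train_transforms = transforms.Compose([
    transforms.Grayscale(num_output_channels=3),  # pretrained nets expect RGB
    transforms.RandomRotation(degrees=10),        # slight rotation of strokes
    transforms.RandomAffine(degrees=0, translate=(0.05, 0.05)),  # small shifts
    transforms.Resize((224, 224)),                # VGG/AlexNet input size
    transforms.ToTensor(),
])
```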


Cited by
Journal Article
TL;DR: Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis.
Abstract: Machine Learning is the study of methods for programming computers to learn. Computers are applied to a wide range of tasks, and for most of these it is relatively easy for programmers to design and implement the necessary software. However, there are many tasks for which this is difficult or impossible. These can be divided into four general categories. First, there are problems for which there exist no human experts. For example, in modern automated manufacturing facilities, there is a need to predict machine failures before they occur by analyzing sensor readings. Because the machines are new, there are no human experts who can be interviewed by a programmer to provide the knowledge necessary to build a computer system. A machine learning system can study recorded data and subsequent machine failures and learn prediction rules. Second, there are problems where human experts exist, but where they are unable to explain their expertise. This is the case in many perceptual tasks, such as speech recognition, handwriting recognition, and natural language understanding. Virtually all humans exhibit expert-level abilities on these tasks, but none of them can describe the detailed steps that they follow as they perform them. Fortunately, humans can provide machines with examples of the inputs and correct outputs for these tasks, so machine learning algorithms can learn to map the inputs to the outputs. Third, there are problems where phenomena are changing rapidly. In finance, for example, people would like to predict the future behavior of the stock market, of consumer purchases, or of exchange rates. These behaviors change frequently, so that even if a programmer could construct a good predictive computer program, it would need to be rewritten frequently. A learning program can relieve the programmer of this burden by constantly modifying and tuning a set of learned prediction rules. Fourth, there are applications that need to be customized for each computer user separately. Consider, for example, a program to filter unwanted electronic mail messages. Different users will need different filters. It is unreasonable to expect each user to program his or her own rules, and it is infeasible to provide every user with a software engineer to keep the rules up-to-date. A machine learning system can learn which mail messages the user rejects and maintain the filtering rules automatically. Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis. Statistics focuses on understanding the phenomena that have generated the data, often with the goal of testing different hypotheses about those phenomena. Data mining seeks to find patterns in the data that are understandable by people. Psychological studies of human learning aspire to understand the mechanisms underlying the various learning behaviors exhibited by people (concept learning, skill acquisition, strategy change, etc.).

13,246 citations
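The mail-filtering example in the abstract can be made concrete in a few lines: a learner infers the user's filtering rules from messages the user has already labeled. This is an illustrative sketch using scikit-learn; the tiny inline dataset is invented.

```python
# A learned mail filter: the model infers filtering rules from examples of
# messages the user kept or rejected. The dataset is invented for illustration.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

messages = [
    "win a free prize now",           # rejected by the user
    "cheap loans click here",         # rejected by the user
    "meeting moved to 3pm",           # kept
    "draft of the report attached",   # kept
]
labels = [1, 1, 0, 0]  # 1 = unwanted, 0 = wanted

mail_filter = make_pipeline(CountVectorizer(), MultinomialNB())
mail_filter.fit(messages, labels)                      # learn the rules
print(mail_filter.predict(["claim your free prize"]))  # -> [1]
```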

Journal Article
07 Apr 2020 - BMJ
TL;DR: Proposed models for covid-19 are poorly reported, at high risk of bias, and their reported performance is probably optimistic, according to a review of published and preprint reports.
Abstract: Objective To review and appraise the validity and usefulness of published and preprint reports of prediction models for diagnosing coronavirus disease 2019 (covid-19) in patients with suspected infection, for prognosis of patients with covid-19, and for detecting people in the general population at increased risk of covid-19 infection or being admitted to hospital with the disease. Design Living systematic review and critical appraisal by the COVID-PRECISE (Precise Risk Estimation to optimise covid-19 Care for Infected or Suspected patients in diverse sEttings) group. Data sources PubMed and Embase through Ovid, up to 1 July 2020, supplemented with arXiv, medRxiv, and bioRxiv up to 5 May 2020. Study selection Studies that developed or validated a multivariable covid-19 related prediction model. Data extraction At least two authors independently extracted data using the CHARMS (critical appraisal and data extraction for systematic reviews of prediction modelling studies) checklist; risk of bias was assessed using PROBAST (prediction model risk of bias assessment tool). Results 37 421 titles were screened, and 169 studies describing 232 prediction models were included. The review identified seven models for identifying people at risk in the general population; 118 diagnostic models for detecting covid-19 (75 were based on medical imaging, 10 to diagnose disease severity); and 107 prognostic models for predicting mortality risk, progression to severe disease, intensive care unit admission, ventilation, intubation, or length of hospital stay. The most frequent types of predictors included in the covid-19 prediction models are vital signs, age, comorbidities, and image features. Flu-like symptoms are frequently predictive in diagnostic models, while sex, C reactive protein, and lymphocyte counts are frequent prognostic factors. Reported C index estimates from the strongest form of validation available per model ranged from 0.71 to 0.99 in prediction models for the general population, from 0.65 to more than 0.99 in diagnostic models, and from 0.54 to 0.99 in prognostic models. All models were rated at high or unclear risk of bias, mostly because of non-representative selection of control patients, exclusion of patients who had not experienced the event of interest by the end of the study, high risk of model overfitting, and unclear reporting. Many models did not include a description of the target population (n=27, 12%) or care setting (n=75, 32%), and only 11 (5%) were externally validated by a calibration plot. The Jehi diagnostic model and the 4C mortality score were identified as promising models. Conclusion Prediction models for covid-19 are quickly entering the academic literature to support medical decision making at a time when they are urgently needed. This review indicates that almost all published prediction models are poorly reported and at high risk of bias, such that their reported predictive performance is probably optimistic. However, we have identified two (one diagnostic and one prognostic) promising models that should soon be validated in multiple cohorts, preferably through collaborative efforts and data sharing to also allow an investigation of the stability and heterogeneity in their performance across populations and settings. Details on all reviewed models are publicly available at https://www.covprecise.org/.
Methodological guidance as provided in this paper should be followed because unreliable predictions could cause more harm than benefit in guiding clinical decisions. Finally, prediction model authors should adhere to the TRIPOD (transparent reporting of a multivariable prediction model for individual prognosis or diagnosis) reporting guideline. Systematic review registration Protocol https://osf.io/ehc47/, registration https://osf.io/wy245. Readers’ note This article is a living systematic review that will be updated to reflect emerging evidence. Updates may occur for up to two years from the date of original publication. This version is update 3 of the original article published on 7 April 2020 (BMJ 2020;369:m1328). Previous updates can be found as data supplements (https://www.bmj.com/content/369/bmj.m1328/related#datasupp). When citing this paper please consider adding the update number and date of access for clarity.

2,183 citations
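The review compares models by their reported C index. For a binary outcome, the C index coincides with the area under the ROC curve, so a model's discrimination can be computed as in the sketch below (scikit-learn assumed; the arrays are invented data, not values from the review).

```python
# For binary outcomes the C index equals the area under the ROC curve:
# the probability that a randomly chosen case is ranked above a non-case.
from sklearn.metrics import roc_auc_score

y_true = [0, 0, 1, 1, 1, 0]                    # observed outcomes
y_risk = [0.10, 0.40, 0.35, 0.80, 0.70, 0.20]  # model-predicted risks
c_index = roc_auc_score(y_true, y_risk)
print(f"C index: {c_index:.2f}")  # 0.5 = chance level, 1.0 = perfect
```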

Journal Article

601 citations

Journal Article
TL;DR: This paper is a review that surveys recent technologies developed for Big Data, providing not only a global view of the main Big Data technologies but also comparisons according to different system layers, such as the Data Storage Layer, Data Processing Layer, Data Querying Layer, Data Access Layer, and Management Layer.
Abstract: Developing Big Data applications has become increasingly important in the last few years. In fact, several organizations from different sectors depend increasingly on knowledge extracted from huge volumes of data. However, in the Big Data context, traditional data techniques and platforms are less efficient: they show slow responsiveness and lack scalability, performance, and accuracy. To face the complex Big Data challenges, much work has been carried out, and various types of distributions and technologies have been developed. This paper is a review that surveys recent technologies developed for Big Data. It aims to help organizations select and adopt the right combination of Big Data technologies according to their technological needs and specific application requirements. It provides not only a global view of the main Big Data technologies but also comparisons according to different system layers such as the Data Storage Layer, Data Processing Layer, Data Querying Layer, Data Access Layer, and Management Layer. It categorizes and discusses the main technologies' features, advantages, limits, and usages.

600 citations