
Showing papers in "SN Computer Science" in 2022


Journal ArticleDOI
TL;DR: In this article, the authors present a comprehensive view on AI-based modeling with the principles and capabilities of potential AI techniques that can play an important role in developing intelligent and smart systems in various real-world application areas.
Abstract: Artificial intelligence (AI) is a leading technology of the current age of the Fourth Industrial Revolution (Industry 4.0 or 4IR), with the capability of incorporating human behavior and intelligence into machines or systems. Thus, AI-based modeling is the key to building automated, intelligent, and smart systems according to today's needs. To solve real-world issues, various types of AI such as analytical, functional, interactive, textual, and visual AI can be applied to enhance the intelligence and capabilities of an application. However, developing an effective AI model is a challenging task due to the dynamic nature and variation in real-world problems and data. In this paper, we present a comprehensive view on "AI-based Modeling" with the principles and capabilities of potential AI techniques that can play an important role in developing intelligent and smart systems in various real-world application areas including business, finance, healthcare, agriculture, smart cities, cybersecurity and many more. We also emphasize and highlight the research issues within the scope of our study. Overall, the goal of this paper is to provide a broad overview of AI-based modeling that can be used as a reference guide by academics and industry professionals as well as decision-makers in various real-world scenarios and application domains.

93 citations


Journal ArticleDOI
TL;DR: A smart health monitoring system is being developed using Internet of Things (IoT) technology which is capable of monitoring the blood pressure, heart rate, oxygen level, and temperature of a person, as mentioned in this paper.
Abstract: With the commencement of the COVID-19 pandemic, social distancing and quarantine have become essential practices around the world. IoT health monitoring systems prevent frequent visits to doctors and meetings between patients and medical professionals. However, many individuals require regular health monitoring and observation by medical staff. In this proposed work, we take advantage of the technology to make patients' lives easier through earlier diagnosis and treatment. A smart health monitoring system is developed using Internet of Things (IoT) technology which is capable of monitoring the blood pressure, heart rate, oxygen level, and temperature of a person. This system is helpful for rural areas or villages, where nearby clinics can keep city hospitals informed about their patients' health conditions. If any changes occur in a patient's health relative to standard values, the IoT system alerts the physician accordingly. The maximum relative error (%ϵr) in the measurement of heart rate, patient body temperature and SpO2 was found to be 2.89%, 3.03% and 1.05%, respectively, which is comparable to commercial health monitoring systems. This IoT-based health monitoring system helps doctors collect real-time data effortlessly. The availability of high-speed internet allows the system to monitor the parameters at regular intervals, and the cloud platform allows data storage so that previous measurements can be retrieved later. This system would help in identifying and providing early treatment to individual COVID-19 patients.

54 citations
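
The reported maximum relative errors compare the prototype's readings against a reference device. As a minimal sketch of how such a check could be computed (the sensor readings below are made-up placeholder values, not measurements from the paper):

```python
# Illustrative sketch: maximum relative error (%) of IoT sensor readings
# versus a commercial reference device. Sample values are placeholders,
# not data from the paper.

def max_relative_error(measured, reference):
    """Return the maximum relative error (%) across paired readings."""
    return max(abs(m - r) / r * 100 for m, r in zip(measured, reference))

# Hypothetical paired readings (IoT prototype vs. commercial monitor)
readings = {
    "heart rate":       ([72, 80, 95, 110], [71, 78, 96, 108]),
    "body temperature": ([36.6, 37.2, 38.1], [36.5, 37.0, 38.4]),
    "SpO2":             ([97, 95, 92], [98, 96, 92]),
}

for name, (iot, ref) in readings.items():
    print(f"max relative error, {name}: {max_relative_error(iot, ref):.2f}%")
```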


Journal ArticleDOI
TL;DR: In this paper, the authors used Auto-Regressive Integrated Moving Average (ARIMA) and Long Short-Term Memory (LSTM) models to predict the outbreak of Covid-19 in the upcoming 2 months in Morocco.
Abstract: In this paper, we are interested in forecasting the time evolution of Covid-19 in Morocco using two different time series forecasting models. We used Auto-Regressive Integrated Moving Average (ARIMA) and Long Short-Term Memory (LSTM) models to predict the outbreak of Covid-19 over the upcoming 2 months in Morocco. In this work, we measured the effective reproduction number using both the real data and the fitted forecast data produced by the two approaches, to reveal how effective the measures taken by the Moroccan government have been in controlling the Covid-19 outbreak. The prediction results for the next 2 months show a strong increase in the number of confirmed and death cases in Morocco. According to the measures of the effective reproduction number, the transmissibility of the disease will continue to expand over the next 2 months, but fortunately, the highest value of the effective reproduction number is not considered dramatic and, therefore, may give hope for controlling the disease.

26 citations
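
A minimal sketch of the two forecasting approaches named in the abstract, using statsmodels for ARIMA and Keras for the LSTM. The synthetic case series, the ARIMA order (2, 1, 2) and the 14-day LSTM window are illustrative assumptions, not the authors' configuration:

```python
# Sketch: ARIMA and LSTM forecasts of a daily case-count series over ~2 months.
# Synthetic data and all hyperparameters are illustrative assumptions.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA
from tensorflow import keras

rng = np.random.default_rng(0)
cases = np.cumsum(rng.poisson(lam=120, size=300)).astype("float64")  # placeholder series

# --- ARIMA: fit and forecast the next 60 days ---
arima_fit = ARIMA(cases, order=(2, 1, 2)).fit()
arima_forecast = arima_fit.forecast(steps=60)

# --- LSTM: sliding 14-day windows -> next-day value ---
window = 14
scaled = (cases - cases.min()) / (cases.max() - cases.min())
X = np.array([scaled[i:i + window] for i in range(len(scaled) - window)])[..., None]
y = scaled[window:]

model = keras.Sequential([
    keras.layers.LSTM(32, input_shape=(window, 1)),
    keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=20, batch_size=32, verbose=0)

# Iterative one-step-ahead forecast for the next 60 days
history = list(scaled[-window:])
for _ in range(60):
    nxt = model.predict(np.array(history[-window:])[None, :, None], verbose=0)[0, 0]
    history.append(float(nxt))

lstm_last = history[-1] * (cases.max() - cases.min()) + cases.min()
print("ARIMA forecast, day 60:", arima_forecast[-1])
print("LSTM forecast, day 60 :", lstm_last)
```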


Journal ArticleDOI
TL;DR: In this article, the authors identify and review key challenges to bridge the knowledge gap between SMEs, companies, organisations, businesses, government institutions and the general public in adopting, promoting and utilising blockchain technology.
Abstract: In this paper, we identify and review key challenges to bridge the knowledge gap between SMEs, companies, organisations, businesses, government institutions and the general public in adopting, promoting and utilising Blockchain technology. The challenges indicated here are Cybersecurity and Data privacy. Additional challenges, supported by literature, are set out in researching data security management systems and legal frameworks to ascertain the types and varieties of valid encryption, data acquisition, policy and outcomes under ISO 27001 and the General Data Protection Regulation. Blockchain, a revolutionary method of storage and immutability, provides a robust storage strategy and, when coupled with a Smart Contract, gives users the ability to form partnerships, share information and consent via a legally based system of carrying out business transactions in a secure digital domain. Globally, ethical and legal challenges differ significantly; consent and trust in the public and private sectors in deploying such defensive data management strategies are directly related to the accountability and transparency systems in place to deliver certainty and justice. Therefore, investment and research in these areas are crucial to establishing a dialogue between nations that includes health, finance and market strategies and encompasses all levels of society. A framework is proposed with elements that include Big Data, Machine Learning and Visualisation methods and techniques. Through the literature we identify a system necessary for carrying out experiments to detect, capture, process and store data. This includes isolating packet data to inform levels of Cybersecurity and privacy-related activities, and ensuring transparency is demonstrated in a secure, smart and effective manner.

24 citations



Journal ArticleDOI
TL;DR: In this paper, the authors presented an overview of more than 160 ML-based approaches developed to combat COVID-19, and classified them into two categories: supervised learning-based and deep learning-based ones.
Abstract: The year 2020 experienced an unprecedented pandemic called COVID-19, which impacted the whole world. The absence of treatment has motivated research in all fields to deal with it. In Computer Science, contributions mainly include the development of methods for the diagnosis, detection, and prediction of COVID-19 cases. Data science and Machine Learning (ML) are the most widely used techniques in this area. This paper presents an overview of more than 160 ML-based approaches developed to combat COVID-19. They come from various sources like Elsevier, Springer, ArXiv, MedRxiv, and IEEE Xplore. They are analyzed and classified into two categories: Supervised Learning-based approaches and Deep Learning-based ones. In each category, the employed ML algorithm is specified and the parameters used are given. The parameters set for each of the algorithms are gathered in different tables. They include the type of the addressed problem (detection, diagnosis, or prediction), the type of the analyzed data (text data, X-ray images, CT images, time series, clinical data, etc.) and the evaluated metrics (accuracy, precision, sensitivity, specificity, F1-score, and AUC). The study discusses the collected information and provides a number of statistics drawing a picture of the state of the art. Results show that Deep Learning is used in 79% of the approaches, of which 65% are based on the Convolutional Neural Network (CNN) and 17% use specialized CNNs. For its part, supervised learning is found in only 16% of the reviewed approaches, and only Random Forest, Support Vector Machine (SVM) and Regression algorithms are employed.

20 citations
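
The review compares approaches by accuracy, precision, sensitivity, specificity, F1-score and AUC. A minimal sketch of how these metrics are computed for a binary COVID-19 classifier with scikit-learn; the label and prediction vectors are placeholders, not results from any reviewed paper:

```python
# Sketch: the evaluation metrics listed in the review, computed with scikit-learn.
# The label/prediction vectors are placeholders, not data from the reviewed papers.
from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                             f1_score, roc_auc_score, confusion_matrix)

y_true  = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]                        # 1 = COVID-19 positive
y_pred  = [1, 0, 1, 0, 0, 1, 1, 0, 1, 1]                        # hard predictions
y_score = [0.9, 0.2, 0.8, 0.4, 0.1, 0.6, 0.7, 0.3, 0.95, 0.85]  # predicted probabilities

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()

print("accuracy   :", accuracy_score(y_true, y_pred))
print("precision  :", precision_score(y_true, y_pred))
print("sensitivity:", recall_score(y_true, y_pred))   # recall = sensitivity
print("specificity:", tn / (tn + fp))
print("F1-score   :", f1_score(y_true, y_pred))
print("AUC        :", roc_auc_score(y_true, y_score))
```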



Journal ArticleDOI
TL;DR: In this paper, a real-time intrusion detection system (RT-IDS) is proposed, which consists of a deep neural network (DNN) trained using 28 features of the NSL-KDD dataset.
Abstract: In recent years, due to the rapid growth in network technology, numerous types of intrusions have been uncovered that differ from existing ones, and conventional firewalls with specific rule sets and policies are incapable of identifying those intrusions in real time. This demands a real-time intrusion detection system (RT-IDS). The ultimate purpose of this research is to construct an RT-IDS capable of identifying intrusions by analysing the inbound and outbound network data in real time. The proposed system consists of a deep neural network (DNN) trained using 28 features of the NSL-KDD dataset. In addition, it contains a machine learning (ML) pipeline with sequential components for categorical data encoding and feature scaling, which is applied before the real-time data are passed to the trained DNN model to make predictions. Moreover, a real-time feature extractor, a C++ program that sniffs data from the real-time network traffic and derives values for the NSL-KDD features from the sniffed data, is deployed between the gateway router and the local area network (LAN). Together with the trained DNN model, the ML pipeline is hosted in a server that can be accessed via a representational state transfer application programming interface (REST API). The DNN achieved testing performance of 81%, 96%, 70% and 81% for accuracy, precision, recall and F1-score, respectively. This research comprises a comprehensive technical explanation of the implementation and functionality of the complete system. Moreover, leveraging the extensive explanations provided in this paper, advanced IDSs capable of identifying modern intrusions can be constructed.

18 citations
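
A minimal sketch of the kind of preprocessing-plus-DNN setup the abstract describes: categorical encoding and feature scaling followed by a small fully connected network. The feature subset, column names, layer sizes and hyperparameters are illustrative assumptions, not the authors' exact design:

```python
# Sketch: categorical encoding + feature scaling + DNN classifier, in the spirit
# of the described RT-IDS. Features and hyperparameters are illustrative assumptions.
import numpy as np
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import OneHotEncoder, StandardScaler
from tensorflow import keras

# Tiny placeholder frame standing in for the selected NSL-KDD features
df = pd.DataFrame({
    "protocol_type": ["tcp", "udp", "tcp", "icmp"],
    "service": ["http", "domain_u", "ftp", "ecr_i"],
    "duration": [0, 2, 15, 0],
    "src_bytes": [181, 105, 2032, 520],
    "label": [0, 0, 1, 1],               # 0 = normal, 1 = intrusion
})

preprocess = ColumnTransformer([
    ("cat", OneHotEncoder(handle_unknown="ignore"), ["protocol_type", "service"]),
    ("num", StandardScaler(), ["duration", "src_bytes"]),
])

X = preprocess.fit_transform(df.drop(columns="label"))
X = np.asarray(X.todense()) if hasattr(X, "todense") else X
y = df["label"].to_numpy()

dnn = keras.Sequential([
    keras.layers.Input(shape=(X.shape[1],)),
    keras.layers.Dense(64, activation="relu"),
    keras.layers.Dense(32, activation="relu"),
    keras.layers.Dense(1, activation="sigmoid"),
])
dnn.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
dnn.fit(X, y, epochs=5, verbose=0)

# At serving time, the same fitted pipeline would be applied to features derived
# from sniffed traffic before the DNN prediction (e.g. behind a REST endpoint).
print(dnn.predict(X, verbose=0).ravel())
```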




Journal ArticleDOI
TL;DR: In this paper, a new classification of authentication schemes in WMSNs based on their architecture is presented, and a comprehensive study of the existing authentication schemes in terms of security and performance is provided.
Abstract: Many applications have been developed with the quick emergence of the Internet of Things (IoT) and wireless sensor networks (WSNs) in the health sector. Healthcare applications that use wireless medical sensor networks (WMSNs) provide competent communication solutions for enhancing people's lives. WMSNs rely on highly sensitive and resource-constrained devices, so-called sensors, that sense patients' vital signs and then send them through open channels via gateways to specialists. However, without data security, the data transmitted from WMSNs can be manipulated by adversaries, with serious consequences. In light of this, efficient security solutions and authentication schemes are needed. Lately, researchers have focussed heavily on authentication for WMSNs, and many schemes have been proposed to preserve privacy and security requirements. These schemes face numerous security and performance issues due to the constrained devices used. This paper presents a new classification of authentication schemes in WMSNs based on their architecture; as far as we know, it is the first of its kind. It also provides a comprehensive study of the existing authentication schemes in terms of security and performance. The performance evaluation is based on experimental results. Moreover, it identifies some future research directions and recommendations for designing authentication schemes in WMSNs.
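
The surveyed schemes typically rely on lightweight symmetric primitives so that constrained sensors can authenticate to a gateway. As a purely generic illustration (not any specific scheme from this survey), an HMAC-based challenge-response between a sensor and a gateway sharing a pre-distributed key might look like this:

```python
# Generic illustration only; not a scheme taken from the survey.
# HMAC challenge-response between a sensor and a gateway with a shared key.
import hmac, hashlib, os

shared_key = os.urandom(32)      # pre-distributed sensor/gateway key (assumption)
sensor_id = b"sensor-042"        # hypothetical device identity

# 1. Gateway issues a fresh random nonce as a challenge
challenge = os.urandom(16)

# 2. Sensor proves knowledge of the key by MACing its identity plus the challenge
response = hmac.new(shared_key, sensor_id + challenge, hashlib.sha256).digest()

# 3. Gateway recomputes the MAC and compares in constant time
expected = hmac.new(shared_key, sensor_id + challenge, hashlib.sha256).digest()
print("authenticated:", hmac.compare_digest(response, expected))
```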

Journal ArticleDOI
TL;DR: In this article, a survey of state-of-the-art supervised learning architectures for medical image processing is presented, along with the performance metrics of supervised learning methods, available medical datasets, and open challenges.
Abstract: Medical image interpretation is an essential task for the correct diagnosis of many diseases. Pathologists, radiologists, physicians, and researchers rely heavily on medical images to perform diagnoses and develop new treatments. However, manual medical image analysis is tedious and time-consuming, making it necessary to identify accurate automated methods. Deep learning, especially supervised deep learning, shows impressive performance in the classification, detection, and segmentation of medical images and has proven comparable in ability to humans. This survey aims to help researchers and practitioners of medical image analysis understand the key concepts and algorithms of supervised learning techniques. Specifically, this survey explains the performance metrics of supervised learning methods; summarizes the available medical datasets; studies the state-of-the-art supervised learning architectures for medical image processing, including convolutional neural networks (CNNs) and their corresponding algorithms, region-based CNNs and their variants, fully convolutional networks (FCNs) and the U-Net architecture; and discusses the trends and challenges in the application of supervised learning methods to medical image analysis. Supervised learning requires large labeled datasets to learn and achieve good performance, and data augmentation, transfer learning, and dropout techniques have been widely employed in medical image processing to overcome the lack of such datasets.
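
A minimal sketch of the transfer learning, data augmentation and dropout recipe the survey highlights for small labeled medical-image datasets, using Keras. The backbone, input size and two-class head are illustrative assumptions, not choices made in the survey:

```python
# Sketch: transfer learning + data augmentation + dropout for medical image
# classification. Backbone, input size and class count are illustrative assumptions.
from tensorflow import keras
from tensorflow.keras import layers

num_classes = 2                                   # e.g. pathology present / absent
inputs = keras.Input(shape=(224, 224, 3))

# Light augmentation to compensate for limited labeled data
augment = keras.Sequential([
    layers.RandomFlip("horizontal"),
    layers.RandomRotation(0.05),
])

backbone = keras.applications.ResNet50(include_top=False, weights="imagenet",
                                       pooling="avg")
backbone.trainable = False                        # freeze pretrained features

x = augment(inputs)
x = keras.applications.resnet50.preprocess_input(x)
x = backbone(x, training=False)
x = layers.Dropout(0.5)(x)                        # regularization noted in the survey
outputs = layers.Dense(num_classes, activation="softmax")(x)

model = keras.Model(inputs, outputs)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
# model.fit(train_ds, validation_data=val_ds, epochs=10)  # with a labeled dataset
```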



Journal ArticleDOI
TL;DR: PUFchain 2.0, as mentioned in this paper, is the first hardware-assisted blockchain for simultaneously handling device and data security in smart healthcare, allowing Internet of Medical Things (IoMT) devices to connect and obtain PUF keys from the edge server.
Abstract: This article presents the first-ever hardware-assisted blockchain for simultaneously handling device and data security in smart healthcare. This article presents the hardware security primitive physical unclonable functions (PUF) and blockchain technology together as PUFchain 2.0 with a two-level authentication mechanism. The proposed PUFchain 2.0 security primitive presents a scalable approach by allowing Internet of Medical Things (IoMT) devices to connect and obtain PUF keys from the edge server with an embedded PUF module instead of connecting a PUF module to each device. The PUF key, once assigned to a particular media access control (MAC) address by the miner, will be unique for that MAC address and cannot be assigned to other devices. PUFs are developed based on internal micro-manufacturing process variations during chip fabrication. This property of PUFs is integrated with blockchain by including the PUF key of the IoMT into blockchain for authentication. The robustness of the proposed Proof of PUF-Enabled authentication consensus mechanism in PUFchain 2.0 has been substantiated through test bed evaluation. Arbiter PUFs have been used for the experimental validation of PUFchain 2.0. From the obtained 200 PUF keys, 75% are reliable and the Hamming distance of the PUF module is 48%. Obtained database outputs along with other metrics have been presented for validating the potential of PUFchain 2.0 in smart healthcare.
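
The reported 48% Hamming distance and 75% reliability are standard PUF quality metrics: uniqueness across devices and stability across repeated reads. A minimal sketch of how they are typically computed from raw response bits; the responses below are randomly generated stand-ins, not data from the PUFchain 2.0 test bed:

```python
# Sketch: inter-device Hamming distance (uniqueness) and intra-device reliability
# of PUF responses. Randomly generated bits stand in for real PUF measurements.
import numpy as np

rng = np.random.default_rng(1)
n_devices, n_bits, n_reads = 20, 128, 10

golden = rng.integers(0, 2, size=(n_devices, n_bits))        # one response per device
noise = rng.random((n_reads, n_bits)) < 0.05                  # assumed 5% bit-flip rate
repeated_reads = np.logical_xor(golden[0], noise).astype(int) # noisy reads of device 0

def hamming_pct(a, b):
    return np.mean(a != b) * 100

# Uniqueness: average pairwise Hamming distance between devices (ideal ~50%)
pairs = [(i, j) for i in range(n_devices) for j in range(i + 1, n_devices)]
uniqueness = np.mean([hamming_pct(golden[i], golden[j]) for i, j in pairs])

# Reliability: 100% minus average intra-device Hamming distance (ideal ~100%)
reliability = 100 - np.mean([hamming_pct(golden[0], r) for r in repeated_reads])

print(f"average inter-device Hamming distance: {uniqueness:.1f}%")
print(f"reliability of device 0: {reliability:.1f}%")
```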

Journal ArticleDOI
TL;DR: In this article, the authors describe the development of an emotional corpus (hereafter called "BEmoC") for classifying six emotions in Bengali texts, i.e., anger, fear, surprise, sadness, joy, and disgust.
Abstract: Emotion classification in text has attracted growing interest among NLP experts due to the enormous availability of people's emotions and their emergence on various Web 2.0 applications/services. Emotion classification in Bengali texts is also gradually being considered an important task for sports, e-commerce, entertainment, and security applications. However, it is a very challenging task to develop an automatic emotion classification system for low-resource languages such as Bengali. Scarcity of resources and the deficiency of benchmark corpora make the task more complicated. Thus, the development of a benchmark corpus is a prerequisite for developing an emotion classifier for Bengali texts. This paper describes the development of an emotional corpus (hereafter called 'BEmoC') for classifying six emotions in Bengali texts. The corpus development process consists of four key steps: data crawling, pre-processing, labelling, and verification. A total of 7000 texts are labelled into six basic emotion categories: anger, fear, surprise, sadness, joy, and disgust. Dataset evaluation with a Cohen's κ score of 0.969 indicates close agreement between the corpus annotators and the expert. The evaluation analysis also shows that the distribution of emotion words obeys Zipf's law. Moreover, the results of the BEmoC analysis are presented in terms of coding reliability, emotion density, and the most frequent emotion words.
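
The reported Cohen's κ of 0.969 quantifies agreement between the corpus annotators and the expert. A minimal sketch of the computation with scikit-learn; the label sequences are toy stand-ins, not actual BEmoC annotations:

```python
# Sketch: Cohen's kappa between one annotator and the expert over the six
# emotion labels. Label sequences are toy examples, not BEmoC data.
from sklearn.metrics import cohen_kappa_score

labels = ["anger", "fear", "surprise", "sadness", "joy", "disgust"]

annotator = ["joy", "anger", "sadness", "joy", "fear", "disgust", "joy", "surprise"]
expert    = ["joy", "anger", "sadness", "joy", "fear", "disgust", "sadness", "surprise"]

kappa = cohen_kappa_score(annotator, expert, labels=labels)
print(f"Cohen's kappa: {kappa:.3f}")   # values near 1 indicate close agreement
```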



Journal ArticleDOI
TL;DR: In this article, the authors highlight the usability measures currently used for voice assistants, the independent variables used and their context of use, and conclude with what has been carried out on voice assistant usability measurement and what research gaps remain.
Abstract: Voice assistants (VAs) are an emerging technology that has become an essential tool of the twenty-first century. The ease of access and use of VAs has generated strong interest in their usability. Usability is an essential aspect of any emerging technology, with every technology having a standardized usability measure. Despite the high acceptance rate of VAs, to the best of our knowledge, few studies have been carried out on voice assistants' usability. We reviewed studies that used voice assistants for various tasks in this context. Our study highlights the usability measures currently used for voice assistants. Moreover, it also highlights the independent variables used and their context of use. We employed the ISO 9241-11 framework as the measuring tool in our study. We highlight the voice assistant usability measures currently used, both within the ISO 9241-11 framework and outside of it, to provide a comprehensive view. A range of diverse independent variables is identified that were used to measure usability. We also identify independent variables that have not yet been used to measure some aspects of the usability experience. We conclude with what has been carried out on voice assistant usability measurement and what research gaps remain. We also examine whether the ISO 9241-11 framework can be used as a standard measurement tool for voice assistants.

Journal ArticleDOI
TL;DR: In this paper, the authors propose a distributed approach based on blockchain technology to define a supply chain management system able to provide quality, integrity and traceability of the entire supply chain process.
Abstract: Ensuring high quality and safety of food products has become a key factor, on the one hand, to protect and improve consumers' health and, on the other, to gain market share. For this reason, much effort in recent years has been devoted to the development of integrated and innovative Agriculture and Food (Agri-Food) supply chain management systems, which should be responsible, in addition to tracking and storing orders and deliveries, for guaranteeing transparency and traceability of the food production and transformation process. In this paper, differently from traditional supply chains which are based on centralized systems, we propose a fully distributed approach, based on blockchain technology, to define a supply chain management system able to provide quality, integrity and traceability of the entire supply chain process. The proposed framework is based on the Hyperledger Fabric technology, which is a permissioned blockchain system: a prototype has been developed and, through some use cases, we show the effectiveness of the approach.


Journal ArticleDOI
TL;DR: An elliptic curve symmetric key-based algorithm for secure message forwarding is proposed, which is relatively efficient in terms of storage, communication, energy and computation requirements and robust under the Canetti–Krawczyk threat model.
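
Since no abstract is shown for this entry, the following is a generic illustration, not the authors' protocol: an elliptic-curve Diffie-Hellman exchange can yield a shared symmetric key that is then used to forward an authenticated, encrypted message (sketched here with the Python `cryptography` library):

```python
# Generic illustration, not the scheme proposed in the paper: derive a shared
# symmetric key from an ECDH exchange and use it to forward an encrypted message.
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives.kdf.hkdf import HKDF
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Each party generates an EC key pair (curve choice is an assumption)
sender_priv = ec.generate_private_key(ec.SECP256R1())
receiver_priv = ec.generate_private_key(ec.SECP256R1())

def derive_key(own_priv, peer_pub):
    """Derive a 256-bit symmetric key from the ECDH shared secret."""
    shared = own_priv.exchange(ec.ECDH(), peer_pub)
    return HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                info=b"message-forwarding").derive(shared)

k_sender = derive_key(sender_priv, receiver_priv.public_key())
k_receiver = derive_key(receiver_priv, sender_priv.public_key())
assert k_sender == k_receiver

# Forward a message under AES-GCM with the derived key
nonce = os.urandom(12)
ciphertext = AESGCM(k_sender).encrypt(nonce, b"sensor reading: 37.1 C", b"header")
print(AESGCM(k_receiver).decrypt(nonce, ciphertext, b"header"))
```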



Journal ArticleDOI
TL;DR: In this paper, a review of approaches to pediatric epilepsy seizure identification using machine learning techniques is presented, in addition to the techniques applied to the CHB-MIT EEG database of epileptic pediatric signals.
Abstract: Epilepsy is the second most common neurological disease after Alzheimer's. It is a disorder of the brain which results in recurrent seizures. Though epilepsy in general is considered a serious disorder, its effects in children are particularly dangerous, mainly because it causes a slower rate of development and a failure to acquire certain skills in such children. Seizures are the most common symptom of epilepsy. As a regular medical procedure, specialists record brain activity using an electroencephalogram (EEG) to observe epileptic seizures. The detection of these seizures is performed by specialists, but the results might not be accurate and depend on the specialist's experience; therefore, automated detection of epileptic pediatric seizures might be an optimal solution. In this regard, several techniques have been investigated in the literature. This research aims to review the approaches to pediatric epilepsy seizure identification, especially those based on machine learning, in addition to the techniques applied to the CHB-MIT scalp EEG database of epileptic pediatric signals.
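
Many of the reviewed CHB-MIT approaches follow a common pattern: window the EEG, extract spectral features, and train a classifier. A minimal sketch of that pattern using band-power features and an SVM; the synthetic signal, labels and band definitions are illustrative assumptions, not taken from any reviewed study:

```python
# Sketch of a typical seizure-detection pipeline: windowed EEG band-power
# features + SVM. Synthetic data stands in for CHB-MIT recordings.
import numpy as np
from scipy.signal import welch
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

fs = 256                                              # CHB-MIT sampling rate (Hz)
rng = np.random.default_rng(0)
windows = rng.standard_normal((200, fs * 4))          # 4-second single-channel windows
labels = rng.integers(0, 2, size=200)                 # 1 = seizure window (toy labels)

bands = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 13),
         "beta": (13, 30), "gamma": (30, 45)}

def band_powers(x):
    freqs, psd = welch(x, fs=fs, nperseg=fs)
    return [psd[(freqs >= lo) & (freqs < hi)].sum() for lo, hi in bands.values()]

X = np.array([band_powers(w) for w in windows])

clf = SVC(kernel="rbf", C=1.0)
scores = cross_val_score(clf, X, labels, cv=5)
print("cross-validated accuracy:", scores.mean())     # ~chance on random data
```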



Journal ArticleDOI
TL;DR: A detailed state-of-the-art analysis of text summarization concepts such as summarization approaches, techniques used, standard datasets, evaluation metrics and future scopes for research is provided in this paper.
Abstract: One of the most pressing issues that have arisen due to the rapid growth of the Internet is known as information overload. Simplifying the relevant information in the form of a summary will assist many people because the material on any topic is plentiful on the Internet. Manually summarising massive amounts of text is quite challenging for humans, which has increased the need for more complex and powerful summarizers. Researchers have been trying to improve approaches for creating summaries since the 1950s, such that the machine-generated summary matches the human-created summary. This study provides a detailed state-of-the-art analysis of text summarization concepts such as summarization approaches, techniques used, standard datasets, evaluation metrics and future scopes for research. The most commonly accepted approaches are extractive and abstractive, which are studied in detail in this work. Evaluating the summary and increasing the development of reusable resources and infrastructure aid in comparing and replicating findings, adding competition to improve the outcomes. Different evaluation methods for generated summaries are also discussed in this study. Finally, at the end of this study, several challenges and research opportunities related to text summarization research are mentioned that may be useful for potential researchers working in this area.
Keywords: Automatic text summarization, Natural Language Processing, Categorization of text summarization systems, Abstractive text summarization, Extractive text summarization, Hybrid text summarization, Evaluation of text summarization systems
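
A minimal sketch of the extractive approach discussed in the survey: score sentences by their TF-IDF weight and keep the top-ranked ones in original order. The example document and summary length are illustrative choices:

```python
# Sketch of a simple extractive summarizer: rank sentences by summed TF-IDF
# weight and keep the top-k in original order. Example text and k are illustrative.
import re
from sklearn.feature_extraction.text import TfidfVectorizer

document = ("Automatic text summarization condenses a document into a short summary. "
            "Extractive methods select existing sentences, while abstractive methods "
            "generate new sentences. Evaluation commonly compares machine summaries "
            "against human-written references. Benchmark datasets and shared metrics "
            "make results comparable across systems.")

sentences = re.split(r"(?<=[.!?])\s+", document.strip())

tfidf = TfidfVectorizer(stop_words="english")
scores = tfidf.fit_transform(sentences).sum(axis=1).A1   # one score per sentence

k = 2
top = sorted(sorted(range(len(sentences)), key=lambda i: scores[i], reverse=True)[:k])
print(" ".join(sentences[i] for i in top))
```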