
Showing papers in "Computación Y Sistemas in 2020"


Journal ArticleDOI
TL;DR: A comparative analysis of Hadoop MapReduce and Spark has been presented on the basis of working principle, performance, cost, ease of use, compatibility, data processing, failure tolerance, and security.
Abstract: In the last decade, the tremendous growth in data has emphasized big data storage and management issues as the highest priorities. To provide better support to software developers dealing with big data problems, new programming platforms are continuously being developed; Hadoop MapReduce was a big game-changer, followed by Spark, which set the world of big data on fire with its processing speed and comfortable APIs. The Hadoop framework emerged as a leading tool based on the MapReduce programming model with a distributed file system. Spark, on the other hand, is a recently developed big data analysis and management framework used to explore the unlimited underlying features of big data. In this research work, a comparative analysis of Hadoop MapReduce and Spark is presented on the basis of working principle, performance, cost, ease of use, compatibility, data processing, failure tolerance, and security. Experimental analysis has been performed to observe the performance of Hadoop MapReduce and Spark and to establish their suitability under different constraints of the distributed computing environment.
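The MapReduce programming model that underlies the Hadoop framework can be illustrated with a minimal pure-Python sketch of the classic word count, with explicit map, shuffle, and reduce phases (this illustrates the model only; it is not the authors' benchmark code, and a real Hadoop or Spark job would distribute these phases across a cluster):

```python
from collections import defaultdict

def map_phase(documents):
    # Map: emit a (word, 1) pair for every word in every document
    for doc in documents:
        for word in doc.lower().split():
            yield word, 1

def shuffle_phase(pairs):
    # Shuffle: group all emitted values by key, as the framework
    # would do between the map and reduce stages
    grouped = defaultdict(list)
    for key, value in pairs:
        grouped[key].append(value)
    return grouped

def reduce_phase(grouped):
    # Reduce: sum the counts collected for each word
    return {word: sum(counts) for word, counts in grouped.items()}

docs = ["big data needs big tools", "spark processes big data"]
counts = reduce_phase(shuffle_phase(map_phase(docs)))
# counts["big"] is 3 and counts["data"] is 2
```

Spark's main performance advantage over this disk-backed pipeline comes from keeping the intermediate grouped data in memory between stages.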

13 citations


Journal ArticleDOI
TL;DR: A cooking QA system in which a recipe question is contextually classified into a particular category using deep learning techniques and the question class is then used to extract the requisite details from the recipe obtained via the rule-based approach to provide a precise answer.
Abstract: In an automated Question Answering (QA) system, Question Classification (QC) is an essential module. The aim of QC is to identify the type of a question and classify it by its expected answer type. Although the machine-learning approach overcomes the limitations of the conventional rule-based approach, it is restricted to a predefined set of question classes, and the existing approaches are too specific for users. To address this challenge, we have developed a cooking QA system in which a recipe question is contextually classified into a particular category using deep learning techniques. The question class is then used to extract the requisite details from the recipe, obtained via a rule-based approach, to provide a precise answer. The main contribution of this paper is the description of the QC module of the cooking QA system. The intermediate classification accuracy over unseen data is 90%, and the human evaluation accuracy of the final system output is 39.33%.
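As an illustration of what a question-classification module does, here is a toy keyword-based classifier in Python; the categories and cue phrases are hypothetical, and the paper's actual system uses deep learning rather than keyword rules:

```python
# Hypothetical question categories and cue phrases; the paper's actual
# taxonomy and deep-learning classifier are not reproduced here.
CATEGORY_CUES = {
    "ingredient": ["ingredient", "what do i need", "substitute"],
    "duration":   ["how long", "minutes", "time"],
    "method":     ["how do i", "how to", "technique"],
}

def classify_question(question):
    # Return the first category whose cue phrase occurs in the question
    q = question.lower()
    for category, cues in CATEGORY_CUES.items():
        if any(cue in q for cue in cues):
            return category
    return "other"
```

For example, `classify_question("How long should I bake the bread?")` returns `"duration"`; a learned classifier replaces these brittle cue lists with contextual features.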

10 citations


Journal ArticleDOI
TL;DR: This research analyzes how digital advertising uses Internet users' data to segment audiences and create profiles according to their tastes and preferences, in order to send them personalized advertising.

Abstract: This research analyzes how digital advertising uses Internet users' data to segment audiences and create profiles according to their tastes and preferences, in order to send them personalized advertising. The study uses participant observation, analysis, and synthesis, and compiles information from various bibliographic sources related to the subject under study, to build the theoretical and methodological corpus that allows articulating, understanding, and interpreting the information with the greatest possible precision. It concludes that the customers who receive personalized advertising are potential candidates for making purchases over the Internet, because their tastes, preferences, and desires match the advertising sent to them.

6 citations


Journal ArticleDOI
TL;DR: In this paper, the current state of social research related to organized supporters groups is discussed, which can be classified into the notions of violence, identity, ritual and an empirical reference, "el aguante".
Abstract: Organized Supporters Groups were made visible by multiple incidents in Europe and Ibero-America, events that spread from the 1960s onward. They became a problem of public order for official and sports authorities, as well as for the media, and also a research problem for anthropologists, psychologists, and sociologists. The purpose of this article is to detail the current state of social research related to Organized Supporters Groups. The references, studies developed from different theoretical, conceptual, and methodological approaches, were located in electronic libraries. Finally, social research on Organized Supporters Groups can be classified into the notions of violence, identity, and ritual, and an empirical referent, "el aguante". These social categories and the empirical referent have been used to interpret and understand the behavior of organized supporter groups.

6 citations


Journal ArticleDOI
TL;DR: The Dominican Trickster, as a daimonic figure, becomes polarized in the shadow, turning into a persecutor through aggressive humor, which manifests as an unconscious defense that seeks to belittle and ridicule the other, as mentioned in this paper.

Abstract: This reflective article argues that humor among Dominicans has become a pathological defense that remains a barrier preventing the integration of traumatic experiences such as colonization. The approach is qualitative, with a documentary design of a bibliographic and argumentative type. Colonization was a traumatic event that altered the life of the original peoples, with collective experiences of abuse, death, mistreatment, forced labor, famine, epidemics, wars, suicides, a ruthless slave system, and cultural loss. Colonization manifests itself as a masked wound of cultural trauma, confronted from the unconscious through the figure of the Trickster, a universal character representing the comic. The Dominican Trickster, as a daimonic figure, becomes polarized in the shadow, turning into a persecutor through aggressive humor, which manifests as an unconscious defense that seeks to belittle and ridicule the other. This aggressive, spasmodic, dry humor is a form of compulsive control known as relajo. It is a Trickster that expresses the invisibilization of the original groups, the misfortune of the Spanish heritage, the sadness of the enslaved Black people, and all the suffering that followed colonization, in a racial and cultural mixture only partially incorporated. Dominican relajo is an unconscious defense that maintains the structure of the trauma and, therefore, the polarized fragmentation of the individual and collective self, which is not integrated into conscious life and thus endures as a continual excuse for the historical and cultural trauma of colonization.

5 citations


Journal ArticleDOI
TL;DR: An evolutionary method for automatic text summarization based on the best-correlated internal clustering validation index is proposed, which has the advantage of not requiring information regarding the specific classes or themes of a text, and is therefore domain- and language-independent.

Abstract: The main problem in generating an extractive automatic text summary (EATS) is detecting the key themes of a text. For this task, unsupervised approaches cluster the sentences of the original text to find the key sentences that take part in an automatic summary. The quality of an automatic summary is evaluated using similarity metrics against human-made summaries. However, the relationship between the quality of human-made summaries and the internal quality of the clustering is unclear. First, this paper compares the correlations between the quality of human-made summaries and several internal clustering validation indices, to find the index with the best correlation. Second, an evolutionary method for automatic text summarization based on that best-correlated internal clustering validation index is proposed. Our unsupervised method for EATS has the advantage of not requiring information regarding the specific classes or themes of a text, and is therefore domain- and language-independent. The high results obtained by our method on the most competitive standard collection for EATS show that it maintains a high correlation with human-made summaries while meeting the specific features of the groups, such as compactness, separation, distribution, and density.
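Internal clustering validation indices of the kind the paper correlates with summary quality can be illustrated with a small pure-Python silhouette computation over toy 2-D points (the paper's actual indices, features, and sentence representations are not specified here):

```python
def euclidean(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def silhouette(points, labels):
    # Mean silhouette score: (b - a) / max(a, b) per point, where a is
    # the mean intra-cluster distance and b the mean distance to the
    # nearest other cluster; values near 1 mean compact, well-separated
    # clusters.
    scores = []
    for i, p in enumerate(points):
        same = [euclidean(p, q) for j, q in enumerate(points)
                if j != i and labels[j] == labels[i]]
        a = sum(same) / len(same)
        other = {}
        for j, q in enumerate(points):
            if labels[j] != labels[i]:
                other.setdefault(labels[j], []).append(euclidean(p, q))
        b = min(sum(d) / len(d) for d in other.values())
        scores.append((b - a) / max(a, b))
    return sum(scores) / len(scores)

# Two compact, well-separated toy clusters
pts = [(0, 0), (0, 1), (5, 5), (5, 6)]
labs = [0, 0, 1, 1]
# silhouette(pts, labs) is close to 0.86
```

In the summarization setting, the points would be sentence vectors and the labels a candidate grouping of sentences into themes.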

5 citations


Journal ArticleDOI
TL;DR: A deep learning model based on BLSTM is implemented that predicts the language of origin of each word in a sequence based on the specific words that come before it; it gives better accuracy with a word-embedding model than with character embeddings, evaluated on two testing sets.

Abstract: This paper describes the application of the code-mixing index to Indian social media texts and compares the complexity of identifying language at the word level using a BLSTM neural model. In Natural Language Processing, one of the imperative and relatively less mature areas is transliteration. During transliteration, issues such as language identification, script specification, and missing sounds arise in code-mixed data. Social media platforms are now widely used by people to express their opinions or interests, and the language used is often code-mixed text, i.e., a mixing of two or more languages in which one language is written using another language's script. To process such code-mixed text, identifying the language of each word is important for language processing. The major contribution of this work is a technique for identifying the language of Hindi-English code-mixed data from three social media platforms: Facebook, Twitter, and WhatsApp. We propose a deep learning framework based on cBoW and Skip-gram models for language identification in code-mixed data, with popular word-embedding features used to represent each word. Much research has recently been done on language identification, but word-level language identification in a transliterated environment remains an open research issue in code-mixed data. We have implemented a deep learning model based on BLSTM that predicts the language of origin of each word in a sequence based on the specific words that come before it, together with multichannel neural networks combining CNN and BLSTM for word-level language identification of code-mixed data in which English and Hindi Roman transliteration are used, combined with cBoW and Skip-gram for evaluation. The proposed BLSTM context-capture module gives better accuracy with the word-embedding model than with character embeddings, evaluated on our two testing sets. The problem is modeled jointly with the deep learning design, and we present an in-depth empirical analysis of the proposed methodology against standard approaches for language identification.
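A minimal baseline for word-level language identification in code-mixed text is a wordlist lookup, sketched below with tiny hypothetical vocabularies; the paper's BLSTM model replaces exactly this kind of context-free lookup with a learned, context-sensitive tagger:

```python
# Toy wordlists; the actual system learns from context with a BLSTM
# rather than looking words up in fixed vocabularies.
ENGLISH = {"love", "the", "movie", "is", "good"}
HINDI_ROMAN = {"bahut", "accha", "hai", "yaar", "nahi"}

def tag_language(tokens):
    # Tag each token as English ("en"), romanized Hindi ("hi"),
    # or unknown ("unk")
    tags = []
    for token in tokens:
        word = token.lower()
        if word in HINDI_ROMAN:
            tags.append("hi")
        elif word in ENGLISH:
            tags.append("en")
        else:
            tags.append("unk")
    return tags
```

For example, `tag_language("movie bahut accha hai yaar".split())` yields `["en", "hi", "hi", "hi", "hi"]`; ambiguous or out-of-vocabulary words are where a contextual neural model pays off.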

5 citations


Journal ArticleDOI
TL;DR: The model created is an ensemble of classical machine learning models, including Logistic Regression, Support Vector Machines, and Naive Bayes, as well as a combination of Logistic Regression and Naive Bayes.
Abstract: The work of Paolo Rosso was partially funded by the Spanish MICINN under the research project MISMISFAKEnHATE on Misinformation and Miscommunication in social media: FAKE news and HATE speech (PGC2018-096212-B-C31).

4 citations


Journal ArticleDOI
TL;DR: The Ryff Psychological Well-being Scales (Spanish adaptation by Rodriguez-Carvajal et al.) and qualitative data collected through an open-ended questionnaire were used to verify whether levels of psychological well-being increased in people who had experienced a romantic breakup after participating in an online group intervention based on Worden's grief tasks model and the techniques recommended by this author for grief work.

Abstract: This study sought to verify whether levels of psychological well-being increased in people who had experienced a romantic breakup after participating in an online group intervention based on Worden's grief tasks model and the techniques recommended by this author for grief work. The final sample consisted of 12 adult participants residing in the Dominican Republic. Quantitative data were collected using the Ryff Psychological Well-being Scales in the Spanish adaptation by Rodriguez-Carvajal et al. (2010); in addition, qualitative data were collected through an open-ended questionnaire. The results show a significant increase in levels of psychological well-being after the intervention (M = 28.58; t = 5.86, p < 0.001). The qualitative findings, classified into five main categories, show the participants' perception of positive changes, such as the development of empathy with group members, the revision of erroneous ideas that could interfere with the grief process, and motivation to continue therapeutic processes. Further research with a larger sample and a control group is recommended so that generalizations and definitive conclusions can be drawn about the effectiveness of the designed and applied program.

4 citations


Journal ArticleDOI
TL;DR: A tool for the classification of natural language in the social network Twitter, whose purpose is to divide into two classes the opinions that users express about the political moment of the 2018 Mexican presidential elections, using the CLiPS tool from Python.

Abstract: In this work we developed a tool for the classification of natural language in the social network Twitter. The main purpose is to divide into two classes the opinions that users express about the political moment of the 2018 Mexican presidential elections. In this scenario, Tweets randomly downloaded from different users serve as the corpus, and with the tagging algorithm it has been possible to classify the comments into two categories, defined as praises and insults, directed towards the presidential candidates. The CLiPS tool from Python has been used for this purpose, with the inclusion of the tagging algorithm. Finally, the frequency of the terms is analyzed with descriptive statistics.

4 citations


Journal ArticleDOI
TL;DR: The design process and construction of a 4-DOF robotic arm is described, involving CAD, CAM, electronics, and Matlab’s Robotics Toolbox to solve kinematics.
Abstract: A robot is a complex machine involving the conjunction of many technologies working harmoniously together to provide the end user with a convenient interface. The kinds of problems that must be solved to build a robotic arm include overcoming lateral loads, power consumption, the solution of kinematics equations, etc. Peter Corke's Robotics Toolbox [1] is a computer library useful for designing, modeling, visualizing, and simulating a robot, and it is widely utilized in the present study. This paper describes the design process and construction of a 4-DOF robotic arm, involving CAD, CAM, electronics, and Matlab's Robotics Toolbox to solve the kinematics. It constitutes a low-cost platform, in a process of permanent improvement, for laboratory courses on design, manufacturing, electronics, and robotics, essential for many engineering curricula. The platform provides the student with hands-on experience that consolidates classroom lectures.

Journal ArticleDOI
TL;DR: The system for inferring implicit computable knowledge from textual data by natural deduction is introduced and a large corpus of TIL meaning procedures is obtained.
Abstract: In this paper, we introduce a system for inferring implicit computable knowledge from textual data by natural deduction. Our background system is Transparent Intensional Logic (TIL) with its procedural semantics, which assigns abstract procedures known as TIL constructions to terms of natural language as their context-invariant meanings. The input data for our method are produced by the so-called Normal Translation Algorithm (NTA), which processes natural-language texts and produces TIL constructions. In this way we have obtained a large corpus of TIL meaning procedures. These procedures are then processed by our algorithms for type checking and context recognition, so that the rules of natural deduction for inferring computable knowledge can afterwards be applied.

Journal ArticleDOI
TL;DR: An approach to obtaining information from clinical notes, based on Natural Language Processing techniques and the Paragraph Vectors algorithm, is presented; results show the best classification model is the MLP model, with a precision of 0.89 and an f1-score of 0.87, although the difference in precision between models is minimal.

Abstract: Machine learning (ML) techniques have been used to classify cancer types to support physicians in the diagnosis of disease. Usually, these models are based on structured data obtained from clinical databases. However, valuable information in the form of clinical notes included in patient records is used infrequently. In this paper, an approach to obtaining information from clinical notes, based on Natural Language Processing techniques and the Paragraph Vectors algorithm, is presented. Machine learning models are used for the classification of liver, breast, and lung cancer patients, and a comparison and evaluation of the chosen ML models with varying parameters was conducted to obtain the best one. The ML algorithms chosen are Support Vector Machines (SVM) and Multi-Layer Perceptron (MLP). The results obtained are promising and show that the best classification model is the MLP model, with a precision of 0.89 and an f1-score of 0.87, although the difference in precision between models is minimal (0.02).
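The idea of embedding a clinical note and classifying it can be sketched with a toy stand-in: averaging hypothetical word vectors (instead of the Paragraph Vectors algorithm the paper uses) and assigning the nearest class centroid (instead of a trained SVM or MLP):

```python
# Hypothetical 2-D word vectors; a real system would learn
# high-dimensional Paragraph Vectors from the clinical corpus.
WORD_VECS = {
    "liver": (1.0, 0.0), "hepatic": (0.9, 0.1),
    "lung": (0.0, 1.0), "pulmonary": (0.1, 0.9),
}
CENTROIDS = {"liver": (0.95, 0.05), "lung": (0.05, 0.95)}

def embed(note):
    # Embed a note as the average of its known word vectors
    vecs = [WORD_VECS[w] for w in note.lower().split() if w in WORD_VECS]
    return tuple(sum(v[i] for v in vecs) / len(vecs) for i in range(2))

def classify(note):
    # Assign the class whose centroid is nearest to the note embedding
    e = embed(note)
    return min(CENTROIDS, key=lambda c: sum((a - b) ** 2
                                            for a, b in zip(e, CENTROIDS[c])))
```

The pipeline shape (free text, then a dense document vector, then a classifier) is what the paper describes; every vector and label above is illustrative.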

Journal ArticleDOI
TL;DR: The QSSC (Quito Smart Safe City), a prototype system for locating and issuing early warnings about missing persons, is presented, based on crowdsensing, which allows it to support decision-making processes that result in quick and effective solutions to events.
Abstract: This article presents QSSC (Quito Smart Safe City), a prototype system for locating and issuing early warnings about missing persons, with the city of Quito as a case study. QSSC integrates the community and the agents responsible for monitoring and solving the missing-persons problem within a shared collaborative space. The system uses a distributed mobile architecture that allows users to report any disappearance and, when possible, to upload multimedia evidence that helps identify and locate the victims and the circumstances under which the disappearance occurred. It employs IoT communications using the Message Queuing Telemetry Transport (MQTT) protocol to exchange information. Together with various advantages provided by the Amazon Web Services (AWS) cloud, MQTT enables real-time communications among all participating rescue parties involved in a kidnapping (indispensable given the importance of response time). The RTLS is an opportunistic, lightweight application that minimizes resource consumption on the host mobile device and manages information efficiently. QSSC is based on crowdsensing, which allows it to support decision-making processes that result in quick and effective responses to events. The viability and operability of the prototype described in this article have been tested against several kidnapping simulations in the city of Quito.

Journal ArticleDOI
TL;DR: After a statistical analysis with 11 different types of query, the conclusion was that the type of query affects the response time of the Database Management Systems.
Abstract: A hybrid and distributed geographical database is being developed which utilizes the database engines PostgreSQL and MongoDB for later implementation in Geoserver’s architecture. In the course of the study, it was necessary to establish whether there was difference between the response times of PostgreSQL and MongoDB per type of query and the use of geographical indexes, in order to appropriately select the Database Management System to be used by the Geoserver implemented Web Map Service. After a statistical analysis with 11 different types of query, the conclusion was that the type of query affects the response time of the Database Management Systems.
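A per-query-type response-time comparison of the kind described above can be sketched with a small timing harness; in-memory SQLite stands in here for PostgreSQL/MongoDB (neither engine is assumed available), and the table and query are hypothetical:

```python
import sqlite3
import time

# In-memory SQLite stands in for the database engines under test;
# the table of point coordinates is hypothetical.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE places (id INTEGER PRIMARY KEY, x REAL, y REAL)")
conn.executemany("INSERT INTO places (x, y) VALUES (?, ?)",
                 [(i * 0.1, i * 0.2) for i in range(1000)])

def timed(query):
    # Return (elapsed seconds, row count) for one query execution
    start = time.perf_counter()
    rows = conn.execute(query).fetchall()
    return time.perf_counter() - start, len(rows)

elapsed, n = timed("SELECT * FROM places WHERE x < 9.95")
```

Running each query type many times and comparing the elapsed-time distributions (as the paper does statistically across 11 query types) is what supports conclusions about which engine to use per query.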

Journal ArticleDOI
TL;DR: The objective of this work is to characterize expository passages semantic structures through FOL and situation calculus with the question-answer block.
Abstract: Reading comprehension in the English language is a process that has been studied from different disciplines. Many postgraduate programs require certification in another language, hence the importance of seeking semantic patterns that allow the creation of intelligent tools to train students in these tasks. The objective of this work is to characterize the semantic structures of expository passages through FOL and situation calculus with the question-answer block.

Journal ArticleDOI
TL;DR: A thorough analysis of recent research in aging and age estimation is presented, and insights for future research based on depth maps and the Kinect camera are offered.

Abstract: Since aging is a non-reversible process, the human face and gait change with time, which produces major variations in appearance. The vast majority of people can easily recognize human traits such as emotional states; they can tell whether a person is happy, sad, or angry from the face, and it is likewise easy to determine a person's gender. However, knowing a person's age is a very challenging task. Hence, significant interest in the computer vision and pattern recognition research community is devoted to automatic age estimation. This paper presents a thorough analysis of recent research in aging and age estimation, discusses popular algorithms and existing models used in age estimation, underlines age estimation challenges, especially with RGB images, and finally offers insights for future research based on depth maps and the Kinect camera.

Journal ArticleDOI
TL;DR: This document presents a way to obtain key terminology based on labels manually produced by an expert in the area, from which POS patterns of key terminology were obtained and later used as filters.

Abstract: Key terminology is very important for scientific works, especially in the Natural Language Processing field. However, there is no optimal way to extract all key terminology reliably, so it is important to develop automatic methods for extracting key terms. This document presents a way to obtain key terminology based on labels that were manually produced by an expert in the area. Subsequently, we obtained POS (part-of-speech) tags for each label, from which we derived patterns of key terminology that were afterwards used as filters. Experiment 1 was run using the manually obtained labels and the labels obtained by the proposed approach, with 60% of the corpus for training and 40% for testing; the patterns were evaluated with three evaluation measures: precision, recall, and F-measure. Experiment 2 used three measures for ranking N-grams (sequences of terms): pointwise mutual information, likelihood ratio, and chi-square. To obtain the best N-grams, in Experiment 3 we implemented intersections between the previous measures and filtered N-grams by POS patterns; compared against the manually labelled set, the evaluation measures gave good recall along with acceptable precision and F-measure. In Experiment 4, the POS patterns were tested on a much larger corpus of a different domain, obtaining slightly higher results.
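Filtering candidate n-grams by POS patterns can be sketched as follows; the pattern used (zero or more adjectives followed by one or more nouns, in Penn Treebank tags) is a common illustrative choice, not necessarily one of the patterns the paper derives from expert labels:

```python
def is_key_term(tags):
    # Candidate pattern (hypothetical): zero or more adjectives (JJ)
    # followed by one or more nouns (NN/NNS)
    i = 0
    while i < len(tags) and tags[i] == "JJ":
        i += 1
    if i == len(tags):          # adjectives alone do not qualify
        return False
    return all(tag in ("NN", "NNS") for tag in tags[i:])

def filter_candidates(tagged_ngrams):
    # Keep only the n-grams whose POS tag sequence matches the pattern
    return [words for words, tags in tagged_ngrams if is_key_term(tags)]
```

Candidates surviving this filter would then be ranked by association measures such as pointwise mutual information, as in Experiments 2 and 3.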

Journal ArticleDOI
TL;DR: This paper presents an approach that combines a deep learning framework with linguistic features for the recognition of aggressiveness in Mexican tweets; given the achieved results, linguistic features seem not to help the deep learning classification for this task.
Abstract: The work of Simona Frenda and Paolo Rosso was partially funded by the Spanish MINECO under the research project SomEMBED (TIN2015-71147-C2-1-P).

Journal ArticleDOI
TL;DR: A new hybrid approach for band selection that combines the advantages of the filter and wrapper methods, together with a new binary version of the Sine Cosine Algorithm proposed to adapt it to the band selection problem.

Abstract: Recently, hyperspectral imagery has been a very active research field in many applications of remote sensing. Unfortunately, the large number of bands reduces classification accuracy and increases computational complexity, which causes the Hughes phenomenon. In this paper, a new hybrid approach for band selection is proposed. This approach combines the advantages of the filter and wrapper methods and is composed of two phases: the first phase reduces the number of bands by merging highly correlated bands, and the second phase uses a wrapper approach based on the Sine Cosine Algorithm to select the optimal band subset that provides high classification accuracy. In addition, a new binary version of the Sine Cosine Algorithm is proposed to adapt it to the band selection problem. The performance of the proposed approach is evaluated on three publicly available benchmark hyperspectral images. The analysis of the results demonstrates the efficiency and performance of the proposed approach.
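A single position update of a binary Sine Cosine Algorithm, in which each band is a bit marking it as selected or not, might look like the following simplified sketch; the amplitude schedule and S-shaped transfer function are common choices in binary metaheuristics and not necessarily the authors' exact formulation:

```python
import math
import random

def binary_sca_step(position, best, t, t_max, rng):
    # One Sine Cosine position update mapped to {0, 1} through a
    # sigmoid transfer function; a simplified sketch, not the authors'
    # exact formulation.
    r1 = 2 - 2 * t / t_max                   # linearly decreasing amplitude
    new_position = []
    for x, p in zip(position, best):
        r2 = rng.uniform(0, 2 * math.pi)     # random phase
        r3 = rng.uniform(0, 2)               # random weight on the best solution
        if rng.random() < 0.5:
            step = r1 * math.sin(r2) * abs(r3 * p - x)
        else:
            step = r1 * math.cos(r2) * abs(r3 * p - x)
        prob = 1 / (1 + math.exp(-(x + step)))  # S-shaped transfer function
        new_position.append(1 if rng.random() < prob else 0)
    return new_position

rng = random.Random(42)                      # fixed seed for reproducibility
selected = binary_sca_step([0, 1, 0, 1, 1], [1, 1, 0, 0, 1], t=1, t_max=10, rng=rng)
```

In the wrapper phase, each candidate bit vector would be scored by the classification accuracy obtained with the corresponding band subset.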

Journal ArticleDOI
TL;DR: This paper proposes a method that extracts the subjective parts of document reviews and combines the extracted opinions with sentiment words returned by a lexical approach, providing superior performance in terms of classification measures.

Abstract: The aim of sentiment classification is to automatically extract and classify a textual review as expressing a positive or negative opinion. In this paper, we study the sentiment classification problem in the Arabic language. We propose a method that extracts the subjective parts of document reviews. In addition, a lexicon is used to find implicit opinions and sentiments in reviews, and we combine the extracted opinions with the sentiment words returned by the lexical approach. Finally, a feature reduction technique is applied. To evaluate the proposed method, a support vector machines (SVM) classifier is applied to the classification task. The results indicate that our proposed approach provides superior performance in terms of classification measures.
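The lexical component of such an approach can be illustrated with a toy sentiment lexicon (English stand-in words are used here, whereas the paper's lexicon covers Arabic, and the real pipeline feeds these signals into an SVM rather than thresholding a raw score):

```python
# Toy sentiment lexicon; English stand-in words, whereas the paper's
# lexicon covers Arabic text.
LEXICON = {"excellent": 2, "good": 1, "bad": -1, "awful": -2}

def classify_review(review):
    # Sum the polarity of every lexicon word found in the review
    score = sum(LEXICON.get(word, 0) for word in review.lower().split())
    return "positive" if score >= 0 else "negative"
```

For example, `classify_review("awful experience bad service")` returns `"negative"`; extracting only the subjective sentences first, as the paper proposes, reduces the noise this scoring sees.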

Journal ArticleDOI
TL;DR: In this article, a cross-sectional predictive study was designed with the application of an online survey to a non-probability sample made up of 518 Mexicans, with the aim of identifying the predictor variables of the psychological responses of the Mexican adult population during the National Health Emergency due to the Covid-19 pandemic.
Abstract: Facing a global health emergency requires avoidance and prevention behaviors that contribute to reducing the transmission and mortality associated with the outbreak. However, many of the required behavioral responses favor the emergence and maintenance of emotional psychological responses such as anxiety. With the aim of identifying the predictor variables of the psychological responses of the Mexican adult population during the National Health Emergency due to the Covid-19 pandemic, a cross-sectional predictive study was designed, applying an online survey to a non-probability sample of 518 Mexicans. The statistical analysis suggests that the variables age, sex, marital status, occupation, religious beliefs, place of residence, family income, and perceived health attitudes and practices were associated with self-perceptions about the pandemic (R=.531), the emission of prosocial and health/prevention behavioral responses (R=.563), and anxiety (R=.297). It was concluded that identifying the predictor variables makes it possible to design psychological interventions that contribute to confronting the current and future pandemics from a psychosocial, inclusive approach sensitive to vulnerable groups.

Journal ArticleDOI
TL;DR: It is concluded that, in addition to the technical challenge of modernization, meaning must be given to social participation and to understanding the benefits of scientific and technological development as a complement to traditional knowledge, so that the projects succeed and the expected results are obtained.

Abstract: This paper presents an analysis, considering scientific and technological development as well as the legal framework, of the La Purisima Dam irrigation system in Guanajuato. The results of this research will eventually improve the productive systems of the irrigation system's users by providing knowledge of the optimal operating characteristics. The Modernization, Automation and Technification Project (PMAYT) of the La Purisima Dam Irrigation Module, in Guanajuato, Mexico, began in 2011 and estimates an investment of 520 million pesos to technify 3 thousand hectares and increase efficiency in the use and conveyance of water, for better performance in agricultural irrigation. This represents an increase in the overall production of the system of between 40 and 85%, an increase in conveyance efficiency to 95%, and savings of 10 million m3 of water extracted annually from the Irapuato Valle Aquifer. The objective of the research is to analyze the meanings that agricultural producers attach to Irrigation District 011, with the purpose of fostering social participation in the implementation of the PMAYT and achieving an improvement in quality of life. This research began in 2016 with users of the module located in Irapuato. The procedures applied in 2018 were questionnaires, semi-structured interviews, and a focus group based on the grounded theory methodology. Among the results, the producers' interest in learning about new irrigation systems stands out, but ignorance of regulations and a lack of resources were also detected. It is concluded that, in addition to the technical challenge of modernization, meaning must be given to social participation and to understanding the benefits of scientific and technological development as a complement to traditional knowledge, so that the projects succeed and the expected results are obtained.

Journal ArticleDOI
TL;DR: A survey on trends and methods of Machine Reading Comprehension (MRC) in Arabic, primarily emphasizing two aspects: available datasets and methods.

Abstract: Machine Reading Comprehension (MRC) is an essential task in natural language understanding, in which the goal is to teach machines to understand text. Machine understanding can be evaluated through question answering techniques. Limited research has been done on Arabic reading comprehension. This study presents a survey of trends and methods in Machine Reading Comprehension (MRC) for Arabic. The survey summarizes recent advances in Arabic reading comprehension, primarily emphasizing two aspects: available datasets and methods. The study provides a detailed analysis of MRC techniques and compares the various datasets used to study Arabic reading comprehension.

Journal ArticleDOI
TL;DR: A controlled natural language based on Semantics of Business Vocabulary and Business Rules (SBVR) to help modelers and domain experts in the process of writing and validating the constraints that cannot be represented in an Entity-Relationship schema; and the Alloy language to allow a formal specification.
Abstract: Traditional methods lack the necessary or appropriate means for expressing integrity constraints during the database conceptual modeling stage. At most, integrity constraints are informally documented and then coded in the application. This leads to late error detection and database inconsistencies due to the incapacity of the domain expert to validate the program code. Thus, it is necessary to express such constraints in a natural and formal way in order to close the gap between modelers and domain experts, and to support the transformation to other languages and models. As a result, we propose a controlled natural language based on Semantics of Business Vocabulary and Business Rules (SBVR) to help modelers and domain experts in the process of writing and validating the constraints that cannot be represented in an Entity-Relationship schema; and the Alloy language to allow a formal specification. In addition, all the correspondences between the models and languages are described in order to consistently express the constraints and to lay the foundations of the automatic transformation. Finally, a case study and a usability survey show that the proposal is feasible, without abandoning a traditional and popular approach such as the Entity-Relationship model.

Journal ArticleDOI
TL;DR: In this paper, a personalized sentence generation method based on generative adversarial networks (GANs) is proposed to cope with the issue of author-specific word usage, where the frequently used function word and content word are incorporated not only as the input features but also as sentence structure constraint for the GAN training.
Abstract: Author-specific word usage is a vital feature that lets readers perceive the writing style of the author. In this work, a personalized sentence generation method based on generative adversarial networks (GANs) is proposed to cope with this issue. The frequently used function words and content words are incorporated not only as input features but also as a sentence structure constraint for the GAN training. For sentence generation on topics chosen by the user, the Named Entity Recognition (NER) information of the input words is also used in the network training. We compared the proposed method with existing GAN-based sentence generation methods, and the experimental results showed that the sentences generated by our method are more similar to the original sentences of the same author under objective evaluations such as the BLEU and SimHash scores.
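The abstract evaluates generated sentences against an author's originals using, among other measures, SimHash similarity. A minimal sketch of how such a fingerprint comparison can work is shown below; the word-level tokenization and MD5-based hashing are illustrative assumptions, not the paper's exact implementation:

```python
import hashlib

def simhash(text, bits=64):
    """Build a fixed-width SimHash fingerprint from word features."""
    v = [0] * bits
    for word in text.lower().split():
        # Stable per-word hash, truncated to the fingerprint width.
        h = int(hashlib.md5(word.encode()).hexdigest(), 16) & ((1 << bits) - 1)
        for i in range(bits):
            v[i] += 1 if (h >> i) & 1 else -1
    # Each output bit reflects the sign of the accumulated vote.
    return sum(1 << i for i in range(bits) if v[i] > 0)

def simhash_similarity(a, b, bits=64):
    """1 minus the normalized Hamming distance between two fingerprints."""
    distance = bin(simhash(a, bits) ^ simhash(b, bits)).count("1")
    return 1 - distance / bits
```

Texts with overlapping vocabulary produce fingerprints that differ in few bits, so stylistically similar sentences score close to 1.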

Journal ArticleDOI
TL;DR: OPAIEH, an ontology-based platform for activity identification of the elderly at home, which includes the ontological model, which allows a new activity characterization and generates a set of graphs that shows different statistics and behaviours of the users.
Abstract: Recently, the sector of the population older than 60 years of age has been growing. Assistance for the elderly at home allows them to increase their autonomy and independence while living alone. Several fields have emerged to improve the quality of life of the elderly and to develop environments that offer help, support and assistance during the realization of their daily activities. The identification of activities is a key piece in the provision of assistance to seniors who live alone. This work is focused on activity recognition for the elderly who live independently at home. We present OPAIEH, an ontology-based platform for activity identification of the elderly at home. The platform includes the ontological model, which allows a new activity characterization. The platform also includes a sensor network, a client device, and a web server to perform the recognition of the different activities that the elderly carry out inside their home. Furthermore, the platform generates a set of graphs that show different statistics and behaviours of the users. In order to perform an experimental test on OPAIEH, a case study was developed as a proof of concept about the use of ontologies for the activity recognition task. The results encourage us to continue our work.
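The core idea of activity identification from in-home sensors can be illustrated with a minimal rule-based sketch. The sensor names and activity labels below are hypothetical and stand in for the paper's ontological model, which is richer than a flat lookup:

```python
# Hypothetical mapping from co-occurring sensor events to a daily activity.
ACTIVITY_RULES = {
    frozenset({"kitchen_motion", "stove_on"}): "cooking",
    frozenset({"bathroom_motion", "water_flow"}): "bathing",
    frozenset({"bedroom_motion", "bed_pressure"}): "sleeping",
}

def identify_activity(events):
    """Return the first activity whose sensor pattern is contained in events."""
    observed = set(events)
    for pattern, activity in ACTIVITY_RULES.items():
        if pattern <= observed:  # pattern is a subset of observed events
            return activity
    return "unknown"
```

An ontology generalizes this lookup by letting new activity characterizations be declared as class axioms instead of hard-coded rules.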

Journal ArticleDOI
TL;DR: In this article, the authors used open interviews carried out with five women graduates of the program, three of the master's and two of the docto-rate, to evaluate if aspects of the gender interfere in the academic trajectories.
Abstract: The research was oriented to evaluate whether aspects of gender interfere in academic trajectories. The theoretical-epistemological framework of the historical-cultural perspective guides this study, making it possible to analyze the distinctions that exist for women in the exercise of investigative activities. The qualitative research method is of particular relevance in analyzing whether old social inequalities dissolve within the diversity of the academic environment. As a research instrument, we used open interviews carried out with five women graduates of the program, three of the master's and two of the doctorate. The analysis of the narratives allowed us to state that the academic career is one of the few alternatives in which women can work and study at the same time. In contrast, the assumption of multiple social roles, the effort undertaken to reconcile double working hours, and the lack of family and financial support for studies were seen as obstacles to training. Likewise, the narratives made it possible to make visible a set of "barriers" to following a scientific career, which include: double working hours, maternity, productivity in research, competence, prejudice and gender discrimination; thus reaffirming the need to develop public policies to expand not only access, but also the permanence of women in the university.

Journal ArticleDOI
TL;DR: LSTM based deep learning models were used for the prediction of these time aware QoS parameters and the results are compared with the previous approaches and the experimental results show that the L STM based Time Series Forecasting Framework is performing better.
Abstract: The convergence of Social, Mobile, Analytics and Cloud (SMAC) technologies gives rise to an unforeseen aggrandizement of web services on the internet. The resilience and payment-based approach of the cloud makes it an obvious choice for the deployment of web-service-based applications. Among the available web services that satisfy similar functionalities, the choice of a web service based on personalized quality of service (QoS) parameters plays an important role in its selection. The role of time is rarely discussed in determining the QoS of web services. QoS is often not delivered as declared because the non-functional performance of web services is correlated with the invocation time, since service status usually changes over time. Hence, the design of a time-aware web service recommendation system based on personalized QoS parameters is crucial and becomes a challenging research issue. In this study, LSTM-based deep learning models were used for the prediction of these time-aware QoS parameters, and the results are compared with previous approaches. RMSE, MAE, and MAPE are used as evaluation metrics. Their values for the prediction of response time (RT) are found to be 0.030269, 0.02382 and 0.59773 respectively with adaptive moment estimation as the training option, and 0.66988, 0.66465 and 27.9934 respectively with root mean square propagation. The RMSE, MAE, and MAPE values for the prediction of throughput (TP) are found to be 0.77787, 0.4792 and 1.61 respectively with adaptive moment estimation, and 2.7087, 1.4076 and 7.1559 respectively with root mean square propagation. Thus, the experimental results show that the LSTM-based Time Series Forecasting framework for web services recommendation performs better than previous methods.
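The RMSE, MAE, and MAPE figures quoted in the abstract follow the standard definitions of these regression metrics; a minimal sketch (the sample values are hypothetical, not the paper's QoS data) is:

```python
import math

def rmse(actual, predicted):
    """Root mean squared error."""
    return math.sqrt(sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual))

def mae(actual, predicted):
    """Mean absolute error."""
    return sum(abs(a - p) for a, p in zip(actual, predicted)) / len(actual)

def mape(actual, predicted):
    """Mean absolute percentage error; assumes no zero actual values."""
    return 100 * sum(abs((a - p) / a) for a, p in zip(actual, predicted)) / len(actual)
```

Because MAPE is scale-free while RMSE and MAE are in the units of the target, the abstract's much larger MAPE values (e.g. 27.9934 for RT) are consistent with small absolute errors on small response times.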

Journal ArticleDOI
TL;DR: It was observed that the properties of the video can be exploited to evaluate the performance of the annotator, and the crossing of a pedestrian is presented as an example for analysis.
Abstract: The results of the analysis of automatic annotations of real video surveillance sequences are presented. Annotations are generated for the frames of surveillance sequences from the parking lot of a university campus. The purpose of the analysis is to evaluate the quality of the descriptions and to examine the correspondence between the semantic content of the images and the corresponding annotation. To perform the tests, a fixed camera was placed in the campus parking lot and video sequences of about 20 minutes were obtained; each frame was then annotated individually, and a text repository with all the annotations was formed. It was observed that the properties of the video can be exploited to evaluate the performance of the annotator, and the crossing of a pedestrian is presented as an example for analysis.