
Showing papers in "Journal of Computer Science & Systems Biology in 2015"


Journal ArticleDOI
TL;DR: The limitations of the existing theoretical research are discussed and several directions to improve research in optimizing chemotherapy treatment planning using real protocol treatments defined by the oncologist are provided.
Abstract: Tumors in humans are believed to be caused by a sequence of genetic abnormalities. Understanding these sequences is important for improving cancer treatment. Biologists have uncovered some of the most basic mechanisms by which normal stem cells develop into cancerous tumors, and these biological theories can be transformed into mathematical models. In this paper, we review the mathematical models applied to the optimal design of cancer chemotherapy. Chemotherapy is a complex treatment mode that requires balancing the benefit of treating tumors against the adverse toxic side effects caused by the anti-cancer drugs, and some methods of computational optimization have proven useful in helping to strike the right balance. The purpose of this paper is to discuss the limitations of the existing theoretical research and to provide several directions for improving research on optimizing chemotherapy treatment planning using the real treatment protocols defined by oncologists.
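The trade-off the review describes can be made concrete with a toy log-kill model. This is an illustrative sketch, not any model from the paper: the growth rate, kill rate, and the two dosing schedules are all assumptions, and cumulative dose is used as a crude proxy for toxicity.

```python
def simulate_tumor(dose_schedule, r=0.1, delta=0.08, n0=1e9, dt=1.0):
    """Euler integration of a log-kill tumor model:
    dN/dt = r*N - delta*u(t)*N, with u(t) the dose rate."""
    n = n0
    for u in dose_schedule:
        n += dt * (r * n - delta * u * n)
        n = max(n, 0.0)
    return n

# Two hypothetical 30-day schedules with the same cumulative dose
# (a crude proxy for total toxicity):
continuous = [2.0] * 30             # low dose every day
pulsed = ([12.0] + [0.0] * 5) * 5   # high dose every sixth day

n_cont = simulate_tumor(continuous)
n_puls = simulate_tumor(pulsed)
```

With these particular (assumed) parameters the two equally toxic schedules leave very different tumor burdens, which is exactly the kind of difference optimization-based treatment planning tries to exploit.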

55 citations


Journal ArticleDOI
TL;DR: A new R package is presented that has been designed to explore populations’ LD patterns in order to reconstruct two key parameters of human evolution: the effective population size and the divergence time between populations.
Abstract: Objective: Estimating the effective population size (Ne) is crucial to understanding how populations evolved, expanded or shrunk. One possible approach is to compare DNA diversity, so as to obtain an average Ne over many past generations; however, as population sizes change over time, another possibility is to describe this change. Linkage Disequilibrium (LD) patterns contain information about these changes and, whenever a large number of densely linked markers are available, can be used to monitor fluctuating population size through time. Here, we present a new R package, NeON, designed to explore populations’ LD patterns to reconstruct two key parameters of human evolution: the effective population size and the divergence time between populations. Methods: NeON starts with binary or pairwise-LD PLINK files and allows the user (a) to assign a genetic map position using HapMap (NCBI release 36 or 37), (b) to calculate the effective population size over time, exploiting the relationship between Ne and the average squared correlation coefficient of LD (r2LD) within predefined recombination distance categories, and (c) to calculate the confidence interval about Ne based on the observed variation of the estimator across chromosomes; the outputs of the functions are both numerical and graphical. The package also offers the possibility to estimate the divergence time between populations, given the Ne values calculated from the within-population LD data and a matrix of between-population FST. These routines can be adapted to any species for which genetic map positions are available. Results and Conclusion: The functions contained in the R package NeON provide reliable estimates of the effective population size of human chromosomes from LD patterns of genome-wide SNP data, as shown here for the populations contained in the CEPH panel. The NeON package accommodates variable numbers of individuals, populations and genetic markers, allowing the analysis to run on standard personal computers.
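The r2LD-based step in (b) rests on a standard inversion of the Sved (1971)-style expectation E[r^2] ≈ 1/(1 + 4·Ne·c) + 1/n. The sketch below is a minimal Python illustration of that inversion for one recombination-distance bin, not the NeON code itself (NeON is an R package), and the bin values are hypothetical.

```python
def estimate_ne(r2_mean, c_morgans, n_chromosomes):
    """Invert E[r^2] ~= 1/(1 + 4*Ne*c) + 1/n to estimate Ne
    for one recombination-distance bin (c in Morgans); the 1/n
    term is the usual sample-size correction."""
    r2_adj = r2_mean - 1.0 / n_chromosomes   # remove sampling noise
    return (1.0 / r2_adj - 1.0) / (4.0 * c_morgans)

# Hypothetical bin: mean r^2 of 0.06 at c = 0.005 Morgans, 100 chromosomes
ne = estimate_ne(0.06, 0.005, 100)
```

Because distant-marker pairs reflect recent Ne and close pairs reflect ancient Ne, applying this to several distance bins yields the Ne-through-time profile the package reports.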

34 citations


Journal ArticleDOI
TL;DR: The goal of the analysis is to indicate the strengths and weaknesses of current mathematical models of dengue fever that should assist future researchers in forming models that accurately measure the variables they are studying that affect the spread and progression of the disease.
Abstract: In this paper, we compare and contrast five models of dengue fever, a serious illness that affects tropical and subtropical areas around the world. We evaluate each model under different scenarios and identify the strengths and weaknesses of each. The goal of our analysis is to highlight the strengths and weaknesses of current mathematical models of dengue fever, so as to assist future researchers in building models that accurately capture the variables affecting the spread and progression of the disease.
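For readers unfamiliar with this model family, a minimal host-vector compartment model of the general kind being compared is sketched below. This is not one of the five reviewed models; the structure is Ross-Macdonald style and all rates are illustrative assumptions.

```python
def dengue_step(state, params, dt=0.1):
    """One Euler step of a minimal host-vector model:
    susceptible/infected humans (sh, ih) and mosquitoes (sv, iv),
    all as fractions of their respective populations."""
    sh, ih, sv, iv = state
    beta_h, beta_v, gamma, mu_v = params
    new_h = beta_h * sh * iv                  # mosquito -> human transmission
    new_v = beta_v * sv * ih                  # human -> mosquito transmission
    dsh = -new_h
    dih = new_h - gamma * ih                  # humans recover at rate gamma
    dsv = mu_v * (sv + iv) - new_v - mu_v * sv  # births balance vector deaths
    div = new_v - mu_v * iv
    return (sh + dt*dsh, ih + dt*dih, sv + dt*dsv, iv + dt*div)

state = (0.99, 0.01, 1.0, 0.0)    # 1% of humans initially infected
params = (0.5, 0.4, 0.2, 0.1)     # illustrative transmission/recovery rates
for _ in range(1000):             # simulate 100 time units
    state = dengue_step(state, params)
```

The reviewed models extend this skeleton in different directions (exposed classes, multiple serotypes, seasonality), which is precisely what makes their scenario-by-scenario comparison informative.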

24 citations


Journal ArticleDOI
TL;DR: This study investigated and evaluated a model framework, for diagnostic decisions, based on a cognitive process and a Semantic Web approach, to represent the problem of urinary tract infection diagnosis.
Abstract: Decision making in the field of medical diagnosis involves a degree of uncertainty and a need to take into account the patient’s clinical parameters, the context of illness and the medical knowledge of the physician, to determine and confirm the diagnosis. In this study, we investigated and evaluated a model framework for diagnostic decisions based on a cognitive process and a Semantic Web approach. Fuzzy cognitive maps (FCMs) are a cognitive process applying the main features of fuzzy logic and neural processing to situations involving imprecise and uncertain descriptions, in a way similar to intuitive human reasoning. We explored the use of this method for modeling clinical practice guidelines, using Semantic Web tools to implement and formalize these guidelines. Twenty-five clinical concepts and 13 diagnostic concepts were identified to represent the problem of urinary tract infection diagnosis.
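The FCM update rule this framework relies on can be sketched in a few lines. The concept names and weights below form a hypothetical toy map, not the 25 clinical and 13 diagnostic concepts of the study.

```python
import math

def fcm_step(activations, weights, lam=1.0):
    """One update of a fuzzy cognitive map:
    A_i(t+1) = sigmoid( A_i(t) + sum_j w_ji * A_j(t) )."""
    n = len(activations)
    out = []
    for i in range(n):
        s = activations[i] + sum(weights[j][i] * activations[j]
                                 for j in range(n) if j != i)
        out.append(1.0 / (1.0 + math.exp(-lam * s)))
    return out

# Toy map: concept 0 = fever, 1 = positive urine culture, 2 = UTI diagnosis;
# weights[j][i] is the causal influence of concept j on concept i.
W = [[0.0, 0.0, 0.4],
     [0.0, 0.0, 0.8],
     [0.0, 0.0, 0.0]]
A = [1.0, 1.0, 0.0]        # observed findings on, diagnosis initially off
for _ in range(20):        # iterate until activations settle
    A = fcm_step(A, W)
```

After iteration the diagnosis concept settles at a high activation, which is how an FCM turns weighted clinical findings into a graded diagnostic conclusion.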

21 citations


Journal ArticleDOI
TL;DR: The CVA demonstrates the ability to accurately threshold the acuity of normal eyes compared with chart acuity under conditions of contrast, luminance and fixation times simulating normal photopic and mesopic activities and appears to provide the clinician rapidly with a better understanding of visual function under a variety of day and evening tasks.
Abstract: Purpose: The Central Vision Analyzer (CVA) is an interactive, automated computer device that rapidly thresholds central acuity under conditions mimicking customary photopic and mesopic activities. In sequence, the CVA may test up to 6 environments; in this series it was used to test 3 mesopic environments (98% and 50% MC against a 1.6 cd/m2 background, 25% MC against 5 cd/m2), then 3 glare environments (98%, 10% and 8% MC against a 200 cd/m2 background). This report compares the CVA thresholded acuity with that measured using standard letter acuity charts. Methods: In 481 normal eyes, acuity was measured with best spectacle and contact lens refraction using both the CVA and 0.1 logMAR ETDRS charts presenting similar contrast and luminance. In addition, for 162 emmetropic eyes, acuity was tested with a 15% MC chart placed outdoors with the sun overhead and with the sun at 15° off-axis, and compared with the CVA thresholded acuity at 10% and 8% MC presented in a darkened room. Results: All CVA modules demonstrated high Pearson correlation coefficients (r=0.51 to r=0.94, p<0.01) and Bland-Altman statistical similarity with the acuity measured from similar-contrast charts. The same agreement held between the acuity measured with a 15% MC letter chart with the sun overhead and the CVA 10% glare module, and between the acuity with a 15% MC chart viewed with the sun 15° off-axis and the CVA 8% glare module presented in the darkened room. Conclusions: The CVA accurately thresholds the acuity of normal eyes, compared with chart acuity, under conditions of contrast, luminance and fixation time simulating normal photopic and mesopic activities, and appears to rapidly provide the clinician with a better understanding of visual function across a variety of day and evening tasks.
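The two agreement statistics used here, Pearson correlation and Bland-Altman limits of agreement, can be computed as below. The acuity values are hypothetical, not data from the study.

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two paired samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def bland_altman(x, y):
    """Bias and 95% limits of agreement between two measures."""
    diffs = [a - b for a, b in zip(x, y)]
    n = len(diffs)
    bias = sum(diffs) / n
    sd = math.sqrt(sum((d - bias) ** 2 for d in diffs) / (n - 1))
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Hypothetical logMAR acuities: device vs. chart for five eyes
cva   = [0.00, 0.10, 0.22, 0.30, 0.40]
chart = [0.02, 0.08, 0.20, 0.32, 0.38]
r = pearson_r(cva, chart)
bias, lo, hi = bland_altman(cva, chart)
```

Correlation alone can be high even when two instruments disagree systematically, which is why the study reports Bland-Altman similarity alongside r.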

14 citations


Journal ArticleDOI
TL;DR: The article introduces a mathematical model of the autonomic nervous system and physiological systems (developed by Dr IG Grakov), which has been incorporated into a commercialised technology based upon the concepts outlined.
Abstract: Despite the immense advances of medical research, there remains a fundamental theoretical deficit regarding how the body is able to maintain its physiological stability. In other words, what is the mechanism that regulates homeostasis and allostasis, and what is the relationship between the genotype and the influence of the environment (phenotype)? Despite the immense publicity given to the huge recent increases in the most common lifestyle-related ailments (e.g. diabetes, obesity, cardiovascular disease, cancers, Alzheimer’s disease), most people in the world still have relatively normal body weight and remain free from medical problems during their lifetimes, at least until their advancing years, when the body is increasingly less able to maintain its normal regulated function. This article considers whether it is now possible to understand the nature, structure and function of this regulatory mechanism in far more detail than has hitherto been possible. This neuroregulatory mechanism involves the influence of light upon brain function and hence upon the autonomic nervous system and physiological systems. Particular emphasis is placed upon the regulation of blood glucose and how acidity plays a significant role in diabetes etiology. Finally, the article introduces a mathematical model of the autonomic nervous system and physiological systems (developed by Dr IG Grakov), which has been incorporated into a commercialised technology based upon the concepts outlined.

14 citations


Journal ArticleDOI
TL;DR: A full description of a high-level language for solving arbitrary problems in heterogeneous, distributed and dynamic worlds, both physical and virtual, will be presented and discussed.
Abstract: A full description of a high-level language for solving arbitrary problems in heterogeneous, distributed and dynamic worlds, both physical and virtual, is presented and discussed. The language is based on holistic and gestalt principles, representing semantic-level solutions in distributed environments in the form of self-evolving patterns. These patterns cover, grasp and match the distributed spaces while creating in them active distributed infrastructures that operate in a global-goal-driven manner without traditional central resources. Given the ample existing publications on the approach, the paper shows only elementary examples using the Spatial Grasp Language, along with the key ideas of its networked implementation.

11 citations


Journal ArticleDOI
TL;DR: This research paper examines Scapy, a security-testing tool based on the Python language; it lists some vital commands with explanations, examples and uses for security testing, and gives a brief, understandable description of the tool’s usage.
Abstract: Security testing is an essential method for any information system: it is used to detect flaws in the security measures that protect the system’s data from unauthorized access. Passing security testing does not, however, ensure that no flaws are present in the system. Python is a widely used, rapidly growing programming language. This research paper examines the tool named Scapy, which is based on the Python language; it lists some vital commands, with explanations, examples and their uses in security testing. Being an introductory paper, it tries to give a brief description and understandable usage of this security-testing tool.
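In Scapy itself, a probe is typically crafted by stacking layers, e.g. IP(dst="192.0.2.1")/TCP(dport=80, flags="S"), with header fields and checksums filled in automatically. The stdlib sketch below illustrates part of what that stacking hides, building a minimal IPv4 header with its RFC 791 checksum; it is an illustration of the underlying concept, not Scapy code.

```python
import struct

def ipv4_checksum(header: bytes) -> int:
    """RFC 791 ones'-complement checksum, the field Scapy fills in
    automatically when it builds an IP() layer."""
    if len(header) % 2:
        header += b"\x00"
    total = 0
    for i in range(0, len(header), 2):
        total += (header[i] << 8) | header[i + 1]
        total = (total & 0xFFFF) + (total >> 16)   # end-around carry
    return ~total & 0xFFFF

def build_ipv4_header(src: str, dst: str, proto: int = 6) -> bytes:
    """Minimal 20-byte IPv4 header (version 4, IHL 5, TTL 64)."""
    src_b = bytes(int(o) for o in src.split("."))
    dst_b = bytes(int(o) for o in dst.split("."))
    # version/IHL, TOS, total length, ID, flags/frag, TTL, proto,
    # checksum placeholder, source, destination
    hdr = struct.pack("!BBHHHBBH4s4s",
                      0x45, 0, 20, 0, 0, 64, proto, 0, src_b, dst_b)
    csum = ipv4_checksum(hdr)
    return hdr[:10] + struct.pack("!H", csum) + hdr[12:]

hdr = build_ipv4_header("192.0.2.1", "192.0.2.2")
```

A receiver verifies such a header by summing it checksum-and-all: a valid header sums to zero, which is the property the final assertion in a test would check.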

11 citations


Journal ArticleDOI
TL;DR: A novel mechanism to isolate adult rat bone marrow stem cells and test their ability to treat diabetic rats by MSC engraftment is proposed; the results revealed that diabetes adversely affected the blood picture, pancreas and kidney functions, as well as the immune system as represented by TNF.
Abstract: Background: Diabetes mellitus, and specifically type 2 diabetes, is a multifactorial metabolic disorder that affects more than 348 million people worldwide and is considered one of the main causes of mortality. The pathway of type 2 diabetes is characterized both by insulin resistance in muscle, fat and liver and by a relative failure of the pancreatic β cell. Despite extensive study, no unifying hypothesis exists to explain these defects or their proper treatment. The key goal of diabetes treatment is to prevent complications, because over time diabetes can damage the heart, blood vessels, eyes, kidneys and nerves. There is therefore a great need to develop new and effective therapies that treat diabetic complications early, before they cause irreparable tissue damage. Recent experimental evidence supports the idea that diabetic patients may greatly benefit from cell-based therapies, which include the use of adult stem and/or progenitor cells in disease therapy, and in particular the therapeutic effect of bone marrow stem cells in treating type 2 diabetic patients. Motivation: Mesenchymal stem cells (MSCs) are adherent, pluripotent, non-hematopoietic progenitor cells. Human bone marrow MSCs have been shown to inhibit antigen-dependent CD4+ and CD8+ T-cell proliferation in an allogeneic setting in vitro. They have been found to reside in most organs and tissues investigated to date, including bone marrow, adipose tissue, dermis, muscular tissue, hair follicles, the periodontal ligament and the placenta. In addition, recent studies have shown that adult bone marrow stem cells can differentiate into several cell types, such as blood, liver, lung, skin, muscle, neuron and insulin-producing cells. This has motivated us to explore their therapeutic potential in treating type 2 diabetes mellitus.
This article proposes a novel mechanism to isolate adult rat bone marrow stem cells and test their ability to treat diabetic rats; the main focus of this research is the therapeutic effect of mesenchymal stem cells on diabetic rats. Experimental methods, data and results: A total of 40 healthy male Sprague Dawley (S.D.) rats, 12-14 weeks old and weighing 180-250 g, were used in the experimental study for the isolation and transplantation of stem cells. Rats were obtained from the Animal House of the Nile Center for Experimental Researches, Mansoura, Egypt. Animals were housed in separate metal cages, and fresh, clean drinking water was supplied ad libitum through a specific nipple. The animals were anesthetized with halothane, and the skin was sterilized with 70% ethyl alcohol before incision. The femurs and tibiae were carefully dissected free of adherent soft tissue and placed in a sterilized beaker containing 70% ethyl alcohol for 1-2 min. The bones were then washed in a Petri dish containing 1X phosphate-buffered saline (PBS) (Hyclone, USA) and taken to a laminar air flow cabinet (Unilab biological safety cabinet class II, China) to extract the bone marrow; the two ends of the bones were removed using sterile scissors. Conclusion and future work: The results obtained revealed that diabetes adversely affected the blood picture, pancreas and kidney functions, as well as the immune system as represented by TNF. Treating the diabetic rats by MSC engraftment improved the tested parameters toward the normal state. Nevertheless, wide application of stem cell engrafting still needs further investigation.

11 citations


Journal ArticleDOI
TL;DR: Some conceptual ideas for developing a worker manager for representing worker and machine performance in manufacturing systems simulation models are presented.
Abstract: This paper deals with worker and machine performance measurement in a manufacturing system. Discrete-event simulation is particularly useful for modeling queuing systems such as this one and for evaluating throughput time. An ARENA simulation model of the case company (Ethiopia Plastic Factory) was developed, verified and validated to determine the daily production and potential problem areas at the various demand levels. The results show that throughput time in the existing system is poor because of the bottlenecks and waiting times identified. Some proposals are therefore drawn from the results to raise awareness of the importance of considering human and machine performance variation in such simulation models, and the paper presents some conceptual ideas for developing a worker manager for representing worker and machine performance in manufacturing-system simulation models. Worker and machine performance can be raised by identifying the bottleneck areas, and the company was recommended to address the main bottlenecks identified.
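The core of a discrete-event model like the ARENA one can be sketched as a single-server event loop. The arrival and service times below are hypothetical, chosen so that service is slower than arrivals, which is exactly the bottleneck condition under which waiting times grow.

```python
def simulate_line(arrivals, service_times):
    """Single-server discrete-event sketch: each job waits until the
    machine frees up, then is processed; returns per-job wait times
    and the makespan (total time to clear the line)."""
    free_at = 0.0
    waits = []
    for arrive, service in zip(arrivals, service_times):
        start = max(arrive, free_at)      # queue if the machine is busy
        waits.append(start - arrive)
        free_at = start + service
    return waits, free_at

# Hypothetical shift: jobs arrive every 2 min, the bottleneck takes 3 min each
arrivals = [2 * i for i in range(5)]       # arrivals at t = 0, 2, 4, 6, 8
waits, makespan = simulate_line(arrivals, [3] * 5)
```

The steadily increasing wait times (0, 1, 2, 3, 4 minutes) show how a single slow station degrades throughput, and adding variation in worker or machine performance to the service times is precisely what the proposed worker manager would model.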

10 citations


Journal ArticleDOI
TL;DR: High-throughput, massively parallel sequencing approaches that identify specific virus-host relations without relying on cultivation, such as single-cell genomic sequencing (SCGS), have become critical; they have advanced our capacity to understand the genomic and transcriptomic diversity that arises during virus-host interactions in individual uncultured hosts.
Abstract: Viruses are the most abundant biological entities and infectious agents present in almost every ecosystem on the planet. Yet our understanding of how viral-mediated gene transfer and metabolic reprogramming influence the evolutionary history of their hosts and microbial communities remains poor at best. At the same time, identifying and modeling the community dynamics of viruses from the environment through conventional plaque assays is complicated because less than one percent of microbial hosts have been cultivated in vitro. Computational methods in metagenomics and phage isolation techniques have limitations in identifying the uncultured hosts of most viruses. Moreover, the model system-based measurements derived from such techniques rarely reflect the network properties of natural microbial communities. To address these problems, development of high-throughput, massively parallel sequencing approaches that do not rely on cultivation to identify specific virus-host relations such as single-cell genomic sequencing (SCGS) has become critical. SCGS has advanced our capacity to understand the genomic and transcriptomic diversity that occurs during viral-host interactions in an individual uncultured host. Here, we review the major technological and biological challenges and the breakthroughs achieved, describe the remaining challenges, and provide a glimpse into the recent advancements.

Journal ArticleDOI
TL;DR: A simple and robust edge information detection process for unsharp masking sharpening that is used to produce a clear edge structure and vivid details of high-resolution images with minimal ringing and jaggy artifacts is designed.
Abstract: This paper proposes the edge-directed unsharp masking sharpening (EDUMS) method and demonstrates the method’s ability to provide single-image resolution enhancement. The EDUMS method implements an efficient single-image, super-resolution process by synergizing edge-directed information and unsharp masking sharpening. This study designs a simple and robust edge information detection process for unsharp masking sharpening that is used to produce a clear edge structure and vivid details of high-resolution images with minimal ringing and jaggy artifacts. This method is a non-iterative process that is computationally efficient for use in a real-time integrated circuit design implementation.
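The unsharp-masking half of EDUMS follows the classic formula sharpened = original + amount × (original − blurred). The 1-D sketch below shows the overshoot and undershoot this creates at an edge, which is what visually steepens it; the edge-directed detection step that the paper adds to suppress ringing and jaggies is omitted here.

```python
def unsharp_mask_1d(signal, amount=1.0):
    """Classic unsharp masking on a 1-D scanline:
    sharpened = original + amount * (original - blurred),
    using a 3-tap box blur as the low-pass filter."""
    n = len(signal)
    blurred = []
    for i in range(n):
        lo, hi = max(0, i - 1), min(n, i + 2)
        window = signal[lo:hi]
        blurred.append(sum(window) / len(window))
    return [s + amount * (s - b) for s, b in zip(signal, blurred)]

# A soft edge: sharpening steepens it, with under/overshoot on each side
edge = [0, 0, 0, 1, 2, 3, 3, 3]
sharp = unsharp_mask_1d(edge)
```

The undershoot before the edge and overshoot after it are exactly the ringing artifacts that an edge-directed variant must keep minimal while preserving the steepened transition.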

Journal ArticleDOI
TL;DR: A new recursive screening incremental ranking machine learning paradigm to empower the desired classifiers, especially for imbalanced training data, to create suitable data-driven clusters without prior information and later reduce the dimensionality of large biomedical data sets is proposed.
Abstract: Currently, owing to the availability of massive biomedical data on each individual, both healthcare and life science are becoming data-driven. The input attributes are structured/unstructured data posing many challenges, including sparse binary attributes with imbalanced outcomes, non-uniquely distributed structure and high dimensionality, which hamper efforts to make clinical decisions in practice. In recent decades, considerable effort has been made toward overcoming most of these challenges, but significant improvements are still needed in this field, especially after integrating both omics and phenotype data for future personalized medicine. These challenges motivate us to use state-of-the-art big data analytics and large-scale machine learning frameworks to confront most of the challenges and provide proper clinical solutions that assist physicians in clinical practice at the bedside, subsequently providing high-quality care while reducing its cost. This research proposes a new recursive screening incremental ranking machine-learning paradigm that empowers the desired classifiers, especially for imbalanced training data, creates suitable data-driven clusters without prior information, and then reduces the dimensionality of large biomedical data sets. The new framework combines many binary attributes based on two criteria: (i) the minimum power value for each combination and (ii) the classification power of such a combination. Next, these sets of combined attributes are reviewed by physicians to select the set of rules that make clinical sense, and the result is subsequently used to empower the desired healthcare event (binary or multinomial target) at the bedside. After empowering the target class categories, we select the k most significant risk drivers with a suitable volume of data and high correlation to the desired outcome, and then establish the proper segmentation using AND-OR associative relationships.
Finally, we use the propensity score to handle the imbalanced data, and we build machine-learning/data-mining predictive models based on functional networks’ maximum-likelihood and Newton-Raphson iterative matrix computation, expediting the implementations within high-performance computing platforms such as scalable MapReduce HDFS, Spark MLlib and Google Sibyl. Comparative studies with both simulated and real-life biomedical databases are carried out for identifying specific biomedical and healthcare outcomes, such as asthma, breast cancer, gene-mutation selection and genomic association studies for specific complex diseases. Results show that the proposed incremental learning scheme gives the new classifier reliable and stable performance. The new classifier outperforms existing predictive models in both outcome quality and execution time, especially with imbalanced, sparse, high-dimensional biomedical big data. We recommend that future work be conducted using real-life integrated clinico-genomic big data with genome-wide association studies for future personalized medicine.
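The combination-screening criteria (i) and (ii) might be sketched as below. This is only a guess at the flavor of the procedure, not the paper's algorithm: it scores AND-combinations of binary attributes on a toy data set, where "classification power" is taken as the better of a rule and its negation.

```python
from itertools import combinations

def classification_power(column, labels):
    """Score a binary column against binary labels: the accuracy of
    the better of the rule and its negation."""
    hits = sum(1 for c, y in zip(column, labels) if c == y)
    return max(hits, len(labels) - hits) / len(labels)

def rank_and_combinations(rows, labels, k=2, min_power=0.6):
    """Screen all AND-combinations of k binary attributes and keep
    those whose classification power clears the threshold, best first."""
    n_attr = len(rows[0])
    scored = []
    for combo in combinations(range(n_attr), k):
        col = [int(all(r[i] for i in combo)) for r in rows]
        p = classification_power(col, labels)
        if p >= min_power:
            scored.append((p, combo))
    return sorted(scored, reverse=True)

# Toy data: 6 patients, 3 binary attributes, binary outcome
rows = [(1, 1, 0), (1, 1, 1), (0, 1, 0), (0, 0, 1), (1, 0, 0), (0, 1, 1)]
labels = [1, 1, 0, 0, 0, 0]
ranked = rank_and_combinations(rows, labels)
```

Handing a ranked list like this to physicians, who then keep only combinations that make clinical sense, mirrors the rule-selection step the abstract describes.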

Journal ArticleDOI
TL;DR: In this article, the authors investigated the global competitiveness scenario in chemical manufacturing industries in line with lean thinking; they found that the chemical manufacturing industry consumes considerable energy and resources with high technology and that, as a result, the development of chemical manufacturing in Ethiopia is still at an initial stage in contributing to the country’s economy in employment, GDP and self-sustained growth.
Abstract: This study attempts to investigate the global competitiveness scenario in chemical manufacturing industries in line with lean thinking. The objective of this article is also to explain the concept of lean manufacturing, its philosophy, its various tools and techniques, the benefits of lean implementation, and the barriers to lean implementation in the chemical industries of Ethiopia. Competitiveness in the chemical manufacturing industry is not easily attainable, since the nature of chemical products is variable, and there are challenges that hinder the development of chemical industries’ production. The chemical industries’ contribution to the economy is very small compared with that of agriculture, because the industrial group in general is at an infant stage. One method that can drive the development of the chemical industries and increase their competitiveness is a lean-thinking revolution. The major challenge identified in this study is that chemical manufacturing industries have not diffused lean thinking into their processes. The chemical industries consume considerable energy and resources with high technology; as a result, the development of the chemical manufacturing industry in Ethiopia is still at an initial stage in contributing to the country’s economy in employment, GDP and self-sustained growth. Applying lean philosophy in companies requires greater commitment from industrial-sector workers and the kaizen institute.

Journal ArticleDOI
TL;DR: The second half of the 19th century was a period in which the interest of biologists was shifting from studying organisms, organs, or tissues to their component cells, which caused increased interest in this cell organelle.
Abstract: The second half of the 19th century was a period in which the interest of biologists was shifting from studying organisms, organs, or tissues to their component cells [3]. Schleiden and Schwann had previously shown that all tissues have a cellular origin and that both animals and plants consist of the same fundamental units of organization, cells, which interact to give rise to complex organisms [4]. In 1866, Haeckel proposed that the nucleus contains the factors responsible for the transmission of hereditary traits [5]. That caused increased interest in this cell organelle.

Journal ArticleDOI
TL;DR: A critical review of whether criticisms of the Human Brain Project are justified, which highlights fundamental limitations of current diagnostic methods which will severely constrain the ability of researchers to reach a successful conclusion.
Abstract: In an effort to unravel the workings of the human brain, the European Commission established the Human Brain Project (2013); however, it has been subject to intense scrutiny, criticism and political infighting. Accordingly, the aim of this article is to provide a critical review of whether, at least from the technical perspective, such criticisms are justified. The author is in the privileged position of being able to do so because he heads a company that is in the unique position of commercialising a technology, developed by its Technical Director Dr. Igor Grakov (launched in its first commercial version in 1999), based upon a precise, sophisticated and detailed mathematical model of the autonomic nervous system: cognitive input can be used as the data set for a neural simulation technique and/or mathematical model that links brain function to the regulated function of the body’s physiological and/or functional systems, the organs that are essential components of these systems, and the pathological changes to cellular and molecular biology that are the consequence of systemic dysfunction. In other words, the Strannik technology developed by Dr. Grakov meets several of the key aims and objectives of the Human Brain Project.
This article highlights (i) fundamental limitations of current diagnostic methods, which will severely constrain the ability of researchers to reach a successful conclusion; (ii) fundamental limitations of medical research that ignores basic principles of chemistry and widely recognised (but unfashionable) phenomena; (iii) the assumption that there is a healthy and/or ‘unhealthy’ brain, although clearly the health of the brain is influenced by stress, nutritional deficits and emergent pathologies; (iv) a questioning of the need for ‘big data’ rather than investment in the basic research needed to identify the fundamental scientific principles; and (v) criticism of the way in which contemporary biomedical research overlooks the complex and holistic way in which the body functions.

Journal ArticleDOI
TL;DR: Preliminary results from the training steps are as follows: the silent “jankens” were correctly discriminated, and the silent “season” HMMs worked well, suggesting that this scheme might be applied to discriminating between all pairs of hiragana.
Abstract: We propose a new scheme for speaker-dependent silent speech recognition systems (SSRSs) using both single-trial scalp-recorded electroencephalograms (EEGs) and speech signals measured during overtly and covertly speaking “janken” and “season” in Japanese. This scheme consists of two phases. The learning phase specifies a Kalman filter using spectrograms of the speech signals and independent components (ICs) of the EEGs during actual speech, the equivalent current dipole source localization (ECDL) solutions of which were located mainly at Broca’s area. In the case of the “season” task, the speech signals were transformed into vowel and consonant sequences, and these relationships were learned by a hidden Markov model (HMM) with Gaussian mixture densities. The decoding phase predicts spectrograms for the silent “janken” and “season” using the Kalman filter with the EEGs recorded during silent speech. For the silent “season”, the predicted spectrograms were input to the HMMs, and which “season” was silently spoken was determined by the maximal log-likelihood among the HMMs. Our preliminary results from the training steps are as follows: the silent “jankens” were correctly discriminated, and the silent “season” HMMs worked well, suggesting that this scheme might be applied to discriminating between all pairs of hiragana.
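The Kalman predict/update cycle at the heart of the decoding phase can be illustrated with a scalar filter. The paper's filter maps EEG ICs to spectrograms; this is only a minimal random-walk sketch with made-up observations and assumed noise variances.

```python
def kalman_1d(observations, q=0.01, r=0.5):
    """Scalar Kalman filter with a random-walk state model:
    predict  x' = x,          p' = p + q
    update   k  = p'/(p'+r),  x = x' + k*(z - x'),  p = (1-k)*p'."""
    x, p = observations[0], 1.0
    estimates = [x]
    for z in observations[1:]:
        p = p + q                  # predict: uncertainty grows by q
        k = p / (p + r)            # Kalman gain
        x = x + k * (z - x)        # update with the new observation z
        p = (1 - k) * p
        estimates.append(x)
    return estimates

# Noisy samples around a constant value of 1.0
obs = [1.2, 0.8, 1.1, 0.9, 1.05, 0.95, 1.0, 1.02]
est = kalman_1d(obs)
```

The filter's smoothed track converges toward the underlying value; in the SSRS scheme the analogous multivariate filter tracks a spectrogram trajectory from EEG features instead of a scalar from noisy samples.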

Journal ArticleDOI
TL;DR: This work analyzes the taxonomic composition of three metagenome communities from the soil sample mainly rain forest, temperate broadleaf and temperate grassland via MG-RAST to explain the potential taxonomic diversity of nitrate reducing bacteria with the dominance of Bradyrhizobium japonicum from soil sample.
Abstract: The nitrogen cycle is one of the most important nutrient cycles in terrestrial ecosystems. Environmental bacteria maintain the global nitrogen cycle by metabolizing organic as well as inorganic nitrogen compounds. Because most microbial taxa are thought to be unculturable outside their natural environment, microbial diversity remained poorly described until about a decade ago; the metagenomic techniques developed since then have greatly extended our knowledge of microbial genetic diversity. The objective of this work was to analyze the taxonomic composition of three soil metagenome communities, from rain forest, temperate broadleaf forest and temperate grassland, via MG-RAST. Using the M5NR database, the sequences of known metabolic function were tested against both SEED subsystems and KEGG metabolic pathways using a maximum e-value of 1e-5. Although a number of metabolic functions could have been tested, we probed particularly for enzymes related to components of the nitrogen cycle. The results describe the potential taxonomic diversity of nitrate-reducing bacteria, with a dominance of Bradyrhizobium japonicum in the soil samples.

Journal ArticleDOI
TL;DR: Certain techniques and algorithms are reviewed for dealing with the puzzle of missing values, so as to achieve a pure data set (i.e., a data set without missing values), which in turn leads to correct and accurate decision making.
Abstract: Data mining has pushed the realm of information technology beyond predictable limits and, within just a few years of its inception, has left a permanent mark on decision making. Missing values are one of the major factors that can render the results obtained from a data set by data mining techniques useless. There can be numerous reasons for missing values in a data set, such as human error or hardware malfunction. It is imperative to tackle the labyrinth of missing values before applying any data mining technique; otherwise, the information extracted from a data set containing missing values will lead down the path of wrong decisions. Several techniques are available to control the issue of missing values, such as replacing a missing value with (a) the closest value, (b) the mean value or (c) the median value. Some algorithms, such as k-nearest neighbour, are also used to deal with the problem of missing values. This paper reviews these techniques and algorithms for dealing with the puzzle of missing values, whereby a pure data set (i.e., a data set without missing values) is achieved, which in turn leads to correct and accurate decision making.
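Two of the listed remedies, mean substitution and k-nearest-neighbour imputation, can be sketched in a few lines. The data below are a toy example for illustration only.

```python
def mean_impute(column):
    """Replace missing entries (None) with the column mean."""
    present = [v for v in column if v is not None]
    mean = sum(present) / len(present)
    return [mean if v is None else v for v in column]

def knn_impute(rows, target_row, target_col, k=2):
    """Fill rows[target_row][target_col] with the mean of that column
    over the k nearest complete rows (Euclidean distance on the
    remaining columns)."""
    def dist(a, b):
        return sum((x - y) ** 2 for i, (x, y) in enumerate(zip(a, b))
                   if i != target_col) ** 0.5
    incomplete = rows[target_row]
    complete = [r for j, r in enumerate(rows)
                if j != target_row and None not in r]
    neighbours = sorted(complete, key=lambda r: dist(r, incomplete))[:k]
    return sum(r[target_col] for r in neighbours) / k

# Toy data set: the last row is missing its second attribute
data = [[1.0, 2.0], [1.1, 2.2], [5.0, 9.0], [1.05, None]]
filled = knn_impute(data, 3, 1)
```

Unlike the global mean, the kNN estimate here (2.1) ignores the distant outlier row, which is why neighbour-based imputation usually distorts downstream decisions less than blanket mean substitution.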

Journal ArticleDOI
TL;DR: There exists an inverse relationship between the rotation angle and the phase difference that significantly reduces the occurrence of singularity and breakage failures of the mechanism, which is consistent with biological evidence of coupled oscillators that enable the nervous system to control the complex musculoskeletal system using a few simple parameters, frequently represented by phase and rotation in a torus state space.
Abstract: Legged robots have the potential to serve as walking machines on irregular ground. The Theo Jansen mechanism, an eleven-bar linkage, reproduces a smooth locomotion pattern as a gait. Parallel motions have been widely used in heavy machinery and have recently been highlighted in models of biological motion. The closed-loop linkage simply provides a designed end-effector trajectory, but that trajectory is considered to be less modifiable due to the singularity problem. In the present study, the singularity of the modified Theo Jansen mechanism was addressed by introducing a parametric orbit as a new freedom point in the joint center, and its kinematics and dynamics were analyzed using multibody dynamics (MBD). The extendability of the mechanism, in terms of flexibility of the gait trajectory, was clearly demonstrated in numerical simulation, providing new functional gait trajectories controlled by two parameters that change the shape of the parametric oval in the joint center. Systematic determinant analyses of how broken trajectories were generated, depending on four parameters (the horizontal and vertical amplitudes and rotation angle of the joint center movement, and its phase difference with the driving link), revealed morphological changes of the generated trajectories in the phase-rotation-amplitude parameter space. Thus the extension capability of the Theo Jansen mechanism was validated not only for smooth walking but also for jumping, climbing and running-like motions.
Regarding the ways of control, the present results indicated that there exists an inverse relationship between the rotation angle and the phase difference that significantly reduces the occurrence of singularity and breakage failures of the mechanism, which is consistent with biological evidence of coupled oscillators that enable the nervous system to control the complex musculoskeletal system using a few simple parameters, frequently represented by phase and rotation in a torus state space.
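The abstract describes the joint-center orbit as a parametric oval shaped by horizontal and vertical amplitudes, a rotation angle, and a phase difference with the driving link. A minimal sketch of such an orbit follows; the function name and exact parameterization are illustrative assumptions, not the authors' implementation.

```python
import math

def joint_center_orbit(ax, ay, rot, phase, n=360):
    """Trace one cycle of a parametric oval for the extra joint center.
    ax, ay : horizontal / vertical amplitudes of the oval
    rot    : rotation angle of the oval in the plane (radians)
    phase  : phase difference with respect to the driving link (radians)
    Returns a list of (x, y) points."""
    points = []
    for k in range(n):
        t = 2 * math.pi * k / n + phase      # crank angle shifted by the phase difference
        x, y = ax * math.cos(t), ay * math.sin(t)
        # rotate the oval by rot about the origin
        points.append((x * math.cos(rot) - y * math.sin(rot),
                       x * math.sin(rot) + y * math.cos(rot)))
    return points

orbit = joint_center_orbit(ax=2.0, ay=1.0, rot=math.pi / 6, phase=0.0)
```

Sweeping the four parameters over such orbits is the kind of systematic exploration the determinant analyses describe.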

Journal ArticleDOI
TL;DR: In a very weak isomorphism (not similitude) between brain and neural networks, an artificial form of short-term memory and of acknowledgement in Elman neural networks is proposed.
Abstract: Artificial neural networks are often understood as a good way to imitate mind through the web structure of neurons in the brain, but the very high complexity of the human brain prevents us from considering neural networks good models of the human mind; nevertheless, neural networks are good devices for parallel computation. The difference between feed-forward and feedback neural networks is introduced; the Hopfield network and the multi-layer Perceptron are discussed. In a very weak isomorphism (not similitude) between brain and neural networks, an artificial form of short-term memory and of acknowledgement in Elman neural networks is proposed.
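The Elman architecture mentioned above feeds the previous hidden activations back as a "context" input, which is what gives the network its simple short-term memory. A minimal sketch of one forward step, with weights and layer sizes that are purely illustrative:

```python
import math

def elman_step(x, context, W_in, W_ctx, W_out):
    """One forward step of a minimal Elman network: the previous hidden
    activations (the context) are fed back alongside the new input."""
    hidden = [math.tanh(sum(w * xi for w, xi in zip(W_in[j], x)) +
                        sum(w * ci for w, ci in zip(W_ctx[j], context)))
              for j in range(len(W_in))]
    output = [sum(w * h for w, h in zip(row, hidden)) for row in W_out]
    return output, hidden  # the hidden layer becomes the next context

# two inputs, two hidden units, one output; fixed illustrative weights
W_in = [[0.5, -0.3], [0.2, 0.8]]
W_ctx = [[0.1, 0.0], [0.0, 0.1]]
W_out = [[1.0, -1.0]]

context = [0.0, 0.0]
for x in ([1.0, 0.0], [0.0, 1.0]):
    y, context = elman_step(x, context, W_in, W_ctx, W_out)
```

Because the second step sees the first step's hidden state through W_ctx, the same input can yield different outputs depending on history, unlike a pure feed-forward network.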

Journal ArticleDOI
TL;DR: The present comparison among AD, FTD and older controls roughly supported the previous findings for synchronization likelihood values, unweighted graphs, clustering coefficient and characteristic path length, but there were reverse differences in small-worldness.
Abstract: In order to develop an easier and less expensive tool than MEG and fMRI for diagnosing neurological diseases such as AD and FTD and checking their prognoses in the future, we constructed scalp-recorded-EEG-based brain functional connectivity networks (BFCNs), and preliminarily compared Alzheimer's disease (AD) and frontotemporal dementia (FTD) patients, their prognoses, and control subjects using the BFCNs. The present comparison among AD, FTD and older controls roughly supported previous findings for synchronization likelihood values, unweighted graphs, clustering coefficient and characteristic path length. However, there were reverse differences in small-worldness. For AD and FTD, there were several electrode positions with higher betweenness centrality than in the older controls. It might be suggested that betweenness centrality should be investigated in more detail.
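Two of the graph measures the study compares, the clustering coefficient and the characteristic path length, can be sketched for an unweighted BFCN as follows; the toy adjacency structure is hypothetical, standing in for an electrode-level network.

```python
from collections import deque

def clustering_coefficient(adj):
    """Average local clustering coefficient of an unweighted graph
    given as an adjacency dict {node: set(neighbours)}."""
    coeffs = []
    for v, nbrs in adj.items():
        k = len(nbrs)
        if k < 2:
            coeffs.append(0.0)
            continue
        # count edges among the neighbours of v (each pair once)
        links = sum(1 for a in nbrs for b in nbrs if a < b and b in adj[a])
        coeffs.append(2.0 * links / (k * (k - 1)))
    return sum(coeffs) / len(coeffs)

def characteristic_path_length(adj):
    """Mean shortest-path length over all reachable node pairs (BFS per node)."""
    total, pairs = 0, 0
    for src in adj:
        dist = {src: 0}
        queue = deque([src])
        while queue:
            u = queue.popleft()
            for w in adj[u]:
                if w not in dist:
                    dist[w] = dist[u] + 1
                    queue.append(w)
        for v, d in dist.items():
            if v != src:
                total, pairs = total + d, pairs + 1
    return total / pairs

# toy 4-node network standing in for an electrode-level BFCN
adj = {1: {2, 3}, 2: {1, 3}, 3: {1, 2, 4}, 4: {3}}
```

Small-worldness is then typically taken as the ratio of these two quantities, each normalized against random-graph values, and betweenness centrality (e.g. Brandes' algorithm) ranks nodes by how many shortest paths pass through them.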

Journal ArticleDOI
TL;DR: Mapping of equi-luminous fluxes from a point source through a freeform lens for illumination with different luminance intensities is studied and the adequacy of the proposed methodology is proved by simulation.
Abstract: Mapping of equi-luminous fluxes from a point source through a freeform lens for illumination with different luminance intensities is studied. The freeform surface is associated with the mapping and the grids designed on a target plane. Relocation of the target grids designed for uniform illumination is proposed by comparing the grids with reference and desired ones for interpolation, to achieve the desired illumination distribution. Target-grid relocation for rectangular illumination with vertical, horizontal, tilted, circular, ring and rectangular cut-offs for two different luminance intensities is demonstrated. The target grids after relocation are smooth in distribution, and a lens with a smooth freeform surface is achieved. The adequacy of the proposed methodology is verified by simulation.

Journal ArticleDOI
TL;DR: Cognitive radio networks and software-defined radio technology are discussed as means to improve the performance of wireless communication, with MIMO technology used to reduce interference and false alarms and improve the probability of detection, in order to build the future generation of wireless communication.
Abstract: Spectrum is a natural resource that forms the communication path and is the main entity of wireless communication. At present, managing spectrum scarcity is a big problem because of the exponential growth of wireless users and devices. The spectrum has two types of users: the primary (licensed) user and the secondary (unlicensed) user. A secondary user does not need a license to operate, whereas a primary user must be licensed to operate in a fixed geographical area for a fixed time duration. Much of the time, the spectrum is underutilized. Since the spectrum comprises a limited set of frequencies, we cannot add frequencies, but we can try to improve spectrum efficiency with the help of different technologies and methodologies. Cognitive radio, software-defined radio and spectrum sharing techniques play an important role in improving spectrum efficiency, while interference, false alarms and low detection probability are problems that can be reduced by MIMO technology. In this paper we discuss cognitive radio networks and software-defined radio technology to improve the performance of wireless communication, and MIMO technology to reduce interference and false alarms and improve the probability of detection, in order to build the future generation of wireless communication.

Journal ArticleDOI
TL;DR: In this paper, a comparison of the latest data-centric protocols on different performance metrics that affect the application or the wireless sensor network is presented, in which the authors compare the performance of different protocols and their upgrades.
Abstract: Many changes have been made in sensor fields, which differ across applications, and many more are under development. Research is under way to develop sensor nodes that consume little power and are low in cost. In this paper we overview different data-centric protocols and their upgrades, and then compare some of the latest data-centric protocols on different performance metrics that affect the application or the wireless sensor network.

Journal ArticleDOI
TL;DR: The paper examines the basic visualization of a farm in 3D form for the Jigawa State farming environment, the design of the 3D farmer's visualization technology and the implementation of the model.
Abstract: A web-based farm management system is the collection of processes and information used to manage the various phases of a farm, accessible on the Internet. The paper aims to improve the way information is disseminated to farmers, which is needed for the development of agriculture and to improve the lives of farmers. The paper examines the basic visualization of a farm in 3D form for the Jigawa State farming environment, that is, how the plants of the area can be viewed virtually. The paper begins by examining some bodies that work on agricultural activities, considering how they have improved agricultural technology and how the flow of information on agricultural activities can be improved through modern channels for sustainable agriculture and rural development. It then looks at the design of the 3D farmer's visualization technology and the implementation of the model.

Journal ArticleDOI
TL;DR: These proposed methods are found to perform well without the use of over-sampling techniques and multiple-fold cross validation and are recommended for use in classification of imbalanced biomedical data applications.
Abstract: Background: Highly complex and computationally intensive methods based on the Synthetic Minority Over-sampling Technique (SMOTE) and, more recently, Learning Vector Quantization SMOTE (LVQ-SMOTE) have been proposed for classification problems on imbalanced biomedical data. This work presents a much simpler approach that is not computationally intensive and competes well with existing approaches. It uses principal component analysis (PCA) to generate a pseudo-variable as a linear combination of the features. From this one pseudo-variable, several classification methods are developed that classify directly based on very simple statistics. One method, the Mean Method (MM), classifies cases based on closeness to the means of the two classes from training data sets. When the number of features is very large, a feature reduction (FR) procedure is proposed to reduce misclassifications. In cases where the means of the two classes are similar but their spreads about the means differ, the Spread Method (SM) is proposed. A unique feature of this method is that one can vary the accuracy of classification between the two classes by changing the width of the window used for allocating cases. These proposed methods are found to perform well without the use of over-sampling techniques and multiple-fold cross validation. Results: The MM, or the MM with FR, was compared directly to recently published results for LVQ-SMOTE on six (6) data sets and gave better or much better results in every case, as measured by adding the percentage of true positives to the percentage of true negatives. The SM was compared with LVQ-SMOTE on two (2) data sets, and operating window widths were obtained that gave much better results for the SM than for LVQ-SMOTE. Conclusion: Given the simplicity, strengths, and performance of the proposed approach in comparison to current methods, these methods and procedures are recommended for use in classification of imbalanced biomedical data applications.
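The Mean Method described above can be sketched as: project each case onto the first principal component to obtain the single pseudo-variable, then assign new cases to the class whose mean training score is closest. A minimal sketch assuming NumPy; the function names and toy data are illustrative, not the authors' code.

```python
import numpy as np

def mean_method_fit(X, y):
    """Project training cases onto the first principal component and
    record each class's mean score on that pseudo-variable."""
    center = X.mean(axis=0)
    Xc = X - center
    vals, vecs = np.linalg.eigh(np.cov(Xc, rowvar=False))
    pc1 = vecs[:, np.argmax(vals)]          # eigenvector of the largest eigenvalue
    scores = Xc @ pc1
    class_means = {c: scores[y == c].mean() for c in np.unique(y)}
    return center, pc1, class_means

def mean_method_predict(Xnew, center, pc1, class_means):
    """Assign each new case to the class whose mean score is closest."""
    scores = (Xnew - center) @ pc1
    classes = list(class_means)
    return [min(classes, key=lambda c: abs(s - class_means[c])) for s in scores]

# toy imbalanced data: class 0 clustered low, class 1 clustered high
X = np.array([[0.0, 0.1], [0.2, 0.0], [0.1, 0.2],
              [2.0, 2.1], [2.2, 1.9]])
y = np.array([0, 0, 0, 1, 1])
model = mean_method_fit(X, y)
pred = mean_method_predict(np.array([[0.1, 0.1], [2.1, 2.0]]), *model)
```

The Spread Method's window would replace the nearest-mean rule with an interval test on the same one-dimensional scores, which is how the accuracy trade-off between the two classes becomes tunable.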

Journal ArticleDOI
TL;DR: Homology modelling and docking studies are used to understand and analyze the residues important for agonist and antagonist binding, and to identify the residue variations that may play an important role in ligand binding.
Abstract: α2-adrenergic receptors play a key role in the regulation of the sympathetic system, neurotransmitter release, blood pressure and intraocular pressure. Although α2-adrenergic receptors mediate a number of physiological functions in vivo and have great therapeutic potential, the absence of a crystal structure for the α2-adrenergic receptor subtypes is a major hindrance to drug design efforts. The available drugs lack subtype specificity (α2a, α2b and α2c), leading to unwanted side effects. We used homology modelling and docking studies to understand and analyze the residues important for agonist and antagonist binding. We have also analyzed the binding site volume and the residue variations that may play an important role in ligand binding. Through our modelling and docking studies we have identified residues that would be critical for subtype specificity and may help in the development of future subtype-selective drugs.

Journal ArticleDOI
TL;DR: The goal of the Com-Com is to present an open environment of applied computing services and to encourage researchers across Europe to participate in its extending, interchanging or improving to strengthen the EU research base.
Abstract: The Com-Com is a user-centric environment that provides researchers with tailored frameworks to support their computational needs. It addresses existing and new user communities in both research and commercial fields. Technically, the Com-Com provides dynamic infrastructure, dynamic service provision and user-driven application development across domains. End users can easily create new applications for solving their computational tasks by combining ready-made interdisciplinary services available in the networked Repository and incorporating their own functionalities. Since services may be offered by different enterprises and communicate over the network, they provide an advanced distributed computing infrastructure for both intra- and cross-enterprise application integration and collaboration. The approach at hand potentially opens a door to rapidly creating applied software for Exaflops HPC and Exabytes of data. At present the Com-Com can support application development in life science, environmental, engineering, physics, computational chemistry, medicine and data mining research by collecting existing web services developed by different research communities (EGI, Flatworld, FI-WARE, SAP, ESRC). The goal of the Com-Com is to present an open environment of applied computing services and to encourage researchers across Europe to participate in extending, interchanging or improving it. The Com-Com stack offers the flexibility to form dynamic teams, dynamic collections of cross-domain services, and dynamic infrastructure on which to run the services. The Com-Com may enhance the capabilities of research organizations that lack resources in both human and technical terms by better integrating research across international scientific communities, with the final aim of strengthening the EU research base.

Journal ArticleDOI
TL;DR: The three-dimensional structure of Hsp60_Pb18 is a highly reliable model that displays a remarkable similarity with the three distinct domains of the GroEL subunit (equatorial, intermediate and apical), and this study thus contributes to knowledge of the fungus's biology and to the determination of vaccine and/or therapeutic targets.
Abstract: Paracoccidioides brasiliensis is a dimorphic fungus that causes paracoccidioidomycosis (PCM), an endemic mycosis in Latin America. PCM is a chronic, granulomatous, and progressive disease with a wide clinical spectrum of manifestations. Although it is known that the main clinical forms are consequences of the fungus-host interaction, the immune response in PCM is still an open field. The antigenic complexity of P. brasiliensis and the role of most of its antigens have been poorly explored, thereby decreasing the chances of finding vaccine and therapeutic targets for PCM. Recent results from our group have shown that the heat shock protein of 60 kDa from P. brasiliensis strain 18 (Hsp60_Pb18) has a possible detrimental effect on the course of PCM. Here, we show the molecular model of Hsp60_Pb18 that was generated with the program MODELLER9V8. Model validation was performed using PROCHECK and VERIFY3D. According to the results, the three-dimensional structure of Hsp60_Pb18 is a highly reliable model, which displays a remarkable similarity with the three distinct domains of the GroEL subunit: equatorial, intermediate and apical. This study will provide direction and continuity for studies characterizing Hsp60_Pb18 and its domains, and thus contribute to knowledge of the fungus's biology and to the determination of vaccine and/or therapeutic targets.