
Showing papers in "International Journal of Applied Mathematics and Computer Science in 2010"


Journal ArticleDOI
TL;DR: A new thinning algorithm is proposed that presents interesting properties in terms of processing quality and algorithm clarity, enriched with examples, which makes it useful and versatile for a variety of applications.
Abstract: This paper addresses three closely related aspects. First, it presents the state of the art in the area of thinning methodologies, describing the general ideas of the most significant algorithms and comparing them. Second, it proposes a new thinning algorithm that exhibits interesting properties in terms of processing quality and algorithm clarity, enriched with examples. Third, the work considers parallelization issues for intrinsically sequential thinning algorithms. The main advantage of the suggested algorithm is its universality, which makes it useful and versatile for a variety of applications.

174 citations
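The paper's own algorithm is not reproduced in this abstract; as background on what one iterative thinning pass looks like, here is a sketch of the classic Zhang-Suen algorithm, one of the standard methods such surveys compare, operating on a binary NumPy image:

```python
# Classic Zhang-Suen thinning on a binary image (0/1 numpy array).
# Background illustration only -- not the algorithm proposed in the paper.
import numpy as np

def zhang_suen_thinning(img):
    img = img.copy().astype(np.uint8)
    changed = True
    while changed:
        changed = False
        for step in (0, 1):
            to_delete = []
            for r in range(1, img.shape[0] - 1):
                for c in range(1, img.shape[1] - 1):
                    if img[r, c] != 1:
                        continue
                    # 8-neighbours P2..P9, clockwise starting from north
                    p = [img[r-1, c], img[r-1, c+1], img[r, c+1], img[r+1, c+1],
                         img[r+1, c], img[r+1, c-1], img[r, c-1], img[r-1, c-1]]
                    b = int(sum(p))  # number of object neighbours
                    # number of 0 -> 1 transitions around the circle
                    a = sum(p[i] == 0 and p[(i + 1) % 8] == 1 for i in range(8))
                    if step == 0:
                        cond = p[0]*p[2]*p[4] == 0 and p[2]*p[4]*p[6] == 0
                    else:
                        cond = p[0]*p[2]*p[6] == 0 and p[0]*p[4]*p[6] == 0
                    if 2 <= b <= 6 and a == 1 and cond:
                        to_delete.append((r, c))
            for r, c in to_delete:   # delete marked pixels in one batch
                img[r, c] = 0
                changed = True
    return img
```

The two sub-iterations with batched deletion are what make the method parallelizable, which is the property the paper's third aspect examines.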


Journal ArticleDOI
TL;DR: The use of set-membership methods in fault diagnosis (FD) and fault tolerant control (FTC) using a deterministic unknown-but-bounded description of noise and parametric uncertainty (interval models).
Abstract: This paper reviews the use of set-membership methods in fault diagnosis (FD) and fault tolerant control (FTC). Set-membership methods use a deterministic unknown-but-bounded description of noise and parametric uncertainty (interval models). These methods aim at checking the consistency between observed and predicted behaviour by using simple sets to approximate the exact set of possible behaviours (in the parameter or the state space). When an inconsistency is detected between the measured and predicted behaviours obtained using a faultless system model, a fault can be indicated. Otherwise, nothing can be stated. The same principle can be used to identify interval models for fault detection and to develop methods for fault tolerance evaluation. Finally, some real applications are used to illustrate the usefulness and performance of set-membership methods for FD and FTC.

157 citations


Journal ArticleDOI
TL;DR: A new approach to the design and development of complex rule-based systems for control and decision support, based on ALSV(FD) logic, is described, which ensures high density and transparency of visual knowledge representation.
Abstract: This paper describes a new approach, the HeKatE methodology, to the design and development of complex rule-based systems for control and decision support. The main paradigm for rule representation, namely, eXtended Tabular Trees (XTT), ensures high density and transparency of visual knowledge representation. Contrary to traditional, flat rule-based systems, the XTT approach is focused on groups of similar rules rather than on single rules. Such groups form decision tables which are connected into a network for inference. Efficient inference is assured as only the rules necessary for achieving the goal, identified by the context of inference and partial order among tables, are fired. In the paper a new version of the language, XTT2, is presented. It is based on ALSV(FD) logic, also described in the paper. Another distinctive feature of the presented approach is a top-down design methodology based on successive refinement of the project. It starts with Attribute Relationship Diagram (ARD) development. Such a diagram represents relationships between system variables. Based on the ARD scheme, XTT tables and links between them are generated. The tables are filled with expert-provided constraints on values of the attributes. The code for rule representation is generated in a human-readable representation called HMR and interpreted with a provided inference engine called HeaRT. A set of software tools supporting the visual design and development stages is described in brief.

70 citations


Journal ArticleDOI
TL;DR: The present work assumes that the paths of the moving sources are unknown, but they are sufficiently smooth to be approximated by combinations of given basis functions, which makes it possible to reduce the source detection and estimation problem to that of parameter identification.
Abstract: In a typical moving contaminating source identification problem, after some type of biological or chemical contamination has occurred, there is a developing cloud of dangerous or toxic material. In order to detect and localize the contamination source, a sensor network can be used. Up to now, however, approaches aiming at guaranteeing a dense region coverage or satisfactory network connectivity have dominated this line of research and abstracted away from the mathematical description of the physical processes underlying the observed phenomena. The present work aims at bridging this gap and meeting the needs created in the context of the source identification problem. We assume that the paths of the moving sources are unknown, but they are sufficiently smooth to be approximated by combinations of given basis functions. This parametrization makes it possible to reduce the source detection and estimation problem to that of parameter identification. In order to estimate the source and medium parameters, the maximum-likelihood estimator is used. Based on a scalar measure of performance defined on the Fisher information matrix related to the unknown parameters, which is commonly used in optimum experimental design theory, the problem is formulated as an optimal control one. From a practical point of view, it is desirable to have the computations dynamic data driven, i.e., the current measurements from the mobile sensors must serve as a basis for the update of parameter estimates and these, in turn, can be used to correct the sensor movements. In the proposed research, an attempt will also be made at applying a nonlinear model-predictive-control-like approach to attack this issue.

61 citations


Journal ArticleDOI
TL;DR: Simulation results show the effectiveness of the sliding mode control method in the presence of nonlinearities and disturbances, and high performance of the proposed controller.
Abstract: The main goal here is to design a proper and efficient controller for a ship autopilot based on the sliding mode control method. A hydrodynamic numerical model of CyberShip II including wave effects is applied to simulate the ship autopilot system using time domain analysis. To compare the results, similar research was conducted with a PD controller adapted to the autopilot system. The differences in simulation results between the two controllers are analyzed by a cost function composed of a heading angle error and rudder deflection, either in calm water or in waves. Simulation results show the effectiveness of the method in the presence of nonlinearities and disturbances, and the high performance of the proposed controller.

53 citations


Journal ArticleDOI
TL;DR: In a series of simulations, a progressive reduction of the permissible search space for the leg movements leads to the evolution of effective gait patterns that could be used to control a real hexapod walking robot.
Abstract: A biologically inspired approach to feasible gait learning for a hexapod robot. The objective of this paper is to develop feasible gait patterns that could be used to control a real hexapod walking robot. These gaits should enable the fastest movement that is possible with the given robot's mechanics and drives on a flat terrain. Biological inspirations are commonly used in the design of walking robots and their control algorithms. However, legged robots differ significantly from their biological counterparts. Hence we believe that gait patterns should be learned using the robot or its simulation model rather than copied from insect behaviour. However, as we have found tabula rasa learning ineffective in this case due to the large and complicated search space, we adopt a different strategy: in a series of simulations we show how a progressive reduction of the permissible search space for the leg movements leads to the evolution of effective gait patterns. This strategy enables the evolutionary algorithm to discover proper leg co-ordination rules for a hexapod robot, using only simple dependencies between the states of the legs and a simple fitness function. The dependencies used are inspired by typical insect behaviour, although we show that all the introduced rules also emerge naturally in the evolved gait patterns. Finally, the gaits evolved in simulations are shown to be effective in experiments on a real walking robot.

47 citations


Journal ArticleDOI
TL;DR: This work forms a model of anycast connections and proposes a heuristic algorithm based on the Lagrangean relaxation aimed to optimize jointly routes for anycast and unicast connections in connection-oriented networks.
Abstract: Our discussion in this article centers around various issues related to the use of anycasting in connection-oriented computer networks. Anycast is defined as a one-to-one-of-many transmission to deliver a packet to one of many hosts. Anycasting can be applied if the same content is replicated over many locations in the network. Examples of network techniques that apply anycasting include Content Delivery Networks (CDNs), the Domain Name Service (DNS), and Peer-to-Peer (P2P) systems. The role of anycasting is growing concurrently with the popularity of electronic music, movies, and other content required by Internet users. In this work we focus on the optimization of anycast flows in connection-oriented networks. We formulate a model of anycast connections and then propose a heuristic algorithm based on Lagrangean relaxation, aimed at jointly optimizing routes for anycast and unicast connections. Results of numerical experiments are presented and evaluated. Finally, we briefly analyze problems related to anycasting in dynamic routing and multi-layer networks.

45 citations


Journal ArticleDOI
TL;DR: It is shown that it is possible to find computationally attainable properties of p-CT images which allow pointing out the cancerous lesion and can be used in computer aided medical diagnosis.
Abstract: The analysis of prostate images is one of the most complex tasks in medical image interpretation. It is sometimes very difficult to detect early prostate cancer using currently available diagnostic methods. However, an examination based on perfusion computed tomography (p-CT) may avoid such problems even in particularly difficult cases. The lack of computational methods useful in the interpretation of perfusion prostate images makes this examination unreliable, because the diagnosis depends mainly on the doctor's individual opinion and experience. In this paper some methods of automatic analysis of prostate perfusion tomographic images are presented and discussed. Some of the presented methods are adopted from papers of other researchers, and some are elaborated by the authors. This presentation of the methods and algorithms is important, but it is not the main focus of the paper. The main purpose of this study is computational (deterministic and independent) verification of the usefulness of the p-CT technique in a specific case. It shows that it is possible to find computationally attainable properties of p-CT images which allow pointing out the cancerous lesion and can be used in computer aided medical diagnosis.

43 citations


Journal ArticleDOI
TL;DR: In this paper, the Internet Shopping Optimization Problem (ISOP) is defined in a formal way and a proof of its strong NP-hardness is provided and polynomial time algorithms for special cases of the problem are described.
Abstract: A high number of Internet shops makes it difficult for a customer to review manually all the available offers and select optimal outlets for shopping. A partial solution to the problem is brought by price comparators which produce price rankings from collected offers. However, their possibilities are limited to a comparison of offers for a single product requested by the customer. The issue we investigate in this paper is a multiple-item multiple-shop optimization problem, in which total expenses of a customer to buy a given set of items should be minimized over all available offers. In this paper, the Internet Shopping Optimization Problem (ISOP) is defined in a formal way and a proof of its strong NP-hardness is provided. We also describe polynomial time algorithms for special cases of the problem.

43 citations
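The optimization problem described above can be made concrete with a toy exhaustive solver. The flat once-per-shop delivery fee and the prices below are illustrative assumptions, and exhaustive search is viable only for tiny instances, consistent with the strong NP-hardness the paper proves for the general problem:

```python
# Exhaustive solver for a toy Internet Shopping Optimization Problem instance:
# each product is bought in exactly one shop; a shop charges its flat delivery
# fee once if anything is bought there. All numbers below are made up.
from itertools import product as cartesian

def isop_bruteforce(prices, delivery):
    """prices[shop][item] = item price, delivery[shop] = flat delivery fee."""
    n_shops, n_items = len(prices), len(prices[0])
    best_cost, best_assign = float("inf"), None
    # enumerate every assignment of items to shops
    for assign in cartesian(range(n_shops), repeat=n_items):
        used = set(assign)  # shops we actually order from
        cost = sum(delivery[s] for s in used) + \
               sum(prices[assign[i]][i] for i in range(n_items))
        if cost < best_cost:
            best_cost, best_assign = cost, assign
    return best_cost, best_assign
```

The delivery fees are what couple the items together: without them, each item could be bought greedily at its cheapest shop, which matches the polynomially solvable special cases the paper describes.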


Journal ArticleDOI
TL;DR: A recently developed two-purpose supervisory predictive set-point optimizer is discussed, designed to perform simultaneously two goals: economic optimization and constraints handling for the underlying unconstrained direct controllers.
Abstract: The subject of this paper is to discuss selected effective known and novel structures for advanced process control and optimization. The role and techniques of model-based predictive control (MPC) in a supervisory (advanced) control layer are first briefly discussed. The emphasis is put on algorithm efficiency for nonlinear processes and on treating uncertainty in process models, with two solutions presented: the structure of nonlinear prediction and successive linearizations for nonlinear control, and a novel algorithm based on fast model selection to cope with process uncertainty. Issues of cooperation between MPC algorithms and on-line steady-state set-point optimization are next discussed, including integrated approaches. Finally, a recently developed two-purpose supervisory predictive set-point optimizer is discussed, designed to perform simultaneously two goals: economic optimization and constraints handling for the underlying unconstrained direct controllers.

43 citations


Journal ArticleDOI
TL;DR: The aim of this paper is to provide a gradient clustering algorithm in its complete form, suitable for direct use without requiring a deeper statistical knowledge, and the values of all parameters are effectively calculated using optimizing procedures.
Abstract: The aim of this paper is to provide a gradient clustering algorithm in its complete form, suitable for direct use without requiring a deeper statistical knowledge. The values of all parameters are effectively calculated using optimizing procedures. Moreover, an illustrative analysis of the meaning of particular parameters is shown, followed by the effects resulting from possible modifications with respect to their primarily assigned optimal values. The proposed algorithm does not demand strict assumptions regarding the desired number of clusters, which allows the obtained number to be better suited to a real data structure. Moreover, a feature specific to it is the possibility to influence the proportion between the number of clusters in areas where data elements are dense as opposed to their sparse regions. Finally, the algorithm, by detecting one-element clusters, allows identifying atypical elements, which enables their elimination or possible assignment to bigger clusters, thus increasing the homogeneity of the data set.
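Gradient clustering of this kind ascends the gradient of a kernel density estimate; the following 1-D mean-shift-style sketch illustrates the mechanism. The fixed bandwidth h is a stand-in for the parameters the paper computes automatically, and the mode-merging rule is a simplification:

```python
# Minimal 1-D mean-shift sketch: each point climbs the kernel density
# estimate's gradient; points converging to the same mode form a cluster.
import numpy as np

def mean_shift_1d(data, h=0.5, iters=100):
    x = np.asarray(data, dtype=float).copy()   # points being shifted
    pts = np.asarray(data, dtype=float)        # fixed sample defining the KDE
    for _ in range(iters):
        # Gaussian kernel weights of every sample point around every x
        w = np.exp(-0.5 * ((x[:, None] - pts[None, :]) / h) ** 2)
        x = (w * pts).sum(axis=1) / w.sum(axis=1)  # weighted mean = shift step
    # group converged positions that landed within h of each other
    modes, labels = [], []
    for xi in x:
        for k, m in enumerate(modes):
            if abs(xi - m) < h:
                labels.append(k)
                break
        else:
            modes.append(xi)
            labels.append(len(modes) - 1)
    return np.array(labels), modes
```

Because the number of modes is discovered rather than prescribed, no cluster count needs to be fixed in advance, which mirrors the property the abstract emphasizes.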

Journal ArticleDOI
TL;DR: This paper discusses two closely related problems arising in environmental monitoring: the source localization problem, linked to the question of how one can find an unknown "contamination source", and an associated sensor placement problem: where should sensors be placed so that they provide the necessary "adequate data" for that?
Abstract: In this paper we discuss two closely related problems arising in environmental monitoring. The first is the source localization problem, linked to the question: How can one find an unknown "contamination source"? The second is an associated sensor placement problem: Where should we place sensors that are capable of providing the necessary "adequate data" for that? Our approach is based on some concepts and ideas developed in the mathematical control theory of partial differential equations.

Journal ArticleDOI
TL;DR: A new approach to fuzzy classification in the case of missing data is presented and theorems which allow determining the structure of the rough-neuro-fuzzy classifier are given.
Abstract: The paper presents a new approach to fuzzy classification in the case of missing data. Rough-fuzzy sets are incorporated into logical-type neuro-fuzzy structures and a rough-neuro-fuzzy classifier is derived. Theorems which allow determining the structure of the rough-neuro-fuzzy classifier are given. Several experiments illustrating the performance of the rough-neuro-fuzzy classifier working in the case of missing features are described.

Journal ArticleDOI
TL;DR: It is shown that if there is a common quadratic Lyapunov function for the stability of all subsystems, then the switched system is stable under arbitrary switching.
Abstract: We establish a unified approach to stability analysis for switched linear descriptor systems under arbitrary switching in both continuous-time and discrete-time domains. The approach is based on common quadratic Lyapunov functions incorporated with linear matrix inequalities (LMIs). We show that if there is a common quadratic Lyapunov function for the stability of all subsystems, then the switched system is stable under arbitrary switching. The analysis results are natural extensions of the existing results for switched linear state space systems.
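The quoted condition can be sketched for the ordinary (non-descriptor) state-space special case: verifying a candidate P reduces to eigenvalue tests, while actually finding P is the LMI feasibility problem the abstract mentions (and the descriptor case additionally involves the E matrices, omitted here):

```python
# Sketch: verify that a given P is a common quadratic Lyapunov function for a
# family of continuous-time subsystem matrices, i.e. P > 0 (positive definite)
# and Ai^T P + P Ai < 0 (negative definite) for every subsystem Ai.
import numpy as np

def is_common_qlf(P, A_list, tol=1e-9):
    # P must be symmetric positive definite
    if np.any(np.linalg.eigvalsh((P + P.T) / 2) <= tol):
        return False
    for A in A_list:
        Q = A.T @ P + P @ A
        # each Lyapunov inequality must hold strictly
        if np.any(np.linalg.eigvalsh((Q + Q.T) / 2) >= -tol):
            return False
    return True
```

If such a P exists, V(x) = xᵀPx decreases along every subsystem's trajectories, so no switching sequence can destabilize the system, which is the intuition behind the stated result.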

Journal ArticleDOI
TL;DR: In the discussed suboptimal MPC algorithm the neural multi-model is linearised on-line and, as a result, the future control policy is found by solving a quadratic programming problem.
Abstract: This paper discusses neural multi-models based on Multi Layer Perceptron (MLP) networks and a computationally efficient nonlinear Model Predictive Control (MPC) algorithm which uses such models. Thanks to the nature of the model, it calculates future predictions without using previous predictions. This means that, unlike the classical Nonlinear Auto Regressive with eXternal input (NARX) model, the multi-model is not used recurrently in MPC, and the prediction error is not propagated. In order to avoid nonlinear optimisation, in the discussed suboptimal MPC algorithm the neural multi-model is linearised on-line and, as a result, the future control policy is found by solving a quadratic programming problem.
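The on-line linearisation idea can be illustrated on a deliberately minimal case: a scalar, static, unconstrained model, for which the quadratic program collapses to a closed-form step. The paper's algorithm handles dynamic multi-model predictions, horizons, and constraints; the tanh plant below is purely an assumed toy:

```python
# Successive linearisation sketch: at each step the nonlinear model is
# linearised at the current input and the quadratic cost
#   min (yref - y)^2 + lam * du^2
# is solved in closed form -- the trivial, unconstrained special case of the
# quadratic programming problem mentioned in the abstract.
import math

def mpc_step(f, dfdu, u, yref, lam=0.1):
    g = dfdu(u)                               # local gain from linearisation
    du = g * (yref - f(u)) / (g * g + lam)    # minimiser of the quadratic cost
    return u + du

# toy nonlinear plant model (assumed for illustration)
f = lambda u: math.tanh(u)
dfdu = lambda u: 1.0 - math.tanh(u) ** 2

u = 0.0
for _ in range(30):                           # successive linearisations
    u = mpc_step(f, dfdu, u, yref=0.5)
```

Each iteration solves only a quadratic subproblem at the current linearisation point, which is what keeps the scheme computationally cheap compared with full nonlinear optimisation.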

Journal ArticleDOI
TL;DR: The results of experiments prove that the proposed solution is more effective, in terms of the usage of programmable device resources, compared with the classical ones.
Abstract: The paper presents one concept of decomposition methods dedicated to PAL-based CPLDs. The proposed approach is an alternative to the classical one, which is based on two-level minimization of separate single-output functions. The key idea of the algorithm is to search for free blocks that could be implemented in PAL-based logic blocks containing a limited number of product terms. In order to better exploit the number of product terms, two-stage decomposition and BDD-based decomposition are to be used. In BDD-based decomposition methods, functions are represented by Reduced Ordered Binary Decision Diagrams (ROBDDs). The results of experiments prove that the proposed solution is more effective, in terms of the usage of programmable device resources, compared with the classical ones.

Journal ArticleDOI
TL;DR: This paper provides a uniform and detailed formal basis for control flow graphs combining known definitions and results with new aspects, and defines statement coverage and branch coverage such that coverage notions correspond to node coverage, and edge coverage, respectively.
Abstract: The control flow of programs can be represented by directed graphs. In this paper we provide a uniform and detailed formal basis for control flow graphs, combining known definitions and results with new aspects. Two graph reductions are defined using only syntactical information about the graphs, but no semantical information about the represented programs. We prove some properties of reduced graphs and also of the paths in reduced graphs. Based on graphs, we define statement coverage and branch coverage such that these coverage notions correspond to node coverage and edge coverage, respectively.
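The stated correspondence (statement coverage as node coverage, branch coverage as edge coverage) can be sketched directly on a graph given the executed paths; the tiny if-then-else graph in the usage below is an assumed example:

```python
# Node coverage (= statement coverage) and edge coverage (= branch coverage)
# of a control flow graph, computed from a set of executed paths.
def coverage(edges, paths):
    nodes = {n for e in edges for n in e}
    seen_nodes = {n for p in paths for n in p}
    # consecutive node pairs along each executed path are the traversed edges
    seen_edges = {(p[i], p[i + 1]) for p in paths for i in range(len(p) - 1)}
    return (len(seen_nodes & nodes) / len(nodes),
            len(seen_edges & set(edges)) / len(edges))
```

For a diamond-shaped graph with a branch at node `a`, a single path through the `b` arm covers 4 of 5 nodes but only 3 of 5 edges, showing why branch coverage is the stricter criterion.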

Journal ArticleDOI
TL;DR: Simple necessary and sufficient conditions for robust stability in the general case and in the case of systems with a linear uncertainty structure in two sub-cases: a unity rank uncertainty structure and nonnegative perturbation matrices are established.
Abstract: The paper is devoted to the problem of robust stability of positive continuous-time linear systems with delays with structured perturbations of state matrices. Simple necessary and sufficient conditions for robust stability in the general case and in the case of systems with a linear uncertainty structure in two sub-cases: (i) a unity rank uncertainty structure and (ii) nonnegative perturbation matrices are established. The problems are illustrated with numerical examples.

Journal ArticleDOI
TL;DR: The paper discusses the problem of rule weight tuning in neuro-fuzzy systems with parameterized consequences in which rule weights and the activation of the rules are not interchangeable and several heuristics with experimental results showing the advantage of their usage are presented.
Abstract: The paper discusses the problem of rule weight tuning in neuro-fuzzy systems with parameterized consequences in which rule weights and the activation of the rules are not interchangeable. Some heuristic methods of rule weight computation in neuro-fuzzy systems with a hierarchical input domain partition and parameterized consequences are proposed. Several heuristics with experimental results showing the advantage of their usage are presented.

Journal ArticleDOI
TL;DR: A fault tolerant controller is proposed to guarantee exponential stability of switched systems with time delay; the approach is then extended to switched time delay systems with Lipschitz nonlinearities and structured uncertainties.
Abstract: This paper investigates the problem of fault tolerant control of a class of uncertain switched nonlinear systems with time delay under asynchronous switching. The systems under consideration suffer from delayed switchings of the controller. First, a fault tolerant controller is proposed to guarantee exponential stability of the switched systems with time delay. The dwell time approach is utilized for stability analysis and controller design. Then the proposed approach is extended to take into account switched time delay systems with Lipschitz nonlinearities and structured uncertainties. Finally, a numerical example is given to illustrate the effectiveness of the proposed method.

Journal ArticleDOI
TL;DR: This work proposes a novel ensemble machine learning method that is able to learn rules, solve problems in a parallel way and adapt parameters used by its components, and may be treated as a meta classifier system.
Abstract: Self-adaptation is a key feature of evolutionary algorithms (EAs). Although EAs have been used successfully to solve a wide variety of problems, the performance of this technique depends heavily on the selection of the EA parameters. Moreover, the process of setting such parameters is considered a time-consuming task. Several research works have tried to deal with this problem; however, the construction of algorithms letting the parameters adapt themselves to the problem is a critical and open problem of EAs. This work proposes a novel ensemble machine learning method that is able to learn rules, solve problems in a parallel way and adapt parameters used by its components. A self-adaptive ensemble machine consists of simultaneously working extended classifier systems (XCSs). The proposed ensemble machine may be treated as a meta classifier system. A new self-adaptive XCS-based ensemble machine was compared with two other XCS-based ensembles in relation to one-step binary problems: Multiplexer, One Counts, Hidden Parity, and randomly generated Boolean functions, in a noisy version as well. Results of the experiments have shown the ability of the model to adapt the mutation rate and the tournament size. The results are analyzed in detail.

Journal ArticleDOI
TL;DR: The paper describes the application of the traffic engineering framework together with application layer procedures as mechanisms for the reduction of network latency lags in networked control systems with high dynamic control plants.
Abstract: The paper describes the application of the traffic engineering framework together with application layer procedures as mechanisms for the reduction of network latency lags. These mechanisms allow using standard and inexpensive hardware and software technologies typically applied for office networking as a means of realising networked control systems (NCSs) with high dynamic control plants, where a high dynamic control plant is the one that requires the sampling period several times shorter than communication lags induced by a network. The general discussion is illustrated by experimental results obtained in a laboratory NCS with the magnetic levitation system (MLS), which is an example of a structurally unstable plant of high dynamics.

Journal ArticleDOI
TL;DR: It is shown that the HDPPN mark-dynamic and trajectory-dynamic properties of equilibrium, stability and final decision points coincide under some restrictions, and an algorithm for optimum hierarchical trajectory planning is proposed.
Abstract: We provide a framework for hierarchical specification called Hierarchical Decision Process Petri Nets (HDPPNs). It is an extension of Decision Process Petri Nets (DPPNs) including a hierarchical decomposition process that generates less complex nets with equivalent behavior. As a result, the complexity of the analysis for a sophisticated system is drastically reduced. In the HDPPN, we represent the mark-dynamic and trajectory-dynamic properties of a DPPN. Within the framework of the mark-dynamic properties, we show that the HDPPN theoretic notions of (local and global) equilibrium and stability are those of the DPPN. As a result in the trajectory-dynamic properties framework, we obtain equivalent characterizations of that of the DPPN for final decision points and stability. We show that the HDPPN mark-dynamic and trajectory-dynamic properties of equilibrium, stability and final decision points coincide under some restrictions. We propose an algorithm for optimum hierarchical trajectory planning. The hierarchical decomposition process is presented under a formal treatment and is illustrated with application examples.

Journal ArticleDOI
TL;DR: The paper discusses in detail all aspects of PP-1 cipher design including S-box construction, permutation and round key scheduling, and some processing speed test results are given and compared with those of other ciphers.
Abstract: A totally involutional, highly scalable PP-1 cipher is proposed, evaluated and discussed. Having very low memory requirements and using only simple and fast arithmetic operations, the cipher is aimed at platforms with limited resources, e.g., smartcards. At the core of the cipher's processing is a carefully designed S-box. The paper discusses in detail all aspects of PP-1 cipher design including S-box construction, permutation and round key scheduling. The quality of the PP-1 cipher is also evaluated with respect to linear cryptanalysis and other attacks. PP-1's concurrent error detection is also discussed. Some processing speed test results are given and compared with those of other ciphers.

Journal ArticleDOI
TL;DR: The present paper establishes the notion of an ultra regular covering space, studies its various properties and calculates an automorphism group of the ultra regularCovering space, and develops the concept of compatible adjacency of a digital wedge.
Abstract: In order to classify digital spaces in terms of digital-homotopic theoretical tools, a recent paper by Han (2006b) (see also the works of Boxer and Karaca (2008) as well as Han (2007b)) established the notion of regular covering space from the viewpoint of digital covering theory and studied an automorphism group (or Deck's discrete transformation group) of a digital covering. By using these tools, we can calculate digital fundamental groups of some digital spaces and classify digital covering spaces satisfying a radius 2 local isomorphism (Boxer and Karaca, 2008; Han, 2006b; 2008b; 2008d; 2009b). However, for a digital covering which does not satisfy a radius 2 local isomorphism, the study of a digital fundamental group of a digital space and its automorphism group remains open. In order to examine this problem, the present paper establishes the notion of an ultra regular covering space, studies its various properties and calculates an automorphism group of the ultra regular covering space. In particular, the paper develops the notion of compatible adjacency of a digital wedge. By comparing an ultra regular covering space with a regular covering space, we can propose strong merits of the former.

Journal ArticleDOI
TL;DR: The method of using the operational map of robot surrounding to improve self-localisation accuracy of the robot camera and to reduce the size of the Kalman-filter state-vector with respect to the vector size involving point-wise environment features only is described.
Abstract: Visual simultaneous localisation and map-building systems which take advantage of some landmarks other than point-wise environment features are not frequently reported. In the following paper the method of using the operational map of robot surrounding, which is complemented with visible structured passive landmarks, is described. These landmarks are used to improve self-localisation accuracy of the robot camera and to reduce the size of the Kalman-filter state-vector with respect to the vector size involving point-wise environment features only. Structured landmarks reduce the drift of the camera pose estimate and improve the reliability of the map which is built on-line. Results of simulation experiments are described, proving advantages of such an approach.

Journal ArticleDOI
TL;DR: The classical Cayley-Hamilton theorem is extended to 2D fractional systems described by the Roesser model and necessary and sufficient conditions for the positivity and stabilization by the state-feedback of fractional 2D linear systems are established.
Abstract: A new class of fractional 2D linear discrete-time systems is introduced. The fractional difference definition is applied to each dimension of a 2D Roesser model. Solutions of these systems are derived using a 2D Z-transform. The classical Cayley-Hamilton theorem is extended to 2D fractional systems described by the Roesser model. Necessary and sufficient conditions for the positivity and stabilization by the state-feedback of fractional 2D linear systems are established. A procedure for the computation of a gain matrix is proposed and illustrated by a numerical example.
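The fractional difference in such models is typically built from Grünwald-Letnikov binomial coefficients c_k = (-1)^k C(α, k), computed by the standard recurrence; this generic 1-D sketch is not the paper's exact 2-D Roesser operator:

```python
# Grunwald-Letnikov coefficients c_k = (-1)^k * binom(alpha, k) via the
# recurrence c_0 = 1, c_k = c_{k-1} * (1 - (alpha + 1) / k).
def gl_coefficients(alpha, n):
    c = [1.0]
    for k in range(1, n + 1):
        c.append(c[-1] * (1.0 - (alpha + 1.0) / k))
    return c
```

For α = 1 the coefficients reduce to [1, -1, 0, 0, ...], recovering the ordinary first-order difference, which is a quick sanity check on any fractional-difference implementation.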

Journal ArticleDOI
TL;DR: A method for evaluating the importance of GO terms which compose multi-attribute rules and a new algorithm of rule induction is proposed in order to obtain a more synthetic and more accurate description of gene groups than the description obtained by initially determined rules.
Abstract: In this paper we present a method for evaluating the importance of GO terms which compose multi-attribute rules. The rules are generated for the purpose of biological interpretation of gene groups. Each multi-attribute rule is a combination of GO terms and, based on relationships among them, one can obtain a functional description of gene groups. We present a method which allows evaluating the influence of a given GO term on the quality of a rule and the quality of a whole set of rules. For each GO term, we compute how strongly it influences the quality of the generated set of rules and, therefore, the quality of the obtained description. Based on the computed quality of GO terms, we propose a new algorithm of rule induction in order to obtain a more synthetic and more accurate description of gene groups than the description obtained by initially determined rules. The obtained GO terms ranking and newly obtained rules provide additional information about the biological function of genes that compose the analyzed group of genes.

Journal ArticleDOI
TL;DR: An approximate method of solving the fractional (in the time variable) equation which describes the processes lying between heat and wave behavior with the use of the iterative GMRES method.
Abstract: This paper presents an approximate method of solving the fractional (in the time variable) equation which describes the processes lying between heat and wave behavior. The approximation consists in the application of a finite subspace of an infinite basis in the time variable (Galerkin method) and discretization in space variables. In the final step, a large-scale system of linear equations with a non-symmetric matrix is solved with the use of the iterative GMRES method.
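The final linear-algebra step described above, solving a large nonsymmetric system iteratively, can be illustrated with SciPy's GMRES. The 1-D convection-diffusion tridiagonal matrix below is only a stand-in for the matrix the Galerkin discretisation actually produces:

```python
# Solving a nonsymmetric sparse system with the iterative GMRES method.
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import gmres

n = 50
# nonsymmetric tridiagonal matrix: diffusion plus one-sided convection
A = diags([-1.2 * np.ones(n - 1), 2.0 * np.ones(n), -0.8 * np.ones(n - 1)],
          offsets=[-1, 0, 1], format="csr")
b = np.ones(n)
x, info = gmres(A, b)   # info == 0 signals convergence
```

GMRES is the natural choice here precisely because the discretised operator is nonsymmetric, which rules out the cheaper conjugate-gradient method.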

Journal ArticleDOI
TL;DR: A mathematical model of HIV-1 infection including the saturation effect of healthy cell proliferation and its dynamics in tissue culture, where the infection spreads directly from infected cells to healthy cells through cell-to-cell contact, is derived.
Abstract: In this paper we derive a model describing the dynamics of HIV-1 infection in tissue culture, where the infection spreads directly from infected cells to healthy cells through cell-to-cell contact. We assume that the infection rate between healthy and infected cells is a saturating function of cell concentration. Our analysis shows that if the basic reproduction number does not exceed unity, then infected cells are cleared and the disease dies out. Otherwise, the infection is persistent with the existence of an infected equilibrium. Numerical simulations indicate that, depending on the fraction of cells surviving the incubation period, the solutions approach either an infected steady state or a periodic orbit.
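The threshold behaviour described above can be illustrated numerically with an assumed saturating-incidence model (the paper's exact equations and parameter values may differ): x' = λ - dx - βxy/(1 + ay) for healthy cells and y' = βxy/(1 + ay) - δy for infected cells, with infection-free equilibrium x0 = λ/d and R0 = βx0/δ:

```python
# Euler simulation of an assumed saturating-incidence cell-to-cell model.
# With R0 = beta * (lam / d) / delta below unity, infected cells die out.
def simulate(beta, lam=10.0, d=0.1, a=0.1, delta=1.0,
             x=100.0, y=10.0, dt=0.01, T=60.0):
    for _ in range(int(T / dt)):
        inc = beta * x * y / (1.0 + a * y)   # saturating infection rate
        x += dt * (lam - d * x - inc)
        y += dt * (inc - delta * y)
    return x, y

# beta = 0.005 gives R0 = 0.005 * (10/0.1) / 1 = 0.5 < 1: extinction expected
x_end, y_end = simulate(beta=0.005)
```

With these made-up parameters the infected population decays to zero and the healthy population returns to its infection-free equilibrium λ/d = 100, matching the R0 < 1 case of the stated result.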