Learning and Convergence of Fuzzy Cognitive Maps Used in Pattern Recognition
Citations
A review on methods and software for fuzzy cognitive maps
Identifying the Components and Interrelationships of Smart Cities in Indonesia: Supporting Policymaking via Fuzzy Cognitive Systems
FCM Expert: Software Tool for Scenario Analysis and Pattern Classification Based on Fuzzy Cognitive Maps
Fuzzy Cognitive Maps Based Models for Pattern Classification: Advances and Challenges
Neural Networks And Fuzzy Systems A Dynamical Systems Approach To Machine Intelligence
References
Particle swarm optimization
A logical calculus of the ideas immanent in nervous activity
Individual Comparisons by Ranking Methods
The particle swarm - explosion, stability, and convergence in a multidimensional complex space
Frequently Asked Questions (13)
Q2. What are the future works in "Learning and convergence of fuzzy cognitive maps used in pattern recognition" ?
As future work, the authors will focus on hybridizing the proposed learning algorithm with existing learning rules for neural networks.
Q3. What is the widely accepted divergence measure?
A widely accepted divergence measure is the Kullback–Leibler divergence [27] between the true model and the approximating candidate model.
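As a minimal illustration of the measure mentioned above, the following sketch computes the Kullback–Leibler divergence between two discrete probability vectors (the "true model" and "candidate model" names are illustrative, not taken from the paper):

```python
import numpy as np

def kl_divergence(p, q):
    """Kullback-Leibler divergence D(P || Q) between two discrete
    distributions given as probability vectors of equal length."""
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    # Only terms with p > 0 contribute; q must be positive there.
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

true_model = [0.5, 0.3, 0.2]   # illustrative distributions
candidate = [0.4, 0.4, 0.2]
d = kl_divergence(true_model, candidate)
```

Note that the divergence is asymmetric (D(P‖Q) ≠ D(Q‖P)) and equals zero only when the two distributions coincide.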
Q4. What is the significance degree of the Wilcoxon signed rank test?
The Wilcoxon signed rank test is a nonparametric method employed in hypothesis testing situations, involving a design with two samples.
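For reference, a pure-Python sketch of the paired Wilcoxon signed-rank test using the normal approximation (zero differences discarded, tied absolute differences given average ranks); for small samples the exact distribution, as implemented in `scipy.stats.wilcoxon`, is preferable:

```python
import math

def wilcoxon_signed_rank(x, y):
    """Wilcoxon signed-rank test for two paired samples.
    Returns (W+, two-sided p-value) via the normal approximation."""
    diffs = [a - b for a, b in zip(x, y) if a != b]  # drop zero differences
    n = len(diffs)
    # Rank the absolute differences, averaging ranks of ties.
    order = sorted(range(n), key=lambda i: abs(diffs[i]))
    ranks = [0.0] * n
    i = 0
    while i < n:
        j = i
        while j + 1 < n and abs(diffs[order[j + 1]]) == abs(diffs[order[i]]):
            j += 1
        avg = (i + j) / 2 + 1  # ranks are 1-based
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    w_plus = sum(r for d, r in zip(diffs, ranks) if d > 0)
    mu = n * (n + 1) / 4
    sigma = math.sqrt(n * (n + 1) * (2 * n + 1) / 24)
    z = (w_plus - mu) / sigma
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return w_plus, p
```

When the two samples are statistically indistinguishable, W+ stays close to its expectation n(n+1)/4 and the p-value stays large.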
Q5. How many swarm particles are set to 80?
As mentioned, to minimize the error function (6) the authors adopt the constricted PSO Type-1 introduced by Clerc and Kennedy [23] as the numerical optimizer, with 80 swarm particles, inertia weight 𝜔 = 0.7298, and 𝑐1 = 𝑐2 = 1.496.
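A minimal sketch of that velocity update with the stated parameter values (the bounds, iteration count, and toy objective are illustrative assumptions, not values from the paper):

```python
import random

def constricted_pso(f, dim, n_particles=80, iters=100,
                    omega=0.7298, c1=1.496, c2=1.496, bounds=(-5.0, 5.0)):
    """Constricted PSO in inertia-weight form: with omega = 0.7298 and
    c1 = c2 = 1.496, the update is equivalent to Clerc-Kennedy Type-1
    constriction (chi = 0.7298, phi components 2.05 each)."""
    lo, hi = bounds
    pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (omega * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(max(pos[i][d] + vel[i][d], lo), hi)
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val
```

This parameter setting is one of the classic "guaranteed convergence" configurations: the constriction damps the velocities so the swarm contracts around the best solution found.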
Q6. What is the heuristic search approach used in this study?
In their study the authors adopt a heuristic search approach, since population-based metaheuristics can find near-optimal solutions in reasonable execution time while ignoring analytical properties of the target function (e.g., convexity, continuity, differentiability, or gradient information).
Q7. What is the function that computes the prediction error?
Equation (6) shows the error function to be optimized, where 𝑋 is the candidate solution generated by the selected optimizer, 𝐾 denotes the number of training patterns, 0 ≤ 𝐹(·) ≤ 1 is a function that computes the prediction error achieved by the classifier, and 0 ≤ 𝐻(·) ≤ 1 represents the convergence error accumulated while updating the activation values of the neurons.
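A schematic sketch of such a two-term objective; the equal weighting of the prediction term F(·) and the convergence term H(·), and the signatures of the two callables, are assumptions for illustration and may differ from the paper's Equation (6):

```python
def objective(candidate_weights, training_patterns, prediction_error, convergence_error):
    """Fitness of a candidate weight matrix X: average prediction error
    F(.) over the K training patterns plus the accumulated convergence
    error H(.). Both terms are assumed to lie in [0, 1]."""
    K = len(training_patterns)
    f_term = sum(prediction_error(candidate_weights, p) for p in training_patterns) / K
    h_term = convergence_error(candidate_weights, training_patterns)
    return f_term + h_term
```

The point of the second term is that the optimizer is pushed toward weight matrices that both classify well and drive the map to a stable activation state.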
Q8. How many sequence positions were used in the proposed classifier?
Such sites were biologically determined from clinical assays in infected patients, and they allowed an average reduction of 80% in the total number of sequence positions.
Q9. What is the mapping of the input and output neurons?
In the second step, the mapping ℳ: [0,1]𝑁 → [0,1]𝑀−𝑁 corresponds to the updating rule that computes the activation values of the output neurons.
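One step of the standard sigmoid FCM updating rule can be sketched as below; whether self-feedback or the previous activation is added to the weighted sum varies between FCM formulations, so the plain weighted sum here is an assumption:

```python
import math

def sigmoid(x, lam=1.0):
    """Sigmoid transfer function keeping activations in [0, 1]."""
    return 1.0 / (1.0 + math.exp(-lam * x))

def fcm_step(activations, W, lam=1.0):
    """One application of the FCM updating rule:
    a_i(t+1) = f(sum_j w_ji * a_j(t)), with W[j][i] the causal weight
    from neuron j to neuron i."""
    n = len(activations)
    return [sigmoid(sum(W[j][i] * activations[j] for j in range(n)), lam)
            for i in range(n)]
```

Iterating this rule from the activation values of the input neurons is what realizes the mapping from input to output activations.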
Q10. What is the definition of a heuristic procedure for a non-discrete?
In this section the authors describe a heuristic procedure called Stability based on Sigmoid Functions (SSF) for non-discrete FCM-based systems [10], which improves the system's convergence without altering the weight configuration.
Q11. What is the expected resistance class for the inhibitor?
Each training pattern comprises the activation values of the 𝑁 input neurons and the expected resistance class for the inhibitor (i.e., 0-susceptible and 1-resistant).
Q12. What is the function that determines the predicted class label?
Observe that the responses at the previous discrete-time steps are not considered, since they are not used when computing the predicted class; instead, such responses are evaluated when analyzing the convergence of the FCM-based classifier.
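The separation described above can be sketched as follows; the 0.5 decision threshold, the choice of the last neuron as the single output neuron, and the use of total step-to-step change as the convergence measure are all illustrative assumptions:

```python
import math

def classify_with_history(a0, W, steps=20):
    """Run the sigmoid FCM updating rule for a fixed number of steps.
    Only the response at the LAST step determines the predicted class;
    the earlier responses are kept solely to analyze convergence."""
    n = len(a0)
    history = [list(a0)]
    for _ in range(steps):
        a = history[-1]
        history.append([1.0 / (1.0 + math.exp(-sum(W[j][i] * a[j] for j in range(n))))
                        for i in range(n)])
    # Accumulated change between consecutive states (convergence analysis only).
    conv_err = sum(abs(x - y) for prev, cur in zip(history, history[1:])
                   for x, y in zip(prev, cur))
    # Assumed decision rule: threshold the last (output) neuron at 0.5.
    label = 1 if history[-1][-1] >= 0.5 else 0
    return label, conv_err
```

A small accumulated change indicates the map settled into a fixed point, which is exactly the behavior the learning algorithm rewards.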
Q13. What is the description of the learning method?
More importantly, during simulations the authors observed that their learning method was capable of producing the same classification accuracy (see Table 1) for APV, IDV, RTV and ATV, while for the remaining inhibitors it achieved better results.