# Showing papers in "viXra in 2015"

•

TL;DR: The improved cosine measures of SNSs based on the cosine function can overcome some drawbacks of existing cosine similarity measures of SNSs in vector space, and the resulting diagnosis method is well suited to handling medical diagnosis problems with simplified neutrosophic information, demonstrating the effectiveness and rationality of the medical diagnoses.

Abstract: Keywords: Simplified neutrosophic set; single valued neutrosophic set; interval neutrosophic set; cosine similarity measure; medical diagnosis. Objective: In pattern recognition and medical diagnosis, similarity measure is an important mathematical tool. To overcome some disadvantages of existing cosine similarity measures of simplified neutrosophic sets (SNSs) in vector space, this paper proposed improved cosine similarity measures of SNSs based on cosine function, including single valued neutrosophic cosine similarity measures and interval neutrosophic cosine similarity measures. Then, weighted cosine similarity measures of SNSs were introduced by taking into account the importance of each element. Further, a medical diagnosis method using the improved cosine similarity measures was proposed to solve medical diagnosis problems with simplified neutrosophic information. Materials and methods: The improved cosine similarity measures between SNSs were introduced based on cosine function. Then, we compared the improved cosine similarity measures of SNSs with existing cosine similarity measures of SNSs by numerical examples to demonstrate their effectiveness and rationality for overcoming some shortcomings of existing cosine similarity measures of SNSs in some cases. In the medical diagnosis method, we can find a proper diagnosis by the cosine similarity measures between the symptoms and considered diseases which are represented by SNSs. Then, the medical diagnosis method based on the improved cosine similarity measures was applied to two medical diagnosis problems to show the applications and effectiveness of the proposed method. Results: Both numerical examples demonstrated that the improved cosine similarity measures of SNSs based on the cosine function can overcome the shortcomings of the existing cosine similarity measures between two vectors in some cases.
In two medical diagnosis problems, the medical diagnoses using various similarity measures of SNSs yielded identical diagnosis results and demonstrated the effectiveness and rationality of the diagnosis method proposed in this paper. Conclusions: The improved cosine measures of SNSs based on the cosine function can overcome some drawbacks of existing cosine similarity measures of SNSs in vector space; the resulting diagnosis method is well suited to handling medical diagnosis problems with simplified neutrosophic information and demonstrates the effectiveness and rationality of the medical diagnoses.
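As a concrete illustration of such a cosine-function-based measure, here is a minimal Python sketch. The specific formula (the cosine of π/2 times the largest component-wise deviation) is one form reported in this strand of the literature and should be treated as an assumption, not necessarily the paper's verbatim definition:

```python
import math

def improved_cosine_similarity(A, B):
    """Cosine-function-based similarity between two single valued
    neutrosophic sets A and B, each given as a list of (T, I, F)
    triples with components in [0, 1]. Returns a value in [0, 1];
    identical sets score 1, maximally different elements score 0."""
    assert len(A) == len(B)
    total = 0.0
    for (ta, ia, fa), (tb, ib, fb) in zip(A, B):
        # largest of the three component-wise deviations (assumed form)
        d = max(abs(ta - tb), abs(ia - ib), abs(fa - fb))
        total += math.cos(math.pi * d / 2)
    return total / len(A)
```

For example, a set compared with itself scores 1, while a fully true element against a fully false one scores 0; a weighted variant would simply replace the average by a weighted sum.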

227 citations

•

TL;DR: In this article, multi-valued neutrosophic sets (MVNSs) are introduced, which allow the truth-membership, indeterminacy-membership and falsity-membership degrees to take a set of crisp values between zero and one, respectively.

Abstract: In recent years, hesitant fuzzy sets (HFSs) and neutrosophic sets (NSs) have become a subject of great interest for researchers and have been widely applied to multi-criteria group decision-making (MCGDM) problems. In this paper, multi-valued neutrosophic sets (MVNSs) are introduced, which allow the truth-membership, indeterminacy-membership and falsity-membership degrees to take a set of crisp values between zero and one, respectively.

136 citations

•

TL;DR: In this article, a trapezoidal neutrosophic set, some operational rules, and score and accuracy functions for trapezoidal neutrosophic numbers are proposed.

Abstract: Based on the combination of trapezoidal fuzzy numbers and a single valued neutrosophic set, this paper proposes a trapezoidal neutrosophic set, some operational rules, score and accuracy functions for trapezoidal neutrosophic numbers.
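A score function of the general kind described can be sketched as follows. The exact coefficients (averaging the four trapezoid corners via division by 12, weighted by 2 + T − I − F) are an assumption for illustration, not necessarily the paper's definition:

```python
def trapezoidal_neutrosophic_score(a, t, i, f):
    """Illustrative score function for a trapezoidal neutrosophic number
    ((a1, a2, a3, a4); t, i, f): the mean of the trapezoid's corners
    scaled by how 'positive' the neutrosophic part (t, i, f) is.
    Coefficients are assumed, not taken verbatim from the paper."""
    a1, a2, a3, a4 = a
    return (a1 + a2 + a3 + a4) / 12.0 * (2 + t - i - f)
```

Under this form, the ideal number ((1,1,1,1); 1, 0, 0) scores 1, and increasing indeterminacy or falsity lowers the score, which is the ranking behaviour such functions are designed to give.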

134 citations

•

TL;DR: In this article, a method of multi-criteria decision making that combines interval neutrosophic sets and TOPSIS involving the relative likelihood-based comparison relations of the performances of alternatives that are aggregated into interval numbers is presented.

Abstract: The main purpose of this paper is to provide a method of multi-criteria decision making that combines interval neutrosophic sets and TOPSIS involving the relative likelihood-based comparison relations of the performances of alternatives that are aggregated into interval numbers.

133 citations

•

TL;DR: Wang et al. as discussed by the authors proposed similarity measures between single-valued neutrosophic sets (SVNSs) based on tangent function and weighted similarity measures of SVNSs considering the importance of each element.

Abstract: Similarity measures play an important role in pattern recognition and medical diagnosis, yet existing medical diagnosis methods scarcely address multi-period medical diagnosis problems with neutrosophic information. Hence, this paper proposed similarity measures between single valued neutrosophic sets (SVNSs) based on the tangent function, and weighted similarity measures of SVNSs considering the importance of each element.
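One plausible tangent-function-based form can be sketched in Python; the 1 − tan(π·deviation/12) shape is an assumption for illustration, not necessarily the paper's exact formula:

```python
import math

def tangent_similarity(A, B):
    """Tangent-function-based similarity between two SVNSs, each a list
    of (T, I, F) triples in [0, 1]. The summed deviation is at most 3,
    so the tangent argument stays within [0, pi/4] and the measure
    stays within [0, 1]."""
    assert len(A) == len(B)
    s = 0.0
    for (ta, ia, fa), (tb, ib, fb) in zip(A, B):
        dev = abs(ta - tb) + abs(ia - ib) + abs(fa - fb)
        s += math.tan(math.pi * dev / 12)  # dev <= 3  =>  argument <= pi/4
    return 1.0 - s / len(A)
```

A weighted version would replace the plain average by a weighted sum over elements, mirroring the weighted measures the abstract mentions.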

111 citations

•

TL;DR: In this paper, it was shown that the non-perturbative Scale-Symmetric Theory (SST) is the missing part of a Theory of Everything (ToE), which leads to a Higgs field composed of non-gravitating tachyons.

Abstract: Here we showed that the non-perturbative Scale-Symmetric Theory (SST) is the missing part of the Theory of Everything (ToE). General Relativity (GR) leads to the Higgs field composed of the non-gravitating tachyons. Due to the succeeding phase transitions of the superluminal Higgs field, there appear different scales - this is the foundation of the Scale-Symmetric Theory. Theories of the three scales are dual, i.e. ratios of physical quantities in these scales, concerning the same mechanisms, have the same values. Due to the succeeding phase transitions, there appear the binary systems of superluminal closed strings (i.e. the entanglons that are responsible for the quantum entanglement), the neutrinos and neutrino-antineutrino pairs the luminal Einstein spacetime consists of, the cores of baryons, and the cores of cosmic structures (the cores of protoworlds) whose evolution leads to dark matter, dark energy, and the expanding universes. There appear three additional interactions, i.e. viscosity of tachyons, superluminal linear quantum entanglement, and volumetric confinement. There appears the four-particle symmetry that solves many problems. Here as well we described the internal dynamics of baryons that leads to the atom-like structure of baryons and next to mesons and to the composite Higgs boson with a mass of 125 GeV. Among many other things, we described symmetries and laws of conservation that result from the initial parameters; we calculated the physical constants from initial conditions, the coupling constants, the running coupling for nuclear strong interactions, and the masses of quarks. Due to the superluminal Higgs field, the neutrinos acquire their gravitational mass. The theoretical results are much better than the results obtained within the Standard Model. We apply only 7 parameters and very simple mathematics.

99 citations

•

TL;DR: A new outranking approach for multi-criteria decision-making (MCDM) problems is developed in the context of a simplified neutrosophic environment, where the truth-membership degree, indeterminacy-membership degree and falsity-membership degree for each element are singleton subsets in [0,1].

Abstract: In this paper, a new outranking approach for multi-criteria decision-making (MCDM) problems is developed in the context of a simplified neutrosophic environment, where the truth-membership degree, indeterminacy-membership degree and falsity-membership degree for each element are singleton subsets in [0,1].

95 citations

•

TL;DR: In this paper, a new concept called "generalized neutrosophic soft set" is presented, which incorporates the beneficial properties of both the generalized neutrosophic set introduced by A.A. Salama [7] and the soft set techniques proposed by Molodtsov [4].

Abstract: In this paper we present a new concept called “generalized neutrosophic soft set”. This concept incorporates the beneficial properties of both the generalized neutrosophic set introduced by A.A. Salama [7] and the soft set techniques proposed by Molodtsov [4]. We also study some properties of this concept. Some definitions and operations have been introduced on the generalized neutrosophic soft set. Finally we present an application of the generalized neutrosophic soft set to a decision making problem.

85 citations

••

TL;DR: The paper presents the extension of the VIKOR method for the solution of multicriteria decision making problems, namely VIKOR-IVNS, developed in the context of interval-valued neutrosophic sets.

Abstract: The paper presents the extension of VIKOR method for the solution of the multicriteria decision making problems, namely VIKOR-IVNS. The original VIKOR method was proposed for the solution of the decision problems with the conflicting and non-common measurable criteria. In this paper, a new extension of the crisp VIKOR method has been proposed. This extension is developed in the context of interval-valued neutrosophic sets.

69 citations

•

TL;DR: Moskaliuk et al. as mentioned in this paper showed that the quantum value of the Heisenberg Uncertainty Principle (HUP) is likely not recoverable due to the Bicep 2 mistake.

Abstract: This paper is, with the permission of Stepan Moskaliuk, similar to what he will put in the conference proceedings of the summer teaching school and workshop for Ukrainian PhD physics students as given in Bratislava, as of summer 2015. With his permission, this paper will be in part reproduced here for this journal. First of all, we restate a proof of a highly localized special case of a metric tensor uncertainty principle first written up by Unruh. Unruh did not use the Robertson-Walker geometry which we do, and it so happens that the dominant metric tensor we will be examining, is variation in . The metric tensor variations given by , and are negligible, as compared to the variation . Afterwards, what is referred to by Barbour as emergent duration of time is from the Heisenberg Uncertainty Principle (HUP) applied to in such a way as to give, in the Planckian space-time regime, a nonzero minimum lower bound to a massive graviton, . The lower bound to the massive graviton is influenced by and kinetic energy which is in the Planckian emergent duration of time as . We find from the version of the Heisenberg Uncertainty Principle (HUP), that the quantum value of the Heisenberg Uncertainty Principle (HUP) is likely not recoverable due to . I.e. is consistent with non-curved space, so no longer holds. This even if we take the stress energy tensor approximation where the fluid approximation is used. Our treatment of the inflaton is via Handley et al, where we consider the lower mass limits of the graviton as due to when the inflaton is many times larger than a Potential energy, with a kinetic energy (KE) proportional to , with initial degrees of freedom, and T the initial temperature, leading to non-zero initial entropy as stated in Appendix A. In addition we also examine a Ricci scalar value at the boundary between the Pre-Planckian and Planckian regimes of space-time, setting the magnitude of k as approaching flat space conditions right after the Planck regime.
Furthermore, we have an approximation as to initial entropy production. N ~ Finally, this entropy is N, and we get an initial version of the cosmological “constant” as Appendix D which is linked to the initial value of a graviton mass. Appendix E is for the Riemannian-Penrose inequality, which is either a nonzero NLED scale factor or quantum bounce as of LQG. Note that Appendix F gives conditions so that a Pre-Planckian kinetic energy (inflaton) value greater than the Potential energy occurs, which is foundational to the lower bound to the graviton mass. We will in the future add more structure to this calculation so as to confirm via a precise calculation that the lower bound to the graviton mass is about 10^-70 grams. Our lower bound is a dimensional approximation so far. We will make it exact. We conclude in this document with Appendix G, which compares our Pre-Planckian space-time metric Heisenberg Uncertainty Principle with the generalized uncertainty principle in quantum gravity. Our result is different from the one given by Ali, Khalil and Vagenas in that our energy fluctuation is not proportional to that of processes of energy connected to black hole physics, and we also allow for the possibility of Pre-Planckian time. Whereas their result (and the generalized string theory Heisenberg Uncertainty Principle) have a more limited regime of interpolation of final results. We do come up with equivalent bounds to recover and the deviation of fluctuations of energy, but with very specific bounds upon the parameters of Ali, Khalil, and Vagenas, but this has to be more fully explored. Finally, we close with a comparison of what this new metric tensor uncertainty principle presages as far as avoiding the Bicep 2 mistake, and the different theories of gravity, as reviewed in Appendix H.

65 citations

•

TL;DR: In this paper, a method of multi-criteria decision-making that combines simplified neutrosophic linguistic sets and the normalized Bonferroni mean operator is proposed to address situations where the criterion values take the form of simplified neutrosophic linguistic numbers and the criterion weights are known.

Abstract: The main purpose of this paper is to provide a method of multi-criteria decision-making that combines simplified neutrosophic linguistic sets and normalized Bonferroni mean operator to address the situations where the criterion values take the form of simplified neutrosophic linguistic numbers and the criterion weights are known.

•

TL;DR: A new rough cotangent similarity measure between two rough neutrosophic sets is proposed and a numerical example of the medical diagnosis is provided to show the effectiveness and flexibility of the proposed method.

Abstract: In this paper, we define a rough cosine similarity measure between two rough neutrosophic sets. The notions of rough neutrosophic sets (RNS) will be used as vector representations in 3D-vector space.

•

TL;DR: The purpose of the study is to propose some power aggregation operators based on neutrosophic number which is used to deal with multiple attributes group decision making problems more effectively.

Abstract: Neutrosophic number (NN) is a useful tool which is used to overcome the difficulty of describing indeterminate evaluation information.

•

TL;DR: In this paper, it was proposed that the genuinely superfluid portion of a superfluid corresponds to a large heff phase near criticality, and that in other phase-transition-like phenomena a transition to a dark phase also occurs in the vicinity of criticality.

Abstract: Quantum criticality is one of the cornerstone assumptions of TGD. The value of the Kahler coupling strength fixes quantum TGD and is analogous to a critical temperature. The TGD Universe would be quantum critical. What this means is, however, far from obvious, and I have pondered the notion repeatedly both from the point of view of mathematical description and phenomenology. Superfluids exhibit rather mysterious looking effects such as the fountain effect and what looks like quantum coherence of superfluid containers which should be classically isolated. These findings serve as a motivation for the proposal that the genuinely superfluid portion of a superfluid corresponds to a large heff phase near criticality, and that in other phase-transition-like phenomena a transition to a dark phase also occurs in the vicinity of criticality.

•

TL;DR: A collision of two very big pieces of space leads to the initial conditions for the inflation field and next to the succeeding phase transitions of the superluminal Higgs field composed of the non-gravitating tachyons as discussed by the authors.

Abstract: A collision of two very big pieces of space leads to the initial conditions for the inflation field and next to the succeeding phase transitions of the superluminal inflation field (i.e. of the superluminal Higgs field composed of the non-gravitating tachyons), so there appear different scales. Due to the inflation, there appear two boundaries of our Cosmos that cause the basic physical constants to be invariant. The fourth phase transition of the Higgs field, described within the Scale-Symmetric Theory, leads to the cosmological structures (protoworlds) which appeared in our Cosmos after the inflation. Evolution of the protoworlds leads to the dark matter, dark energy, and expanding universes - the three components of a universe consist of the components of the luminal Einstein spacetime that appeared during the inflation (the components are the neutrino-antineutrino pairs). The first phase transition leads to the superluminal spin-1 entanglons responsible for the quantum entanglement, the second phase transition leads to the luminal Einstein spacetime, whereas the third leads to the cores of baryons and electrons. The dark matter is entangled with the expanding baryonic matter (this is the long-distance directional quantum entanglement); the dark energy consists of free neutrino-antineutrino pairs, whereas in hadrons and electron-like leptons the neutrino-antineutrino pairs are confined (the volumetric confinement) and/or entangled (the short-distance directional entanglement). Both the confinement and the quantum entanglement are mediated by the entanglons the neutrinos consist of. Universes are created inside the cores of protoworlds (this looks similar to the creation of a neutral pion in the core of a baryon). The gravitational mass of the core of each protoworld is equal to the superluminal non-gravitating energy frozen inside each stable neutrino (stable are only the electron- and muon-neutrinos).
The protoworld-neutrino transition causes the entanglons responsible for quantum entanglement of the neutrino-antineutrino pairs a protoworld consists of to be frozen in the new neutrino, so the neutrino-antineutrino pairs are entangled only with the baryonic matter - it means that the core of a protoworld transforms into the dark matter. The inflows of the dark matter and dark energy into the baryonic matter cause the exits of the universes from their black-hole state. To correctly describe the dynamics of an expanding universe, we must know that the speed of light in "vacuum", c, is the speed in relation to the source or in relation to a last-interaction object/detector - it follows from the quantum entanglement of light with a last-interaction object that fixes the speed c. The inflows of the dark matter fix the radial speed of the front of the expanding baryonic matter - in our Universe it is equal to 0.6415c. As a result, the relative speed of light emitted by the most distant galaxies in relation to galaxies placed close to the centre of the expanding Universe is 0.3585c. Such cosmological dynamics shows that

•

TL;DR: In this article, the Choquet integral and the interval neutrosophic set theory are combined to make multi-criteria decisions for problems under a neutrosophic fuzzy environment, and a ranking index is proposed according to its geometrical structure.

Abstract: In this paper, the Choquet integral and the interval neutrosophic set theory are combined to make multi-criteria decisions for problems under a neutrosophic fuzzy environment. Firstly, a ranking index is proposed according to its geometrical structure, and an approach for comparing two interval neutrosophic numbers is given. Then, a ≤L-implied operation-invariant total order which satisfies the order-preserving condition is proposed.

•

TL;DR: In this paper, the authors introduce the notion of neutrosophic quadruple numbers and of the neutrosophic system, a quasi-classical system that deals with quasi-terms/concepts/attributes, i.e. ones that are partially true/membership/probable (t%), partially indeterminate (i%), and partially false/nonmembership/improbable (f%).

Abstract: Symbolic (or Literal) Neutrosophic Theory refers to the use of abstract symbols (i.e. the letters T, I, F, or their refined indexed letters Tj, Ik, Fl) in neutrosophics. In the first chapter we extend the dialectical triad thesis-antithesis-synthesis (the dynamics of the thesis and the antithesis, to get a synthesis) to the neutrosophic tetrad thesis-antithesis-neutrothesis-neutrosynthesis (the dynamics of the thesis, the antithesis, and the neutrothesis, in order to get a neutrosynthesis). In the second chapter we introduce the neutrosophic system and the neutrosophic dynamic system. A neutrosophic system is a quasi- or (t,i,f)-classical system, in the sense that the neutrosophic system deals with quasi-terms/concepts/attributes, etc. [or (t,i,f)-terms/concepts/attributes], which are approximations of the classical terms/concepts/attributes, i.e. they are partially true/membership/probable (t%), partially indeterminate (i%), and partially false/nonmembership/improbable (f%), where t,i,f are subsets of the unitary interval [0,1]. In the third chapter we introduce for the first time the notions of Neutrosophic Axiom, Neutrosophic Deducibility, Neutrosophic Axiomatic System, Degree of Contradiction (Dissimilarity) of Two Neutrosophic Axioms, etc. In the fourth chapter we introduce for the first time a new type of structures, called (t, i, f)-Neutrosophic Structures, presented from a neutrosophic logic perspective, and we show particular cases of such structures in geometry and in algebra. In any field of knowledge, each structure is composed of two parts: a space, and a set of axioms (or laws) acting (governing) on it. If the space, or at least one of its axioms (laws), has some indeterminacy of the form (t, i, f) ≠ (1, 0, 0), that structure is a (t, i, f)-Neutrosophic Structure. In the fifth chapter we make a short history of: the neutrosophic set, neutrosophic numerical components and neutrosophic literal components, neutrosophic numbers, etc.
The aim of this chapter is to construct examples of splitting the literal indeterminacy (I) into literal sub-indeterminacies (I1,I2,…,Ir), and to define a multiplication law of these literal sub-indeterminacies in order to be able to build refined I-neutrosophic algebraic structures. In the sixth chapter we define for the first time three neutrosophic actions and their properties. We then introduce the prevalence order on {T,I,F} with respect to a given neutrosophic operator “o”, which may be subjective - as defined by the neutrosophic experts. And the refinement of the neutrosophic entities T, I, and F. Then we extend the classical logical operators to neutrosophic literal (symbolic) logical operators and to refined literal (symbolic) logical operators, and we define the refined neutrosophic literal (symbolic) space. In the seventh chapter we introduce for the first time the neutrosophic quadruple numbers (of the form a+bT+cI+dF) and the refined neutrosophic quadruple numbers. Then we define an absorbance law, based on a prevalence order, both of them in order to multiply the neutrosophic components T, I, F or their sub-components T_j, I_k, F_l and thus to construct the multiplication of neutrosophic quadruple numbers.

••

TL;DR: A cotangent similarity measure of the neutrosophic refined set is proposed and its properties are studied; it is an extension of the cotangent similarity measure of single valued neutrosophic sets.

Abstract: In this paper, cotangent similarity measure of neutrosophic refined set is proposed and some of its properties are studied. Finally, using this refined cotangent similarity measure of single valued neutrosophic set, an application on educational stream selection is presented.
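A cotangent-based measure of the kind described can be sketched as follows. The cot(π/4 + π·deviation/12) form is assumed for illustration; for a refined (multi-valued) set, the measure would additionally be averaged over the corresponding sub-membership triples:

```python
import math

def cotangent_similarity(A, B):
    """Cotangent-based similarity between two single valued neutrosophic
    sets, each a list of (T, I, F) triples in [0, 1]. The angle runs
    from pi/4 (identical, cot = 1) to pi/2 (maximally different,
    cot = 0), so the measure lies in [0, 1]."""
    s = 0.0
    for (ta, ia, fa), (tb, ib, fb) in zip(A, B):
        dev = abs(ta - tb) + abs(ia - ib) + abs(fa - fb)
        ang = math.pi / 4 + math.pi * dev / 12  # in [pi/4, pi/2]
        s += math.cos(ang) / math.sin(ang)      # cot(ang)
    return s / len(A)
```

For example, comparing an element with itself gives cot(π/4) = 1, while a fully true element against a fully indeterminate-and-false one gives cot(π/2) = 0.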

•

TL;DR: A novel parallel interacting MCMC scheme, called orthogonal MCMC (O-MCMC), where a set of “vertical” parallel MCMC chains share information using some “horizontal” MCMC techniques working on the entire population of current states, allowing an efficient combination of global exploration and local approximation.

Abstract: Monte Carlo (MC) methods are widely used in statistics, signal processing and machine learning. A well-known class of MC methods are Markov Chain Monte Carlo (MCMC) algorithms. In order to foster better exploration of the state space, especially in high-dimensional applications, several schemes employing multiple parallel MCMC chains have been recently introduced. In this work, we describe a novel parallel interacting MCMC scheme, called orthogonal MCMC (O-MCMC), where a set of “vertical” parallel MCMC chains share information using some “horizontal” MCMC techniques working on the entire population of current states. More specifically, the vertical chains are led by random-walk proposals, whereas the horizontal MCMC techniques employ independent proposals, thus allowing an efficient combination of global exploration and local approximation. The interaction is contained in these horizontal iterations. Within the analysis of different implementations of O-MCMC, novel schemes for reducing the overall computational cost of parallel multiple try Metropolis (MTM) chains are also presented. Furthermore, a modified version of O-MCMC for optimization is provided by considering parallel simulated annealing (SA) algorithms. We also discuss the application of O-MCMC in a big data framework. Numerical results show the advantages of the proposed sampling scheme in terms of efficiency in the estimation, as well as robustness in terms of independence with respect to initial values and parameter choice.
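The vertical/horizontal interplay can be illustrated with a toy one-dimensional sketch. The chain count, step sizes, and the population-fitted Gaussian used as the independent horizontal proposal are illustrative assumptions in the spirit of the scheme, not the authors' exact algorithm:

```python
import math
import random

def o_mcmc_sketch(log_target, n_chains=4, n_iters=2000, horiz_every=10,
                  step=0.5, seed=0):
    """Toy interacting parallel MCMC: vertical random-walk Metropolis
    moves per chain, plus an occasional horizontal step using an
    independent Gaussian proposal fitted to the current population of
    chain states. Returns all visited states (no burn-in removal)."""
    rng = random.Random(seed)
    states = [rng.gauss(0.0, 1.0) for _ in range(n_chains)]
    logp = [log_target(s) for s in states]
    samples = []
    for t in range(1, n_iters + 1):
        if t % horiz_every == 0:
            # horizontal step: independence sampler built from the
            # population mean/std, so chains share information
            mu = sum(states) / n_chains
            sd = math.sqrt(sum((s - mu) ** 2 for s in states) / n_chains) + 1e-6
            for k in range(n_chains):
                prop = rng.gauss(mu, sd)
                lp = log_target(prop)
                # independence-sampler acceptance ratio (proposal terms)
                log_q_prop = -0.5 * ((prop - mu) / sd) ** 2
                log_q_cur = -0.5 * ((states[k] - mu) / sd) ** 2
                if math.log(rng.random()) < (lp - logp[k]) + (log_q_cur - log_q_prop):
                    states[k], logp[k] = prop, lp
        else:
            # vertical step: plain random-walk Metropolis on each chain
            for k in range(n_chains):
                prop = states[k] + step * rng.gauss(0.0, 1.0)
                lp = log_target(prop)
                if math.log(rng.random()) < lp - logp[k]:
                    states[k], logp[k] = prop, lp
        samples.extend(states)
    return samples
```

Running it on a standard normal log-density (`lambda x: -0.5 * x * x`) yields draws whose mean and variance approach 0 and 1; the horizontal jumps let a chain stuck in one region borrow the population's global view.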

•

TL;DR: A novel content-based heterogeneous information retrieval framework, particularly well suited to browse medical databases and support new generation computer aided diagnosis (CADx) systems, is presented in this paper.

Abstract: A novel content-based heterogeneous information retrieval framework, particularly well suited to browse medical databases and support new generation computer aided diagnosis (CADx) systems, is presented in this paper. It was designed to retrieve possibly incomplete documents, consisting of several images and semantic information, from a database; more complex data types such as videos can also be included in the framework.

•

TL;DR: A generalized distance measure and its similarity measures between single valued neutrosophic multisets (SVNMs) are proposed and applied to a medical diagnosis problem with incomplete, indeterminate and inconsistent information.

Abstract: This paper proposes a generalized distance measure and its similarity measures between single valued neutrosophic multisets (SVNMs).
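A generalized (Minkowski-style) distance of the kind described can be sketched as follows; the normalization by 3 and by the multiplicity, and the equal-length handling of the multisets, are assumptions for illustration rather than the paper's exact definition:

```python
def svnm_distance(A, B, p=2):
    """Hypothetical generalized distance between two single valued
    neutrosophic multisets. A and B are lists of elements, each element
    an equal-length list of (T, I, F) triples; p = 1 gives a
    Hamming-style distance, p = 2 a Euclidean-style one. The result
    lies in [0, 1] for components in [0, 1]."""
    n = len(A)
    total = 0.0
    for triples_a, triples_b in zip(A, B):
        q = len(triples_a)  # multiplicity of this element
        elem = 0.0
        for (ta, ia, fa), (tb, ib, fb) in zip(triples_a, triples_b):
            elem += (abs(ta - tb) ** p + abs(ia - ib) ** p
                     + abs(fa - fb) ** p) / 3.0
        total += elem / q
    return (total / n) ** (1.0 / p)
```

A similarity measure then follows directly, e.g. as 1 minus the distance, which is the usual route from such distances to the similarity measures the abstract mentions.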

•

TL;DR: The Prize committee (claymath.org) requires publication in a worldwide reputable mathematics journal and at least two years of following scientific admiration, as discussed by the authors.

Abstract: There is a Prize committee (claymath.org) which requires publication in a worldwide reputable mathematics journal and at least two years of following scientific admiration. Why then has Grisha Perelman published only in a forum (arXiv), in a publication as unclear as a crazy sketch; yet the mummy's child ``Grisha'' has been forced to accept the Millennium Prize? Am I simply ugly or poor? If the following text is not accepted by the committee as pay-able proofs (but I hope for it), then let it at least build your confidence to refer to these conjectures and problems (which now have my answers) as achieved facts. I see no logical problems with all these plain facts; are you with me at last? It is your free choice to be a blind and discriminative ignorant or to be a better one. One can even ignore one's own breathing and, thus, die. One can ignore whatever one likes in this world. But it is not always recommended. Please respect my copyrights!

•

TL;DR: In this article, it was shown that the curvature of space-time is quantized, and that three-dimensional space is a consequence of the energy advantage in the formation of Planck black holes.

Abstract: We discuss the gravitational collapse of a photon. It is shown that when the photon reaches Planck energy, it turns into a black hole (as a result of interaction with the object to be measured). It is shown that three-dimensional space is a consequence of the energy advantage in the formation of the Planck black holes. New uncertainty relations are established on the basis of Einstein’s equations. It is shown that the curvature of space-time is quantized.

•

TL;DR: Rough neutrosophic multi-attribute decision making based on grey relational analysis is presented, and a numerical example is provided to illustrate the applicability and efficiency of the proposed approach.

Abstract: This paper presents rough neutrosophic multi-attribute decision making based on grey relational analysis. While the concept of neutrosophic sets is a powerful logic to deal with indeterminate and inconsistent data, the theory of rough neutrosophic sets is also a powerful mathematical tool to deal with incompleteness. The rating of all alternatives is expressed with the upper and lower approximation operators and the pair of neutrosophic sets which are characterized by truth-membership degree, indeterminacy-membership degree, and falsity-membership degree. The weight of each attribute is partially known to the decision maker. We extend the neutrosophic grey relational analysis method to the rough neutrosophic grey relational analysis method and apply it to the multi-attribute decision making problem. The information entropy method is used to obtain the partially known attribute weights. An accumulated geometric operator is defined to transform a rough neutrosophic number (neutrosophic pair) into a single valued neutrosophic number. The neutrosophic grey relational coefficient is determined by using the Hamming distance between each alternative and the ideal rough neutrosophic estimates reliability solution and the ideal rough neutrosophic estimates un-reliability solution. Then the rough neutrosophic relational degree is defined to determine the ranking order of all alternatives. Finally, a numerical example is provided to illustrate the applicability and efficiency of the proposed approach.
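The grey relational coefficient step can be sketched with the classical formula; the Hamming distance for neutrosophic triples and the distinguishing coefficient ρ = 0.5 follow common practice and are assumptions about the paper's exact setup:

```python
def svn_hamming(a, b):
    """Hamming distance between two single valued neutrosophic numbers
    a = (T, I, F) and b = (T, I, F), normalized to [0, 1]."""
    return sum(abs(x - y) for x, y in zip(a, b)) / 3.0

def grey_relational_coefficients(distances, rho=0.5):
    """Classical grey relational coefficients from the distances of each
    alternative to an ideal solution (e.g. svn_hamming distances to the
    ideal rating). rho is the distinguishing coefficient, conventionally
    0.5; smaller distance gives a larger coefficient, with 1 at the
    minimum distance."""
    d_min, d_max = min(distances), max(distances)
    return [(d_min + rho * d_max) / (d + rho * d_max) for d in distances]
```

Averaging each alternative's coefficients across attributes (possibly weighted by entropy-derived attribute weights, as the abstract describes) then yields the relational degree used for ranking.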

•

JSSATE Noida

TL;DR: The authors suggest NRDM and Rank Sets to solve imprecise queries based on Rank Neutrosophic search, which is a combination of Neutrosophic Proximity search and α-Neutrosophic-equality search.

Abstract: In this paper, we have introduced a new intelligent soft-computing method of neutrosophic search with ranks and new neutrosophic rank sets for the neutrosophic relational data model (NRDM).

••

TL;DR: Basic operations such as union, intersection, addition, multiplication, scalar multiplication, scalar division, truth-favorite and false-favorite are introduced on n-valued interval neutrosophic sets, and an efficient approach for group multi-criteria decision making is proposed.

Abstract: In this paper, a new concept called n-valued interval neutrosophic sets is given. Basic operations are introduced on n-valued interval neutrosophic sets, such as union, intersection, addition, multiplication, scalar multiplication, scalar division, truth-favorite and false-favorite. Then, some distances between n-valued interval neutrosophic sets (NVINS) are proposed. Also, we propose an efficient approach for group multi-criteria decision making based on n-valued interval neutrosophic sets. An application of n-valued interval neutrosophic sets to a medical diagnosis problem is given.
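One standard convention for the union and intersection of interval neutrosophic values (which would be applied componentwise across the n values of an n-valued set) can be sketched as follows; treat the particular min/max choices as an assumption for illustration:

```python
def ivn_union(a, b):
    """Union of two interval neutrosophic values, each given as
    ((T_low, T_up), (I_low, I_up), (F_low, F_up)): truth intervals are
    combined endpoint-wise by max, indeterminacy and falsity by min
    (one common convention in this literature)."""
    (ta, ia, fa), (tb, ib, fb) = a, b
    return ((max(ta[0], tb[0]), max(ta[1], tb[1])),
            (min(ia[0], ib[0]), min(ia[1], ib[1])),
            (min(fa[0], fb[0]), min(fa[1], fb[1])))

def ivn_intersection(a, b):
    """Intersection: the dual operation, with min on truth and max on
    indeterminacy and falsity."""
    (ta, ia, fa), (tb, ib, fb) = a, b
    return ((min(ta[0], tb[0]), min(ta[1], tb[1])),
            (max(ia[0], ib[0]), max(ia[1], ib[1])),
            (max(fa[0], fb[0]), max(fa[1], fb[1])))
```

Union and intersection are dual here: the union keeps the more "true" and less "indeterminate/false" interval endpoints, the intersection the opposite.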

•

TL;DR: In this article, the authors proposed a new nuclear generator which allows converting any matter to nuclear energy in accordance with the Einstein equation E=mc2; the method is based upon tapping the energy potential of a Micro Black Hole (MBH) and the Hawking radiation created by this MBH.

Abstract: The author offers a new nuclear generator which allows converting any matter to nuclear energy in accordance with the Einstein equation E=mc2. The method is based upon tapping the energy potential of a Micro Black Hole (MBH) and the Hawking radiation created by this MBH. As is well-known, the vacuum continuously produces virtual pairs of particles and antiparticles, in particular, the photons and anti-photons. The MBH event horizon allows separating them. Anti-photons can be moved to the MBH and be annihilated, decreasing the mass of the MBH; the resulting photons leave the MBH neighborhood as Hawking radiation. The offered nuclear generator (named by the author the AB-Generator) utilizes the Hawking radiation, injects the matter into the MBH, and keeps the MBH in a stable state with near-constant mass. The AB-Generator can not only produce gigantic energy outputs but should be hundreds of times cheaper than conventional electric generation processes. The AB-Generator can be used in aerospace as a photon rocket or as a power source for numerous space vehicles. Many scientists expect the Large Hadron Collider at CERN will produce one MBH every second, and the technology to capture them may be used for the AB-Generator. Key words: production of nuclear energy, Micro Black Hole, energy AB-Generator, photon rocket. * Presented as Paper AIAA-2009-5342 at the 45th Joint Propulsion Conference, 2–5 August 2009, Denver, CO, USA.

••

TL;DR: A quality clay-brick selection approach based on multi-attribute decision-making with single valued neutrosophic grey relational analysis is presented.

Abstract: The purpose of this paper is to present a quality clay-brick selection approach based on multi-attribute decision-making with single valued neutrosophic grey relational analysis. Brick plays a significant role in the construction field, so it is important to select quality clay-bricks for construction based on a suitable mathematical decision making tool.

•

TL;DR: The single valued triangular neutrosophic number (SVTrN-number) as discussed by the authors is a generalization of triangular fuzzy numbers and triangular intuitionistic fuzzy numbers, which is defined from a philosophical point of view.

Abstract: The single valued triangular neutrosophic number (SVTrN-number) is simply an ordinary number whose precise value is somewhat uncertain from a philosophical point of view, which is a generalization of triangular fuzzy numbers and triangular intuitionistic fuzzy numbers.

•

TL;DR: In this paper, the condition for the maximum of Deng entropy has been discussed and proved, which is useful for the application of Deng entropy in the context of evidence theory.

Abstract: Dempster-Shafer evidence theory has been widely used in many applications due to its advantages in handling uncertainty. Deng entropy has been proposed to measure the uncertainty degree of a basic probability assignment (BPA) in evidence theory. It is a generalization of Shannon entropy: when the BPA degenerates to a probability distribution, Deng entropy is identical to Shannon entropy. However, the maximal value of Deng entropy has not been discussed until now. In this paper, the condition for the maximum of Deng entropy is discussed and proved, which is useful for the application of Deng entropy.
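Deng entropy and its maximizing BPA can be sketched directly. The definition E_d(m) = -Σ_A m(A) log2(m(A) / (2^|A| - 1)) and the maximum attained when m(A) is proportional to 2^|A| - 1 match the commonly cited formulation and are assumed here:

```python
import math
from itertools import combinations

def deng_entropy(bpa):
    """Deng entropy of a basic probability assignment. bpa maps focal
    elements (frozensets) to masses summing to 1. For singleton-only
    BPAs, 2^|A| - 1 = 1 and this reduces to Shannon entropy."""
    e = 0.0
    for focal, mass in bpa.items():
        if mass > 0:
            e -= mass * math.log2(mass / (2 ** len(focal) - 1))
    return e

def max_deng_bpa(frame):
    """BPA attaining the maximum of Deng entropy on a frame of
    discernment: mass of each nonempty subset A proportional to
    2^|A| - 1 (the condition discussed in the paper)."""
    subsets = [frozenset(c) for r in range(1, len(frame) + 1)
               for c in combinations(frame, r)]
    weights = {s: 2 ** len(s) - 1 for s in subsets}
    total = sum(weights.values())
    return {s: w / total for s, w in weights.items()}
```

On a two-element frame the maximizing BPA puts masses 1/5, 1/5, 3/5 on {a}, {b}, {a,b}, giving the maximal entropy log2(5), while the uniform singleton BPA gives only the Shannon value 1 bit.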