
Showing papers presented at "Foundations of Computational Intelligence in 2011"


Proceedings Article
01 Jan 2011
TL;DR: Decoy routing is presented, a mechanism capable of circumventing common network filtering strategies by placing the circumvention service in the network itself – where a single device could proxy traffic between a significant fraction of hosts – instead of at the edge.
Abstract: We present decoy routing, a mechanism capable of circumventing common network filtering strategies. Unlike other circumvention techniques, decoy routing does not require a client to connect to a specific IP address (which is easily blocked) in order to provide circumvention. We show that if it is possible for a client to connect to any unblocked host/service, then decoy routing could be used to connect them to a blocked destination without cooperation from the host. This is accomplished by placing the circumvention service in the network itself – where a single device could proxy traffic between a significant fraction of hosts – instead of at the edge.
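The core mechanism lends itself to a short sketch. Below is a minimal illustration of the signalling idea only (a client covertly tags an innocuous flow so an on-path router knows to proxy it); the key name, tag placement, and all identifiers are our own assumptions, not the paper's wire protocol.

```python
# Toy sketch of the decoy-routing signalling idea (not the paper's wire
# protocol): the client hides an HMAC tag in an innocuous-looking field;
# an on-path router checks every flow and, on a match, proxies onward.
import hmac, hashlib, os

ROUTER_KEY = b"shared-secret-known-to-decoy-router"  # hypothetical key

def client_tag(nonce: bytes) -> bytes:
    """Covert tag the client embeds (e.g., in a TLS random field)."""
    return hmac.new(ROUTER_KEY, nonce, hashlib.sha256).digest()[:16]

def router_sees_flow(nonce: bytes, tag: bytes) -> str:
    """On-path check: ordinary traffic passes; tagged flows get proxied."""
    expected = hmac.new(ROUTER_KEY, nonce, hashlib.sha256).digest()[:16]
    if hmac.compare_digest(expected, tag):
        return "proxy to covert (blocked) destination"
    return "forward to overt destination unchanged"

nonce = os.urandom(32)
print(router_sees_flow(nonce, client_tag(nonce)))   # tagged -> proxied
print(router_sees_flow(nonce, os.urandom(16)))      # untagged -> passed
```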

109 citations


Proceedings Article
01 Jan 2011
TL;DR: The design and implementation of a web censorship monitor, called CensMon, which can successfully detect censored content and spot the filtering technique used by the censor, is presented.
Abstract: The Internet has traditionally been the most free medium for publishing and accessing information. It is also quickly becoming the dominant medium for quick and easy access to news. It is therefore not surprising that there are significant efforts to censor certain news articles or even entire web sites. For this reason, it is paramount to try to detect what is censored and by whom. In this paper we present the design and implementation of a web censorship monitor, called CensMon. CensMon is distributed in nature, operates automatically and does not rely on Internet users to report censored web sites, can differentiate access network failures from possible censorship, and uses multiple input streams to determine what kind of censored data to look for. Our evaluation shows that CensMon can successfully detect censored content and spot the filtering technique used by the censor.
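The classification step such a monitor performs can be sketched compactly. The following single-vantage probe is our own illustration, not CensMon's code: the real system is distributed and uses richer signals, and naive digest comparison misfires on dynamic pages.

```python
# Single-vantage sketch of the classification step a monitor like CensMon
# performs: distinguish DNS failure, connection-level interference, and
# content-level blocking. All names and thresholds here are ours.
import hashlib, socket, urllib.error, urllib.request
from typing import Optional

def probe(url: str, control_digest: Optional[str] = None) -> str:
    try:
        body = urllib.request.urlopen(url, timeout=10).read()
    except urllib.error.HTTPError:
        return "http-error (possible block page via status code)"
    except urllib.error.URLError as e:
        if isinstance(e.reason, socket.gaierror):
            return "dns-failure (tampering or outage)"
        if isinstance(e.reason, ConnectionResetError):
            return "tcp-reset (in-path filtering suspected)"
        return f"network-failure ({e.reason})"
    digest = hashlib.sha256(body).hexdigest()
    if control_digest and digest != control_digest:
        return "content-mismatch vs. control (possible block page)"
    return "accessible"

print(probe("http://example.com/"))
```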

77 citations


Proceedings Article
01 Jan 2011
TL;DR: A technical analysis of DNS error traffic monetization evident in 66,000 Netalyzr sessions is conducted, including fingerprinting derived from patterns seen in the resulting ad landing pages.
Abstract: Internet Service Providers (ISPs) increasingly try to grow their profit margins by employing “error traffic monetization,” the practice of redirecting customers whose DNS lookups fail to advertisement-oriented Web servers. A small industry of companies provides the associated machinery for ISPs to engage in this monetization, with the companies often participating in operating the service as well. We conduct a technical analysis of DNS error traffic monetization evident in 66,000 Netalyzr sessions, including fingerprinting derived from patterns seen in the resulting ad landing pages. We identify major players in this industry, their ISP affiliations over time, and available user opt-out mechanisms. One monetization vendor, Paxfire, transgresses the error-based model and also reroutes all user search queries to Bing, Yahoo, and (sometimes) Google via proxy servers controlled or provided by Paxfire.
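A minimal version of the underlying check is easy to sketch: resolve a name that cannot exist and see whether the resolver answers anyway. This is in the spirit of the Netalyzr test rather than its actual code; the probe-name construction is ours.

```python
# Minimal NXDOMAIN-rewriting probe: an honest resolver must fail to resolve
# a random nonexistent name, while a monetizing resolver returns the address
# of an ad server instead of NXDOMAIN.
import random, socket, string

def nxdomain_rewritten() -> bool:
    label = "".join(random.choices(string.ascii_lowercase, k=24))
    probe_name = f"{label}.com"  # 24 random letters: almost surely unregistered
    try:
        addr = socket.gethostbyname(probe_name)
    except socket.gaierror:
        return False  # correct NXDOMAIN behaviour
    print(f"{probe_name} resolved to {addr}: likely error-traffic monetization")
    return True

if __name__ == "__main__":
    print("rewriting detected:", nxdomain_rewritten())
```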

68 citations


Proceedings Article
01 Jan 2011
TL;DR: An empirical analysis of TOM-Skype censorship and surveillance is presented and five conjectures are presented that are believed to be formal enough to be hypotheses that the Internet censorship research community could potentially answer with more data and appropriate computational and analytic techniques.
Abstract: We present an empirical analysis of TOM-Skype censorship and surveillance. TOM-Skype is an Internet telephony and chat program that is a joint venture between TOM Online (a mobile Internet company in China) and Skype Limited. TOM-Skype contains both voice-over-IP functionality and a chat client. The censorship and surveillance that we studied for this paper is specific to the chat client and is based on keywords that a user might type into a chat session. We were able to decrypt keyword lists used for censorship and surveillance. We also tracked the lists for a period of time and witnessed changes. Censored keywords range from obscene references, such as the Chinese phrase for "two girls one cup" (the motivation for our title), to specific passages from 2011 China Jasmine Revolution protest instructions, such as the phrase for "McDonald's in front of Chunxi Road in Chengdu". Surveillance keywords are mostly related to demolitions in Beijing, such as "Ling Jing Alley demolition". Based on this data, we present five conjectures that we believe to be formal enough to be hypotheses that the Internet censorship research community could potentially answer with more data and appropriate computational and analytic techniques.

51 citations


Proceedings Article
01 Jan 2011
TL;DR: The problem of mapping internet filtering, or censorship, at a finer-grained level than the national one is examined, in the belief that users in different areas of a country, or users accessing the internet through different providers or services, may experience differences in the filtering applied to their internet connectivity.
Abstract: We examine the problem of mapping internet filtering, or censorship, at a finer-grained level than the national one, in the belief that users in different areas of a country, or users accessing the internet through different providers or services, may experience differences in the filtering applied to their internet connectivity. In investigating this possibility, we briefly consider services that may be used by researchers to experience a remote computer's view of the internet. More importantly, we seek to stimulate discussion concerning the potentially serious legal and ethical concerns that are intrinsic to this form of research.

31 citations


Proceedings Article
01 Jan 2011
TL;DR: Cloud-based Onion Routing (COR) is described, which builds onion-routed tunnels over multiple anonymity service providers and through multiple cloud hosting providers, dividing trust while forcing censors to incur large collateral damage.
Abstract: Internet censorship and surveillance have made anonymity tools increasingly critical for free and open Internet access. Tor, and its associated ecosystem of volunteer traffic relays, provides one of the most secure and widely-available means for achieving Internet anonymity today. Unfortunately, Tor has limitations, including poor performance, inadequate capacity, and a susceptibility to wholesale blocking. Rather than utilizing a large number of volunteers (as Tor does), we propose moving onion-routing services to the “cloud” to leverage the large capacities, robust connectivity, and economies of scale inherent to commercial datacenters. This paper describes Cloud-based Onion Routing (COR), which builds onion-routed tunnels over multiple anonymity service providers and through multiple cloud hosting providers, dividing trust while forcing censors to incur large collateral damage. We discuss the new security policies and mechanisms needed for such a provider-based ecosystem, and present some preliminary benchmarks. At today’s prices, a user could gain fast, anonymous network access through COR for only pennies per day.

19 citations


Proceedings ArticleDOI
11 Apr 2011
TL;DR: Semi-fuzzy quantifiers are shown to be a useful tool for modeling linguistic summaries from data in two respects: they provide a systematic mechanism for performing data summarization with fuzzy quantifiers, and they can be used for the detection of quantified patterns in data.
Abstract: In this paper we discuss how semi-fuzzy quantifiers are a useful tool for modeling linguistic summaries from data in two respects: they provide a systematic mechanism for performing the data summarization task with fuzzy quantifiers different from the usual unary and binary ones, and they can be used for the detection of quantified patterns in data.
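To make the distinction concrete, here is a toy sketch: a semi-fuzzy quantifier takes crisp argument sets and returns a truth degree, and a quantifier fuzzification mechanism (not shown) would lift it to fuzzy arguments. The piecewise-linear definition of "most" is our own illustration, not the paper's.

```python
# Sketch of a semi-fuzzy quantifier: it maps *crisp* argument sets to a truth
# degree in [0, 1]; a quantifier fuzzification mechanism (QFM) would then
# lift it to fuzzy arguments. Definitions below are illustrative only.
def most(proportion: float) -> float:
    """Semi-fuzzy 'most': fully true above 80%, fully false below 50%."""
    return min(1.0, max(0.0, (proportion - 0.5) / 0.3))

def eval_most(restriction: set, scope: set) -> float:
    """Degree of 'most elements of `restriction` are in `scope`'."""
    if not restriction:
        return 1.0  # vacuously true, by convention
    return most(len(restriction & scope) / len(restriction))

employees = {"ana", "bo", "chen", "dee", "eli"}
well_paid = {"ana", "chen", "dee"}
# 3 of 5 well paid -> proportion 0.6 -> degree of 'most' is about 0.33
print(eval_most(employees, well_paid))
```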

18 citations


Proceedings ArticleDOI
11 Apr 2011
TL;DR: It is shown how bistable neural pools can perform tasks such as binary and stack-like counting, and how they can realize hierarchical organization in parallel computing.
Abstract: Information can be encoded in a spiking neural network (SNN) by precise spike-time relations. This hypothesis can explain cell assembly formation, such as the polychronous group (PNG), a notion created to explain how groups of neurons fire time-locked to each other, not necessarily synchronously. In this paper we present a set of PNGs capable of retaining triggering events in bistable states. Triggering events may be data or computational controls. Both data and control signals are memorized as a result of intrinsic operational PNG attributes, and no neural plasticity mechanisms are involved. This behavior can be fundamental for several computational operations in SNNs. It is shown how bistable neural pools can perform tasks such as binary and stack-like counting, and how they can realize hierarchical organization in parallel computing.

14 citations


Proceedings ArticleDOI
11 Apr 2011
TL;DR: A phase transition phenomenon is found in the complexity of random instances of the Traveling Salesman Problem under the 2-exchange neighbor system using the two descriptors of complexity proposed.
Abstract: This work is related to the search for complexity measures for instances of combinatorial optimization problems. In particular, we have carried out a study of the complexity of random instances of the Traveling Salesman Problem under the 2-exchange neighbor system. We have proposed two descriptors of complexity: the proportion of the size of the basin of attraction of the global optimum over the size of the search space, and the proportion of the number of different local optima over the size of the search space. We have analyzed the evolution of these descriptors as the size of the problem grows. After that, and using our complexity measures, we find a phase transition phenomenon in the complexity of the instances.
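Both descriptors can be computed exhaustively on toy instances, which the sketch below does for a random 6-city instance: run best-improvement local search from every tour under a 2-exchange (city-swap) neighborhood, then take the two proportions. Details such as the tie-handling for symmetric optimal tours are our own choices.

```python
# Exhaustive sketch of the two complexity descriptors on a tiny random TSP
# instance: best-improvement local search under a 2-exchange (swap two
# cities) neighborhood from every tour, then basin/optima proportions.
import itertools, random

n = 6
random.seed(1)
dist = [[0] * n for _ in range(n)]
for i in range(n):
    for j in range(i + 1, n):
        dist[i][j] = dist[j][i] = random.randint(1, 100)

def length(tour):
    return sum(dist[tour[k]][tour[(k + 1) % n]] for k in range(n))

def best_neighbor(tour):
    best = tour
    for i in range(n):
        for j in range(i + 1, n):
            t = list(tour); t[i], t[j] = t[j], t[i]
            if length(tuple(t)) < length(best):
                best = tuple(t)
    return best

space = [tuple(p) for p in itertools.permutations(range(n))]
attractor = {}
for start in space:
    t = start
    while (nb := best_neighbor(t)) != t:   # hill-climb to a local optimum
        t = nb
    attractor[start] = t

optima = set(attractor.values())
global_opt = min(optima, key=length)
# Convention: tours tied with the global optimum length count as its basin
# (rotations/reflections of the optimal tour have equal length).
basin = sum(1 for v in attractor.values() if length(v) == length(global_opt))
print("basin(global)/|space| =", basin / len(space))
print("|local optima|/|space| =", len(optima) / len(space))
```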

13 citations


Proceedings ArticleDOI
11 Apr 2011
TL;DR: A new method for decision making based on the use of probabilities, weighted averages and ordered weighted averaging (OWA) operators is developed, thus representing the subjective and the objective information and the attitudinal character in a more complete way.
Abstract: We develop a new method for decision making based on the use of probabilities, weighted averages and ordered weighted averaging (OWA) operators. We analyze a method that is able to deal with several aggregation structures, thus obtaining a more general formulation that represents the information in a more complete way. We introduce a new aggregation operator that aggregates a wide range of other aggregation operators. Therefore, we can include in the same formulation a wide range of concepts and represent how relevant they are in the aggregation. We call it the unified aggregation operator. By using this aggregation operator we can deal with a wide range of complex structures; for example, we can aggregate in a decision making problem several structures of probabilities, weighted averages and OWA operators. Thus, the information we provide is more complete, because in real world problems the information comes from different sources and this needs to be considered in the aggregation process. We study the applicability of this new approach and we see that it is very broad, because real world problems are better assessed with this new model. We focus on a multi-person decision making example where we use several structures of probabilities, weighted averages and OWA operators, thus representing the subjective and the objective information and the attitudinal character in a more complete way.
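As a rough illustration of mixing the three structures, the sketch below combines a probabilistic part, a weighted-average part and an OWA part by a convex combination; the paper's unified aggregation operator is more general, and the mixing weights here are invented for the example.

```python
# Hedged sketch of aggregating probabilities, weighted averages and OWA in
# one operator via a convex combination; the paper's "unified aggregation
# operator" is more general, and the weights c below are our own.
def owa(weights, values):
    """Ordered weighted average: weights apply to values sorted descending."""
    return sum(w * v for w, v in zip(weights, sorted(values, reverse=True)))

def unified(values, probs, wa_weights, owa_weights, c=(1/3, 1/3, 1/3)):
    prob_part = sum(p * v for p, v in zip(probs, values))
    wa_part = sum(w * v for w, v in zip(wa_weights, values))
    owa_part = owa(owa_weights, values)
    return c[0] * prob_part + c[1] * wa_part + c[2] * owa_part

payoffs = [60, 30, 50]                    # payoffs of one alternative per state
print(unified(payoffs,
              probs=[0.5, 0.3, 0.2],         # objective information
              wa_weights=[0.2, 0.5, 0.3],    # subjective importance
              owa_weights=[0.1, 0.3, 0.6]))  # pessimistic attitude -> ~43.33
```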

10 citations


Proceedings Article
01 Jan 2011
TL;DR: A named entity extraction framework that can extract the names of people, places, and organizations from text such as a news story is presented; a maximum entropy approach is used because of its flexibility.
Abstract: Tracking Internet censorship is challenging because what content the censors target can change daily, even hourly, with current events. The process must be automated because of the large amount of data that needs to be processed. Our focus in this paper is on automated probing of keyword-based Internet censorship, where natural language processing techniques are used to generate keywords to probe for censorship with. In this paper we present a named entity extraction framework that can extract the names of people, places, and organizations from text such as a news story. Previous efforts to automate the study of keyword-based Internet censorship have been based on semantic analysis of existing bodies of text, such as Wikipedia, and so could not extract meaningful keywords from the news to probe with. We have used a maximum entropy approach for named entity extraction, because of its flexibility. Our preliminary results suggest that this approach gives good results with only a rudimentary understanding of the target language. This means that the approach is very flexible, and while our current implementation is for Chinese we anticipate that extending the framework to other languages such as Arabic, Farsi, and Spanish will be straightforward because of the maximum entropy approach. In this paper we present some testing results as well as some preliminary results from probing China’s GET request censorship and search engine filtering using this framework.
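Since multinomial logistic regression is the standard maximum entropy classifier, the idea can be sketched with off-the-shelf tools. The features, toy training sentences and labels below are ours, not the paper's; a real system would train on annotated news text in the target language.

```python
# Toy maximum-entropy tagger in the spirit described (logistic regression is
# the standard maxent model); features, data and labels here are invented.
from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression

def feats(tokens, i):
    return {"w": tokens[i],
            "prev": tokens[i - 1] if i else "<s>",
            "next": tokens[i + 1] if i < len(tokens) - 1 else "</s>",
            "cap": tokens[i][0].isupper()}

train = [(["Protests", "in", "Cairo", "spread"], ["O", "O", "LOC", "O"]),
         (["Mubarak", "spoke", "in", "Cairo"], ["PER", "O", "O", "LOC"]),
         (["Clinton", "visited", "Beijing"], ["PER", "O", "LOC"])]

X, y = [], []
for toks, tags in train:
    for i, tag in enumerate(tags):
        X.append(feats(toks, i)); y.append(tag)

vec = DictVectorizer()
clf = LogisticRegression(max_iter=1000).fit(vec.fit_transform(X), y)

test = ["Riots", "hit", "Beijing"]
print([clf.predict(vec.transform([feats(test, i)]))[0]
       for i in range(len(test))])  # extracted entities become probe keywords
```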

Proceedings ArticleDOI
11 Apr 2011
TL;DR: This paper points out some fruitful cross-fertilizations between the possibilistic representation of information and several views of granulation emphasizing the idea of clusters of points that can be identified respectively on the basis of their closeness, or of their common labeling in terms of properties.
Abstract: Two important ideas at the core of Zadeh's seminal contributions to fuzzy logic and approximate reasoning are the notions of granulation and of possibilistic uncertainty. In this paper, elaborating on the basis of some formal analogy, recently made by the authors, between possibility theory and formal concept analysis, we suggest other bridges between theories for which the concept of granulation is central. We highlight the common features between the notion of extensional fuzzy set with respect to a similarity relation and the notion of concept. We also discuss the case of fuzzy rough sets. Thus, we point out some fruitful cross-fertilizations between the possibilistic representation of information and several views of granulation emphasizing the idea of clusters of points that can be identified respectively on the basis of their closeness, or of their common labeling in terms of properties.

Proceedings Article
Wendy Seltzer
01 Jan 2011
TL;DR: The history of copyright enforcement measures and counter-measures can be instructive in the Internet freedom arena, and provides a domestic analog and preview of Internet censorship in other contexts.
Abstract: U.S. policymakers proclaim their commitment to Internet freedom while simultaneously endorsing restrictions on Internet exchange. Unfortunately, the tools – legal and technical – built to block copyright infringement, counterfeit sales, online gambling, or indecency often find use to censor lawful expression here and abroad. In particular, the United States and its entertainment industries have prioritized online copyright enforcement such that its attack and riposte can be instructive in the Internet freedom arena.

1 Copyright as Information-Control

The United States Internet is largely free from government-mandated censorship. The 1997 ACLU v. Reno decision set an early bar, striking as unconstitutional provisions of the Communications Decency Act that would have required Internet Service Providers to block children's access to materials deemed "harmful to minors."[2] The First Amendment, the Supreme Court held, forbade these restrictions on speech. While parents in their homes (and later, libraries and schools operating with federal funds) might filter their children's Internet connections, a law mandating ISP-controlled blocking was not "narrowly tailored" to government purposes. Copyright, however, stands as one of the rare permissible restrictions on speech. As the Court said in Eldred v. Ashcroft, copyright is an "engine of free expression," and therefore, "The First Amendment securely protects the freedom to make – or decline to make – one's own speech; it bears less heavily when speakers assert the right to make other people's speeches." [8] While numerous scholars [16, 22, 21, 26] and litigants [10, 12] have criticized copyright's seeming free pass from First Amendment scrutiny, its anomalous information-control has persisted. In response, technologists and hackers have joined the academic and legal critics of copyright. The history of copyright enforcement measures and counter-measures thus provides a domestic analog and preview of Internet censorship in other contexts.

1.1 Squeezing Filesharing

Online copyright debates took hold in the mid 1990s, as Internet connectivity spread, "rippers" and MP3 compression enabled the public to extract and save digital tracks from music CDs, and sites arose to help people exchange music. Early music-sharers operated through central servers, depositing files and retrieving others from BBSs, FTP servers, and websites. Beyond simple file-exchange, My.MP3.com recognized CDs from a user's drive and transferred copies of their tracks to an online virtual "locker." As all of these methods involved copying, the music studios successfully argued that the unauthorized reproductions of their copyrighted works infringed copyright. [7, 5] Centralized architecture made these early sites easy to find and squash. Napster claimed both technical and legal innovation when it was released in 1999. The peer-to-peer software distributed the burdens of file storage and the sharing activity, directing peer users to transfer files to one another so Napster itself never copied the files. Yet the Ninth Circuit found that architecture insufficient to avoid copyright liability. Because the company maintained a central directory of files and routing information, its owners were liable for contributory and vicarious infringement of copyright: Napster knowingly materially contributed to infringement, and it profited from infringing activity it had the right or ability to control. [6]

The next generation of peer-to-peer software decentralized further still: Morpheus, KaZaA, and Grokster moved the directory and routing information to supernodes nominated from among peer computers, requiring only a bootstrap download to join the network. After the Ninth Circuit found this architecture escaped Napster's

Proceedings ArticleDOI
11 Apr 2011
TL;DR: The well known game Iterated Prisoner's Dilemma is examined as a test case for a new algorithm of genetic search known as Multiple Agent Genetic Networks (MAGnet), which facilitates the movement of not just the agents, but also the problem instances which a population of agents is working to solve in parallel.
Abstract: The well known game Iterated Prisoner's Dilemma (IPD) is examined as a test case for a new algorithm of genetic search known as Multiple Agent Genetic Networks (MAGnet). MAGnet facilitates the movement of not just the agents, but also the problem instances which a population of agents is working to solve in parallel. This allows for simultaneous classification of problem instances and search for solutions to those problems. As this is an initial study, there is a focus on the ability of MAGnet to classify problem instances of IPD playing agents. A problem instance of IPD is a single opponent. A good classification method for IPD, called fingerprinting, exists and allows for verification of the comparison. Results found by MAGnet are shown to be logical classifications of the problems based upon player strategy. A subpopulation collapse effect is shown which allows the location of both difficult problem instances and the existence of general solutions to a problem.

Proceedings ArticleDOI
11 Apr 2011
TL;DR: To help compare intervals according to different relevant order relations, a general comparison index is proposed and some of its properties are illustrated.
Abstract: The comparison of intervals, following several order relations, is relevant in interval linear programming methods for solving many real-life problems. The determination of coefficients as crisp values is practically impossible in reality, where data sources are often uncertain, vague and incomplete. The uncertainty can be modelled with coefficients that vary in intervals, making possible a decision-making process of the kind common in the soft sciences, such as social science, economics, finance and management studies. To help compare intervals according to different relevant order relations, a general comparison index is proposed and some of its properties are illustrated.
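One way such a graded comparison can look is sketched below: intervals are compared through their midpoints and spreads, giving a degree for "A is acceptably smaller than B". This particular index is an illustration and may differ from the index proposed in the paper.

```python
# Sketch of comparing intervals A = [lo, hi] by midpoint and spread, plus an
# acceptability-style index grading "A is smaller than B"; the paper's
# general comparison index may differ from this illustration.
from dataclasses import dataclass

@dataclass(frozen=True)
class Interval:
    lo: float
    hi: float
    @property
    def mid(self):    return (self.lo + self.hi) / 2
    @property
    def spread(self): return (self.hi - self.lo) / 2   # half-width

def acceptability(a: Interval, b: Interval) -> float:
    """Degree to which a < b holds: 0 if midpoints tie, >= 1 if disjoint."""
    if a.spread + b.spread == 0:
        return float("inf") if b.mid > a.mid else 0.0
    return (b.mid - a.mid) / (a.spread + b.spread)

cost_plan1 = Interval(10, 14)   # uncertain cost coefficient
cost_plan2 = Interval(13, 19)
print(acceptability(cost_plan1, cost_plan2))  # 0.8: plan1 acceptably cheaper
```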

Proceedings ArticleDOI
11 Apr 2011
TL;DR: This paper focuses on the estimation of the generalization error of a classifier through a test set, surveying and comparing old and new techniques in terms of quality of the estimation, ease of use, and rigor of the approach.
Abstract: In this paper, we focus on one of the oldest problems in pattern recognition and machine learning: the estimation of the generalization error of a classifier through a test set. Although this problem has been addressed for several decades, the last word has not yet been written, as new proposals continue to appear in the literature. Our objective is to survey and compare old and new techniques in terms of quality of the estimation, ease of use, and rigor of the approach.
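The baseline such a survey starts from is easy to state in code: the empirical test error plus a distribution-free confidence bound. The sketch below uses a one-sided Hoeffding bound; the choice of bound and the numbers are ours.

```python
# Sketch of the classical estimate: empirical test error plus a
# distribution-free one-sided Hoeffding upper bound at level 1 - delta.
import math

def test_error_bound(n_errors: int, n_test: int, delta: float = 0.05):
    """Empirical error and a (1 - delta) one-sided Hoeffding upper bound."""
    emp = n_errors / n_test
    slack = math.sqrt(math.log(1 / delta) / (2 * n_test))
    return emp, min(1.0, emp + slack)

emp, upper = test_error_bound(n_errors=23, n_test=500)
print(f"empirical={emp:.3f}, 95% upper bound={upper:.3f}")
# With 500 held-out examples the bound sits about 0.055 above the empirical
# error, which is why test-set size matters as much as the estimator.
```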

Proceedings ArticleDOI
11 Apr 2011
TL;DR: This paper proposes a fast routing algorithm based on Hopfield Neural Networks (HNN) for GPU, considering some implementation issues, and analyzes the memory bottlenecks, the complexity of the HNN and how the kernel functions should be implemented.
Abstract: Although some interesting routing algorithms based on Hopfield Neural Networks (HNN) have already been proposed, they are slower than other routing algorithms. Since HNN are inherently parallel, they are suitable for parallel implementations, such as on Graphics Processing Units (GPU). In this paper we propose a fast HNN-based routing algorithm for GPU, considering some implementation issues. We analyzed the memory bottlenecks, the complexity of the HNN and how the kernel functions should be implemented. We performed simulations for five different variations of the routing algorithm on two communication network topologies. We achieved speed-ups of up to 55 compared to the simplest version implemented on GPU and up to 40 compared to the CPU version. These new results suggest that it is possible to use the HNN for routing in real networks.
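The reason HNN routing maps well to GPUs is that each update step is dominated by a dense matrix-vector product. The generic continuous-Hopfield iteration below shows that structure; the routing-specific energy coefficients (link costs, path constraints) from the paper are omitted, and W and b are random placeholders.

```python
# Generic synchronous Hopfield iteration in vectorized form: the core cost
# is a dense matrix-vector product, which is what a GPU kernel parallelizes.
# W and b are placeholders; a routing HNN would derive them from link costs
# and path constraints.
import numpy as np

def hopfield_run(W, b, steps=200, tau=0.1):
    """Continuous Hopfield descent: du/dt = -u + W v + b, v = sigmoid(u)."""
    rng = np.random.default_rng(0)
    u = rng.normal(scale=0.1, size=b.shape)
    for _ in range(steps):
        v = 1.0 / (1.0 + np.exp(-u))   # neuron outputs in (0, 1)
        u += tau * (-u + W @ v + b)    # one Euler step per iteration
    return v

n = 16                                 # e.g., one neuron per candidate arc
rng = np.random.default_rng(1)
W = rng.normal(size=(n, n)); W = (W + W.T) / 2; np.fill_diagonal(W, 0.0)
b = rng.normal(size=n)
print(np.round(hopfield_run(W, b), 2))
```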

Proceedings ArticleDOI
11 Apr 2011
TL;DR: Merging Fuzzy ART (MFuART) generates the number of clusters automatically, and with good cluster quality, through the use of an over-clustering and merging mechanism.
Abstract: Real-world datasets usually involve class overlap. It has been observed that, in general, the performance of clustering algorithms degrades with the increasing degree of overlap. The main challenges in clustering overlapping data are the determination of the appropriate number of clusters and the division of the overlapping region. This paper proposes a novel method based on Fuzzy ART clustering that handles overlapping data without demanding the number of clusters a priori. Through the use of an over-clustering and merging mechanism, Merging Fuzzy ART (MFuART) generates the number of clusters automatically and with good cluster quality.
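For orientation, here is a minimal Fuzzy ART pass (complement coding, choice function, vigilance test, fast learning); MFuART's over-cluster-then-merge step is only indicated in a comment, and all parameter values are our own.

```python
# Minimal Fuzzy ART sketch: complement coding, choice function, vigilance
# test, fast learning. MFuART's contribution (over-cluster then merge) is
# only hinted at; alpha, rho and beta below are our own choices.
import numpy as np

alpha, rho, beta = 0.001, 0.75, 1.0      # choice, vigilance, learning rate

def complement_code(x):
    return np.concatenate([x, 1.0 - x])

def fuzzy_art(samples):
    weights, labels = [], []              # one prototype per category
    for x in samples:
        I = complement_code(x)
        order = sorted(range(len(weights)),
                       key=lambda j: -np.minimum(I, weights[j]).sum()
                                      / (alpha + weights[j].sum()))
        for j in order:
            match = np.minimum(I, weights[j]).sum() / I.sum()
            if match >= rho:              # vigilance passed: resonate, learn
                weights[j] = beta * np.minimum(I, weights[j]) \
                             + (1 - beta) * weights[j]
                labels.append(j)
                break
        else:                             # no category fits: create a new one
            weights.append(I.copy())
            labels.append(len(weights) - 1)
    # MFuART would now merge categories whose prototypes overlap strongly,
    # so the final cluster count need not be set a priori.
    return labels, weights

rng = np.random.default_rng(0)
data = np.clip(np.vstack([rng.normal(0.2, 0.05, (20, 2)),
                          rng.normal(0.7, 0.05, (20, 2))]), 0, 1)
labels, weights = fuzzy_art(data)
print(len(weights), "categories found")
```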

Proceedings ArticleDOI
11 Apr 2011
TL;DR: The main goal of this paper is the introduction of a generalized notion of reductant, called G-reductant, which only depends on (a finite number of) predicate symbols instead of ground atoms (whose number is always infinite for programs considering at least a non constant function symbol in their signature).
Abstract: Fuzzy extensions of logic programming often require the notion of reductant to ensure completeness when working with lattices that model the concept of truth degree beyond the simpler case of true and false. Initially introduced in the context of generalized annotated logic programming, some adaptations of this theoretical tool have been proposed for the more recent and flexible framework of multi-adjoint logic programming. To the best of our knowledge, all of them suffer from the important problem of usually requiring an infinite set of reductants (one for each ground atom) to be added to a given program in order to guarantee its completeness. The main goal of this paper is the introduction of a generalized notion of reductant, called G-reductant, which only depends on (a finite number of) predicate symbols instead of ground atoms (whose number is always infinite for programs with at least one non-constant function symbol in their signature). More exactly, given a predicate p/n in the signature of a fuzzy program P, we generate just a single G-reductant with head p(X1, …, Xn) (where X1, …, Xn are distinct variable symbols) which covers all the possible calls to p in a completely safe way. Since the number of G-reductants is finite for programs with a finite number of predicates, our notion can really be applied in practice, in contrast with older versions of reductants, which are only applicable at a purely theoretical level.

Proceedings ArticleDOI
11 Apr 2011
TL;DR: A notion of monotonicity of a fuzzy relation as a graded property is introduced and a connection to special models of a monotone fuzzy rule base system will be provided.
Abstract: In this contribution, we will introduce a notion of monotonicity of a fuzzy relation as a graded property. We will study its behavior w.r.t. fuzzy set operations and show the relationship between monotone crisp functions and monotone fuzzy relations. Finally, a connection to special models of a monotone fuzzy rule base system will be provided.

Proceedings ArticleDOI
11 Apr 2011
TL;DR: An extension of the database relational model which incorporates vague or imprecise data is presented, and a sound and complete axiomatic system to manipulate these dependencies is introduced, named Simplification Logic for fuzzy functional dependencies.
Abstract: In this work, an extension of the database relational model which incorporates vague or imprecise data is presented. Specifically, we extend the concept of functional dependency to Fuzzy Attributes Tables. This extension is based on the use of a residuated lattice as a truthfulness value set. For this goal, the domains are enriched with fuzzy similarity relations, the atomic values of the tables become fuzzy, and the functional dependencies are also fuzzy and based on the similarity relations. Moreover, we introduce a sound and complete axiomatic system to manipulate these dependencies, named Simplification Logic for fuzzy functional dependencies.

Proceedings Article
01 Jan 2011
TL;DR: An internal BBC prototype system built in 2010 to detect online censorship of its content is examined, potential improvements are evaluated, and the BBC's use of circumvention tools is reviewed.
Abstract: News organizations are often the targets of Internet censorship. This paper will look at two technical considerations for the BBC, based on its distribution of non-English content into countries such as Iran and China, where the news services are permanently unavailable from the official BBC websites: blocking detection and circumvention. This study examines an internal BBC prototype system built in 2010 to detect online censorship of its content, and evaluates potential improvements. It will also review the BBC's use of circumvention tools, and consider the impact and execution of pilot services for Iran and China. Finally, the study will consider the technical delivery of the BBC's news output, and the methods it employs to bypass Internet censorship.

Proceedings ArticleDOI
11 Apr 2011
TL;DR: The concept of a bipolar query, meant as a database query that involves both positive and negative conditions, is discussed from the point of view of flexible database querying, and an affective computing perspective - in its affect and judgment related setting that is decision making oriented - is outlined and advocated.
Abstract: The concept of a bipolar query, meant as a database query that involves both positive and negative conditions, is discussed from the point of view of flexible database querying. A new possible perspective is outlined which is related to the modeling of affects that play a crucial role in real world human centric decision making, and are also known to involve positive and negative valuations, which are the crucial elements of bipolar queries. The aggregation of the matching degrees against the negative and positive conditions to derive an overall matching degree is considered, taking the Lacroix and Lavency approach [1] to bipolar queries as the point of departure. It is shown that the use of a multiple valued logic based formalism for the representation of positive and negative evaluations boils down to a logical type evaluation function that is in line with Grabisch, Greco and Pirlot's [2] general approach to bivariate bipolar multicriteria decision making. Then, an affective computing perspective - in its affect and judgment related setting that is decision making oriented - is outlined and advocated.
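The Lacroix and Lavency point of departure admits a compact fuzzy reading: a tuple matching "C and possibly P" scores min(C(t), max(1 - s, P(t))), where s is the best degree any tuple attains on C and P jointly. The sketch below uses invented data and membership degrees.

```python
# Sketch of the Lacroix-Lavency "C and possibly P" semantics in fuzzy form:
# a tuple scores min(C(t), max(1 - best_joint, P(t))), where best_joint is
# the highest degree any tuple attains on both conditions at once.
def and_possibly(records, C, P):
    best_joint = max(min(C(r), P(r)) for r in records)   # sup over tuples
    return {r["id"]: min(C(r), max(1.0 - best_joint, P(r))) for r in records}

flats = [  # required condition: cheap; desired condition: near the center
    {"id": "a", "cheap": 1.0, "near": 0.2},
    {"id": "b", "cheap": 0.9, "near": 0.9},
    {"id": "c", "cheap": 0.3, "near": 1.0},
]
scores = and_possibly(flats, C=lambda r: r["cheap"], P=lambda r: r["near"])
print(scores)  # b wins (0.9): it is cheap and near; a drops to 0.2 because
               # b proves "possibly near" is attainable among cheap flats
```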

Proceedings ArticleDOI
11 Apr 2011
TL;DR: This work focuses on the assignment of a fuzzy stable model semantics to inconsistent classical logic programs on the basis of the separation of the notion of inconsistency and uncertainty.
Abstract: Based on the recently proved fact that the continuity of the connectives involved in a normal residuated logic program ensures the existence of fuzzy stable models, we focus on the assignment of a fuzzy stable model semantics to inconsistent classical logic programs, on the basis of the separation of the notions of inconsistency and uncertainty.

Proceedings Article
01 Jan 2011
TL;DR: A new type of information theoretic method to determine the appropriate quantity of information to be contained in neural networks by maximizing the relative information, namely, the ratio of mutual information between competitive units and input patterns to the total information in networks.
Abstract: In this paper, we propose a new type of information theoretic method to determine the appropriate quantity of information to be contained in neural networks. Though information theoretic methods have been extensively applied to neural networks, they have been concerned with information maximization and minimization. In the present paper, we point out the necessity of paying due attention to the content of the obtained information, or the quality of the information content. We should explore more exactly what kinds of information should be obtained in learning. We applied this idea to information theoretic competitive learning in which mutual information between competitive units and input patterns can be used to realize competitive processes. We do not simply maximize the mutual information but rather the relative information, namely, the ratio of mutual information between competitive units and input patterns to the total information in networks. By maximizing the relative information, we can produce total information in which the maximum mutual information is included. We applied this method to two data sets from the machine learning database, namely, the glass data and the musk problem. The experimental results are summarized by the following three points. First, the relative information could be maximized, meaning that the peak values of relative information could be obtained for both sets of data. Second, improved quantization and topographic errors were obtained by maximizing the relative information. Third, when the relative information was maximized, clearer class structures could be obtained in terms of the U-matrix and conditional mutual information.

Proceedings ArticleDOI
11 Apr 2011
TL;DR: This tutorial introduces the novel concept of a fuzzy rule based network whose nodes are fuzzy rule bases and the connections between the nodes are the interactions between these rule bases.
Abstract: This tutorial introduces the novel concept of a fuzzy rule based network whose nodes are fuzzy rule bases and the connections between the nodes are the interactions between these rule bases. A fuzzy network is viewed as a fuzzy system with networked rule bases as opposed to fuzzy systems with single or multiple rule bases. A comparison between different types of fuzzy systems is presented as well as formal models for fuzzy networks such as Boolean matrices, binary relations, block schemes and topological expressions.

Proceedings ArticleDOI
11 Apr 2011
TL;DR: The name Reliable Support Vector Machines (RSVM) is adopted for models built according to the proposed algorithm, showing a significant improvement in classification accuracy.
Abstract: Starting from the theoretical framework of reliable learning, a new classification algorithm capable of using prior information on the reliability of a training set has been developed. It consists of a straightforward modification of the standard technique adopted in the conventional Support Vector Machine (SVM) approach: the knowledge about reliability, encoded by adding a binary label to each example of the training set (asserting whether the classification is reliable or not), is employed to properly modify the constrained optimization problem for the generalized optimal hyperplane. Hence, the name Reliable Support Vector Machines (RSVM) is adopted for models built according to the proposed algorithm. Specific tests have been carried out to verify how RSVM performs in comparison with standard SVM, showing a significant improvement in classification accuracy.
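The paper modifies the SVM optimization problem itself; lacking its exact formulation, the sketch below approximates the idea with per-example weights in scikit-learn, which rescale each example's slack penalty so unreliably labeled examples constrain the hyperplane less.

```python
# Approximation of the RSVM idea (not the paper's exact formulation):
# per-example sample weights rescale each example's slack penalty C_i in the
# standard soft-margin objective, down-weighting examples flagged unreliable.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-1, 1, (50, 2)), rng.normal(+1, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
reliable = rng.random(100) > 0.2          # binary reliability label per example

weights = np.where(reliable, 1.0, 0.1)    # unreliable labels count 10x less
clf = SVC(kernel="rbf", C=1.0).fit(X, y, sample_weight=weights)
print("train accuracy:", clf.score(X, y))
```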

Proceedings ArticleDOI
11 Apr 2011
TL;DR: A new class of fuzzy implications called (h, e)-implications is introduced; these implications satisfy a classical property of some types of implications derived from uninorms, and it is proved that they do not intersect with any of the most used classes of implications.
Abstract: A new class of fuzzy implications called (h, e)-implications is introduced. They are implications generated from an additive generator of a representable uninorm, in a similar way to Yager's f- and g-implications, which are generated from additive generators of continuous Archimedean t-norms and t-conorms, respectively. In addition, they satisfy a classical property of some types of implications derived from uninorms, namely I(e, y) = y for all y ∈ [0, 1], and they are another example of a fuzzy implication satisfying the exchange principle but not the law of importation for any t-norm, in fact for any function F : [0, 1]² → [0, 1]. Other properties of these implications are studied in detail, such as other classical tautologies: contrapositive symmetry and distributivity. Finally, it is proved that they do not intersect with any of the most used classes of implications.
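For reference, the Yager constructions the abstract alludes to have the following standard generator form; the (h, e)-implications follow the same pattern with an additive generator h of a representable uninorm with neutral element e (the exact formula is in the paper).

```latex
% Yager's generator-based implications, to which (h,e)-implications are the
% uninorm analogue. f: [0,1] -> [0,inf] is decreasing with f(1) = 0 (t-norm
% generator); g: [0,1] -> [0,inf] is increasing with g(0) = 0 (t-conorm
% generator).
\[
  I_f(x, y) = f^{-1}\bigl(x \, f(y)\bigr), \qquad
  I_g(x, y) = g^{-1}\Bigl(\min\bigl\{\tfrac{1}{x}\, g(y),\; g(1)\bigr\}\Bigr),
\]
% with the conventions 0 * inf = 0 for I_f and 1/0 = inf for I_g.
```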

Proceedings Article
01 Jan 2011
TL;DR: This work identifies the pertinent network architectural principles, and uses these to propose a new legal framework for device attachment that, combined with standardized interfaces and protocols, can ensure an open network that supports innovation in devices.
Abstract: Much of the research on an Internet of Things assumes that users will be able to connect devices without consent by or interference from their service providers. However, in cable and satellite television networks, cellular networks, and some broadband Internet networks, the service provider often only allows use of set-top boxes, smart phones, and residential gateways obtained directly from the provider. The ability of a provider to implement such restrictions is limited by communications law. We propose a set of user and service provider rights. We identify the pertinent network architectural principles, and use these to propose a new legal framework for device attachment that, combined with standardized interfaces and protocols, can ensure an open network that supports innovation in devices.