
Showing papers in "Journal of Universal Computer Science in 2007"


Journal Article
TL;DR: Results of a recent feedback study with six healthy subjects with no or very little experience with BCI control are encouraging for an EEG-based BCI system in untrained subjects that is independent of peripheral nervous system activity and does not rely on evoked potentials.
Abstract: The Berlin Brain-Computer Interface (BBCI) project develops an EEG-based BCI system that uses machine learning techniques to adapt to the specific brain signatures of each user. This concept makes it possible to achieve high-quality feedback already in the very first session, without subject training. Here we present the broad range of investigations and experiments that have been performed within the BBCI project. The first kind of experiment analyzes the predictability of the performing limb from premovement (readiness) potentials, including successful feedback experiments. The limits with respect to the spatial resolution of the somatotopy are explored by contrasting brain patterns of movements of (1) left vs. right foot, (2) index vs. little finger within one hand, and (3) finger vs. wrist vs. elbow vs. shoulder within one arm. A study of phantom movements of patients with traumatic amputations shows the potential applicability of this BCI approach. In a complementary approach, voluntary modulations of sensorimotor rhythms caused by motor imagery (left hand vs. right hand vs. foot) are translated into a proportional feedback signal. We report results of a recent feedback study with six healthy subjects with no or very little experience with BCI control: Half of the subjects achieved an information transfer rate above 35 bits per minute (bpm). Furthermore, one subject used the BBCI to operate a mental typewriter in free spelling mode. The overall spelling speed was 4.5 letters per minute, including the time needed for the correction of errors. These results are encouraging for an EEG-based BCI system in untrained subjects that is independent of peripheral nervous system activity and does not rely on evoked potentials.
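For orientation, information transfer rates such as the 35 bpm figure above are conventionally computed with the Wolpaw formula; the abstract does not state which definition the study used, so this is given as background only. For N selectable targets and selection accuracy P, the information per selection is

B = \log_2 N + P \log_2 P + (1 - P) \log_2 \frac{1 - P}{N - 1}

bits, and multiplying B by the number of selections per minute yields bits per minute.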

118 citations


Journal Article
TL;DR: The development of an OWL decisional ontology built upon set of experience, which would make decisional DNA, that is, explicit knowledge of formal decision events, a useful element in multiple systems and technologies, as well as in the construction of the e-decisional community.
Abstract: Collecting, distributing and sharing knowledge in a knowledge-explicit way is a significant task for any company. However, collecting decisional knowledge in the form of formal decision events as the fingerprints of a company is an utmost advance. Such a decisional fingerprint is called decisional DNA. The set of experience knowledge structure can assist in accomplishing this purpose. In addition, Ontology-based technology applied to the set of experience knowledge structure would facilitate distributing and sharing companies' decisional DNA. Such a possibility would assist in the development of an e-decisional community, which will support decision-makers in their overwhelming job. The purpose of this paper is to explain the development of an OWL decisional ontology built upon set of experience, which would make decisional DNA, that is, explicit knowledge of formal decision events, a useful element in multiple systems and technologies, as well as in the construction of the e-decisional community.

92 citations


Journal Article
TL;DR: This note considers a further important ingredient related to brain functioning in spiking neural P systems: the astrocyte cells, which feed neurons with nutrients, implicitly controlling their functioning.
Abstract: Spiking neural P systems are computing models inspired by the way neurons communicate by means of spikes, electrical impulses of identical shape. In this note we consider a further important ingredient related to brain functioning, the astrocyte cells, which feed neurons with nutrients, implicitly controlling their functioning. Specifically, we introduce in our models only one feature of astrocytes, formulated as a control of spike traffic along axons. A normal form is proved (for systems without forgetting rules) and decidability issues are discussed.
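As a rough illustration of the single astrocyte feature modeled here, control of spike traffic along axons, consider the following sketch; the class, method and threshold names are our own invention, not notation from the paper.

// Illustrative sketch only: an astrocyte watches a set of axons and, when
// the total spike traffic in the current step exceeds a threshold,
// suppresses the spikes, implicitly controlling the neurons' functioning.
import java.util.List;

interface Axon {
    int pendingSpikes();     // spikes currently traveling along this axon
    void suppressSpikes();   // remove them (they are lost)
}

class Astrocyte {
    private final List<Axon> watched;
    private final int threshold;

    Astrocyte(List<Axon> watched, int threshold) {
        this.watched = watched;
        this.threshold = threshold;
    }

    void regulate() {
        int traffic = watched.stream().mapToInt(Axon::pendingSpikes).sum();
        if (traffic > threshold) {
            watched.forEach(Axon::suppressSpikes);
        }
    }
}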

84 citations


Journal Article
TL;DR: The properties and algorithms of the Gray code are summarised, and some interesting applications of the code are also treated.
Abstract: Here we summarise the properties and algorithms of the Gray code. Descriptions are given of the Gray code definition, algorithms and circuits for generating the code and for conversion between binary and Gray code, for incrementing, counting, and adding Gray code words. Some interesting applications of the code are also treated. Java implementations of the algorithms in this paper are available at: http://www.jucs.org/jucs_13_11/the_gray_code/data/DoranGrayPrograms.zip
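The two core conversions the abstract mentions are short enough to quote in full; the following is a minimal Java sketch (the paper's complete programs are in the archive linked above):

// Binary-reflected Gray code for non-negative integers.
static int binaryToGray(int b) {
    return b ^ (b >>> 1);          // XOR of adjacent binary bits
}

static int grayToBinary(int g) {
    int b = 0;
    for (; g != 0; g >>>= 1) {
        b ^= g;                    // cumulative XOR of all higher bits
    }
    return b;
}

// Example: binaryToGray(5) == 7 (binary 101 -> Gray 111), and grayToBinary(7) == 5.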

80 citations


Journal Article
TL;DR: A hybrid fuzzy scheme based on fuzzy particle swarm optimization and variable neighborhood search to solve the Quadratic Assignment Problem (QAP); it is theoretically proved that the variable neighborhood particle swarm algorithm converges with a probability of 1 towards the global optimum.
Abstract: Recently, the Particle Swarm Optimization (PSO) algorithm has exhibited good performance across a wide range of application problems. A quick review of the literature reveals that solving the Quadratic Assignment Problem (QAP) with a PSO approach has not been much investigated. In this paper, we design a hybrid meta-heuristic fuzzy scheme, called the variable neighborhood fuzzy particle swarm algorithm (VNPSO), based on fuzzy particle swarm optimization and variable neighborhood search to solve the QAP. In the hybrid fuzzy scheme, the representations of the position and velocity of the particles in conventional PSO are extended from real vectors to fuzzy matrices. A new mapping is introduced between the particles in the swarm and the problem space in an efficient way. We also attempt to theoretically prove that the variable neighborhood particle swarm algorithm converges with a probability of 1 towards the global optimum. The performance of the proposed approach is evaluated and compared with four other algorithms. Empirical results illustrate that the approach can be applied for solving quadratic assignment problems effectively.
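To make the fuzzy-matrix representation concrete, here is a hypothetical decoding step (our sketch, not the paper's exact mapping): a particle's position is an n x n matrix whose entry (i, j) expresses the degree to which facility i is assigned to location j, and it is decoded into a permutation before the QAP objective is evaluated.

// Hypothetical sketch: decode a fuzzy position matrix into a permutation.
// pos[i][j] in [0, 1] is the membership degree "facility i at location j".
static int[] decode(double[][] pos) {
    int n = pos.length;
    int[] perm = new int[n];
    boolean[] taken = new boolean[n];
    for (int i = 0; i < n; i++) {
        int best = -1;
        for (int j = 0; j < n; j++) {
            if (!taken[j] && (best < 0 || pos[i][j] > pos[i][best])) {
                best = j;
            }
        }
        perm[i] = best;    // facility i gets its strongest free location
        taken[best] = true;
    }
    return perm;
}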

71 citations


Journal Article
TL;DR: A system that takes as input a list of plain keywords provided by a user and translates them into a query expressed in a formal language without ambiguity, which discovers the semantics of user keywords by consulting the knowledge represented by many (heterogeneous and distributed) ontologies.
Abstract: The technology in the field of digital media generates huge amounts of textual information every day, so mechanisms to retrieve relevant information are needed. Under these circumstances, current web search engines often do not provide users with the information they seek, because these search tools mainly use syntax-based techniques. However, search engines based on semantic and context information can help overcome some of the limitations of current alternatives. In this paper, we propose a system that takes as input a list of plain keywords provided by a user and translates them into a query expressed in a formal language without ambiguity. Our system discovers the semantics of user keywords by consulting the knowledge represented by many (heterogeneous and distributed) ontologies. Then, context information is used to remove ambiguity and build the most probable query. Our experiments indicate that our system discovers the user's information need better than traditional search engines when the semantics of the request is not the most popular on the Web.
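A hypothetical sketch of the disambiguation step (all identifiers are ours): each keyword collects candidate senses from the consulted ontologies, and for every keyword the sense most related to the other keywords' candidate senses, i.e. to the context, is kept for query construction.

// Hypothetical sketch: choose one sense per keyword by context relatedness.
import java.util.*;

interface Relatedness {
    double score(String senseA, String senseB); // e.g. ontology-distance based
}

static Map<String, String> disambiguate(Map<String, List<String>> candidates,
                                        Relatedness rel) {
    Map<String, String> chosen = new HashMap<>();
    for (Map.Entry<String, List<String>> kw : candidates.entrySet()) {
        String bestSense = null;
        double bestScore = Double.NEGATIVE_INFINITY;
        for (String sense : kw.getValue()) {
            double score = 0;
            for (Map.Entry<String, List<String>> other : candidates.entrySet()) {
                if (other.getKey().equals(kw.getKey())) continue; // context only
                for (String s : other.getValue()) score += rel.score(sense, s);
            }
            if (score > bestScore) { bestScore = score; bestSense = sense; }
        }
        chosen.put(kw.getKey(), bestSense);
    }
    return chosen;
}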

57 citations


Journal Article
Jean-Raymond Abrial
TL;DR: This paper gives a tutorial introduction to the ideas behind system development using the B-Method; properly handled, the crucial relationship between requirements and formal model leads to systems that are correct by construction.
Abstract: This paper gives a tutorial introduction to the ideas behind system development using the B-Method. Properly handled, the crucial relationship between requirements and formal model leads to systems that are correct by construction. Some industrial successes are outlined.

57 citations


Journal Article
TL;DR: The role of mashups in complementing and enhancing digital journals by providing insights into the quality of academic content, the extent of coverage, and the enabling of expanded services is highlighted.
Abstract: The WWW is currently experiencing a revolutionary growth due to its increasingly participative community software applications. This paper highlights an emerging application development paradigm on the WWW, called mashup. As blogs have enabled anyone to become a publisher, mashups stimulate web development by allowing anyone to combine existing data to develop web applications. Current applications of mashups include tracking of events such as crime, hurricanes, and earthquakes, meta-search integration of data and media feeds, interactive games, and the organization of web resources. The implications of this emerging web integration and structuring paradigm remain yet to be fully explored. This paper describes mashups from a number of angles, highlighting current developments while providing sufficient illustrations to indicate their potential implications. It also highlights the role of mashups in complementing and enhancing digital journals by providing insights into the quality of academic content, the extent of coverage, and the enabling of expanded services. We present pioneering initiatives for the Journal of Universal Computer Science in our efforts to harness the collective intelligence of a collaborative scholarly network.

52 citations


Journal Article
TL;DR: This approach, based on semantic web technologies, relies on formalised ontologies, semantic annotations of scientific articles and knowledge extraction from texts to help biologists annotate their documents and to facilitate their information retrieval task.
Abstract: This paper describes an ontology-based approach aiming at helping biologists to annotate their documents and at facilitating their information retrieval task. Our approach, based on semantic web technologies, relies on formalised ontologies, semantic annotations of scientific articles and knowledge extraction from texts. We propose a method/system for the generation of ontology-based semantic annotations (MeatAnnot) and a system allowing biologists to draw advanced inferences on these annotations (MeatSearch). This approach was proposed to support biologists working on DNA microarray experiments in the validation and the interpretation of their results, but it can probably be extended to other massive analyses of biological events (as provided by proteomics, metabolomics…).

51 citations


Journal Article
TL;DR: A method for determining consensus of hierarchical incomplete ordered partitions and coverings of sets is presented, together with algorithms for determining consensus for a finite set of hierarchical incomplete ordered partitions and coverings.
Abstract: A method for determining consensus of hierarchical incomplete ordered partitions and coverings of sets is presented in this chapter. Incomplete ordered partitions and coverings are often used in expert information analysis. These structures should be useful when an expert has to classify elements of a set into given classes but, for several elements, does not know to which classes they should belong. The hierarchical ordered partition is a more general structure than the incomplete ordered partition. In this chapter we present definitions of the notions of hierarchical incomplete ordered partitions and coverings of sets. The distance functions between hierarchical incomplete ordered partitions and coverings are defined. We also present algorithms for determining consensus for a finite set of hierarchical incomplete ordered partitions and coverings.
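In formulations of this kind, consensus is standardly characterized as a median with respect to the distance function d defined in the paper; we state the usual criterion here as background (the paper's own postulates may differ in detail). For a profile X = \{x_1, \ldots, x_n\} of hierarchical incomplete ordered partitions (or coverings), a consensus c^* satisfies

c^* = \arg\min_{x} \sum_{i=1}^{n} d(x, x_i).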

49 citations


Journal Article
TL;DR: A comparison framework is introduced that conceptually analyzes and classifies reusable learning design solutions and processes that drive the creation of ready-to-run UoLs and provides a comprehensible representation of such processes and units of reuse over two dimensions, namely granularity and completeness.
Abstract: IMS Learning Design (IMS LD) is an interoperable and standardized language that enables the computational representation of Units of Learning (UoLs). However, its adoption and extensive use in real practice largely depends on the extent to which teachers can design and author their own UoLs according to the requirements of their educational situations. Many of the proposed design processes for facilitating the creation of UoLs are based on the reuse of complete or non-complete learning design solutions at different levels of granularity. This paper introduces a comparison framework that conceptually analyzes and classifies reusable learning design solutions and processes that drive the creation of ready-to-run UoLs. The framework provides a comprehensible representation of such processes and units of reuse over two dimensions, namely granularity and completeness. It also offers a frame for discussing issues, such as the proper level of reuse, of existing and forthcoming proposals. Finally, it opens the path to other strands for future research such as providing language independence of learning designs or proposing approaches for the selection of the reusable solutions.

Journal Article
TL;DR: This paper overviews the Verification Grand Challenge, a large scale multinational initiative designed to significantly increase the interoperability, applicability and uptake of formal development techniques.
Abstract: This paper overviews the Verification Grand Challenge, a large scale multinational initiative designed to significantly increase the interoperability, applicability and uptake of formal development techniques. Results to date are reviewed, and next steps are outlined.

Journal Article
TL;DR: The challenge now is to achieve general acceptance of formal methods as a part of industrial development of high quality systems, particularly trusted systems. As mentioned in this paper, before discussing How to achieve this we should maybe ask the other questions: What are the real benefits of formal methods and Why should we care about them? When and Where should we expect to use them, and Who should be involved?
Abstract: The web site for this conference states that: “The challenge now is to achieve general acceptance of formal methods as a part of industrial development of high quality systems, particularly trusted systems.” We are all going to be discussing How to achieve this, but before that we should maybe ask the other questions: What are the real benefits of formal methods and Why should we care about them? When and Where should we expect to use them, and Who should be involved? I will suggest some answers to those questions and then describe some ways that the benefits are being realised in practice, and what I think needs to happen for them to become more widespread.

Journal Article
TL;DR: This paper shows how to establish a common understanding between IMS Learning Design and Moodle, mapping related elements in both to get a list of pairs that are easy to translate from one to the other, and defining a list of requirements for this protocol.
Abstract: Mapping the specification IMS Learning Design and the Course Management System Moodle is a logical step forward on interoperability between eLearning systems and specifications, in order to increase acceptance of the specifications in the widespread world of eLearning systems and to ensure the standardization of the outputs from one system to be used in others. IMS Learning Design and Moodle look for a common understanding focused on the integration of information packages modelled by each part in the other. The final goal is for Moodle to play an IMS LD package. A second step will map a Moodle course to be used in any IMS LD compliant tool. The Unit of Learning in IMS LD and the course in Moodle become the perfect couple in which to find several elements that should match each other. This paper shows how to establish this understanding, mapping related elements in both to get a list of pairs that are easy to translate from one to the other, and to define a list of requirements for this protocol.
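An illustrative fragment of what such a list of pairs could look like; these example pairs are hypothetical, the paper derives the actual mapping:

// Hypothetical IMS LD -> Moodle element pairs, for illustration only.
import java.util.Map;

static final Map<String, String> LD_TO_MOODLE = Map.of(
    "unit-of-learning",  "course",
    "act",               "course section",
    "learning-activity", "activity module",
    "environment",       "course resources",
    "role:learner",      "student role"
);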

Journal Article
TL;DR: A novel Voting-based Clustering Algorithm (VCA) is proposed for energy-efficient data dissemination in wireless sensor networks that combines load balancing, energy and topology information together by using very simple voting mechanisms.
Abstract: Clustering provides an effective mechanism for energy-efficient data delivery in wireless sensor networks. To reduce communication cost, most clustering algorithms rely on a sensor's local properties in electing cluster heads. They often result in unsatisfactory cluster formations, which may cause the network to suffer from load imbalance or extra energy consumption. In this paper, we propose a novel Voting-based Clustering Algorithm (VCA) for energy-efficient data dissemination in wireless sensor networks. This new approach lets sensors vote for their neighbors to elect suitable cluster heads. VCA is completely distributed, location-unaware and independent of network size and topology. It combines load balancing, energy and topology information together by using very simple voting mechanisms. Simulation results show that VCA can reduce the number of clusters by 5-25% and prolong the lifetime of a sensor network by 10-30% over that of existing energy-efficient clustering protocols.
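A hypothetical sketch of the voting idea (identifiers ours): each sensor splits a vote among its neighbors, here weighted by residual energy, and a node that received no fewer votes than any of its neighbors declares itself cluster head.

// Hypothetical sketch of voting-based cluster-head election.
import java.util.*;

interface Sensor {
    List<Sensor> neighbors();
    double residualEnergy();
}

static Set<Sensor> electHeads(List<Sensor> nodes) {
    Map<Sensor, Double> votes = new HashMap<>();
    for (Sensor s : nodes) {
        double total = 0;
        for (Sensor nb : s.neighbors()) total += nb.residualEnergy();
        if (total == 0) continue;               // isolated node casts no votes
        for (Sensor nb : s.neighbors()) {
            // s's vote for nb is proportional to nb's residual energy
            votes.merge(nb, nb.residualEnergy() / total, Double::sum);
        }
    }
    Set<Sensor> heads = new HashSet<>();
    for (Sensor s : nodes) {
        double mine = votes.getOrDefault(s, 0.0);
        boolean best = true;
        for (Sensor nb : s.neighbors()) {
            if (votes.getOrDefault(nb, 0.0) > mine) { best = false; break; }
        }
        if (best) heads.add(s);                 // locally highest-voted node
    }
    return heads;
}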

Journal Article
TL;DR: The deleterious effects of contaminant metals present in feeds to a catalytic cracker can be reduced to a greater extent than that obtainable with conventionally practiced passivation techniques utilizing antimony treated catalysts by pretreating the catalyst with hydrogen sulfide.
Abstract: The deleterious effects of contaminant metals present in feeds to a catalytic cracker can be reduced to a greater extent than that obtainable with conventionally practiced passivation techniques utilizing antimony treated catalysts by pretreating the catalyst with hydrogen sulfide.

Journal Article
TL;DR: The specification, validation and verification of system and software requirements using the SCR tabular method and tools are described, and an overview of each of the ten tools in the SCR toolset is presented.
Abstract: This paper describes the specification, validation and verification of system and software requirements using the SCR tabular method and tools. An example is presented to illustrate the SCR tabular notation, and an overview of each of the ten tools in the SCR toolset is presented.

Journal Article
TL;DR: A Perspective-oriented Educational Modeling Language (PoEML) is presented that simplifies and facilitates the modeling of alternatives and the performance of changes through the separation of the modeling into several concerns that can be managed almost independently.
Abstract: Educational Modeling Languages (EMLs) have been proposed to support the modeling of educational units. Currently, some EML proposals are devoted to providing a computational base, enabling the software processing and execution of educational units' models. In this context, flexibility is a key requirement in order to support alternatives and changes. This paper presents a Perspective-oriented Educational Modeling Language (PoEML) that simplifies and facilitates the modeling of alternatives and the performance of changes. The key point of the proposal is the separation of the modeling into several concerns that can be managed almost independently. As a result, changes at each concern can be performed without affecting other concerns, or affecting them only in controlled ways.

Journal Article
TL;DR: Effective rates of asymptotic regularity are given for the Halpern iterations of nonexpansive self-mappings of nonempty convex sets in normed spaces.
Abstract: In this paper we obtain new effective results on the Halpern iterations of nonexpansive mappings using methods from mathematical logic or, more specifically, proof-theoretic techniques. We give effective rates of asymptotic regularity for the Halpern iterations of nonexpansive self-mappings of nonempty convex sets in normed spaces. The paper presents another case study in the project of proof mining, which is concerned with the extraction of effective uniform bounds from (prima-facie) ineffective proofs.
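For readers outside the area, the objects involved are standard (our gloss, not spelled out in the abstract): given a nonexpansive mapping T : C \to C, an anchor point u \in C, a starting point x_0 \in C and scalars \alpha_n \in [0, 1], the Halpern iteration is

x_{n+1} = \alpha_{n+1} u + (1 - \alpha_{n+1}) T x_n,

and an effective rate of asymptotic regularity is a computable \phi such that \|x_n - T x_n\| \le \varepsilon for all n \ge \phi(\varepsilon).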

Journal Article
TL;DR: This paper introduces model checking, originally conceived for checking finite state systems, and surveys its evolution to encompass finitely checkable properties of systems with unbounded state spaces, and its application to software and other systems.
Abstract: This paper introduces model checking, originally conceived for checking finite state systems. It surveys its evolution to encompass finitely checkable properties of systems with unbounded state spaces, and its application to software and other systems.
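As a one-line gloss (ours, in standard notation): model checking decides whether a model M satisfies a temporal-logic specification \varphi, written

M \models \varphi,

for instance the CTL property \mathbf{AG}(\mathit{request} \rightarrow \mathbf{AF}\,\mathit{grant}), "on all paths, every request is eventually granted".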

Journal Article
TL;DR: A model is presented describing how to design socio-technical environments that will promote collaboration in group activities; a game developed from this model was used to conduct experiments for studying the collaborative learning process.
Abstract: Collaborative learning environments require carefully crafted designs - both technical and social. This paper presents a model describing how to design socio-technical environments that will promote collaboration in group activities. A game was developed based on this model. This tool was used to conduct experiments for studying the collaborative learning process. Testing with this system revealed some strengths and weaknesses, which are being addressed in the on-going research.

Journal Article
TL;DR: In this paper, a new family called graded sparse graphs, arising from generically pinned (completely immobilized) bar-and-joint frameworks, is defined and proved to also form matroids.
Abstract: Sparse graphs and their associated matroids play an important role in rigidity theory, where they capture the combinatorics of generically rigid structures. We define a new family called graded sparse graphs, arising from generically pinned (completely immobilized) bar-and-joint frameworks, and prove that they also form matroids. We address five problems on graded sparse graphs: Decision, Extraction, Components, Optimization, and Extension. We extend our pebble game algorithms to solve them.
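For background, the standard (ungraded) sparsity condition behind such matroids is the following; the graded variant defined in the paper refines it. A graph G = (V, E) is (k, \ell)-sparse if every subgraph on n' vertices spans at most k n' - \ell edges, and (k, \ell)-tight if in addition |E| = k|V| - \ell.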

Journal Article
TL;DR: The issue of linking to papers forward in time is explored in the context of a particular journal that has existed for the past 13 years with over 1500 published papers, and means of identifying the relevance (or relatedness) of papers are explored.
Abstract: We are approaching an era where research materials will be stored more and more as digital resources on the World Wide Web. This of course will enable easier access to online publications. As the number of electronic publications expands, it will, however, become a challenge for individuals to find related or relevant papers. Related papers could be papers written by the same team of authors or by one of the authors, or even papers that deal with the same topic but were written by other authors. This, of course, raises the issue of linking to papers forward in time, or as we call it, "links into the future". To be concrete, while reading a paper written in the year 1980, it would be nice to know if the same author has written another related paper in the 1990s, or if the same author has written a paper earlier, all this without making an explicit search. Based on the ascertained interest of a person reading a particular paper from a digital repository, an auto-suggestion facility could be useful to indicate papers in the same area, category and subject that might potentially be of interest to the reader. One is typically interested in finding related papers by the same author or by one of the authors of a paper. This feature can be implemented in two ways. The first is by creating links from this paper to all the relevant papers and updating them periodically for new papers appearing on the World Wide Web. Another way is by going through the references of all papers appearing on the WWW. Based on the references, one can create mutual links to the papers that are referred to. In this paper, we focus on offering personalised services beyond standard global access. We explore means of identifying the relevance (or relatedness) of papers. A related paper can mean different things to different people, as explained above. Ideally, related papers are found and made accessible using links into the future that could be customised to suit the needs of individual users. In this paper, we will focus on a subset of the problem. We explore links into the future in the context of a particular journal which has existed for the past 13 years with over 1500 published papers. We discuss problems that arise in this restricted context while providing details of partial implementations. We plan to pursue our ideas in a more general setting in future implementations.

Journal Article
TL;DR: This paper simplifies a recent model of computation considered in (Margenstern et al. 2005), namely accepting networks of evolutionary processors, by moving the filters from the nodes to the edges, and proposes characterizations of two complexity classes, namely NP and PSPACE, in terms of accepting networks of evolutionary processors with filtered connections.
Abstract: In this paper we simplify a recent model of computation considered in (Margenstern et al. 2005), namely accepting network of evolutionary processors, by moving the filters from the nodes to the edges. Each edge is viewed as a two-way channel such that input and output filters, respectively, of the two nodes connected by the edge coincide. Thus, the possibility of controlling the computation in such networks seems to be diminished. In spite of this observation these simplified networks have the same computational power as accepting networks of evolutionary processors, that is they are computationally complete. As a consequence, we propose characterizations of two complexity classes, namely NP and PSPACE, in terms of accepting networks of evolutionary processors with filtered connections.

Journal Article
TL;DR: The article attempts to provide knowledge about the elements that must be considered with respect to quality instruments aimed at virtual teaching.
Abstract: The article attempts to provide knowledge about the elements that must be considered with respect to quality instruments aimed at virtual teaching.

Journal Article
TL;DR: A semantic-based automated Skill Management System is proposed, which supports competence search and creation and focuses on a novel algorithm exploiting advanced inference services for the one-to-one assignment of a set of individuals to a set of tasks, endowed with logical explanation features for missing/conflicting skills.
Abstract: Knowledge management is characterized by many different activities, ranging from the elicitation of knowledge to its storing, sharing, maintenance, usage and creation. Skill management is one such activity, with its own peculiarities, as it focuses on the full exploitation of the knowledge that individuals in an organization have, in order to best carry out given tasks. In this paper a semantic-based automated Skill Management System is proposed, which supports competence search and creation. The system implements an approach exploiting the formalism and the reasoning services provided by Description Logics. The approach also embeds non-standard Description Logics reasoning services to extend the set of provided features. Here we present the main characteristics of our system and focus on a novel algorithm exploiting advanced inference services for the one-to-one assignment of a set of individuals to a set of tasks, endowed with logical explanation features for missing/conflicting skills.

Journal Article
TL;DR: Several metrics for evaluating automatic methods for ranking schema elements are proposed and discussed, and the creation of a test collection for evaluating such methods is described, upon which several ranking methods for RDF schemas are evaluated.
Abstract: Ranking is a ubiquitous requirement whenever we confront a large collection of atomic or interrelated artifacts. This paper elaborates on this issue for the case of RDF schemas. Specifically, several metrics for evaluating automatic methods for ranking schema elements are proposed and discussed. Subsequently the creation of a test collection for evaluating such methods is described, upon which several ranking methods (from simple to more sophisticated) for RDF schemas are evaluated. This formal way of evaluating ranking methods, apart from yielding credible and repeatable results, gave us some interesting insights into the problem. Finally, our experiences from exploiting these ranking methods for visualizing RDF schemas, specifically for deriving and visualizing top-k schema subgraphs, are reported.
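As an example of the kind of metric involved (ours for illustration; the paper defines its own measures), precision at cutoff k compares a method's top-k elements against the ground truth in the test collection:

P@k = \frac{|\{\text{relevant elements}\} \cap \{\text{top-}k\text{ returned}\}|}{k}.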

Journal Article
TL;DR: This paper finds recommendation techniques to be a suitable method of making negotiation smarter and more efficient, and introduces into the negotiation thread the following improvements: content-based and collaborative recommendation.
Abstract: In this paper we deal with the problem of non-intuitive and low-efficiency negotiations between agents in an agent-based system. We find recommendation techniques to be a suitable method of making negotiation smarter and more efficient. We have introduced into the negotiation thread the following improvements: content-based recommendation and collaborative recommendation. These two types of recommendation are discussed. On the basis of a negotiation algorithm, we have studied how the introduced recommendation methods can improve the whole process of finding mutually acceptable agreements among agents.
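For reference, the classic user-based collaborative prediction formula, which collaborative approaches of this kind build on (the exact variant used in the paper may differ): agent a's predicted rating for item i is

p_{a,i} = \bar{r}_a + \frac{\sum_{b} \mathrm{sim}(a, b)\,(r_{b,i} - \bar{r}_b)}{\sum_{b} |\mathrm{sim}(a, b)|},

where \bar{r}_a is a's mean rating and sim is a similarity measure such as the Pearson correlation over co-rated items.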

Journal Article
TL;DR: A protocol for communication and termination detection in this system is presented and implemented in the AGAPIA v0.2 language; its operational semantics is described using high-level scenarios, i.e., scenarios where, recursively, the cells may themselves contain scenarios, at a lower, refined level.
Abstract: A model (consisting of rv-systems), a core programming language (for developing rv-programs), and several specification and analysis techniques appropriate for modeling, programming and reasoning about interactive computing systems have been introduced by Stefanescu in 2004 using register machines and space-time duality, see (Stefanescu 2006, Stefanescu 2006b). Later on, Dragoi and Stefanescu introduced structured programming techniques for programming rv-systems and presented a kernel programming language AGAPIA v0.1 for interactive computing systems, see (Dragoi and Stefanescu 2006a, Dragoi and Stefanescu 2006b). AGAPIA v0.1 has a restricted format for program construction, using a "3-level" grammar for their definition: the procedure starts with simple while programs, then modules are defined, and finally AGAPIA v0.1 programs are obtained by applying structured rv-programming statements on top of modules. In the current paper the above restriction is completely removed. By an appropriate reshaping interface technique, general programs may be encapsulated into modules, allowing one to reiterate the above "3-level" construction of programs, now starting with arbitrary AGAPIA programs, not with simple while programs. This way, high-level interactive programs are obtained. The extended version is called AGAPIA v0.2. As a case study we consider a cluster of computers, each having a dynamic set of running processes. We present a protocol for communication and termination detection in this system and implement the protocol in our AGAPIA v0.2 language. We also describe the operational semantics of the program using high-level scenarios, i.e., scenarios where, recursively, the cells may themselves contain scenarios, at a lower, refined level.

Journal Article
TL;DR: A general decision-making method, based on the clustering and tree classification data mining techniques, is presented for advising users on the further usage of an Internet path at a particular time and date; its usefulness has been confirmed in a real-life experiment.
Abstract: In this paper we propose an application of data mining methods to the prediction of the availability and performance of Internet paths. We deploy a general decision-making method for advising users on the further usage of an Internet path at a particular time and date. The method is based on the clustering and tree classification data mining techniques. The usefulness of our method for predicting Internet path behavior has been confirmed in a real-life experiment. Active Internet measurements were performed to gather end-to-end latency and packet routing information. The knowledge gathered has been analyzed using a professional data mining package via neural clustering and decision tree algorithms. The results show that data mining can be used efficiently for forecasting network behavior. We propose to build a network performance monitoring and prediction service based on the proposed data mining procedure. We address our approach especially to non-networking users of frameworks such as Grid and overlay networks, who want to schedule their network activity but wish to be left free from networking issues so that they can concentrate on their work.