
Showing papers in "Journal of Information Processing Systems in 2008"


Journal ArticleDOI
TL;DR: It is suggested that home robots are more effective as regards children's learning concentration, learning interest and academic achievement than other types of instructional media (such as: books with audiotape and WBI) for English as a foreign language.
Abstract: Human-Robot Interaction (HRI), which builds on the well-researched field of Human-Computer Interaction (HCI), has been under vigorous scrutiny since recent developments in robot technology. Robots may be more successful than traditional media in establishing common ground in project-based education or foreign language learning for children. Backed by its strong IT environment and advances in robot technology, Korea has developed the world's first available e-Learning home robot. This has demonstrated the potential for robots to serve as a new educational medium, robot-learning, referred to as 'r-Learning'. Robot technology is expected to become more interactive and user-friendly than computers, and robots can exhibit various forms of communication such as gestures, motions and facial expressions. This study compared the effects of non-computer-based (NCB) media (a book with audiotape) and Web-Based Instruction (WBI) with the effects of Home Robot-Assisted Learning (HRL) for children. The robot gestured and spoke in English, and children could touch its monitor if it did not recognize their voice commands. Compared to the other learning programs, HRL was superior in promoting and improving children's concentration, interest, and academic achievement. In addition, the children felt that a home robot was friendlier than the other types of instructional media. The HRL group had longer concentration spans than the other groups, and the p-value demonstrated a significant difference in concentration among the groups. With regard to interest in learning, the HRL group showed the highest level, followed by the NCB group and then the WBI group. Academic achievement was highest in the HRL group, followed by the WBI group and the NCB group, and here too a significant difference was found among the groups. 
These results suggest that home robots are more effective as regards children's learning concentration, learning interest and academic achievement than other types of instructional media (such as: books with audiotape and WBI) for English as a foreign language.
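The significance claims above rest on comparing group means, as the reported p-values indicate. As a minimal illustration of that kind of comparison (not the study's actual data or analysis code), a one-way ANOVA F statistic can be computed in plain Python; the scores below are invented:

```python
def one_way_anova_f(groups):
    """One-way ANOVA F statistic over a list of sample groups."""
    all_vals = [v for g in groups for v in g]
    n, k = len(all_vals), len(groups)
    grand_mean = sum(all_vals) / n
    means = [sum(g) / len(g) for g in groups]
    # Between-group variability (df = k - 1).
    ss_between = sum(len(g) * (m - grand_mean) ** 2 for g, m in zip(groups, means))
    # Within-group variability (df = n - k).
    ss_within = sum((v - m) ** 2 for g, m in zip(groups, means) for v in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Invented concentration-span scores (minutes) for three media groups;
# the statistic would be compared against an F table to obtain a p-value.
hrl, wbi, ncb = [18, 20, 19, 21], [14, 15, 13, 16], [12, 13, 11, 14]
f_stat = one_way_anova_f([hrl, wbi, ncb])
```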

179 citations


Journal ArticleDOI
TL;DR: The characteristics of the Home-eNB are overviewed, and the mobility management issues and related approaches in 3GPP LTE-based Home-eNB systems are described.
Abstract: The specification of the Home Evolved NodeB (Home-eNB), a small base station designed for use in residential or small-business environments, is currently ongoing in 3GPP LTE (Long Term Evolution) systems. One of the key requirements for its feasibility in the LTE system is mobility management across the deployment of numerous Home-eNBs and the rest of the 3GPP network. In this paper, we overview the characteristics of the Home-eNB and describe the mobility management issues and related approaches in 3GPP LTE-based Home-eNB systems.
Keywords: Home-eNB, 3GPP LTE (Long Term Evolution), Mobility Management
1. Introduction
A significant interest within the telecommunications industry has recently focused on the femto-cell, defined broadly as a low-cost, low-power cellular base station that operates in licensed spectrum to connect conventional, unmodified mobile terminals to a mobile operator's network [1][2][3]. Femto-cell has been actively discussed in 3

43 citations


Journal ArticleDOI
TL;DR: This paper deals with the geometric fitting algorithms for parametric curves and surfaces in 2-D/3-D space, which estimate the curve/surface parameters by minimizing the square sum of the shortest distances between the Curve/surface and the given points.
Abstract: This paper deals with geometric fitting algorithms for parametric curves and surfaces in 2-D/3-D space, which estimate the curve/surface parameters by minimizing the sum of squares of the shortest distances between the curve/surface and the given points. We identify three algorithmic approaches for solving the nonlinear problem of geometric fitting. As a general implementation of these, we describe a new algorithm for geometric fitting of parametric curves and surfaces. The curve/surface parameters are estimated in terms of form, position, and rotation parameters. We test and evaluate the performance of the algorithms with fitting examples.
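The objective described, minimizing the sum of squared shortest distances, can be sketched for its simplest instance: fitting a circle to 2-D points by Gauss-Newton on the orthogonal distance residuals. This is a generic sketch under assumptions of my own, not the paper's algorithm:

```python
import math

def solve3(A, b):
    """Solve a 3x3 linear system by Gaussian elimination with pivoting."""
    M = [row[:] + [bv] for row, bv in zip(A, b)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, 3):
            f = M[r][col] / M[col][col]
            for c in range(col, 4):
                M[r][c] -= f * M[col][c]
    x = [0.0] * 3
    for r in (2, 1, 0):
        x[r] = (M[r][3] - sum(M[r][c] * x[c] for c in range(r + 1, 3))) / M[r][r]
    return x

def fit_circle(points, iters=30):
    """Gauss-Newton geometric fit of a circle (a, b, R) to 2-D points,
    minimizing the sum of squared shortest (orthogonal) distances."""
    # Initial guess: centroid and mean distance to it.
    a = sum(x for x, _ in points) / len(points)
    b = sum(y for _, y in points) / len(points)
    R = sum(math.hypot(x - a, y - b) for x, y in points) / len(points)
    for _ in range(iters):
        # Accumulate the normal equations J^T J * delta = -J^T r.
        JTJ = [[0.0] * 3 for _ in range(3)]
        JTr = [0.0] * 3
        for x, y in points:
            d = math.hypot(x - a, y - b)
            res = d - R                            # signed shortest-distance error
            J = [-(x - a) / d, -(y - b) / d, -1.0]  # d(res)/d(a, b, R)
            for i in range(3):
                JTr[i] += J[i] * res
                for j in range(3):
                    JTJ[i][j] += J[i] * J[j]
        da, db, dR = solve3(JTJ, [-v for v in JTr])
        a, b, R = a + da, b + db, R + dR
    return a, b, R
```

The same normal-equation loop generalizes to other parametric curves once the residual and its partial derivatives are supplied.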

27 citations


Journal ArticleDOI
TL;DR: This paper analyzes the handover latency of Mobile IP and mobile SCTP over IPv6 networks and finds that the mSCTP handover can provide smaller handover latency than Mobile IP.
Abstract: This paper analyzes the handover latency of Mobile IP and mobile SCTP over IPv6 networks. The analytical results are compared with experimental results obtained on a Linux testbed. For the analysis, we consider two handover scenarios: horizontal handover and vertical handover. From the results, we see that the mSCTP handover can provide smaller handover latency than Mobile IP. Moreover, mSCTP gives a much smaller latency for the vertical handover, in which the MN has a sufficiently long sojourn time in the overlapping region between different subnets, than for the horizontal handover.

26 citations


Journal ArticleDOI
TL;DR: This research proposes a new strategy in which documents are encoded into string vectors, together with a modified version of the k-means algorithm adapted to string vectors, for text clustering.
Abstract: This research proposes a new strategy in which documents are encoded into string vectors, together with a modified version of the k-means algorithm adapted to string vectors, for text clustering. Traditionally, when the k-means algorithm is used for pattern classification, raw data must be encoded into numerical vectors. This encoding may be difficult, depending on the application area. For example, in text clustering, encoding the full texts given as raw data into numerical vectors leads to two main problems: huge dimensionality and sparse distribution. In this research, we encode full texts into string vectors and modify the k-means algorithm to make it adaptable to string vectors for text clustering.
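As a toy sketch of the idea, assuming a string vector is a fixed-length list of representative words and that similarity is the fraction of shared words (both assumptions of mine, not details taken from the paper), a k-means variant over string vectors might look like:

```python
from collections import Counter

def similarity(sv1, sv2):
    # Assumed similarity: fraction of strings the two vectors share.
    return len(set(sv1) & set(sv2)) / len(sv1)

def k_means_strings(docs, k, iters=10):
    """k-means adapted to string vectors: each prototype is itself a string
    vector, rebuilt as the most frequent strings of its cluster."""
    size = len(docs[0])                    # assumes fixed-length string vectors
    protos = [list(d) for d in docs[:k]]   # naive initialization
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        # Assignment step: each document joins its most similar prototype.
        for d in docs:
            best = max(range(k), key=lambda i: similarity(d, protos[i]))
            clusters[best].append(d)
        # Update step: the "mean" of a cluster is its most frequent strings.
        for i, members in enumerate(clusters):
            if members:
                counts = Counter(w for d in members for w in d)
                protos[i] = [w for w, _ in counts.most_common(size)]
    return clusters, protos
```

The frequency-based prototype plays the role the arithmetic mean plays for numerical vectors.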

22 citations


Journal ArticleDOI
TL;DR: This research proposes a new neural network for text categorization that uses an alternative representation of documents to numerical vectors; it is called the NTC (Neural Text Categorizer) in this research.
Abstract: This research proposes a new neural network for text categorization that uses an alternative representation of documents to numerical vectors. Since the proposed neural network is intended originally only for text categorization, it is called the NTC (Neural Text Categorizer) in this research. Numerical vectors representing documents for text mining tasks have two inherent problems: huge dimensionality and sparse distribution. Although various feature selection methods have been developed to address the first problem, the reduced dimension still remains large, and if the dimension is reduced excessively by feature selection, the robustness of text categorization is degraded. Even if SVM (Support Vector Machine) is tolerant of huge dimensionality, it is not tolerant of sparse distribution. The goal of this research is to address the two problems at the same time by proposing a new representation of documents and a new neural network that uses this representation as its input vector.

18 citations


Journal ArticleDOI
TL;DR: It is found that Europe seems to be much more rigid in its thinking on robots and in particular holds a negative view of educational robots; nevertheless, European children will be eager to play with educational robots even though their parents have a negative view of them.
Abstract: Europeans are much more rigid in their thinking on robots, and in particular hold a negative view of robots as peers, since they regard robots as labor machines. Recently, Korea developed several educational robots as peer tutors. Therefore, a study was needed to determine the difference in cultural acceptance of educational robots between Korea and Europe (Spain). We found that Europe seems to be much more rigid in its thinking on robots and especially holds a negative view of educational robots. Korean parents have a strong tendency to see robots as 'the friend of children,' while European parents tend to see educational robots as 'machines or electronics'. Meanwhile, children's expectations of educational robots presenting identification content were higher in Europe than in Korea, since European children are familiar with costume parties. This result implies that a Korean market for educational robots may emerge earlier than a European one, but European children will be eager to play with educational robots even though their parents have a negative view of them.

17 citations


Journal ArticleDOI
TL;DR: This research proposes a new strategy in which documents are encoded into string vectors, together with a modified version of KNN adapted to string vectors, for text categorization.
Abstract: This research proposes a new strategy in which documents are encoded into string vectors, together with a modified version of KNN adapted to string vectors, for text categorization. Traditionally, when KNN is used for pattern classification, raw data must be encoded into numerical vectors. This encoding may be difficult, depending on the application area. For example, in text categorization, encoding the full texts given as raw data into numerical vectors leads to two main problems: huge dimensionality and sparse distribution. In this research, we encode full texts into string vectors and modify the supervised learning algorithm to make it adaptable to string vectors for text categorization.
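A minimal sketch of KNN over string vectors, under the same assumptions as above (string vectors as word lists, similarity as shared-string count; these are my assumptions, not the paper's definitions):

```python
from collections import Counter

def knn_classify(train, query, k=3):
    """KNN over string vectors: neighbours are ranked by an assumed
    shared-string similarity, and the label is chosen by majority vote."""
    def similarity(sv1, sv2):
        return len(set(sv1) & set(sv2))   # count of shared strings

    # Rank labelled training vectors by similarity to the query.
    ranked = sorted(train, key=lambda item: similarity(item[0], query),
                    reverse=True)
    votes = Counter(label for _, label in ranked[:k])
    return votes.most_common(1)[0][0]
```

Replacing Euclidean distance with a string-vector similarity is the only change needed relative to ordinary KNN.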

8 citations


Journal ArticleDOI
TL;DR: An autonomic-interleaving Registry Overlay Network (RgON) is proposed that enables web-service providers/consumers to publish/discover service advertisements and WSDL documents, and empowers consumers to discover the web services associated with these advertisements.
Abstract: The Web Services infrastructure is a distributed computing environment for service sharing. Mechanisms proposed so far for Web services discovery have assumed either a centralized or a peer-to-peer (P2P) registry. A discovery service with a centralized architecture, such as UDDI, restricts the scalability of this environment, induces a performance bottleneck and may result in a single point of failure. A discovery service with a P2P architecture enables a scalable and efficient ubiquitous web service discovery service that runs in a self-organized fashion. In this paper, we propose an autonomic-interleaving Registry Overlay Network (RgON) that enables web-service providers/consumers to publish/discover service advertisements and WSDL documents. The RgON empowers consumers to discover the web services associated with these advertisements within a constant D logical hops over a constant K physical hops, with reasonable storage and bandwidth utilization, as shown through simulation.

7 citations


Journal ArticleDOI
TL;DR: An OWL visualization scheme that supports detailed class information is presented, based on the concept of a social network, and implemented as a plug-in for the most famous ontology editor.
Abstract: In recent years, numerous studies have attempted to exploit ontology in the area of ubiquitous computing. In particular, several ontologies written in OWL have been proposed for major issues in ubiquitous computing, such as context-awareness. OWL is recommended by the W3C as a descriptive language for representing ontologies with rich vocabularies. However, developers struggle to design ontologies using OWL because of its complex syntax. Research on OWL visualization aims to overcome this problem, but most existing approaches unfortunately do not provide an efficient interface for visualizing OWL ontologies. Moreover, as an ontology grows, its classes and relations become difficult to represent in the editing window due to the limited screen size. In this paper, we present an OWL visualization scheme that supports detailed class information. The scheme is based on the concept of a social network, and we implement it as a plug-in for the most famous ontology editor.

5 citations


Journal ArticleDOI
TL;DR: A policy adjuster that handles workflow management policies and resource management policies using a policy decision scheme is devised, and implemented with workflow management functions based on a policy quorum-based resource management system, to provide Poincaré geometry-characterized ECG analysis and virtual heart simulation services.
Abstract: This paper proposes a policy adjuster-driven Grid workflow management system for a collaborative healthcare platform, which supports collaborative heart disease diagnosis applications. To select policies according to users' service level agreements and dynamic resource status, we devised a policy adjuster that handles workflow management policies and resource management policies using a policy decision scheme. We implemented this new architecture with workflow management functions based on a policy quorum-based resource management system, to provide Poincaré geometry-characterized ECG analysis and virtual heart simulation services. To evaluate the proposed system, we executed a heart disease identification application and compared its performance to that of the general workflow system and the PQRM system under different types of SLA.
Keywords: Grid Workflow, Collaborative Healthcare Platform, SLA, Policy Adjuster
1. Introduction
There have been many attempts to create healthcare systems or platforms by combining existing medical services with IT, such as sensor and communication technology. In other words, general clinical health data and information are exchanged electronically, and patients, doctors, hospitals and laboratories can be connected through a communication network to perform medical practice. Such a healthcare system is enabled and supported by IT, yet its core technologies are high-speed information exchange and high-performance computing in a distributed environment. Grid technology can meet these requirements by bringing heterogeneous resources together and allocating them efficiently to applications. Medical applications are no longer restricted to one medical institute; some collaborative applications, such as a heart disease simulator (an instance of a workflow job), require several resources to work together. 
So the healthcare platform requires a collaborative job scheduling function based on resource status, which is the core part of a Grid resource management system [2][3]. However, the complexity of resource management for QoS guarantees increases significantly as grid computing grows. One promising approach to this problem is the policy-based resource management system [2], which is suited to complex management environments. To manage resources in a Grid, a resource broker and scheduler are required to manage resources and jobs in the virtual Grid environment. The authors of [3] suggested a policy-based resource management system (PQRM) architecture based on the Grid Service Level Agreement (SLA) and an abstract architecture for grid resource management. In the medical healthcare environment, besides many single-program applications, there exist applications that require the co-processing of many programs in a strict order. Since these applications are executed on the Grid, they are called Grid workflows. Current Grid resource management systems do not consider the complexity of collaborative medical applications or service level guarantees at the workflow level. In this paper, we propose a Grid workflow-driven healthcare platform for collaborative applications. In the sense of optimum management, PQRM [5] provides an optimum resource quorum for applications, so we develop a workflow-integrated PQRM as one instance of this workflow-driven healthcare platform. PQRM's available resource quorum is the optimum resource set from the resource management point of view, and our workflow-driven system can provide the optimum resource set from the workflow management point of view. These two management philosophies require a policy decision scheme that adjusts between them based on the user's SLA. We evaluate the proposed workflow-driven healthcare platform by measuring the completion time of a collaborative heart disease simulator under different types of user SLA. 
DOI : 10.3745/JIPS.2008.4.3.103

Journal ArticleDOI
TL;DR: Simulation results on the QualNet simulator indicate that MMMP outperforms IEEE 802.11 on all performance metrics and can efficiently handle a large range of traffic intensity.
Abstract: In this paper, we discuss a novel reservation-based, asynchronous MAC protocol called `Multi-rate Multi-hop MAC Protocol` (MMMP) for multi-hop ad hoc networks that provides QoS guarantees for multimedia traffic. MMMP achieves this by providing service differentiation for multirate real-time traffic (both constant and variable bit rate traffic) and guaranteeing a bounded end-to-end delay for the same while still catering to the throughput requirements of non real time traffic. In addition, it administers bandwidth preservation via a feature called `Smart Drop` and implements efficient bandwidth usage through a mechanism called `Release Bandwidth`. Simulation results on the QualNet simulator indicate that MMMP outperforms IEEE 802.11 on all performance metrics and can efficiently handle a large range of traffic intensity. It also outperforms other similar state-of-the-art MAC protocols.

Journal ArticleDOI
TL;DR: The Eager Data Transfer (EDT) mechanism is proposed to reduce the time for data transfers between the host and network interface, and results show that the EDT approach significantly reduces the data transfer time compared to DMA-based approaches.
Abstract: Clusters have become a popular alternative for building high-performance parallel computing systems. Today's high-performance system area network (SAN) protocols, such as VIA and IBA, significantly reduce user-to-user communication latency by implementing protocol stacks outside of the operating system kernel. However, emerging parallel applications require a further significant improvement in communication latency. Since the time required to transfer data between host memory and the network interface (NI) makes up a large portion of the overall communication latency, reducing data transfer time is crucial for achieving low-latency communication. In this paper, the Eager Data Transfer (EDT) mechanism is proposed to reduce the time for data transfers between the host and the network interface. The EDT employs cache coherence interface hardware to transfer data directly between the host and the NI. An EDT-based network interface was modeled and simulated on Linux/SimOS, a Linux-based complete system simulation environment. Our simulation results show that the EDT approach significantly reduces data transfer time compared to DMA-based approaches. The EDT-based NI attains a 17% to 38% reduction in user-to-user message time compared to cache-coherent DMA-based NIs for a range of message sizes (64 bytes ~ 4 Kbytes) in a SAN environment.

Journal ArticleDOI
TL;DR: This paper proposes a process and methods for extracting abnormal-quality unit lists to solve three problems of the existing method, and shows that the proposed mechanism can be used effectively, based on an analysis of the improvements obtained from two years of an automotive company's claim data.
Abstract: Most enterprises maintain claim data related to marketing, production, trade and delivery. From the claim data they can extract the engineering information needed to assess unit reliability, and also detect critical and latent reliability problems. The existing method for detecting abnormal-quality unit lists at an early stage from a claim database has three problems: it excludes the probability of fallacy in claims, it raises false claim-fallacy alarms because inventory information is not reflected, and it considers too many claim change factors. In this paper, we propose a process and methods for extracting abnormal-quality unit lists that solve these three problems. The proposal includes a data extraction process for reliability measurement, a method for calculating claim-fallacy alarm probability, a method for reflecting inventory time when calculating claim reliability, and a method for identifying abnormal-quality unit lists. This paper also shows that the proposed mechanism can be used effectively, based on an analysis of the improvements obtained by applying it to an automotive company's claim data over two years.

Journal ArticleDOI
TL;DR: This paper presents a vertical test method that connects the system test level and the integration test level in the testing stage by using UML, and suggests that staged testing activities become more tightly connected through traceability.
Abstract: Traceability has been held to be an important factor in testing activities as well as in model-driven development. Vertical traceability affords opportunities to improve manageability, from models and test cases down to code, in the testing and debugging phase. This paper presents a vertical test method that connects the system test level and the integration test level in the testing stage by using UML. An experiment on how traceability helps to focus effectively on error spots is included, using concrete examples of tracing from models to code. We often observe cases in which heavy testing and maintenance costs are incurred due to small errors induced by mistakes. If traceability between different views and levels of abstraction is not provided, testing or maintenance requires great effort. Such cases suggest that great expense can arise after completion of a software system as well as during construction. In the context of testing and maintenance activities, we often look back and forth between the model and the code to figure out error spots. At present, the requirements or design specification is used broadly for system testing (1). In the requirements and design stages, we analyze these artifacts to check for errors, because the artifacts contain faults. In the testing stage, we test a system by validation and verification. However, the need for short time-to-market requires that staged testing activities be more tightly connected by traceability. If testers find faults in system-level testing, they sometimes need to look down to the code level rather than just reporting the faults to the unit authors. Developers have to modify the source code bearing errors after faults are found in the testing stage. It is difficult to inspect the logical structure and algorithms of source code because most UML-based test methods are black-box testing; therefore, it is hard to locate the error spot and trace it to a code line. 
Moreover, when the test (or quality assurance) team and the development team work separately, both need to spend a great deal of cost and time communicating about errors. When the test team executes testing based on a UML specification and finds faults, it must explain to the developers not only the faults themselves but also the various data and processes used to find them, which requires considerable communication effort. In such cases, if the relation between test cases and source code can be established, the teams can decrease the cost and effort of communicating to correct errors even though they work separately.

Journal ArticleDOI
TL;DR: The design and implementation of a two-tier DBMS for handling massive data and providing faster response times is described; it performs significantly better than a disk-oriented DBMS, with the added advantage of managing massive data at the same time.
Abstract: This paper describes the design and implementation of a two-tier DBMS for handling massive data and providing faster response times. In the present day, the main requirements of a DBMS can be characterized in two aspects: handling large amounts of data, and providing fast response times. In fact, traditional DBMSs cannot fulfill both requirements. A disk-oriented DBMS can handle massive data, but its response time is relatively slower than that of a memory-resident DBMS. On the other hand, a memory-resident DBMS can provide fast response times but has inherent restrictions on database size. In this paper, to meet the requirements of handling large volumes of data and providing fast response times, a two-tier DBMS is proposed. Cold data, which do not require fast response times, are managed by a disk storage manager, and hot data, which require fast response times, are handled by a memory storage manager as snapshots. As a result, the proposed system performs significantly better than a disk-oriented DBMS, with the added advantage of managing massive data at the same time.
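The hot/cold split described above can be illustrated with a toy key-value store. The class and method names are invented for illustration, and sqlite3 merely stands in for the disk storage manager (the default in-memory path is only for the demo):

```python
import sqlite3

class TwoTierStore:
    """Toy sketch of the two-tier idea: hot keys are served from an
    in-memory snapshot (a dict), cold keys from disk (sqlite3)."""
    def __init__(self, path=":memory:"):
        self.disk = sqlite3.connect(path)          # "disk storage manager"
        self.disk.execute(
            "CREATE TABLE IF NOT EXISTS kv (k TEXT PRIMARY KEY, v TEXT)")
        self.hot = {}                              # "memory storage manager"

    def put(self, key, value):
        # All writes go to disk; a hot copy is kept coherent if present.
        self.disk.execute("INSERT OR REPLACE INTO kv VALUES (?, ?)",
                          (key, value))
        if key in self.hot:
            self.hot[key] = value

    def promote(self, key):
        """Copy a frequently accessed key into the in-memory snapshot."""
        row = self.disk.execute("SELECT v FROM kv WHERE k = ?",
                                (key,)).fetchone()
        if row:
            self.hot[key] = row[0]

    def get(self, key):
        if key in self.hot:                        # fast path: memory tier
            return self.hot[key]
        row = self.disk.execute("SELECT v FROM kv WHERE k = ?",
                                (key,)).fetchone()
        return row[0] if row else None
```

A real system would decide promotion automatically from access statistics; here `promote` is called by hand to keep the sketch short.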

Journal ArticleDOI
TL;DR: The use of fragmented bandwidth to improve the performance of staggered striping in a multimedia system is discussed; useful retrieval patterns are identified, and disruptions can be easily eliminated.
Abstract: In this paper, we discuss the use of fragmented bandwidth to improve the performance of staggered striping in a multimedia system. It is observed that potential disruptions can occur when nonconsecutive idle disks are used for displaying multimedia objects. We have identified useful retrieval patterns and shown that with proper selections of fragmented disks and a simple buffering scheme, disruptions can be easily eliminated.

Journal ArticleDOI
TL;DR: A tester structure expression language is proposed: a tester language with a novel format that can be applied directly to the ATE without conversion. With the accompanying tool, the number of man-hours required to select an ATE could be reduced to one tenth.
Abstract: VLSI chips have been tested using various automatic test equipment (ATE). Although each ATE has a similar structure, its language is proprietary, and it is not easy to convert a test program for use across different ATE vendors. To address this difficulty we propose a tester structure expression language, a tester language with a novel format, called the general tester language (GTL). By developing an interpreter for each tester, a GTL program can be applied directly to the ATE without conversion. It is also possible to select a cost-effective ATE from the test program, because the program expresses the required ATE resources, such as pin counts, measurement accuracy, and memory capacity. We describe a prototype environment for the GTL and a tester selection tool. The software size of the prototype is approximately 27,800 steps, and 15 man-months were required to develop it. Using the tester selection tool, the number of man-hours required to select an ATE could be reduced to one tenth. A GTL program was successfully executed on actual ATE.

Journal ArticleDOI
TL;DR: Experimental results show that one third of the association rules derived based on the support and confidence criteria are not significant, that is, the antecedent and consequent of the rules are not correlated.
Abstract: Minimum support and confidence have been used as criteria for generating association rules in all association rule mining algorithms. These criteria have natural appeal, such as simplicity, but few researchers have questioned the quality of the generated rules. In this paper, we examine the rules from a more rigorous point of view by conducting statistical tests. Specifically, we use contingency tables and the chi-square test to analyze the data. Experimental results show that one third of the association rules derived based on the support and confidence criteria are not significant; that is, the antecedent and consequent of these rules are not correlated. This indicates that minimum support and minimum confidence alone do not ensure the discovery of meaningful associations. The chi-square test can be considered an enhancement or an alternative.
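The test described, a chi-square test of independence on the 2x2 contingency table of a rule's antecedent and consequent, can be sketched as follows (a generic textbook computation, not the paper's code):

```python
def chi_square_2x2(n11, n10, n01, n00):
    """Chi-square statistic for the 2x2 contingency table of a rule A -> B:
    n11 = count(A and B),     n10 = count(A and not B),
    n01 = count(not A and B), n00 = count(not A and not B)."""
    n = n11 + n10 + n01 + n00
    chi2 = 0.0
    for observed, row, col in [(n11, n11 + n10, n11 + n01),
                               (n10, n11 + n10, n10 + n00),
                               (n01, n01 + n00, n11 + n01),
                               (n00, n01 + n00, n10 + n00)]:
        expected = row * col / n          # expected count under independence
        chi2 += (observed - expected) ** 2 / expected
    return chi2

# With 1 degree of freedom, a rule is significant at the 5% level
# when the statistic exceeds the critical value 3.841.
```

A rule whose statistic falls below the critical value may still pass the minimum-support and minimum-confidence thresholds, which is exactly the gap the paper highlights.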