
Showing papers in "Journal of Information Science and Engineering in 2005"


Journal Article
TL;DR: A parallel version of the particle swarm optimization algorithm (PPSO) is presented, together with three communication strategies that can be applied according to the degree of independence among the data; experimental results demonstrate the usefulness of the proposed PPSO algorithm.
Abstract: Particle swarm optimization (PSO) is an alternative population-based evolutionary computation technique. It has been shown to be capable of optimizing hard mathematical problems in continuous or binary space. We present here a parallel version of the particle swarm optimization (PPSO) algorithm together with three communication strategies which can be used according to the independence of the data. The first strategy is designed for solution parameters that are independent or are only loosely correlated, such as the Rosenbrock and Rastrigin functions. The second communication strategy can be applied to parameters that are more strongly correlated such as the Griewank function. In cases where the properties of the parameters are unknown, a third hybrid communication strategy can be used. Experimental results demonstrate the usefulness of the proposed PPSO algorithm.

250 citations
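
The abstract describes the strategies only at a high level. As a rough illustration of the first strategy (migrating the best solution between otherwise independent swarms every few iterations), here is a minimal multi-swarm PSO sketch; the swarm sizes, coefficients, and exchange period are illustrative assumptions, not the authors' settings.

```python
# Minimal multi-swarm PSO sketch: swarms evolve independently and, every few
# iterations, the globally best particle replaces each swarm's worst one.
import numpy as np

def rastrigin(x):
    # One of the loosely correlated benchmark functions mentioned above.
    return 10 * x.size + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))

def ppso(f, dim=10, swarms=4, size=20, iters=200, exchange_every=20):
    rng = np.random.default_rng(0)
    pos = rng.uniform(-5.12, 5.12, (swarms, size, dim))
    vel = np.zeros_like(pos)
    pbest = pos.copy()
    pbest_val = np.array([[f(p) for p in s] for s in pos])
    for t in range(iters):
        for s in range(swarms):
            gbest = pbest[s, np.argmin(pbest_val[s])]
            r1, r2 = rng.random((2, size, dim))
            vel[s] = (0.7 * vel[s] + 1.5 * r1 * (pbest[s] - pos[s])
                      + 1.5 * r2 * (gbest - pos[s]))
            pos[s] += vel[s]
            vals = np.array([f(p) for p in pos[s]])
            improved = vals < pbest_val[s]
            pbest[s][improved] = pos[s][improved]
            pbest_val[s][improved] = vals[improved]
        if (t + 1) % exchange_every == 0:
            # Communication: migrate the overall best particle into every
            # swarm, replacing that swarm's current worst particle.
            sb, ib = np.unravel_index(np.argmin(pbest_val), pbest_val.shape)
            for s in range(swarms):
                worst = np.argmax(pbest_val[s])
                pbest[s][worst] = pbest[sb][ib]
                pbest_val[s][worst] = pbest_val[sb][ib]
    return pbest_val.min()

print(ppso(rastrigin))
```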


Journal ArticleDOI
TL;DR: This paper presents an efficient algorithm to implement k-means clustering that produces clusters comparable to those of slower methods, but with much better performance.
Abstract: The k-means algorithm is one of the most widely used methods to partition a dataset into groups of patterns. However, most k-means methods require expensive distance calculations of centroids to achieve convergence. In this paper, we present an efficient algorithm to implement a k-means clustering that produces clusters comparable to those of slower methods. In our algorithm, we partition the original dataset into blocks; each block unit, called a unit block (UB), contains at least one pattern. We can locate the centroid of a unit block (CUB) by using a simple calculation. All the computed CUBs form a reduced dataset that represents the original dataset. The reduced dataset is then used to compute the final centroids of the original dataset. We only need to examine each UB on the boundary of candidate clusters to find the closest final centroid for every pattern in the UB. In this way, we can dramatically reduce the time for calculating the final converged centroids. In our experiments, this algorithm produces clustering results comparable to those of other k-means algorithms, but with much better performance.

59 citations
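
A minimal sketch of the block-reduction idea: the data space is cut into a grid of unit blocks, each non-empty block is summarized by its centroid (CUB) and pattern count, and a weighted k-means runs on the reduced dataset. The grid resolution and the omission of the boundary-refinement step are simplifying assumptions, not the paper's implementation.

```python
# Reduce a dataset to unit-block centroids, then run weighted k-means on them.
import numpy as np

def reduce_to_cubs(X, cells=20):
    lo, hi = X.min(axis=0), X.max(axis=0)
    idx = np.floor((X - lo) / (hi - lo + 1e-12) * cells).astype(int)
    blocks = {}
    for key, x in zip(map(tuple, idx), X):
        s, n = blocks.get(key, (0.0, 0))
        blocks[key] = (s + x, n + 1)
    cubs = np.array([s / n for s, n in blocks.values()])
    weights = np.array([n for _, n in blocks.values()])
    return cubs, weights

def weighted_kmeans(P, w, k, iters=50, seed=0):
    rng = np.random.default_rng(seed)
    centers = P[rng.choice(len(P), k, replace=False)]
    for _ in range(iters):
        d = ((P[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        label = d.argmin(axis=1)
        for j in range(k):
            m = label == j
            if m.any():
                # Each CUB is weighted by the number of patterns it represents.
                centers[j] = np.average(P[m], axis=0, weights=w[m])
    return centers

X = np.random.default_rng(1).normal(size=(5000, 2))
cubs, w = reduce_to_cubs(X)
print(weighted_kmeans(cubs, w, k=3))
```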


Journal ArticleDOI
TL;DR: Experimental results indicate that MEMISP outperforms both GSP and PrefixSpan without the need for either candidate generation or database projection, and can efficiently mine sequence databases of any size, for any minimum support values.
Abstract: Sequential pattern mining is a challenging issue because of the high complexity of temporal pattern discovery from numerous sequences. Current mining approaches either require frequent database scanning or the generation of several intermediate databases. As databases may now fit into ever-increasing main memory, efficient memory-based discovery of sequential patterns is becoming possible. In this paper, we propose a memory indexing approach for fast sequential pattern mining, named MEMISP. During the whole process, MEMISP scans the sequence database only once to read data sequences into memory. The find-then-index technique is recursively used to find the items that constitute a frequent sequence and to construct a compact index set which indicates the set of data sequences for further exploration. As a result of effective index advancing, fewer and shorter data sequences need to be processed in MEMISP as the discovered patterns get longer. Moreover, we can estimate the maximum size of the total memory required by MEMISP, which is independent of the minimum support threshold. Experimental results indicate that MEMISP outperforms both GSP and PrefixSpan (general version) without the need for either candidate generation or database projection. When the database is too large to fit into memory in a batch, we partition the database, mine patterns in each partition, and validate the true patterns in a second pass of database scanning. Experiments performed on extra-large databases demonstrate the good performance and scalability of MEMISP, even with very low minimum support. Therefore, MEMISP can efficiently mine sequence databases of any size, for any minimum support value.

49 citations
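
A simplified find-then-index sketch in the spirit of the description above, for in-memory sequences of single items: an index set of (sequence id, position) pairs records where the current pattern last matched, and only those suffixes are scanned to count and extend frequent items. Itemset elements and the partitioning mode are omitted; this is an illustration, not the authors' implementation.

```python
# Recursive find-then-index mining over an in-memory sequence database.
from collections import defaultdict

def memisp_like(db, min_sup):
    patterns = []

    def mine(prefix, index):
        # Count items occurring anywhere after the matched positions.
        counts = defaultdict(int)
        for sid, pos in index:
            for item in set(db[sid][pos:]):
                counts[item] += 1
        for item, sup in sorted(counts.items()):
            if sup < min_sup:
                continue
            pattern = prefix + [item]
            patterns.append((pattern, sup))
            # Advance the index: first occurrence of `item` per sequence.
            new_index = []
            for sid, pos in index:
                try:
                    nxt = db[sid].index(item, pos)
                    new_index.append((sid, nxt + 1))
                except ValueError:
                    pass
            mine(pattern, new_index)

    mine([], [(sid, 0) for sid in range(len(db))])
    return patterns

db = [list("abcb"), list("abbc"), list("acbc"), list("bac")]
for p, s in memisp_like(db, min_sup=3):
    print("".join(p), s)
```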


Journal Article
TL;DR: This paper presents the optimization of shunt active power filter parameters based on fuzzy logic control, and shows the effectiveness of fuzzy logic controllers in optimizing the PWM technique and the values of the passive parameters of the active filter.
Abstract: This paper presents the optimization of shunt active power filter parameters based on fuzzy logic control. The active filter current control is based on constant and fuzzy hysteresis band techniques, which are employed to derive the switching signals and also to choose the optimal value of the decoupling inductance. The DC voltage controller optimizes the energy storage of the DC capacitor, where proportional-integral and fuzzy logic controllers are employed. Simulation results, obtained using MATLAB/SIMULINK under several load configurations, are presented and discussed. They show the effectiveness of fuzzy logic controllers in optimizing the PWM technique and the values of the passive parameters of the active filter.

43 citations


Journal ArticleDOI
TL;DR: This work uses level λ fuzzy numbers and level (λ, ρ) interval-valued fuzzy numbers to fuzzify supply and demand in the constraints, yielding a transportation problem in the fuzzy sense based on statistical data.
Abstract: Starting from the crisp transportation problem, we fuzzify the amount of supply a_i of the ith origin and the amount of demand b_j of the jth destination, using level λ fuzzy numbers and level (λ, ρ) interval-valued fuzzy numbers for a_i and b_j in the constraints. This yields a transportation problem in the fuzzy sense. We also incorporate statistical concepts: corresponding to the (1-α)×100% statistical confidence intervals of the amounts of supply and demand, we use level (1-β, 1-α) interval-valued fuzzy numbers to fuzzify supply and demand in the constraints. We then obtain a transportation problem in the fuzzy sense based on statistical data.

42 citations


Journal Article
TL;DR: This paper fuzzifies the storing cost a, backorder cost b, cost of placing an order c, total demand r, order quantity q, and shortage quantity s as triangular fuzzy numbers in the total cost, thereby obtaining the fuzzy total cost.
Abstract: In this paper, we consider a fuzzy inventory model with backorder. First, we fuzzify the storing cost a, backorder cost b, cost of placing an order c, total demand r, order quantity q, and shortage quantity s as triangular fuzzy numbers in the total cost. From these, we obtain the fuzzy total cost. Using the signed distance method to defuzzify, we get the estimate of the total cost in the fuzzy sense. Two special cases of the optimal solutions, obtained by fuzzifying the storage quantity and order quantity as triangular fuzzy numbers, are treated numerically with the Nelder-Mead algorithm.

39 citations
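
As a hedged illustration of the defuzzification step: the signed distance of a triangular fuzzy number (l, m, u) measured from zero is (l + 2m + u)/4, and the resulting crisp estimate of the total cost can be minimized with the Nelder-Mead method. The cost function below is the standard backorder EOQ form and the parameter values are invented; the paper defuzzifies the fuzzy total cost itself rather than the inputs, so this is only a sketch.

```python
# Signed-distance defuzzification of triangular fuzzy parameters, followed by
# Nelder-Mead minimization of the crisp backorder EOQ total cost.
import numpy as np
from scipy.optimize import minimize

def signed_distance(l, m, u):
    # Signed distance of triangular fuzzy number (l, m, u) from 0.
    return (l + 2 * m + u) / 4.0

# Triangular fuzzy parameters (l, m, u), defuzzified up front for simplicity.
a = signed_distance(4.5, 5.0, 5.8)          # storing (holding) cost
b = signed_distance(9.0, 10.0, 11.5)        # backorder cost
c = signed_distance(90.0, 100.0, 108.0)     # cost of placing an order
r = signed_distance(950.0, 1000.0, 1080.0)  # total demand

def total_cost(x):
    q, s = x
    if q <= 0 or s < 0 or s > q:
        return np.inf  # keep the search inside the feasible region
    return a * (q - s) ** 2 / (2 * q) + b * s ** 2 / (2 * q) + c * r / q

res = minimize(total_cost, x0=np.array([100.0, 10.0]), method="Nelder-Mead")
print(res.x, res.fun)
```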


Journal ArticleDOI
TL;DR: A group key management architecture and key agreement protocols for secure communication in mobile ad-hoc wireless networks overseen by Unmanned Aerial Vehicles (UAVs) are described and the Implicitly Certified Public Keys method is used, which reduces the overhead of the certificate validation checking process and improves computational efficiency.
Abstract: In recent years, mobile ad-hoc networks have received a great deal of attention in both academia and industry because they provide anytime-anywhere networking services. As wireless networks are rapidly deployed in the future, a secure wireless environment will be mandatory. In this paper, we describe a group key management architecture and key agreement protocols for secure communication in mobile ad-hoc wireless networks (MANETs) overseen by Unmanned Aerial Vehicles (UAVs). We use the Implicitly Certified Public Keys method, which reduces the overhead of the certificate validation checking process and improves computational efficiency. The architecture uses a two-layered key management approach, where a group of nodes is divided into: 1) cell groups consisting of ground nodes, and 2) control groups consisting of cell group managers. The chief benefit of this approach is that the effects of a membership change are restricted to the single cell group.

35 citations


Journal Article
TL;DR: This method has been applied to the western part of the Algerian power network, and the results have been found to be satisfactory compared with other results obtained using classical methods.
Abstract: A genetic algorithm is used to solve an economic dispatch problem. The chromosome contains only the encoding of a normalized incremental cost system. Therefore, the total number of bits of a chromosome is entirely independent of the number of units. In the first case, the transmission line losses are calculated using the Newton-Raphson method and kept constant. In the second case, the transmission line losses are considered as a linear function of the real generated power. The coefficients are calculated using the Gauss-Seidel method. This method has been applied to the western part of the Algerian power network, and the results have been found to be satisfactory compared with other results obtained using classical methods.

32 citations
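
A sketch of the encoding idea described above: the chromosome holds only a normalized system incremental cost, so its bit length is independent of the number of units, and each unit's output is recovered from the equal-incremental-cost condition for quadratic fuel costs. The coefficients, GA settings, and the omission of transmission losses are illustrative assumptions, not the paper's data.

```python
# GA over a single normalized incremental cost (lambda); unit outputs are
# decoded from lambda, so the chromosome length is independent of unit count.
import random

A = [0.008, 0.009, 0.007]   # quadratic cost coefficients a_i
B = [7.0, 6.3, 6.8]         # linear cost coefficients b_i
PMIN, PMAX = [10.0] * 3, [125.0] * 3
DEMAND = 210.0
LAM_MIN, LAM_MAX = 6.0, 10.0
BITS = 16

def decode(bits):
    # Normalized incremental cost in [0, 1] -> lambda -> unit outputs.
    frac = int("".join(map(str, bits)), 2) / (2**BITS - 1)
    lam = LAM_MIN + frac * (LAM_MAX - LAM_MIN)
    return [min(max((lam - b) / (2 * a), lo), hi)
            for a, b, lo, hi in zip(A, B, PMIN, PMAX)]

def fitness(bits):
    p = decode(bits)
    cost = sum(a * x * x + b * x for a, b, x in zip(A, B, p))
    return cost + 1000.0 * abs(sum(p) - DEMAND)  # penalize power mismatch

random.seed(0)
pop = [[random.randint(0, 1) for _ in range(BITS)] for _ in range(40)]
for _ in range(100):
    pop.sort(key=fitness)
    parents = pop[:20]
    children = []
    for _ in range(20):
        p1, p2 = random.sample(parents, 2)
        cut = random.randrange(1, BITS)
        child = p1[:cut] + p2[cut:]          # one-point crossover
        i = random.randrange(BITS)
        child[i] ^= random.random() < 0.05   # occasional bit-flip mutation
        children.append(child)
    pop = parents + children
best = min(pop, key=fitness)
print(decode(best), fitness(best))
```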


Journal Article
TL;DR: This work presents linkage algorithms that can be used to discover patterns (such as confused concepts, substitute concepts, and hidden wrong concepts) in concept maps to support assessment; teachers can use the discovered patterns not only to become aware of students' conceptions, but also to improve those conceptions efficiently.
Abstract: Concept maps have been adopted extensively in teaching and assessment. Assessment schemes, including the closeness index and the N-G method, have also been widely applied to evaluate the quality of students' concept maps. Teachers must make great efforts to evaluate students' concept maps, because present concept map assessment schemes do not reveal ways to help them improve such maps. Additionally, teachers cannot easily provide constructive suggestions to students to improve their learning, particularly when concept maps incorporate many concepts and links. This work presents linkage algorithms that can be used to discover the patterns (such as confused concepts, substitute concepts, and hidden wrong concepts) in concept maps to support assessment. Teachers can use the discovered patterns not only to become aware of the conceptions of students, but also to improve students' conceptions efficiently.

31 citations


Journal ArticleDOI
TL;DR: A destination-oriented representation for chromosomes, three general genetic operators (selection, crossover, and mutation), four types of operators (Chromosome Crossover, Individual Crossover, Chromosome Mutation, and Individual Mutation), and four mutation heuristics are employed in the GA method, and experiments show that the solution model can obtain a near optimal solution.
Abstract: Because optical WDM networks will become a realistic choice for building backbones, multicasting in the WDM network should be supported for various network applications. In this paper, a new multicast problem, the Multicast Routing under Delay Constraint Problem (MRDCP), which routes a request with a delay bound to all destinations in a WDM network with different light splitting, is solved by genetic algorithms (GAs), where different light splitting means that nodes in the network can transmit one copy or multiple copies to other nodes by using the same wavelength. The MRDCP can be reduced to the Minimal Steiner Tree Problem (MSTP), which has been shown to be NP-Complete. We propose a destination-oriented representation for chromosomes, three general genetic operators (selection, crossover, and mutation), and four types of operators (Chromosome Crossover, Individual Crossover, Chromosome Mutation, and Individual Mutation). Four mutation heuristics (Random Mutation (RM), Cost First Mutation (CFM), Delay First Mutation (DFM), and Hybrid Mutation (HM)) are employed in the GA method. Finally, experimental results show that our solution model can obtain a near optimal solution.

30 citations
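
A toy sketch of a destination-oriented representation, under our own assumption (not necessarily the authors') that each gene selects one precomputed candidate path per destination and decoding merges the chosen paths into a multicast subgraph evaluated under a delay bound. A GA would apply crossover and mutation to the gene vector; here the tiny search space is simply enumerated.

```python
# One gene per destination selects a candidate path; shared edges are costed
# once, and paths violating the delay bound are penalized.
import itertools

# edge -> (cost, delay) on a small undirected graph; source node 0.
EDGES = {(0, 1): (2, 1), (0, 2): (4, 1), (1, 2): (1, 1),
         (1, 3): (5, 2), (2, 3): (2, 3), (2, 4): (3, 1), (3, 4): (1, 1)}
def ed(u, v): return EDGES[(u, v)] if (u, v) in EDGES else EDGES[(v, u)]

DESTS = [3, 4]
CANDS = {3: [[0, 1, 3], [0, 2, 3], [0, 1, 2, 3]],
         4: [[0, 2, 4], [0, 1, 2, 4], [0, 1, 3, 4]]}
DELAY_BOUND = 5

def evaluate(genes):
    used, penalty = set(), 0
    for d, g in zip(DESTS, genes):
        path = CANDS[d][g]
        delay = sum(ed(u, v)[1] for u, v in zip(path, path[1:]))
        if delay > DELAY_BOUND:
            penalty += 100
        used.update(tuple(sorted(e)) for e in zip(path, path[1:]))
    cost = sum(ed(u, v)[0] for u, v in used)  # shared edges counted once
    return cost + penalty

best = min(itertools.product(*(range(len(CANDS[d])) for d in DESTS)),
           key=evaluate)
print(best, evaluate(best))
```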


Journal Article
TL;DR: A web-based computerized system was designed and implemented based on the enhanced S-P model, which can be used to diagnose both time dependent information and problem solving abilities with respect to test items and test-takers.
Abstract: The cognitive diagnostic test can be used to understand the learning effects (such as strengths and weaknesses) of learners for a specific subject area. Based on the evaluation results of the diagnostic test, instructors may suggest or provide additional material on the subject area to students who do not meet the requirements. The S-P (Student-Problem) model has been used for this purpose for a long time. However, the current S-P model pays little attention to a student's response time for each test item or for the entire question set during the test. The student's response time for each test item can be an important factor for instructors wanting to diagnose each student's individual ability in problem solving. Also, there are few computerized diagnostic test analysis systems available that are designed to support both text-diagram-based and multimedia-based presentation test items. In this research, we incorporate the response time, difficulty index, and discrimination index of each test item into an S-P model during the analysis. Specifically, we employ two terms: 1) the nimbleness of thinking of a student, which can be measured based on the response time for answering each test item, and 2) the problem solving ability of a student, which can be measured based on the student's ability to solve adaptive-type questions with various difficulty levels and discrimination powers. With the incorporation of these parameters, an enhanced S-P model is presented. It can be used to diagnose both time dependent information and problem solving abilities with respect to test items and test-takers. A web-based computerized system was designed and implemented based on the enhanced S-P model for both text-diagram type presentation test items and multimedia type presentation test items. Practical examples were investigated and experimental studies conducted using the cognitive diagnostic computerized system to demonstrate the rationality and applicability of the proposed enhanced S-P model.

Journal ArticleDOI
TL;DR: A novel forecasting technique using a hybrid BPNN-weighted Grey-CLSP (BWGC) prediction, which employs a back-propagation neural net (BPNN) to automatically adjust a linear combination of GM(1, 1|α) prediction and the cumulated 3-point least squared linear prediction (C3LPS), is presented to resolve the overshooting problem of grey prediction.
Abstract: Conventional GM(1, 1|α) prediction always produces a huge singleton residual error around the turning-point region of a time series; this phenomenon is called overshooting. A novel forecasting technique using a hybrid BPNN-weighted Grey-CLSP (BWGC) prediction that employs a back-propagation neural net (BPNN) to automatically adjust a linear combination of GM(1, 1|α) prediction and the cumulated 3-point least squared linear prediction (C3LPS) is presented herein to resolve this overshooting problem. This is because utilizing an underestimated output from C3LPS to offset an overshot predicted output from the grey prediction will dramatically reduce the large residual error. This model exhibits a smoothing effect on the forecast to yield a better accuracy for non-periodic short-term prediction. A three-layer BPNN with a 5×14×2 multilayer-perceptron structure is used to tune the weights for both models. This approach was verified to be suitable not only for a stochastic type of prediction (forecasting international stock price indices) but also for an inertia type of prediction (forecasting the path of a typhoon).
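
A rough sketch of the hybrid combination: a GM(1,1) grey forecast and a 3-point least-squares linear extrapolation (standing in for C3LPS) are blended linearly. In the paper a BPNN adapts the combination weights sample by sample; the fixed constants below are an assumption for illustration.

```python
# GM(1,1) one-step forecast plus 3-point least-squares extrapolation, blended.
import numpy as np

def gm11_next(x0):
    # Classic GM(1,1) one-step-ahead forecast for a positive series.
    x0 = np.asarray(x0, dtype=float)
    n = len(x0)
    x1 = np.cumsum(x0)
    z = 0.5 * (x1[1:] + x1[:-1])            # background values
    B = np.column_stack([-z, np.ones(n - 1)])
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]
    xhat1 = lambda k: (x0[0] - b / a) * np.exp(-a * k) + b / a  # k = 0,1,...
    return xhat1(n) - xhat1(n - 1)

def c3lps_next(x0):
    # Least-squares line through the last three points, extrapolated one step.
    y = np.asarray(x0[-3:], dtype=float)
    slope, intercept = np.polyfit(np.arange(3.0), y, 1)
    return slope * 3 + intercept

def bwgc_next(x0, w_grey=0.6, w_lin=0.4):
    # The BPNN of the paper would adapt these weights; fixed here.
    return w_grey * gm11_next(x0) + w_lin * c3lps_next(x0)

series = [100, 104, 109, 118, 125, 131]
print(gm11_next(series), c3lps_next(series), bwgc_next(series))
```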

Journal ArticleDOI
TL;DR: This study employs a kind of Grid Computing technology, called the "Data Grid", to integrate idle computer resources in enterprises into e-learning platforms, thus eliminating the need to purchase costly high-level servers and other equipment.
Abstract: The overall popularity of the Internet has helped e-learning become a hot method for learning in recent years. Over the Internet, learners can freely absorb new knowledge without restrictions on time or place. Many companies have adopted e-learning to train their employees. An e-learning system can make an enterprise more competitive by increasing the knowledge of its employees. E-learning has been shown to have impressive potential in e-commerce. At present, most e-learning environment architectures use single computers or servers as their structural foundations. As soon as their work loads increase, their software and hardware must be updated or renewed. This is a big burden on organizations that lack sufficient funds. Thus, in this study we employ a kind of Grid Computing technology, called the "Data Grid", to integrate idle computer resources in enterprises into e-learning platforms, thus eliminating the need to purchase costly high-level servers and other equipment.

Journal Article
TL;DR: Comparisons between the subband-energy features extracted from the wavelet transform and the conventional DCT, using the Brodatz texture database, demonstrate that the proposed method offers the best textural pattern retrieval accuracy and yields a much higher classification rate.
Abstract: The multiresolution wavelet transform is an effective procedure in texture analysis. However, many images are still compressed by methods based on the discrete cosine transform (DCT). Thus, decompression via the inverse DCT is required before textural features based on the wavelet transform can be obtained for a DCT-coded image. This investigation adopts multiresolution reordered features for texture analysis. The proposed features are generated directly from the DCT coefficients of the encoded image. Comparisons between the subband-energy features extracted from the wavelet transform and from the conventional DCT, using the Brodatz texture database, demonstrate that the proposed method offers the best textural pattern retrieval accuracy and yields a much higher classification rate. The proposed DCT features are expected to be very useful and efficient in retrieving and classifying texture patterns in large DCT-coded image databases.
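
An illustrative extractor in the spirit of the method above: 8×8 block DCT coefficients are grouped into wavelet-like subbands by dyadic frequency zone, and the mean energy of each zone over all blocks serves as a texture feature. The exact multiresolution reordering used in the paper may differ; the zone map below is an assumption.

```python
# Subband-energy texture features computed directly from 8x8 block DCTs.
import numpy as np
from scipy.fftpack import dct

def block_dct2(block):
    return dct(dct(block, axis=0, norm="ortho"), axis=1, norm="ortho")

def dct_subband_energies(img):
    h, w = (d - d % 8 for d in img.shape)
    img = img[:h, :w].astype(float)
    # Zone of coefficient (u, v): 0 = DC, then dyadic rings like wavelet levels.
    zone = np.zeros((8, 8), dtype=int)
    for u in range(8):
        for v in range(8):
            m = max(u, v)
            zone[u, v] = 0 if m == 0 else int(np.log2(m)) + 1
    energies = np.zeros(zone.max() + 1)
    nblocks = 0
    for i in range(0, h, 8):
        for j in range(0, w, 8):
            c = block_dct2(img[i:i+8, j:j+8]) ** 2
            for z in range(len(energies)):
                energies[z] += c[zone == z].sum()
            nblocks += 1
    return energies / nblocks

img = np.random.default_rng(0).integers(0, 256, (64, 64))
print(dct_subband_energies(img))
```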

Journal Article
TL;DR: In this article, the authors propose a method for real-time tracking of moving vehicles on highways; in addition to tracking regular cars, the method can also track a vehicle performing a lane change.
Abstract: We propose a method which can be used to perform real-time tracking of moving vehicles on highways. In addition to tracking regular cars, the proposed method can also track a vehicle performing a lane change. The proposed method consists of two stages: detection and tracking. In the detection stage, we use entropy-based features to check for the existence of vehicles. Then, we perform tracking based on the entropy features derived in the detection stage. Through a great number of experiments, we have demonstrated the efficiency as well as the effectiveness of the proposed system.
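
A minimal sketch of entropy-based detection: the Shannon entropy of the gray-level histogram is computed per window, and windows whose entropy exceeds a threshold are flagged. The window size, bin count, and threshold are illustrative assumptions, not the paper's features.

```python
# Flag image windows whose gray-level histogram entropy is high (textured).
import numpy as np

def window_entropy(win, bins=32):
    hist, _ = np.histogram(win, bins=bins, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]
    return -(p * np.log2(p)).sum()

def detect(frame, win=16, thresh=3.5):
    hits = []
    for i in range(0, frame.shape[0] - win + 1, win):
        for j in range(0, frame.shape[1] - win + 1, win):
            if window_entropy(frame[i:i+win, j:j+win]) > thresh:
                hits.append((i, j))
    return hits

rng = np.random.default_rng(0)
road = np.full((64, 64), 90, dtype=np.uint8)         # uniform pavement
road[20:36, 24:40] = rng.integers(0, 256, (16, 16))  # textured "vehicle"
print(detect(road))  # only the textured window should be reported
```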

Journal Article
TL;DR: It is proved that the matching degree between two trapezoid-shaped membership functions can be obtained without traversing all the elements in the universe of discourse.
Abstract: Fuzzy logic has been successfully applied in various fields, but these applications have usually suffered from the problem of low speed. Typically, calculation of the matching degree requires very high latency, which limits the overall inference speed. In this paper, we prove that the matching degree between two trapezoid-shaped membership functions can be obtained without traversing all the elements in the universe of discourse. Based on this analysis, we present an effective hardware unit that can be used to obtain the matching degree very quickly. Moreover, a pipelined parallel VLSI fuzzy inference processor is proposed to take advantage of our basic idea. The proposed hardware architecture has been implemented using 0.35μm process technology. To the best of our knowledge, our fuzzy inference processor is the only existing architecture that can tackle 64 rules with fuzzified inputs at a speed of 7 MFLIPS.
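
The key observation can be made concrete: because trapezoidal membership functions are piecewise linear, the sup-min matching degree is attained at a breakpoint or an edge crossing, so only a constant number of candidate points need evaluating, never a sweep over the universe of discourse. Below is a software sketch of this closed-form computation; the paper implements the idea as a hardware unit.

```python
# Sup-min matching degree of two trapezoids from breakpoints and crossings.
def mu(x, t):
    a, b, c, d = t  # feet a, d; shoulders b, c (height 1)
    if x < a or x > d:
        return 0.0
    if b <= x <= c:
        return 1.0
    return (x - a) / (b - a) if x < b else (d - x) / (d - c)

def matching_degree(t1, t2):
    # Candidates: all 8 breakpoints plus falling-vs-rising edge crossings.
    xs = list(t1) + list(t2)
    for (a, b, c, d), (e, f, g, h) in ((t1, t2), (t2, t1)):
        # falling edge of the first trapezoid vs rising edge of the second:
        # (d - x)/(d - c) = (x - e)/(f - e)  ->  height = (d - e)/denom
        denom = (d - c) + (f - e)
        if denom > 0:
            hgt = (d - e) / denom
            if 0.0 <= hgt <= 1.0:
                xs.append(d - hgt * (d - c))
    return max(min(mu(x, t1), mu(x, t2)) for x in xs)

print(matching_degree((0, 2, 4, 6), (3, 10, 12, 20)))  # edge crossing ~0.333
print(matching_degree((0, 1, 2, 3), (2.5, 4, 5, 6)))   # small overlap -> 0.2
print(matching_degree((0, 1, 5, 6), (2, 3, 4, 7)))     # plateaus meet -> 1.0
```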

Journal ArticleDOI
TL;DR: Simulation results show that the GPS-QHRA better optimizes the flooding overhead and the mean path length in high-mobility environments compared with the Zone Hierarchical Link State (ZHLS) algorithm, which partitions each zone into a square and does not adopt the cluster head concept.
Abstract: This work presents a novel GPS-based Quorum Hybrid Routing Algorithm (GPS-QHRA), which is a cluster-based protocol for cellular-based ad hoc wireless networks. Each node equipped with GPS knows in which zone it is located. Based on the results reported in [12], cellular-based management can achieve better behavior in reducing the number of flooding messages, better bandwidth management, and a smaller number of hops. A table-driven routing protocol is used for intra-cluster routing, and an on-demand routing protocol is used for inter-cluster routing. The node with the highest connectivity is selected as a cluster head in each zone to simulate the function of the Home Location Register (HLR) in a GSM system. It is called the Location Database Node (LDN). In the GPS-QHRA, LDNs are formed as part of a Uniform Quorum System (UQS), and they are disjoint and distinguishable from each other. This algorithm is divided into three parts: (i) a GPS-based routing algorithm, (ii) a mobility management scheme that searches for a new substitute node while maintaining the LDNs, and (iii) a fault tolerance strategy that is initiated under specific circumstances. Simulation results show that the GPS-QHRA better optimizes the flooding overhead and the mean path length in high-mobility environments compared with the Zone Hierarchical Link State (ZHLS) algorithm, which partitions each zone into a square and does not adopt the cluster head concept.

Journal ArticleDOI
TL;DR: This work simplifies Heather et al.'s tagging scheme by combining all the tags inside each encrypted component into a single tag and by omitting the tags on the outermost level, which reduces the sizes of messages in the security protocol.
Abstract: A type flaw attack on a security protocol is an attack in which a field in a message that was originally intended to have one type is subsequently interpreted as having another type. Heather et al. proved that type flaw attacks can be prevented by tagging each field with information that indicates its intended type. We simplify Heather et al.'s tagging scheme by combining all the tags inside each encrypted component into a single tag and by omitting the tags on the outermost level. The simplification process reduces the sizes of messages in the security protocol. We also formally prove, with the strand space method, that our simplified tagging scheme is as secure as Heather et al.'s.

Journal Article
TL;DR: It is demonstrated that the SCFNN-based digital channel equalizer possesses the ability to recover the channel distortion effectively, and that its performance can be close to that of the optimal Bayesian solution and ANFIS.
Abstract: The design of a self-constructing fuzzy neural network (SCFNN)-based digital channel equalizer is proposed in this paper. We demonstrate that the SCFNN-based digital channel equalizer possesses the ability to recover the channel distortion effectively. The performance of SCFNN is compared with that of the adaptive network-based fuzzy inference system (ANFIS) and the optimal Bayesian solution. Simulations were carried out in both real-valued and complex-valued nonlinear channels to demonstrate the flexibility of the proposed equalizer. The experimental results show that the performance of SCFNN can be close to that of the optimal Bayesian solution and ANFIS, while the hardware requirement of the trained SCFNN-based equalizer is much lower.

Journal ArticleDOI
TL;DR: The results differed from those obtained in a 'pure' e-learning setting; online homework performance was the only item that significantly accounted for the learning effect, which is a natural result of learning procedural knowledge.
Abstract: Some teachers adopt a blended learning model that combines traditional classroom teaching and an e-learning system. In this model, a teacher may teach the first few sessions in a classroom. After the students have established a general idea of the course, they can then proceed to on-line learning and interaction. This study aimed to discover the relationship between learning records and the learning effect in a blended e-learning environment through multiple regression analysis. The learning records considered included the grades for online assignments, reading time, the total number of login times, and the total number of online discussions. The learning effect was defined as the total grade for two monthly exams and one final exam. To collect learning record data, an e-learning system was designed that integrates the data collection functionality of learning activities with a teaching material managing module so that the learning records of all the learners are recorded automatically. With this system, an experiment was conducted on a program design course in a local high school. The results differed from those obtained in a 'pure' e-learning setting, and the online homework performance was the only item that significantly accounted for the learning effect, which is a natural result of learning procedural knowledge.
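
A minimal sketch of the analysis: ordinary least squares regresses the total exam grade on the four learning-record variables. The data below are fabricated so that only homework carries signal, mirroring the reported finding; with real data one would also test per-predictor significance.

```python
# OLS regression of exam grade on four learning-record predictors.
import numpy as np

rng = np.random.default_rng(0)
n = 40
homework = rng.uniform(40, 100, n)       # online assignment grades
reading = rng.uniform(0, 50, n)          # reading time (hours)
logins = rng.integers(5, 120, n)         # total login count
posts = rng.integers(0, 60, n)           # online discussion count
grade = 0.8 * homework + rng.normal(0, 8, n) + 10  # signal via homework only

X = np.column_stack([np.ones(n), homework, reading, logins, posts])
beta, *_ = np.linalg.lstsq(X, grade, rcond=None)
pred = X @ beta
r2 = 1 - ((grade - pred) ** 2).sum() / ((grade - grade.mean()) ** 2).sum()
print("coefficients:", beta.round(3), "R^2:", round(r2, 3))
```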

Journal Article
TL;DR: The results show that a two-layer perceptron performs comparably to an NN-like standard pattern classifier in recognizing unconstrained handwritten numerals, while being computationally more cost effective.
Abstract: The work presents the results of an investigation conducted to compare the performances of the Multi Layer Perceptron (MLP) and the Nearest Neighbor (NN) classifier for the handwritten numeral recognition problem. The comparison is drawn in terms of the recognition performance and the computational requirements of the individual classifiers. The results show that a two-layer perceptron performs comparably to an NN-like standard pattern classifier in recognizing unconstrained handwritten numerals, while being computationally more cost effective. The work signifies the usefulness of the MLP as a standard pattern classifier for recognition of handwritten numerals with a large feature set of 96 features.

Journal ArticleDOI
TL;DR: This paper proposes a new approach to the Fixed Channel Assignment (FCA) problem, preserving the co-site channel constraint throughout the algorithm and adopting a fine-tuning procedure to escape from a local minimum to reduce the overall execution time and improve the convergence rate.
Abstract: A critical task in the design of a cellular radio network is to determine a spectrum-efficient and conflict-free allocation of channels among the cells. In this paper, we propose a new approach to the Fixed Channel Assignment (FCA) problem. By preserving the co-site channel constraint throughout our algorithm and adopting a fine-tuning procedure to escape from local minima, we reduce the overall execution time and improve the convergence rate. Simulation results show that our algorithm achieves a very high rate of convergence to solutions for eight benchmark problems. Furthermore, the number of iterations our algorithm requires is smaller than in previously reported results.

Journal Article
TL;DR: This paper proposes simple parallel algorithms, using only the parallel prefix and suffix computations and the Euler tour technique, for the all-pair shortest path query problem on interval and circular-arc graphs.
Abstract: In this paper, we consider some shortest path related problems on interval and circular-arc graphs. For the all-pair shortest path query problem on interval and circular-arc graphs, instead of using sophisticated techniques, we propose simple parallel algorithms using only the parallel prefix and suffix computations and the Euler tour technique. Our preprocessing algorithms run in O(log n) time using O(n/log n) processors. Using the data structure constructed by our preprocessing algorithms, a query for the length of a shortest path between any two vertices can be answered in constant time by a single processor. For the hinge vertex problem on interval graphs, we propose an O(log n) time algorithm using O(n/log n) processors, which leads to a linear time sequential algorithm. Our algorithms work on the EREW PRAM model.

Journal ArticleDOI
TL;DR: The system has been used in actual practice for a graduate level course on wireless mobile computing and special attention has been paid to the task of entering text on PDAs, efficient use of the screen real estate, dynamics among students, privacy and ease of use issues.
Abstract: Collaborative note taking enables students in a class to take notes on their Personal Digital Assistants (PDAs) and laptops and share them with their "study group" in real-time. Students also receive the instructor's slides in real-time as they are displayed by the instructor. As the individual members of the group take notes pertaining to the slide being presented, their notes are automatically sent to all members of the group. The system has been used in actual practice for a graduate-level course on wireless mobile computing. In developing this system, special attention has been paid to the task of entering text on PDAs, efficient use of the screen real estate, dynamics among students, privacy, and ease of use. We describe our system and report the findings of this user study.

Journal ArticleDOI
TL;DR: This paper presents a decision fusion technique for a bimodal biometric verification system that makes use of facial and speech biometrics; its performance is evaluated and compared with that of other classical classification approaches, such as the sum rule and the Multilayer Perceptron.
Abstract: Identity verification systems that use a single biometric modality always have to contend with sensor noise and limitations of the feature extractor and matcher, while combining information from different biometric modalities may well provide higher and more consistent performance levels. However, an intelligent scheme is required to fuse the decisions produced by the individual sensors. This paper presents a decision fusion technique for a bimodal biometric verification system that makes use of facial and speech biometrics. The decision fusion schemes considered have simple Bayesian structures (SBS) that particularize the univariate Gaussian density function, the Beta density function, or Parzen window density estimation. SBS has advantages in terms of computation speed, storage space, and its open framework. The performance of SBS is evaluated and compared with that of other classical classification approaches, such as the sum rule and the Multilayer Perceptron, on a bimodal database.
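
A sketch of the simple Bayesian structure with univariate Gaussian densities: each modality's genuine and impostor match scores are modeled as Gaussians, and the fused decision thresholds the product of per-modality likelihood ratios. The score distributions and threshold below are synthetic assumptions.

```python
# Naive-Bayes fusion of two modality scores via Gaussian likelihood ratios.
import numpy as np

def gauss_pdf(x, mean, std):
    return np.exp(-0.5 * ((x - mean) / std) ** 2) / (std * np.sqrt(2 * np.pi))

class GaussianFusion:
    def fit(self, genuine, impostor):
        # genuine/impostor: arrays of shape (n_samples, n_modalities)
        self.gm, self.gs = genuine.mean(0), genuine.std(0)
        self.im, self.is_ = impostor.mean(0), impostor.std(0)
        return self

    def accept(self, scores, threshold=1.0):
        lr = np.prod(gauss_pdf(scores, self.gm, self.gs) /
                     gauss_pdf(scores, self.im, self.is_))
        return lr > threshold

rng = np.random.default_rng(0)
genuine = rng.normal([0.8, 0.7], [0.10, 0.15], (200, 2))  # face, speech
impostor = rng.normal([0.4, 0.3], [0.12, 0.15], (200, 2))
fuser = GaussianFusion().fit(genuine, impostor)
print(fuser.accept(np.array([0.75, 0.6])), fuser.accept(np.array([0.45, 0.35])))
```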

Journal Article
TL;DR: A management approach, called the Level-wise Content Management Scheme (LCMS), that can be used to efficiently maintain, search, and retrieve learning contents from a SCORM compliant LOR.
Abstract: With the rapid development of the Internet, e-learning systems have become more and more popular. For sharing and reusing teaching materials across different e-learning systems, the Sharable Content Object Reference Model (SCORM) has become the most popular of the existing international standards. In an e-learning system, teaching materials are usually stored in a database, called the Learning Object Repository (LOR). In the LOR, a huge amount of SCORM teaching materials, including associated learning objects, will result in management problems in a wired/wireless environment. Recently, the SCORM organization has focused on devising ways to efficiently maintain, search, and retrieve desired learning objects in LORs for users. This effort is referred to as the Content Object Repository Discovery and Resolution Architecture (CORDRA). In this paper, we propose a management approach, called the Level-wise Content Management Scheme (LCMS), that can be used to efficiently maintain, search, and retrieve learning contents from a SCORM compliant LOR. LCMS includes two phases: the Construction phase and the Search phase. In the former, the content structure of SCORM teaching materials (Content Package) is first transformed into a tree-like structure, called a Content Tree (CT), to represent each piece of teaching material. Based on Content Trees (CTs), the proposed Level-wise Content Clustering Algorithm (LCCAlg) then creates a multistage graph showing relationships among learning objects (LOs), i.e., a Directed Acyclic Graph (DAG), called the Level-wise Content Clustering Graph (LCCG). The LCCAlg determines the relationships among LOs in different teaching materials by clustering all of the LOs at each level from bottom to top, according to a similarity measure. Moreover, a maintenance strategy is employed to rebuild the LCCG if necessary by monitoring the condition of each node within the LCCG. The latter phase employs the LCCG Content Searching Algorithm (LCCG-CSAlg) to traverse the LCCG and retrieve desired learning content with both general and specific LOs, according to queries sent by users in the wired/wireless environment. Some experiments have been conducted to test the proposed scheme, and the results are reported here.

Journal ArticleDOI
TL;DR: The LLH_m (Level-by-level and List scheduling using the Harmonic system partitioning scheme) algorithm presented in this paper schedules precedence constrained parallel tasks on multiprocessors with noncontiguous processor allocation.
Abstract: We present an algorithm for scheduling precedence constrained parallel tasks on multiprocessors with noncontiguous processor allocation. The algorithm is called LLH_m (Level-by-level and List scheduling using the Harmonic system partitioning scheme), where m ≥ 1 is a positive integer, which is a parameter for the harmonic system partitioning scheme. Three basic techniques are employed in algorithm LLH_m. First, a task graph is divided into levels, and tasks are scheduled level by level to follow the precedence constraints. Second, tasks in the same level are scheduled using algorithm H_m, developed in [16] for scheduling independent parallel tasks. The list scheduling method is used to implement algorithm H_m. Third, the harmonic system partitioning scheme is used for processor allocation. It is shown here that for wide task graphs and some common task size distributions, as the size of a computation and m increase, and as the task sizes become smaller, the average-case performance ratio of algorithm LLH_m approaches one.
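
A sketch of the level-by-level structure of LLH_m: the task graph is split into precedence levels and each level is list-scheduled independently. The harmonic system partitioning of processors is replaced here by a simple greedy processor-count allocation, so this illustrates only the first two of the three techniques; the task data are invented.

```python
# Level decomposition of a task DAG, then greedy list scheduling per level.
import heapq
from collections import defaultdict

def levels(tasks, preds):
    lvl, out = {}, defaultdict(list)
    for t in tasks:  # tasks assumed topologically ordered
        lvl[t] = 1 + max((lvl[p] for p in preds.get(t, [])), default=0)
        out[lvl[t]].append(t)
    return [out[k] for k in sorted(out)]

def list_schedule(level, size, time, P):
    # Greedy list scheduling of independent parallel tasks on P processors;
    # free: heap of (finish_time, processor_count) of running tasks.
    # Each task is assumed to need at most P processors.
    finish, free, busy, now = 0.0, [], 0, 0.0
    for t in sorted(level, key=lambda t: -size[t]):  # largest-first list
        while busy + size[t] > P:                    # wait for processors
            now, done = heapq.heappop(free)
            busy -= done
        heapq.heappush(free, (now + time[t], size[t]))
        busy += size[t]
        finish = max(finish, now + time[t])
    return finish

tasks = ["a", "b", "c", "d", "e"]
preds = {"d": ["a", "b"], "e": ["c"]}
size = {"a": 2, "b": 3, "c": 2, "d": 4, "e": 1}   # processors per task
time = {"a": 3, "b": 2, "c": 4, "d": 1, "e": 2}   # execution times
makespan = sum(list_schedule(lv, size, time, P=4)
               for lv in levels(tasks, preds))     # levels run one by one
print(makespan)
```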

Journal Article
TL;DR: This paper proposes a new Bluetooth scatternet formation algorithm and its routing algorithm, and uses the PARK mode to give Bluetooth devices more chances to link with each other in order to build a better-connected scatternet.
Abstract: The emerging Bluetooth technology is the best-known PAN (Personal Area Network) technology. It still has some issues left open in the current specification. Among them, scatternet formation and routing are two major issues. In this paper, we propose a new Bluetooth scatternet formation algorithm and its routing algorithm. Our method constructs and maintains a scatternet in a distributed way and does not need all nodes to be in the transmission range of each other. We use the PARK mode to give Bluetooth devices more chances to link with each other in order to build a better-connected scatternet. We also propose an on-demand routing algorithm for the constructed scatternet. Experimental results show that the proposed algorithms are quite efficient and effective.

Journal Article
TL;DR: The p-Learning Grid as discussed by the authors is a service-oriented approach based on a pervasive learning grid for solving difficulties associated with the sharing of learning resources distributed on different e-Learning platforms.
Abstract: This work proposes the p-Learning grid, a service-oriented approach, based on a pervasive learning grid, for solving difficulties associated with the sharing of learning resources distributed on different e-Learning platforms. The p-Learning grid not only enables collaboration and effective reuse of learning objects but also supports learning anytime, anywhere. Since the WSDL of web services remains poorly defined and has poor dispatch ability in service-level agreements for resource description, distributed resources cannot be effectively managed, and service collaboration cannot be achieved. Our grid service was generated based on web services and grid technology, which support good descriptions of services and management mechanisms. The proposed p-Learning Grid is based on such grid service technologies as Globus Toolkit 3 [28], the Grid Services Flow Language (GSFL) [17], etc., along with mobile devices and relevant technologies for supporting a pervasive and collaborative system in which resources can be effectively managed and shared. This study used three self-developed learning platforms, integrated with GT3, to provide the grid engine used to implement the entire system. The experiment involved the creation of English learning objects accessible via Nokia, Sony Ericsson, and Motorola mobile phones.

Journal ArticleDOI
TL;DR: A courseware design tool based on the theory of concept and influence diagrams, coupled with a user-friendly interface; a transformation algorithm is also included for conformance with e-learning standards.
Abstract: Without a systematic assessment mechanism, it is hard for teachers to design e-learning courseware and assess students' on-line learning behaviors. This is a common issue found in most authoring tools available all over the world. Furthermore, some authoring tools are powerful but not standards compliant. To overcome these drawbacks, we propose a courseware design tool based on the theory of concept and influence diagrams, coupled with a user-friendly interface. A transformation algorithm is also included for conformance with e-learning standards. With the proposed mechanism and tools, the advantages of the courseware diagram are preserved. Students' learning performance can be improved by taking different levels of remedial courses based on student performance with a systematically built course flow chart. Furthermore, students' learning results can be maximized by analyzing their learning performance for course content adjustment. More importantly, SCORM compliant courseware can be generated from the courseware diagram directly using the proposed algorithms.