
Showing papers in "Journal of Zhejiang University Science C in 2016"


Journal ArticleDOI
TL;DR: A systematic interpretation of published literature is presented to describe the state of the art of the three main areas in the MTD field, namely, MTD theory, MTD strategy, and MTD evaluation.
Abstract: Moving target defense (MTD) has emerged as one of the game-changing themes to alter the asymmetric situation between attacks and defenses in cyber-security. Numerous related works involving several facets of MTD have been published. However, comprehensive analyses and research on MTD are still absent. In this paper, we present a survey on MTD technologies to scientifically and systematically introduce, categorize, and summarize the existing research works in this field. First, a new security model is introduced to describe the changes in the traditional defense paradigm and security model caused by the introduction of MTD. A function-and-movement model is provided to give a panoramic overview of the different perspectives for understanding the existing MTD research works. Then a systematic interpretation of the published literature is presented to describe the state of the art of the three main areas in the MTD field, namely, MTD theory, MTD strategy, and MTD evaluation. Specifically, in the area of MTD strategy, the common characteristics shared by MTD strategies to improve system security and effectiveness are identified and extrapolated. Thereafter, the methods used to implement these characteristics are summarized. Moreover, the MTD strategies are classified into three types according to their specific goals, and the necessary and sufficient conditions of each type to create effective MTD strategies are then summarized, which are typically one or more of the aforementioned characteristics. Finally, we provide a number of observations on future directions in this field, which can be helpful for subsequent researchers.

85 citations


Journal ArticleDOI
TL;DR: For the first time, QCA-based designs of the reversible low-power odd parity generator and odd parity checker using the Feynman gate have been achieved in this study.
Abstract: Quantum-dot cellular automata (QCA) is an emerging area of research in reversible computing. It can be used to design nanoscale circuits. In nanocommunication, the detection and correction of errors in a received message is a major factor. Besides, device density and power dissipation are the key issues in the nanocommunication architecture. For the first time, QCA-based designs of the reversible low-power odd parity generator and odd parity checker using the Feynman gate have been achieved in this study. Using the proposed parity generator and parity checker circuit, a nanocommunication architecture is proposed. The detection of errors in the received message during transmission is also explored. The proposed QCA Feynman gate outshines the existing ones in terms of area, cell count, and delay. The quantum costs of the proposed conventional reversible circuits and their QCA layouts are calculated and compared, which establishes that the proposed QCA circuits have very low quantum cost compared to conventional designs. The energy dissipation by the layouts is estimated, which ensures the possibility of the QCA nano-device serving as an alternative platform for the implementation of reversible circuits. The stability of the proposed circuits under thermal randomness is analyzed, showing the operational efficiency of the circuits. The simulation results of the proposed design are validated against theoretical values, confirming the accuracy of the circuits. The proposed circuits can be used to design more complex low-power nanoscale lossless nanocommunication architectures such as nano-transmitters and nano-receivers.
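For orientation only: the Feynman gate is the reversible controlled-NOT, whose second output is an XOR, and cascading XORs yields an odd-parity bit. The sketch below shows the Boolean behaviour being realized, not the QCA layout itself.

```python
from functools import reduce

def feynman(a: int, b: int) -> tuple[int, int]:
    """Feynman (CNOT) gate: outputs (P, Q) = (A, A XOR B); reversible, since inputs are recoverable."""
    return a, a ^ b

def odd_parity_bit(bits: list[int]) -> int:
    """Odd-parity generator: choose the parity bit so the total number of 1s (data + parity) is odd."""
    return 1 ^ reduce(lambda x, y: x ^ y, bits)

def odd_parity_check(word: list[int]) -> bool:
    """Odd-parity checker: the received word is consistent if it contains an odd number of 1s."""
    return reduce(lambda x, y: x ^ y, word) == 1

data = [1, 0, 1]                       # 3-bit message
word = data + [odd_parity_bit(data)]   # append parity bit -> [1, 0, 1, 1]
assert odd_parity_check(word)          # consistent
word[1] ^= 1                           # single-bit error during transmission
assert not odd_parity_check(word)      # error detected
```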

64 citations


Journal ArticleDOI
TL;DR: The dolphin swarm algorithm is proposed, which possesses some great features, such as first-slow-then-fast convergence, periodic convergence, local-optimum-free, and no specific demand on benchmark functions.
Abstract: By adopting the distributed problem-solving strategy, swarm intelligence algorithms have been successfully applied to many optimization problems that are difficult to deal with using traditional methods. At present, there are many well-implemented algorithms, such as particle swarm optimization, genetic algorithm, artificial bee colony algorithm, and ant colony optimization. These algorithms have already shown favorable performances. However, with the objects becoming increasingly complex, it is becoming gradually more difficult for these algorithms to meet demands in terms of accuracy and time. Designing a new algorithm to seek better solutions for optimization problems is becoming increasingly essential. Dolphins have many noteworthy biological characteristics and living habits such as echolocation, information exchanges, cooperation, and division of labor. Combining these biological characteristics and living habits with swarm intelligence and bringing them into optimization problems, we propose a brand new algorithm named the ‘dolphin swarm algorithm’ in this paper. We also provide the definitions of the algorithm and specific descriptions of the four pivotal phases in the algorithm, which are the search phase, call phase, reception phase, and predation phase. Ten benchmark functions with different properties are tested using the dolphin swarm algorithm, particle swarm optimization, genetic algorithm, and artificial bee colony algorithm. The convergence rates and benchmark function results of these four algorithms are compared to verify the effectiveness of the dolphin swarm algorithm. The results show that in most cases, the dolphin swarm algorithm performs better. The dolphin swarm algorithm possesses some great features, such as first-slow-then-fast convergence, periodic convergence, local-optimum-free, and no specific demand on benchmark functions. Moreover, the dolphin swarm algorithm is particularly appropriate for optimization problems that involve more fitness-function calls and fewer individuals.
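A compact, illustrative skeleton of the four phases named in the abstract (search, call, reception, predation). The update rules and parameters below are simplified assumptions for illustration, not the paper's exact equations.

```python
import numpy as np

def sphere(x):                      # benchmark function to minimize
    return float(np.sum(x ** 2))

def dolphin_swarm(f, dim=5, n_dolphins=10, n_iter=100, search_tries=8,
                  search_radius=0.5, step=0.7, seed=0):
    """Illustrative four-phase skeleton: search / call / reception / predation."""
    rng = np.random.default_rng(seed)
    pos = rng.uniform(-5, 5, size=(n_dolphins, dim))   # current positions
    best_pos = pos.copy()                              # each dolphin's best-known prey location
    best_fit = np.array([f(p) for p in pos])

    for _ in range(n_iter):
        # Search phase: each dolphin probes several random directions (echolocation).
        for i in range(n_dolphins):
            probes = pos[i] + rng.normal(scale=search_radius, size=(search_tries, dim))
            fits = np.array([f(p) for p in probes])
            j = fits.argmin()
            if fits[j] < best_fit[i]:
                best_fit[i], best_pos[i] = fits[j], probes[j]
        # Call + reception phases: the best finder broadcasts its prey location, others receive it.
        swarm_best = best_pos[best_fit.argmin()]
        # Predation phase: dolphins close in on the exchanged prey location.
        pos += step * (swarm_best - pos) + rng.normal(scale=0.05, size=pos.shape)

    leader = best_fit.argmin()
    return best_pos[leader], best_fit[leader]

x_best, f_best = dolphin_swarm(sphere)
print(x_best, f_best)
```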

62 citations


Journal ArticleDOI
TL;DR: This paper proposes a framework for attribute reduction in interval-valued data from the viewpoint of information theory, and some information theory concepts, including entropy, conditional entropy, and joint entropy, are given in interval-valued information systems.
Abstract: Interval-valued data appear as a way to represent the uncertainty affecting the observed values. Dealing with interval-valued information systems is helpful to generalize the applications of rough set theory. Attribute reduction is a key issue in analysis of interval-valued data. Existing attribute reduction methods for single-valued data are unsuitable for interval-valued data. So far, there have been few studies on attribute reduction methods for interval-valued data. In this paper, we propose a framework for attribute reduction in interval-valued data from the viewpoint of information theory. Some information theory concepts, including entropy, conditional entropy, and joint entropy, are given in interval-valued information systems. Based on these concepts, we provide an information theory view for attribute reduction in interval-valued information systems. Consequently, attribute reduction algorithms are proposed. Experiments show that the proposed framework is effective for attribute reduction in interval-valued information systems.
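The paper defines entropy, conditional entropy, and joint entropy for interval-valued systems; the sketch below shows only the familiar single-valued (categorical) analogue of conditional-entropy-driven greedy reduction, for orientation, and does not reproduce the interval-valued construction.

```python
import math
from collections import Counter

def partition_entropy(table, attrs):
    """Shannon entropy of the partition induced by a set of attributes over the objects."""
    n = len(table)
    counts = Counter(tuple(row[a] for a in attrs) for row in table)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def conditional_entropy(table, attrs, decision):
    """H(decision | attrs) = H(attrs + {decision}) - H(attrs)."""
    return partition_entropy(table, attrs + [decision]) - partition_entropy(table, attrs)

def greedy_reduct(table, attrs, decision):
    """Add the attribute that most reduces H(decision | reduct) until no further gain."""
    reduct = []
    current = conditional_entropy(table, [], decision)
    while len(reduct) < len(attrs):
        gains = {a: current - conditional_entropy(table, reduct + [a], decision)
                 for a in attrs if a not in reduct}
        a, g = max(gains.items(), key=lambda kv: kv[1])
        if g <= 1e-12:
            break
        reduct.append(a)
        current -= g
    return reduct

# toy decision table: condition attributes a1..a3, decision d
table = [{"a1": 0, "a2": 1, "a3": 0, "d": 0},
         {"a1": 0, "a2": 0, "a3": 1, "d": 0},
         {"a1": 1, "a2": 1, "a3": 0, "d": 1},
         {"a1": 1, "a2": 0, "a3": 1, "d": 1}]
print(greedy_reduct(table, ["a1", "a2", "a3"], "d"))   # -> ['a1']
```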

38 citations


Journal ArticleDOI
Yinan Wang, Zhiyun Lin, Xiao Liang, Wen-yuan Xu, Qiang Yang, Gangfeng Yan
TL;DR: The fragility of an ECPS is analyzed under various cyber attacks including denial-of-service (DoS) attacks, replay attacks, and false data injection attacks and control strategies such as load shedding and relay protection are verified using this model against these attacks.
Abstract: This paper establishes a new framework for modeling electrical cyber-physical systems (ECPSs), integrating both power grids and communication networks. To model the communication network associated with a power transmission grid, we use a mesh network that considers the features of power transmission grids such as high-voltage levels, long-transmission distances, and equal importance of each node. Moreover, bidirectional links including data uploading channels and command downloading channels are assumed to connect every node in the communication network and a corresponding physical node in the transmission grid. Based on this model, the fragility of an ECPS is analyzed under various cyber attacks including denial-of-service (DoS) attacks, replay attacks, and false data injection attacks. Control strategies such as load shedding and relay protection are also verified using this model against these attacks.

37 citations


Journal ArticleDOI
TL;DR: An efficient identity-based signature scheme is presented over the number theory research unit (NTRU) lattice assumption and is proven unforgeable against the adaptively chosen message attack in the random oracle model under the hardness of the γ-shortest vector problem on the NTRU lattice.
Abstract: Identity-based signature has been an important technique for lightweight authentication ever since it was proposed in 1984. Thereafter, identity-based signature schemes based on the integer factorization problem and discrete logarithm problem were proposed one after another. Nevertheless, the rapid development of quantum computers makes them insecure. Recently, many efforts have been made to construct identity-based signatures over lattice assumptions against attacks in the quantum era. However, their efficiency is not very satisfactory. In this study, an efficient identity-based signature scheme is presented over the number theory research unit (NTRU) lattice assumption. The new scheme is more efficient than other lattice- and identity-based signature schemes. The new scheme is proven unforgeable against the adaptively chosen message attack in the random oracle model under the hardness of the γ-shortest vector problem on the NTRU lattice.

34 citations


Journal ArticleDOI
TL;DR: An active steering controller, including tractor and trailer controllers, is designed based on linear quadratic regulator (LQR) theory to follow the desired yaw rate and simultaneously minimize the side-slip angle at the center of gravity.
Abstract: To improve maneuverability and stability of articulated vehicles, we design an active steering controller, including tractor and trailer controllers, based on linear quadratic regulator (LQR) theory. First, a three-degree-of-freedom (3-DOF) model of the tractor-trailer with steered trailer axles is built. The simulated annealing particle swarm optimization (SAPSO) algorithm is applied to identify the key parameters of the model under specified vehicle speed and steering wheel angle. Thus, the key parameters of the simplified model can be obtained according to the vehicle conditions using an online look-up table and interpolation. Simulation results show that vehicle parameter outputs of the simplified model and TruckSim agree well, thus providing the ideal reference yaw rate for the controller. Then the active steering controller of the tractor and trailer based on LQR is designed to follow the desired yaw rate and simultaneously minimize the side-slip angle at the center of gravity (CG). Finally, simulation tests at both low speed and high speed are conducted based on the TruckSim-Simulink program. The results show that the active steering controller significantly improves maneuverability at low speed and lateral stability at high speed for the articulated vehicle. The control strategy is applicable for steering not only along gentle curves but also along sharp curves.
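A minimal sketch of how an LQR state-feedback gain is computed for a generic linear lateral model with SciPy. The matrices below are placeholders for illustration; the actual 3-DOF tractor-trailer model uses the SAPSO-identified parameters, which are not reproduced here.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Placeholder linear model x_dot = A x + B u (states: e.g., side-slip angles and yaw rates).
A = np.array([[-2.0,  1.0,  0.0],
              [ 0.5, -1.5,  0.3],
              [ 0.0,  0.8, -1.0]])
B = np.array([[0.0, 0.0],
              [1.0, 0.0],
              [0.0, 1.0]])          # two inputs: tractor and trailer steering

Q = np.diag([10.0, 1.0, 1.0])       # penalize yaw-rate tracking error most
R = np.diag([1.0, 1.0])             # penalize steering effort

P = solve_continuous_are(A, B, Q, R)    # solve the algebraic Riccati equation
K = np.linalg.solve(R, B.T @ P)         # LQR gain: u = -K x
print(K)
print(np.linalg.eigvals(A - B @ K))     # closed-loop poles should have negative real parts
```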

30 citations


Journal ArticleDOI
TL;DR: Spectral clustering based partition and placement algorithms are proposed, by which a large network is partitioned into several small SDN domains efficiently and effectively, demonstrating their effectiveness for the SDN domain partition and controller placement problems.
Abstract: As a novel architecture, software-defined networking (SDN) is viewed as the key technology of future networking. The core idea of SDN is to decouple the control plane and the data plane, enabling centralized, flexible, and programmable network control. Although local area networks like data center networks have benefited from SDN, it is still a problem to deploy SDN in wide area networks (WANs) or large-scale networks. Existing works show that multiple controllers are required in WANs with each covering one small SDN domain. However, the problems of SDN domain partition and controller placement should be further addressed. Therefore, we propose the spectral clustering based partition and placement algorithms, by which we can partition a large network into several small SDN domains efficiently and effectively. In our algorithms, the matrix perturbation theory and eigengap are used to discover the stability of SDN domains and decide the optimal number of SDN domains automatically. To evaluate our algorithms, we develop a new experimental framework with the Internet2 topology and other available WAN topologies. The results show the effectiveness of our algorithm for the SDN domain partition and controller placement problems.
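A hedged sketch of spectral partitioning of a topology graph into SDN domains, with the number of domains chosen at the largest eigengap of the normalized Laplacian and a simple centroid-based controller pick per domain. It only illustrates the general idea; the paper's exact algorithm and perturbation analysis are not reproduced.

```python
import numpy as np
from sklearn.cluster import KMeans

def partition_domains(adj, k_max=6, seed=0):
    """Spectral partition of a network given a symmetric adjacency matrix.
    The number of domains is picked at the largest eigengap of the normalized Laplacian."""
    deg = adj.sum(axis=1)
    d_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(deg, 1e-12)))
    lap = np.eye(len(adj)) - d_inv_sqrt @ adj @ d_inv_sqrt        # normalized Laplacian
    vals, vecs = np.linalg.eigh(lap)                              # eigenvalues in ascending order
    gaps = np.diff(vals[:k_max + 1])
    k = int(np.argmax(gaps[1:]) + 2)                              # restrict to k >= 2 domains
    emb = vecs[:, :k]
    emb = emb / np.maximum(np.linalg.norm(emb, axis=1, keepdims=True), 1e-12)
    labels = KMeans(n_clusters=k, n_init=10, random_state=seed).fit_predict(emb)
    # simple placement: in each domain, the node nearest to the domain centroid in the embedding
    controllers = [int(np.where(labels == c)[0][
        np.argmin(np.linalg.norm(emb[labels == c] - emb[labels == c].mean(axis=0), axis=1))])
        for c in range(k)]
    return labels, controllers

# toy topology: two triangles joined by one link -> expect 2 domains
adj = np.zeros((6, 6))
for a, b in [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]:
    adj[a, b] = adj[b, a] = 1.0
print(partition_domains(adj))
```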

26 citations


Journal ArticleDOI
TL;DR: This paper analyzes the impact of PCA on NMF, finds that multiplicative NMF is also applicable to data after principal component transformation, and presents a method to perform NMF in the principal component space, named ‘principal component NMF’ (PCNMF).
Abstract: Non-negative matrix factorization (NMF) has been widely used in mixture analysis for hyperspectral remote sensing. When used for spectral unmixing analysis, however, it has two main shortcomings: (1) since the dimensionality of hyperspectral data is usually very large, NMF tends to suffer from large computational complexity for the popular multiplicative iteration rule; (2) NMF is sensitive to noise (outliers), and thus the corrupted data will make the results of NMF meaningless. Although principal component analysis (PCA) can be used to mitigate these two problems, the transformed data will contain negative numbers, hindering the direct use of the multiplicative iteration rule of NMF. In this paper, we analyze the impact of PCA on NMF, and find that multiplicative NMF can also be applicable to data after principal component transformation. Based on this conclusion, we present a method to perform NMF in the principal component space, named ‘principal component NMF’ (PCNMF). Experimental results show that PCNMF is both accurate and time-saving.
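For reference, the "popular multiplicative iteration rule" mentioned in the abstract is the standard Lee-Seung update; a minimal sketch follows. PCNMF's adaptation to principal-component space is not reproduced here.

```python
import numpy as np

def nmf_multiplicative(V, r, n_iter=500, eps=1e-9, seed=0):
    """Standard multiplicative-update NMF: V (m x n, non-negative) ~= W (m x r) @ H (r x n)."""
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, r)) + eps
    H = rng.random((r, n)) + eps
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + eps)   # update H with W fixed
        W *= (V @ H.T) / (W @ H @ H.T + eps)   # update W with H fixed
    return W, H

# toy "hyperspectral" mixing: 2 endmembers, 4 bands, 50 pixels
rng = np.random.default_rng(1)
endmembers = rng.random((4, 2))
abundances = rng.random((2, 50))
V = endmembers @ abundances
W, H = nmf_multiplicative(V, r=2)
print(np.linalg.norm(V - W @ H) / np.linalg.norm(V))   # small relative reconstruction error
```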

23 citations


Journal ArticleDOI
TL;DR: A qualitative performance evaluation of three 2D visual markers, Vuforia, ArUco marker, and AprilTag, which are suitable for real-time applications, is presented to improve the efficiency of the indoor localization and navigation approach by choosing the best visual marker system.
Abstract: The massive diffusion of smartphones, the growing interest in wearable devices and the Internet of Things, and the exponential rise of location based services (LBSs) have made the problem of localization and navigation inside buildings one of the most important technological challenges of recent years. Indoor positioning systems have a huge market in the retail sector and contextual advertising; in addition, they can be fundamental to increasing the quality of life for citizens if deployed inside public buildings such as hospitals, airports, and museums. Sometimes, in emergency situations, they can make the difference between life and death. Various approaches have been proposed in the literature. Recently, thanks to the high performance of smartphones’ cameras, marker-less and marker-based computer vision approaches have been investigated. In a previous paper, we proposed a technique for indoor localization and navigation using both Bluetooth low energy (BLE) and a 2D visual marker system deployed into the floor. In this paper, we present a qualitative performance evaluation of three 2D visual markers, Vuforia, ArUco marker, and AprilTag, which are suitable for real-time applications. Our analysis focuses on a specific case study of visual markers placed onto floor tiles, to improve the efficiency of our indoor localization and navigation approach by choosing the best visual marker system.
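As a rough illustration of marker-based detection (not the paper's evaluation pipeline), the snippet below detects ArUco markers in a frame. It assumes OpenCV built with the contrib aruco module and uses the pre-4.7 function-style `cv2.aruco` API; in OpenCV 4.7+ the same steps go through `cv2.aruco.ArucoDetector`. The image filename is a placeholder.

```python
import cv2

dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
parameters = cv2.aruco.DetectorParameters_create()          # pre-4.7 style parameters object

frame = cv2.imread("floor_tile.jpg")                        # placeholder frame with a marker on a tile
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
corners, ids, rejected = cv2.aruco.detectMarkers(gray, dictionary, parameters=parameters)

if ids is not None:
    cv2.aruco.drawDetectedMarkers(frame, corners, ids)      # overlay detections for inspection
    print("detected marker ids:", ids.ravel())
```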

21 citations


Journal ArticleDOI
TL;DR: A circular metadata management mechanism named dynamic circular metadata splitting (DCMS) is proposed, which removes Hadoop’s SPOF and provides efficient and scalable metadata management.
Abstract: In this exabyte-scale era, data increases at an exponential rate, which in turn generates a massive amount of metadata in the file system. Hadoop is the most widely used framework to deal with big data. Due to this growth of a huge amount of metadata, however, the efficiency of Hadoop has been questioned numerous times by many researchers. Therefore, it is essential to create an efficient and scalable metadata management for Hadoop. Hash-based mapping and subtree partitioning are suitable in distributed metadata management schemes. Subtree partitioning does not uniformly distribute workload among the metadata servers, and metadata needs to be migrated to keep the load roughly balanced. Hash-based mapping suffers from a constraint on the locality of metadata, though it uniformly distributes the load among NameNodes, which are the metadata servers of Hadoop. In this paper, we present a circular metadata management mechanism named dynamic circular metadata splitting (DCMS). DCMS preserves metadata locality using consistent hashing and locality-preserving hashing, keeps replicated metadata for excellent reliability, and dynamically distributes metadata among the NameNodes to keep load balancing. The NameNode is the centralized heart of Hadoop: it keeps the directory tree of all files, and its failure constitutes a single point of failure (SPOF). DCMS removes Hadoop’s SPOF and provides an efficient and scalable metadata management. The new framework is named ‘Dr. Hadoop’ after the name of the authors.
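A minimal consistent-hashing ring mapping directory paths to NameNodes, illustrating one ingredient of DCMS; the locality-preserving hashing and replication parts are not reproduced. Node names and virtual-node count are arbitrary.

```python
import bisect
import hashlib

class HashRing:
    """Minimal consistent-hashing ring: a path maps to the first NameNode clockwise on the ring."""
    def __init__(self, nodes, vnodes=50):
        self.ring = sorted((self._h(f"{n}#{v}"), n) for n in nodes for v in range(vnodes))
        self.keys = [k for k, _ in self.ring]

    @staticmethod
    def _h(s):
        return int(hashlib.md5(s.encode()).hexdigest(), 16)

    def lookup(self, path):
        i = bisect.bisect(self.keys, self._h(path)) % len(self.keys)
        return self.ring[i][1]

ring = HashRing(["namenode-1", "namenode-2", "namenode-3"])
for p in ["/user/alice/a.txt", "/user/alice/b.txt", "/logs/2016/01.log"]:
    print(p, "->", ring.lookup(p))
```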

Journal ArticleDOI
TL;DR: This paper proposes deep belief networks based on information entropy, IE-DBNs, for engine fault diagnosis, and introduces several information entropies and proposes joint complexity entropy based on single signal entropy.
Abstract: Precise fault diagnosis is an important part of prognostics and health management. It can avoid accidents, extend the service life of the machine, and also reduce maintenance costs. For gas turbine engine fault diagnosis, we cannot install too many sensors in the engine because the operating environment of the engine is harsh and the sensors will not work in high temperature, at high rotation speed, or under high pressure. Thus, there is not enough sensory data from the working engine to diagnose potential failures using existing approaches. In this paper, we consider the problem of engine fault diagnosis using finite sensory data under complicated circumstances, and propose deep belief networks based on information entropy, IE-DBNs, for engine fault diagnosis. We first introduce several information entropies and propose joint complexity entropy based on single signal entropy. Second, deep belief networks (DBNs) are analyzed and a logistic regression layer is added to the output of the DBNs. Then, the information entropies are used for fault diagnosis as the input to the DBNs. Comparison between the proposed IE-DBNs method and state-of-the-art machine learning approaches shows that the IE-DBNs method achieves higher accuracy.
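A small sketch of turning one raw sensor channel into a Shannon-entropy feature of the kind fed to the network; the paper's joint complexity entropy and the DBN itself are not reproduced, and the signals below are synthetic.

```python
import numpy as np

def shannon_entropy(signal, bins=16):
    """Histogram-based Shannon entropy of one sensor channel (a simple information-entropy feature)."""
    hist, _ = np.histogram(signal, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

rng = np.random.default_rng(0)
healthy = np.sin(np.linspace(0, 20 * np.pi, 2000)) + 0.05 * rng.normal(size=2000)
faulty = healthy + 0.6 * rng.normal(size=2000)          # extra randomness from a (toy) fault
features = np.array([[shannon_entropy(healthy)], [shannon_entropy(faulty)]])
print(features)   # the faulty signal yields a higher entropy feature, which the classifier would consume
```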

Journal ArticleDOI
TL;DR: This study presents two high-performance quaternary full adder cells based on carbon nanotube field effect transistors (CNTFETs) that are robust against process, voltage, and temperature variations, and are noise tolerant.
Abstract: CMOS binary logic is limited by short channel effects, power density, and interconnection restrictions. The effective solution is non-silicon multiple-valued logic (MVL) computing. This study presents two high-performance quaternary full adder cells based on carbon nanotube field effect transistors (CNTFETs). The proposed designs use the unique properties of CNTFETs, such as achieving a desired threshold voltage by adjusting the carbon nanotube diameters and having equal mobility for p-type and n-type devices. The proposed circuits were simulated under various test conditions using the Synopsys HSPICE simulator with the 32 nm Stanford comprehensive CNTFET model. The proposed designs have on average 32% lower delay, 68% lower average power, 83% lower energy consumption, and 77% lower static power compared to current state-of-the-art quaternary full adders. Simulation results indicated that the proposed designs are robust against process, voltage, and temperature variations, and are noise tolerant.
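For readers unfamiliar with quaternary logic, the arithmetic such a cell realizes is simply radix-4 addition; the circuit-level CNTFET design is of course not reproduced here.

```python
def quaternary_full_adder(a: int, b: int, cin: int) -> tuple[int, int]:
    """Radix-4 full adder: digits in {0,1,2,3}, carry-in in {0,1}; returns (sum_digit, carry_out)."""
    assert a in range(4) and b in range(4) and cin in range(2)
    total = a + b + cin
    return total % 4, total // 4

print(quaternary_full_adder(3, 3, 1))   # -> (3, 1): 3 + 3 + 1 = 7 = 1*4 + 3
print(quaternary_full_adder(2, 1, 0))   # -> (3, 0)
```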

Journal ArticleDOI
TL;DR: This review paper focuses on the advanced control methods that have been practically applied to spacecrafts during flight tests, or have been tested in real time on ground facilities and general testbeds/simulators built with actual flight data.
Abstract: We aim at examining the current status of advanced control methods in spacecrafts from an engineer’s perspective. Instead of reviewing all the fancy theoretical results in advanced control for aerospace vehicles, the focus is on the advanced control methods that have been practically applied to spacecrafts during flight tests, or have been tested in real time on ground facilities and general testbeds/simulators built with actual flight data. The aim is to provide engineers with all the possible control laws that are readily available rather than those that are tested only in the laboratory at the moment. It turns out that despite the blooming developments of modern control theories, most of them have various limitations, which stop them from being practically applied to spacecrafts. There are a limited number of spacecrafts that are controlled by advanced control methods, among which H2/H∞ robust control is the most popular method to deal with flexible structures, adaptive control is commonly used to deal with model/parameter uncertainty, and the linear quadratic regulator (LQR) is the most frequently used method in case of optimal control. It is hoped that this review paper will enlighten aerospace engineers who hold an open mind about advanced control methods, as well as scholars who are enthusiastic about engineering-oriented problems.

Journal ArticleDOI
TL;DR: This work defines a restricted version of the HR gradient operator, which comes in two versions, and derives explicit expressions for the derivatives of a class of regular nonlinear quaternion-valued functions, and proves that the restricted HR gradients are consistent with the gradients in the real domain.
Abstract: The gradients of a quaternion-valued function are often required for quaternionic signal processing algorithms. The HR gradient operator provides a viable framework and has found a number of applications. However, the applications so far have been limited to mainly real-valued quaternion functions and linear quaternion-valued functions. To generalize the operator to nonlinear quaternion functions, we define a restricted version of the HR operator, which comes in two versions, the left and the right ones. We then present a detailed analysis of the properties of the operators, including several different product rules and chain rules. Using the new rules, we derive explicit expressions for the derivatives of a class of regular nonlinear quaternion-valued functions, and prove that the restricted HR gradients are consistent with the gradients in the real domain. As an application, the derivation of the least mean square algorithm and a nonlinear adaptive algorithm is provided. Simulation results based on vector sensor arrays are presented as an example to demonstrate the effectiveness of the quaternion-valued signal model and the derived signal processing algorithm.
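A small helper for the Hamilton product, showing that quaternion multiplication is non-commutative, which is one way to see why left and right versions of the restricted HR operator are needed; the operator itself is not reproduced here.

```python
import numpy as np

def qmul(p, q):
    """Hamilton product of quaternions given as (w, x, y, z) arrays."""
    w1, x1, y1, z1 = p
    w2, x2, y2, z2 = q
    return np.array([w1*w2 - x1*x2 - y1*y2 - z1*z2,
                     w1*x2 + x1*w2 + y1*z2 - z1*y2,
                     w1*y2 - x1*z2 + y1*w2 + z1*x2,
                     w1*z2 + x1*y2 - y1*x2 + z1*w2])

p = np.array([0.0, 1.0, 0.0, 0.0])   # i
q = np.array([0.0, 0.0, 1.0, 0.0])   # j
print(qmul(p, q))   # i * j =  k -> [0, 0, 0,  1]
print(qmul(q, p))   # j * i = -k -> [0, 0, 0, -1]
```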

Journal ArticleDOI
TL;DR: This paper introduces for the first time a formal definition of the ‘storage wall’ from the perspective of parallel application scalability, and quantifies the effects of the storage bottleneck by providing a storage-bounded speedup and defining the storage wall quantitatively.
Abstract: The mismatch between compute performance and I/O performance has long been a stumbling block as supercomputers evolve from petaflops to exaflops. Currently, many parallel applications are I/O intensive, and their overall running times are typically limited by I/O performance. To quantify the I/O performance bottleneck and highlight the significance of achieving scalable performance in peta/exascale supercomputing, in this paper, we introduce for the first time a formal definition of the ‘storage wall’ from the perspective of parallel application scalability. We quantify the effects of the storage bottleneck by providing a storage-bounded speedup, defining the storage wall quantitatively, presenting existence theorems for the storage wall, and classifying the system architectures depending on I/O performance variation. We analyze and extrapolate the existence of the storage wall by experiments on Tianhe-1A and case studies on Jaguar. These results provide insights on how to alleviate the storage wall bottleneck in system design and achieve hardware/software optimizations in peta/exascale supercomputing.
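The following is NOT the paper's storage-bounded speedup definition, only a loudly labeled Amdahl-style toy analogue: compute time shrinks with the node count while most of the I/O time does not, so speedup flattens, which is the intuition behind a "storage wall".

```python
def toy_storage_bounded_speedup(n, compute_frac=0.9, io_scaling=0.2):
    """Toy Amdahl-style analogue (not the paper's formula): compute time scales as 1/n,
    while only a fraction `io_scaling` of the I/O time benefits from adding nodes."""
    io_frac = 1.0 - compute_frac
    t_parallel = compute_frac / n + io_frac * (io_scaling / n + (1.0 - io_scaling))
    return 1.0 / t_parallel

for n in [1, 16, 256, 4096]:
    print(n, round(toy_storage_bounded_speedup(n), 1))   # speedup flattens: the "storage wall"
```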

Journal ArticleDOI
TL;DR: A multifunctional automatic hydraulic steering circuit is presented that maintains the lowest pressure under load feedback and stays at the neutral position during unloading, thus meeting the requirements for steering.
Abstract: Most automatic steering systems for large tractors are designed with hydraulic systems that run on either constant flow or constant pressure. Such designs are limited in adaptability and applicability. Moreover, their control valves can unload in the neutral position and eventually lead to serious hydraulic leakage over long operation periods. In response to the problems noted above, a multifunctional automatic hydraulic steering circuit is presented. The system design is composed of a 5-way-3-position proportional directional valve, two pilot-controlled check valves, a pressure-compensated directional valve, a pressure-compensated flow regulator valve, a load shuttle valve, and a check valve, among other components. It is adaptable to most open-center systems with constant flow supply and closed-center systems with load feedback. The design maintains the lowest pressure under load feedback and stays at the neutral position during unloading, thus meeting the requirements for steering. The steering controller is based on a proportional-integral-derivative (PID) algorithm running on an 8051-family microcontroller unit (MCU) master control chip. An experimental platform is developed to establish the basic characteristics of the system subject to stepwise inputs and sinusoidal tracking. Test results show that the system design demonstrates excellent control accuracy, fast response, and negligible leakage over long operation periods.
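A bare-bones discrete PID loop of the kind such a master control MCU would run; the gains and the first-order steering-angle plant below are illustrative placeholders, not the paper's identified system.

```python
class PID:
    """Discrete PID controller: u = Kp*e + Ki*integral(e) + Kd*de/dt."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral, self.prev_err = 0.0, 0.0

    def step(self, setpoint, measured):
        err = setpoint - measured
        self.integral += err * self.dt
        deriv = (err - self.prev_err) / self.dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

# toy first-order steering-angle plant: angle' = (u - angle) / tau
pid, angle, tau, dt = PID(kp=2.0, ki=2.0, kd=0.05, dt=0.01), 0.0, 0.3, 0.01
for _ in range(500):
    u = pid.step(setpoint=10.0, measured=angle)   # 10 degree step command
    angle += dt * (u - angle) / tau
print(round(angle, 2))                            # close to the 10 degree setpoint
```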

Journal ArticleDOI
TL;DR: This work revisits this research topic and infers home location within 100 m×100 m squares at 70% accuracy for 76% and 71% of active users in New York City and the Bay Area, respectively, marking the first time home location has been detected at such a fine granularity using sparse and noisy data.
Abstract: Accurate home location is increasingly important for urban computing. Existing methods either rely on continuous (and expensive) Global Positioning System (GPS) data or suffer from poor accuracy. In particular, the sparse and noisy nature of social media data poses serious challenges in pinpointing where people live at scale. We revisit this research topic and infer home location within 100 m×100 m squares at 70% accuracy for 76% and 71% of active users in New York City and the Bay Area, respectively. To the best of our knowledge, this is the first time home location has been detected at such a fine granularity using sparse and noisy data. Since people spend a large portion of their time at home, our model enables novel applications. As an example, we focus on modeling people’s health at scale by linking their home locations with publicly available statistics, such as education disparity. Results in multiple geographic regions demonstrate both the effectiveness and added value of our home localization method and reveal insights that eluded earlier studies. In addition, we are able to discover the real buzz in the communities where people live.
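A toy heuristic in the spirit of grid-based home inference (snap posts to ~100 m cells and take the most frequent night-time cell); it is not the paper's model, and the grid conversion and night window are assumptions.

```python
from collections import Counter
from datetime import datetime

def grid_cell(lat, lon, meters=100):
    """Snap a coordinate to a ~100 m x 100 m cell (rough conversion near mid-latitudes)."""
    lat_step = meters / 111_000           # ~111 km per degree of latitude
    lon_step = meters / 85_000            # crude longitude scale, fine for a toy example
    return round(lat / lat_step), round(lon / lon_step)

def infer_home(posts, night=(21, 6)):
    """Return the most frequent night-time cell among (timestamp, lat, lon) posts."""
    start, end = night
    cells = Counter(grid_cell(lat, lon) for ts, lat, lon in posts
                    if ts.hour >= start or ts.hour < end)
    return cells.most_common(1)[0][0] if cells else None

posts = [(datetime(2016, 5, 1, 23), 40.73010, -73.9950),
         (datetime(2016, 5, 2, 1),  40.73013, -73.9951),
         (datetime(2016, 5, 2, 13), 40.75800, -73.9855)]   # daytime post elsewhere (e.g., workplace)
print(infer_home(posts))
```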

Journal ArticleDOI
TL;DR: A novel dynamic traffic signal coordination method that takes account of the special traffic flow characteristics of urban arterial roads and can reduce the average travel time and the average stop rate effectively is proposed.
Abstract: We propose a novel dynamic traffic signal coordination method that takes account of the special traffic flow characteristics of urban arterial roads. The core of this method includes a control area division module and a signal coordination control module. Firstly, we analyze and model the influences of segment distance, traffic flow density, and signal cycle time on the correlation degree between two neighboring intersections. Then, we propose a fuzzy computing method to estimate the correlation degree based on a hierarchical structure and a method to divide the control area of urban arterial roads into subareas based on correlation degrees. A subarea coordination control algorithm is used to calculate the common cycle time of the control subarea, the up-run and down-run offsets of each section, and the split of each intersection. An application of the method in Shaoxing City, Zhejiang Province, China shows that the method can reduce the average travel time and the average stop rate effectively.

Journal ArticleDOI
TL;DR: This paper proposes a novel local/global parameterization approach, ARAP++, for single and multi-boundary triangular meshes, an extension of the as-rigid-as-possible (ARAP) approach, which stitches together 1-ring patches instead of individual triangles.
Abstract: Mesh parameterization is one of the fundamental operations in computer graphics (CG) and computer-aided design (CAD). In this paper, we propose a novel local/global parameterization approach, ARAP++, for single- and multi-boundary triangular meshes. It is an extension of the as-rigid-as-possible (ARAP) approach, which stitches together 1-ring patches instead of individual triangles. To optimize the spring energy, we introduce a linear iterative scheme which employs convex combination weights and a fitting Jacobian matrix corresponding to a prescribed family of transformations. Our algorithm is simple, efficient, and robust. The geometric properties (angle and area) of the original model can also be preserved by appropriately prescribing the singular values of the fitting matrix. To reduce the area and stretch distortions for high-curvature models, a stretch operator is introduced. Numerical results demonstrate that ARAP++ outperforms several state-of-the-art methods in terms of controlling the distortions of angle, area, and stretch. Furthermore, it achieves a better visualization performance for several applications, such as texture mapping and surface remeshing.
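For context, the local step shared by ARAP-style methods fits a best rotation to each patch via an SVD of a covariance matrix; the sketch below shows that standard step only, not ARAP++'s 1-ring stitching or fitting Jacobian.

```python
import numpy as np

def best_fit_rotation(P, Q):
    """Rotation R minimizing sum ||R p_i - q_i||^2, from the SVD of the covariance (reflections fixed)."""
    S = P.T @ Q                       # covariance of source and target edge vectors (rows are vectors)
    U, _, Vt = np.linalg.svd(S)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # avoid reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R

# rotate some 2D "edge vectors" by 30 degrees and recover the rotation
theta = np.deg2rad(30)
R_true = np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])
P = np.random.default_rng(0).normal(size=(10, 2))
Q = P @ R_true.T
print(np.allclose(best_fit_rotation(P, Q), R_true))   # True
```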

Journal ArticleDOI
TL;DR: Numerical results show that the WL-LS algorithm is an effective method for layout optimization of satellite modules and outperforms methods in the literature.
Abstract: The layout design of satellite modules is considered to be NP-hard. It is not only a complex coupled system design problem but also a special multi-objective optimization problem. The greatest challenge in solving this problem is that the function to be optimized is characterized by a multitude of local minima separated by high-energy barriers. The Wang-Landau (WL) sampling method, which is an improved Monte Carlo method, has been successfully applied to solve many optimization problems. In this paper we use the WL sampling method to optimize the layout of a satellite module. To accelerate the search for a global optimal layout, local search (LS) based on the gradient method is executed once the Monte Carlo sweep produces a new layout. By combining the WL sampling algorithm, the LS method, and heuristic layout update strategies, a hybrid method called WL-LS is proposed to obtain a final layout scheme. Furthermore, to improve significantly the efficiency of the algorithm, we propose an accurate and fast computational method for the overlapping depth between two objects (such as two rectangular objects, two circular objects, or a rectangular object and a circular object) embedding each other. The rectangular objects are placed orthogonally. We test two instances, with 51 and 53 objects, respectively. For both instances, the proposed WL-LS algorithm outperforms methods in the literature. Numerical results show that the WL-LS algorithm is an effective method for layout optimization of satellite modules.
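To make "overlapping depth" concrete, two illustrative cases follow (circle-circle and orthogonally placed rectangle-rectangle); the paper also handles rectangle-circle pairs and its exact formulation may differ.

```python
import math

def circle_circle_overlap(c1, r1, c2, r2):
    """Overlapping depth of two circles: how far they interpenetrate along the centre line (0 if apart)."""
    d = math.hypot(c1[0] - c2[0], c1[1] - c2[1])
    return max(0.0, r1 + r2 - d)

def rect_rect_overlap(center1, w1, h1, center2, w2, h2):
    """Overlapping depth of two axis-aligned rectangles: the smaller of the x- and y-penetrations,
    or 0 if they are separated on either axis."""
    dx = (w1 + w2) / 2 - abs(center1[0] - center2[0])
    dy = (h1 + h2) / 2 - abs(center1[1] - center2[1])
    return min(dx, dy) if dx > 0 and dy > 0 else 0.0

print(circle_circle_overlap((0, 0), 2.0, (3, 0), 2.0))        # 1.0
print(rect_rect_overlap((0, 0), 4, 2, (3, 0.5), 4, 2))        # x-penetration 1.0, y-penetration 1.5 -> 1.0
```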

Journal ArticleDOI
TL;DR: In this study, to deal with the particular characteristics of a cutting system, a novel adaptive fuzzy integral sliding mode control (AFISMC) is designed for controlling cutting velocity and it is demonstrated that the proposed AFISMC cutting velocity controller gives a superior and robust velocity tracking performance.
Abstract: This paper presents a velocity controller for the cutting system of a trench cutter (TC). The cutting velocity of a cutting system is affected by the unknown load characteristics of rock and soil. In addition, geological conditions vary with time. Due to the complex load characteristics of rock and soil, the cutting load torque of a cutter is related to the geological conditions and the feeding velocity of the cutter. Moreover, a cutter’s dynamic model is subjected to uncertainties with unknown effects on its function. In this study, to deal with the particular characteristics of a cutting system, a novel adaptive fuzzy integral sliding mode control (AFISMC) is designed for controlling cutting velocity. The model combines the robust characteristics of an integral sliding mode controller with the adaptive adjusting characteristics of an adaptive fuzzy controller. The AFISMC cutting velocity controller is synthesized using the backstepping technique. The stability of the whole system including the fuzzy inference system, integral sliding mode controller, and the cutting system is proven using the Lyapunov theory. Experiments have been conducted on a TC test bench with the AFISMC under different operating conditions. The experimental results demonstrate that the proposed AFISMC cutting velocity controller gives a superior and robust velocity tracking performance.
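A bare-bones integral sliding-mode velocity loop on a toy first-order cutting-drive model with an unknown, time-varying load torque; the adaptive fuzzy part and the backstepping synthesis of AFISMC are not reproduced, and all numbers are assumptions.

```python
import numpy as np

# toy cutting-drive model: J * dv/dt = u - b*v - d(t), with d(t) an unknown load disturbance
J, b, dt = 2.0, 0.5, 0.001
lam, k_sw, phi = 3.0, 6.0, 0.05          # sliding-surface slope, switching gain, boundary layer

v, integral_e, v_ref = 0.0, 0.0, 1.0     # state, error integral, desired cutting velocity
for step in range(8000):
    d = 1.5 + 0.5 * np.sin(0.002 * step)                 # unknown, slowly varying load torque
    e = v_ref - v
    integral_e += e * dt
    s = e + lam * integral_e                             # integral sliding surface
    u = b * v + J * lam * e + k_sw * np.clip(s / phi, -1.0, 1.0)   # equivalent + switching terms
    v += dt * (u - b * v - d) / J
print(round(v, 3))                        # tracks v_ref closely despite the unknown load
```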

Journal ArticleDOI
TL;DR: This report reviews recent work on photon mapping and classifies it into a set of categories including radiance estimation, photon relaxation, photon tracing, progressive photon mapping, and parallel methods.
Abstract: Global illumination is the core part of photo-realistic rendering. The photon mapping algorithm is an effective method for computing global illumination, with its obvious advantage of caustic and color bleeding rendering. It is an active research field that has been developed over the past two decades. The lack of precise detail and of efficient rendering remains the main challenge of photon mapping. This report reviews recent work and classifies it into a set of categories including radiance estimation, photon relaxation, photon tracing, progressive photon mapping, and parallel methods. The goals of our report are to give readers an overall introduction to photon mapping and to motivate further research to address the limitations of existing methods.
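For context on the "radiance estimation" category, the classic photon-map density estimate gathers the k nearest photons around a shading point; the sketch below assumes a diffuse BRDF, a synthetic photon map, and a SciPy KD-tree, and is only an illustration of the idea.

```python
import numpy as np
from scipy.spatial import cKDTree

def radiance_estimate(photon_pos, photon_power, x, albedo, k=50):
    """Classic density estimate at point x for a diffuse surface:
    L ~ (albedo/pi) * sum(power of k nearest photons) / (pi * r^2)."""
    tree = cKDTree(photon_pos)
    dist, idx = tree.query(x, k=k)
    r2 = float(np.max(dist)) ** 2                 # radius of the gathering disc
    return (albedo / np.pi) * photon_power[idx].sum(axis=0) / (np.pi * r2)

rng = np.random.default_rng(0)
photon_pos = rng.uniform(-1, 1, size=(5000, 3))   # toy photon map
photon_power = np.full((5000, 3), 1e-3)           # equal RGB power per photon
print(radiance_estimate(photon_pos, photon_power, x=np.zeros(3), albedo=0.7))
```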

Journal ArticleDOI
TL;DR: This paper proposes a novel two-level hierarchical feature learning framework based on the deep convolutional neural network (CNN), which is simple and effective, and which increases the classification accuracy in comparison with flat multiple classification methods.
Abstract: In some image classification tasks, similarities among different categories are different and the samples are usually misclassified as highly similar categories. To distinguish highly similar categories, more specific features are required so that the classifier can improve the classification performance. In this paper, we propose a novel two-level hierarchical feature learning framework based on the deep convolutional neural network (CNN), which is simple and effective. First, the deep feature extractors of different levels are trained using the transfer learning method that fine-tunes the pre-trained deep CNN model toward the new target dataset. Second, the general feature extracted from all the categories and the specific feature extracted from highly similar categories are fused into a feature vector. Then the final feature representation is fed into a linear classifier. Finally, experiments using the Caltech-256, Oxford Flower-102, and Tasmania Coral Point Count (CPC) datasets demonstrate that the expression ability of the deep features resulting from two-level hierarchical feature learning is powerful. Our proposed method effectively increases the classification accuracy in comparison with flat multiple classification methods.
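The fusion step amounts to concatenating the general and specific feature vectors before a linear classifier; in this hedged sketch the deep features are random stand-ins, since the CNN fine-tuning stage is not reproduced.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 600
labels = rng.integers(0, 3, size=n)                              # three highly similar categories (toy)
general = rng.normal(size=(n, 128)) + labels[:, None] * 0.05     # stand-in for level-1 (general) CNN features
specific = rng.normal(size=(n, 64)) + labels[:, None] * 0.30     # stand-in for level-2 (specific) CNN features

fused = np.hstack([general, specific])                           # feature fusion: concatenate both levels
X_tr, X_te, y_tr, y_te = train_test_split(fused, labels, test_size=0.3, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)          # final linear classifier
print("fused accuracy:", round(clf.score(X_te, y_te), 3))
```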

Journal ArticleDOI
TL;DR: A framework for prioritization of software requirements, called RePizer, is proposed, to be used in conjunction with a selected prioritization technique to rank software requirements based on defined criteria such as implementation cost.
Abstract: The standard software development life cycle heavily depends on requirements elicited from stakeholders. Based on those requirements, software development is planned and managed from its inception phase to closure. Due to time and resource constraints, it is imperative to identify the high-priority requirements that need to be considered first during the software development process. Moreover, existing prioritization frameworks lack a store of historical data useful for selecting the most suitable prioritization technique of any similar project domain. In this paper, we propose a framework for prioritization of software requirements, called RePizer, to be used in conjunction with a selected prioritization technique to rank software requirements based on defined criteria such as implementation cost. RePizer assists requirements engineers in a decision-making process by retrieving historical data from a requirements repository. RePizer also provides a panoramic view of the entire project to ensure the judicious use of software development resources. We compared the performance of RePizer in terms of expected accuracy and ease of use while separately adopting two different prioritization techniques, planning game (PG) and analytical hierarchy process (AHP). The results showed that RePizer performed better when used in conjunction with the PG technique.
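As background on one of the two techniques RePizer is compared with, AHP derives priority weights for requirements from a pairwise-comparison matrix via its principal eigenvector; the comparison values below are an invented toy example.

```python
import numpy as np

def ahp_priorities(pairwise):
    """AHP priority weights: the normalized principal eigenvector of the pairwise-comparison matrix."""
    vals, vecs = np.linalg.eig(pairwise)
    principal = np.real(vecs[:, np.argmax(np.real(vals))])
    weights = np.abs(principal)
    return weights / weights.sum()

# toy comparison of three requirements on, say, implementation cost (Saaty's 1-9 scale)
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])
print(ahp_priorities(A).round(3))   # requirement 1 receives the highest priority
```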

Journal ArticleDOI
TL;DR: Compared to several existing methods, the proposed method is more efficient in obtaining desirable objective function values associated with the chip area, wirelength, and fixed-outline constraints.
Abstract: Outline-free floorplanning focuses on area and wirelength reductions, which are usually meaningless, since they can hardly satisfy modern design requirements. We concentrate on a more difficult and useful issue, fixed-outline floorplanning. This issue imposes fixed-outline constraints on the outline-free floorplanning, making the physical design more interesting and challenging. The contributions of this paper are primarily twofold. First, a modified simulated annealing (MSA) algorithm is proposed. In the beginning of the evolutionary process, a new attenuation formula is used to decrease the temperature slowly, to enhance MSA’s global searching capacity. After a period of time, the traditional attenuation formula is employed to decrease the temperature rapidly, to maintain MSA’s local searching capacity. Second, an excessive area model is designed to guide MSA to find feasible solutions readily. This can save much time for refining feasible solutions. Additionally, B*-tree representation is known as a very useful method for characterizing floorplanning. Therefore, it is employed to perform a perturbing operation for MSA. Finally, six groups of benchmark instances with different dead spaces and aspect ratios—circuits n10, n30, n50, n100, n200, and n300—are chosen to demonstrate the efficiency of our proposed method on fixed-outline floorplanning. Compared to several existing methods, the proposed method is more efficient in obtaining desirable objective function values associated with the chip area, wirelength, and fixed-outline constraints.
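A small sketch of the two-stage cooling idea (slow attenuation early for global search, faster geometric attenuation later for local refinement); the exact attenuation formulas, the excessive area model, and the B*-tree perturbation are not reproduced.

```python
def msa_temperature(iteration, switch_iter=200, t0=1000.0, slow=0.999, fast=0.95):
    """Two-stage cooling: decrease slowly before `switch_iter`, then geometrically and quickly after it."""
    if iteration <= switch_iter:
        return t0 * slow ** iteration
    t_switch = t0 * slow ** switch_iter
    return t_switch * fast ** (iteration - switch_iter)

for it in [0, 100, 200, 210, 250]:
    print(it, round(msa_temperature(it), 2))   # temperature stays high early, then drops rapidly
```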

Journal ArticleDOI
TL;DR: This paper examines how the overlap of several infrared beams affects the tracked position of the user, depending on the angle of incidence of light, the distance to the target, the distance between sensors, and the number of capture devices used.
Abstract: This paper seeks to determine how the overlap of several infrared beams affects the tracked position of the user, depending on the angle of incidence of light, distance to the target, distance between sensors, and the number of capture devices used. We also try to show that under ideal conditions using several Kinect sensors increases the precision of the data collected. The results obtained can be used in the design of telerehabilitation environments in which several RGB-D cameras are needed to improve precision or increase the tracking range. A numerical analysis of the results is included and comparisons are made with the results of other studies. Finally, we describe a system that implements intelligent methods for the rehabilitation of patients based on the results of the tests carried out.

Journal ArticleDOI
TL;DR: This paper forms the problem of caching resource sharing for multiple service provider servers (SPSs) competing for the caching space as an oligopoly market model and uses a dynamic non-cooperative game to obtain the optimal amount of caching space needed by the SPSs.
Abstract: Deployment of caching in wireless networks has been considered an effective method to cope with the challenge brought on by the explosive wireless traffic. Although some research has been conducted on caching in cellular networks, most of the previous works have focused on performance optimization for content caching. To the best of our knowledge, the problem of caching resource sharing for multiple service provider servers (SPSs) has been largely ignored. In this paper, by assuming that the caching capability is deployed in the base station of a radio access network, we consider the problem of caching resource sharing for multiple SPSs competing for the caching space. We formulate this problem as an oligopoly market model and use a dynamic non-cooperative game to obtain the optimal amount of caching space needed by the SPSs. In the dynamic game, the SPSs gradually and iteratively adjust their strategies based on their previous strategies and the information given by the base station. Then through rigorous mathematical analysis, the Nash equilibrium and stability condition of the dynamic game are proven. Finally, simulation results are presented to show the performance of the proposed dynamic caching resource allocation scheme.
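A generic Cournot-style toy of iterative, gradual strategy adjustment among three SPSs requesting caching space; the paper's actual utility functions, base-station pricing, and stability analysis are not reproduced, and all numbers are assumptions.

```python
import numpy as np

a = 10.0                                # toy linear "price" intercept announced by the base station
costs = np.array([1.0, 2.0, 3.0])       # per-unit caching cost of each SPS (toy values)
q = np.zeros(3)                         # caching space currently requested by the three SPSs
alpha = 0.3                             # gradual adjustment speed

def best_response(i, q):
    """Cournot-style best response of SPS i: maximize q_i * (a - total demand) - cost_i * q_i."""
    return max(0.0, (a - costs[i] - (q.sum() - q[i])) / 2.0)

for _ in range(200):                    # iterative, gradual strategy adjustment
    target = np.array([best_response(i, q) for i in range(len(q))])
    q = (1 - alpha) * q + alpha * target
print(q.round(3))                       # settles at the toy game's Nash equilibrium: [3. 2. 1.]
```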

Journal ArticleDOI
TL;DR: An approach for an intelligent negotiation model to support the group decision-making process specifically designed for ubiquitous contexts by considering and defining strategies to deal with important points such as the type of attributes in the multicriterion problems, agents’ reasoning, and intelligent dialogues is proposed.
Abstract: Supporting group decision-making in ubiquitous contexts is a complex task that must deal with a large amount of factors to succeed. Here we propose an approach for an intelligent negotiation model to support the group decision-making process specifically designed for ubiquitous contexts. Our approach can be used by researchers that intend to include arguments, complex algorithms, and agents’ modeling in a negotiation model. It uses a social networking logic due to the type of communication employed by the agents and it intends to support the ubiquitous group decision-making process in a similar way to the real process, which simultaneously preserves the amount and quality of intelligence generated in face-to-face meetings. We propose a new look into this problem by considering and defining strategies to deal with important points such as the type of attributes in the multicriterion problems, agents’ reasoning, and intelligent dialogues.

Journal ArticleDOI
Gang Xiong, Yu-xiang Hu, Le Tian, Ju-long Lan, Jun-fei Li, Qiao Zhou
TL;DR: A framework is designed for the service deployment decision, an integer linear programming model is proposed to resolve the service placement and minimize the network transport delay, and a heuristic solution is designed based on an improved quantum genetic algorithm.
Abstract: Despite the critical role that middleboxes play in introducing new network functionality, their management and innovation are still severe challenges for network operators, since traditional hardware-based middleboxes lack service flexibility and scalability. Although new networking technologies such as network function virtualization (NFV) and software-defined networking (SDN) have recently been considered very promising drivers for designing cost-efficient middlebox service architectures, little attention has been paid to guaranteeing transmission efficiency when virtual service processing is added to traffic. Therefore, we focus on the service deployment problem to reduce the transport delay in the network with a combination of NFV and SDN. First, a framework is designed for the service placement decision, and an integer linear programming model is proposed to resolve the service placement and minimize the network transport delay. Then a heuristic solution is designed based on the improved quantum genetic algorithm. Experimental results show that our proposed method can automatically calculate optimal placement schemes. Our scheme can achieve lower overall transport delay for a network compared with other schemes and reduces the average traffic transport delay by 30% compared with the random placement scheme.
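For illustration, a tiny assignment-style ILP in the spirit of delay-minimizing service placement, written with PuLP (assumed available with its bundled CBC solver); it is far simpler than the paper's model, which the quantum genetic heuristic approximates, and the services, nodes, delays, and capacities are invented.

```python
import pulp

services = ["fw", "nat", "dpi"]                  # virtual network functions to place (toy)
nodes = ["n1", "n2", "n3", "n4"]
delay = {("fw", "n1"): 2, ("fw", "n2"): 5, ("fw", "n3"): 4, ("fw", "n4"): 7,
         ("nat", "n1"): 6, ("nat", "n2"): 1, ("nat", "n3"): 3, ("nat", "n4"): 4,
         ("dpi", "n1"): 5, ("dpi", "n2"): 4, ("dpi", "n3"): 2, ("dpi", "n4"): 6}
capacity = {"n1": 1, "n2": 1, "n3": 2, "n4": 2}  # how many instances each node can host

prob = pulp.LpProblem("service_placement", pulp.LpMinimize)
x = pulp.LpVariable.dicts("x", (services, nodes), cat="Binary")
prob += pulp.lpSum(delay[s, n] * x[s][n] for s in services for n in nodes)    # total transport delay
for s in services:
    prob += pulp.lpSum(x[s][n] for n in nodes) == 1                           # each service placed once
for n in nodes:
    prob += pulp.lpSum(x[s][n] for s in services) <= capacity[n]              # node capacity

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print({s: next(n for n in nodes if x[s][n].value() == 1) for s in services})  # optimal placement
```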