
Showing papers by "Northeastern University (China) published in 2012"


Proceedings ArticleDOI
23 Mar 2012
TL;DR: This paper provides a concise but all-round analysis of data security and privacy protection issues associated with cloud computing across all stages of the data life cycle, and describes future research directions for data security and privacy protection in the cloud.
Abstract: It is well known that cloud computing has many potential advantages, and many enterprise applications and data are migrating to public or hybrid clouds. For some business-critical applications, however, organizations, especially large enterprises, are still reluctant to move them to the cloud. The market share of cloud computing remains far below expectations. From the consumers' perspective, cloud computing security concerns, especially data security and privacy protection issues, remain the primary inhibitor to adoption of cloud computing services. This paper provides a concise but all-round analysis of data security and privacy protection issues associated with cloud computing across all stages of the data life cycle. It then discusses some current solutions, and finally describes future research directions for data security and privacy protection in the cloud.

654 citations


Journal ArticleDOI
Abstract: Composite materials are used in a wide range of applications in the automotive, aerospace and renewable energy industries. But they have not been properly recycled, due to their inherent heterogeneity, in particular the thermoset-based polymer composites. Current and future waste management and environmental legislation requires all engineering materials to be properly recovered and recycled from end-of-life (EOL) products such as automobiles, wind turbines and aircraft. Recycling will ultimately lead to resource and energy savings. Various technologies, mostly focusing on the reinforcement fibres and yet to be commercialized, have been developed: mechanical recycling, thermal recycling, and chemical recycling. However, the lack of adequate markets, high recycling cost, and lower quality of the recyclates are the major commercialization barriers. To promote composites recycling, extensive R&D efforts are still needed on the development of more readily recyclable composites and much more efficient separation technologies. It is believed that, through joint efforts in design, manufacturing, and end-of-life management, new separation and recycling technologies for composite materials will become available and more easily recyclable composite materials will be developed in the future.

534 citations


Journal ArticleDOI
TL;DR: In this article, the maximum reflection loss reached −45.1 dB at an absorber thickness of only 2.5 mm, and the Debye relaxation processes in graphene/polyaniline nanorod arrays are improved compared with polyaniline nanorods.
Abstract: In the paper, we find that graphene has a strong dielectric loss, but exhibits very weak attenuation properties to electromagnetic waves due to its high conductivity. As polyaniline nanorods are perpendicularly grown on the surface of graphene by an in situ polymerization process, the electromagnetic absorption properties of the nanocomposite are significantly enhanced. The maximum reflection loss reaches −45.1 dB with a thickness of the absorber of only 2.5 mm. Theoretical simulation in terms of the Cole–Cole dispersion law shows that the Debye relaxation processes in graphene/polyaniline nanorod arrays are improved compared to polyaniline nanorods. The enhanced electromagnetic absorption properties are attributed to the unique structural characteristics and the charge transfer between graphene and polyaniline nanorods. Our results demonstrate that the deposition of other dielectric nanostructures on the surface of graphene sheets is an efficient way to fabricate lightweight materials for strong electromagnetic wave absorbents.

443 citations
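Reflection-loss figures like the −45.1 dB quoted above are conventionally computed from the measured complex permittivity and permeability using the metal-backed single-layer transmission-line model, RL = 20·log10|(Zin − 1)/(Zin + 1)|. A minimal sketch of that standard formula (the material parameters below are illustrative, not the paper's measured values):

```python
import cmath
import math

def reflection_loss_db(eps_r, mu_r, f_hz, d_m):
    """Metal-backed single-layer absorber reflection loss (transmission-line model).
    eps_r, mu_r: complex relative permittivity/permeability at frequency f_hz.
    d_m: absorber thickness in metres."""
    c = 3.0e8  # speed of light, m/s
    # Normalised input impedance of the absorber layer backed by a conductor
    z_in = cmath.sqrt(mu_r / eps_r) * cmath.tanh(
        1j * 2 * math.pi * f_hz * d_m / c * cmath.sqrt(mu_r * eps_r))
    return 20 * math.log10(abs((z_in - 1) / (z_in + 1)))

# Illustrative (not measured) material parameters at 10 GHz, 2.5 mm thickness:
rl = reflection_loss_db(eps_r=8 - 3j, mu_r=1.0 + 0j, f_hz=10e9, d_m=2.5e-3)
```

Sweeping `f_hz` and `d_m` over such a function is how thickness-matched reflection-loss curves of this kind are typically generated.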


Journal ArticleDOI
TL;DR: The microstructure characteristics and deformation behavior of 304L stainless steel during tensile deformation at two different strain rates have been investigated by means of interrupted tensile tests, electron-backscatter-diffraction and transmission electron microscopy (TEM) techniques as discussed by the authors.
Abstract: The microstructure characteristics and deformation behavior of 304L stainless steel during tensile deformation at two different strain rates have been investigated by means of interrupted tensile tests, electron-backscatter-diffraction (EBSD) and transmission electron microscopy (TEM) techniques. The volume fractions of transformed martensite and deformation twins at different stages of the deformation process were measured using X-ray diffraction and TEM observations. It is found that the volume fraction of martensite monotonically increases with increasing strain but decreases with increasing strain rate. On the other hand, the volume fraction of twins increases with increasing strain for strain levels below 57%; beyond that, it decreases with increasing strain. Careful TEM observations show that stacking faults (SFs) and twins preferentially occur before the nucleation of martensite. Meanwhile, both ɛ-martensite and α′-martensite are observed in the deformation microstructures, indicating the co-existence of stress-induced and strain-induced transformation. We also discuss the effects of twinning and martensite transformation on work-hardening, as well as the relationship between stacking faults, twinning and martensite transformation.

389 citations


Journal ArticleDOI
06 Feb 2012-Small
TL;DR: Flexible graphene paper (GP) pillared by carbon black (CB) nanoparticles using a simple vacuum filtration method is developed as a high-performance electrode material for supercapacitors that exhibit excellent electrochemical performances and cyclic stabilities.
Abstract: Flexible graphene paper (GP) pillared by carbon black (CB) nanoparticles using a simple vacuum filtration method is developed as a high-performance electrode material for supercapacitors. Through the introduction of CB nanoparticles as spacers, the self-restacking of graphene sheets during the filtration process is mitigated to a great extent. The pillared GP-based supercapacitors exhibit excellent electrochemical performance and cyclic stability compared with GP without the addition of CB nanoparticles. At a scan rate of 10 mV s−1, the specific capacitance of the pillared GP is 138 F g−1 and 83.2 F g−1, with negligible capacitance degradation of 3.85% and 4.35% after 2000 cycles, in aqueous and organic electrolytes respectively. At an extremely fast scan rate of 500 mV s−1, the specific capacitance can reach 80 F g−1 in aqueous electrolyte. No binder is needed for assembling the supercapacitor cells, and the pillared GP itself may serve as a current collector due to its intrinsically high electrical conductivity. The pillared GP has great potential for the development of flexible and ultralightweight supercapacitors for electrochemical energy storage.

311 citations
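Scan-rate-dependent specific capacitances such as the 138 F g−1 figure above are conventionally extracted from cyclic voltammetry via C = (∫ I dV)/(m · v · ΔV). A minimal sketch of that standard calculation (the ideal-capacitor numbers below are illustrative, not the paper's data):

```python
import numpy as np

def specific_capacitance(voltage_v, current_a, mass_g, scan_rate_v_s):
    """Gravimetric capacitance from one CV sweep: C = (∫ I dV) / (m · v · ΔV), in F/g."""
    dv_total = voltage_v[-1] - voltage_v[0]
    # Trapezoidal integration of current over voltage
    charge = np.sum(0.5 * (current_a[1:] + current_a[:-1]) * np.diff(voltage_v))
    return charge / (mass_g * scan_rate_v_s * dv_total)

# Ideal 0.138 F electrode of 1 mg, scanned at 10 mV/s over a 1 V window:
v = np.linspace(0.0, 1.0, 101)
i = 0.138 * 0.010 * np.ones_like(v)   # for an ideal capacitor, I = C_total * scan rate
c_spec = specific_capacitance(v, i, mass_g=1e-3, scan_rate_v_s=0.010)
# c_spec ≈ 138 F/g
```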


Journal ArticleDOI
TL;DR: A new approach to fabricate an integrated power pack by hybridizing energy harvesting and storage processes, incorporating a series-wound dye-sensitized solar cell and a lithium ion battery on the same Ti foil bearing double-sided TiO2 nanotube (NT) arrays.
Abstract: We present a new approach to fabricate an integrated power pack by hybridizing energy harvesting and storage processes. This power pack incorporates a series-wound dye-sensitized solar cell (DSSC) and a lithium ion battery (LIB) on the same Ti foil, which has double-sided TiO2 nanotube (NT) arrays. The solar cell part is made of two different cosensitized tandem solar cells based on TiO2 nanorod arrays (NRs) and NTs, respectively, which provide an open-circuit voltage of 3.39 V and a short-circuit current density of 1.01 mA/cm2. The power pack can be charged to about 3 V in about 8 min, and the discharge capacity is about 38.89 μAh at a discharge current of 100 μA. The total energy conversion and storage efficiency of this system is 0.82%. Such an integrated power pack could serve as a power source for mobile electronics.

304 citations


Journal ArticleDOI
TL;DR: In this paper, the robust adaptive fault-tolerant compensation control problem for linear systems with parameter uncertainty, disturbance and actuator faults including outage, loss of effectiveness and stuck is considered.
Abstract: This study is concerned with the robust adaptive fault-tolerant compensation control problem for linear systems with parameter uncertainty, disturbance and actuator faults, including outage, loss of effectiveness and stuck faults. It is assumed that the lower and upper bounds of the actuator efficiency factor, the upper bounds of the disturbance and the unparametrisable time-varying stuck fault are unknown. Then, using the information from the adaptive mechanism, the effects of actuator faults, exogenous disturbance and parameter uncertainty can be eliminated completely by designing an adaptive state feedback controller. Furthermore, it is shown that the solutions of the resulting adaptive closed-loop system are uniformly bounded and the states converge asymptotically to zero. Finally, two examples are given to illustrate the effectiveness and applicability of the proposed design method.

277 citations


Journal ArticleDOI
TL;DR: A novel bidirectional diffusion strategy is proposed to promote the efficiency of the most widely investigated permutation-diffusion type image cipher and has a satisfactory security level with a low computational complexity, which renders it a good candidate for real-time secure image transmission applications.
Abstract: Chaos-based image cipher has been widely investigated over the last decade or so to meet the increasing demand for real-time secure image transmission over public networks. In this paper, an improved diffusion strategy is proposed to promote the efficiency of the most widely investigated permutation-diffusion type image cipher. By using the novel bidirectional diffusion strategy, the spreading process is significantly accelerated and hence the same level of security can be achieved with fewer overall encryption rounds. Moreover, to further enhance the security of the cryptosystem, a plain-text related chaotic orbit turbulence mechanism is introduced in diffusion procedure by perturbing the control parameter of the employed chaotic system according to the cipher-pixel. Extensive cryptanalysis has been performed on the proposed scheme using differential analysis, key space analysis, various statistical analyses and key sensitivity analysis. Results of our analyses indicate that the new scheme has a satisfactory security level with a low computational complexity, which renders it a good candidate for real-time secure image transmission applications.

253 citations
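The bidirectional idea is that a forward diffusion pass propagates any plaintext change toward the end of the image and a backward pass propagates it toward the start, so a single round influences every cipher pixel. A toy sketch of that general scheme (the logistic-map keystream and all constants here are illustrative assumptions, not the paper's actual cipher, and this omits the permutation stage and the plaintext-related parameter perturbation):

```python
def logistic_keystream(x0, r, n):
    """Byte keystream from the logistic map x <- r*x*(1-x); x0 and r act as key material."""
    ks, x = [], x0
    for _ in range(n):
        x = r * x * (1 - x)
        ks.append(int(x * 256) % 256)
    return ks

def bidirectional_diffuse(pixels, x0=0.3567, r=3.99):
    """Two diffusion passes over a flat list of pixel bytes."""
    ks = logistic_keystream(x0, r, 2 * len(pixels))
    out, prev = [], 0
    for p, k in zip(pixels, ks):              # forward pass: change spreads rightward
        prev = (p + k + prev) % 256
        out.append(prev)
    prev = 0
    for i in range(len(out) - 1, -1, -1):     # backward pass: change spreads leftward
        prev = (out[i] + ks[len(pixels) + i] + prev) % 256
        out[i] = prev
    return out

c1 = bidirectional_diffuse([0] * 8)
c2 = bidirectional_diffuse([1] + [0] * 7)   # one-pixel plaintext change
```

After the two passes, a single-pixel plaintext change has reached both ends of the ciphertext, which is why fewer overall rounds are needed for the same diffusion strength.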


Journal ArticleDOI
TL;DR: A comprehensive survey of challenges of localization in non-line-of-sight, node selection criteria for localization in energy-constrained network, scheduling the sensor node to optimize the tradeoff between localization performance and energy consumption, cooperative node localization, and localization algorithm in heterogeneous network is presented.
Abstract: Localization is one of the key techniques in wireless sensor networks. Location estimation methods can be classified into target/source localization and node self-localization. For target localization, we mainly introduce the energy-based method; we then investigate node self-localization methods. With the widespread adoption of wireless sensor networks, localization methods differ across applications, and several challenges arise in special scenarios. In this paper, we present a comprehensive survey of these challenges: localization in non-line-of-sight conditions, node selection criteria for localization in energy-constrained networks, scheduling sensor nodes to optimize the trade-off between localization performance and energy consumption, cooperative node localization, and localization algorithms in heterogeneous networks. Finally, we introduce the evaluation criteria for localization in wireless sensor networks.

245 citations
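As a concrete instance of the node self-localization primitive such surveys build on, range-based positioning can be linearized into a least-squares problem by subtracting one anchor's range equation from the others. A minimal sketch of that generic method (not specific to any algorithm in the survey):

```python
import numpy as np

def trilaterate(anchors, dists):
    """Linearised least-squares position estimate from anchor positions
    and range measurements: subtract anchor 0's equation to remove |x|^2."""
    a0, d0 = anchors[0], dists[0]
    A = 2 * (anchors[1:] - a0)
    b = d0**2 - dists[1:]**2 + np.sum(anchors[1:]**2 - a0**2, axis=1)
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos

# Four anchors at the corners of a 10 m square, noise-free ranges:
anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
true_pos = np.array([3.0, 4.0])
d = np.linalg.norm(anchors - true_pos, axis=1)
est = trilaterate(anchors, d)   # recovers [3, 4] exactly for noise-free ranges
```

With noisy ranges the same least-squares form degrades gracefully, which is why it is a common baseline in the energy-constrained and cooperative settings the survey discusses.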


Journal ArticleDOI
TL;DR: In this paper, a comprehensive monitoring campaign consisting of a digital borehole camera, cross-hole acoustic apparatus, and sliding micrometer was undertaken for in situ measurements in two specially excavated test tunnels B and F.

213 citations


Journal ArticleDOI
01 Mar 2012
TL;DR: iMapReduce significantly improves the performance of iterative implementations by reducing the overhead of creating new MapReduce jobs repeatedly, eliminating the shuffling of static data, and allowing asynchronous execution of map tasks.
Abstract: Iterative computation is pervasive in many applications such as data mining, web ranking, graph analysis, and online social network analysis. These iterative applications typically involve massive data sets containing millions or billions of data records, which creates demand for distributed computing frameworks that can process massive data sets on a cluster of machines. MapReduce is an example of such a framework. However, MapReduce lacks built-in support for iterative processing, which requires parsing data sets repeatedly. Besides specifying MapReduce jobs, users have to write a driver program that submits a series of jobs and performs convergence testing at the client. This paper presents iMapReduce, a distributed framework that supports iterative processing. iMapReduce allows users to specify the iterative computation with separated map and reduce functions, and supports automatic iterative processing within a single job. More importantly, iMapReduce significantly improves the performance of iterative implementations by (1) reducing the overhead of creating new MapReduce jobs repeatedly, (2) eliminating the shuffling of static data, and (3) allowing asynchronous execution of map tasks. We implement an iMapReduce prototype based on Apache Hadoop, and show that iMapReduce can achieve up to 5 times speedup over Hadoop for implementing iterative algorithms.
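The client-side driver loop that vanilla MapReduce forces on iterative jobs, and that iMapReduce is designed to eliminate, looks roughly like the sketch below. `submit_job`, `load_result` and `converged` are hypothetical placeholders standing in for job submission, HDFS reads and a convergence test, not Hadoop APIs:

```python
def iterative_driver(submit_job, load_result, converged, state0, max_iters=50):
    """Naive iterative pattern on plain MapReduce: one full job per iteration,
    convergence tested at the client, static data re-shipped every round."""
    state = state0
    for i in range(max_iters):
        job_output = submit_job(iteration=i, state=state)  # new MapReduce job each round
        new_state = load_result(job_output)
        if converged(state, new_state):
            return new_state, i + 1
        state = new_state
    return state, max_iters

# Toy stand-in: each "job" halves the state; converge when the change is < 1e-3.
final, rounds = iterative_driver(
    submit_job=lambda iteration, state: state / 2,
    load_result=lambda r: r,
    converged=lambda old, new: abs(old - new) < 1e-3,
    state0=1.0)
```

Every pass through this loop pays job-startup and static-data-shuffling costs, which is exactly the per-iteration overhead the paper's single-job design removes.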

Journal ArticleDOI
TL;DR: The main components and the key technologies in each component are discussed; the core functions of the system include Digital Assembly Modeling, Assembly Sequence Planning, Path Planning, Visualization, and Simulation.
Abstract: To automate assembly planning for complex products such as aircraft components, an assembly planning and simulation system called AutoAssem has been developed. In this paper, its system architecture is presented, and the main components and the key technologies in each component are discussed. The core functions of the system include Digital Assembly Modeling, Assembly Sequence Planning (ASP), Path Planning, Visualization, and Simulation. In contrast to existing assembly planning systems, one of the novelties of the system is that it allows assembly plans to be generated automatically from a CAD assembly model with minimal manual intervention. Within the system, new methodologies have been developed to: (i) create Assembly Relationship Matrices; (ii) plan assembly sequences; (iii) generate assembly paths; and (iv) visualize and simulate assembly plans. To illustrate the application of the system, the assembly of a worm gear reducer is used as an example throughout this paper for demonstration purposes. AutoAssem has so far been successfully applied to virtual assembly design for various complex products.

Journal ArticleDOI
TL;DR: The experimental results show that the proposed method can still provide accurate SOC estimation when there exist inexact or unknown statistical properties of the errors, and has been applied successfully to the robot for inspecting the running 500-kV extra high voltage power transmission lines.
Abstract: Battery state-of-charge (SOC) estimation is essential for a mobile robot, such as one inspecting power transmission lines. SOC is often estimated using a Kalman filter (KF) under the assumption that the statistical properties of the system and measurement errors are known; otherwise, the SOC estimation error may be large or even divergent. In this paper, without requiring known statistical properties, a SOC estimation method is proposed using an H∞ observer, which can still guarantee the SOC estimation accuracy in the worst statistical error case. Under different current and temperature conditions, the effectiveness of the proposed method is verified in laboratory and field environments. Comparing the proposed method with the KF-based one, the experimental results show that the proposed method can still provide accurate SOC estimation when the statistical properties of the errors are inexact or unknown. The proposed method has been applied successfully to a robot inspecting live 500-kV extra-high-voltage power transmission lines.

Journal ArticleDOI
TL;DR: The generalized multiple Lyapunov functions method and the adding-a-power-integrator technique are used to design a switching law and construct continuous state feedback controllers of subsystems explicitly by a recursive design algorithm, producing global asymptotic stability and a prescribed H∞ performance level.
Abstract: The problem of H∞ control of switched nonlinear systems in p-normal form is investigated in this technical note where the solvability of the H∞ control problem for individual subsystems is unnecessary. Using the generalized multiple Lyapunov functions method and the adding a power integrator technique, we design a switching law and construct continuous state feedback controllers of subsystems explicitly by a recursive design algorithm to produce global asymptotical stability and a prescribed H∞ performance level. Multiple Lyapunov functions are exploited to reduce the conservativeness caused by adoption of a common Lyapunov function for all subsystems, which is usually required when applying the backstepping-like recursive design scheme. An example is provided to demonstrate the effectiveness of the proposed design method.

Proceedings ArticleDOI
01 Apr 2012
TL;DR: This paper investigates the publication of DP-compliant histograms, an important analytical tool for showing the distribution of a random variable, e.g., hospital bill size for certain patients, and proposes two novel algorithms, namely Noise First and Structure First, for computing DP-compliant histograms.
Abstract: Differential privacy (DP) is a promising scheme for releasing the results of statistical queries on sensitive data, with strong privacy guarantees against adversaries with arbitrary background knowledge. Existing studies on DP mostly focus on simple aggregations such as counts. This paper investigates the publication of DP-compliant histograms, an important analytical tool for showing the distribution of a random variable, e.g., hospital bill size for certain patients. Compared to simple aggregations whose results are purely numerical, a histogram query is inherently more complex, since it must also determine its structure, i.e., the ranges of the bins. As we demonstrate in the paper, a DP-compliant histogram with finer bins may actually lead to significantly lower accuracy than a coarser one, since the former requires stronger perturbations in order to satisfy DP. Moreover, the histogram structure itself may reveal sensitive information, which further complicates the problem. Motivated by this, we propose two novel algorithms, namely Noise First and Structure First, for computing DP-compliant histograms. Their main difference lies in the relative order of the noise injection and the histogram structure computation steps. Noise First has the additional benefit that it can improve the accuracy of an already published DP-compliant histogram computed using a naive method. Going one step further, we extend both solutions to answer arbitrary range queries. Extensive experiments, using several real data sets, confirm that the proposed methods output highly accurate query answers, and consistently outperform existing competitors.
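The noise-injection step underlying both algorithms is the standard Laplace mechanism: when each record contributes to exactly one bin, per-bin sensitivity is 1, so adding Laplace(1/ε) noise to each count satisfies ε-differential privacy. A sketch of that baseline step only (the structure-optimization that distinguishes Noise First from Structure First is not shown; the bin counts are illustrative):

```python
import numpy as np

def dp_histogram(counts, epsilon, rng=None):
    """Baseline DP histogram release via the Laplace mechanism.
    With sensitivity 1 per bin, noise scale 1/epsilon gives epsilon-DP."""
    if rng is None:
        rng = np.random.default_rng(0)  # seeded here only for reproducibility
    noise = rng.laplace(scale=1.0 / epsilon, size=len(counts))
    return np.asarray(counts, dtype=float) + noise

# Illustrative bin counts (e.g., patients per bill-size range):
noisy = dp_histogram([120, 45, 30, 5], epsilon=1.0)
```

The paper's observation follows directly from the scale 1/ε being fixed per bin: finer bins mean smaller true counts under the same noise magnitude, hence worse relative accuracy.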

Journal ArticleDOI
TL;DR: The approach combining solid-state assembly and heat treatment provides a simple and versatile way to prepare various metal chalcogenides for energy storage applications.
Abstract: Polyhedral CoS2 with a narrow size distribution was synthesized by a facile solid-state assembly process in a sealed silica tube. The flux of potassium halide (KX; X = Cl, Br, I) plays a crucial role in the formation of the polyhedrons and the size distribution. The S2^2− groups in CoS2 can be controllably withdrawn during heat treatment in air. The phases and microstructures of the obtained CoS2, Co3S4, CoS, Co9S8, and CoO depended on heating temperature and time. These cobalt materials, successfully used as electrodes of lithium ion batteries, possessed good cycling stability. Discharge capacities of 929.1 and 835.2 mAh g−1 were obtained for CoS2 and CoS respectively, and 76% and 71% of the capacities remained after 10 cycles. High capacities and good cycle performance make them promising candidates for lithium ion batteries. The approach combining solid-state assembly and heat treatment provides a simple and versatile way to prepare various metal chalcogenides for energy storage applications.

Journal ArticleDOI
01 Jul 2012
TL;DR: This paper verifies that the two definitions of the frequent itemset have a tight connection and can be unified together when the size of data is large enough and provides baseline implementations of eight existing representative algorithms and test their performances with uniform measures fairly.
Abstract: In recent years, due to the wide applications of uncertain data, mining frequent itemsets over uncertain databases has attracted much attention. In uncertain databases, the support of an itemset is a random variable instead of a fixed occurrence count of that itemset. Thus, unlike the corresponding problem in deterministic databases, where the frequent itemset has a unique definition, the frequent itemset under uncertain environments has had two different definitions so far. The first definition, referred to as the expected support-based frequent itemset, employs the expectation of the support of an itemset to measure whether this itemset is frequent. The second definition, referred to as the probabilistic frequent itemset, uses the probability of the support of an itemset to measure its frequency. Thus, existing work on mining frequent itemsets over uncertain databases is divided into two different groups, and no study has been conducted to comprehensively compare the two definitions. In addition, since no uniform experimental platform exists, current solutions for the same definition even generate inconsistent results. In this paper, we first aim to clarify the relationship between the two definitions. Through extensive experiments, we verify that the two definitions have a tight connection and can be unified when the size of the data is large enough. Second, we provide baseline implementations of eight existing representative algorithms and test their performance fairly with uniform measures. Finally, according to the fair tests over many different benchmark data sets, we clarify several existing inconsistent conclusions and discuss some new findings.
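The first definition can be made concrete in a few lines: under the usual item-independence assumption, an itemset's expected support is the sum over transactions of the product of its items' existence probabilities. A small illustrative sketch (toy data, not from the paper):

```python
def expected_support(db, itemset):
    """Expected support of an itemset in an uncertain transaction database.
    Each transaction maps item -> existence probability; under the common
    independence assumption the itemset occurs in a transaction with
    probability equal to the product of its items' probabilities."""
    total = 0.0
    for t in db:
        p = 1.0
        for item in itemset:
            p *= t.get(item, 0.0)
        total += p
    return total

# Toy uncertain database:
db = [{'a': 0.8, 'b': 0.5}, {'a': 0.6}, {'a': 1.0, 'b': 0.9}]
es = expected_support(db, {'a', 'b'})   # 0.8*0.5 + 0 + 1.0*0.9 = 1.3
```

The probabilistic definition instead asks for P(support ≥ minsup) over the support distribution, which is why the two notions only coincide in the large-data limit the paper identifies.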

Proceedings ArticleDOI
06 Jun 2012
TL;DR: The results show that implementing Bluetooth Low Energy (BLE) technology in the existing ECG monitoring system not only eliminates the physical constraints imposed by a hard-wired link but also greatly reduces the power consumption of the long-term monitoring system.
Abstract: A wireless electrocardiogram (ECG) monitoring system is developed which integrates Bluetooth Low Energy (BLE) technology. This BLE-based system comprises a single-chip ECG signal acquisition module, a Bluetooth module and a smart-phone. Apple's iPhone 4S, with embedded Bluetooth v4.0, Wi-Fi and iOS, is selected as the mobile device platform. The monitoring system is able to acquire ECG signals through a 2-lead ECG sensor, transmit the ECG data via the Bluetooth wireless link, and process and display the ECG waveform on the smart-phone. The results show that implementing Bluetooth Low Energy (BLE) technology in the existing ECG monitoring system not only eliminates the physical constraints imposed by a hard-wired link but also greatly reduces the power consumption of the long-term monitoring system.

Journal ArticleDOI
TL;DR: In this paper, the SmtA-GO composites were assembled onto the surface of cytopore microbeads and used for highly selective adsorption and preconcentration of ultra-trace cadmium.
Abstract: Graphene oxide (GO) nanosheets were decorated with a cysteine-rich metal-binding protein, cyanobacterium metallothionein (SmtA). The SmtA–GO composites were characterized by means of FT-IR, AFM and TGA, giving rise to a SmtA binding amount of 867 mg g−1. The SmtA–GO composites exhibit ultra-high selectivity toward the adsorption of cadmium, i.e., the tolerant concentrations for the coexisting metal and anionic species were 1–800 000 fold improved after SmtA decoration with respect to bare GO. The SmtA–GO composites were then assembled onto the surface of cytopore microbeads and used for highly selective adsorption and preconcentration of ultra-trace cadmium. In comparison with bare GO (carboxyl-rich GO) loaded cytopore (GO@cytopore), SmtA–GO loaded cytopore (SmtA–GO@cytopore) shows a 3.3-fold improvement over the binding capacity of cadmium, i.e. 7.70 mg g−1 for SmtA–GO@cytopore compared to 2.34 mg g−1 for that by GO@cytopore. A novel procedure for selective cadmium preconcentration was developed using SmtA–GO@cytopore beads as a renewable sorption medium incorporated into a sequential injection lab-on-valve system, with detection by graphite furnace atomic absorption spectrometry (GFAAS). The cadmium retained on the SmtA–GO surface was eluted with a small amount of nitric acid. An enrichment factor of 14.6 and a detection limit of 1.2 ng L−1 were achieved within a linear range of 5–100 ng L−1 by using a sample volume of 1 mL. The procedure was validated by analyzing cadmium in certified reference materials and a series of environmental water samples.

Journal ArticleDOI
12 Nov 2012-Langmuir
TL;DR: The nanocarrier based on the multifunctional nanoplatform exhibits an excellent drug loading capability of ca. 110%, in addition to cancer-targeted optical imaging and magnetically guided drug delivery.
Abstract: A novel and specific nanoplatform for in vitro simultaneous cancer-targeted optical imaging and magnetically guided drug delivery is developed by conjugating CdTe quantum dots with Fe3O4-filled carbon nanotubes (CNTs) for the first time. Fe3O4 is filled into the interior of the CNTs, which facilitates magnetically guided delivery and improves the synergetic targeting efficiency. Compared with immobilization on the external surface of the CNTs, encapsulating the magnetite nanocrystals inside the CNTs protects them from agglomeration, enhances their chemical stability, and improves the drug loading capacity. It also avoids the quenching of the quantum dots' fluorescence by the magnetic nanocrystals. The SiO2-coated quantum dots (HQDs) attached to the surface of the CNTs exhibit favorable fluorescence, as the hybrid SiO2 shells on the QD surfaces prevent fluorescence quenching by the CNTs. In addition, the hybrid SiO2 shells also mitigate the toxicity of the CdTe QDs. Coating transferrin on the surface of the modified CNTs provides a dual-targeted drug delivery system that transports doxorubicin hydrochloride (DOX) into HeLa cells by means of an external magnetic field. The nanocarrier based on this multifunctional nanoplatform exhibits an excellent drug loading capability of ca. 110%, in addition to cancer-targeted optical imaging and magnetically guided drug delivery.

Journal ArticleDOI
TL;DR: Graphene quantum dots prepared by a one-step hydrothermal procedure in a microwave exhibit an unusual emission transformation in strong acidic media and at high concentration, induced by self-assembled J-type aggregation under restrained π-π interactions.

Journal ArticleDOI
TL;DR: In this paper, a dynamic data replication strategy is put forward with a brief survey of replication strategy suitable for distributed computing environments and experimental results demonstrate the efficiency and effectiveness of the improved system brought by the proposed strategy in a cloud.
Abstract: Failures are the norm rather than the exception in cloud computing environments. To improve system availability, replicating popular data to multiple suitable locations is an advisable choice, as users can then access the data from a nearby site. This is, however, not the case for replicas which must have a fixed number of copies at several locations. How to decide a reasonable number and the right locations for replicas has become a challenge in cloud computing. In this paper, a dynamic data replication strategy is put forward, together with a brief survey of replication strategies suitable for distributed computing environments. It includes: 1) analyzing and modeling the relationship between system availability and the number of replicas; 2) evaluating and identifying popular data and triggering a replication operation when the data popularity passes a dynamic threshold; 3) calculating a suitable number of copies to meet a reasonable system byte effective rate requirement and placing replicas among data nodes in a balanced way; and 4) designing the dynamic data replication algorithm in a cloud. Experimental results demonstrate the efficiency and effectiveness of the improvements brought to a cloud system by the proposed strategy.
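Step 1, relating system availability to the number of replicas, is commonly modeled by assuming independent node failures: with per-node availability p, n replicas give availability 1 − (1 − p)^n. A simplified sketch of sizing the replica set that way (the paper's actual availability model may differ):

```python
def replicas_needed(target_availability, node_availability):
    """Smallest n with 1 - (1 - p)**n >= A, assuming independent node failures
    and per-node availability p > 0."""
    n, unavail = 1, 1.0 - node_availability
    while 1.0 - unavail ** n < target_availability:
        n += 1
    return n

# 90%-available nodes, 99.5% target system availability:
n = replicas_needed(0.995, 0.9)   # -> 3 replicas (1 - 0.1**3 = 0.999 >= 0.995)
```

Inverting this relationship is what lets a replication strategy keep the replica count at the minimum that still meets the availability target, rather than fixing it a priori.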

Journal ArticleDOI
TL;DR: Based on the characteristic of magnetic-controlling refractive index, the magnetic fluid filled in hollow-core photonic crystal fiber (HC-PCF) can be used as the sensitive medium in the cavity of a fiber Fabry-Perot (F-P) magnetic field sensor.
Abstract: Based on its magnetically controllable refractive index, magnetic fluid filled into a hollow-core photonic crystal fiber (HC-PCF) can be used as the sensitive medium in the cavity of a fiber Fabry–Perot (F–P) magnetic field sensor. The structure and the sensing principle are introduced. Theoretical simulations of the mode distribution of the HC-PCF filled with the magnetic fluid and of the sensor output spectra are discussed in detail. The sensor's multiplexing capability is indicated as well. The magnetic field measurement sensitivity of the proposed sensor is about 33 pm/Oe.

Journal ArticleDOI
TL;DR: Experimental results on the well-known benchmark instances and comparisons with other recently published algorithms show the efficiency and effectiveness of the proposed HSFLA for solving the multi-objective flexible job shop scheduling problem.

Journal ArticleDOI
TL;DR: A framework with ensemble techniques is presented for customer churn prediction directly using longitudinal behavioral data and a novel approach called the hierarchical multiple kernel support vector machine (H-MK-SVM) is formulated.

Journal ArticleDOI
TL;DR: In this article, the authors study the fundamentals of mesoscale structure and texture development in face-centered-cubic (fcc) metals with low stacking fault energy (SFE).

Journal ArticleDOI
TL;DR: This paper examines simple and easy to implement methods that are able to improve 12 out of 120 best known solutions of Taillard’s flowshop benchmark with total flowtime criterion and presents extensions of these methods that work over populations.

Journal ArticleDOI
TL;DR: Fluorescent carbon dots were solvothermaly synthesized in water-glycol medium by using glucose as carbon source and then modified with polyethyleneimine for the first time to improve fluorescence quality and test the cytotoxicity of these CDs using HeLa cells.

Journal ArticleDOI
TL;DR: In this article, self-doped polyaniline (SPAN) was deposited on carbon cloth by electro-co-polymerization of aniline and metanilic acid to get SPAN/FC which showed a specific capacitance of 408 F g −1 in 0.5 M Na 2 SO 4, based on a constant charge-discharge experiment at a current density of 1 ǫg −1.