
Showing papers by "NEC" published in 2018


Proceedings ArticleDOI
18 Jun 2018
TL;DR: In this paper, a multi-level adversarial network is proposed to perform output-space domain adaptation at different feature levels; the approach is evaluated under various settings, including synthetic-to-real and cross-city scenarios.
Abstract: Convolutional neural network-based approaches for semantic segmentation rely on supervision with pixel-level ground truth, but may not generalize well to unseen image domains. As the labeling process is tedious and labor intensive, developing algorithms that can adapt source ground truth labels to the target domain is of great interest. In this paper, we propose an adversarial learning method for domain adaptation in the context of semantic segmentation. Considering semantic segmentations as structured outputs that contain spatial similarities between the source and target domains, we adopt adversarial learning in the output space. To further enhance the adapted model, we construct a multi-level adversarial network to effectively perform output space domain adaptation at different feature levels. Extensive experiments and ablation study are conducted under various domain adaptation settings, including synthetic-to-real and cross-city scenarios. We show that the proposed method performs favorably against the state-of-the-art methods in terms of accuracy and visual quality.

1,469 citations


Proceedings ArticleDOI
29 Mar 2018
TL;DR: Attentive statistics pooling for deep speaker embedding in text-independent speaker verification uses an attention mechanism to give different weights to different frames and generates not only weighted means but also weighted standard deviations, which can capture long-term variations in speaker characteristics more effectively.
Abstract: This paper proposes attentive statistics pooling for deep speaker embedding in text-independent speaker verification. In conventional speaker embedding, frame-level features are averaged over all the frames of a single utterance to form an utterance-level feature. Our method utilizes an attention mechanism to give different weights to different frames and generates not only weighted means but also weighted standard deviations. In this way, it can capture long-term variations in speaker characteristics more effectively. An evaluation on the NIST SRE 2012 and the VoxCeleb data sets shows that it reduces equal error rates (EERs) from the conventional method by 7.5% and 8.1%, respectively.
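The pooling itself reduces to a softmax-weighted mean and standard deviation per feature dimension; a minimal sketch follows (the attention scores would come from a small trained network, which is omitted here):

```python
import math

def attentive_stats_pooling(frames, scores):
    """Pool frame-level features into a weighted mean and weighted std.

    frames: list of feature vectors (one per frame)
    scores: one unnormalized attention score per frame
    """
    # softmax over attention scores -> per-frame weights
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    w = [e / z for e in exps]

    dim = len(frames[0])
    mean = [sum(w[t] * frames[t][d] for t in range(len(frames)))
            for d in range(dim)]
    # weighted second moment minus squared weighted mean
    std = [math.sqrt(max(sum(w[t] * frames[t][d] ** 2
                             for t in range(len(frames))) - mean[d] ** 2, 0.0))
           for d in range(dim)]
    return mean + std  # utterance-level embedding: concat(mean, std)
```

With uniform scores this degenerates to the conventional (unweighted) statistics pooling, which is exactly the baseline the paper improves upon.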

450 citations


Proceedings ArticleDOI
10 Sep 2018
TL;DR: This work utilizes recurrent neural networks to learn time-aware representations of relation types which can be used in conjunction with existing latent factorization methods to incorporate temporal information.
Abstract: Research on link prediction in knowledge graphs has mainly focused on static multi-relational data. In this work we consider temporal knowledge graphs, where relations between entities may only hold for a time interval or a specific point in time. In line with previous work on static knowledge graphs, we propose to address this problem by learning latent entity and relation type representations. To incorporate temporal information, we utilize recurrent neural networks to learn time-aware representations of relation types, which can be used in conjunction with existing latent factorization methods. The proposed approach is shown to be robust to common challenges in real-world KGs: the sparsity and heterogeneity of temporal expressions. Experiments show the benefits of our approach on four temporal KGs. The data sets are available under a permissive BSD-3 license.
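The time-aware relation representation can be illustrated with a toy recurrence (untrained random weights and a hypothetical token vocabulary; the paper uses a trained LSTM over relation and time tokens):

```python
import math
import random

random.seed(0)
DIM = 4
# Hypothetical embeddings for a relation token plus the characters of a
# time expression (the real model learns these jointly).
vocab = {tok: [random.uniform(-1, 1) for _ in range(DIM)]
         for tok in ["occursSince", "0", "1", "8", "9", "y"]}
# Recurrent weight matrix mapping [input; previous state] -> new state.
W = [[random.uniform(-0.5, 0.5) for _ in range(2 * DIM)] for _ in range(DIM)]

def rnn_relation(tokens):
    """Fold relation + time tokens into one time-aware relation vector."""
    h = [0.0] * DIM
    for tok in tokens:
        x = vocab[tok] + h  # concatenate input embedding and state
        h = [math.tanh(sum(W[i][j] * x[j] for j in range(2 * DIM)))
             for i in range(DIM)]
    return h

def transe_score(head, rel, tail):
    """TransE-style plausibility: negative L1 distance of head + rel to tail."""
    return -sum(abs(head[d] + rel[d] - tail[d]) for d in range(DIM))
```

The resulting vector plugs into an existing factorization scorer unchanged, which is how the method composes with static latent-factor models.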

240 citations


Proceedings ArticleDOI
26 Jun 2018
TL;DR: This paper describes Version 2.0 of the ASVspoof 2017 database, which was released to correct data anomalies detected post-evaluation, and contains as-yet unpublished meta-data describing recording and playback devices and acoustic environments, supporting the analysis of replay detection performance and its limits.
Abstract: The now-acknowledged vulnerabilities of automatic speaker verification (ASV) technology to spoofing attacks have spawned interest to develop so-called spoofing countermeasures. By providing common databases, protocols and metrics for their assessment, the ASVspoof initiative was born to spear-head research in this area. The first competitive ASVspoof challenge held in 2015 focused on the assessment of countermeasures to protect ASV technology from voice conversion and speech synthesis spoofing attacks. The second challenge switched focus to the consideration of replay spoofing attacks and countermeasures. This paper describes Version 2.0 of the ASVspoof 2017 database which was released to correct data anomalies detected post-evaluation. The paper contains as-yet unpublished meta-data which describes recording and playback devices and acoustic environments. These support the analysis of replay detection performance and limits. Also described are new results for the official ASVspoof baseline system which is based upon a constant Q cepstral coefficient frontend and a Gaussian mixture model backend. Reported are enhancements to the baseline system in the form of log-energy coefficients and cepstral mean and variance normalisation in addition to an alternative i-vector backend. The best results correspond to a 48% relative reduction in equal error rate when compared to the original baseline system.
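The cepstral mean and variance normalisation used in the enhanced baseline can be sketched as follows (a per-utterance version; framing and feature extraction are omitted):

```python
import math

def cmvn(features, eps=1e-10):
    """Cepstral mean and variance normalisation: give each coefficient
    zero mean and unit variance across the frames of one utterance."""
    n = len(features)
    dim = len(features[0])
    mean = [sum(f[d] for f in features) / n for d in range(dim)]
    var = [sum((f[d] - mean[d]) ** 2 for f in features) / n for d in range(dim)]
    return [[(f[d] - mean[d]) / math.sqrt(var[d] + eps) for d in range(dim)]
            for f in features]
```

Normalising per utterance removes channel and recording-level offsets, which is why it helps replay detection where device characteristics would otherwise dominate.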

153 citations


Journal ArticleDOI
TL;DR: A DC Algorithm (DCA) is presented, where the dual step at each iteration can be efficiently carried out due to the accessible subgradient of the largest-k norm, and the efficiency of the proposed DCA in comparison with existing methods which have other penalty terms is demonstrated.
Abstract: We propose a DC (Difference of two Convex functions) formulation approach for sparse optimization problems having a cardinality or rank constraint. With the largest-k norm, an exact DC representation of the cardinality constraint is provided. We then transform the cardinality-constrained problem into a penalty function form and derive exact penalty parameter values for some optimization problems, especially for quadratic minimization problems which often appear in practice. A DC Algorithm (DCA) is presented, where the dual step at each iteration can be efficiently carried out due to the accessible subgradient of the largest-k norm. Furthermore, we can solve each DCA subproblem in linear time via a soft thresholding operation if there are no additional constraints. The framework is extended to the rank-constrained problem as well as the cardinality- and the rank-minimization problems. Numerical experiments demonstrate the efficiency of the proposed DCA in comparison with existing methods which have other penalty terms.
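The key DC identity is that the l1 norm minus the largest-k norm vanishes exactly when x has at most k nonzeros, so the building blocks of the DCA can be sketched as follows (a minimal illustration, not the authors' code):

```python
def largest_k_norm(x, k):
    """Sum of the k largest absolute entries of x."""
    return sum(sorted((abs(v) for v in x), reverse=True)[:k])

def largest_k_subgradient(x, k):
    """A subgradient of the largest-k norm: +/-1 on k entries of largest
    magnitude, 0 elsewhere (ties broken arbitrarily). This is the cheap
    dual step of the DCA."""
    idx = sorted(range(len(x)), key=lambda i: -abs(x[i]))[:k]
    g = [0.0] * len(x)
    for i in idx:
        g[i] = 1.0 if x[i] >= 0 else -1.0
    return g

def soft_threshold(x, tau):
    """Prox of tau * l1 norm: the linear-time primal step when the
    subproblem has no additional constraints."""
    return [max(abs(v) - tau, 0.0) * (1.0 if v >= 0 else -1.0) for v in x]
```

Each DCA iteration linearises the concave part (the negated largest-k norm) via the subgradient and solves the remaining convex l1-penalised subproblem, here by soft thresholding.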

151 citations


Proceedings ArticleDOI
26 Jun 2018
TL;DR: In this article, the authors proposed a tandem detection cost function (t-DCF) metric to compare the performance of different anti-spoofing countermeasures in isolation from automatic speaker verification (ASV).
Abstract: The ASVspoof challenge series was born to spearhead research in anti-spoofing for automatic speaker verification (ASV). The two challenge editions in 2015 and 2017 involved the assessment of spoofing countermeasures (CMs) in isolation from ASV using an equal error rate (EER) metric. While a strategic approach to assessment at the time, it has certain shortcomings. First, the CM EER is not necessarily a reliable predictor of performance when ASV and CMs are combined. Second, the EER operating point is ill-suited to user authentication applications, e.g. telephone banking, characterised by a high target user prior but a low spoofing attack prior. We aim to migrate from CM- to ASV-centric assessment with the aid of a new tandem detection cost function (t-DCF) metric. It extends the conventional DCF used in ASV research to scenarios involving spoofing attacks. The t-DCF metric has 6 parameters: (i) false alarm and miss costs for both systems, and (ii) prior probabilities of target and spoof trials (with an implied third, nontarget prior). The study is intended to serve as a self-contained, tutorial-like presentation. We analyse with the t-DCF a selection of top-performing CM submissions to the 2015 and 2017 editions of ASVspoof, with a focus on the spoofing attack prior. Whereas there is little to choose between countermeasure systems for lower priors, system rankings derived with the EER and t-DCF show differences for higher priors. We observe some ranking changes. Findings support the adoption of the DCF-based metric into the roadmap for future ASVspoof challenges, and possibly for other biometric anti-spoofing evaluations.
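A simplified expected-cost computation conveys the flavour of a cost function with per-class priors and costs (this is not the exact published t-DCF, which couples ASV and CM error rates more finely; all numbers below are illustrative):

```python
def tandem_cost(pi, cost, p):
    """Expected cost of a tandem CM -> ASV system (simplified sketch).

    pi:   class priors {'tar', 'non', 'spoof'} (sum to 1)
    cost: error costs {'miss', 'fa_asv', 'fa_spoof'}
    p:    error rates of the tandem system:
          'miss_tar'  - target rejected by CM or ASV
          'fa_non'    - nontarget accepted by both
          'fa_spoof'  - spoofed trial accepted by both
    """
    return (pi['tar'] * cost['miss'] * p['miss_tar']
            + pi['non'] * cost['fa_asv'] * p['fa_non']
            + pi['spoof'] * cost['fa_spoof'] * p['fa_spoof'])
```

Sweeping the spoof prior in such a cost is what exposes the ranking changes the paper reports: at low spoof priors most CMs look alike, while at high priors their differences dominate the cost.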

147 citations


Proceedings ArticleDOI
01 Jan 2018
TL;DR: PRIOTRACKER is a backward and forward causality tracker that automatically prioritizes the investigation of abnormal causal dependencies in the tracking process and can capture attack traces that are missed by existing trackers and reduce the analysis time by up to two orders of magnitude.
Abstract: The increasingly sophisticated Advanced Persistent Threat (APT) attacks have become a serious challenge for enterprise IT security. Attack causality analysis, which tracks multi-hop causal relationships between files and processes to diagnose attack provenances and consequences, is the first step towards understanding APT attacks and taking appropriate responses. Since attack causality analysis is a time-critical mission, it is essential to design causality tracking systems that extract useful attack information in a timely manner. However, prior work is limited in serving this need. Existing approaches have largely focused on pruning causal dependencies totally irrelevant to the attack, but fail to differentiate and prioritize abnormal events from numerous relevant, yet benign and complicated system operations, resulting in long investigation time and slow responses. To address this problem, we propose PRIOTRACKER, a backward and forward causality tracker that automatically prioritizes the investigation of abnormal causal dependencies in the tracking process. Specifically, to assess the priority of a system event, we consider its rareness and topological features in the causality graph. To distinguish unusual operations from normal system events, we quantify the rareness of each event by developing a reference model which records common routine activities in corporate computer systems. We implement PRIOTRACKER in 20K lines of Java code and a reference model builder in 10K lines of Java code. We evaluate our tool by deploying both systems in a real enterprise IT environment, where we collect 1 TB of data comprising 2.5 billion OS events from 150 machines in one week. Experimental results show that PRIOTRACKER can capture attack traces that are missed by existing trackers and reduce the analysis time by up to two orders of magnitude.
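The priority-scoring idea can be sketched as follows (a hypothetical weighting; the paper combines rareness with topological features such as fan-out, but its exact scheme differs):

```python
import math

def priority(event, reference_counts, out_degree, alpha=0.5):
    """Toy priority score for a causal event: events that are rare in the
    reference model of routine activity, and events with low fan-out in
    the causality graph, are explored first.

    reference_counts: dict mapping event type -> historical occurrence count
    out_degree: number of outgoing causal edges of this event
    """
    count = reference_counts.get(event, 0)
    rareness = 1.0 / (1.0 + count)              # unseen events score 1.0
    fanout_penalty = 1.0 / (1.0 + math.log1p(out_degree))
    return alpha * rareness + (1 - alpha) * fanout_penalty
```

A tracker would pop events from a max-priority queue keyed on this score, so anomalous branches of the causality graph are expanded before routine ones.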

122 citations


Journal ArticleDOI
TL;DR: Although there are limitations and requirements for applying automated histopathological classification of gastric biopsy specimens in the clinical setting, the results of the present study are promising.
Abstract: Automated image analysis has recently been developed in the field of surgical pathology. The aim of the present study was to evaluate the classification accuracy of the e-Pathologist image analysis software. A total of 3062 gastric biopsy specimens were consecutively obtained and stained. The specimen slides were anonymized and digitized. At least two experienced gastrointestinal pathologists evaluated each slide for pathological diagnosis. We compared the three-tier (positive for carcinoma or suspicion of carcinoma; caution for adenoma or suspicion of a neoplastic lesion; or negative for a neoplastic lesion) or two-tier (negative or non-negative) classification results of human pathologists and of the e-Pathologist. Of 3062 cases, 33.4% showed an abnormal finding. For the three-tier classification, the overall concordance rate was 55.6% (1702/3062). The kappa coefficient was 0.28 (95% CI, 0.26–0.30; fair agreement). For the negative biopsy specimens, the concordance rate was 90.6% (1033/1140), but for the positive biopsy specimens, the concordance rate was less than 50%. For the two-tier classification, the sensitivity, specificity, positive predictive value, and negative predictive value were 89.5% (95% CI, 87.5–91.4%), 50.7% (95% CI, 48.5–52.9%), 47.7% (95% CI, 45.4–49.9%), and 90.6% (95% CI, 88.8–92.2%), respectively. Although there are limitations and requirements for applying automated histopathological classification of gastric biopsy specimens in the clinical setting, the results of the present study are promising.

77 citations


Proceedings ArticleDOI
01 Jan 2018
TL;DR: DROIDUNPACK is a whole-system emulation based Android packing analysis framework which, compared with existing tools, relies on intrinsic characteristics of the Android runtime (rather than heuristics) and further enables virtual machine inspection to precisely recover hidden code and reveal packing behaviors.
Abstract: The prevalent usage of runtime packers has complicated Android malware analysis, as both legitimate and malicious apps are leveraging packing mechanisms to protect themselves against reverse engineering. Although recent efforts have been made to analyze particular packing techniques, little has been done to study the unique characteristics of Android packers. In this paper, we report the first systematic study on mainstream Android packers, in an attempt to understand their security implications. For this purpose, we developed DROIDUNPACK, a whole-system emulation based Android packing analysis framework, which compared with existing tools, relies on intrinsic characteristics of Android runtime (rather than heuristics), and further enables virtual machine inspection to precisely recover hidden code and reveal packing behaviors. Running our tool on 6 major commercial packers, 93,910 Android malware samples and 3 existing state-of-the-art unpackers, we found that not only are commercial packing services abused to encrypt malicious or plagiarized contents, they themselves also introduce security-critical vulnerabilities to the apps being packed. Our study further reveals the prevalence and rapid evolution of custom packers used by malware authors, which cannot be defended against using existing techniques, due to their design weaknesses.

73 citations


Proceedings ArticleDOI
11 Nov 2018
TL;DR: The potential of SX-Aurora TSUBASA is clarified through evaluations of practical applications and the effectiveness of the new execution model is examined by using a microbenchmark.
Abstract: A new SX-Aurora TSUBASA vector supercomputer has been released, and it features a new system architecture and a new execution model to achieve high sustained performance, especially for memory-intensive applications. In SX-Aurora TSUBASA, the vector host (VH) of a standard x86 Linux node is attached to the vector engine (VE) of the newly developed vector processor. An application is executed on the VE, and only system calls are offloaded to the VH. This new execution model can avoid redundant data transfers between the VH and VE that can easily become a bottleneck in the conventional execution model. This paper examines the potential of SX-Aurora TSUBASA. First, the basic performance is clarified by evaluating benchmark programs. Then, the effectiveness of the new execution model is examined by using a microbenchmark. Finally, the potential of SX-Aurora TSUBASA is clarified through evaluations of practical applications.

69 citations


Posted Content
TL;DR: A migration from CM- to ASV-centric assessment is pursued with the aid of a new tandem detection cost function (t-DCF) metric, which extends the conventional DCF used in ASV research to scenarios involving spoofing attacks.
Abstract: The ASVspoof challenge series was born to spearhead research in anti-spoofing for automatic speaker verification (ASV). The two challenge editions in 2015 and 2017 involved the assessment of spoofing countermeasures (CMs) in isolation from ASV using an equal error rate (EER) metric. While a strategic approach to assessment at the time, it has certain shortcomings. First, the CM EER is not necessarily a reliable predictor of performance when ASV and CMs are combined. Second, the EER operating point is ill-suited to user authentication applications, e.g. telephone banking, characterised by a high target user prior but a low spoofing attack prior. We aim to migrate from CM- to ASV-centric assessment with the aid of a new tandem detection cost function (t-DCF) metric. It extends the conventional DCF used in ASV research to scenarios involving spoofing attacks. The t-DCF metric has 6 parameters: (i) false alarm and miss costs for both systems, and (ii) prior probabilities of target and spoof trials (with an implied third, nontarget prior). The study is intended to serve as a self-contained, tutorial-like presentation. We analyse with the t-DCF a selection of top-performing CM submissions to the 2015 and 2017 editions of ASVspoof, with a focus on the spoofing attack prior. Whereas there is little to choose between countermeasure systems for lower priors, system rankings derived with the EER and t-DCF show differences for higher priors. We observe some ranking changes. Findings support the adoption of the DCF-based metric into the roadmap for future ASVspoof challenges, and possibly for other biometric anti-spoofing evaluations.

Journal ArticleDOI
TL;DR: Yoshino et al. as mentioned in this paper pointed out a security loophole at the transmitter of the GHz-clock QKD, which is a common problem in high-speed QKD systems using practical bandwidth-limited devices.
Abstract: Quantum key distribution (QKD) allows two distant parties to share secret keys with proven security even in the presence of an eavesdropper with unbounded computational power. Recently, GHz-clock decoy QKD systems have been realized by employing ultrafast optical communication devices. However, security loopholes of high-speed systems have not been fully explored yet. Here we point out a security loophole at the transmitter of the GHz-clock QKD, which is a common problem in high-speed QKD systems using practical bandwidth-limited devices. We experimentally observe the inter-pulse intensity correlation and modulation pattern-dependent intensity deviation in a practical high-speed QKD system. Such correlation violates the assumption of most security theories. We also provide its countermeasure, which does not require significant changes of hardware and can generate keys secure over 100 km of fiber transmission. Our countermeasure is simple, effective and applicable to a wide range of high-speed QKD systems, and thus paves the way to realizing ultrafast and security-certified commercial QKD systems. A potential security loophole and its countermeasure have been discovered in practical implementations of high-speed quantum key distribution (QKD). Ken-ichiro Yoshino from NEC Corporation and a team of researchers from Japan investigated the intensity fluctuations of optical pulses in a GHz-clocked QKD system, revealing that the limited bandwidth of the ultrafast optical transmitter's electronics generates deviations from the ideal signal. These perturbations have been shown to carry signatures of previous modulation patterns, effectively introducing correlations between individual pulses. As the strength of QKD relies on its proven security, which in many cases is derived under the assumption of independent pulses, these correlations constitute a loophole that might compromise the whole protocol.
Fortunately, the researchers developed two countermeasures, pattern sifting and alternate key distillation, which recover security and do not impact performance too severely.

Proceedings ArticleDOI
11 Mar 2018
TL;DR: A novel, low complexity nonlinearity compensation technique based on generating a black-box model of the transmission by training an artificial neural network is used, resulting in the largest SE-distance product 66,102 b/s/Hz-km over live-traffic carrying cable.
Abstract: We report on the evolution of the longest segment of the FASTER cable at 11,017 km, with 8QAM transponders at 4 b/s/Hz spectral efficiency (SE) in service. With offline testing, 6 b/s/Hz is further demonstrated using probabilistically shaped 64QAM and a novel, low-complexity nonlinearity compensation technique based on generating a black-box model of the transmission by training an artificial neural network, resulting in the largest SE-distance product of 66,102 b/s/Hz-km over a live-traffic-carrying cable.

Journal ArticleDOI
TL;DR: The proposed model for setting the dispatching headways of bus lines considers the demand, headway and travel time variations along every section of each bus route for different times of the day, as well as operational costs, vehicle capacity and fleet size constraints.
Abstract: Frequency setting requires the determination of the dispatching headways of all bus lines in a city network and constitutes the main activity in the tactical planning of public transport operations. Determining the dispatching headways of bus services in a city network is a multi-criteria problem that typically involves balancing between passenger demand coverage and operational costs. In this study, the problem of setting the optimal dispatching headways is formulated with the explicit consideration of operational variability issues for mitigating the adverse effects of passenger demand and travel time variations inherent to bus operations. The proposed model for setting the dispatching headways of bus lines considers the demand, headway and travel time variations along every section of each bus route for different times of the day, as well as operational costs, vehicle capacity and fleet size constraints. We first formulate the problem while accounting for the consequences of variability in service operations. The resulting optimization problem is then solved by employing a Branch and Bound approach together with Sequential Quadratic Programming in order to find the optimal dispatching headway for each bus line. Experimental results demonstrate (a) the improvement potential of the base case dispatching headways when considering the service reliability; (b) the sensitivity of the determined dispatching headways to changes in different criteria, such as passenger demand and/or bus running costs, and (c) the convergence accuracy of the proposed solution method when compared to heuristic approaches.
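The trade-off at the core of frequency setting can be illustrated with the classic square-root headway rule (a single-line sketch under strong assumptions, not the paper's branch-and-bound plus SQP formulation; all parameter names are illustrative):

```python
import math

def optimal_headway(demand, op_cost_per_trip, wait_cost_per_min, capacity,
                    h_min=2.0, h_max=30.0):
    """Headway h in minutes for one line, balancing operator cost
    (60/h trips per hour) against passenger waiting cost (h/2 per rider).

    demand:   boarding passengers per hour
    capacity: passengers one vehicle can carry
    """
    # minimize C(h) = 60*op_cost/h + demand*wait_cost*h/2
    # setting dC/dh = 0 gives the unconstrained square-root optimum:
    h_star = math.sqrt(120.0 * op_cost_per_trip / (demand * wait_cost_per_min))
    # vehicle capacity caps the headway: demand*h/60 <= capacity
    h_cap = 60.0 * capacity / demand
    return max(h_min, min(h_star, h_cap, h_max))
```

The paper's contribution is precisely what this sketch ignores: demand and travel-time variability per route section and per time of day, plus fleet-size constraints coupling the lines.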

Journal ArticleDOI
TL;DR: In this article, a new oxygen evolution reaction mechanism originating from oxygen vacancies is reported, which remarkably enhances the OER activity of oxygen deficient electrocatalysts and suggests that oxygen deficient layered superconductors are promising OER catalysts for energy conversion technologies.
Abstract: The oxygen evolution reaction (OER) is an important reaction in the field of renewable energy and is utilized in electrochemical water splitting for hydrogen fuel production and rechargeable metal–air batteries. Herein, we report a new oxygen evolution reaction mechanism originating from oxygen vacancies, which remarkably enhances the OER activity of oxygen deficient electrocatalysts. The OER activity of Sr2VFeAsO3−δ is drastically enhanced above the lattice oxygen vacancy of δ = 0.5, exhibiting ∼300 mV lower overpotential and 80 times higher specific activity at 1.7 V vs. RHE. Surprisingly, the initially low OER activity of Sr2VFeAsO3−δ (δ < 0.5) increases once the vacancy content exceeds δ = 0.5. As a result, the distance between OH− coupled oxygen-vacant sites becomes sufficiently short to enable direct O–O bond formation as in photosystem II. Thus, we found that the OER activity of oxygen deficient electrocatalysts is controllable by the variety of lattice oxygen vacancies, which suggests that oxygen deficient layered superconductors are promising OER catalysts for energy conversion technologies.

Patent
Yamamoto Tetsuya
26 Jul 2018
TL;DR: In this paper, a route switching device is described that includes a first selection section for outputting a first main signal, a second selection section for outputting a first switching command signal, first and second monitors, and a third selection section for obtaining a second main signal from a selected receiver.
Abstract: A route switching device includes a first selection section for outputting a first main signal, a second selection section for outputting a first switching command signal, a first transmitter transmitting an inputted signal, a second transmitter, a first receiver, a second receiver, a first monitor for outputting an abnormality notification if an abnormality in a second main signal is detected, and outputting a first switching command notification if second information is included in a second switching command signal, a second monitor, and a third selection section for obtaining a second main signal from a selected receiver. The first selection section and the second selection section switch a selection destination when a first switching command notification is input. The second selection section outputs a switching command signal including second information when an abnormality notification is input. The third selection section switches a selection source when an abnormality notification is input.

Journal ArticleDOI
TL;DR: It is clarified that the fulfillment of a real-time tsunami inundation forecast system requires a system with high-performance cores connected to the memory subsystem at a high memory bandwidth such as SX-ACE.
Abstract: The tsunami disasters that occurred in Indonesia, Chile, and Japan have inflicted serious casualties and damaged social infrastructures. Tsunami forecasting systems are thus urgently required worldwide. We have developed a real-time tsunami inundation forecast system that can complete a tsunami inundation and damage forecast for coastal cities at the level of 10-m grid size in less than 20 min. As the tsunami inundation and damage simulation is a vectorizable memory-intensive program, we incorporate NEC's vector supercomputer SX-ACE. In this paper, we present an overview of our system. In addition, we describe an implementation of the program on SX-ACE and evaluate its performance in comparison with the cases using an Intel Xeon-based system and the K computer. Then, we clarify that the fulfillment of a real-time tsunami inundation forecast system requires a system with high-performance cores connected to the memory subsystem at a high memory bandwidth such as SX-ACE.

Journal ArticleDOI
TL;DR: A genetic algorithm-based framework is proposed that integrates software fault localization techniques and focuses on reusing test specifications and input values whenever feasible; test cases generated this way can be easily reused between different products of the same family and help reduce the overall testing and debugging cost.
Abstract: In response to the highly competitive market and the pressure to cost-effectively release good-quality software, companies have adopted the concept of software product line to reduce development cost. However, testing and debugging of each product, even from the same family, is still done independently. This can be very expensive. To solve this problem, we need to explore how test cases generated for one product can be used for another product. We propose a genetic algorithm-based framework which integrates software fault localization techniques and focuses on reusing test specifications and input values whenever feasible. Case studies using four software product lines and eight fault localization techniques were conducted to demonstrate the effectiveness of our framework. A discussion of factors that may affect the effectiveness of the proposed framework is also presented. Our results indicate that test cases generated in such a way can be easily reused (with appropriate conversion) between different products of the same family and help reduce the overall testing and debugging cost.

Posted Content
TL;DR: In this paper, the authors proposed Sereum (Secure Ethereum) which protects deployed smart contracts against re-entrancy attacks in a backwards compatible way based on run-time monitoring and validation.
Abstract: Recently, a number of existing blockchain systems have witnessed major bugs and vulnerabilities within smart contracts. Although the literature features a number of proposals for securing smart contracts, these proposals mostly focus on proving the correctness or absence of a certain type of vulnerability within a contract, but cannot protect deployed (legacy) contracts from being exploited. In this paper, we address this problem in the context of re-entrancy exploits and propose a novel smart contract security technology, dubbed Sereum (Secure Ethereum), which protects existing, deployed contracts against re-entrancy attacks in a backwards compatible way based on run-time monitoring and validation. Sereum does neither require any modification nor any semantic knowledge of existing contracts. By means of implementation and evaluation using the Ethereum blockchain, we show that Sereum covers the actual execution flow of a smart contract to accurately detect and prevent attacks with a false positive rate as small as 0.06% and with negligible run-time overhead. As a by-product, we develop three advanced re-entrancy attacks to demonstrate the limitations of existing offline vulnerability analysis tools.
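Sereum itself performs dynamic taint tracking over storage variables that influence control flow; a much-simplified lock-style monitor conveys the run-time idea (contract and variable names are hypothetical):

```python
class ReentrancyMonitor:
    """Toy run-time monitor: a contract that is re-entered while one of
    its invocations is still on the call stack may not write its own
    storage; such a write is flagged as a suspected re-entrancy attack."""

    def __init__(self):
        self.stack = []       # active contract invocations
        self.violations = []  # flagged (contract, storage_key) writes

    def enter(self, contract):
        self.stack.append(contract)

    def write_storage(self, contract, key):
        # a storage write inside a nested (re-entrant) invocation is the
        # classic pattern of the DAO-style exploit
        if self.stack.count(contract) > 1:
            self.violations.append((contract, key))

    def leave(self, contract):
        self.stack.pop()
```

In the real system the check is finer-grained (only variables that feed conditional branches guarding value transfers are protected), which is what keeps the false positive rate near 0.06%.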


Book ChapterDOI
03 Sep 2018
TL;DR: A method for creating a digital twin that is network-specific, cost-efficient, highly reliable, and security test-oriented is suggested and demonstrated on a simple use case of a simplified ICS network.
Abstract: Industrial control systems (ICSs), and particularly supervisory control and data acquisition (SCADA) systems, are used in many critical infrastructures and are inherently insecure, making them desirable targets for attackers. ICS networks differ from typical enterprise networks in their characteristics and goals; therefore, security assessment methods that are common in enterprise networks (e.g., penetration testing) cannot be directly applied in ICSs. Thus, security experts recommend using an isolated environment that mimics the real one for assessing the security of ICSs. While the use of such environments solves the main challenge in ICS security analysis, it poses another one: the trade-off between budget and fidelity. In this paper we suggest a method for creating a digital twin that is network-specific, cost-efficient, highly reliable, and security test-oriented. The proposed method consists of two modules: a problem builder that takes facts about the system under test and converts them into a rules set that reflects the system’s topology and digital twin implementation constraints; and a solver that takes these inputs and uses 0–1 non-linear programming to find an optimal solution (i.e., a digital twin specification), which satisfies all of the constraints. We demonstrate the application of our method on a simple use case of a simplified ICS network.

Proceedings ArticleDOI
11 Mar 2018
TL;DR: Quality of transmission prediction for real-time mixed line-rate systems is realized using artificial neural network based transfer learning with SDN orchestration.
Abstract: Quality of transmission prediction for real-time mixed line-rate systems is realized using artificial neural network based transfer learning with SDN orchestration. An accuracy of 0.42 dB is achieved with a reduction in training samples from 1000 to 20.

Journal ArticleDOI
Qian Cheng, Ya Zhang
TL;DR: In this paper, a multi-channel graphite anode with channels etched into the graphite surface was proposed for fast chargeable lithium ion batteries for electric vehicles and plug-in hybrid vehicles.
Abstract: Graphite, which is the most widely used anode material for lithium ion batteries, has a limited power performance at high charging rates (Li-ion input), while its alternatives, such as silicon and tin alloys, show an even poorer rate capability. Here, we describe a multi-channel graphite anode with channels etched into the graphite surface that enables lithium ions to quickly access graphite particles for fast chargeable lithium ion batteries. As a result, the multi-channel graphite anode showed an excellent charging rate capability of 83% for 6C charging and 73% for 10C charging, which is much better than pristine graphite material. Moreover, the multi-channel graphite anode showed a greatly enhanced discharge rate capability compared with pristine graphite. In addition, it showed excellent cyclability with a capacity retention of 85% at 6C after 3000 cycles without any additives. The multi-channel graphite anode is proposed for use in fast chargeable lithium ion batteries for electric vehicles and plug-in hybrid vehicles.

Proceedings ArticleDOI
02 Sep 2018
TL;DR: This paper investigates the performance of a variety of different features used previously for both ASV and PAD and assesses their performance when combined for both tasks, and presents a Gaussian back-end fusion approach to system combination.
Abstract: The vulnerability of automatic speaker verification (ASV) systems to spoofing is widely acknowledged. Recent years have seen an intensification in research efforts to develop spoofing countermeasures, also known as presentation attack detection (PAD) systems. Much of this work has involved the exploration of features that discriminate reliably between bona fide and spoofed speech. While there are grounds to use different front-ends for ASV and PAD systems (they are different tasks) the use of a single front-end has obvious benefits, not least convenience and computational efficiency, especially when ASV and PAD are combined. This paper investigates the performance of a variety of different features used previously for both ASV and PAD and assesses their performance when combined for both tasks. The paper also presents a Gaussian back-end fusion approach to system combination. In contrast to cascaded architectures, it relies upon the modelling of the two-dimensional score distribution stemming from the combination of ASV and PAD in parallel. This approach to combination is shown to generalise particularly well across independent ASVspoof 2017 v2.0 development and evaluation datasets.
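A hedged sketch of the Gaussian back-end fusion idea (the score distributions below are synthetic, not the paper's data): fit one 2-D Gaussian to (ASV score, PAD score) pairs of genuine trials and one to impostor/spoof trials, and fuse the two scores into a single log-likelihood ratio.

```python
import numpy as np

rng = np.random.default_rng(1)

def fit_gaussian(S):
    return S.mean(0), np.cov(S.T)

def log_pdf(S, mu, C):
    d = S - mu
    Ci = np.linalg.inv(C)
    _, logdet = np.linalg.slogdet(C)
    quad = np.einsum("ni,ij,nj->n", d, Ci, d)
    return -0.5 * (quad + logdet + S.shape[1] * np.log(2 * np.pi))

# Synthetic 2-D scores (ASV score, PAD score); a real system supplies these.
tar = rng.normal([2.0, 2.0], 1.0, (500, 2))    # genuine target trials
non = rng.normal([-2.0, 0.0], 1.0, (500, 2))   # zero-effort impostors
spf = rng.normal([1.5, -3.0], 1.0, (500, 2))   # spoofing attacks
neg = np.vstack([non, spf])

mu_t, C_t = fit_gaussian(tar)
mu_n, C_n = fit_gaussian(neg)

def fused_llr(S):
    """Single fused score from the parallel ASV + PAD score pair."""
    return log_pdf(S, mu_t, C_t) - log_pdf(S, mu_n, C_n)

print(fused_llr(tar).mean(), fused_llr(neg).mean())
```

Unlike a cascade (PAD gate followed by ASV), this treats the two scores jointly, so a trial with a high ASV score but a suspicious PAD score is penalised smoothly rather than hard-rejected.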

Journal ArticleDOI
Manabu Arikawa1, Toshiharu Ito1
TL;DR: It is found that full receiver-side multiple-input multiple-output (Rx-MIMO) architecture in three-mode diversity reception improved the performance by 5 dB compared with selection combining (SC) of signals decoded individually in LP modes, and that it mitigated the required transmitted power by 6dB compared with reception with single mode fiber (SMF) coupling.
Abstract: We investigated the performance of mode diversity reception of a polarization-division-multiplexed (PDM) signal with few-mode-fiber (FMF) coupling for high-speed free-space optical communications under atmospheric turbulence. Optical propagation through eigenmodes of a FMF yields coupling between different linearly polarized (LP) modes in orthogonal polarizations, which causes power imbalance and loss of the orthogonality of multiplexed signals within each individual LP mode. Due to this phenomenon, the architecture of mode diversity combining affects the receiver performance. We numerically simulated the power fluctuation coupled to each LP mode after atmospheric propagation and FMF propagation in the condition of an optical downlink from a low-Earth-orbit satellite to the ground. We found that a full receiver-side multiple-input multiple-output (Rx-MIMO) architecture in three-mode diversity reception improved the performance by 5 dB compared with selection combining (SC) of signals decoded individually in LP modes, and that it mitigated the required transmitted power by 6 dB compared with reception with single-mode fiber (SMF) coupling. We also experimentally confirmed, in three-mode diversity reception of a 128 Gb/s PDM quadrature phase-shift keying signal with a diffuser plate as a turbulence emulator, that full Rx-MIMO with adaptive filters could work under severe fading and that it outperformed SC.
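A simplified sketch of why combining all modes beats selecting one, under assumptions far cruder than the paper's model: independent Rayleigh fading per mode and no inter-mode crosstalk (handling crosstalk is exactly what the full Rx-MIMO equalizer adds). It compares the mean SNR of selection combining against maximal-ratio combining over three branches.

```python
import numpy as np

rng = np.random.default_rng(2)

n_trials, n_modes = 10000, 3
# Rayleigh-fading complex gains per mode: a crude stand-in for the power
# fluctuations coupled into each LP mode after turbulent propagation.
h = (rng.normal(size=(n_trials, n_modes))
     + 1j * rng.normal(size=(n_trials, n_modes))) / np.sqrt(2)
noise_var = 1.0
snr_per_mode = np.abs(h) ** 2 / noise_var

snr_sc = snr_per_mode.max(axis=1)    # selection combining: best mode only
snr_mrc = snr_per_mode.sum(axis=1)   # maximal-ratio combining: all modes

gain_db = 10 * np.log10(snr_mrc.mean() / snr_sc.mean())
print(round(gain_db, 2))
```

In this idealised setting the MRC-over-SC gain is about 2 dB; the 5 dB figure reported in the abstract also reflects the recovery of power leaked between LP modes, which this toy model deliberately omits.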

Posted Content
TL;DR: In this paper, a recurrent neural network (RNN) was used to learn time-aware representations of relation types in temporal KGs. The proposed approach is shown to be robust to common challenges in real-world KGs: the sparsity and heterogeneity of temporal expressions.
Abstract: Research on link prediction in knowledge graphs has mainly focused on static multi-relational data. In this work we consider temporal knowledge graphs where relations between entities may only hold for a time interval or a specific point in time. In line with previous work on static knowledge graphs, we propose to address this problem by learning latent entity and relation type representations. To incorporate temporal information, we utilize recurrent neural networks to learn time-aware representations of relation types which can be used in conjunction with existing latent factorization methods. The proposed approach is shown to be robust to common challenges in real-world KGs: the sparsity and heterogeneity of temporal expressions. Experiments show the benefits of our approach on four temporal KGs. The data sets are available under a permissive BSD-3 license.
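A toy sketch of the time-aware relation representation (untrained, with invented entities and dimensions, not the authors' released code): the relation token and the characters of the timestamp are fed as a sequence to a vanilla RNN, whose final hidden state serves as the relation embedding inside a DistMult-style score.

```python
import numpy as np

rng = np.random.default_rng(3)
dim = 8

# Toy vocabulary: relation token plus timestamp digits, so that e.g.
# "bornIn" in 1990 and "bornIn" in 2010 get different representations.
tokens = ["bornIn", "0", "1", "2", "9"]
tok_emb = {t: rng.normal(0, 0.3, dim) for t in tokens}
ent_emb = {e: rng.normal(0, 0.3, dim) for e in ["alice", "paris"]}
Wx = rng.normal(0, 0.3, (dim, dim))
Wh = rng.normal(0, 0.3, (dim, dim))

def relation_embedding(seq):
    """Run a vanilla RNN over the [relation, digit, digit, ...] sequence."""
    h = np.zeros(dim)
    for t in seq:
        h = np.tanh(tok_emb[t] @ Wx + h @ Wh)
    return h

def distmult_score(s, seq, o):
    """DistMult triple score with the time-aware relation embedding."""
    return float(np.sum(ent_emb[s] * relation_embedding(seq) * ent_emb[o]))

s1990 = distmult_score("alice", ["bornIn", "1", "9", "9", "0"], "paris")
s2010 = distmult_score("alice", ["bornIn", "2", "0", "1", "0"], "paris")
print(s1990, s2010)
```

Because the timestamp enters as a character sequence, partially specified or heterogeneous time expressions still yield a valid input, which is the robustness property the abstract claims.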

Proceedings ArticleDOI
Noriaki Tawa1, Toshihide Kuwabara1, Yasushi Maruta1, Masaaki Tanio1, Tomoya Kaneko1 
01 Sep 2018
TL;DR: This paper describes the indoor experimental verification testing results and discussion of a 28 GHz multi-user (MU) multiple-input multiple-output (MIMO) using the 360 element full-digital massive MIMO active antenna system (AAS) for 5G base station applications.
Abstract: This paper describes the indoor experimental verification testing results and discussion of a 28 GHz multi-user (MU) multiple-input multiple-output (MIMO) using the 360 element full-digital massive MIMO active antenna system (AAS) for 5G base station applications. The AAS accommodates 24 channels of 28 GHz transceiver chains and FPGA-based 1-bit direct-IF digital transmitters. The antenna array consists of fully digitally controlled 24 streams of 15-element inline sub-arrays. Four simultaneous beams are generated and the average Effective Isotropically Radiated Power (EIRP) is 59.4 dBm. The throughputs are measured with the maximum-ratio-combining (MRC) method and the zero-forcing (ZF) method by steering control. The achieved user-equipment (UE) throughputs are 0.996 Gbps with a single beam, 0.655 Gbps to 0.857 Gbps with four users by MRC, and 0.673 Gbps to 0.873 Gbps by ZF. The four-user cell throughputs are 2.884 Gbps and 3.134 Gbps by MRC and ZF respectively. The achieved spectrum efficiency is 10.4 bps/Hz/cell using a 300 MHz OFDM signal. To the best of the authors' knowledge, this is the first report of experimental verification of multi-user MIMO using a 28 GHz digital AAS.
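A hedged numerical sketch of the MRC-versus-ZF contrast the measurements reflect (random synthetic channels, not the paper's setup): MRC beams maximise each user's received power but leak energy toward other users, while ZF beams null inter-user interference by inverting the channel.

```python
import numpy as np

rng = np.random.default_rng(4)
n_users, n_ant = 4, 24

# Random downlink channel: rows are users, columns are array elements.
H = (rng.normal(size=(n_users, n_ant))
     + 1j * rng.normal(size=(n_users, n_ant))) / np.sqrt(2)

W_mrc = H.conj().T                                   # maximum-ratio beams
W_zf = H.conj().T @ np.linalg.inv(H @ H.conj().T)    # zero-forcing beams
W_mrc = W_mrc / np.linalg.norm(W_mrc, axis=0)        # unit-power beams
W_zf = W_zf / np.linalg.norm(W_zf, axis=0)

def leakage(W):
    """Mean interference power a user receives from other users' beams."""
    G = np.abs(H @ W) ** 2
    return float((G.sum() - np.trace(G)) / (n_users * (n_users - 1)))

print(leakage(W_mrc), leakage(W_zf))
```

ZF drives the cross-terms to (numerically) zero at the cost of some per-user beamforming gain, which is consistent with ZF edging out MRC in the reported four-user cell throughput.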

Posted Content
Qiongqiong Wang1, Koji Okabe1, Kong Aik Lee1, Hitoshi Yamamoto1, Takafumi Koshinaka1 
TL;DR: The possibility that the attention model might be decoupled from its parent network and assist other speaker embedding networks and even conventional i-vector extractors is considered.
Abstract: This paper presents an experimental study on deep speaker embedding with an attention mechanism that has been found to be a powerful representation learning technique in speaker recognition. In this framework, an attention model works as a frame selector that computes an attention weight for each frame-level feature vector, in accord with which an utterance-level representation is produced at the pooling layer in a speaker embedding network. In general, an attention model is trained together with the speaker embedding network on a single objective function, and thus those two components are tightly bound to one another. In this paper, we consider the possibility that the attention model might be decoupled from its parent network and assist other speaker embedding networks and even conventional i-vector extractors. This possibility is demonstrated through a series of experiments on a NIST Speaker Recognition Evaluation (SRE) task, with 9.0% EER reduction and 3.8% minCprimary reduction when the attention weights are applied to i-vector extraction. Another experiment shows that DNN-based soft voice activity detection (VAD) can be effectively combined with the attention mechanism to yield further reduction of minCprimary by 6.6% and 1.6% in deep speaker embedding and i-vector systems, respectively.
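The pooling-layer mechanics can be sketched as follows, with untrained random parameters and invented dimensions (purely illustrative, not the paper's network): each frame gets a scalar attention score, softmax turns the scores into weights, and the weights produce both a weighted mean and a weighted standard deviation, i.e. attentive statistics pooling.

```python
import numpy as np

rng = np.random.default_rng(5)
n_frames, feat_dim, att_dim = 200, 40, 16

X = rng.normal(size=(n_frames, feat_dim))      # frame-level features
W = rng.normal(0, 0.1, (feat_dim, att_dim))    # attention parameters
v = rng.normal(0, 0.1, att_dim)                # (untrained here)

e = np.tanh(X @ W) @ v                         # scalar score per frame
a = np.exp(e - e.max()); a = a / a.sum()       # softmax attention weights

mu = a @ X                                     # weighted mean
sigma = np.sqrt(np.maximum(a @ (X ** 2) - mu ** 2, 0.0))  # weighted std
embedding_input = np.concatenate([mu, sigma])  # pooled utterance vector

print(embedding_input.shape)
```

Decoupling, as studied in the paper, amounts to computing the weights `a` with one (frozen) attention model and handing them to a different pooling consumer, e.g. weighting the frame posteriors of an i-vector extractor.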

Posted Content
TL;DR: An unsupervised PLDA adaptation algorithm to learn from a small amount of unlabeled in-domain data to close the gap due to the domain mismatch is proposed.
Abstract: State-of-the-art speaker recognition systems comprise an x-vector (or i-vector) speaker embedding front-end followed by a probabilistic linear discriminant analysis (PLDA) back-end. The effectiveness of these components relies on the availability of a large collection of labeled training data. In practice, it is common that the domain (e.g., language, demographics) in which the system is deployed differs from the one in which the system was trained. To close the gap due to the domain mismatch, we propose an unsupervised PLDA adaptation algorithm that learns from a small amount of unlabeled in-domain data. The proposed method was inspired by prior work on a feature-based domain adaptation technique known as correlation alignment (CORAL). We refer to the model-based adaptation technique proposed in this paper as CORAL+. The efficacy of the proposed technique is experimentally validated on the recent NIST 2016 and 2018 Speaker Recognition Evaluation (SRE'16, SRE'18) datasets.
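For context, the feature-based CORAL technique that inspired CORAL+ can be sketched in a few lines (synthetic 2-D data, not the paper's model-based variant): whiten the out-of-domain vectors with their own covariance, then re-color them with the unlabeled in-domain covariance.

```python
import numpy as np

def matrix_power_half(C, inv=False):
    """Symmetric square root (or inverse square root) via eigendecomposition."""
    w, V = np.linalg.eigh(C)
    w = np.clip(w, 1e-10, None)
    p = w ** (-0.5 if inv else 0.5)
    return (V * p) @ V.T

rng = np.random.default_rng(6)
# Synthetic vectors: out-of-domain and in-domain data with different covariances.
A = rng.normal(size=(2, 2)); B = rng.normal(size=(2, 2))
out = rng.normal(size=(5000, 2)) @ A    # plentiful labeled out-of-domain data
ind = rng.normal(size=(500, 2)) @ B     # small unlabeled in-domain set

C_out = np.cov(out.T); C_in = np.cov(ind.T)
# CORAL: whiten with the out-of-domain covariance, re-color with the in-domain one.
T = matrix_power_half(C_out, inv=True) @ matrix_power_half(C_in)
adapted = out @ T

print(np.round(np.cov(adapted.T), 2))
```

After the transform, the second-order statistics of the adapted out-of-domain data match the in-domain covariance exactly; CORAL+ applies the analogous correction inside the PLDA model parameters rather than to the features.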

Journal ArticleDOI
25 May 2018
TL;DR: In this paper, the authors present the security aspects of the 5G system specified by the 3rd Generation Partnership Project (3GPP), especially highlighting the differences to the 4G (LTE) system.
Abstract: 5G is the next generation of mobile communication systems. Although it is still being finalized, the specification is now stable enough to give an overview. This paper presents the security aspects of the 5G system specified by the 3rd Generation Partnership Project (3GPP), especially highlighting the differences from the 4G (LTE) system. The most important 5G security enhancements are access-agnostic primary authentication with home control, security key establishment and management, security for mobility, service based architecture security, inter-network security, privacy, and security for services provided over 5G with secondary authentication.