
Showing papers by "NEC published in 2020"


Proceedings Article
21 Jan 2020
TL;DR: This paper demonstrates the power of a simple combination of two common SSL methods: consistency regularization and pseudo-labeling, and shows that FixMatch achieves state-of-the-art performance across a variety of standard semi-supervised learning benchmarks.
Abstract: Semi-supervised learning (SSL) provides an effective means of leveraging unlabeled data to improve a model's performance. In this paper, we demonstrate the power of a simple combination of two common SSL methods: consistency regularization and pseudo-labeling. Our algorithm, FixMatch, first generates pseudo-labels using the model's predictions on weakly-augmented unlabeled images. For a given image, the pseudo-label is only retained if the model produces a high-confidence prediction. The model is then trained to predict the pseudo-label when fed a strongly-augmented version of the same image. Despite its simplicity, we show that FixMatch achieves state-of-the-art performance across a variety of standard semi-supervised learning benchmarks, including 94.93% accuracy on CIFAR-10 with 250 labels and 88.61% accuracy with 40 -- just 4 labels per class. Since FixMatch bears many similarities to existing SSL methods that achieve worse performance, we carry out an extensive ablation study to tease apart the experimental factors that are most important to FixMatch's success. We make our code available at this https URL.
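As a rough illustration of the training signal described in the abstract, the sketch below computes an unlabeled-data loss from confident pseudo-labels. The dummy model, the identity/noise augmentations, and the confidence threshold are hypothetical stand-ins, not the authors' implementation.

import numpy as np

def fixmatch_unlabeled_loss(predict_proba, weak_aug, strong_aug, unlabeled_batch, tau=0.95):
    """Cross-entropy against confident pseudo-labels obtained from weakly-augmented views."""
    losses = []
    for x in unlabeled_batch:
        p_weak = predict_proba(weak_aug(x))        # pseudo-label source
        if p_weak.max() < tau:                     # retain only high-confidence predictions
            continue
        pseudo = int(p_weak.argmax())
        p_strong = predict_proba(strong_aug(x))    # prediction on the strongly-augmented view
        losses.append(-np.log(p_strong[pseudo] + 1e-12))
    return float(np.mean(losses)) if losses else 0.0

# Toy usage with a dummy softmax "model" (illustration only).
rng = np.random.default_rng(0)
def dummy_predict(x):
    logits = np.concatenate(([x.sum()], np.zeros(9)))
    e = np.exp(logits - logits.max())
    return e / e.sum()

batch = [rng.normal(size=4) for _ in range(8)]
print(fixmatch_unlabeled_loss(dummy_predict, lambda x: x, lambda x: x + 0.1, batch, tau=0.3))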

1,313 citations


Posted Content
TL;DR: FixMatch as mentioned in this paper combines consistency regularization and pseudo-labeling to generate pseudo-labels using the model's predictions on weakly-augmented unlabeled images.
Abstract: Semi-supervised learning (SSL) provides an effective means of leveraging unlabeled data to improve a model's performance. In this paper, we demonstrate the power of a simple combination of two common SSL methods: consistency regularization and pseudo-labeling. Our algorithm, FixMatch, first generates pseudo-labels using the model's predictions on weakly-augmented unlabeled images. For a given image, the pseudo-label is only retained if the model produces a high-confidence prediction. The model is then trained to predict the pseudo-label when fed a strongly-augmented version of the same image. Despite its simplicity, we show that FixMatch achieves state-of-the-art performance across a variety of standard semi-supervised learning benchmarks, including 94.93% accuracy on CIFAR-10 with 250 labels and 88.61% accuracy with 40 -- just 4 labels per class. Since FixMatch bears many similarities to existing SSL methods that achieve worse performance, we carry out an extensive ablation study to tease apart the experimental factors that are most important to FixMatch's success. We make our code available at this https URL.

375 citations


Journal ArticleDOI
TL;DR: The ASVspoof challenge as mentioned in this paper was created to foster research on anti-spoofing and to provide common platforms for the assessment and comparison of spoofing countermeasures, and the first edition focused on replay spoofing attacks and countermeasures.

211 citations


Posted Content
TL;DR: PGExplainer adopts a deep neural network to parameterize the generation process of explanations, giving it a natural way to explain multiple instances collectively; it has better generalization ability and can easily be used in an inductive setting.
Abstract: Despite recent progress in Graph Neural Networks (GNNs), explaining predictions made by GNNs remains a challenging open problem. The leading method independently addresses the local explanations (i.e., important subgraph structure and node features) to interpret why a GNN model makes the prediction for a single instance, e.g. a node or a graph. As a result, the explanation generated is painstakingly customized for each instance. The unique explanation interpreting each instance independently is not sufficient to provide a global understanding of the learned GNN model, leading to a lack of generalizability and hindering it from being used in the inductive setting. Besides, as it is designed for explaining a single instance, it is challenging to explain a set of instances naturally (e.g., graphs of a given class). In this study, we address these key challenges and propose PGExplainer, a parameterized explainer for GNNs. PGExplainer adopts a deep neural network to parameterize the generation process of explanations, which enables PGExplainer a natural approach to explaining multiple instances collectively. Compared to the existing work, PGExplainer has better generalization ability and can be utilized in an inductive setting easily. Experiments on both synthetic and real-life datasets show highly competitive performance with up to 24.7\% relative improvement in AUC on explaining graph classification over the leading baseline.
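The following is a hedged sketch of the core idea of a parameterized explainer: one shared network scores every edge from its endpoint embeddings, so the same parameters can explain many instances. The architecture, weights, and scoring function here are illustrative assumptions rather than PGExplainer's actual design.

import numpy as np

def edge_importance(node_emb, edges, W1, b1, w2, b2):
    """node_emb: (N, d) GNN node embeddings; edges: list of (i, j) pairs."""
    scores = {}
    for i, j in edges:
        h = np.concatenate([node_emb[i], node_emb[j]])   # edge feature from its endpoints
        h = np.maximum(W1 @ h + b1, 0.0)                 # shared hidden layer (ReLU)
        logit = w2 @ h + b2
        scores[(i, j)] = 1.0 / (1.0 + np.exp(-logit))    # soft edge-mask value in (0, 1)
    return scores

# Toy usage with random embeddings and weights (illustration only).
rng = np.random.default_rng(1)
emb = rng.normal(size=(4, 8))
edges = [(0, 1), (1, 2), (2, 3)]
W1, b1 = rng.normal(size=(16, 16)), np.zeros(16)
w2, b2 = rng.normal(size=16), 0.0
print(edge_importance(emb, edges, W1, b1, w2, b2))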

192 citations


Journal ArticleDOI
TL;DR: This article presents the first 39-GHz phased-array transceiver (TRX) chipset for fifth-generation new radio (5G NR), consisting of 4 sub-array TRX elements with local-oscillator (LO) phase-shifting architecture and built-in calibration on phase and amplitude.
Abstract: This article presents the first 39-GHz phased-array transceiver (TRX) chipset for fifth-generation new radio (5G NR). The proposed transceiver chipset consists of 4 sub-array TRX elements with local-oscillator (LO) phase-shifting architecture and built-in calibration on phase and amplitude. The calibration scheme is proposed to alleviate phase and amplitude mismatch between each sub-array TRX element, especially for a large-array transceiver system in the base station (BS). Based on LO phase-shifting architecture, the transceiver has a 0.04-dB maximum gain variation over the 360° full tuning range, allowing a constant-gain characteristic during phase calibration. A phase-to-digital converter (PDC) and a high-resolution phase-detection mechanism are proposed for highly accurate phase calibration. The built-in calibration has a measured accuracy of 0.08° rms phase error and 0.01-dB rms amplitude error. Moreover, a pseudo-single-balanced mixer is proposed for LO-feedthrough (LOFT) cancellation and sub-array TRX LO-to-LO isolation. The transceiver is fabricated in standard 65-nm CMOS technology with flip-chip packaging. In a 1-m OTA measurement, the 8TX–8RX phased-array transceiver module supports 5G NR 400-MHz 256-QAM OFDMA modulation with −30.0-dB EVM. The 64-element transceiver has an EIRPmax of 53 dBm. The four-element chip consumes a power of 1.5 W in the TX mode and 0.5 W in the RX mode.

118 citations


Journal ArticleDOI
TL;DR: A neutralized bi-directional technique is introduced in this work to significantly reduce the chip area, enabling compact and low-cost 5G millimeter-wave MIMO systems.
Abstract: This article presents a low-cost and area-efficient 28-GHz CMOS phased-array beamformer chip for 5G millimeter-wave dual-polarized multiple-input multiple-output (MIMO) (DP-MIMO) systems. A neutralized bi-directional technique is introduced in this work to reduce the chip area significantly. With the proposed technique, the same circuit chain is shared completely between the transmitter and the receiver. To further minimize the area, an active bi-directional vector-summing phase shifter is also introduced. Area-efficient and high-resolution active phase shifting could be realized in both TX and RX modes. In measurement, the achieved saturated output power for the TX-mode beamformer is 15.1 dBm. The RX-mode noise figure is 4.2 dB at 28 GHz. To evaluate the over-the-air performance, 16 H+16 V sub-array modules are implemented in this work. Each of the sub-array modules consists of four 4 H+4 V chips. Two sub-array modules in this work are capable of scanning the beam from −50° to +50°. A saturated EIRP of 45.6 dBm is realized by 32 TX-mode beamformers. Within 1-m distance, a maximum SC-mode data rate of 15 Gb/s and 5G new radio downlink packet transmission in 256-QAM could be supported by the module. A 2 × 2 DP-MIMO communication is also demonstrated with two 5G new radio 64-QAM uplink streams. Thanks to the proposed area-efficient bi-directional technique, the required core area for a single element-beamformer is only 0.58 mm². Compact and low-cost 5G millimeter-wave MIMO systems could be realized.

113 citations


Journal ArticleDOI
Tatsuaki Okada1, Tatsuaki Okada2, Tetsuya Fukuhara3, Satoshi Tanaka1, Satoshi Tanaka2, Satoshi Tanaka4, Makoto Taguchi3, Takehiko Arai, Hiroki Senshu5, Naoya Sakatani2, Yuri Shimaki2, Hirohide Demura6, Yoshiko Ogawa6, Kentaro Suko6, Tomohiko Sekiguchi7, Toru Kouyama8, Jun Takita9, Tsuneo Matsunaga10, Takeshi Imamura1, Takehiko Wada2, Sunao Hasegawa2, Jörn Helbert11, Thomas G. Müller12, Axel Hagermann13, Jens Biele11, Matthias Grott11, Maximilian Hamm14, Maximilian Hamm11, Marco Delbo15, Naru Hirata6, Naoyuki Hirata16, Yukio Yamamoto4, Yukio Yamamoto2, Seiji Sugita5, Seiji Sugita1, Noriyuki Namiki4, Kohei Kitazato6, Masahiko Arakawa16, Shogo Tachibana2, Shogo Tachibana1, Hitoshi Ikeda2, Masateru Ishiguro17, Koji Wada5, Chikatoshi Honda6, Rie Honda18, Yoshiaki Ishihara10, Koji Matsumoto4, Moe Matsuoka2, Tatsuhiro Michikami19, Akira Miura2, Tomokatsu Morota1, Hirotomo Noda, Rina Noguchi2, Kazunori Ogawa2, Kazunori Ogawa16, Kei Shirai16, Eri Tatsumi1, Eri Tatsumi20, Hikaru Yabuta21, Yasuhiro Yokota2, Manabu Yamada5, Masanao Abe2, Masanao Abe4, Masahiko Hayakawa2, Takahiro Iwata2, Takahiro Iwata4, Masanobu Ozaki4, Masanobu Ozaki2, Hajime Yano4, Hajime Yano2, Satoshi Hosoda2, Osamu Mori2, Hirotaka Sawada2, Takanobu Shimada2, Hiroshi Takeuchi2, Hiroshi Takeuchi4, Ryudo Tsukizaki2, Atsushi Fujii2, Chikako Hirose2, Shota Kikuchi2, Yuya Mimasu2, Naoko Ogawa2, Go Ono2, T. Takahashi22, T. Takahashi2, Yuto Takei2, Tomohiro Yamaguchi23, Tomohiro Yamaguchi2, Kent Yoshikawa2, Fuyuto Terui2, Takanao Saiki2, Satoru Nakazawa2, Makoto Yoshikawa4, Makoto Yoshikawa2, Sei-ichiro Watanabe24, Sei-ichiro Watanabe2, Yuichi Tsuda2, Yuichi Tsuda4 
26 Mar 2020-Nature
TL;DR: Thermal imaging data obtained from the spacecraft Hayabusa2 reveal that the carbonaceous asteroid 162173 Ryugu is an object of unusually high porosity, which constrains the formation history of Ryugu.

110 citations


Proceedings ArticleDOI
01 Jan 2020
TL;DR: The insight behind the PROVDETECTOR approach is that although stealthy malware attempts to blend into benign processes, the malicious behaviors inevitably interact with the underlying operating system (OS), which will be exposed to and captured by provenance monitoring.
Abstract: To subvert recent advances in perimeter and host security, the attacker community has developed and employed various attack vectors to make a malware much stealthier than before to penetrate the target system and prolong its presence. Advanced malware or “stealthy malware” makes use of various techniques to impersonate or abuse benign applications and legitimate system tools to minimize its footprints in the target system. Thus, it is difficult for traditional detection tools, such as malware scanners, to detect it, as the malware normally does not expose its malicious payload in a file and hides its malicious behaviors among the benign behaviors of the processes. In this paper, we present PROVDETECTOR, a provenance-based approach for detecting stealthy malware. Our insight behind the PROVDETECTOR approach is that although stealthy malware attempts to blend into benign processes, the malicious behaviors inevitably interact with the underlying operating system (OS), which will be exposed to and captured by provenance monitoring. Based on this intuition, PROVDETECTOR first employs a novel selection algorithm to identify possible malicious parts in the OS-level provenance data of a process. It then applies a neural embedding and machine learning pipeline to automatically detect any behavior that deviates significantly from normal behaviors. We evaluate our approach on a large provenance dataset from an enterprise network and demonstrate that it achieves very high detection performance of stealthy malware (an average F1 score of 0.974). Further, we conduct thorough interpretability studies to understand the internals of the learned machine learning models.
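A hedged sketch of the detection stage only is given below: provenance paths are embedded as vectors and scored by their distance to known-benign embeddings. The feature-hashing embedding and the k-nearest-neighbor distance rule are generic stand-ins, not necessarily the components PROVDETECTOR uses.

import numpy as np

def embed_path(path_tokens, dim=32):
    """Toy bag-of-tokens embedding of a provenance path via feature hashing."""
    v = np.zeros(dim)
    for tok in path_tokens:
        v[hash(tok) % dim] += 1.0
    n = np.linalg.norm(v)
    return v / n if n else v

def anomaly_score(candidate, benign_vectors, k=3):
    """Mean distance to the k nearest benign path embeddings."""
    d = np.sort(np.linalg.norm(benign_vectors - candidate, axis=1))
    return float(d[:k].mean())

# Toy usage: a few benign paths and one suspicious-looking path (illustration only).
benign = np.stack([embed_path(["bash", "read:/etc/hosts", "write:/tmp/log"]),
                   embed_path(["bash", "read:/etc/hosts", "exec:ls"]),
                   embed_path(["cron", "exec:backup.sh", "write:/var/backup"])])
suspicious = embed_path(["winword.exe", "exec:powershell", "connect:203.0.113.7:4444"])
print(anomaly_score(suspicious, benign))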

86 citations


Journal ArticleDOI
TL;DR: The first field trial of distributed fiber optical sensing (DFOS) and high-speed communication, comprising a coexisting system, over an operational telecom network is presented, with positive results for vehicle speed and vehicle density sensing.
Abstract: To the best of our knowledge, we present the first field trial of distributed fiber optical sensing (DFOS) and high-speed communication, comprising a coexisting system, over an operational telecom network. Using probabilistic-shaped (PS) DP-144QAM, a 36.8-Tb/s capacity with an 8.28-b/s/Hz spectral efficiency (SE) (48-Gbaud channels, 50-GHz channel spacing) was achieved. Employing DFOS technology, road traffic, i.e., vehicle speed and vehicle density, were sensed with 98.5% and 94.5% accuracies, respectively, as compared to video analytics. Additionally, road conditions, i.e., roughness level, were sensed with >85% accuracy via a machine-learning-based classifier.

72 citations


Proceedings ArticleDOI
01 Mar 2020
TL;DR: The proposed multi-timescale model makes future and past predictions at different timescales for a given input pose trajectory and outperforms existing methods at detecting abnormal activities.
Abstract: A classical approach to abnormal activity detection is to learn a representation for normal activities from the training data and then use this learned representation to detect abnormal activities while testing. Typically, the methods based on this approach operate at a fixed timescale — either a single time-instant (e.g. frame-based) or a constant time duration (e.g. video-clip based). But human abnormal activities can take place at different timescales. For example, jumping is a short-term anomaly and loitering is a long-term anomaly in a surveillance scenario. A single and pre-defined timescale is not enough to capture the wide range of anomalies occurring with different time duration. In this paper, we propose a multi-timescale model to capture the temporal dynamics at different timescales. In particular, the proposed model makes future and past predictions at different timescales for a given input pose trajectory. The model is multi-layered where intermediate layers are responsible to generate predictions corresponding to different timescales. These predictions are combined to detect abnormal activities. In addition, we also introduce a single-camera abnormal activity dataset for research use that contains 483,566 annotated frames. Our experiments show that the proposed model can capture the anomalies of different time duration and outperforms existing methods.
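A minimal sketch of the scoring idea follows: prediction errors from models operating at several timescales are combined into one anomaly score. The per-timescale predictors and the max-combination rule are assumptions for illustration; the paper's multi-layered prediction architecture is more involved.

import numpy as np

def anomaly_score(pose_traj, predictors, horizons):
    """pose_traj: (T, d) trajectory; predictors[i] forecasts the next horizons[i] steps."""
    errors = []
    for predict, h in zip(predictors, horizons):
        pred = predict(pose_traj[:-h])                  # forecast the last h steps from the rest
        err = np.mean((pred - pose_traj[-h:]) ** 2)     # prediction error at this timescale
        errors.append(err)
    return max(errors)                                  # an anomaly at any timescale raises the score

# Toy usage: "predictors" that simply repeat the last observed pose (illustration only).
rng = np.random.default_rng(2)
traj = rng.normal(size=(30, 6))
preds = [lambda x, h=h: np.repeat(x[-1:], h, axis=0) for h in (1, 5, 10)]
print(anomaly_score(traj, preds, (1, 5, 10)))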

64 citations


Journal ArticleDOI
TL;DR: This paper presents several new extensions to the tandem detection cost function (t-DCF), a recent risk-based approach to assess the reliability of spoofing CMs deployed in tandem with an ASV system.
Abstract: Recent years have seen growing efforts to develop spoofing countermeasures (CMs) to protect automatic speaker verification (ASV) systems from being deceived by manipulated or artificial inputs. The reliability of spoofing CMs is typically gauged using the equal error rate (EER) metric. The primitive EER fails to reflect application requirements and the impact of spoofing and CMs upon ASV and its use as a primary metric in traditional ASV research has long been abandoned in favour of risk-based approaches to assessment. This paper presents several new extensions to the tandem detection cost function (t-DCF), a recent risk-based approach to assess the reliability of spoofing CMs deployed in tandem with an ASV system. Extensions include a simplified version of the t-DCF with fewer parameters, an analysis of a special case for a fixed ASV system, simulations which give original insights into its interpretation and new analyses using the ASVspoof 2019 database. It is hoped that adoption of the t-DCF for the CM assessment will help to foster closer collaboration between the anti-spoofing and ASV research communities.
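For orientation only, the sketch below computes an illustrative tandem cost of the general kind discussed above: a weighted combination of countermeasure (CM) miss and false-alarm rates whose weights fold in assumed priors, error costs, and ASV error rates. The parameter values and the weighting are assumptions, not the paper's t-DCF definition.

def tandem_cost(p_miss_cm, p_fa_cm,
                p_miss_asv=0.01, p_fa_spoof_asv=0.05,
                pi_target=0.9, pi_spoof=0.05,
                cost_miss=1.0, cost_fa=10.0):
    # Missing a bona fide target costs in proportion to the target prior and miss cost;
    # accepting a spoof costs in proportion to the spoof prior, the false-alarm cost, and
    # how often the downstream ASV would also accept it (all values assumed here).
    miss_term = pi_target * cost_miss * (p_miss_cm + (1.0 - p_miss_cm) * p_miss_asv)
    fa_term = pi_spoof * cost_fa * p_fa_cm * (1.0 - p_fa_spoof_asv)
    return miss_term + fa_term

# Example: sweep a few CM operating points (illustration only).
for p_miss, p_fa in [(0.01, 0.20), (0.05, 0.05), (0.20, 0.01)]:
    print(p_miss, p_fa, round(tandem_cost(p_miss, p_fa), 4))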

Proceedings Article
Takashi Ishida1, Ikko Yamane1, Tomoya Sakai2, Gang Niu, Masashi Sugiyama1 
12 Jul 2020
TL;DR: Flooding is proposed: it intentionally prevents further reduction of the training loss once it reaches a reasonably small value, called the flood level, and is compatible with any stochastic optimizer and other regularizers.
Abstract: Overparameterized deep networks have the capacity to memorize training data with zero \emph{training error}. Even after memorization, the \emph{training loss} continues to approach zero, making the model overconfident and the test performance degraded. Since existing regularizers do not directly aim to avoid zero training loss, it is hard to tune their hyperparameters in order to maintain a fixed/preset level of training loss. We propose a direct solution called \emph{flooding} that intentionally prevents further reduction of the training loss when it reaches a reasonably small value, which we call the \emph{flood level}. Our approach makes the loss float around the flood level by doing mini-batched gradient descent as usual but gradient ascent if the training loss is below the flood level. This can be implemented with one line of code and is compatible with any stochastic optimizer and other regularizers. With flooding, the model will continue to "random walk" with the same non-zero training loss, and we expect it to drift into an area with a flat loss landscape that leads to better generalization. We experimentally show that flooding improves performance and, as a byproduct, induces a double descent curve of the test loss.
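As the abstract notes, flooding is essentially a one-line change to the training loss: when the loss would fall below the flood level, its gradient flips sign so the optimizer ascends instead. A minimal sketch (the flood-level value below is arbitrary):

def flooded_loss(training_loss, flood_level):
    """Keeps the effective loss at or above the flood level; in an autodiff framework,
    the gradient reverses sign whenever the raw loss is below the flood level."""
    return abs(training_loss - flood_level) + flood_level

for raw in (0.30, 0.10, 0.02):            # e.g. flood level b = 0.05
    print(raw, "->", flooded_loss(raw, 0.05))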

Journal ArticleDOI
TL;DR: Iron sulfide nanoclusters enable on-demand and local generation of nitric oxide, an important lipophilic messenger in the brain, allowing the modulation and investigation of nitric oxide-triggered neural signalling events.
Abstract: Understanding the function of nitric oxide, a lipophilic messenger in physiological processes across nervous, cardiovascular and immune systems, is currently impeded by the dearth of tools to deliver this gaseous molecule in situ to specific cells. To address this need, we have developed iron sulfide nanoclusters that catalyse nitric oxide generation from benign sodium nitrite in the presence of modest electric fields. Locally generated nitric oxide activates the nitric oxide-sensitive cation channel, transient receptor potential vanilloid family member 1 (TRPV1), and the latency of TRPV1-mediated Ca2+ responses can be controlled by varying the applied voltage. Integrating these electrocatalytic nanoclusters with multimaterial fibres allows nitric oxide-mediated neuronal interrogation in vivo. The in situ generation of nitric oxide in the ventral tegmental area with the electrocatalytic fibres evoked neuronal excitation in the targeted brain region and its excitatory projections. This nitric oxide generation platform may advance mechanistic studies of the role of nitric oxide in the nervous system and other organs. Iron sulfide nanoclusters enable on-demand and local generation of nitric oxide, an important lipophilic messenger in the brain, allowing the modulation and investigation of nitric oxide-triggered neural signalling events.

Proceedings ArticleDOI
08 Mar 2020
TL;DR: An experimental implementation of a photonic neural network for fiber nonlinearity compensation over a 10,080 km trans-Pacific transmission link achieves a Q-factor improvement of 0.51 dB, only 0.06 dB lower than numerical simulations.
Abstract: We demonstrate the experimental implementation of a photonic neural network for fiber nonlinearity compensation over a 10,080 km trans-Pacific transmission link. A Q-factor improvement of 0.51 dB is achieved, only 0.06 dB lower than in numerical simulations.

Journal ArticleDOI
TL;DR: The capabilities of the FIWARE framework, which is transitioning from a research to a commercial level, are introduced, and two examples are given showing that FIWARE still maintains openness to innovation: semantics and privacy.
Abstract: The ever-increasing acceleration of technology evolution in all fields is rapidly changing the architectures of data-driven systems towards the Internet-of-Things concept. Many general and specific-purpose IoT platforms are already available. This article introduces the capabilities of the FIWARE framework that is transitioning from a research to a commercial level. We base our exposition on the analysis of three real-world use cases (global IoT market, analytics in smart cities, and IoT augmented autonomous driving) and their requirements that are addressed with the usage of FIWARE. We highlight the lessons learnt during the design, implementation and deployment phases for each of the use cases and their critical issues. Finally we give two examples showing that FIWARE still maintains openness to innovation: semantics and privacy.

Journal ArticleDOI
05 Jun 2020
TL;DR: An image-based autonomous navigation scheme of the Hayabusa2 mission using artificial landmarks named target markers (TMs) is described, and the in-flight results of the first touchdown and its rehearsal are shown.
Abstract: Hayabusa2 is an asteroid sample return mission carried out by the Japan Aerospace Exploration Agency. The spacecraft was launched in 2014 and arrived at the target asteroid Ryugu on June 27, 2018. During the 1.5-year proximity phase, several critical operations (including two landing/sampling operations) were successfully performed. They were based on autonomous image-based descent and landing techniques. This paper describes an image-based autonomous navigation scheme of the Hayabusa2 mission using artificial landmarks named target markers (TMs). Its basic algorithm, and the in-flight results of the first touchdown and its rehearsal, are shown.

Proceedings Article
09 Nov 2020
TL;DR: PGExplainer, as presented in this paper, adopts a deep neural network to parameterize the generation process of explanations, giving it a natural way to explain multiple instances collectively, and it can easily be utilized in an inductive setting.
Abstract: Despite recent progress in Graph Neural Networks (GNNs), explaining predictions made by GNNs remains a challenging open problem. The leading method independently addresses the local explanations (i.e., important subgraph structure and node features) to interpret why a GNN model makes the prediction for a single instance, e.g. a node or a graph. As a result, the explanation generated is painstakingly customized for each instance. The unique explanation interpreting each instance independently is not sufficient to provide a global understanding of the learned GNN model, leading to a lack of generalizability and hindering it from being used in the inductive setting. Besides, as it is designed for explaining a single instance, it is challenging to explain a set of instances naturally (e.g., graphs of a given class). In this study, we address these key challenges and propose PGExplainer, a parameterized explainer for GNNs. PGExplainer adopts a deep neural network to parameterize the generation process of explanations, which enables PGExplainer a natural approach to explaining multiple instances collectively. Compared to the existing work, PGExplainer has better generalization ability and can be utilized in an inductive setting easily. Experiments on both synthetic and real-life datasets show highly competitive performance with up to 24.7\% relative improvement in AUC on explaining graph classification over the leading baseline.

Journal ArticleDOI
TL;DR: In this article, the authors present an extended network security model for MulVAL that considers the physical network topology, supports short-range communication protocols, models vulnerabilities in the design of network protocols, and models specific industrial communication architectures.
Abstract: An attack graph is a method used to enumerate the possible paths that an attacker can take in the organizational network. MulVAL is a known open-source framework used to automatically generate attack graphs. MulVAL's default modeling has two main shortcomings. First, it lacks the ability to represent network protocol vulnerabilities, and thus it cannot be used to model common network attacks, such as ARP poisoning. Second, it does not support advanced types of communication, such as wireless and bus communication, and thus it cannot be used to model cyber-attacks on networks that include IoT devices or industrial components. In this paper, we present an extended network security model for MulVAL that: (1) considers the physical network topology, (2) supports short-range communication protocols, (3) models vulnerabilities in the design of network protocols, and (4) models specific industrial communication architectures. Using the proposed extensions, we were able to model multiple attack techniques including: spoofing, man-in-the-middle, and denial of service attacks, as well as attacks on advanced types of communication. We demonstrate the proposed model in a testbed which implements a simplified network architecture comprised of both IT and industrial components
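As a generic illustration of what an attack graph enables, the sketch below enumerates attack paths over a small directed graph with depth-first search. The example facts (ARP poisoning enabling a man-in-the-middle position) are hypothetical, and this is not MulVAL's Datalog-based reasoning engine.

def attack_paths(graph, start, goal, path=None):
    """Yield every cycle-free path from start to goal in a directed attack graph."""
    path = (path or []) + [start]
    if start == goal:
        yield path
        return
    for nxt in graph.get(start, []):
        if nxt not in path:                      # avoid revisiting nodes (cycles)
            yield from attack_paths(graph, nxt, goal, path)

# Hypothetical example graph (illustration only).
graph = {
    "attacker_on_lan": ["arp_poisoning"],
    "arp_poisoning": ["man_in_the_middle"],
    "man_in_the_middle": ["credential_theft", "denial_of_service"],
    "credential_theft": ["control_plc"],
}
for p in attack_paths(graph, "attacker_on_lan", "control_plc"):
    print(" -> ".join(p))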

Journal ArticleDOI
01 Jun 2020
TL;DR: In this article, a Monte Carlo analysis was used to determine the parameters of the operational design for the final descent and touchdown sequence of the Hayabusa2 asteroid explorer mission, and the dynamical modeling of the spacecraft during touchdown is discussed.
Abstract: The Hayabusa2 asteroid explorer mission focuses principally on the touchdown and sampling on near-Earth asteroid 162173 Ryugu. Hayabusa2 successfully landed on its surface and ejected a projectile for sample collection on February 22, 2019. Hayabusa2 later landed near a crater formed by an impactor and executed the sampling sequence again on July 11, 2019. For a successful mission, a thorough understanding and evaluation of spacecraft dynamics during touchdown were crucial. The most challenging aspect of this study was the modeling of such spacecraft phenomena as the dynamics of landing on a surface with unknown properties. In particular, a Monte Carlo analysis was used to determine the parameters of the operational design for the final descent and touchdown sequence. This paper discusses the dynamical modeling of the simulation during the touchdown of Hayabusa2.

Patent
Hisashi Futaki1, Sadafuku Hayashi1
09 Jul 2020
TL;DR: A radio terminal is configured to transmit data using a first communication architecture type in response to a request for specific data transmission that occurs when the radio terminal has already been configured by a network to use a second communication architecture type.
Abstract: A radio terminal (1) is configured to transmit data using a first communication architecture type, in response to an occurrence of a request for specific data transmission when the radio terminal (1) has already been configured by a network (3) to use a second communication architecture type. This contributes to, for example, facilitating effective communication according to the first communication architecture type, which involves data transmission over a control plane, when the radio terminal has already been configured by the network to use the second communication architecture type, which involves suspension and resumption of an RRC connection.

Journal ArticleDOI
TL;DR: This paper overviews the progress of silicon photonics from four points reflecting recent advances: CMOS-based silicon photonic platform technologies, applications to optical transceivers in the data-com network, applications to multi-port optical switches in the telecom network, and applications to OPAs in LiDAR systems.
Abstract: In recent decades, silicon photonics has attracted much attention in telecom and data-com areas. Constituted of high refractive-index contrast waveguides on silicon-on-insulator (SOI), a variety of integrated photonic passive and active devices have been implemented, supported by excellent optical properties of silicon in the mid-infrared spectrum. The main advantage of silicon photonics is the ability to use complementary metal oxide semiconductor (CMOS) process-compatible fabrication technologies, resulting in high-volume production at low cost. On the other hand, explosively growing traffic in telecom, data center and high-performance computing demands the data flow to have high speed, wide bandwidth, low cost, and high energy-efficiency, as well as the photonics and electronics to be integrated for ultra-fast data transfer in networks. In practical applications, silicon photonics started with optical interconnect transceivers in the data-com first, and has now been extended to innovative applications such as multi-port optical switches in the telecom network node and integrated optical phased arrays (OPAs) in light detection and ranging (LiDAR). This paper overviews the progress of silicon photonics from four points reflecting the recent advances mentioned above: CMOS-based silicon photonic platform technologies, applications to optical transceivers in the data-com network, applications to multi-port optical switches in the telecom network, and applications to OPAs in LiDAR systems.

Journal ArticleDOI
TL;DR: In this article, the authors present several extensions to the tandem detection cost function (t-DCF), a recent risk-based approach to assess the reliability of spoofing CMs deployed in tandem with an ASV system.
Abstract: Recent years have seen growing efforts to develop spoofing countermeasures (CMs) to protect automatic speaker verification (ASV) systems from being deceived by manipulated or artificial inputs. The reliability of spoofing CMs is typically gauged using the equal error rate (EER) metric. The primitive EER fails to reflect application requirements and the impact of spoofing and CMs upon ASV and its use as a primary metric in traditional ASV research has long been abandoned in favour of risk-based approaches to assessment. This paper presents several new extensions to the tandem detection cost function (t-DCF), a recent risk-based approach to assess the reliability of spoofing CMs deployed in tandem with an ASV system. Extensions include a simplified version of the t-DCF with fewer parameters, an analysis of a special case for a fixed ASV system, simulations which give original insights into its interpretation and new analyses using the ASVspoof 2019 database. It is hoped that adoption of the t-DCF for the CM assessment will help to foster closer collaboration between the anti-spoofing and ASV research communities.

Book ChapterDOI
01 Jul 2020
TL;DR: This paper presents a design of authenticated encryption focusing on minimizing the implementation size, i.e., hardware gates or working memory on software, and shows that the proposed scheme, called COFB (for COmbined FeedBack), has a good performance and the smallest footprint among all known blockcipher-based AE.
Abstract: This paper presents a design of authenticated encryption (AE) focusing on minimizing the implementation size, i.e., hardware gates or working memory on software. The scheme is called COFB, for COmbined FeedBack. COFB uses an n-bit blockcipher as the underlying primitive, and relies on the use of a nonce for security. In addition to the state required for executing the underlying blockcipher, COFB needs only n/2 bits of state as a mask. To date, for all existing constructions in which masks have been applied, at least n-bit masks have been used. Thus, we have shown the possibility of reducing the size of a mask without degrading the security level much. Moreover, it requires one blockcipher call to process one input block. We show COFB is provably secure up to O(2^{n/2}/n) queries, which is almost up to the standard birthday bound. We also present our hardware implementation results. Experimental implementation results suggest that our proposal has a good performance and the smallest footprint among all known blockcipher-based AE.

Posted Content
TL;DR: PEXESO, a framework for joinable table discovery in data lakes, is proposed; it identifies substantially more tables than equi-joins and outperforms other similarity-based options, and its join results are useful for data enrichment in machine learning tasks.
Abstract: Finding joinable tables in data lakes is a key procedure in many applications such as data integration, data augmentation, data analysis, and data markets. Traditional approaches that find equi-joinable tables are unable to deal with misspellings and different formats, nor do they capture any semantic joins. In this paper, we propose PEXESO, a framework for joinable table discovery in data lakes. We embed textual values as high-dimensional vectors and join columns under similarity predicates on high-dimensional vectors, thereby addressing the limitations of equi-join approaches and identifying more meaningful results. To efficiently find joinable tables with similarity, we propose a block-and-verify method that utilizes pivot-based filtering. A partitioning technique is developed to cope with the case when the data lake is large and the index cannot fit in main memory. An experimental evaluation on real datasets shows that our solution identifies substantially more tables than equi-joins and outperforms other similarity-based options, and the join results are useful in data enrichment for machine learning tasks. The experiments also demonstrate the efficiency of the proposed method.
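The block-and-verify step relies on pivot-based filtering, a standard metric-space technique: if |d(q, p) − d(x, p)| > τ for some pivot p, the triangle inequality guarantees d(q, x) > τ, so x can be pruned without computing d(q, x). A minimal sketch, assuming Euclidean distance and randomly chosen pivots (not PEXESO's actual index):

import numpy as np

def pivot_filter(query, vectors, pivots, tau):
    d_q = np.linalg.norm(pivots - query, axis=1)                             # query-to-pivot distances
    d_x = np.linalg.norm(vectors[:, None, :] - pivots[None, :, :], axis=2)   # precomputable offline
    lower_bound = np.abs(d_x - d_q).max(axis=1)                              # best lower bound over pivots
    survivors = np.where(lower_bound <= tau)[0]                              # only these need verification
    verified = [i for i in survivors if np.linalg.norm(vectors[i] - query) <= tau]
    return verified, len(vectors) - len(survivors)                           # matches, pruned count

# Toy usage with random vectors (illustration only).
rng = np.random.default_rng(3)
vecs = rng.normal(size=(1000, 16))
piv = vecs[rng.choice(len(vecs), 4, replace=False)]
matches, pruned = pivot_filter(rng.normal(size=16), vecs, piv, tau=4.0)
print(len(matches), "matches;", pruned, "candidates pruned without a full distance computation")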

Proceedings ArticleDOI
25 Oct 2020
TL;DR: A dynamic-margin softmax loss is proposed for training a deep speaker embedding neural network; it dynamically sets the margin of each training sample commensurate with the cosine angle of that sample, hence the name dynamic-additive-margin softmax (DAM-Softmax) loss.
Abstract: We propose a dynamic-margin softmax loss for the training of a deep speaker embedding neural network. Our proposal is inspired by the additive-margin softmax (AM-Softmax) loss reported earlier. In AM-Softmax loss, a constant margin is used for all training samples. However, the angle between the feature vector and the ground-truth class center is rarely the same for all samples. Furthermore, the angle also changes during training. Thus, it is more reasonable to set a dynamic margin for each training sample. In this paper, we propose to dynamically set the margin of each training sample commensurate with the cosine angle of that sample, hence the name dynamic-additive-margin softmax (DAM-Softmax) loss. More specifically, the smaller the cosine angle is, the larger the margin between the training sample and the corresponding class in the feature space should be to promote intra-class compactness. Experimental results show that the proposed DAM-Softmax loss achieves state-of-the-art performance on the VoxCeleb dataset by 1.94% in equal error rate (EER). In addition, our method also outperforms AM-Softmax loss when evaluated on the Speakers in the Wild (SITW) corpus.
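A hedged sketch of the loss idea is shown below: the additive margin on the target class grows as the sample's cosine similarity to its class centre shrinks. The linear margin schedule and the scale factor are assumptions for illustration, since the abstract does not give the exact margin function.

import numpy as np

def dam_softmax_loss(cosines, label, base_margin=0.2, scale=30.0):
    """cosines: (C,) cosine similarities between one embedding and the C class centres."""
    cos_y = cosines[label]
    margin = base_margin * (1.0 - cos_y)          # smaller cosine -> larger margin (assumed form)
    logits = scale * cosines
    logits[label] = scale * (cos_y - margin)      # additive margin applied to the target class only
    logits = logits - logits.max()                # numerical stability
    probs = np.exp(logits) / np.exp(logits).sum()
    return -np.log(probs[label] + 1e-12)

cos = np.array([0.75, 0.30, 0.10, -0.05])         # toy cosine scores; class 0 is the ground truth
print(dam_softmax_loss(cos, 0))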

Journal ArticleDOI
TL;DR: It is shown that the ETDNN obtained after pruning becomes so sparse that its complexity is comparable to that of conventional DPDs such as the memory polynomial (MP) and generalized memory polynomial (GMP), while the degradation in performance due to the pruning is negligible.
Abstract: This paper proposes an efficient neural-network-based digital predistortion (DPD), named envelope time-delay neural network (ETDNN) DPD. The method complies with the physical characteristics of radio-frequency (RF) power amplifiers (PAs) and uses a more compact DPD model than the conventional neural-network-based DPD. Additionally, a structured pruning technique is presented and used to reduce the computational complexity. It is shown that the resulting ETDNN obtained after applying pruning becomes so sparse that its complexity is comparable to that of conventional DPDs such as the memory polynomial (MP) and generalized memory polynomial (GMP), while the degradation in performance due to the pruning is negligible. In an experiment on a 3.5-GHz GaN Doherty power amplifier (PA), our method with the proposed pruning had only one-thirtieth the computational complexity of the conventional neural-network-based DPD for the same adjacent channel leakage ratio (ACLR). Moreover, compared with memory-polynomial-based digital predistortion, our method with the proposed pruning achieved a 7-dB improvement in ACLR, despite having lower computational complexity.
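A minimal sketch of the structured-pruning idea follows: whole hidden units whose weight rows have small norm are dropped, shrinking both adjacent weight matrices consistently. The magnitude criterion and the toy layer sizes are assumptions for illustration, not the paper's pruning method.

import numpy as np

def prune_hidden_units(W_in, W_out, keep_ratio=0.3):
    """W_in: (H, D) input weights; W_out: (O, H) output weights; prune along the H axis."""
    norms = np.linalg.norm(W_in, axis=1)                          # per-unit importance proxy
    keep = np.argsort(norms)[-max(1, int(keep_ratio * len(norms))):]
    return W_in[keep], W_out[:, keep]                             # both matrices stay consistent

# Toy usage (illustration only).
rng = np.random.default_rng(4)
W1, W2 = rng.normal(size=(64, 10)), rng.normal(size=(2, 64))
W1_p, W2_p = prune_hidden_units(W1, W2)
print(W1.size + W2.size, "->", W1_p.size + W2_p.size, "weights after structured pruning")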

Posted Content
Takashi Ishida1, Ikko Yamane1, Tomoya Sakai2, Gang Niu, Masashi Sugiyama1 
TL;DR: The authors propose a method called "flooding" that intentionally prevents further reduction of the training loss when it reaches a reasonably small value, which they call the "flood level", and experimentally show that flooding improves performance and induces a double descent curve of the test loss.
Abstract: Overparameterized deep networks have the capacity to memorize training data with zero \emph{training error}. Even after memorization, the \emph{training loss} continues to approach zero, making the model overconfident and the test performance degraded. Since existing regularizers do not directly aim to avoid zero training loss, it is hard to tune their hyperparameters in order to maintain a fixed/preset level of training loss. We propose a direct solution called \emph{flooding} that intentionally prevents further reduction of the training loss when it reaches a reasonably small value, which we call the \emph{flood level}. Our approach makes the loss float around the flood level by doing mini-batched gradient descent as usual but gradient ascent if the training loss is below the flood level. This can be implemented with one line of code and is compatible with any stochastic optimizer and other regularizers. With flooding, the model will continue to "random walk" with the same non-zero training loss, and we expect it to drift into an area with a flat loss landscape that leads to better generalization. We experimentally show that flooding improves performance and, as a byproduct, induces a double descent curve of the test loss.

Journal ArticleDOI
TL;DR: A detailed description and analysis of the design methodology, data augmentation, bandwidth extension, multi-head attention, PLDA adaptation, and other components that have contributed to good performance in NEC-TT’s SRE’18 results are provided.

Journal ArticleDOI
01 Dec 2020
TL;DR: In this paper, strategies and technical details of the guidance, navigation, and control systems are presented, and the flight results prove that the performance of the systems was satisfactory and largely contributed to the success of the operation.
Abstract: Hayabusa2 is a Japanese sample return mission from the asteroid Ryugu. The Hayabusa2 spacecraft was launched on 3 December 2014 and arrived at Ryugu on 27 June 2018. It stayed there until December 2019 for in situ observation and soil sample collection, and will return to the Earth in November or December 2020. During the stay, the spacecraft performed the first touchdown operation on 22 February 2019 and the second touchdown on 11 July 2019, which were both completed successfully. Because the surface of Ryugu is rough and covered with boulders, it was not easy to find target areas for touchdown. There were several technical challenges to overcome, including demanding guidance, navigation, and control accuracy, to realize the touchdown operation. In this paper, strategies and technical details of the guidance, navigation, and control systems are presented. The flight results prove that the performance of the systems was satisfactory and largely contributed to the success of the operation.

Book ChapterDOI
21 Oct 2020
TL;DR: In this paper, the authors present WARP, a lightweight 128-bit block cipher with a 32-nibble Type-2 Generalized Feistel Network with a permutation over nibbles designed to optimize the security and efficiency.
Abstract: In this article, we present WARP, a lightweight 128-bit block cipher with a 128-bit key. It aims at small-footprint circuit in the field of 128-bit block ciphers, possibly for a unified encryption and decryption functionality. The overall structure of WARP is a variant of 32-nibble Type-2 Generalized Feistel Network (GFN), with a permutation over nibbles designed to optimize the security and efficiency. We conduct a thorough security analysis and report comprehensive hardware and software implementation results. Our hardware results show that WARP is the smallest 128-bit block cipher for most of typical hardware implementation strategies. A serialized circuit of WARP achieves around 800 Gate Equivalents (GEs), which is much smaller than previous state-of-the-art implementations of lightweight 128-bit ciphers (they need more than 1, 000 GEs). While our primary metric is hardware size, WARP also enjoys several other features, most notably low energy consumption. This is somewhat surprising, since GFN generally needs more rounds than substitution permutation network (SPN), and thus GFN has been considered to be less advantageous in this regard. We show a multi-round implementation of WARP is quite low-energy. Moreover, WARP also performs well on software: our SIMD implementation is quite competitive to known hardware-oriented 128-bit lightweight ciphers for long input, and even much better for small inputs due to the small number of parallel blocks. On 8-bit microcontrollers, the results of our assembly implementations show that WARP is flexible to achieve various performance characteristics.