Yusuke Koda
Researcher at Kyoto University
Publications - 41
Citations - 515
Yusuke Koda is an academic researcher at Kyoto University. His research focuses on computer science and reinforcement learning. He has an h-index of 8 and has co-authored 36 publications receiving 213 citations.
Papers
Journal ArticleDOI
Communication-Efficient Multimodal Split Learning for mmWave Received Power Prediction
Yusuke Koda, Jihong Park, Mehdi Bennis, Koji Yamamoto, Takayuki Nishio, Masahiro Morikura, Kota Nakashima +6 more
TL;DR: In this article, the authors proposed a distributed multimodal machine learning (ML) framework, termed multimodal split learning (MultSL), in which a large neural network (NN) is split into two wirelessly connected segments.
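The split described above can be illustrated with a minimal sketch. All dimensions, weights, and function names here are hypothetical illustrations of the general split-learning idea, not the paper's actual architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions chosen for illustration only.
IMG_DIM, PWR_DIM, LATENT_DIM = 64, 8, 4

# Lower segment (client side): compresses image features into a small
# latent vector, so only LATENT_DIM values cross the wireless link,
# reducing communication cost and limiting what the raw image reveals.
W_lower = rng.standard_normal((IMG_DIM, LATENT_DIM)) * 0.1

def lower_segment(image_features):
    return np.tanh(image_features @ W_lower)  # shape: (LATENT_DIM,)

# Upper segment (server side): fuses the compressed latent with past
# received powers to predict a future received power value.
W_upper = rng.standard_normal((LATENT_DIM + PWR_DIM, 1)) * 0.1

def upper_segment(latent, past_powers):
    fused = np.concatenate([latent, past_powers])
    return float(fused @ W_upper)

image = rng.standard_normal(IMG_DIM)
powers = rng.standard_normal(PWR_DIM)
latent = lower_segment(image)            # transmitted instead of the image
prediction = upper_segment(latent, powers)
print(latent.shape, prediction)
```

The key point is that the uplink carries the 4-dimensional latent rather than the 64-dimensional image features.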
Posted Content
Communication-Efficient Multimodal Split Learning for mmWave Received Power Prediction
TL;DR: A distributed multimodal machine learning (ML) framework, coined MultSL, in which a large neural network is split into two wirelessly connected segments: the upper segment combines images and received powers for future received power prediction, whereas the lower segment compresses its output to reduce communication costs and privacy leakage.
Proceedings ArticleDOI
Differentially Private AirComp Federated Learning with Power Adaptation Harnessing Receiver Noise
TL;DR: This study designs transmit power control across clients, wherein the received signal level is adjusted intentionally to control the noise perturbation levels effectively, thereby achieving the desired privacy level.
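The idea of harnessing receiver noise via power control can be sketched as follows. This is a toy numerical illustration under assumed parameters (noise levels, client count, and variable names are all made up), not the study's actual scheme:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical parameters for illustration.
NOISE_STD = 0.5          # receiver (channel) noise standard deviation
TARGET_NOISE_STD = 2.0   # perturbation on the aggregate needed for privacy
N_CLIENTS, DIM = 5, 10

# Instead of adding artificial noise, lower the transmit power so the
# fixed receiver noise becomes the desired perturbation: after dividing
# by the power-control scale, its std is NOISE_STD / scale.
scale = NOISE_STD / TARGET_NOISE_STD

grads = rng.standard_normal((N_CLIENTS, DIM))
# Over-the-air computation: scaled signals superpose in the channel,
# and receiver noise is added once to the analog sum.
received = scale * grads.sum(axis=0) + rng.normal(0.0, NOISE_STD, DIM)
estimate = received / (scale * N_CLIENTS)   # estimate of the gradient mean

# Effective noise std on the averaged estimate.
effective_noise_std = NOISE_STD / (scale * N_CLIENTS)
print(round(effective_noise_std, 3))
```

Lowering `scale` raises the relative noise level, so the same receiver noise delivers a stronger privacy guarantee at the cost of a noisier aggregate.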
Journal ArticleDOI
Proactive Received Power Prediction Using Machine Learning and Depth Images for mmWave Networks
Takayuki Nishio, Hironao Okamoto, Kota Nakashima, Yusuke Koda, Koji Yamamoto, Masahiro Morikura, Yusuke Asai, Ryo Miyatake +7 more
TL;DR: The simulation and experimental evaluations demonstrated that the proposed mechanism employing convolutional long short-term memory predicted a time series of received power up to 500 ms ahead, with an inference time of less than 3 ms and a root-mean-square error of 3.4 dB.
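The 500 ms-ahead prediction task can be framed as a sliding-window problem. The sketch below uses a synthetic trace and a naive persistence baseline purely to show the data framing; the sampling interval, window length, and signal are assumptions, and the paper's model (convolutional LSTM over depth images) is not reproduced here:

```python
import numpy as np

# Illustrative task framing (all values are made up).
STEP_MS = 100                  # assumed sampling interval
HORIZON_MS = 500               # predict received power this far ahead
HORIZON = HORIZON_MS // STEP_MS
WINDOW = 8                     # past samples fed to the predictor

power = np.sin(np.linspace(0, 6 * np.pi, 200))   # toy received-power trace

# Build (input window, future target) pairs as a predictor would see them.
X, y = [], []
for t in range(WINDOW, len(power) - HORIZON):
    X.append(power[t - WINDOW:t])
    y.append(power[t + HORIZON])
X, y = np.array(X), np.array(y)

# Persistence baseline: predict that the last observed value holds.
pred = X[:, -1]
rmse = np.sqrt(np.mean((pred - y) ** 2))
print(X.shape, round(rmse, 2))
```

A learned model such as a ConvLSTM would replace the persistence baseline and take depth-image windows as `X` instead of scalar power samples.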
Posted Content
Distillation-Based Semi-Supervised Federated Learning for Communication-Efficient Collaborative Training with Non-IID Private Data
TL;DR: In this article, a distillation-based semi-supervised FL (DS-FL) algorithm was proposed to overcome the communication costs that grow with model size in typical frameworks, without compromising model performance.
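The communication saving in distillation-based FL comes from exchanging model outputs on a shared unlabeled dataset rather than model weights. The following is a minimal sketch of that exchange under assumed sizes (client count, dataset size, class count, and parameter count are all hypothetical):

```python
import numpy as np

rng = np.random.default_rng(2)

N_CLIENTS, N_UNLABELED, N_CLASSES = 3, 20, 4
MODEL_PARAMS = 100_000   # exchanging weights would cost this per round

# Each client runs its local model on the shared unlabeled dataset and
# uploads only the predicted class probabilities ("soft labels").
def soft_labels(logits, temperature=2.0):
    z = logits / temperature
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

local_logits = rng.standard_normal((N_CLIENTS, N_UNLABELED, N_CLASSES))
uploads = np.stack([soft_labels(l) for l in local_logits])

# Server aggregates the soft labels into global distillation targets,
# which clients then use to train locally on the unlabeled data.
global_targets = uploads.mean(axis=0)

upload_cost = N_UNLABELED * N_CLASSES   # floats per client per round
print(upload_cost, MODEL_PARAMS)
```

Because the upload scales with the unlabeled dataset rather than the model, the per-round cost stays flat even as the model grows.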