Yuma Kinoshita
Researcher at Tokyo Metropolitan University
Publications - 93
Citations - 788
Yuma Kinoshita is an academic researcher from Tokyo Metropolitan University. The author has contributed to research on topics including image fusion and hue. The author has an h-index of 11, and has co-authored 84 publications receiving 482 citations.
Papers
Journal ArticleDOI
Pixel-Based Image Encryption Without Key Management for Privacy-Preserving Deep Neural Networks
TL;DR: A novel pixel-based image encryption method that maintains important features of original images and is robust against ciphertext-only attacks (COAs) and data augmentation in the encrypted domain is proposed for privacy-preserving DNNs.
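Pixel-based encryption schemes of this kind are typically built from simple per-pixel operations. As a minimal, hypothetical sketch (not the paper's exact method), the example below combines two operations commonly used in this line of work: a negative-positive transformation applied to a random subset of pixels, and an independent color-channel shuffle at each pixel. The function name and parameters are illustrative assumptions.

```python
import numpy as np

def pixelwise_encrypt(img, seed=42):
    """Illustrative pixel-based image encryption: negative-positive
    transformation on a random subset of pixels, plus independent
    color-channel shuffling at each pixel. A sketch of the general
    technique, not the paper's exact scheme."""
    rng = np.random.default_rng(seed)
    h, w, c = img.shape
    out = img.astype(np.uint8).copy()
    # Negative-positive transform: invert intensities of random pixels.
    invert = rng.integers(0, 2, size=(h, w, 1), dtype=np.uint8).astype(bool)
    out = np.where(invert, 255 - out, out)
    # Shuffle the color channels independently at each pixel location.
    perms = rng.permuted(np.tile(np.arange(c), (h, w, 1)), axis=2)
    out = np.take_along_axis(out, perms, axis=2)
    return out
```

Because each pixel is transformed independently, geometric data augmentation (e.g. flips and crops) commutes with the encryption, which is the property the abstract highlights for training DNNs in the encrypted domain.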
Proceedings ArticleDOI
Privacy-Preserving Deep Neural Networks with Pixel-Based Image Encryption Considering Data Augmentation in the Encrypted Domain
TL;DR: A novel pixel-based image encryption method is proposed for privacy-preserving DNNs. It is demonstrated that conventional privacy-preserving machine learning methods cannot be applied to data augmentation in the encrypted domain, and that the proposed method outperforms them in terms of classification accuracy.
Journal ArticleDOI
Scene Segmentation-Based Luminance Adjustment for Multi-Exposure Image Fusion.
Yuma Kinoshita, Hitoshi Kiya +1 more
TL;DR: In this article, a novel luminance adjustment method for multi-exposure image fusion is proposed, together with two scene segmentation approaches based on luminance distribution.
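The summary above can be illustrated with a toy version of the idea: segment the scene by luminance and rescale each exposure so a segment's mean luminance approaches a target before fusion. This is a hedged sketch of the general approach, assuming float RGB images in [0, 1]; the segmentation rule (median split) and the target value are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

def adjust_luminance(images, target=0.5):
    """Illustrative luminance adjustment for multi-exposure fusion:
    split the scene into dark/bright segments by median luminance,
    then scale each exposure so its dark segment's mean luminance
    approaches `target`. A sketch, not the paper's exact method."""
    adjusted = []
    for img in images:  # each img: float array in [0, 1], shape (h, w, 3)
        # Approximate luminance as the mean of the RGB channels.
        lum = img.mean(axis=2)
        # Scene segmentation by luminance distribution (median split).
        dark = lum <= np.median(lum)
        # Scale so the dark segment's mean luminance hits the target.
        scale = target / max(lum[dark].mean(), 1e-6)
        adjusted.append(np.clip(img * scale, 0.0, 1.0))
    return adjusted
```

The adjusted exposures would then be passed to an ordinary fusion step (e.g. weighted averaging), which the sketch leaves out.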
Journal ArticleDOI
An Overview of Compressible and Learnable Image Transformation with Secret Key and Its Applications
TL;DR: This paper focuses on a class of image transformation referred to as learnable image encryption, which is applicable to privacy-preserving machine learning and adversarially robust defense and discusses robustness against various attacks.
Journal ArticleDOI
Image to Perturbation: An Image Transformation Network for Generating Visually Protected Images for Privacy-Preserving Deep Neural Networks
TL;DR: In this paper, a novel image transformation network for generating visually protected images for privacy-preserving deep neural networks (DNNs) is proposed. The network is trained on a plain image dataset so that plain images are converted into visually protected ones.