Krittin Intharawijitr
Researcher at Tokyo Institute of Technology
Publications: 6
Citations: 163
Krittin Intharawijitr is an academic researcher from Tokyo Institute of Technology. The author has contributed to research in the topics of Latency (engineering) & Mobile edge computing. The author has an h-index of 3, and has co-authored 6 publications receiving 127 citations.
Papers
Proceedings Article
Analysis of fog model considering computing and communication latency in 5G cellular networks
TL;DR: A mathematical model of a fog network and its key parameters are defined, and the model is used to evaluate three different policies for selecting the target fog server for each task.
Journal Article
Simulation Study of Low Latency Network Architecture Using Mobile Edge Computing
TL;DR: A simulation study of a low-latency network architecture that uses mobile edge computing, evaluating the architecture's accuracy and efficiency.
Proceedings Article
Practical Enhancement and Evaluation of a Low-Latency Network Model Using Mobile Edge Computing
TL;DR: This research studies the impact of both latencies in MEC architecture with regard to latency-sensitive services and considers a centralized model, in which a controller is used to manage flows between users and mobile edge resources, to analyze MEC in a practical architecture.
Journal Article
Simulation Study of Low-Latency Network Model with Orchestrator in MEC
TL;DR: This research studies the impact of both latencies in MEC architecture with regard to latency-sensitive services and considers a centralized model, in which a controller is used to manage flows between users and mobile edge resources, to analyze MEC in a practical architecture.
Journal Article
Empirical Study of Low-Latency Network Model with Orchestrator in MEC
TL;DR: This paper designed and implemented an MEC-based network architecture that guarantees the latency of offloaded tasks by first estimating the total latency, including both computing and communication latencies, at a centralized node called the orchestrator, and evaluated its performance in terms of the blocking probability of tasks.