Exploiting the parallelism of large-scale application-layer networks by adaptive GPU-based simulation
Citations
52 citations
13 citations
Cites background from "Exploiting the parallelism of large..."
...proposed a fully GPU-based conservative simulator implementation that adapts the LP size at runtime to balance parallelism and event management overheads [1]....
[...]
...In fully GPU-based simulation [1, 13, 21, 22, 28, 32], the simulator core is executed on the GPU as well....
[...]
10 citations
Cites background or methods from "Exploiting the parallelism of large..."
...Representation of irregular data structures by arrays and grids: APU [180]; GPU [47, 69, 98, 114, 144–146, 152, 166, 177], [7, 14, 95, 109, 140, 141, 159, 168, 183, 196]; FPGA [121, 149]...
[...]
...Other works assume a minimum time delta between an event and its creation (lookahead) to guarantee the correctness of the simulation results [7, 157, 196]....
[...]
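The lookahead rule quoted above can be sketched concretely: any pending event whose timestamp lies within the lookahead window of the earliest pending event cannot be invalidated by events created later, so the whole window may be executed in parallel. This is a minimal illustrative sketch, not code from the cited works; the function name and event representation are assumptions.

```python
def safe_events(event_queue, lookahead):
    """Conservative selection sketch: events are (timestamp, payload)
    pairs; everything with timestamp below the earliest timestamp plus
    `lookahead` is safe to execute in one parallel step.
    Illustrative only -- names and types are not from the cited paper."""
    if not event_queue:
        return []
    horizon = min(ts for ts, _ in event_queue) + lookahead
    return [ev for ev in event_queue if ev[0] < horizon]

# usage: with lookahead 1.0, only events before t = 1.0 are safe
print(safe_events([(0.0, "a"), (0.5, "b"), (1.2, "c")], 1.0))
# → [(0.0, 'a'), (0.5, 'b')]
```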
...Instead, the set of events is considered jointly in an unsorted fashion [168], split by model segment [159] or simulated entity [7, 109, 196], split according to a fixed policy [141, 183], or split randomly [121]....
[...]
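Two of the splitting schemes listed in the excerpt above (by simulated entity, and by a fixed policy) can be sketched as simple partitioning rules. This is an illustrative sketch under assumed names and event layout, not an implementation from any of the cited works.

```python
def split_events(events, policy, num_partitions):
    """Partition (timestamp, entity) events for parallel processing.
    'entity' splits by simulated entity (entity id modulo partitions);
    'fixed' applies a fixed round-robin policy over arrival order.
    Policy names are illustrative, not taken from the cited papers."""
    parts = [[] for _ in range(num_partitions)]
    for i, (ts, entity) in enumerate(events):
        if policy == "entity":
            parts[entity % num_partitions].append((ts, entity))
        elif policy == "fixed":
            parts[i % num_partitions].append((ts, entity))
        else:
            raise ValueError(f"unknown policy: {policy}")
    return parts

# usage: three events for entities 0, 0, 1 split into two partitions
print(split_events([(0.1, 0), (0.2, 0), (0.3, 1)], "entity", 2))
# → [[(0.1, 0), (0.2, 0)], [(0.3, 1)]]
```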
8 citations
Cites background or methods from "Exploiting the parallelism of large..."
...The literature proposes two solutions: first, merging queues of multiple simulated entities increases the probability of having events that can safely be executed [35]....
[...]
...Autotuning approaches, which have previously been shown to be highly beneficial in the GPU context [49], [35], might help in selecting a suitable queue....
[...]
...In [35], the number of simulated entities assigned to each LP is adapted to balance idle threads and the cost of queue operations....
[...]
...[35] store each LP’s events in a separate array....
[...]
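The per-LP storage described above (each LP's events in a separate array, with a tunable number of simulated entities per LP) can be sketched as follows. In the cited work the entities-per-LP ratio is adapted at runtime; here it is a fixed parameter, and all names are illustrative assumptions.

```python
def partition_events(events, num_lps, entities_per_lp):
    """Sketch of per-LP event storage: each logical process (LP) owns a
    separate array, and an event for entity e is routed to
    LP e // entities_per_lp. Raising entities_per_lp reduces the number
    of LPs (and queue overhead) at the cost of parallelism -- the
    trade-off the adaptive scheme balances. Illustrative only."""
    lp_arrays = [[] for _ in range(num_lps)]
    for ts, entity in events:
        lp_arrays[entity // entities_per_lp].append((ts, entity))
    return lp_arrays

# usage: four entities, two per LP
print(partition_events([(0.1, 0), (0.2, 3), (0.3, 1)], 2, 2))
# → [[(0.1, 0), (0.3, 1)], [(0.2, 3)]]
```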
...The considered parameter combinations were chosen according to our previous works in GPU-based simulation [35], [48] to cover cases of low utilization where the GPU could be outperformed by a single CPU core, up to configurations approaching full GPU utilization....
[...]
4 citations
Cites background from "Exploiting the parallelism of large..."
..., [14, 19, 35]), including several types of network simulations [2, 3, 46]....
[...]
References
64 citations
"Exploiting the parallelism of large..." refers background in this paper
...In 2010, Park et al. (Park and Fishwick 2010) proposed a framework for purely GPU-based discrete-event simulations, achieving a speedup close to 10....
[...]
40 citations
"Exploiting the parallelism of large..." refers result in this paper
...In some cases, a large speedup compared with a sequential simulation was achieved (Park, Fujimoto, and Perumalla 2004), while in other cases there were modest or no performance gains (Dinh, Lees, Theodoropoulos, and Minson 2008, Quinson, Rosa, and Thiery 2012)....
[...]