Calendar - University of Houston

[Defense] Parallel I/O in Low Latency Storage Systems

Monday, December 7, 2020

9:00 am - 10:30 am

In Partial Fulfillment of the Requirements for the Degree of Doctor of Philosophy
Raafat Feki
will defend his proposal
Parallel I/O in Low Latency Storage Systems


Abstract

High-performance computing (HPC) systems have undergone a momentous evolution over the last two decades. Embedding thousands of cores with very powerful processing capabilities, today's supercomputers can process tremendous amounts of data in a matter of seconds. However, the evolution of storage systems has not kept pace, which has led to a huge gap between I/O performance and processing performance. Computer scientists have therefore focused mainly on improving I/O performance through software solutions, e.g., collective I/O and asynchronous I/O, that constitute the foundation of parallel I/O. They have proposed several algorithms and techniques to hide the I/O overhead and improve the overall performance of storage systems by targeting bandwidth and capacity. The Message Passing Interface (MPI) has been the most widely recognized parallel programming paradigm for large-scale parallel applications. Starting with version 2 of the MPI specification, the standard introduced an interface for parallel file I/O, referred to as MPI-I/O. By extending MPI concepts to file I/O operations, the programming model becomes more complete and offers more options for developers to exploit the performance benefits of parallel I/O. As we reach the new era of exascale computing, multiple innovative technologies have risen to the surface, opening the door toward a balanced HPC ecosystem that incorporates low-latency storage systems. Nevertheless, this evolution has also posed new challenges for parallel I/O optimization. Traditionally, hardware latency was the main source of I/O overhead, while software latency was usually negligible. In low-latency storage systems, however, the equation changes, since the former is reduced to the same level as the latter.
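
For illustration only, a minimal sketch of a collective MPI-IO write in C is shown below; the file name, buffer size, and offsets are placeholders and are not taken from this work.

    /* Minimal sketch of collective MPI-IO: every rank writes its own
     * contiguous block of a shared file with a single collective call. */
    #include <mpi.h>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);

        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        enum { COUNT = 1024 };          /* elements per rank (illustrative) */
        int buf[COUNT];
        for (int i = 0; i < COUNT; i++)
            buf[i] = rank;

        MPI_File fh;
        MPI_File_open(MPI_COMM_WORLD, "out.dat",
                      MPI_MODE_CREATE | MPI_MODE_WRONLY, MPI_INFO_NULL, &fh);

        /* Collective write: ranks coordinate the file access instead of
         * issuing many small independent requests. */
        MPI_Offset offset = (MPI_Offset)rank * COUNT * sizeof(int);
        MPI_File_write_at_all(fh, offset, buf, COUNT, MPI_INT, MPI_STATUS_IGNORE);

        MPI_File_close(&fh);
        MPI_Finalize();
        return 0;
    }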

In this dissertation, we therefore aim to solve this new equation by providing multiple optimization techniques for existing parallel I/O solutions within the MPI-IO context. In particular, the dissertation targets the communication overhead of collective I/O operations, the computation phase of complex access patterns within independent I/O operations, and the file-locking overhead of the Lustre parallel file system. Finally, it proposes a generic model of parallel I/O performance in a typical HPC system that deploys a low-latency storage system.
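
For context, applications commonly steer collective I/O and Lustre behavior through MPI-IO hints passed at file-open time. The sketch below uses standard ROMIO hint names and illustrative values; it is a general example, not the optimizations proposed in this dissertation.

    /* Illustrative only: passing MPI-IO hints that steer collective
     * buffering and Lustre striping (ROMIO hint names). */
    #include <mpi.h>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);

        MPI_Info info;
        MPI_Info_create(&info);
        MPI_Info_set(info, "romio_cb_write", "enable");  /* force collective buffering */
        MPI_Info_set(info, "cb_nodes", "4");             /* number of aggregator ranks */
        MPI_Info_set(info, "striping_factor", "8");      /* Lustre stripe count */
        MPI_Info_set(info, "striping_unit", "1048576");  /* Lustre stripe size in bytes */

        MPI_File fh;
        MPI_File_open(MPI_COMM_WORLD, "out.dat",
                      MPI_MODE_CREATE | MPI_MODE_WRONLY, info, &fh);

        MPI_File_close(&fh);
        MPI_Info_free(&info);
        MPI_Finalize();
        return 0;
    }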


Monday, December 7, 2020
9:00 AM - 10:30 AM CT
Online via MS Teams

Dr. Edgar Gabriel, dissertation advisor

Faculty, students and the general public are invited.
