In Partial Fulfillment of the Requirements for the Degree of
Doctor of Philosophy
Will give a preliminary defense of his dissertation
In the past decade, the increasing number of cores per node has propelled the performance of leadership-scale systems from teraflops to petaflops. Meanwhile, the bandwidth of I/O subsystems has remained nearly stagnant, creating a large gap between computation and I/O time and making I/O a major bottleneck. Furthermore, the I/O bandwidth realized on such systems is generally far lower than their theoretical peak. The Message Passing Interface (MPI) has been the de facto standard for parallel computing for the past two decades. MPI-I/O, which is part of the MPI standard, not only offers a clean interface through which applications can access the file system but also acts as middleware between the application and the file system in which a variety of optimizations can be applied. In particular, collective I/O has proven very effective on large-scale systems and helps bridge the gap between theoretical and sustained I/O bandwidth. This dissertation aims to develop approaches that improve parallel I/O at this level. Specifically, we propose to investigate methods to overlap I/O with computation, to utilize data-layout-aware rank assignment to improve I/O performance, to dynamically tune runtime parameters for a given system configuration, and, finally, to examine the applicability of these approaches to newer forms of storage such as Solid State Drives (SSDs).
Date: Wednesday, May 9, 2012
Time: 2:00 PM
Faculty, students, and the general public are invited.
Advisor: Dr. Edgar Gabriel