Computer Science Distinguished Seminar - University of Houston

Computer Science Distinguished Seminar

Back to the Future: All-Systolic Convolutional Neural Networks

When: Friday, June 8, 2018
Where: PGH 563
Time: 11:00 AM

Speaker: Dr. H. T. Kung, Harvard University

Host: Dr. Stephen Huang

Back in the early 1980s, when we were busy designing systolic arrays, we did not anticipate that one day we would be able to jointly optimize model training, parallel processing, and circuit design. With the arrival of deep convolutional neural networks (CNNs) and their end-to-end training, that day is here.

I will describe our recent results in all-systolic CNNs based on this joint design methodology. We have tested it with FPGA implementations using datasets such as MNIST, Fashion-MNIST, CIFAR-10, and Tiny-ImageNet. For the first three relatively small datasets, we are able to demonstrate single-chip full CNN designs under today’s FPGA hardware limitations.

Systolic arrays are at the heart of the Tensor Processing Unit (TPU) architecture and other non-GPU CNN accelerators, but support for sparse neural networks and array scalability remain open issues. We also need to understand related opportunities for mobile devices. I will conclude the talk by shedding some light on these challenges.

Bio:

H. T. Kung is the William H. Gates Professor of Computer Science and Electrical Engineering at Harvard University. Prior to joining Harvard in 1992, he taught at Carnegie Mellon for 19 years after receiving his Ph.D. there. Professor Kung is best known for his pioneering work on I/O complexity in computing theory, systolic arrays in parallel processing, optimistic concurrency control in database systems, and wireless protocols in mobile ad-hoc networking. His academic honors include membership in Academia Sinica (Taiwan), membership in the National Academy of Engineering, a Guggenheim Fellowship, and the ACM SIGOPS 2015 Hall of Fame Award (with John Robinson).