My research broadly involves parallelism, functional programming, and domain-specific languages (DSLs). Currently, I focus on designing and implementing DSLs for implicitly data-parallel programming in a functional style.
One ongoing project extends Accelerate to target and run code across multiple heterogeneous parallel devices simultaneously (for example, a multi-core CPU and several GPUs). We are exploring compiler extensions that automatically split existing Accelerate programs and divide their work intelligently across all available devices.
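For a sense of what these programs look like, here is a minimal Accelerate program (the classic dot product), included purely as an illustration of the flat data-parallel style this work targets rather than as code from the project itself. Each collective operation (`zipWith`, `fold`) acts on whole arrays, which is exactly the structure a compiler can exploit when dividing work between devices.

```haskell
import Data.Array.Accelerate as A

-- Dot product written with Accelerate's collective array operations.
-- Each collective (zipWith, fold) works over whole arrays, which is the
-- structure a compiler can exploit to partition chunks of the input
-- across a CPU backend and one or more GPU backends.
dotp :: Acc (Vector Float) -> Acc (Vector Float) -> Acc (Scalar Float)
dotp xs ys = A.fold (+) 0 (A.zipWith (*) xs ys)
```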
I have also been investigating “auto-tuning” compilers. Auto-tuning is a technique for automatically optimizing a program using feedback from runs on sample inputs. I have worked on applying this to GPU programs, and on functional techniques for implementing auto-tuning in an embedded language.
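To make the idea concrete, below is a minimal, backend-agnostic sketch of that feedback loop in plain Haskell: run the program under each candidate configuration on a sample input, time it, and keep the fastest. The `autotune` helper and its parameters are hypothetical illustrations, not the interface from the work cited below.

```haskell
import Data.List (minimumBy)
import Data.Ord  (comparing)
import System.CPUTime (getCPUTime)

-- Coarse timing helper (CPU time in seconds). A real tuner would use
-- wall-clock time and force the result before stopping the clock.
timed :: IO a -> IO Double
timed action = do
  start <- getCPUTime
  _     <- action
  end   <- getCPUTime
  return (fromIntegral (end - start) * 1e-12)

-- Try every candidate configuration on a sample input and return the
-- configuration with the lowest measured runtime.
autotune :: [cfg] -> (cfg -> input -> IO output) -> input -> IO cfg
autotune candidates runWith sample = do
  timings <- mapM (\cfg -> timed (runWith cfg sample)) candidates
  return (fst (minimumBy (comparing snd) (zip candidates timings)))
```

A practical auto-tuner replaces this exhaustive search with a smarter strategy (for example, hill climbing or an evolutionary search) and caches measurements so the cost of tuning is paid only once.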
In the past, I briefly did research in artificial intelligence. I worked with Dr. Scott Gordon on a variation of the minimax algorithm designed to take more risks and set traps.
Michael Vollmer, Bo Joel Svensson, Eric Holk, and Ryan R. Newton. 2015. Meta-programming and Auto-tuning in the Search for High Performance GPU Code. In Proceedings of the 4th ACM SIGPLAN Workshop on Functional High-Performance Computing (FHPC 2015). ACM, New York, NY, USA, 1-11. [PDF]
Bo Joel Svensson, Michael Vollmer, Eric Holk, Trevor L. McDonell, and Ryan R. Newton. 2015. Converting Data-parallelism to Task-parallelism by Rewrites: Purely Functional Programs Across Multiple GPUs. In Proceedings of the 4th ACM SIGPLAN Workshop on Functional High-Performance Computing (FHPC 2015). ACM, New York, NY, USA, 12-22. [PDF]