My research generally involves parallelism and functional programming.
I am interested in designing programming languages to help programmers make better use of parallel hardware. More generally, I want to make software more efficient, especially through parallelism.
Currently, my research focuses on using types and functional programming abstractions to make parallel programming safer and easier. I also contribute to Gibbon, an ongoing project exploring compiler optimizations that improve the performance of recursive tree traversals.
Previously, I worked on two research projects on GPU programming in a functional style using embedded domain-specific languages. For the first, I contributed to Accelerate, extending the language and compiler to automatically divide work across multiple GPUs. For the second, I contributed to Obsidian, implementing an auto-tuning framework that used heuristic search to improve the performance of GPU kernels.
In the past, I briefly did research in artificial intelligence. I worked with Dr. Scott Gordon on a variation of the minimax algorithm designed to take more risks and set traps, and I developed a genetic programming library that is used in an undergraduate AI course.
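For reference, the standard minimax algorithm that the variation above builds on can be sketched as follows. This is the textbook algorithm only; the risk-taking, trap-setting modification is not reproduced here, and the explicit game-tree encoding is an illustrative assumption, not taken from the original project.

```python
# Minimal sketch of standard minimax over an explicit game tree.
# A node is either a leaf score (int) or a list of child nodes.
# The maximizing player picks the child with the highest value;
# the minimizing player picks the lowest.

def minimax(node, maximizing=True):
    if isinstance(node, int):  # leaf: static evaluation score
        return node
    child_values = [minimax(child, not maximizing) for child in node]
    return max(child_values) if maximizing else min(child_values)

# Example: a depth-2 game with the maximizer to move.
# Branch [3, 5] guarantees at least 3; branch [2, 9] only 2.
print(minimax([[3, 5], [2, 9]]))  # → 3
```

A risk-seeking variant would typically replace the worst-case `min` assumption with a scoring rule that also weighs high-payoff outcomes, accepting some downside in exchange for chances to trap the opponent.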
Michael Vollmer, Sarah Spall, Buddhika Chamith, Laith Sakka, Milind Kulkarni, Sam Tobin-Hochstadt, and Ryan R. Newton. Compiling Tree Transforms to Operate on Packed Representations. European Conference on Object-Oriented Programming (ECOOP 2017). [PDF]
Michael Vollmer, Ryan G. Scott, Madanlal Musuvathi, and Ryan R. Newton. SC-Haskell: Sequential Consistency in Languages That Minimize Mutable Shared Heap. Principles and Practice of Parallel Programming (PPoPP 2017). [PDF]
Michael Vollmer, Bo Joel Svensson, Eric Holk, and Ryan R. Newton. Meta-programming and Auto-tuning in the Search for High Performance GPU Code. Workshop on Functional High-Performance Computing (FHPC 2015). [PDF]
Bo Joel Svensson, Michael Vollmer, Eric Holk, Trevor L. McDonell, and Ryan R. Newton. Converting Data-parallelism to Task-parallelism by Rewrites: Purely Functional Programs Across Multiple GPUs. Workshop on Functional High-Performance Computing (FHPC 2015). [PDF]