I’m a PhD student at Indiana University in the programming languages group, advised by Dr. Ryan Newton.


My research generally involves parallelism and functional programming.

I am interested in designing programming languages to help programmers make better use of parallel hardware. More generally, I want to make software more efficient, especially through parallelism.

Currently, my research focuses on using types and functional programming abstractions to make parallel programming safer and easier. I am also working on Gibbon, an ongoing project exploring compiler optimizations that improve the performance of recursive tree traversals.
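As a rough illustration, the kind of program this work targets is an ordinary recursive traversal over an algebraic datatype, like the Haskell sketch below. This is only an illustration of the idea (the names Tree, add1, and sumTree are made up for this example), not Gibbon's actual input language.

    -- A minimal sketch of the kind of program a tree-traversal compiler targets:
    -- a recursive function over an algebraic datatype.
    data Tree = Leaf Int
              | Node Tree Tree

    -- Increment every leaf, producing a new tree. A conventional compiler chases
    -- pointers between heap objects here; the optimizations explored in Gibbon
    -- instead let such traversals run over a dense, serialized layout of the tree.
    add1 :: Tree -> Tree
    add1 (Leaf n)   = Leaf (n + 1)
    add1 (Node l r) = Node (add1 l) (add1 r)

    main :: IO ()
    main = print (sumTree (add1 (Node (Leaf 1) (Leaf 2))))
      where
        sumTree (Leaf n)   = n
        sumTree (Node l r) = sumTree l + sumTree r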

Previously, I did research on GPU programming in a functional style, using embedded domain-specific languages in Haskell.
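The general idea behind that style of work is deep embedding: ordinary Haskell code constructs a syntax tree, which a separate backend can then compile to GPU code. The toy sketch below illustrates only this embedding idea; it is not the actual library or API used in those projects.

    -- A toy deep embedding: arithmetic written as normal Haskell builds an AST
    -- (Exp) that a code generator could later compile to a GPU kernel.
    data Exp
      = IntLit Int
      | Var String
      | Add Exp Exp
      | Mul Exp Exp
      deriving Show

    -- A Num instance lets ordinary-looking arithmetic produce syntax trees.
    instance Num Exp where
      fromInteger = IntLit . fromInteger
      (+)         = Add
      (*)         = Mul
      negate e    = Mul (IntLit (-1)) e
      abs    _    = error "abs: not needed for this sketch"
      signum _    = error "signum: not needed for this sketch"

    -- Looks like plain arithmetic, but evaluates to a syntax tree.
    example :: Exp
    example = 2 * Var "x" + Var "y"

    main :: IO ()
    main = print example   -- Add (Mul (IntLit 2) (Var "x")) (Var "y")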

Even earlier than that, I briefly did research in artificial intelligence. I worked with Dr. Scott Gordon on a variation of the minimax algorithm designed to take more risks and set traps, and I developed a genetic programming library that is used in an undergraduate AI course.
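For context, the baseline that work varied is standard minimax, sketched below in a few lines of Haskell. The risk-taking and trap-setting heuristics themselves are not shown, and the names GameTree and minimax are just for this illustration.

    -- Plain minimax over an abstract game tree with static leaf evaluations.
    data GameTree = Terminal Double      -- leaf: static evaluation of a position
                  | Position [GameTree]  -- internal node: positions reachable in one move

    minimax :: Bool -> GameTree -> Double
    minimax _          (Terminal v)     = v
    minimax maximizing (Position moves)
      | maximizing = maximum (map (minimax False) moves)
      | otherwise  = minimum (map (minimax True)  moves)

    main :: IO ()
    main = print (minimax True (Position [Terminal 3, Position [Terminal 1, Terminal 5]]))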


Michael Vollmer, Sarah Spall, Buddhika Chamith, Laith Sakka, Milind Kulkarni, Sam Tobin-Hochstadt, and Ryan R. Newton. Compiling Tree Transforms to Operate on Packed Representations. European Conference on Object-Oriented Programming (ECOOP 2017). [PDF]

Michael Vollmer, Ryan G. Scott, Madanlal Musuvathi, and Ryan R. Newton. SC-Haskell: Sequential Consistency in Languages That Minimize Mutable Shared Heap. Principles and Practice of Parallel Programming (PPoPP 2017). [PDF]

Michael Vollmer, Bo Joel Svensson, Eric Holk, and Ryan R. Newton. Meta-programming and Auto-tuning in the Search for High Performance GPU Code. Workshop on Functional High-Performance Computing (FHPC 2015). [PDF]

Bo Joel Svensson, Michael Vollmer, Eric Holk, Trevor L. McDonell, and Ryan R. Newton. Converting Data-parallelism to Task-parallelism by Rewrites: Purely Functional Programs Across Multiple GPUs. Workshop on Functional High-Performance Computing (FHPC 2015). [PDF]