I'm fascinated by the systems we build to channel, represent, and draw meaningful knowledge from data.
I'm a machine learning practitioner who loves figuring out the right set of modeling techniques for a problem.
I'm also an avid science communicator, who's written over 50,000 words explaining and clarifying machine learning research.
Read more about my past work experience here
Over the last year or so, I've summarized more than 50 recent machine learning papers in language that is concise, clear, and easier to understand than the originals.
Read more about the kinds of problems I've solved using machine learning, and how I approached solving them
Occasionally, I realize there's an area of statistics or machine learning that I don't understand quite as well as I'd like. When that happens, I read as much in the area as I can, and once I feel I've reached clarity, I write about it.
Published on January 20, 2019
NVIDIA's StyleGAN outperforms prior GAN architectures, and does so through thoughtful engineering of pathways for local and global structure. Here, I explain the idea of locality of information and walk through how the concept is used in this model.
Published on December 26, 2018
Enthusiasts of graph-structured data want a convolution analogue for graphs, and the literature has pursued two paths: one based on a mathematically rigorous definition of convolution, the other more heuristic.
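To give a flavor of the heuristic path, here is a toy sketch that defines "convolution" on a graph as averaging each node's feature with those of its neighbors. The graph, features, and function name are illustrative assumptions of mine, not drawn from the post itself:

```python
# A toy sketch of the heuristic (spatial) path: treat "convolution"
# on a graph as averaging each node's feature with its neighbors'.
# The adjacency matrix, features, and function name are illustrative
# assumptions, not from the post.

def neighborhood_average(adjacency, features):
    """One round of mean aggregation over each node and its neighbors."""
    out = []
    for node, feat in enumerate(features):
        neighbors = [j for j, edge in enumerate(adjacency[node]) if edge]
        total = feat + sum(features[j] for j in neighbors)
        out.append(total / (1 + len(neighbors)))
    return out

# A path graph 0 - 1 - 2 with one scalar feature per node.
adj = [[0, 1, 0],
       [1, 0, 1],
       [0, 1, 0]]
print(neighborhood_average(adj, [0.0, 3.0, 6.0]))  # [1.5, 3.0, 4.5]
```

Real spectral methods instead define convolution via the graph Laplacian's eigenbasis; the averaging above is only the simplest spatial analogue.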
Published on October 1, 2018
Convolution has a history prior to its resurgence as a machine learning operation; this post walks through the mathematical convolution operator and connects the conceptual dots between historical convolution and the intuitions we've developed for its more applied incarnation.
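As a quick illustration of the operator that post discusses, here is a minimal sketch of discrete convolution, (f * g)[n] = Σₖ f[k]·g[n − k], in plain Python. The function name and the "full"-length output are my own choices, not from the post:

```python
# A minimal sketch of discrete convolution for finite sequences:
# (f * g)[n] = sum over k of f[k] * g[n - k].
# Returns the "full" result of length len(f) + len(g) - 1.

def convolve(f, g):
    """Discrete convolution of two finite sequences (full mode)."""
    out = [0] * (len(f) + len(g) - 1)
    for i, fi in enumerate(f):
        for j, gj in enumerate(g):
            out[i + j] += fi * gj  # each pair contributes to index n = i + j
    return out

# Convolving with [0.5, 0.5] is a two-point moving average:
print(convolve([1, 2, 3], [0.5, 0.5]))  # [0.5, 1.5, 2.5, 1.5]
```

This is the same flip-and-slide operation the historical operator performs; deep learning libraries typically implement the closely related cross-correlation under the name "convolution".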