Loving Vincent and Mathematically Awakened Art

A few days ago, I watched a movie called Loving Vincent, and its visual beauty blew me away.

[GIF: Loving Vincent]

Over a five-year stretch, 259 artists painted every single frame of the film. It was a project of passion, and that passion brought Van Gogh’s paintings to life, each second a visual heartbeat for the film.

But this isn’t a movie review blog. I do numbers. Of course, there’s no replacement for the passion and artistry it took to create a work of art like Loving Vincent. However, I did want to see whether there was a way to bring paintings and artwork to life mathematically.

Continue reading “Loving Vincent and Mathematically Awakened Art”

2 Sides of the Same Coin: Residual Blocks and Gradient Boosting

No fun backstory for this post. These two ideas are just cool and important in machine learning, and it turns out they are algorithmically very similar. So I’ll explain how 🙂

Residual Blocks Review:

A residual block in a neural network is defined by the presence of skip connections, as seen in the id(H_{k-1}) connection in figure (1). A skip connection adds the output of an earlier layer i−k directly into some later layer i. These blocks reduce the effect of vanishing gradients and enable the training of much deeper neural networks. This architecture helped Microsoft win the 2015 ImageNet challenge, and residual blocks have since become a staple of deep neural networks.

Resnet Block | Credit: Andrew Ng
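
To make the skip connection concrete, here is a minimal sketch of a residual block (my own toy example in PyTorch, not the exact ResNet architecture): the block’s input H_{k-1} is added back onto the output of its stacked layers before the final activation.

```python
import torch
import torch.nn as nn


class ResidualBlock(nn.Module):
    """Toy residual block: H_k = relu(F(H_{k-1}) + H_{k-1})."""

    def __init__(self, dim):
        super().__init__()
        self.fc1 = nn.Linear(dim, dim)
        self.fc2 = nn.Linear(dim, dim)
        self.relu = nn.ReLU()

    def forward(self, x):
        out = self.relu(self.fc1(x))
        out = self.fc2(out)
        # the skip connection: add the block's input id(H_{k-1}) back in
        return self.relu(out + x)


block = ResidualBlock(dim=64)
h = torch.randn(32, 64)   # a batch of 32 hidden activations
print(block(h).shape)     # torch.Size([32, 64])
```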

Continue reading “2 Sides of the Same Coin: Residual Blocks and Gradient Boosting”

Know Thyself: And Know Thine Uncertainty

In my recent readings of various ML media, from blog posts to published papers, I’ve started to notice a trend. I’m growing more certain that the ML community is ignoring uncertainty, which is certainly not good practice, and it renders much of their results quite uncertain.

In this post, I wanted to go over a quick and easy method that uses inverse probability to estimate the uncertainty in your model’s test accuracy.

Continue reading “Know Thyself: And Know Thine Uncertainty”
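
As a quick preview of the idea before you click through: treat the number of correct test predictions as a Binomial outcome and place a Beta prior on the true accuracy; the posterior then gives a credible interval rather than a single point estimate. The sketch below is my own illustration, assuming scipy and made-up counts.

```python
from scipy.stats import beta

# hypothetical test results: 880 correct out of 1000 (made up for illustration)
n_correct, n_total = 880, 1000

# uniform Beta(1, 1) prior + Binomial likelihood -> Beta posterior over accuracy
posterior = beta(1 + n_correct, 1 + (n_total - n_correct))

print("posterior mean accuracy:", posterior.mean())
print("95% credible interval:", posterior.interval(0.95))
```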

Bad Optimizers, Black Boxes, and Why Neural Networks sometimes seem just Backwards-ass Lucky

Woah! TensorFlow! Neural Networks! Convolutional Recurrent Deep-Learned Blockchain Ethereum Network. Where does the line start??

How much can I spend?

Okay, maybe the last one isn’t actually a thing (for all I know). But there is currently a lot of hype and excitement around deep learning, and for good reason. Neural networks have provided a number of improvements in performance, and specific fields such as computer vision, speech recognition, and machine translation have been genuinely revolutionized by deep learning.

That said, this will be Part 1 of the Grind My Gears series, where I will be talking about deep learning issues that really grind my gears. This will be a less mathematical post than usual, but I will link to resources for diving in deeper if you are interested. Without further ado, let us begin:

Continue reading “Bad Optimizers, Black Boxes, and Why Neural Networks sometimes seem just Backwards-ass Lucky”

BFGS, Optimization, and Burritos

Hey y’all. Hope your summer is going well! In the season of peaches and watermelon, it’s easy to take things for granted. No one has experienced this more than the optimization algorithms used in, well, just about every single machine learning and computational problem.

I thought, for this post, I would dive into one of the classics: the Broyden–Fletcher–Goldfarb–Shanno algorithm, also known as BFGS, named after these ballers right here.
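
(A hedged aside before the full post: in practice you rarely code BFGS by hand, since scipy already exposes it. The snippet below is just an illustrative sketch, with the Rosenbrock function standing in for whatever objective you actually care about.)

```python
import numpy as np
from scipy.optimize import minimize


def rosenbrock(x):
    # classic toy objective with a minimum at (1, 1)
    return (1 - x[0]) ** 2 + 100 * (x[1] - x[0] ** 2) ** 2


result = minimize(rosenbrock, x0=np.array([-1.0, 2.0]), method="BFGS")
print(result.x)    # should be close to [1, 1]
print(result.nit)  # number of BFGS iterations taken
```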

Continue reading “BFGS, Optimization, and Burritos”

Samurai Swords: A Bayesian Perspective

A classic Japanese katana, with a thickness of around 2-3 inches, has over 2,000 hand-folded layers of steel. To put this into context, if you fold a sheet of paper 15 times, it will reach a height of 3 meters, or, in other words, Shaq with about 3 burritos on his head. The swords were so powerful that foreigners would often find their own blades shattered within seconds of a fight. So I guess the question on your mind is: what the hell does any of this have to do with Bayesian Statistics???

Continue reading “Samurai Swords: A Bayesian Perspective”

Cool Stuff: Style Transfer

Merry Christmas, y’all. Happy Hanukkah, Happy Kwanzaa, and a Fortuitous Festivus for the rest of us! In the spirit of the holidays, I wanted to give my mom a little gift.

She’s not the biggest fan of data analytics. I remember one day, we were on the phone and she was telling me this extremely sad story about one of her friends. Her friend was a doctor from the University of Tehran…

Continue reading “Cool Stuff: Style Transfer”

Forced Alignment: Pre-Game

Hope y’all’s day is going well! In my last post, I talked about SAIL, the lab I’m working at, and gave an overview of its ongoing projects. Today, I’m going to talk about the specifics of my current role within the Forensic Interview Project.

For those of y’all who didn’t get the chance to read my last post, the problem that SAIL is currently trying to solve is the high level of…

Continue reading “Forced Alignment: Pre-Game”