Loving Vincent and Mathematically Awakened Art

A few days ago, I watched this movie called Loving Vincent, and the visual beauty of the movie blew me away.


Over a five-year stretch, 259 artists painted every single frame of the film. It was a project of passion, and that passion brought Van Gogh’s paintings to life, each second a visual heartbeat for the film.

But this isn’t a movie review blog. I do numbers. Of course, there’s no replacement for the passion and artistry it took to create a work of art like Loving Vincent. However, I did want to see whether there was a way to bring paintings and artwork to life mathematically.

Continue reading “Loving Vincent and Mathematically Awakened Art”

2 Sides of the Same Coin: Residual Blocks and Gradient Boosting

No fun backstory for this post. Residual blocks and gradient boosting are just cool and important in machine learning, and it turns out they are algorithmically very similar. So I’ll explain how 🙂

Residual Blocks Review:

A residual block in a neural network is defined by the presence of skip connections, as seen in the id(H_{k-1}) connection in figure (1). These skip connections add the output of an earlier layer i-k back into some later layer i. They reduce the effect of vanishing gradients and enable the training of much deeper neural networks. This architecture enabled Microsoft to win the 2015 ImageNet challenge, and residual blocks have become a staple of deep neural networks.

Resnet Block | Credit: Andrew Ng
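To make the picture concrete, here’s a minimal NumPy sketch of one residual block. The two-layer transform and the layer sizes are illustrative choices, not the exact block from the figure, and the closing comment points at the gradient-boosting analogy the post builds toward:

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def residual_block(h_prev, W1, b1, W2, b2):
    """H_k = relu( F(H_{k-1}) + id(H_{k-1}) ), with F a small two-layer transform."""
    f = W2 @ relu(W1 @ h_prev + b1) + b2   # the residual branch F(H_{k-1})
    return relu(f + h_prev)                # the skip connection adds the identity back in

# Toy usage: push a 4-dimensional activation through one block.
rng = np.random.default_rng(0)
h = rng.normal(size=4)
W1, b1 = 0.1 * rng.normal(size=(4, 4)), np.zeros(4)
W2, b2 = 0.1 * rng.normal(size=(4, 4)), np.zeros(4)
print(residual_block(h, W1, b1, W2, b2))

# Compare with gradient boosting's update F_m(x) = F_{m-1}(x) + h_m(x):
# both take what came before and add a learned correction on top of it.
```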

Continue reading “2 Sides of the Same Coin: Residual Blocks and Gradient Boosting”

Conditional Probability, the Danger of Data-Driven Decisions, and When to Stop Eating that Burrito

So there’s a local burrito joint near USC whose primary demographic is the 3 AM student who is somewhat less than sober. This small shack was one of my favorite places in all of Los Angeles.

It was a normal Thursday night, and I was eating my Loaded Combination Burrito with a three-meat blend of chicken, carne asada, and al pastor. It’s always a good night when you’re holding a monster like this in your hand.

Mmmmmmmm Burro

Continue reading “Conditional Probability, the Danger of Data-Driven Decisions, and When to Stop Eating that Burrito”

MCMC: Ice Crystals and the Posterior Distribution

What did the physicist say after the short redhead spat on his shoe?

…

“Let me atom”

So with that subtle segue, let’s dive into our topic today: atomic physics and the reason why Markov Chain Monte Carlo is an effective method for estimating probabilities, or a “posterior distribution,” given a loss function.
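As a preview of the machinery, here’s a bare-bones random-walk Metropolis sampler. The target below is a toy standard normal rather than the ice-crystal model from the post, and the step size is an arbitrary choice:

```python
import numpy as np

def metropolis(log_post, x0, n_steps=10_000, step=0.5, seed=0):
    """Draw samples whose histogram approximates the distribution
    defined (up to a normalizing constant) by log_post."""
    rng = np.random.default_rng(seed)
    x, samples = x0, []
    for _ in range(n_steps):
        proposal = x + step * rng.normal()
        # Accept with probability min(1, p(proposal) / p(x)).
        if np.log(rng.uniform()) < log_post(proposal) - log_post(x):
            x = proposal
        samples.append(x)
    return np.array(samples)

# Toy target: a standard normal, i.e. log p(x) = -x^2 / 2 up to a constant.
samples = metropolis(lambda x: -0.5 * x**2, x0=0.0)
print(samples.mean(), samples.std())   # should land near 0 and 1
```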

Continue reading “MCMC: Ice Crystals and the Posterior Distribution”

Know Thyself: And Know Thine Uncertainty

So in my recent readings of various ML media, from blog posts to published papers, I’ve started to notice a trend. I’m growing more certain that the ML community is ignoring uncertainty, which is certainly not a good practice and renders much of their results quite uncertain.

In this post, I want to go over a quick and easy way to use inverse probability to estimate the uncertainty in your model’s test accuracy. Continue reading “Know Thyself: And Know Thine Uncertainty”
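One common inverse-probability recipe for this, and the flavor I’m assuming the full post walks through, is to put a Beta prior on the true accuracy and update it with the test-set counts. The 462-out-of-500 numbers below are made up purely for illustration:

```python
from scipy import stats

# Suppose the model got k of n held-out examples right (made-up numbers).
k, n = 462, 500

# With a uniform Beta(1, 1) prior on the true accuracy, the posterior
# after observing k successes in n trials is Beta(k + 1, n - k + 1).
posterior = stats.beta(k + 1, n - k + 1)
lo, hi = posterior.ppf([0.025, 0.975])
print(f"point estimate: {k / n:.3f}, 95% credible interval: ({lo:.3f}, {hi:.3f})")
```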

Bad Optimizers, Black Boxes, and Why Neural Networks sometimes seem just Backwards-ass Lucky

Woah! TensorFlow! Neural Networks! Convolutional Recurrent Deep Learned Blockchain Ethereum Network. Where’s the line start??

How much can I spend?

Okay, maybe the last one isn’t actually a thing (for all I know). But there is currently a lot of hype and excitement around deep learning, and for good reason. Neural networks have provided a number of improvements in performance, and specific fields such as computer vision, speech recognition, and machine translation have been genuinely revolutionized by deep learning.

With that said, this will be Part 1 of the Grind My Gears series, where I will be talking about deep learning issues that just really grind my gears. This will be a less mathematical post than usual, but I will link to resources to dive deeper if you are interested. Let us begin:

Continue reading “Bad Optimizers, Black Boxes, and Why Neural Networks sometimes seem just Backwards-ass Lucky”

BFGS, Optimization, and Burritos

Hey y’all. Hope your summer is going well! In the season of peaches and watermelon, it’s easy to take things for granted. No one has experienced this more than the optimization algorithms used in, well, just about every single machine learning and computational problem.

I thought, for this post, I would dive into one of the classics: the Broyden-Fletcher-Goldfarb-Shanno algorithm, also known as BFGS, named after these ballers right here.
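Before digging into how it works, here’s BFGS earning its keep via SciPy’s implementation. The Rosenbrock “banana” function below is my go-to toy objective, not one from the post:

```python
import numpy as np
from scipy.optimize import minimize

# A classic optimizer stress test: the Rosenbrock "banana" function.
def rosenbrock(x):
    return (1 - x[0]) ** 2 + 100 * (x[1] - x[0] ** 2) ** 2

def rosenbrock_grad(x):
    return np.array([
        -2 * (1 - x[0]) - 400 * x[0] * (x[1] - x[0] ** 2),
        200 * (x[1] - x[0] ** 2),
    ])

result = minimize(rosenbrock, x0=np.array([-1.2, 1.0]),
                  method="BFGS", jac=rosenbrock_grad)
print(result.x)   # converges to the minimum at (1, 1)
```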

Continue reading “BFGS, Optimization, and Burritos”

Samurai Swords: A Bayesian Perspective

A classic Japanese katana, with a thickness of around 2-3 inches, has over 2,000 hand-folded layers of steel. To put this into context, if you fold a sheet of paper 15 times, it will reach a height of about 3 meters, or, in other words, Shaq with about 3 burritos stacked on his head. The swords were so powerful that foreigners would often find their blades shattered within seconds of a fight. So I guess the question on your mind is: what the hell does any of this have to do with Bayesian statistics???

Continue reading “Samurai Swords: A Bayesian Perspective”

Aper.io: 36 hours of Pizza Rat

Hackathon #2. UCLA. 1,000 people. I wanted to go big or go home. My teammates, Nihar Sheth, Markie Wagner, and Zane Durante, felt the same.

After a series of disputes and disagreements, we eventually found the perfect idea: using ML to improve the frame rate of videos by predicting what frame should come between any two given frames. It was complex, it was visual, it was potentially a boom, and it was definitely potentially a bust: Continue reading “Aper.io: 36 hours of Pizza Rat”
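For a sense of the problem setup, here’s the trivial baseline any learned interpolator has to beat: the in-between frame as a pixelwise average of its two neighbors. The frame sizes are made up, and this is emphatically not our hackathon model:

```python
import numpy as np

def interpolate_midframe(frame_a, frame_b):
    """Dumbest-possible baseline: the midpoint frame is the pixelwise
    average of its two neighbors. The ML model's job is to beat this."""
    blend = (frame_a.astype(np.float32) + frame_b.astype(np.float32)) / 2.0
    return blend.astype(np.uint8)

# Toy usage on two random 64x64 RGB "frames".
rng = np.random.default_rng(0)
a = rng.integers(0, 256, size=(64, 64, 3), dtype=np.uint8)
b = rng.integers(0, 256, size=(64, 64, 3), dtype=np.uint8)
mid = interpolate_midframe(a, b)
print(mid.shape, mid.dtype)   # (64, 64, 3) uint8
```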