Hackathon #2. UCLA. 1000 people. I wanted to go big or go home. My teammates, Nihar Sheth, Markie Wagner, and Zane Durante, felt the same.
After a series of disputes and disagreements, we eventually landed on the perfect idea: using ML to improve the frame rate of videos by predicting what frame should come between any two given frames. It was complex, it was visual, it could be a boom, and it could definitely be a bust: we were pumped!
By this point the hackathon was well underway, but we felt good: we had our idea, we knew how to execute it, and we each had our own role. In the beginning, we all worked on building up datasets of bad-to-good pictures, as well as datasets to train our frame interpolation model. From there we split up: Markie handled the front-end and the entire UI; Nihar automated the back-end and pretty much built the glue that held the moving parts together; Zane focused on building the pixel quality improvement model; and I worked on developing the frame interpolation model. We chose one video to train on: the NYC pizza rat.
And our goal was to turn this into a smooth video experience.
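For anyone curious what the training data for something like this even looks like: every run of three consecutive frames in a video gives you a free training example, with the two outer frames as input and the middle frame as the target. Here's a minimal sketch of that idea using OpenCV and PyTorch. To be clear, this is illustrative only, not our actual hackathon code; the names and the toy architecture are mine.

```python
# Illustrative sketch only -- not our actual hackathon code.
# Mines (prev, next) -> middle frame triplets from a video, and
# defines a toy CNN that predicts the middle frame.
import cv2
import numpy as np
import torch
import torch.nn as nn

def load_triplets(path, size=(128, 128)):
    """Yield (prev, next, middle) frame triplets, scaled to [0, 1].
    Note: OpenCV returns frames as HWC; transpose to CHW before
    feeding them to the model below."""
    cap = cv2.VideoCapture(path)
    frames = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        frame = cv2.resize(frame, size).astype(np.float32) / 255.0
        frames.append(frame)
    cap.release()
    for i in range(1, len(frames) - 1):
        yield frames[i - 1], frames[i + 1], frames[i]

class MidFrameNet(nn.Module):
    """Toy interpolator: concatenate the two outer frames on the
    channel axis (6 channels in) and predict the middle frame."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(6, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1), nn.Sigmoid(),  # RGB in [0, 1]
        )

    def forward(self, prev_frame, next_frame):
        x = torch.cat([prev_frame, next_frame], dim=1)  # (N, 6, H, W)
        return self.net(x)
```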
This was quite possibly the most up and down hackathon I’ve ever experienced (compared to the one other hackathon I’ve attended). We had long stretches where absolutely nothing worked, only to suddenly see great results. Such is the nature of playing with Deep Neural Networks :). By 5 AM, we had each made a solid amount of progress and decided to each take a quick nap, and by 8 AM we were back at it.
On my end, I cycled between reading various CNN research papers while my latest model architecture trained and then updating the model. The big issue we eventually ran into was correctly predicting the color distributions. By Saturday afternoon, most of our frame interpolations looked like this:
However, at around 4 PM, I made two major architecture changes that dramatically improved our model. We were training on the pizza rat video; here are the results by epoch:
Here’s the loss function using root mean squared error over all the pixels:
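If you haven't seen it before, RMSE here just means: take the squared difference at every pixel and channel, average them, and take the square root. A quick NumPy version, for reference (not our actual code):

```python
# Pixel-wise RMSE between a predicted and a ground-truth frame,
# both float arrays of shape (H, W, 3) scaled to [0, 1].
import numpy as np

def rmse(pred, target):
    return np.sqrt(np.mean((pred - target) ** 2))
```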
The model is at its best at around 28-30 epochs. At this point it was around 3 AM; we had 8 hours left and had just gotten our frame interpolation model ready. Now we had to build an interface and an algorithm to integrate these outputs into a video. We'd had about 2 hours of sleep in the past 40 hours, but we got a new boost of energy just from looking at that gorgeous loss function (maybe that was just me).
After writing a boatload of nasty spaghetti code, we finally got a gorgeous UI working and developed a smooth integration of the model's predicted frames into the original video, with 30 minutes to spare. We now had a beautiful before-and-after video that could be created live in front of anyone: given a choppy video, our pipeline would predict intermediate frames, insert them, and play the result smoothly. Here was our demo video result:
Original vs. processed demo videos.
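The splicing step itself is conceptually simple: walk the original frames in order and insert a predicted frame between each consecutive pair, turning N frames into 2N - 1 frames and doubling the effective frame rate. Again, a toy sketch rather than our actual code, assuming a model like the one sketched earlier:

```python
# Illustrative sketch: double a video's frame rate by inserting
# a predicted frame between each consecutive pair of originals.
import cv2
import numpy as np
import torch

def to_tensor(frame):
    """HWC float frame in [0, 1] -> (1, C, H, W) torch tensor."""
    return torch.from_numpy(frame.transpose(2, 0, 1)).unsqueeze(0)

def interpolate_video(frames, model, out_path, fps):
    """frames: list of HWC float32 arrays in [0, 1]."""
    h, w = frames[0].shape[:2]
    writer = cv2.VideoWriter(
        out_path, cv2.VideoWriter_fourcc(*"mp4v"), fps * 2, (w, h))
    with torch.no_grad():
        for prev, nxt in zip(frames, frames[1:]):
            writer.write((prev * 255).astype(np.uint8))
            mid = model(to_tensor(prev), to_tensor(nxt))[0]
            mid = mid.permute(1, 2, 0).numpy()  # back to HWC
            writer.write((mid * 255).astype(np.uint8))
        writer.write((frames[-1] * 255).astype(np.uint8))
    writer.release()
```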
We flipped out! We were so excited! Our product worked, and all the moving parts had come together!
Then we all just crashed.
From there, we set up and presented to the judges, and the reactions we got were so rewarding. I got to talk ML with a large number of people and watch plenty of others be genuinely taken aback by the project.
Right after the judging, the organizers came over and let us know we had made the top 5 and would be demoing on stage. We went into a corner, divided up the talking, and planned out the pitch, and by the time we got to the demo... none of it went as planned. In the middle of our pitch, our laptop froze for a solid 2 of our 5 available minutes, so we had to improvise the presentation from there.
Technical difficulties notwithstanding, our demo went very well. After all the other teams demoed, the prizes were announced. We ended up winning 2nd overall in the hackathon (the #1 team was genuinely amazing; find their devpost here), as well as the $1000 grant from 1517 VC to help us continue the project.
Overall, this might have been the most fun project I've ever worked on: complex deep learning experimentation and debugging, a super applicable problem space, an awesome team, and a greatly run hackathon! Shoutout to the organizers, who were all really cool people. It was an awesome time!
Wow what a great time!