Last week's learnings at the Benelearn conference about how to compress your Neural Networks to fit into less memory

Compressing your NN to 1/64 of its size without hurting your results too much just sounds like dark magic! Great talk by +Kilian Weinberger.

#MachineLearning #NeuralNetwork #Benelearn
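
The talk itself isn't summarized here, but Weinberger's group is best known for compressing layers with the hashing trick (HashedNets), where a huge virtual weight matrix shares a small pool of real parameters. A minimal NumPy sketch of that idea; the layer sizes, bucket count, and the seeded RNG standing in for a hash function are all illustrative assumptions:

```python
import numpy as np

def hashed_linear(x, w_shared, n_in, n_out, seed=0):
    # Map every entry (i, j) of a virtual n_in x n_out weight matrix to a
    # bucket in the small shared vector w_shared. A seeded RNG stands in
    # here for a deterministic hash function.
    rng = np.random.default_rng(seed)
    bucket = rng.integers(0, w_shared.size, size=(n_in, n_out))
    sign = rng.choice((-1.0, 1.0), size=(n_in, n_out))  # sign hash de-biases collisions
    W_virtual = sign * w_shared[bucket]  # materialized here only for clarity
    return x @ W_virtual

# Toy usage: a 256x256 layer (65,536 virtual weights) backed by just
# 1,024 trainable parameters, i.e. 1/64 of the memory.
rng = np.random.default_rng(1)
w_shared = rng.standard_normal(1024) * 0.05
x = rng.standard_normal((4, 256))
print(hashed_linear(x, w_shared, 256, 256).shape)  # (4, 256)
```

During training, gradients for all virtual weights that collide in a bucket accumulate into the same shared parameter, which is how the layer still learns with 1/64 of the memory.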

Check this out on Google+

3 Replies to “Last week's learnings at the Benelearn conference about how to compress your Neural Networks to fit into less memory”

  1. Hmm, as a sort of hack you could train dozens of small predictors, put them in an ensemble, and then distill a "larger" predictor that is still smaller than the ensemble combined.

    Interesting results nonetheless: using the lower-probability predictions as a representation of internal state (yes, a car looks a lot like a truck). A sketch of that distillation idea follows below.
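
That reads like knowledge distillation with soft targets (Hinton et al.'s "dark knowledge"): train a student on the ensemble's temperature-softened output distributions, so the "car looks like a truck" signal survives. A minimal NumPy sketch; the temperature value and the plain averaging of ensemble outputs are illustrative assumptions, not details from the comment:

```python
import numpy as np

def softmax(logits, T=1.0):
    z = logits / T
    z = z - z.max(axis=1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def soft_targets(ensemble_logits, T=4.0):
    # High temperature flattens each member's distribution, exposing the
    # low-probability classes (the "dark knowledge"); average over members.
    return np.mean([softmax(l, T) for l in ensemble_logits], axis=0)

def distillation_loss(student_logits, targets, T=4.0):
    # Cross-entropy between the soft targets and the student's
    # equally softened predictions.
    p = softmax(student_logits, T)
    return -np.mean(np.sum(targets * np.log(p + 1e-12), axis=1))

# Toy usage: three "small predictors" teaching one student on a batch of 4.
rng = np.random.default_rng(0)
ensemble = [rng.standard_normal((4, 10)) for _ in range(3)]
student = rng.standard_normal((4, 10))
print(distillation_loss(student, soft_targets(ensemble)))
```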
