I said it.
In recent years, there has been a lot of buzz about deep learning,
where the learning algorithm is not based on Bayes' rule and probability.
People are optimizing arbitrary, complicated cost functions, and they are
doing it with gradient descent, so they don't even reach the minimum
of the (incorrect) function that they want to optimize.
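To make that last point concrete, here is a minimal toy sketch (my own illustration, not from any particular deep learning system): gradient descent on a non-convex cost can settle into a local minimum and never reach the global one.

```python
# Toy non-convex cost with two local minima.
def f(x):
    return x**4 - 3*x**2 + x

# Its derivative.
def grad(x):
    return 4*x**3 - 6*x + 1

# Start in the basin of the shallower minimum and run plain gradient descent.
x = 2.0
for _ in range(1000):
    x -= 0.01 * grad(x)

# x ends up near 1.13, a local minimum with cost about -1.07,
# while the global minimum sits near x = -1.30 with cost about -3.51.
```

So even on this one-dimensional example, gradient descent quietly delivers a solution far from the true optimum; with the high-dimensional, complicated cost functions of deep nets, there is no guarantee of doing better.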
I just wanted to remind everyone that nobody in the world knows how to train
even one hidden layer well, so instead of all the hand-waving about deep learning,
which gets annoying, it may be worth examining simpler,
fundamental models again.
My publication on the Netflix Prize is now free. Download it
here.
The previous 4-page publication has
over 450 citations so far,
and the newer publication has 195 pages and 0 citations.
So read it, cite it.
My h-index is 1, and I want to increase it to 2.
I don't feel like a real scientist with an h-index of 1.
You have to read it so you don't fall behind your competition.
Sunday, March 13, 2016