I said it.
In recent years, there has been a lot of buzz about deep learning,
where the learning algorithm is not based on Bayes' rule and probability.
People are optimizing arbitrary, complicated cost functions, and they are
doing it with gradient descent, so they don't even reach the minimum
of the (incorrect) function that they want to optimize.
I just wanted to remind everyone that nobody in the world knows how to train
even one hidden layer well, so instead of all the hand-waving about deep learning,
which gets annoying, it may be worth examining simpler,
fundamental models again.
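To make the gradient-descent point concrete, here is a minimal sketch (not from the post, and the function is my own hypothetical example): gradient descent on a nonconvex function with two minima settles into whichever basin it starts in, so it need not find the global minimum of the cost it is given.

```python
# f(x) = x^4 - 3x^2 + x has two local minima: a global one near
# x ~ -1.30 and a merely local one near x ~ 1.13.
def grad(x):
    # derivative of f
    return 4 * x**3 - 6 * x + 1

def gradient_descent(x, lr=0.01, steps=1000):
    # plain fixed-step gradient descent
    for _ in range(steps):
        x -= lr * grad(x)
    return x

# Two starting points on either side of the hump end up in different basins:
left = gradient_descent(-2.0)   # converges near the global minimum
right = gradient_descent(2.0)   # converges near the local minimum
```

Only the run started at -2.0 reaches the global minimum; the other run stops at a stationary point with a strictly higher cost, which is the sense in which gradient descent does not reach "the minimum" of the function it optimizes.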
My publication on the Netflix Prize is now free. Download it
here.
My previous, 4-page publication has so far gathered
over 450 citations,
while the newer publication has 195 pages and 0 citations.
So read it, cite it.
My h-index is 1, and I want to increase it to 2.
I don't feel like a real scientist with an h-index of 1.
You have to read it so you don't fall behind your competition.
4 comments:
This is simply not true.
It's good that it isn't not true in a complicated way.
Trust me, it's not true in a bad way.