
Its first author is a Googler, and it's posted on a Google domain.

EDIT: haha, I am downvoted for speaking the truth. The parent and I have both read lots of papers like these; this paper makes only a modest contribution. It basically says "we fudged the inference and the end results look similar, so it's a good optimization." However, there will probably be lots of specific models that "need" the mixing freedom the approximation removes (see the sketch at the end of this comment), and hence the algorithm will only work for a specific subspace of MCMC problems. MCMC is basically impossible to debug, so we don't know how well it works overall. THIS IS THE SAME CONCLUSION AS EVERY OTHER MCMC APPROXIMATION PAPER. The only reason this is on HN is because of its heritage. I do not think this paper is revolutionary (unlike some other papers coming out of Google).

EDIT 2: evidence.

Modern:
"Fully Parallel Inference in Markov Logic Networks" (Max Planck)
"Hybrid Parallel Inference for Hierarchical Dirichlet Process"
"A Split-Merge MCMC Algorithm for the Hierarchical Dirichlet Process"

Old:
"Parallel Implementations of Probabilistic Inference", a 1996 review paper!

You might say these papers are not exactly the same. OK, but the final justification given for the paper in question is:

"Depending on the model, the resulting draws can be nearly indistinguishable"

NOTE the keywords: "depending" and "nearly".
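
To make the "mixing freedom" point concrete, here is a minimal sketch. This is my own toy example, not the paper's algorithm: a naive synchronous ("parallel") Gibbs update converges to the wrong distribution exactly when the variables are strongly coupled.

    # Toy demo: exact sequential Gibbs vs. a naive synchronous ("parallel")
    # Gibbs update on a 2D Gaussian with correlation rho. This is NOT the
    # paper's algorithm; it just illustrates how removing sequential mixing
    # can change the stationary distribution for strongly coupled variables.
    import numpy as np

    rho = 0.95                       # target correlation between x and y
    sd = np.sqrt(1.0 - rho ** 2)     # std dev of the conditionals x|y, y|x
    n_iter, burn = 100_000, 1_000
    rng = np.random.default_rng(0)

    # Exact sequential Gibbs: each update reads the NEWEST other coordinate.
    x = y = 0.0
    seq = np.empty((n_iter, 2))
    for t in range(n_iter):
        x = rho * y + sd * rng.standard_normal()
        y = rho * x + sd * rng.standard_normal()
        seq[t] = x, y

    # Synchronous Gibbs: both updates read the OLD values, as a naive
    # parallel implementation would.
    x = y = 0.0
    par = np.empty((n_iter, 2))
    for t in range(n_iter):
        x, y = (rho * y + sd * rng.standard_normal(),
                rho * x + sd * rng.standard_normal())
        par[t] = x, y

    print("target corr:     ", rho)
    print("sequential corr: ", np.corrcoef(seq[burn:].T)[0, 1])  # ~0.95
    print("synchronous corr:", np.corrcoef(par[burn:].T)[0, 1])  # ~0.0

For weakly coupled variables the two samplers agree almost exactly; push rho toward 1 and the synchronous version quietly samples a joint with the wrong correlation. That is the "depending on the model" caveat in code.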



I wasn't the downvoter, but your initial response didn't answer the question, and was a snide irrelevancy. Was it intended as criticism of the HN audience? So what if it's related to Google?

Your subsequent edit is useful - thank you. I just wish you'd said that in the first place and left out the swipe.


It wasn't a swipe. Compressed? Yes. It IS the main reason this paper is getting more air than the work of hundreds of other machine learning papers in the same league. It was not meant in malice; it's just a plain, ordinary fact.

I thought arguing that it was an average contribution would be more snide. It's perfectly good work, just not in the same league as MapReduce or self-driving cars.


I really wish you'd said in the first instance that it appeared that the paper wasn't really saying anything new, and left out the implication that it's getting more air purely because it's from Google.

Not least, you might be wrong. It might be getting more air not because it's from Google, but simply because it has more visibility in general. My feeling is that other, perhaps more scholarly, and perhaps more in-depth articles don't see the light of day simply because they are in more obscure places. You can help by pointing at other sources of similar material, and explaining why they are potentially of more value.

A single line of "It's by a Googler" was unenlightening, and to me came off as pure snark.


I have no idea if you're right or not, but I really hate how people downmod on HN instead of stating their disagreements.


In general so do I, but in this case the initial reply was a content-free snipe, so I'm not surprised it got a downvote rather than an investment of time in a more complete reply.

The subsequent edit was useful, although the attitude is, well, "sharp".


As far as I've been able to tell from years of Reddit/HN, downvoting is basically a signal that says "this comment is not worth the time it'd take to rebut it, but also isn't doing anything actively harmful enough to elicit flagging."


"The parent and I have both read lots of papers like these,"

Can you please post the title/author of the ones that are most relevant?


Are there any theoretical results on this type of algorithm saying that, e.g., amortized over some number of iterations, detailed balance is satisfied with some probability?
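
For concreteness, detailed balance means pi(x) P(x -> y) = pi(y) P(y -> x) for every pair of states; a plain Metropolis chain satisfies it exactly. Here is a toy numerical check (my own sketch, nothing to do with the paper):

    # Detailed balance for a toy 3-state Metropolis chain:
    # pi[i] * P[i, j] == pi[j] * P[j, i] for all states i, j.
    # My own sketch, not taken from the paper under discussion.
    import numpy as np

    pi = np.array([0.2, 0.3, 0.5])       # target distribution
    Q = np.full((3, 3), 1.0 / 3.0)       # symmetric uniform proposal

    # Metropolis transition matrix: accept j with prob min(1, pi[j]/pi[i]).
    P = np.zeros((3, 3))
    for i in range(3):
        for j in range(3):
            if i != j:
                P[i, j] = Q[i, j] * min(1.0, pi[j] / pi[i])
        P[i, i] = 1.0 - P[i].sum()

    flow = pi[:, None] * P               # flow[i, j] = pi[i] * P[i, j]
    print(np.abs(flow - flow.T).max())   # ~0: detailed balance holds exactly

What I'm asking is whether anyone has bounded the analogous violation for the parallel approximation, even only in expectation over iterations.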



