
In my opinion it's extremely fair. The benchmarks compare Keras+PlaidML against Keras+TensorFlow, which lets us run exactly the same networks (imported straight from the Keras included applications), and whatever penalty Keras might impose is equal in the two cases. Getting one very direct comparison is exactly why we constructed the tests that way (none of the other frameworks run on our high-priority platforms).
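
For concreteness, here's roughly the shape of one of those runs. This is an illustrative sketch, not our actual harness: the model choice and timing loop are arbitrary, and the only PlaidML-specific piece is the documented install_backend() hook.

    import os
    import time
    import numpy as np

    # Flip this to switch backends; everything below is identical either way.
    if os.environ.get("USE_PLAIDML") == "1":
        import plaidml.keras
        plaidml.keras.install_backend()  # must run before `import keras`

    from keras.applications.mobilenet import MobileNet

    model = MobileNet(weights=None)  # same net definition for both backends
    batch = np.random.rand(8, 224, 224, 3).astype("float32")

    model.predict(batch)  # warm-up / compile
    start = time.time()
    for _ in range(16):
        model.predict(batch)
    print("imgs/sec:", 16 * 8 / (time.time() - start))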

That said, we'd be pretty excited if someone wanted to add support for TF, PyTorch, MXNet, etc. We like Keras but are happy to have integrations for all frameworks. With some work you could pair it with Docker and containerize GPU-accelerated workloads without the guests even needing to know what hardware they're running on. Lots of possibilities.



No, no, no.

> whatever penalty Keras might impose is equal in the two cases.

The penalty Keras imposes when using TensorFlow depends on its TensorFlow backend implementation. The penalty it imposes when using MXNet depends on its MXNet backend implementation. The penalty it imposes when using PlaidML depends on whatever the PlaidML devs implemented. When you build a Keras layer, it ends up calling different backend code for each backend.
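
You can check this directly (a rough sketch against Keras 2.x; exact module names may vary by version):

    import keras.backend as K

    print(K.backend())          # e.g. 'tensorflow'
    print(K.conv2d.__module__)  # 'keras.backend.tensorflow_backend' under TF;
                                # PlaidML supplies its own 'plaidml.keras.backend'

    # A Conv2D layer's forward pass is whatever that module's conv2d does,
    # so the overhead and kernel choices differ per backend.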

The comparison would be fair if PlaidML only claimed to be the fastest Keras backend; it isn't fair as support for the claim that PlaidML is faster than TensorFlow itself.



