
The fatal flaw is that you’re really introducing a new class which may have its own behaviour problems. Now your test is implicitly testing the test class. I also prefer to see explicit mocking than just trust that the fake does what it says it will. How do I know without looking whether the fake store method REALLY stores the doc, or whether someone set it up to just return OK? And how do you test error handling - make another fake for every error scenario?


This isn't a "fatal flaw." It's a tradeoff. You add a little bit of extra code (which is a liability), and in exchange, you get a test double that you can reuse in many places (which is a benefit). This is in opposition to mocks, where you get an automated framework that can substitute calls anywhere in the system (a benefit), but you're completely replicating the behavior in every single test (a liability).

I argue that [0] the liability of mocks is much higher than that of fakes. With a fake, you can write tests that assert the fake actually satisfies the interface's contract. There's no way to do this with a mock: you can make a mock do anything, regardless of whether it makes sense, and that is fragile under modification. In a large codebase, you're inevitably going to change the behavior of a class during a refactoring. Are you going to read every test that mocks that class to decide how the mocks should be rewritten? Probably not - the mocks force-overrode the behavior of the class, so your tests will still pass. Why would you look at them again?

> make another fake for every error scenario?

By the time you're implementing a fake, you have an interface that the fake implements. One way that I've handled this is to make a FailingBlobStorage that takes an error in its constructor and throws it from any calls. If you need even more detailed behavior than that, you can create a mock for that specific test case. You're not married to the fake. It's not going to magically test every scenario. But fakes handle the common case in so many situations that it's actually surprising when you start trying them out.
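A minimal sketch of that pattern in Python. The names (BlobStorage, FakeBlobStorage, FailingBlobStorage) are hypothetical, chosen to illustrate the idea of one reusable happy-path fake plus one reusable error-path fake:

```python
class BlobStorage:
    """The interface that both the real client and the test doubles implement."""
    def store(self, name, data):
        raise NotImplementedError

    def read(self, name):
        raise NotImplementedError


class FakeBlobStorage(BlobStorage):
    """Reusable in-memory fake for the common (happy-path) case."""
    def __init__(self):
        self._blobs = {}

    def store(self, name, data):
        self._blobs[name] = data

    def read(self, name):
        return self._blobs[name]


class FailingBlobStorage(BlobStorage):
    """Fake for error scenarios: raises the configured error from every call."""
    def __init__(self, error):
        self._error = error

    def store(self, name, data):
        raise self._error

    def read(self, name):
        raise self._error
```

One FailingBlobStorage then covers every "what if storage throws X?" test, rather than one mock setup per test.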

[0] In fact, I've basically written this same blog post before. https://www.bitlog.com/2019/12/10/simple-software-engineerin...


Mocks have the same problem: you create something new that can have unintended behavior. Even worse, if you're not perfectly familiar with the mocking framework, you may not even notice it.

The good thing about (simple) fakes is that you can see all the code in one very small class. No complicated library that does something funky in a special case.

Mocking can be very useful though, but it can get too complicated very quickly.


Mocks are essentially just “calling this function on the mock object during this test returns this result”, though, right? At least, the ones I write are. No state, no logic, no real ‘behaviour’. As soon as you introduce a fake that really has an underlying state and maybe some kind of validation logic (assuming that you want tests to make sure you can’t insert nonsense/retrieve non-existent stuff), you’ve introduced a ton of new and completely meaningless failure points.


You're totally right.

But think about a blob storage class that has two methods, ReadOneFile() and ReadMultipleFiles().

You have one test that just mocks ReadOneFile(string fileName) and doesn't mock ReadMultipleFiles(). You then change the implementation of the System Under Test (SUT) to make a single ReadMultipleFiles(string[] fileNames) call instead of three ReadOneFile() calls. Now your test fails, but the implementation of the SUT is perfectly valid. You need to rewrite your test.

If you used a fake instead, the test would stay green without any changes. The test with the fake is less coupled to the implementation, and helps you more with refactoring/changing your code.
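A sketch of that scenario in Python (the store and SUT names are hypothetical). Because the fake implements both methods over the same in-memory state, the same test passes whichever method the SUT happens to call:

```python
class FakeFileStore:
    """In-memory fake; both read methods share one source of truth,
    so the fake stays consistent no matter which one the SUT uses."""
    def __init__(self, files):
        self._files = dict(files)

    def read_one(self, name):
        return self._files[name]

    def read_multiple(self, names):
        return [self._files[n] for n in names]


def concat_files(store, names):
    """System under test. Whether this calls read_one three times or
    read_multiple once is an implementation detail the test ignores."""
    return "".join(store.read_multiple(names))  # the refactored version


# The identical test passes before and after the refactoring:
store = FakeFileStore({"a": "1", "b": "2", "c": "3"})
assert concat_files(store, ["a", "b", "c"]) == "123"
```

A mock-based test pinned to three read_one calls would go red here even though the behavior is unchanged.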

And additionally, in this example ReadOneFile() is called three times and is expected to return three different results, so even your mock needs some logic to handle that.


Yes, good point, that’s true. Personally, I know it’s not the ‘right’ ethos, but I see the mock test failure as a bit of a bonus. My change intended to stop calling ReadOne and start calling ReadMultiple. Having to change the test sure tells me I accomplished that! But you’re right that if I’m intending to only test behaviour, the fake lets me do that better. Interesting post, thank you :)


Somewhat tangential to the conversation at hand, but I feel part of the problem here is having two methods i.e. ReadOne/ReadAll. Why not just embrace OOP and go with one method (Read) that takes a specification which itself encapsulates the set of files you want to read? Then it works for none, one, and all files...and makes the argument about Mocking vs Faking somewhat moot; the mock is a simple "for any args return this" and the fake is just as simple.

I guess my point is that if you have to agonise over mock vs fake then really it's a sign that your design is not quite as testable as you might like :)


Mocks have to be updated and fixed every time you make any meaningful change in production code. A good fake can be reused in many tests and will keep behaving like the real thing with minimal maintenance. That's much less complex than maintaining hundreds of mocks, each implementing different parts of the interface.


I agree that a fake implementation would probably be overkill for such situations, but I would also suggest that anything requiring mocks sounds like overkill too.

If a mock object returning specific results from particular functions is enough to satisfy some code's requirements, then I would seriously consider whether that code could take those results as normal function arguments (possibly lazy/thunked).
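A tiny sketch of that refactoring in Python (the config-reading functions are made up for illustration). Instead of mocking a storage call that returns a fixed value, pass the value, or a lazy thunk producing it, directly:

```python
# Before: the function reaches into storage, so tests must mock storage.read.
def retries_via_storage(storage):
    config = storage.read("config")
    return config["retries"]


# After: the function takes a thunk for the result it needs.
# Tests need no test double at all, just a lambda.
def retries(get_config):
    return get_config()["retries"]


assert retries(lambda: {"retries": 3}) == 3
```

In production you'd pass `lambda: storage.read("config")`; the dependency on storage moves to the caller.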


> How do I know without looking whether the fake store method REALLY stores the doc, or whether someone set it up to just return OK?

Why would you not look into the fake implementations? If it's just a two liner that takes the parameters and writes them to disk, then that's a LGTM from me. You would not test every getter/setter method in any of your classes, would you?

If it's more complex than this, you can absolutely write tests for your fakes. In many cases fake data-stores use tested backends, and tested client libraries. If the client wrapper is complex enough then you can/should test it as well.


Write integration tests that run against both the real thing and the fake one.

Test for the relied upon behaviours.

It's essentially contract testing.

It's great for objects that perform external actions.
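One common way to express that contract-testing idea with Python's unittest (the class names here are hypothetical): put the shared assertions in a mixin, and give each implementation, fake and real, its own TestCase subclass that only supplies the store under test.

```python
import unittest


class KeyValueContract:
    """Shared contract: every key/value implementation must pass these tests."""
    def make_store(self):
        raise NotImplementedError

    def test_round_trip(self):
        store = self.make_store()
        store["k"] = "v"
        assert store["k"] == "v"

    def test_missing_key(self):
        store = self.make_store()
        assert store.get("k") is None


class FakeStoreTest(KeyValueContract, unittest.TestCase):
    def make_store(self):
        return {}  # the "fake": a plain dict


# A RealStoreTest would subclass the same contract and return a client
# for the live backend; both run the identical assertions.
```

If the fake and the real implementation both pass the contract suite, tests written against the fake carry real evidence.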


> which may have its own behaviour problems

But those are easy to find. This is why double-entry bookkeeping is the accounting standard: it's easy to add incorrectly once. It's much less common to add incorrectly the exact same way twice.


The article addresses this: you need to write tests for the fake as well.

I'm not sure I buy this as something that's desirable to have to do, but it should address your concern.


Hmm, if I’m having to write tests for my tests, I feel something has gone wrong in my life ;)


A "fake" isn't a test; it's a real, working implementation of some interface. The only reason we don't use such implementations in production is for some non-functional reason (efficiency, resilience, etc.). For example, if we want a key/value store we might choose Redis for production, but a HashMap is a perfectly good fake (and you can bet that the language/library implementing HashMap has a ton of tests!)
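In Python the same idea is a dict-backed fake mirroring just the small slice of the Redis API the application uses (here, only get/set; FakeRedis is a made-up name):

```python
class FakeRedis:
    """In-memory fake exposing the subset of the client API the app calls.
    A dict is a real, working key/value store; we just wouldn't ship it,
    for non-functional reasons (no persistence, no sharing across processes)."""
    def __init__(self):
        self._data = {}

    def set(self, key, value):
        self._data[key] = value

    def get(self, key):
        return self._data.get(key)  # None for missing keys, like the real client


cache = FakeRedis()
cache.set("user:1", "alice")
assert cache.get("user:1") == "alice"
assert cache.get("missing") is None
```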


If this leads to having to spend less time on writing tests in total, things have gone right.

These fakes are much less maintenance-heavy, as changes to implementation details don't require changes to tests. Hence, whilst you need to write tests for your tests, you spend less time writing tests in total.


A fake is an implementation. The same tests you use on the real implementation also test the fake implementation.



