I've maintained for a while now that the distinction isn't between "big" and "small" data, but between coarse and fine data. Now that everything is done through the web, previously common data sources (surveys, sales summaries, etc.) are being supplanted by microdata (web logs, click logs, etc.). It does take a different skill set to analyze noisy, machine-generated data than to analyze clean, survey-like data; it's a skill set weighted more towards computational knowledge than classical experimental design, hence the shift in emphasis.
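To make that concrete, here's a toy sketch (the log fields and dates are made up, not from any real system): the fine data is one machine-generated record per click, and the coarse data is the pre-aggregated summary a sales report or survey would hand you directly.

    from collections import Counter
    from datetime import datetime

    # Fine data: one noisy, machine-generated record per event (made-up format)
    click_log = [
        {"ts": "2013-04-02T09:14:03Z", "user": "u17", "page": "/pricing"},
        {"ts": "2013-04-02T09:14:07Z", "user": "u17", "page": "/signup"},
        {"ts": "2013-04-02T11:02:41Z", "user": "u42", "page": "/pricing"},
    ]

    # Coarse data: the pre-aggregated answer a sales summary would give you directly
    daily_page_views = Counter()
    for event in click_log:
        day = datetime.strptime(event["ts"], "%Y-%m-%dT%H:%M:%SZ").date()
        daily_page_views[(day, event["page"])] += 1

    print(daily_page_views)  # '/pricing' seen twice, '/signup' once on 2013-04-02

Analyzing the first kind of data means dealing with duplicate events, bot traffic, clock skew, and so on before you ever get to a question like "how many page views per day"; the second kind arrives with those decisions already made for you.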
I like this distinction. I walked into a world of hurt when I was brought on to look at application user data after years of working with international trade data and national statistics. Even when it comes to formulating a hypothesis and subsequent experiment, the approach is entirely different.
I will say that the article's distinction between small and big data is also important, but that mostly comes down to processing power. I think the distinction you make is far more important, and knowing whether you need coarse or fine data can help keep you out of the issues introduced when moving from small to big data.
But I also don't really mind whether Big Data is truly big, because it's clearly different data from what businesses are used to collecting and interpreting today.
I agree completely. I run a company that handles high levels of compute load for financial applications. I often describe what we do as "big compute," not big data, because the data is actually very small in size. OTOH, this tiny bit of data (real-time prices on some 1,000 assets) causes an ENORMOUS amount of computation. Often this distinction doesn't get picked up either, and people might mistakenly classify us as "big data."
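A rough illustration of what I mean (a toy Monte Carlo revaluation, nothing like our actual models): the input is a few kilobytes of prices, but the work scales with assets times simulated paths.

    import random

    # Tiny input: ~1,000 "real-time" prices, a few kilobytes at most (made-up numbers)
    prices = [100.0 + 0.1 * i for i in range(1000)]

    def monte_carlo_value(spot, n_paths=10_000, vol=0.2):
        # Toy revaluation: average of simulated terminal prices under a normal shock
        total = 0.0
        for _ in range(n_paths):
            total += spot * (1.0 + random.gauss(0.0, vol))
        return total / n_paths

    # 1,000 assets x 10,000 paths = 10 million simulated draws from that tiny input
    values = [monte_carlo_value(p) for p in prices]
    print(len(values), round(values[0], 2))

The data never gets bigger, but every tick can trigger millions of calculations, which is why "big compute" fits better than "big data."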
I also like that distinction. To me "big" data isn't big until there is a lot of it...and there is a definite distinction between "How many bananas were sold Tuesday?" and "Was the user's LED email indicator on when xyz happened?"