Hacker News

Ok, I'll take the counter argument.

FPGAs promise the designer 'arbitrary logic' and deliver 'a place others sell into.'

I disagree that FPGA experts "like" the tools they are given; they tolerate them. One of my friends worked at Xilinx for 15 years and understood this all too well. He felt the leading cause of the problem was that the tools group was a P&L center: it needed to turn a profit in order to exist, and it got that profit by charging high prices for the tools and high prices for support. His argument was that 'easier' tools cut into support revenue. When I've had high-level (E-level, but not C-level) discussions with Xilinx and Altera, there has been a lot of acknowledgement of the 'difficulty of getting up to speed' on the tool chain, and many free hours of consulting are offered. From a business engagement point of view, making hard to use tools and then "giving away" thousands of dollars of free consulting to the customer to gain their support seems to work well. The customer feels supported, and stops wondering why, if the consultants are around for free, those consultants wouldn't just make the tools more straightforward to use and available on a wider variety of platforms.

But the biggest thing has always been intellectual property. You buy an STM32F4 and it has an Ethernet MAC on it (using Synopsys IP, as evidenced by the note in the documentation); you pay $8 for the microcontroller, work around the bugs, and get it running. If you buy an FPGA, let's say a Spartan-3E, you pay $18 for the chip, and if you want to use that Synopsys Ethernet MAC?[1] It's $25,000 for the HDL source to add to your project, or $10,000 if you are OK with just the EDIF output, which can be fed into a place-and-route back end. Oh, and some royalty if you ship it in a product you are selling.

The various places that have been accumulating 'open' IP, such as OpenCores (http://opencores.org/), have been really helpful for this, but I suspect it really needs a different pricing model. A lot of HDL is where OS source code was back at the turn of the century (locked down and expensive).

[1] I did this particular exercise in 2005 when I was designing a network attached memory device (https://www.google.com/patents/US20060218362) and was appalled at the extortionate pricing.



> From a business engagement point of view, making hard to use tools and then "giving away" thousands of dollars of free consulting to the customer to gain their support seems to work well.

To me, the entire recent history of the computer industry (well, all of it is recent, BTW) shows that if you want your technology to become mass-adopted, you need to make it easier for the little guy to get in the game: the high school kid tinkering with stuff in the parents' basement; the proverbial starving student. That's how x86 crushed RISC; that's how Linux became prominent; that's how Arduino became the most popular microcontroller platform (despite more clever things being available).

You make the learning curve nice and gentle, and you draw into your ranks all the unwashed masses out there. In time, out of those ranks the next tech leaders will emerge.


I don't disagree, and I suggested as much to the Xilinx folks (well, their EVP of marketing at the time): if they just added $0.25 to the price per chip they could fund the entire tools effort with that 'tax', and since they would be 'giving away' the tools they could re-task all of the compliance guys, who were ensuring that licenses worked or didn't work, into building useful features.

Their counter, of course, is that they have customers who sweat the $0.25 difference in price. Which I understand, but $10,000 in tools and $15,000 in consulting a year is a hundred thousand chips' worth of that surcharge. To which they say, "Oh, at that volume we would waive the tooling cost." And that got me back to your point: you already have their design win, so why give them free tools? Why not give free tools to those who have yet to commit to your architecture?

It is a very frustrating conversation to have.


What's your opinion on the new Xilinx C-based design tools, and Altera's OpenCL tools for doing compute acceleration?


I haven't used either of them. I played around with SystemC a bit when it was all the rage, but found that my issues weren't in optimizing some bit of C code with a better opcode; rather, they were in assembling a system with the peripherals I wanted in the places I wanted them.

For a long time I considered soft CPUs a bad idea (the Stretch guys kept trying to sell me on them, but since I wasn't really doing things like deep packet inspection I didn't have a good use case; even RAID algorithms on them were better handled by pretty generic DSP-type architectures). However, in playing with the ZedBoard, which has a couple of Cortex-A9s attached to the Xilinx fabric, I find some interesting things there. If only as a new kind of I/O that is neither I/O-port based nor memory-mapped in the old sense (it expresses as memory, but it feels different from the memory mapping of old, like on the PDP/VAX machines and 68K systems). Could just be nostalgia though.



