Lisp has had vectors as well as lists for a long time. If you want a vector, then use a vector.
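
For example (a minimal sketch in Common Lisp; the :element-type is optional, but with it SBCL can store the doubles unboxed):

    ;; a list: a chain of cons cells, one pointer chase per element
    (defvar *xs* (list 1.0d0 2.0d0 3.0d0))

    ;; a vector: one contiguous, specialized block, indexed in O(1)
    (defvar *v* (make-array 3 :element-type 'double-float
                              :initial-contents '(1.0d0 2.0d0 3.0d0)))

    (aref *v* 1)                   ; => 2.0d0
    (loop for x across *v* sum x)  ; => 6.0d0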


Yup, though the usual caveat applies if the item stored is itself a fat object requiring a pointer chase... SIMD exists as well; SBCL has it available out of the box through sb-simd. SBCL also tries to be a bit smart with its linked lists: if you build a list sequentially (allocate the cons cells one after another), the cells are all allocated sequentially in memory. And regardless of that, when the garbage collector moves a list, its new location will be sequential.
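
Roughly what the sb-simd side looks like -- a sketch along the lines of the library's own README examples, assuming a CPU with AVX and that the SB-SIMD-AVX package exposes f64.4-aref and f64.4+ (exact names may differ by version):

    (require :sb-simd)

    ;; add two double-float vectors four lanes at a time
    ;; (lengths assumed to be multiples of 4, for brevity)
    (defun add-f64 (result a b)
      (declare (type (simple-array double-float (*)) result a b))
      (loop for i below (length result) by 4
            do (setf (sb-simd-avx:f64.4-aref result i)
                     (sb-simd-avx:f64.4+ (sb-simd-avx:f64.4-aref a i)
                                         (sb-simd-avx:f64.4-aref b i))))
      result)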

The OCaml people at Jane Street noted that if you just focus on cache lines, the overhead these days of list traversal vs. vector traversal can be closer to a factor of 2 -- something like 16 cycles vs. 30 cycles for a new cache line on an old Skylake architecture, thanks to prefetching. Explicit prefetch instructions are hard to get right, but CPUs have also gotten better in the last 10 years at automatic prefetching. (It might be even better on Apple's architecture, which I believe has more line fill buffers?) A miss is still around 300 cycles though, and a 64-byte cache line very typically holds more than one item (e.g. 8 longs).
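
Rough back-of-envelope on the "more than one item per line" point (assuming unboxed 8-byte elements and a two-word, 16-byte cons, as on 64-bit SBCL):

    64-byte cache line / 8-byte double          = 8 elements per line (unboxed vector)
    64-byte cache line / 16-byte cons (2 words) = 4 conses per line, at best (compactly allocated list)

So whichever per-line figure applies gets amortized over several elements; the ~300-cycle miss on every single cons is what a badly scattered list costs.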


> Yup, though the usual caveat applies if the item stored is itself a fat object requiring a pointer chase... SIMD exists as well; SBCL has it available out of the box through sb-simd. SBCL also tries to be a bit smart with its linked lists: if you build a list sequentially (allocate the cons cells one after another), the cells are all allocated sequentially in memory. And regardless of that, when the garbage collector moves a list, its new location will be sequential.

This is the sort of thing I had in mind when I referred to "cache-aware implementations of Lisps".


This is obvious and irrelevant.



