Privacy concerns aside, I'm unclear on the specific approach used in this study and how it differs from previous work, and would love some input as to whether my interpretation is correct.
From my reading, the tl;dr is that they:
- Build a subject-specific model to predict fMRI activations when presenting a subject with words
(I don't believe this is novel in and of itself. I know it has been done with ECoG ([1] off the top of my head), and I'm fairly certain there are others, so maybe using fMRI is the main advance here?)
- Use GPT to generate candidate sentences, and see which candidates most match the true activations.
(This method of narrowing the solution space seems new.)
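As I understand it, the ranking step reduces to: predict activations for each GPT-generated candidate with the subject-specific encoding model, then pick the candidate whose prediction best matches the observed fMRI data. A minimal sketch of that idea (not the authors' code; `predict_activations` is a toy stand-in for a trained encoding model, and correlation is just one plausible similarity metric):

```python
import numpy as np

rng = np.random.default_rng(0)
N_VOXELS = 50

def predict_activations(sentence: str) -> np.ndarray:
    """Toy stand-in for a subject-specific encoding model: maps a
    sentence to a deterministic pseudo-activation vector. A real
    model would be fit on (stimulus, fMRI) training pairs."""
    seed = abs(hash(sentence)) % (2**32)
    return np.random.default_rng(seed).standard_normal(N_VOXELS)

def rank_candidates(candidates, observed):
    """Score each GPT-generated candidate by how well its predicted
    activations correlate with the observed activations."""
    scored = []
    for s in candidates:
        r = np.corrcoef(predict_activations(s), observed)[0, 1]
        scored.append((r, s))
    return sorted(scored, reverse=True)

# Simulate a trial: the "true" sentence produced the observed
# activations, plus measurement noise.
true_sentence = "the dog ran across the park"
observed = predict_activations(true_sentence) + 0.1 * rng.standard_normal(N_VOXELS)

candidates = [true_sentence, "she opened the window", "rain fell all night"]
ranking = rank_candidates(candidates, observed)
print(ranking[0][1])  # the true sentence should rank first
```

The point of the toy simulation is just that matching happens entirely in activation space, so the decoder only ever chooses among candidates GPT proposes, which is why the language model's priors matter so much.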
The improvement of within-subject accuracy over between-subject accuracy leads me to believe there is a real benefit, but I'm struggling to determine how they quantify the improvement over and above "GPT is good at predicting human language".
I may be misunderstanding the approach altogether however, so take this with a grain of salt.
[1]: https://www.ucsf.edu/news/2021/07/420946/neuroprosthesis-res...