I fear you are right here, and that the problem is far more dire than much of academia realizes. I know enough highly intelligent people (some even with family / spouses in academia, surprisingly) who are otherwise very left / liberal / progressive and open, and who are still basically saying academia needs to be gutted / burned down.
I've no idea what the actual stats are on faith in academia overall today, but I don't think it is looking good.
Because there isn't such a relation. It's a thing people believe when they don't have actual experience with peer review. If anything, predatory journals and low-quality pubs can charge more, since publication is more guaranteed (and researchers reaching for these pay-to-publish journals are more desperate).
It's a reputation economy. Like review sites. They start off truthful, and then as time goes on incentives shift to bad actors to subvert it. Or they just sell out their reputation.
Yelp, TripAdvisor, Wirecutter, hell even Google results themselves.
Once you start poisoning that well, it's difficult if not impossible to claw it back.
> The first thing an academic does is check where a paper is published, before even reading it. It's a crutch
IMO, academics that do this are not very competent, because we have plenty of research suggesting that higher-profile journals are in fact less trustworthy in many ways, or that there is no correlation at all between reputation and quality (see my other post here in this thread).
Yes, some trash journals publish all trash, but, beyond that, competent researchers scan the abstract, look at sample sizes and basic stats, and if those check out, skip to the methods and look for red flags there. Also, most early publications will be on an arXiv-like place anyway, so you can't look to reputation yet.
Likewise, serious analytic reviews like meta-analyses don't factor in e.g. impact factor or paper citations, since that would be nonsense. They focus on methodology and stats.
I really think we ought to shame academics that are filtering papers based on journal alone, it is almost always the wrong way to make a quick judgement.
Do you not notice the circularity of your reasoning here?
Also I didn't say incompetent, I said "not very". More competent researchers make journal rep only a very small factor, and it is not via the "high rep = more trustworthy" direction (which is the bad heuristic), it is "pay-to-publish journals = not trustworthy" (better heuristic).
Once you have ruled out a publication being in a trash journal, reputation is only a very minor factor in consideration, and methodological and substantive issues are what matter.
It all depends on whether the paper fits the journal. Minor journals serve a useful service as a repository for minor results. And minor results are still worth publishing because they might provide a detail or technique later needed for a major result. The thing to be wary of is when you see a stunning result that should really be in _Nature_ or _Science_ in some minor journal. Why isn't it? Was it submitted there first and rejected? It would be nice if the history of a manuscript (and its peer review) stayed with a manuscript so you could see if the authors really corrected problems brought up by peer review or were just spamming journals with a flawed manuscript until they found one that published it.
Ah, look, another smug sneer that ignores the evidence I presented, and makes another circular argument (i.e. that because academics look at rep, this is justified, even though I provided evidence disputing this).
I know which journals are better / worse. But reputation is only helpful in letting you ignore trash journals; once you are out of trash land, rep is just not a very meaningful factor, and you have to focus on methodology and substance.
I literally said it was posted in this thread, and a quick Ctrl+F of my username on this page would have found you it in a half second: https://news.ycombinator.com/item?id=47249236
Ah, but the naive public still broadly believes in peer review, and that high profile journals do good review. And the prominence and reputation that comes from these journals arguably then relies on this (increasingly false) public perception.
Would scientists feel the same if the public was more educated about how bad journals and peer review are? Not so easy to disentangle IMO.
The naive public does not believe anything in particular about peer review. They think new scientific results are significant when they read about them in the popular media, that’s it.
People who do need to work professionally with peer review, do understand what it actually does and its limitations.
You seem stuck somewhere in the middle, caring deeply about a system you don’t seem to fully understand.
> The naive public does not believe anything in particular about peer review
You'd need to provide evidence or an argument for this. The media reports on things in part based on journal prestige, and likely when questioned, people will say they can trust such things because good scientists have looked at the work and say it is good. This would be an implicit belief that peer review is generally working well, even if they don't use the term "peer review".
> You seem stuck somewhere in the middle, caring deeply about a system you don’t seem to fully understand.
Extremely presumptuous, as I work in this system, and have provided plenty of evidence for my claims. You've provided only sneers.
You've provided evidence that prominent journals experience retractions, fraudulent results, etc. All true. But it is not the job of peer reviewers to decide what gets published.
You've provided evidence that peer-reviewed science often turns out to be incomplete, inaccurate, wrong, fraudulent etc. All true. But it is not the job of peer reviewers to assure completeness, accuracy, or freedom from fraud.
A peer reviewer reads a paper and makes comments on it. That's it! They don't check primary data, they don't investigate methods, they don't interrogate scientists, they don't re-run experiments just to double check. They assist a journal's editors in editing--that's it.
The check on published scientific results is the scientific process itself, not the publishing process. Prominent results attract further investigation, which confirms or disproves the reality of the underlying phenomena. Again: that's not the job of peer review.
Do some people ascribe too much authority to peer review? Yes, for sure. IMO your comments in this thread are exacerbating that problem, not addressing it.
> A peer reviewer reads a paper and makes comments on it. That's it! They don't check primary data, they don't investigate methods, they don't interrogate scientists, they don't re-run experiments just to double check. They assist a journal's editors in editing--that's it.
Um, what? I have done all these things in reviews, and know other academics that have done these things as well. More confusingly though, if you are saying most reviewers don't do these things (which I agree with), this would only strengthen my point?
I'll let readers decide if it is my comments that exacerbate the problem, or if, perhaps, it is apologism for journalistic peer review that might be causing bigger issues in the present day.
Would be interesting if you would be willing to share a paper you reviewed and detail your review process of it. I don't see how one could check primary data or interrogate scientists in a blind review process, for example.
This is IMO just bad faith sealioning, you can look at the whole replication crisis in psychology and social science (esp. the work of people like Nick Brown and the GRIM test, or Uri Simonsohn), or sites like Retraction Watch, and see clear evidence of everything I am saying. There are endless papers in ML research going into issues with test datasets and data duplication, etc. In plenty of cases all data and code is made open, so it is trivial to check data issues and methods.
Also, review is back and forth, and has rounds: you almost always interrogate the scientists of the paper you are reviewing; this is almost the definition of peer review. I don't think you have any idea of what you are talking about at all.
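The GRIM test mentioned above is simple enough to sketch. A minimal version in Python (the function name and the two-decimal default are my own; a careful check would also need to know how the original authors rounded): a mean of n integer-valued responses can only be k/n for some integer total k, so a reported mean is consistent only if some such k rounds back to it.

```python
import math

def grim_consistent(reported_mean, n, decimals=2):
    """GRIM check: could a mean of n integer-valued items, rounded to
    `decimals` places, actually equal `reported_mean`?"""
    # Only the integer totals bracketing reported_mean * n can possibly
    # produce a mean that rounds back to the reported value.
    candidates = (math.floor(reported_mean * n), math.ceil(reported_mean * n))
    return any(round(k / n, decimals) == round(reported_mean, decimals)
               for k in candidates)

# A mean of 5.18 from 28 integer responses is achievable (145/28 rounds to 5.18),
# but 5.19 is not: no integer total lands in the right range.
print(grim_consistent(5.18, 28))  # True
print(grim_consistent(5.19, 28))  # False
```

This is exactly the kind of check that needs no access to raw data, just the reported mean and sample size, which is why it has caught so many papers.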
Don't know why you are being downvoted, you are largely correct. I've provided plenty of evidence in another post in this thread showing that journal-based peer review is highly farcical.
EDIT: I still want review from a community of scientific peers. I just don't want this review to be in the hands of a tiny number of gatekeepers entangled with journals that largely just slow things down.
Because a lot of people are deeply invested in the present system perhaps? As the article pointed out, there's a lot of money involved, and there are a lot of people who've built their lives around flourishing in the existing system, cut-throat as it may be.
The other factor preventing a fix is that people with no actual serious experience of academic publishing and peer review will defend these journals, because they still think that (journal-based) peer review acts like some kind of meaningful quality filter. But, it really doesn't.
Because someone is surely going to try to defend journals via peer review in this thread, I want to provide a counter to the arguments that journal peer review does much good. Also, since everyone knows that if you just go to a poor enough journal, you can be published, I am going to focus on the (IMO mostly false) claim that higher-profile journals are still doing a good thing here.
There are numerous studies showing that higher-profile journals in general have more retractions and research misconduct [1-2], lower research quality [3], in fact weaker statistical power and reliability [4], and that statistical reliability even in high prestige journals is still extremely poor overall [5]. Also, making it through peer review is highly random and dependent on who you get as a reviewer [6], or is just basically a coin toss even when looking at reviewer groups:
In 2014, 49.5% of the papers accepted by the first committee were rejected by the second (with a fairly wide confidence interval as the experiment included only 116 papers). This year, this number was 50.6%. We can also look at the probability that a randomly chosen rejected paper would have been accepted if it were re-reviewed. This number was 14.9% this year, compared to 17.5% in 2014. [7]
We should just move to arXiv-like approaches and allow the scientific community to broadly judge relevance and quality. Journals just slow things down and burn funding for very little gain or benefit to anyone other than the journal owners.
> higher-profile journals in general have more retractions and research misconduct [1-2]
Given that reviews are a mechanism to check for soundness, not truth, I would imagine the higher profile the venue, the more misconduct. I mean, would one risk prison to steal $10 or to steal $1 million?
> lower research quality [3]
To cite exactly from your link "the evidence is mixed about whether they are strongly correlated with indicators of research quality.". I find saying "lower" a bit too strong given the original quote.
> in fact weaker statistical power and reliability [4]
For a specific field: "cognitive neuroscience and psychology papers published recently"!
> statistical reliability even in high prestige journals is still extremely poor overall [5]
> Also, making it through peer review is highly random and dependent on who you get as a reviewer [6], or is just basically a coin toss even when looking at reviewer groups:
It's a coin toss if paper could get accepted at all, and that's less than ideal but what the system should do (at least) is reject obvious crap, not ensure that something gets clearly accepted. The danger is False Positive (accepted even if it's crap) rather than False Negative (rejected even if it might be something useful).
Overall note: the review system is not ideal and should be improved. But it's a hard, complex and delicate problem.
Oh, I agree this is all super complex and delicate. If I had more time, I'd love to write a more nuanced, many-thousands-of-words blog post going into which journals and fields actually have good peer review and can be more / less trusted.
I just wanted to make a strong rhetorical case by highlighting some things that might be surprising to people making more naive defenses of journals via peer-review-based arguments.
I am sympathetic to the argument you wish to make, that peer review is no panacea, but the actual evidence you offer has nothing to do with this claim.
You are trying to say that high profile journals have more retractions, which is well known as you share.
How does that have anything to do with peer review? Are you saying that there is more review or less review in some cases and that this influences retraction rate? On what evidence? In what world does the arXiv system moderate this discrepancy?
> How does that have anything to do with peer review?
I already addressed this. People know peer review can be bad, but some think "good journals" still do good peer review. This is not so clear.
> In what world does the arxiv system moderate this discrepancy?
Open systems allow the scientific community to figure out ways to properly assess research quality and value more cheaply, and without passing through (often arbitrary and random) small numbers of gatekeepers that don't even do a reliable or good job gatekeeping in the first place.
Your argument depends on worse peer review at top journals - but fundamentally, you fail to show how doing any peer review is strictly worse than doing no peer review.
I understand that we want arxiv to exist, and it does, and it’s growing. That doesn’t mean we don’t want Nature or Science to triage the most compelling stories.
Importantly, we can already begin the search for these ‘cheaper’ review strategies while not losing the helpful information filter we get by seeing where things are presented/published
> Your argument depends on worse peer review at top journals - but fundamentally, you fail to show how doing any peer review is strictly worse than doing no peer review.
No, it doesn't. The argument is that peer review is incompetent gatekeeping in general, and so slows things down and makes things expensive. Also, I am countering the argument "we need journals because journals do peer review" by arguing "peer review by journals isn't clearly actually good"; I am not saying "peer review in general is unneeded", as I support review by the entire scientific community, rather than journal gatekeepers.
> you fail to show how doing any peer review is strictly worse than doing no peer review
I wasn't trying to show that. I have provided plenty of arguments to show why killing journal-based peer review could definitely speed things up and so potentially make things better. I want actual organic review by the community, not by tiny groups of gatekeepers.
I do work in science, I am claiming that pre-publication / journalistic peer review is limiting (and biasing) the amount of post-publication / non-journalistic peer review that can happen, and it is not limiting this in a very reliable or even IMO particularly desirable way.
There is definitely a problem with the over-production of junk science, and we definitely need a way to filter this out somehow. I am just claiming journalistic / pre-publication peer review does not do this effectively or reliably at all anymore (if it ever did).
> Personality isn't an internal property - it's a judgment made by people watching behavior.
Partly, yes, but personality is also an internal property, or it is coherent and correct enough to generally say that it has internal aspects. I.e. a person's personality is the set of (relatively) stable and difficult-to-change patterns that manifest in their behaviour in broad contexts, and these patterns are almost certainly encoded internally in the brain in some form. It is not much different than saying a person's intelligence / IQ is partly internal.
Otherwise, I do agree with your more careful framing, and I wish people thought and spoke more carefully about these things, and doubly so for LLMs.
It is a wordcel problem, i.e. the belief that language is all there is for modeling reality, even though this is obviously false and has been clearly disproven by decades of research in psychology, cognitive science, and neuroscience. At best we can say that sometimes language has a strong influence on our perceptions of reality.
EDIT: For a neuroscience reference that also argues why the general perspective is obviously false: https://pmc.ncbi.nlm.nih.gov/articles/PMC4874898/. But really, these things ought to be obvious from introspection.
Also in my dealing with birds and animals of all sorts I've come to believe that they are very capable in many forms of cognition without the use of language.
There was a fad called "structuralism" that liked to imagine that such and such is "structured like a language" but then when we got a paradigm for language it was one of those "normal science" paradigms that Kuhn warned you about, like you could write papers grounded in the Chomsky theory for a lifetime but it wouldn't help you learn to read Chinese more quickly or speak German without an accent or program a computer to parse tweets. That is, the structure of language is absolutely useless except for writing papers about linguistics -- and the "language instinct" becomes some peripheral that grafts onto an animal but you need the rest of the animal for it to work.
Now LLMs may not be a model for how we do it but they are certainly going to bring back structuralist and "wordcel" positions because they do seem to show, somehow, that "language is all you need" to accomplish whatever it is LLMs accomplish.
> Now LLMs may not be a model for how we do it but they are certainly going to bring back structuralist and "wordcel" positions because they do seem to show, somehow, that "language is all you need" to accomplish whatever it is LLMs accomplish.
People will try to bring back these obviously false models of cognition, but, so far, the dismal performance of LLMs on e.g. SpatialBench [1], and, almost certainly ARC-AGI-3, or e.g. the kind of data and effort required to get something like V-JEPA-2 [2], will be strong counter-examples to this. And, yeah, obviously animal cognition, esp. smart animals like birds, or the crazy stuff we see in chimp and gorilla ethology (border patrols, genocides, humor, theory of mind, bla bla bla).
This is IMO largely false, and empirically things like Sapir-Whorf and strong linguistic relativism, or the idea that language == thought, are widely considered disproven [1-3].
This is also sort of a wordcel take, in that it neglects that there are plenty of mental structures that are not solely linguistic. I.e. visuo-spatial models, auditory models, kinaesthetic, proprioceptive, emotional, gustatory, or even maybe intuitive models, and symbolic models (which have both linguistic and visuo-spatial aspects). Yes, your models constrain your perception of reality, but it is not clear how important language really is to many of those models (and there is strong evidence it may not matter at all to a lot of cognition [3]).
Spatio-temporal, auditory models, etc. are themselves abstractions, sure. But they are fundamentally different in kind, in that those models respond to immediate sensory stimuli. Per se, they do not allow for abstract reasoning over internally generated representations. One aspect of the ever-elusive "intelligence" is the ability to practice reasoning over things not immediately in front of you.
This extra abstract reasoning capacity absolutely does constrain your ontology -- you don't know what you don't know (the graph doesn't witness its substrate).
> But they are fundamentally different in kind, in that those models respond to immediate sensory stimuli. Per se, they do not allow for abstract reasoning over internally generated representations
This is obviously deeply incorrect, and the kind of thing that people call "wordcel" thinking. Mathematical intelligence is highly visuospatial, and often requires constructing images and/or imagining motion (arguably invoking also either proprioceptive and/or kinaesthetic qualia).
Is it possible you are aphantasic? This seems to me to be the only way one could think that one cannot have non-verbal internally generated representations.
Visuospatial abstraction as distinct from visuospatial perception/interpretation is the dichotomy in question here.
I have a fine mind's eye and a functional inner voice. I'm speaking to the difference between, say, a reactive perception model (which is a model) and an abstracted cognition model (which is also a model, but one that can be interacted with without external input). It is clear almost all animals have the former, and a couple might have the latter, but this distinction is a core concern for usage of abstract systems such as linguistics.
> I'm speaking to the difference between, say, a reactive perception model (which is a model) and an abstracted cognition model (which is also a model, but one that can be interacted with without external input). It is clear almost all animals have the former, and a couple might have the latter, but this distinction is a core concern for usage of abstract systems such as linguistics.
This distinction seems perfectly fine and clear to me, so I suspect we probably actually don't disagree that much on specifics, and that this was maybe a semantic / violent agreement thing.
I still don't think this distinction helps defend your statement "Language constrains your perception of reality to only the set of concepts conceivable within that language.", because, obviously, you can have abstract cognitive models ('concepts') that are non-linguistic, and thus, your perception of reality is not constrained only by language. I.e. remove the "only" and I have no real substantive disagreement.
So it seems we can probably mostly agree on something like "Your abstractions constrain your perception of reality to only the set of concepts [ideas, representations] conceivable from those abstractions".
It is tricky because "concept" can strongly imply a linguistic model, and "model" is really the term we want here, e.g. a "visuospatial concept" is unusual, I admit. But, still, linguistic models are definitely not the only game in town re: perception.
> evidence from neuroimaging and neurological patients
Has "neuroimaging" successfully modelled those "universal human rights" the OP was mentioning? If yes, how did it look?
More generally, positing that all languages are, in the end, interchangeable (because that's what the opponents of something similar to Sapir-Whorf are saying) is very reactionary and limited in itself, and it's telling that me calling those anti-Sapir-Whorf people "reactionaries" will for sure tickle in them something that wouldn't have happened had I used a different "neuroimaged" concept which, supposedly, should have meant the same thing for them (but it doesn't).
See any of my links, but especially the third. Animal cognition and human neuroscience studies strongly disprove the importance of language to cognition. Conflating language and thought is so obviously false in 2026 it is extraordinary that people still think like this.
I was ignoring the comment about fascists because it is simplistic and low-quality, and will similarly not be responding to whatever you (incorrectly) think I was claiming about universal human rights. I only wanted to correct the extremely false (or at least hugely overstated) assumptions about language and perception of reality.
Fascism is an overly simplistic ideology, hence the obvious description. It was just an example. I'm not sure what you thought the other commenter was trying to say either but they were just recalling a specific example to stay on topic in the conversation.