Actually, that's precisely what flagging is for: getting a human moderator to look at the story.
Do you know that's what happens? Because I don't think it does. I imagine an algorithm weighs votes and views against flags and acts accordingly; I haven't seen anything suggesting a human moderator is involved here.
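To be concrete about what I'm imagining, a minimal sketch; every name, weight, and threshold here is my invention, not anything from HN's actual code:

    from dataclasses import dataclass

    @dataclass
    class Story:
        upvotes: int
        views: int
        flags: int

    def should_auto_kill(story: Story) -> bool:
        # Hypothetical weights: one flag counts against several upvotes,
        # and raw views count only a little toward "support".
        FLAG_WEIGHT = 3.0
        KILL_THRESHOLD = 0.5
        support = story.upvotes + 0.1 * story.views
        if support <= 0:
            return story.flags > 0
        return (FLAG_WEIGHT * story.flags) / support > KILL_THRESHOLD

In other words, flags silently outweigh support past some cutoff, with no human in the loop.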
I know for a fact that duplicate comments, hellbanned accounts, and banned domain names are automatically killed on submission. I'm pretty sure I've seen comments, after a popular story was killed, in which a moderator explains why they killed it. Whether the algorithmic methods apply to popular stories I've no idea, but front-page killings aren't frequent enough that an automated method would be desirable. I also figure there are a bunch of early YC users with admin-like powers. Once you have a financial partnership with someone, trusting them with elevated privileges on a tangential news board is small change (for example, YC startups can post clearly unreviewed job ads).
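For what it's worth, those submission-time kills are simple enough that they'd look roughly like this. This is a sketch under my own assumptions; the function name, parameters, and exact checks are guesses, not HN's code:

    from urllib.parse import urlparse

    def killed_on_submission(text, author, url,
                             recent_texts, hellbanned, banned_domains):
        # The three automatic kills mentioned above, as plain membership checks.
        if text and text in recent_texts:     # duplicate comment text
            return True
        if author in hellbanned:              # hellbanned account
            return True
        if url and urlparse(url).netloc.lower() in banned_domains:
            return True                       # banned domain name
        return False

Cheap checks like these are the natural things to automate; the judgment calls on front-page stories are exactly the part you'd leave to a person.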
I don't doubt that YC startups can post job ads without a middleman. But what about the content makes it clear they were not reviewed? Profanity? Nudity? Ugly kitten clip art?
What are the tell-tale refinements that demonstrate a job ad has been reviewed?
Maybe several months ago there were a few right in a row that were clearly written without thought to how they'd be perceived (sorry, I don't remember the actual details, and it doesn't look like they're archived). Nothing outright offensive, just immature-sounding. Think 'brogrammer', but less deliberate.