
I built my own web search index on bare metal, index now up to 34m docs: https://greppr.org/

People rely too much on other people's infra and services, which can be decommissioned anytime. The Google Graveyard is real.



Number of docs isn’t the limiting factor.

I just searched for “stackoverflow” and the first result was this: https://www.perl.com/tags/stackoverflow/

The actual Stackoverflow site was ranked way down, below some weird twitter accounts.


I don't yet weight home pages in any way to bump them up; it's just raw search on keyword relevance.


Google's entire (initial) claim to fame was "PageRank", referring both to the ranking of pages and to co-founder Larry Page. It strongly prioritised a link-based relevance measure over raw keyword matching, which then-popular alternatives such as AltaVista, Yahoo, AskJeeves, Lycos, Infoseek, HotBot, etc., relied on (or over the rather more notorious paid-ranking schemes in which SERP order was effectively sold). When it was first introduced, Google Web Search was absolutely worlds ahead of any competition. I remember this well, having used the others previously and adopted Google quite early (1998/99).

Even with PageRank, result prioritisation is highly subject to gaming. Raw keyword search is far more so (keyword stuffing and other shenanigans), and increasingly so as any given search engine becomes popular and catches the attention of publishers.

Google now applies additional ordering factors as well, and of course has come to dominate SERP results with paid, advertised listings, which are all but impossible to discern from "organic" search results.

(I've not used Google Web Search as my primary tool for well over a decade, and probably only run a few searches per month. DDG is my primary, though I'll look at a few others including Kagi and Marginalia, though those rarely.)

<https://en.wikipedia.org/wiki/PageRank>

"The anatomy of a large-scale hypertextual Web search engine" (1998) <http://infolab.stanford.edu/pub/papers/google.pdf> (PDF)

Early (1990s) search engines: <https://en.wikipedia.org/wiki/Search_engine#1990s:_Birth_of_...>.


PageRank was an innovative idea in the early days of the Internet when trust was high, but yes it's absolutely gamed now and I would be surprised if Google still relies on it.

Fair play to them though, it enabled them to build a massive business.


Anchor text information is arguably a better source for relevance ranking in my experience.

I publish exports of the ones Marginalia is aware of[1] if you want to play with integrating them.

[1] https://downloads.marginalia.nu/exports/ grab 'atags-25-04-20.parquet'
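
For anyone who wants to play with that export, a minimal sketch for peeking at it (it assumes only that the file is valid Parquet; the column names aren't documented in this thread, so the code prints the schema first):

```python
# Minimal sketch for inspecting the Marginalia anchor-text export.
# Assumes only that the file is a valid Parquet file; the column names
# are unknown here, so we look at the schema before touching the data.
import pandas as pd

df = pd.read_parquet("atags-25-04-20.parquet")  # filename from the comment above
print(df.columns.tolist())   # discover the actual column names
print(df.head())             # peek at a few rows
```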


Though I'd think that you'd want to weight unaffiliated sites' anchor text to a given URL much higher than an affiliated site's.

"Affiliation" is a tricky term itself. Content farms were popular in the aughts (though they seem to have largely subsided), firms such as Claria and Gator. There are chumboxes (Outbrain, Taboola), and of course affiliate links (e.g., to Amazon or other shopping sites). SEO manipulation is its own whole universe.

(I'm sure you know far more about this than I do, I'm mostly talking at other readers, and maybe hoping to glean some more wisdom from you ;-)


Oh yeah, there's definitely room for improvement in that general direction. Indexing anchor texts is much better than page rank, but in isolation, it's not sufficient.

I've also seen some benefit in fingerprinting the network traffic websites generate, using a headless browser to identify which ad networks they load. Very few spam sites have no ads, since there wouldn't be any economy in that.

e.g. https://marginalia-search.com/site/www.salon.com?view=traffi...
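
A rough sketch of that kind of fingerprinting, assuming a headless-browser setup with Playwright and a hand-picked list of ad-network domains (neither is necessarily what Marginalia actually uses):

```python
# Illustrative sketch only: load a page in a headless browser and note
# which known ad-network domains it makes requests to. The domain list
# and the use of Playwright are assumptions, not Marginalia's setup.
# Needs: pip install playwright && playwright install chromium
from playwright.sync_api import sync_playwright

AD_NETWORKS = {"doubleclick.net", "googlesyndication.com",
               "taboola.com", "outbrain.com"}

def ad_networks_loaded(url: str) -> set[str]:
    hits: set[str] = set()
    with sync_playwright() as p:
        browser = p.chromium.launch()
        page = browser.new_page()
        # Record every outbound request and match it against the list.
        page.on("request", lambda req: hits.update(
            d for d in AD_NETWORKS if d in req.url))
        page.goto(url, wait_until="networkidle")
        browser.close()
    return hits

# A site loading none of these is a weak "probably not ad-driven spam" signal.
print(ad_networks_loaded("https://example.com"))
```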

The full data set of DOM samples plus recorded network traffic is in an enormous SQLite file (400GB+), and I haven't yet worked out a way of distributing the data. Though it's in the back of my mind as something I'd like to solve.


Oh, that is clever!

I'd also suspect that there are networks / links which are more likely signs of low-value content than others. Off the top of my head, crypto, MLM, known scam/fraud sites, and perhaps share links to certain social networks might be negative indicators.


You can actually identify clusters of websites based on the cosine similarity of their outbound links. Pretty useful for identifying content farms spanning multiple websites.
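
As a toy illustration of that clustering signal (the sites and links below are made up):

```python
# Toy sketch: compare sites by the cosine similarity of their outbound
# link profiles. The site/link data below is made up for illustration.
import math
from collections import Counter

outbound = {
    "siteA.example": ["cdn.example", "shop.example", "blogring.example"],
    "siteB.example": ["cdn.example", "shop.example", "blogring.example"],
    "siteC.example": ["wikipedia.org", "archive.org"],
}

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[k] * b[k] for k in a.keys() & b.keys())
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

vecs = {site: Counter(links) for site, links in outbound.items()}
# Near-identical outbound profiles (siteA vs siteB) score close to 1.0,
# a hint that the two sites may belong to the same content farm.
print(cosine(vecs["siteA.example"], vecs["siteB.example"]))
print(cosine(vecs["siteA.example"], vecs["siteC.example"]))
```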

Have a lil' data explorer for this: https://explore2.marginalia.nu/

Quite a lot of dead links in the dataset, but it's still useful.


Very interesting, and it is very kind of you to share your data like that. Will review!


Google’s biggest search signal now is aggregate behavioral data reported from Chrome. That pervasive behavioral surveillance is the main reason Apple has never allowed a native Chrome app on iOS.

It’s also why it is so hard to compete with Google. You guys are talking about techniques for analyzing the corpus of the search index. Google does that and has a direct view into how millions of people interact with it.


> That pervasive behavioral surveillance is the main reason Apple has never allowed a native Chrome app on iOS

The Chrome iOS app still knows every url visited, duration, scroll depth, etc.


Yes indeed, they have an impossibly deep moat and deeper pockets. I'm certainly not trying to compete with them with my little side project, it's just for fun!


> That pervasive behavioral surveillance is the main reason Apple has never allowed a native Chrome app on iOS.

There is a native Chrome app on iOS. It gets all the same url visit data as Chrome on other platforms.

Apple blocks 3rd party renderers and JS engines on iOS to protect its App Store from competition that might deliver software and content through other channels that they don't take a cut of.


Sure, but the point is results are not relevant at all?

It’s cool though, and really fast


I'll work on that adjustment; it's fair feedback, thanks!


Unfortunately this is the bulk of search engine work. Recursive scraping is easy in comparison, even with CAPTCHA bypassing. You either limit the index to only highly relevant sites (as Marginalia does) or you must work very hard to separate the spam from the ham. And spam in one search may be ham in another.


I limit it to highly relevant curated seed sites, and don't allow public submissions. I'd rather have a small high-quality index.

You are absolutely right, it is the hardest part!


What do you mean they're not relevant? The top result you linked contained the word stackoverflow didn't it? It's showing you exactly what you searched for. Why would you need a search engine at all if you already know the name of the thing? Just type stackoverflow.com into your address bar.

I feel like Google-style "search" has made people really dumb and unable to help themselves.


The query is just to highlight that relevance is a complex topic. Few people would consider "perl blog posts from 2016 that have the stack overflow tag" as the most relevant result for that query.


Confluence search does this, for our intranet. As a result it's barely usable.

Indexing is a nice compact CS problem; not completely simple for huge datasets like the entire internet, but well-formed. Ranking is the thing that makes a search engine valuable. Especially when faced with people trying to game it with SEO.
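
To make that split concrete, here is a deliberately minimal inverted-index sketch: the lookup is the "compact CS problem" half, while deciding the order of the matches is the part that makes an engine valuable.

```python
# Deliberately minimal sketch: an inverted index mapping each token to
# the documents containing it. Documents here are made up.
from collections import defaultdict

docs = {
    1: "perl blog posts tagged stackoverflow",
    2: "stackoverflow questions and answers",
}

index: dict[str, set[int]] = defaultdict(set)
for doc_id, text in docs.items():
    for token in text.lower().split():
        index[token].add(doc_id)

# Lookup is trivial; deciding which of the matching docs to show first
# (ranking) is where the real difficulty lives.
print(sorted(index["stackoverflow"]))   # -> [1, 2]
```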


This is pretty cool. Don't let the naysayers stop you. Taking a stab at beating Google at their core product is bravery in my book. The best of luck to you!


Thank you kindly! It's just for fun.


> it’s just for fun.

amazing, for real.

everything I've read and heard about the good Internet is that it was good because sooooo many of the people did stuff for exactly that: fun.

I've spent some time reading through some of the old email lists from earlier Internet folks; they predicted exactly what we've turned this into. Reading the resistance against the early adoption of cookies, it's incredible to see how prescient some of those people were. Truly incredible.

Keep having fun with it. I think it's our only way out of whatever this thing is we have now.


Couldn't agree more! The early pioneers of the Internet were hackers and tinkerers; I've tried to maintain the same ethos.


That's super cool! Do you have any plans to commercialize it, or is it just a pet project?


Pet project just for fun, thanks!


Lol, a GooglePlus URL was mentioned on a webpage I browsed this week. #blastFromThePast


I still remember their circles interface ;-)


I tested it using a local keyword, as I normally do, and it took me to a Wikipedia page I didn’t know existed. So thanks for that.


It will throw up weird and interesting results sometimes ;-)


Thanks for sharing, this is really impressive.

Can you talk a bit about your stack? The about page mentions grep but I'd assume it's a bit more complex than having a large volume and running grep over it ;)

Is it some sort of custom database or did you keep it simple? Do you also run a crawler?


I use a huge Lucene index for storage and search, with a custom crawler that I wrote myself. It's a fun engineering problem.
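
For readers unfamiliar with the moving parts, here is a minimal crawler sketch (not the author's actual crawler; a real one would also respect robots.txt, rate limits, and deduplication):

```python
# Illustration only, not the author's crawler: fetch a page, pull out
# its links, and keep a frontier of URLs still to visit.
from collections import deque
from urllib.parse import urljoin

import requests
from bs4 import BeautifulSoup

def crawl(seed: str, limit: int = 50) -> set[str]:
    seen, frontier = set(), deque([seed])
    while frontier and len(seen) < limit:
        url = frontier.popleft()
        if url in seen:
            continue
        seen.add(url)
        try:
            html = requests.get(url, timeout=10).text
        except requests.RequestException:
            continue  # dead link or timeout; move on
        for a in BeautifulSoup(html, "html.parser").find_all("a", href=True):
            frontier.append(urljoin(url, a["href"]))
    return seen

print(len(crawl("https://example.com")))
```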


Unfortunately the index is the easy part. Transforming user input into a series of tokens which get used to rank possible matches and return the top N, based on likely relevance, is the hard part, and I'm afraid this doesn't appear to do an acceptable job with any of the queries I tested.
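
To illustrate the step being described, a hedged sketch of query handling: tokenize the input, score candidate documents, return the top N. A TF-IDF-style score stands in for "likely relevance" here; real engines layer many more signals on top.

```python
# Sketch of query handling with a TF-IDF-style score. Documents are
# made up; production engines use far richer relevance signals.
import math
from collections import Counter

docs = {
    "https://stackoverflow.com": "stackoverflow questions answers programming",
    "https://www.perl.com/tags/stackoverflow/": "perl blog posts tagged stackoverflow",
}
tokenized = {url: Counter(text.lower().split()) for url, text in docs.items()}
N = len(docs)

def idf(token: str) -> float:
    df = sum(1 for tf in tokenized.values() if token in tf)
    return math.log((N + 1) / (df + 1)) + 1.0

def score(query: str, tf: Counter) -> float:
    return sum(tf[t] * idf(t) for t in query.lower().split())

def top_n(query: str, n: int = 10) -> list[str]:
    ranked = sorted(tokenized, key=lambda u: score(query, tokenized[u]), reverse=True)
    return ranked[:n]

# For the query "stackoverflow" both docs tie on raw term statistics,
# which is exactly why extra signals (home-page boost, anchor text,
# popularity) end up deciding the ordering.
print(top_n("stackoverflow"))
```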

There's a reason Google became so popular as quickly as it did. It's even harder to compete in this space nowadays, as the volume of junk and SEO spam is many orders of magnitude worse as a percentage of the corpus than it was back then.


I am definitely not trying to compete with Google; instead I am offering an old-school "just search" engine with no tracking, personalization filtering, or AI.

It's driven by my own personal nostalgia for the early Internet, and to find interesting hidden corners of the Internet that are becoming increasingly hard to find on Google after you wade through all of the sponsored results and spam in the first few pages...


There may be a free CS course out there that teaches how to implement a simplified version of Google's PageRank. It's essentially just the recursive idea that a page is important if important pages link to it. The original paper for it is a good read, too. Curiously, it took me forever to find the unaltered version of the paper that includes Appendix A: Advertising and Mixed Motives, explaining how any search engine with an ad-based business model will inherently be biased against the needs of its users. [0]

[0] https://www.site.uottawa.ca/~stan/csi5389/readings/google.pd...
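
That recursive idea compresses into a few lines of power iteration; a toy sketch over a made-up link graph:

```python
# Toy power-iteration PageRank over a made-up link graph: a page is
# important if important pages link to it.
links = {
    "a": ["b", "c"],
    "b": ["c"],
    "c": ["a"],
    "d": ["c"],
}
damping = 0.85
pages = list(links)
rank = {p: 1.0 / len(pages) for p in pages}

for _ in range(50):
    new = {p: (1 - damping) / len(pages) for p in pages}
    for page, outlinks in links.items():
        share = rank[page] / len(outlinks)
        for target in outlinks:
            new[target] += damping * share
    rank = new

# "c" is linked to by everyone else and ends up with the highest score.
print(sorted(rank.items(), key=lambda kv: -kv[1]))
```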


Nice find, will review!


The input on the results page doesn't work; you always need to return to the start page, on which the browser history is disabled. That's just confusing behaviour.


I guess you used the return key instead of clicking on the search icon? Seems to be a bug with the return key; I'll fix that this weekend, sorry.


True, it didn't occur to me that I should click on the icon instead. Once I have clicked on the search icon once, enter also works. When I input a short query (a single letter) it sometimes just shows a blank page, but maybe that is just HN's hug of death. Consider putting the query term more prominently at the front of the URL, so users can edit it. Also, from the start page, the URL in the URL bar isn't updated. As I already wrote, the browser shows completion for the search bar on the result page, but does not for the one on the start page. For my taste I would prefer less JS trickery, which would maybe already get rid of some of these issues.


Appreciate the detailed feedback! A lot of the JS trickery and URL shenanigans I'm doing is to prevent bot spam attempts, which was a real problem in the beginning.


Sad state the web is in.

Is it intended that the page currently shows a link to the WordPress login?


It does not use WordPress.


I'm sorry, I am dumb and visited http://grepper.org/. Where does your name come from? I guess from grep for the WWW?


Yes correct, that is where the name comes from.


I also made something for my own search needs. It's just an SQLite table of domains and places. I have your search engine in there too ;-)

https://github.com/rumca-js/Internet-Places-Database

Demo for most important ones https://rumca-js.github.io/search


Thank you, will check it out!


You should consider filtering by input language. Showing the same Wikipedia article in different languages is not helpful when I am searching in English. Also, you may want to unify entries by URL: it shows the same URL, just with different publish dates, which is interesting and might be useful, but should maybe be behind a toggle, as it is confusing at first.


Great feedback, agree I need to filter here. Some website localization is very hard to work around, because sites will try to geo-locate the IP address of your bot and redirect it to a given language accordingly...


The issue I was having was with the query "term+wikipedia": it then shows the Wikipedia article in Czech, Hungarian, Russian, some kind of Arabic, and others before finally showing the English version. A lot of those also occur 2, 3, 4+ times with the same URL, just differing in crawl time by a few minutes.


It's a difficult problem to fix: you can set an Accept-Language header on crawl requests, but this only works if the target website uses "Content Negotiation." Some sites ignore headers and determine language based on the IP address (Geo-IP) or the URL structure (e.g., /es/ vs /en/); it's basically a mess...


I don't get the problem you claim. You crawl something and get a document in whatever language the site delivers to you. You know the language of that document from the lang=... attribute of the document. What results you show for a given language is under your control and not influenced by what the crawled site chose to serve to the crawler.


I'm working on the language improvements presently, but I need to clean out a lot of bad entries in my index. In essence, what I am trying to say is that many servers ignore "Accept-Language", so you have to rely on other means of detecting the language of the page reliably, e.g. inspecting the body content of the response. It's a non-trivial problem online.
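
One possible approach (not necessarily what greppr does): trust a declared <html lang=...> attribute when present, and fall back to guessing from the body text, e.g. with the langdetect library:

```python
# Sketch only: prefer the declared <html lang=...> attribute, otherwise
# guess the language from the visible text, since servers often ignore
# Accept-Language. Needs: pip install beautifulsoup4 langdetect
from bs4 import BeautifulSoup
from langdetect import detect

def page_language(html: str) -> str | None:
    soup = BeautifulSoup(html, "html.parser")
    tag = soup.find("html")
    declared = tag.get("lang") if tag else None
    if declared:
        return declared.split("-")[0].lower()  # "en-US" -> "en"
    text = soup.get_text(" ", strip=True)
    return detect(text) if text else None

print(page_language("<html lang='cs'><body>Ahoj světe</body></html>"))  # -> cs
```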


So html lang=... is wrong, or doesn't exist?

> I am trying to say is many servers ignore "Accept-Language"

I wouldn't have expected that to be a hard rule, more like a factor: if there are multiple pages to return, it helps decide which one the user most likely wants.


This is mad but cool. Keep at it.


Thanks, mad is fun for me! It costs me nothing if it fails.



