nurettin's comments | Hacker News

How do you express |a - b| ?

    (bu tam-sayının) mutlak-değeri, (bunun 0'dan büyüklüğü) doğruysa, bu, değilse, bunun -1'le çarpımıdır.

    (5'in mutlak-değerini) yaz.

    (-5'in mutlak-değerini) yaz.

Roughly, in English: "the absolute-value of (this integer) is: if (its being greater than 0) holds, this; otherwise, the product of this with -1. Print (the absolute-value of 5). Print (the absolute-value of -5)."

You can try it in the playground.
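For non-Turkish speakers, here's a rough Python equivalent of the snippet above (my own sketch, not output from the language's tooling):

    # "the absolute-value of (this integer) is: if (it is greater than 0), this,
    # otherwise the product of this with -1"
    def absolute_value(n):
        return n if n > 0 else n * -1

    print(absolute_value(5))   # 5
    print(absolute_value(-5))  # 5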


Why not plan for a future where a lot of non-trivial tasks are automated instead of living on the edge with all this anxiety?

[flagged]


come out of the irony layer for a second -- what do you believe about LLMs?

I mean... LLMs hit a pretty hard wall a while ago, and the only solution has been throwing monstrous compute at eking out the last few percent of improvement (in the real world, not on benchmarks). That's not to mention hallucinations / false paths being a foundational problem.

LLMs will continue to get slightly better over the next few years, but mainly a lot more efficient, which will also mean better and better local models. Grounding might improve too, but that just means fewer wrong answers, not better right answers.

So no need for doomerism. The people saying LLMs are a few years away from eating the world are either in on the con or unaware.


If all of it is going away and you should deny reality, what does everything else you wrote even mean?

Yes, it is simply impossible that anyone could look at things, do their own evaluations, and come to a different, much more skeptical conclusion.

The only possible explanation is people say things they don't believe out of FUD. Literally the only one.


I love that they don't just work on the edges and declare their work complete. No, shadows also have to be perfect!

Reminds me of this underrated library, which uses the braille alphabet to draw lines. Behold:

https://github.com/tammoippen/plotille

It's a really nice plotting tool for the terminal. For me it increases the utility of LLMs.
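Roughly like this (a minimal sketch from memory; check the repo's README for the exact signatures):

    import math
    import plotille

    xs = [i / 10 for i in range(100)]
    ys = [math.sin(x) for x in xs]

    # plotille renders the chart as a string of braille characters
    print(plotille.plot(xs, ys, height=20, width=60))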


Start by greeting people that you pass by often. Don't be shy to engage in small talk. Trust society and it gets better over time.

You mean you discovered parallel arrays?

Specifically, I've discovered how to 'trick' mainstream cloud storage and mainstream query engines, using mainstream table formats, into reading parallel arrays that are stored outside the table, without a classic join, and treating them as new columns or schema evolution. It'll work on Spark, BigQuery, etc.

What's a good place to see parallel arrays defined? I have no data lake experience; I know how relational DBs work.

I mean,

    # Two "parallel arrays": rows correspond by position, so no join key is needed.
    Table1 = {"col1": [1, 2, 3]}
    Table2 = {"epiphany": [1, 1, 1]}
    for i, r in enumerate(Table1["col1"]):
        print(r, Table2["epiphany"][i])  # zip the tables by index instead of joining

He's really happy he found this (edit: actually it seems Chang She talked about this while discussing the Lance data format[1]@12:00 at a conference in 2024, calling it "the fourth way") and will present it at a conference.

[1] https://youtu.be/9O2pfXkCDmU?si=IheQl6rAiB852elv


Seriously, this is not what big data does today. Distributed query engines don't have the primitives to zip two tables together and treat them as column groups of the same wider logical table. There's a new kid on the block called LanceDB that has some of the same features but is aiming at different use-cases. My trick retrofits vertical partitioning onto mainstream data lake stacks. It's generic and works on the tech stack my company uses, but would also work on all the mainstream alternatives. Slightly slower on AWS. But anyway. I guess HN just wants to see an industrial-track paper.
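To picture the end result, here's a rough in-memory sketch with pyarrow of what "two tables treated as column groups of one wider logical table" means. The file names are made up, and this is only the analogy, not the actual table-format trick:

    import pyarrow as pa
    import pyarrow.parquet as pq

    # Two separately stored files whose rows line up by position (parallel arrays).
    base = pq.read_table("base_table.parquet")      # hypothetical base columns
    extra = pq.read_table("extra_columns.parquet")  # hypothetical extra columns, same row order

    # Treat them as column groups of one wider table: zip by position, no join key.
    # Assumes both files have the same number of rows in the same order.
    wide = pa.table({
        **{name: base[name] for name in base.column_names},
        **{name: extra[name] for name in extra.column_names},
    })
    print(wide.schema)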

Why a paper? A repo should do the trick.

That code is for in-memory data, right? I see no storage access.

What is really happening? Are these streaming off two servers and zipped into one? Is this just columnar storage, or something else?


OT: The time between releasing a free Rubik's Cube program to the Play Store and receiving a cease & desist has always impressed me.

My 10-year-old has been building this website using Google Sites for a year now; he collects interesting/fun/functional links.

https://awebsite.space


Forgot that it needs www

https://www.awebsite.space/


Yes! GCC/OpenMP in general solved a lot of the problems that are conveniently left out of the article.

Then we have the anecdotal "they failed Firefox layout in C++ twice, then did it in Rust"; to this I sigh in Chrome.


The Rust version of this is "turn .iter() into .par_iter()."

It's also true that for both, it's not always as easy as "just make the for loop parallel." Stylo is significantly more complex than that.

> to this I sigh in chrome.

I'm actually a Chrome user. Does Chrome do what Stylo does? I didn't think it did, but I also haven't really paid attention to the internals of any browsers in the last few years.


And the C++ version is adding std::execution::par_unseq as a parameter to the ranges algorithm.

This has the same drawbacks as "#pragma omp for".

The hard part isn't splitting loop iterations between threads, but doing so _safely_.

Proving that an arbitrary loop's iterations are split in a memory-safe way is an NP-hard problem in C and C++, but it's the default behavior in Rust.


Well, if you are accessing global data with ranges, you are doing it wrong.

Naturally, nothing in C++ prevents someone from doing that, which is why PVS, Sonar, and co. exist.

Just like some things aren't prevented by Rust itself, but rather by clippy.


Concurrency is easy by default. The hard part is when you are trying to be clever.

You write concurrent code in Rust pretty much in the same way as you would write it in OpenMP, but with some extra syntax. Rust catches some mistakes automatically, but it also forces you to do some extra work. For example, you often have to wrap shared data in Arc when you convert single-threaded code to use multiple threads. And some common patterns are not easily available due to the limited ownership model. For example, you can't get mutable references to items in a shared container by thread id or loop iteration.


> For example, you can't get mutable references to items in a shared container by thread id or loop iteration.

This would be a good candidate for a specialised container that internally uses unsafe. Well, for thread id at least: since the user of the API doesn't provide it, you could mark the API safe, because you wouldn't have to worry about incorrect inputs.

Loop iteration would be an input to the API, so you'd mark the API unsafe.


There’s split_at_mut to avoid writing unsafe yourself in this case.

AFAIK it does all styling and layout on the main thread and offloads drawing instructions to other threads (CompositorTileWorker), and it works fine?

That does sound like Chrome has either failed to make styling multithreaded in C++ or hasn't attempted it, while it was achieved in Rust?

It has pretty graphics.

Satellites don't work because the Iranian government is broadcasting gibberish, causing satellite connections to drop.
