Same. New ideas are like starting a fire: pile too much on top or blow too hard and it goes out. You (together, however distributed across roles) do have to assess whether you can handle one more fire, whether it comes on top of the others, replaces an old one, etc. Getting to that decision in your specific setup is the tough and important part.
10x people can be like one-shot LLMs: your request is almost certainly wildly underspecified, and what you get back is 90% determined by the "smoothing term" applied by someone who isn't you. This is why the right amount and frequency of iteration is needed.
Libraries differ in implementation, and compilers have freedom to reorder floating-point operations. William Kahan was always furious that the kids don't get it (a nice The Tragically Hip song, btw); his rants in his papers are actually entertaining to read, and afterwards you are one of the lucky 10,000.
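To make the point concrete, here's a minimal demonstration of why evaluation order matters: floating-point addition is not associative, so a library or an optimizing compiler that regroups a sum can legitimately produce a different result.

```python
# Floating-point addition is not associative: regrouping the same sum
# gives different results, because the small term can be absorbed and
# lost when added to a much larger one first.
a, b, c = 1e16, -1e16, 1.0

left = (a + b) + c   # the huge terms cancel first, so c survives: 1.0
right = a + (b + c)  # c is absorbed into b and lost: 0.0
print(left, right)
```

This is exactly the freedom that makes results differ across libraries and optimization levels.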
The first time someone encounters 2001, they will almost certainly come away with some WTF? vibes, at least if they're being honest with themselves.
For my first time, I made the mistake of renting a VHS and watching it on a 19" TV. Heard this was a good SF movie, guess I'll check it off my list. Yeah, no. What I saw later in a 70mm cinema was the same content, same story, same words and images, but a very different movie. The setting and presentation made all the difference between a seemingly-pointless waste of time and a profound life experience.
That said, what we saw isn't what Kubrick filmed. Bowman's exercise sequence was originally a full 10 minutes long, just pacing around in circles, and a few other sequences including the Dawn of Man prologue were also much longer. Audiences in 1968 weren't buying it. Kubrick had to tighten things up, because complaining about the audience's attention span wasn't the option back then that it apparently is now.
I had an original iPhone SE as a banking backup. Recently the banking app demanded a newer iOS after an update. Now that good old little device that was supposed to save me one day is basically bricked for me.
I'd briefly come across Elk, but couldn't tell how it was better than what I was using. The examples I could find all showed far simpler graphs than what we had, and nothing that seemed to address the problems we had, but maybe I should give it another look, because I've kinda lost faith that dagre is going to do what we need.
If I can explain briefly what our issue is: we've got a really complex graph and need to show it in a way that makes it easy to understand. That by itself might already be a lost cause, but we need it fixed. The problem is that our graph has cycles, and dagre is designed for DAGs (directed acyclic graphs). Fortunately it has a preprocessing step that removes cycles, but it picks the edges to reverse fairly arbitrarily, and that can sometimes dramatically change the shape of the graph by creating unintentional start or end nodes.
I had a way to fix that, but even so, the graph is still really hard to understand. We need to cut it into parts and group nodes together based on shared properties, and that's not something dagre does at all. I'm currently looking into cola with its constraints, but I'll take another look at ELK.
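For readers unfamiliar with the cycle-removal step being discussed: a minimal sketch (not dagre's actual implementation) of DFS-based cycle breaking looks like the following. Edges that point back into the current DFS stack get reversed, and which edges those are depends entirely on where the traversal starts, which is why an arbitrary starting order can invent unintended start or end nodes.

```python
# Minimal sketch of DFS-based cycle breaking: edges that point back into
# the current DFS stack are reversed so the result is acyclic. Starting
# the DFS from nodes known to be "real" sources keeps the heuristic from
# reversing edges arbitrarily.

def break_cycles(nodes, edges, roots=None):
    """Return a new edge list with back edges reversed.

    edges: list of (src, dst) pairs; roots: preferred DFS start nodes.
    """
    adj = {n: [] for n in nodes}
    for s, d in edges:
        adj[s].append(d)

    on_stack, visited, reversed_edges = set(), set(), set()

    def dfs(n):
        visited.add(n)
        on_stack.add(n)
        for m in adj[n]:
            if m in on_stack:              # back edge: would close a cycle
                reversed_edges.add((n, m))
            elif m not in visited:
                dfs(m)
        on_stack.discard(n)

    for n in (roots or []) + list(nodes):  # prefer user-chosen roots
        if n not in visited:
            dfs(n)

    return [(d, s) if (s, d) in reversed_edges else (s, d)
            for s, d in edges]

# A 3-cycle a -> b -> c -> a; starting from 'a' reverses only c -> a.
print(break_cycles(["a", "b", "c"],
                   [("a", "b"), ("b", "c"), ("c", "a")], roots=["a"]))
```

Change `roots` to `["c"]` and a different edge gets reversed, which is the instability being complained about.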
Our graphs are hierarchical, can contain cycles too, and have a bunch of directed subgraphs. We reach 500 nodes with 20k ports and 10k edges, and "getting the graph" is still possible but takes a bit of practice. Cycle breaking is OK-ish for us, because there is a strong asymmetry between the many "forward" edges and the much rarer "backward" edges, which makes the heuristics succeed often.
The biggest improvement for us was deduplication, by using generators and referencing already-emitted objects. Don't just run flatc on a JSON file; it doesn't do that.
Testing is answering "does it do what it is supposed to do?", and autonomous means "according to its own law(s)". Sounds like a contradiction to me. I'd answer "none".
One can define properties the software is supposed to have, then autonomously test for those properties (as in, initiate a process that spends arbitrary amounts of time running new tests to try to show the software fails to have those properties.)
Is this not autonomous because the properties weren't created without humans being involved? How could that even be possible?
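The process described above is essentially property-based testing, and can be sketched in a few lines. The human states a property once; the machine then autonomously generates as many test cases as it likes, trying to falsify it (real tools like Hypothesis or QuickCheck add shrinking and smarter generators on top of this).

```python
import random

def check_property(prop, gen, runs=10_000, seed=0):
    """Run `prop` on `runs` random inputs; return a counterexample or None."""
    rng = random.Random(seed)
    for _ in range(runs):
        x = gen(rng)
        if not prop(x):
            return x          # property falsified by this input
    return None               # no counterexample found (not a proof!)

# Property: reversing a list twice yields the original list.
prop = lambda xs: list(reversed(list(reversed(xs)))) == xs
gen = lambda rng: [rng.randint(-100, 100) for _ in range(rng.randint(0, 20))]
print(check_property(prop, gen))
```

The human's only contribution is the one-line property; everything after that point runs without human involvement, which seems like a reasonable meaning of "autonomous".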