You have to understand that the people in the article are execs from the chip EDA (Electronic Design Automation) industry. It's full of dinosaurs who have resisted innovation for the past 30 years. Of course they're going to be blowing hot air about how they're "embracing AI". It's a threat to their business model.
I'm a little biased though since I work in chip design and I maintain an open source EDA project.
I agree with their take for the most part, but it's really nothing insightful or different than what people have been saying for a while now.
It’s in software too. Old guard leadership wanting “AI” as a badge but not knowing what to do with it. They are just sprinkling it into their processes and exfiltrating data while engineers continue to make a mess of things.
Contrast that with real AI projects that use it for workflows or for building models that actually do a thing. Nope, these folks are taking a Jira ticket, asking Copilot, reviewing Copilot's output, and responding to the Jira ticket. They're all ripe for automation.
Kind of a wild idea, but have you considered using this as a markup language for logic diagrams? I'm thinking something like mermaid (https://mermaid.js.org/). While this might not be super useful for chip design, it is a fully functional HDL, and since it is gate level, it would map nicely to diagrams.
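To make the idea concrete, here's a rough sketch of the kind of mapping I have in mind. The netlist shape is completely made up (I don't know your actual format); the point is just that gates and wires translate almost one-to-one into mermaid nodes and edges:

    // Hypothetical netlist shape, NOT the project's real format.
    type Gate = { id: string; kind: "AND" | "OR" | "NOT"; inputs: string[] };

    // Emit a mermaid flowchart: one node per gate, one edge per driving wire.
    function toMermaid(gates: Gate[]): string {
      const lines = ["flowchart LR"];
      for (const g of gates) {
        lines.push(`  ${g.id}[${g.kind}]`);    // node labeled with the gate type
        for (const src of g.inputs) {
          lines.push(`  ${src} --> ${g.id}`);  // wire from driver to gate
        }
      }
      return lines.join("\n");
    }

    // Example: toMermaid([{ id: "g1", kind: "AND", inputs: ["a", "b"] }])
    // => "flowchart LR\n  g1[AND]\n  a --> g1\n  b --> g1"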
That is a very interesting idea! Tbh, I have been thinking about something along those lines. I was messing around with gemini 3.0 back when it came out and made this program called Logic Lab (https://logiclab-227111532364.us-west1.run.app/). I was thinking of exporting/importing the components as SHDL, but as of right now they are just exported in some generic format gemini made.
> every mention in a package.json or requirements.txt
OK, what about those of us who aren't writing libraries?
As a personal anecdote, the opportunities that my open source project has opened up for me are worth way more than any $1 per mention or user.
As much as I love Zed, I am of the belief that VScode (and its derivatives) will remain the dominant build-your-own IDE for a really long time unless something like Zed can support web based extensions. I created a VScode extension for chip designers and I would love to port it to Zed, but I can't because it's a visualization extension with a custom webview.
I realize the irony here that Zed is fast because it's not web based, but I stand by my claim that being able to optionally display web UIs would be a really cool feature to have. It would open the door to a lot of extensions.
Dating apps are a Skinner box by nature. They give randomized rewards in the form of likes and matches. If you're attractive, you're the product, because you don't need the premium service to get more dates.
Give me Yelp for date spots and take a cut of the ad revenue. That way, there's at least an incentive to get people to not ghost each other long enough to actually meet up for a date. Hopefully that would do something to incentivize human connection.
You don't even have to be attractive. There is some dude in a slum in Manila right now smoking a hand-rolled cigarette between his unkempt bearded lips, using his wrinkled face and cataract-ridden eyes to squint at a screen to scam the next guy, with his keyboard propped up on his oversized tummy. Meanwhile he has an image of a beautiful 20-year-old Texas country girl on his profile and maybe even a Russian hooker on call for when he needs to make bait videos. And he probably gets 100x the interest of your 'legitimate' average male user.
Another one is custom CSS (though it doesn't look like positron does that). You can change styling properties like spacing and weighting as well.
I have a love/hate relationship with the VScode webview panels; the message handler is not my favorite implementation in the world. I would love a way to send binary data and to get the editor's semantic token colors.
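For context, here's a rough sketch of what the round trip looks like in my extension today (hand-waving the panel setup). The base64 dance is the workaround I mean, and the semantic token colors just aren't reachable from the webview at all as far as I can tell:

    // Extension host side (sketch): postMessage wants JSON-ish data,
    // so binary payloads get smuggled through as base64 strings.
    import * as vscode from "vscode";

    function sendWaveform(panel: vscode.WebviewPanel, bytes: Uint8Array) {
      panel.webview.postMessage({
        type: "waveform",
        data: Buffer.from(bytes).toString("base64"), // the workaround in question
      });
    }

    // Webview side (inside the panel's <script>):
    //   const vscodeApi = acquireVsCodeApi();
    //   window.addEventListener("message", (e) => {
    //     if (e.data.type === "waveform") {
    //       const bytes = Uint8Array.from(atob(e.data.data), (c) => c.charCodeAt(0));
    //       // ...render it, with no way to ask for the editor's semantic token colors
    //     }
    //   });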
The only issue is that when you have a custom build of VScode, you have to manage a fork of VScode, and potentially pull in updates as VScode updates. How do you manage that?
Unfortunately socketed processors only really work with DDRx type DRAM interfaces. GPUs use GDDR and HBM interfaces, which are not ideal for sockets. In the case of HBM, you have 1024 data traces per DRAM chip, which would make the socket have an insane number of pins. GDDR has fewer pins, but makes up for it with higher data rates (32 pins at 16Gb/s) and that is impractical to use in a socket due to the variance in contact area resulting in impedance matching issues.
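Some back-of-the-envelope numbers (per-pin rates are ballpark and vary by generation), just to show that the trade-off is pin count versus per-pin speed:

    // Rough, generation-dependent figures; not exact specs.
    const gddr6 = { dataPins: 32, gbitPerPin: 16 };   // one GDDR6 device
    const hbm2  = { dataPins: 1024, gbitPerPin: 2 };  // one HBM2-class stack (approx.)

    const perDeviceGBps = (m: { dataPins: number; gbitPerPin: number }) =>
      (m.dataPins * m.gbitPerPin) / 8; // total Gbit/s across the bus -> GB/s

    console.log(perDeviceGBps(gddr6)); // ~64 GB/s out of just 32 very touchy pins
    console.log(perDeviceGBps(hbm2));  // ~256 GB/s, but over 1024 traces per stack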
Love to see this at the top of HN! I haven't written anything with this language yet, but I have met some of its developers. They're pretty great, and they're doing a lot of really good work in the open source hardware community. Another project they maintain is Surfer: https://surfer-project.org/
The challenge of a HDL over a regular sequential programming (software) language is that a software language is programmed in time, whereas a HDL is programmed in both space and time. As one HDL theory expert once told me "Too many high level HDLs try to abstract out time, when what they really need to do is expose time."
> The challenge of a HDL over a regular sequential programming (software) language is that a software language is programmed in time, whereas a HDL is programmed in both space and time. As one HDL theory expert once told me "Too many high level HDLs try to abstract out time, when what they really need to do is expose time."
That's an excellent quote, I might steal it :D In general, I think good abstractions are the ones that make important details explicit rather than ones that hide "uninteresting" details.
> As one HDL theory expert once told me "Too many high level HDLs try to abstract out time, when what they really need to do is expose time."
Exactly! It's astounding how often the documentation of some vendor component says: "data_out is valid 2 cycles after read_enable is asserted", and NOTHING in the actual module definition mentions this. There's so much dumb and error-prone mental arithmetic designers have to do to synchronize such latencies.
Spade does make a nod to this with its pipelining notation. The issue I have with it is that it takes too simplistic an approach to such port timings. In a Spade pipeline you separate "pipeline stages" by adding a "reg;" statement on its own line (an approach shared by another language, TL-Verilog). A consequence of this style is that all inputs arrive at the same time (say cycle 0) and all results are produced at a second time (say cycle 5), irrespective of whether an input is actually only ever needed in a final addition in cycle 4. It'll insert the 4 extra registers regardless. Likewise, it leads to unnatural expression of subpipelines, where syntactically you can already see a value but can only _access_ it 3 pipeline stages later.
With SUS, I have a solution to this: Latency Counting. (Intro here: https://m.youtube.com/watch?v=jJvtZvcimyM&t=937). A similar philosophy is also followed by Filament, though they go a step further with adding validity intervals too.
The gist of Latency Counting is that instead of explicitly marking and naming "pipeline stages", you annotate a statement to say that it "takes one cycle", and by exploring the dependency graph between your wires, the compiler assigns a unique "absolute latency" to every wire and places registers accordingly. (And now it can even infer submodule parameters based on this pipelining, in a bid to do HLS-style design in an RTL language.)
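If it helps to picture it, the core assignment step is conceptually just a longest-path pass over the wire dependency graph. A toy sketch (TypeScript for illustration only, nothing SUS actually uses, and skipping parameter inference and all the interesting corner cases):

    // Edges carry 0 cycles (combinational) or 1 cycle (a "takes one cycle" annotation).
    type Edge = { from: string; to: string; cycles: 0 | 1 };

    // Each wire's absolute latency = longest annotated path from the inputs.
    function absoluteLatencies(wires: string[], edges: Edge[]): Map<string, number> {
      const latency = new Map<string, number>();
      for (const w of wires) latency.set(w, 0);
      // Repeated relaxation; good enough for a DAG-sized sketch.
      for (let i = 0; i < wires.length; i++) {
        for (const e of edges) {
          const candidate = latency.get(e.from)! + e.cycles;
          if (candidate > latency.get(e.to)!) latency.set(e.to, candidate);
        }
      }
      // Registers then go wherever a wire is consumed at a later latency than it was produced.
      return latency;
    }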
It needs dependent types to encode when signals are valid.
I have looked at TL-Verilog. I love the language, but I'm on the fence about the syntax, which arguably is an almost inconsequential nit given how far languages and tools still need to progress.
Not exactly. The name of the game with GDDR memory is "speed on the cheap." To do this, it uses a parallel bus with data rates pushed to the max. Not much headroom for things that could compromise signal integrity like socketed parts, or even board traces longer than they absolutely need to be. That's why the DRAM modules are close to the GPU and they're always soldered down.
Also, the latency with GDDR7 is pretty terrible. It uses PAM3 signaling with a cursed packet encoding scheme. At least they were nice enough to add in a static data scrambler this time around! The lack of RLL was kind of a pain in GDDR6.
Recently, I've had this pet theory that there's a brilliance/stupidity spectrum, but if you go too far in one direction it loops back on itself. Some things are just so stupid that they're brilliant. I really like this!
Also, I should clarify that "brilliance" and "stupidity" in this theory are not raw intelligence, but the application of said intelligence.