Hacker News | hokkos's comments

Most CVEs now are pure spam with no value; all I get are dev dependencies affected by regexes that could take too long. Scanners should do a better job of differentiating between dependencies and dev dependencies.
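For context, "regex that could take too long" refers to catastrophic backtracking (ReDoS), the issue behind most of these advisories. A minimal illustration of the kind of pattern scanners flag (the pattern here is illustrative, not taken from any specific CVE):

```javascript
// A classic ReDoS-prone pattern: the nested quantifiers in (a+)+ give
// the engine exponentially many ways to split a run of 'a's, so a
// non-matching input like 'aaa...ab' triggers catastrophic backtracking.
const evil = /^(a+)+$/;

// Matching input is cheap:
console.log(evil.test('aaaa')); // true

// A non-matching input of length n backtracks through roughly 2^n
// states; around n = 30 a single .test() call can hang for seconds.
// The input is kept tiny here on purpose so this snippet stays fast.
const attack = 'a'.repeat(10) + 'b';
console.log(evil.test(attack)); // false
```

Whether such a pattern is exploitable from a dev dependency's build script is exactly the dependency-vs-dev-dependency distinction scanners gloss over.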

This AI is too "aligned" to return anything of value, considering the content it has to look into and the questions it needs to answer.

Bring your own key and use Claude. We found that it's most willing to run deep research queries here.

Which model is being used? Is there an unrestricted open weight model that could be used?

What do you mean?

As in, OpenAI, Anthropic, and Google's models won't follow instructions regarding forensics for this?


Who here is naive enough to think that this little loophole hasn't been nicely tied off?

They should use Grok; it feels the most open of the big 4.

If AI is so productive, why do they even sell it instead of hoarding it for themselves to build a competing offering to everything?

No one is claiming that level of productivity.

Oh yes they are. People are claiming 100x improvements, which is completely insane. But they do claim it.

OK sure, there are always lunatics on the fringe, but OP is casting that argument out as if they’re attacking a mainstream opinion.

Looking at the code examples, I don't see the point of JSX; it seems to decrease type safety and typing completion.


I use https://typespec.io to generate OpenAPI; writing OpenAPI YAML by hand quickly became horrible past a few APIs.


Ah yes, see one of my other comments to another reply.

I never got to use it when I last worked with OpenAPI, but it seemed like the antidote to the verbosity. Glad to hear someone had a positive experience with it. I'll definitely try it next time I get the chance.


It's because you only do it once per project.


Some libs literally publish a new package on every merged PR, so multiple times a day.


It reminds me of EXI compression for XML, which can be heavily optimized with an XSD schema: schema-aware compression uses the schema graph for optimal results: https://www.w3.org/TR/exi-primer/


I also have an elegant proof, but it doesn't quite fit in an HN comment.


No support for symbols, amirite?


Whatever you do with XSLT, you can do in a saner way; but for whatever we need serial/Bluetooth/WebGPU/MIDI for, there is no other way, and Canvas is massively used.


I'd love to see more powerful HTML templating that'd be able to handle arbitrary XML or JSON inputs, but until we get that, we'll have to make do with XSLT.

For now, there's no alternative that allows serving an XML file with the raw data from e.g. an embedded microcontroller in a way that renders a full website in the browser if desired.

Even more so if you want to support people downloading the data and viewing it from a local file.


If you're OK with the startup cost of 2-3 more files for the viewer bootstrap, you could just fetch the XML data from the microcontroller using JS. I assume the XSL stylesheet is already a separate file.
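A minimal sketch of that bootstrap, assuming the browser-only DOMParser and XSLTProcessor APIs (the function name and URLs are illustrative, not a real device API):

```javascript
// Browser-only sketch: fetch the device's raw XML plus the stylesheet,
// run the XSLT transform in JS, and render the result into the page.
async function renderDeviceData(xmlUrl, xslUrl) {
  const [xmlText, xslText] = await Promise.all([
    fetch(xmlUrl).then(r => r.text()),
    fetch(xslUrl).then(r => r.text()),
  ]);
  const parser = new DOMParser();
  const processor = new XSLTProcessor();
  // Parse the stylesheet and the data, then transform.
  processor.importStylesheet(parser.parseFromString(xslText, 'application/xml'));
  const fragment = processor.transformToFragment(
    parser.parseFromString(xmlText, 'application/xml'),
    document
  );
  document.body.replaceChildren(fragment);
}

// e.g. renderDeviceData('http://192.168.0.42/data.xml', '/viewer/style.xsl');
```

This keeps the microcontroller serving only raw XML, at the cost of the bootstrap HTML/JS files mentioned above.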


I don't think anyone is attached to the technology of xslt itself, but to the UX it provides.

Your microcontroller only serves the actual xml data, the xslt is served from a different server somewhere else (e.g., the manufacturer's website). You can download the .xml, double-click it, and it'll get the xslt treatment just the same.

In your example, either the microcontroller would have to serve the entire UI to parse and present the data, or you'd have to navigate to the manufacturer's website, input the URL of your microcontroller, and it'd have to do a CORS fetch to process the data.

One option I'd suggest is instead of

    <?xml-stylesheet href="http://example.org/example2.xsl" type="text/xsl" ?>
we'd use a service worker script to process the data:

    <?xml-stylesheet href="http://example.org/example2.js" type="application/javascript" ?>
Service workers are already a natural fit for this kind of resource processing and interception, and it'd provide the same UX.

The service worker would not be associated with any specific origin, but it would still receive the regular lifecycle of events, including a fetch event for every load of an xml document pointing at this specific service worker script.

Using https://developer.mozilla.org/en-US/docs/Web/API/FetchEvent/... it could respond to the XML being loaded with a transformed response, allowing it to process the XML similar to an XSLT.

You could even have a polyfill service worker that loads an XSLT and applies it to the XML.
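A sketch of what such a stylesheet service worker could look like, assuming the hypothetical JS-stylesheet PI dispatch described above (this is a proposal, not a shipping platform feature); transformXml is a toy stand-in for a real transform or an XSLT polyfill:

```javascript
// Toy transform: turn <data><item>...</item></data> into an HTML list.
// A real worker would parse the XML properly or load an XSLT polyfill.
function transformXml(xmlText) {
  const items = [...xmlText.matchAll(/<item>([^<]*)<\/item>/g)].map(m => m[1]);
  return '<ul>' + items.map(i => `<li>${i}</li>`).join('') + '</ul>';
}

// Worker glue: intercept the load of the XML document that pointed at
// this script and answer with the transformed HTML. Only runs in a
// service-worker context, hence the guard.
if (typeof self !== 'undefined' && typeof window === 'undefined'
    && typeof self.addEventListener === 'function') {
  self.addEventListener('fetch', event => {
    event.respondWith(
      fetch(event.request)
        .then(res => res.text())
        .then(xml => new Response(transformXml(xml), {
          headers: { 'Content-Type': 'text/html' },
        }))
    );
  });
}
```

The respondWith call is the existing FetchEvent API; the only new piece the proposal needs is the browser routing the XML document's load through this worker.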


Of course there is a better way than webserial/bluetooth/webgpu/webmidi: Write actual applications instead of eroding the meaning and user expectations of a web browser. The expectation should not be that the browser can access your hardware directly. That is a much more significant risk for browsers than XSLT could ever be.

