JackeJR's comments | Hacker News

For (1) you can prepay, I think, up to 10 years. And every year you just prepay 1 more year, so you will always have 10 years to remember that you forgot to pay a domain registration bill.


Well, a good surveyor would take order effects into account, i.e. the order in which the questions were asked, for example by using Latin square designs.


An N=1 study should not have made it into the headlines.

> Although the research marks a milestone in the search for treatments of type 1 diabetes, it’s important to note that the study involved only one participant, who received a low dose of cells for a short period—not enough for the patient to no longer need to control their blood sugar with injected insulin. An editorial by the journal Nature also says that some independent research groups have failed in their efforts to confirm that Sana’s method provides edited cells with the ability to evade the immune system.


Despite that, glad to see it in a human subject.

I’ve had T1D for more than 30 years and have seen every headline under the sun with a “cure” always sometime in the next 5 years, so my expectations are properly tempered.

Still excited by it but a long way from clinics handing this out as a solution (if it’s viable).


"5 years" is the modern framing; for a long time it used to be "in the next few decades." I've had it for about the same time, and about 10 years ago I stopped following the research altogether since it never goes anywhere. I'll wait until they start late-stage trials before I'm even interested in reading the full report.


The thing is that with such a small sample, we don't really know:

1. Whether the effect is real, i.e. had the patient not been given the injection, would their condition have improved spontaneously?

2. Assuming the effect is real, what circumstances made the treatment work for this particular person?

Not to be overly dismissive of the good work, but it is too early to be optimistic about this. Given the above, and the fact that the results have not been replicated outside of Sana, there is a lot we still need to work out before this becomes a viable treatment for the masses.

The harm of hyping this up is that readers will get their hopes up and then be disappointed when things don't pan out, as happens with most scientific endeavours. Over time, readers learn to distrust anything that is reported, because 90% of it does not translate into real-world impact. It is hard to convey the nuance that "science takes many, many failures and iterations" to the public, and the more likely outcome of this kind of reporting is a general distrust of science when things don't go the way people hoped.


Type 1 diabetes does not spontaneously resolve


And it also has parity built in for file recovery. The alternative would be to use par2 to create parity files.


The parity files are the killer feature for me. Probably 95% of the downloads from Usenet end up needing them.


Just have sane firewall rules and you're good. E.g. if I install openssh-server and it auto-starts, it can't be reached from outside my machine because my nftables rules don't allow inbound traffic on port 22. It's just a matter of knowing the default behaviour and adjusting your practices for it.
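Roughly the kind of default-deny ruleset I mean (a sketch only; table/chain names and details are illustrative, not a drop-in config):

    # sketch: default-deny inbound, so a freshly started sshd is unreachable
    table inet filter {
      chain input {
        type filter hook input priority 0; policy drop;  # drop inbound by default
        iifname "lo" accept                              # allow loopback
        ct state established,related accept              # allow replies to outbound traffic
        # note: no "tcp dport 22 accept" rule here
      }
    }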


That is a workaround for a ridiculous issue.


A sane firewall won't protect you from privilege escalation by a local attacker. While unlikely, this is one more hole that could be exploited.


Debian bundles AppArmor profiles for most services. These prevent an attacker from reaching anything outside the perimeter drawn by the AppArmor profile.
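For a rough idea of what such a profile looks like (the daemon name and paths below are made up for illustration):

    # sketch of a minimal AppArmor profile; anything not allowed is denied in enforce mode
    #include <tunables/global>

    /usr/sbin/exampled {
      #include <abstractions/base>

      network inet stream,         # only the network access listed here
      /etc/exampled/** r,          # config is read-only
      /var/lib/exampled/** rw,     # state directory is read-write
    }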


This is the "you're holding it wrong" response to a clear design issue.


Aren't firewall rules part of the "configuration" the OP talked about?


No, because you can install and configure the firewall before you install package X. (without knowing anything about X, your firewall defaults can just prevent X from doing anything)

But you can't (easily) configure package X itself before you install it; and after you install it, it runs immediately so you only get to configure it after the first run.


causal inference with GenAI


It swings both ways. In some circles, logistic regression is AI, in others, only AGI is AI.


There are many reasons this comparison is not made; I will just touch on one: the target medium is different. For HTML, you have monitors of different sizes as well as windows that can be resized. For LaTeX, you choose your target at the start: A4 paper? Screen presentation? A0 poster?

With a fixed medium in mind, you can be extremely particular about where on the canvas you want a piece of text, a graphic, or whatever.

Without a fixed medium, you need logic to handle the different media, and compromises have to be made.


That seems contradictory, when LaTeX is rather famously imprecise at placing figures and such. Weren't both languages (at least at some point) intended to take layout control away from the writer?

But regardless, I think that, in addition to moving away from LaTeX, we should also reconsider the primary output format. Documents are rarely printed anymore, and inaccessible, fixed-size A4 PDFs are annoying to read on anything but an iPad Pro.


LaTeX isn't intended to take layout control away from the author so much as it is intended to automatically produce a good-enough layout, allowing a single author to produce a very large document without employing a designer.

HTML by contrast explicitly does remove control over layout from the author and place it in the hands of the user (and their chosen user agent).

Both languages have mechanisms to (somewhat) separate the content from the formatting rules.


Both claims are incorrect.

If LaTeX cannot produce a good document, it will happily produce a bad one; the overfull hbox is the classic example. A designer is still required to create the documentclass; it's just that LaTeX comes with some predefined ones intended for scientific publishing.

HTML+CSS requires pixel-perfect rendering; the ACID2 test is an example. While taking control away might have been the idea behind plain HTML at some point (<em> instead of <i>), control has never been taken away from the author, thanks to CSS.


You're badly mischaracterizing tests like ACID2. The test definition includes a long list of things that invalidate the test, including things like changing the zoom level. So it's wrong to construe that test as requiring pixel-perfect rendering when it explicitly doesn't cover exactly the kind of user agent controls I'm talking about.

Your comments about LaTeX do not seem to contradict anything I said.


Well, maybe they're not printed by you personally. But many academics, young people included, still print papers.


Which they can still do. Printed HTML is perfectly serviceable (definitely better than non-printed PDF).

Or are you arguing that there are somehow more people reading printed papers than digital?


HTML+CSS has facilities for targeting a page format (the CSS @page rule, cm and in dimension units). Not to say that it's on the same level as LaTeX, but it's pretty impressive in its own right.
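For example, a minimal sketch (page size and margins arbitrary; the margin-box rule needs a paged-media renderer rather than a plain browser):

    /* sketch: fixed A4 pages with physical units and running page numbers */
    @page {
      size: A4;                                   /* physical page format */
      margin: 2cm 2.5cm;                          /* margins in real-world units */
      @bottom-center { content: counter(page); }  /* page number in the bottom margin box */
    }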


Note that this won't prevent the page from being displayed in other sizes, where it will most likely have a broken layout instead.


True, but with CSS you define one layout for the web format alongside a different layout for the print format, using the @page rule.


If I only want the print format, though, this requires double the effort.


No it doesn't? If you only want the print format, just do the print format layout, ignore the web format. Why do you care that it's broken there?


Are there good deep dives on how far you can practically take this? Especially in combination with headless-browser PDF generation?

Last time I looked into it, a while ago, my impression was that it would get rickety too soon. It would be a good place to be, I think, if the web and "document" tech stacks could converge in a nice, practical way.


We use CSS paged media to create e-books and invoices (using weasyprint [0]). One of the most helpful resources for me was print-css.rocks [1]; it covers a lot of what's possible and notes which tools support which parts of it (tools targeting paged media, that is; browser support is essentially non-existent and, outside of using JS to fake it with paged.js [2], not relevant). The expensive tools tend to support more features, but thanks to some donations/sponsorships, weasyprint has really caught up and now supports a very large part of the spec.
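For a sense of scale, the basic weasyprint flow is only a couple of lines (file names below are made up):

    # sketch: render an HTML document plus a print stylesheet to PDF with weasyprint
    from weasyprint import CSS, HTML

    HTML("invoice.html").write_pdf(
        "invoice.pdf",
        stylesheets=[CSS("print.css")],  # optional extra paged-media CSS
    )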

> Especially in combination with headless browser pdf generation

I have no idea why you’d want to do that. Browsers are bad at it, dedicated tools are great at it.

[0]: https://weasyprint.org/

[1]: https://print-css.rocks/

[2]: https://pagedjs.org/


> I have no idea why you’d want to do that. Browsers are bad at it, dedicated tools are great at it.

Fair! I was just aspiring to a place where web pages and documents converge more.

Thanks for the recommendations!


I'd say it's already there. See for example the https://pagedjs.org/ project which allows advanced typesetting (including for printing) using web technologies. It is already used in production by at least one book publisher (C&F editions)


I've used it for my own such production: perfect binding with a hand guillotine and screw clamps in my attic. Nothing remotely professional, but you still have to start by making a book block, and Paged.js is a solid call there. Unless typographic beauty (beyond what TTF/OTF hinting can handle) is particularly important, it's usually my preferred first typesetting option.

As an old hand with PDF-in-browser production, I expected much worse of Paged.js than I found. It's powerful and mostly enjoyable to use! Oh, you end up with a large set of CSS rules, and it is not without bugs and gotchas (failing to specify a bleed rule somewhere at least once in every @page context subtly breaks layout; footnote layout is functional but automatic call numbering isn't always perfect, etc.)

You should definitely not expect to take Paged.js out of the box, slap a theme on it, and go; it comes as a box of parts with a mostly complete machine inside, and if it breaks you get to keep all the pieces. I imagine the publisher who uses it must have some prior interest in web technologies, for example.

Nor is Paged.js remotely as capable or flexible as InDesign or a comparable tool, especially given the deeply rudimentary state of web typography overall - something even a tool as elaborate as this can't really fix.

But Paged.js is also unlike InDesign in having a much shallower (days vs months) learning curve for folks like us with prior web experience, and however equivocal a review I may now be giving of its technical merits, I do actually like working with Paged.js quite a lot.


I've also used pagedjs for a relatively complex booklet with bidirectional text in different languages, images, and long footnotes. The result was great, but there were some annoying bugs, some of which appeared to be underlying bugs in Chrome and Firefox. Still, LaTeX would have been even more frustrating.


Coincidentally, I've also used pagedjs for a project recently (125K novel) and encountered some bugs/minor issues. Overall though, I would say I had an immensely positive experience (because even when stuff broke, it was still just HTML, CSS, and JS--so I, like any other web developer, could fix it).

That said, it's a shame that the relevant W3C specs (see https://pagedjs.org/about/) still aren't fully supported by browsers (but perhaps such is the fate of niche features). With that being the case, I'm infinitely thankful that pagedjs exists as a polyfill.


Oh, I certainly don't doubt that. And as I said, I haven't really found Paged.js all that frustrating! I have extensive though not recent PageMaker experience; I expected InDesign to be easier, and I rue the day I'm forced to resort to it.

In my experience Paged.js is at its best when building to PDF, but then that's always my intermediate format when working to paper, because that's where PDF's inflexibility shines. The source of a book block, everything that builds to that PDF, partakes of all the infelicities of the JS ecosystem. But to remake the book itself again, all I need do to start is print the PDF.


I pay for a tool that converts HTML/CSS into PDFs (https://www.princexml.com/), and it seems to work well. I don't have a good sense of how it compares to the various free options, though.


> For latex, you choose your target at the start

Yes, sometimes, but I would say that one of the benefits of LaTeX is how easily you can switch to another layout. But I guess the point is that you typically render to a set of outputs with fixed dimensions (PDF).


> For latex, you choose your target at the start: A4 paper? Screen presentation? A0 poster?

You can change that as you go along.


> You can change that as you go along.

That's not the point they were trying to make: you may need to change the display target for every viewer.


That's an interesting point.

HTML is usually distributed 'as source'. TeX (or LaTeX) is usually compiled to some other format first, before being distributed.


Yeah, I wonder how good TeX renderers would have gotten if page rendering happened client side in a browser.


TeX worked (and works) in batch mode on ancient hardware. Today's computers are so much faster that this should actually be fast enough to run interactively without having to change or optimise too much.


Why does TeX take so long to render documents today, then?


browser() ?


trace subsumes browser; it's much more flexible and can be applied to library code without editing it.


Is there a way to trace a function stored in another function's attributes? I couldn't find one, but I'm curious whether it exists. I seemed to be blocked by the fact that trace expects a name as a character string. Some functions in base R have functions in their attributes which modify their behavior (e.g. selfStart). I ended up just copying the whole code locally and naming it, but for a better interactive experience I really wish there were a way to pass a function object, as I can with debug.


trace is great for shimming your own code into an existing function, but it's not an interactive debugging tool.


It sure is. If you set the second argument to browser you can step through any function.
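A quick sketch (using stats::lm purely as an example):

    # drop into the interactive debugger on entry to a package function
    trace("lm", tracer = browser)       # second argument (tracer) set to browser
    fit <- lm(mpg ~ wt, data = mtcars)  # execution now pauses in browser() inside lm()
    untrace("lm")                       # remove the trace when done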


Granger causality is a very restrictive and incomplete view of causality. Pearl's counterfactual framework with do-calculus is a more general way to think about it. This SURD method appears to be a souped-up version of Granger.
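Roughly, and glossing over the technical conditions: Granger causality only asks whether the past of X improves prediction of Y, whereas do-calculus asks how intervening on X changes Y:

    % Granger: lagged X adds predictive information about Y beyond Y's own past
    X \to_G Y \iff \mathrm{Var}[Y_{t+1} \mid Y_{\le t}, X_{\le t}] < \mathrm{Var}[Y_{t+1} \mid Y_{\le t}]
    % Pearl: intervening on X changes the distribution of Y
    \exists\, x, x' : P(Y \mid do(X = x)) \neq P(Y \mid do(X = x'))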


And the potential outcomes framework (Neyman-Rubin) is even more general :)

Either way, Holland's 'Statistics and Causal Inference' paper (1986) is a nice read on the different frameworks for causality, especially with regard to Granger (& friends) versus do-calculus/Neyman-Rubin.


> Holland's 'Statistics and Causal Inference' paper (1986)

In case anyone else wants to take a look:

https://www.jstor.org/stable/2289064

https://doi.org/10.2307/2289064

