Hacker News | dgrant's comments

Love Liftosaur, it's the first and only app I've ever used and I wouldn't still be lifting today if it wasn't for this app.


I think the verb tenses weren't quite right... I think he just meant: why did they build only one in the first place? They could have built N instead, at a lower cost per unit.


Because there's only one L2 point, and it'd be very risky to have two units orbiting it. So your spare would just sit around costing money in case the first one works. The folks who do this are pretty dang good at what they do, so they're willing to bet $10 billion on success the first time rather than $20 billion to hedge with a spare. Even with a spare, it's not clear to me that a malfunctioning first unit would have enough delta-v to get safely clear of the L2 point's vicinity.

If it fails, they'll learn all they can, then try again with a follow-on project, which will likely take advantage of technological improvements made since this one's functional requirements were set in stone. Pre-building a spare just doesn't make sense for this kind of project.


Space/science/military spending doesn't seem to follow the normal rules of civilian production or schedules. Also, once the satellite is built and launched, there is the ongoing budget for actually operating it, and those budgets are limited as well.


Hi. I used to work at Teradici. It was always interesting that Pixar went with VDI because it meant the CPUs that were being used as desktops during the day could be used for rendering at night. Roughly speaking. The economics made a lot of sense. A guy from Pixar came to Teradici and gave a talk all about it. Amazing stuff.

Interesting contrast with other companies that switched to VDI where it made very little sense. VMware + server racks + zero clients never made economic sense compared to desktops, at the time. But often there is some other factor that tips things in VDI's favour.


Yep, all of their workstations were dual socket servers, where each socket was a workstation VM with PCIe passthrough, and each getting their own hostcard+GPU. Each VM had dedicated memory, but no ownership of the cores they were pinned to, so overnight if the 'workstations' were idle, another VM (also with dedicated memory) would spin up (the other VMs would be backgrounded) and consume the available cores and add itself to the render farm. An artist could then log in and suspend the job to get their performance back (I believe this was one of the reasons behind the checkpointing feature in RenderMan).

The Teradici stuff was great, and from an admin perspective having everything located in the DC made maintenance SO much better. Switching over to VDI is a long term goal for us at Blue Sky as well, but it'll take a lot more time and planning.


That's one reason for the checkpoint feature, yes, but there are others. A few years back (Dory-era), I participated in a talk at SIGGRAPH '15 about some of them:

https://dl.acm.org/doi/abs/10.1145/2775280.2792573

http://eastfarthing.com/publications/checkpoint.pdf


Location: Vancouver, Canada

Remote: Yes

Willing to relocate: No

Technologies: Python, Django, Java, Linux, Google Cloud Platform (GCP), Postgres, MySQL, NoSQL, Docker, JavaScript/Typescript

Résumé/CV: https://www.linkedin.com/in/davidjgrant

Email: davidgrant at google


Sounds exactly like me. I work very late at night as well, to make up for the daytime, when I have to check on the kids a lot since my wife works away from home.


How much of a difference will these make? Trying to decide if it's worth my time.


I did this on my i5-4300U laptop, and whilst I didn't formally benchmark it, the boot time halved and it feels a LOT faster to use. So much snappier.


Considering it takes all of 15 seconds to set up (5 minutes including reading), it's really not a matter of whether it's worth your time, but whether it's worth the risk, since it disables mitigations for hardware vulnerabilities.


I don't think there's an easy answer. It will very much depend on your hardware and what kind of software you run. I'd say the best way to tell is just to try it, measure performance, and compare results.
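For what it's worth, a minimal sketch of that try-and-measure approach on a GRUB-based Linux install (the kernel parameter in question is `mitigations=off`; the measurement commands are just examples, substitute your own workload):

```shell
# Show which CPU vulnerability mitigations are currently active
grep . /sys/devices/system/cpu/vulnerabilities/*

# Baseline measurement (boot time here; use your real workload instead)
systemd-analyze

# Disable mitigations: add "mitigations=off" to
# GRUB_CMDLINE_LINUX_DEFAULT in /etc/default/grub, then:
sudo update-grub
sudo reboot

# After reboot, confirm the parameter took effect and re-measure
cat /proc/cmdline          # should now contain mitigations=off
systemd-analyze
```

Keep both sets of numbers; if the difference is within noise for your workload, the security trade-off probably isn't worth it.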


first, do you use js?


You make it sound as if JS was the only attack surface, whereas it's just the most common one.


I did not get that impression at all from that comment, FWIW.


You're right - high-level integration tests (end-to-end) are important, and most would argue they are more important. If you had to choose only one type of testing, that would be it.

Unit tests are also important, as you said.


100% this. I just switched to using Black recently and not having to ever fix a lint issue again has been life-changing. Use Black with pre-commit (https://pre-commit.com) and never look back.
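For anyone curious, a minimal `.pre-commit-config.yaml` for this setup might look like the following (the pinned `rev` is just an example; pick a current Black release tag):

```yaml
repos:
  - repo: https://github.com/psf/black
    rev: 24.4.2   # example pin; use a current release
    hooks:
      - id: black
```

Then `pip install pre-commit` and `pre-commit install` in the repo, and Black reformats staged files on every commit.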


Could this be used to generate XML?


Yes, it's open-source.


Could I use this to generate XML?


if you already have a way of printing XML, you can add a "printer" field (a Python function) to your template, like so: http://github.com/plaitpy/plaitpy/blob/master/templates/test...

if that function uses an import, you might also need to add an "imports" field, like in this example: https://github.com/plaitpy/plaitpy/blob/master/templates/web...

otherwise, that's a feature that can be added here: https://github.com/plaitpy/plaitpy/blob/master/src/fields.py..., if it works for you (and is added as a flag), i'd be happy to take patches.

