hmaarrfk's comments

From the same developer ;)


Oh thanks, I didn’t realize that. Having recently evaluated Vispy that really helps me place what this tool might be for


Thanks for sharing. That was a great read


When you are trying to design high-performance IP, you are often trying to ensure that your design is mathematically correct and that inputs and outputs match a complicated 100-page specification. You are also trying to parse out the minimum set of workable requirements for "version 1", all while fitting into utilization constraints that are ultimately undefined.

Your mindset is really split. "Building up a software dashboard" to visualize your test results is really the last thing on your mind. You definitely don't want to be building the dashboard for all your customer's platforms.

Having somebody (a company) help on this front is really useful.

As a non-website designer, I used to think the same of tools like netlify, but they seem to be as popular as ever, especially in a collaborative workspace where you need to hand off a project from one team to the next.


Thank you for tackling this critical problem for logic designers. I think the tools available are much too old for fast-paced workflows.

From my experience attempting to get a similar workflow down for my company:

I tried to use verilator a while back, but ultimately I couldn't because it didn't have the same constraints on the verilog language features that I was going to use in production. It doesn't even matter who was missing a feature, verilator or the proprietary tool; it was getting them to behave the same that caused the cognitive dissonance I didn't want to deal with.

I ultimately decided to move away from verilator and use the clunky proprietary tools, since they were what would be used in production. Getting "verilator compatibility" seemed like a "nice to have".

Second, a winning local-first framework for verilator wasn't really established. You show in your example running a test from the yaml file using what looks like a bash script. Even as an experienced programmer who knows bash and sh well, I still find it very hard to express complex thoughts in it. The last high-level attempt I found to bridge this gap is likely https://www.myhdl.org/ I don't know them personally, but it seemed like they had some very good thoughts on what makes hardware-level tests good. I think it would be worth reaching out to them if you haven't already.

The one thing that was even more critical was a way to run our tests locally. The 10-20 seconds it takes to start a docker image (best case) in the cloud is really frustrating when you are "so close to finding a bug" and you "just want to see if this one line change is going to fix it". Once we got our whole pipeline going, it would take 1-6 minutes to "start a run" since it often had to rebuild previous steps that cache large parts of the design.

So I think you will want to see how you can help bring people's "local-first" workflows slowly into the cloud. Some tools (or just tutorials) that help you take a failing test and run it both locally and in the cloud would be really valuable, especially as you get people to transition!


I really appreciate the feedback: these are all very valid points.

Bash is just an example; most people should make the test rule call the simulator executable directly or via a thin wrapper script. MyHDL is interesting too. Admittedly this is the first we've heard of it, but we'll take a look.

We are working on supporting additional simulators in addition to Verilator.

Also, we are working on an API and command line tool so you can kick off test runs and view the results from the command line. This CLI tool should also support local test runs at some point.


This is a story where a one-line change of a hardcoded value actually went well.

I could imagine a scenario where somebody stored the number of months of backlog as a 2-bit value: 0, 1, 2, or 3, you know, to be smart and clever. This may not appear as a problem during testing because it may be hidden many layers down, in some downstream service that is untested. Maybe in some low-code automation service....

Changing it to 4 would mean the backlog is 0. Who knows what the consequences might be. Would that service go and cancel all jobs in the production queue? Would it email all customers mentioning their stuff is cancelled?
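
To make that concrete, here is a minimal Python sketch of the failure mode I am imagining (the 2-bit field, the names, and the values are all made up for illustration):

    # Hypothetical schema that "cleverly" packs the backlog months into 2 bits.
    MONTHS_FIELD_BITS = 2
    MONTHS_MASK = (1 << MONTHS_FIELD_BITS) - 1  # 0b11, can only hold 0..3

    def pack_backlog_months(months: int) -> int:
        # Anything above 3 is silently truncated.
        return months & MONTHS_MASK

    print(pack_backlog_months(3))  # 3 -- passes every test written so far
    print(pack_backlog_months(4))  # 0 -- downstream service now sees an empty backlog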

I get that this is a seemingly easy change, but if a change of policy is expressed to the software team as an urgent problem, it seems like the management team needs better planning, rather than randomly trying to reprioritize issues.....


None of the requested changes involved more testing or risk-reduction.

They actually increased risk by insisting on refactoring a bunch of nearby things as the "cost" of the change.


The audit trail likely represents actual risk reduction against someone undoing or misunderstanding the change later, since the change has no meaning outside the context of the request.

"Fixing preexisting errors that violate new company policy" also arguably involves real risk reduction; you gotta do that work sometime, and if everyone in the company agrees the time is now, the best time is now.

Using Marge instead of Homer is not "risk reduction" but presumably testing accounting close is also critical.

Tony's request is also reasonable, unless you want to leave the next dev in the same shithole you were in re. the wiki state.


On the flip side, if nearby things are never updated to match changing understanding of the system, then very shortly the code will be cluttered with possibly dozens of different styles: multiple naming conventions, constants in some places, hard-coded values in others, and values read from a parameter file in others, and other kinds of variations. The result will be a chaotic scramble that has no clear structure, and requires programmers to know and understand multiple different ways of expressing the same business concept.

Now that is truly increased risk.


Long term vs short term. It's a bad idea to rush through refactors.



Not when you have an exec-inspired one liner patch that has to get rushed through at great impact to the business.

You can't just stack up a bunch of aphorisms and delegate your thought to them, some situations have context.


You said, "It's a bad idea to rush through refactors".

What constitutes "rushing through" a refactor, and what forces and context make it bad to do so? What can we do, if anything, to make it so that refactoring is as much a part of everyday development as the CI/CD process, and thus becomes just part of the work that's done, not something to be put off until the business decides there's nothing else with a higher priority?


In the linked article, the situation was that they absolutely needed to change some hard-coded MONTHS value from 3 to 4 in order to keep the factory running.

That change should be shipped in isolation, with manual testing to compensate for the lack of coverage, presumably. Refactoring doesn't have the same urgency as keeping the factory running, no matter how much we all believe in keeping the campground clean. It can wait until Monday in this particular case.


OK, but that's not the question I asked. What I want to know is how we can make refactoring as much a non-negotiable part of the process as code review, tests, CI/CD, or whatever you consider essential, non-skippable, even under a short timeframe?


In the context they were in, the answer to all of those questions is "shut the fuck up, we can talk about it later".

In the normal course of business, it's a different conversation. Even then, if you're making some poor dev refactor a bunch of code because they had the misfortune to touch it, maybe you should have done the work yourself a long time ago. Or written a linter and ticketed owners.

You don't want to make feature development some sort of pain lottery.


Am I reading you right, that you consider refactoring "pain"?


Unplanned work being arbitrarily scoped into a sprint is pain. Doesn't matter the source.

In practice, your approach often turns into "junior engineer abuse" where they have to clean up a bunch of unrelated pre-existing stuff to make the seniors happy as a condition of shipping their feature.


There's all sorts of ways it could go wrong. Perhaps the real question is where blame will fall if it does. If the big boss says "I decided to take the risk and push this through, I accept this was a consequence of that", great. If the programmers get beatings, not so great.


The other thing I noticed from the story was that something considered mission critical was not given an update within 24 hours.

IT should have volunteered the information about how far back in the backlog this was placed as soon as that prioritization was made. "Behind 14", with many people on the testing side occupied, is obviously not going to help with a "layoff-level priority".

To me, the classification of "enhancement" just doesn't seem to capture the urgency.


I think the correct people and processes were followed, but they could have saved a great deal of time aligning on the importance and priority of the task by putting together a meeting with the leads.

For a time-sensitive and critical update to core functionality, the director of operations should have been aware of the mean time to deployment for the software and put together a team to fast track it, instead of entering it into the normal development pipeline with a high priority.


Knight Capital!


https://www.bugsnag.com/blog/bug-day-460m-loss/

It made me laugh! And cry inside


Thanks for this link.

They also seemed to have avoided libraries like numpy.

In my mind, Python makes it quite easy to get hardware-optimized performance with libraries like numpy. Avoiding them is a mistake. I'll try to see if I have time to play the game myself and throw my attempt in there.
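
To make the point concrete, here is a rough sketch of the kind of difference I mean (the workload and size are illustrative, not taken from the benchmark):

    import numpy as np

    values = list(range(1_000_000))

    # Pure-Python loop: every multiply and add goes through the interpreter.
    total_py = 0
    for v in values:
        total_py += v * v

    # numpy: the same reduction runs in optimized C over a contiguous buffer.
    arr = np.arange(1_000_000, dtype=np.int64)
    total_np = int(np.sum(arr * arr))

    assert total_py == total_np  # same answer, a small fraction of the runtime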


But isn't what makes numpy efficient written in C?


Everything that makes Python efficient is written in C or C++ or the like. Python in these situations is just a glue language (with an optional interactive layer) that makes using these libraries more feasible.


Well, the Java source uses threads. Guess how that's implemented.

FWIW if python provides an abstraction that keeps the code readable while keeping the efficiencies of C, I think that should count for python, not against it.


I mean.. threads are implemented by the OS, I don’t really see the equivalency here.


If your web service runs with numpy, excellent.


A web service is not typically compute-bound, so it's doubtful a Python web app is going to use 38x more energy than a Java web app.


All the projects you listed are community projects that have grown over the years to become the backbone of scientific computing.

conda-forge has evolved into one of the major conda community projects; it helps package the latest releases from the projects you listed.

You can help improve the state of pytorch packaging on conda-forge too!

We've even released Pytorch 1.13 + Python 3.11 on linux and OSX! Give it a shot and let us know what you think!

edit: Link to the conda-forge pytorch development repository https://github.com/conda-forge/pytorch-cpu-feedstock


I was trying to find this thread again, https://jumpshare.com/

The logos look really similar....


I typically stick to "SI" units.

Then, somebody asks me to code in the temperature of a system. And I have to think: "Is now really the time that I want to teach people the difference between kelvin and Celsius?"

So my rule becomes, SI "except" temperature. Sigh...


We also strictly stick to SI; however, we usually say kilograms in var names to be clear.

Haven't come across temperature, but we would probably stick with kelvin.

We use a strict set of units in databases and while processing; conversions are localized, if necessary, only at the view layer.

We also only use UTC for date/times.

We only use E.164 format (without spaces, etc.) for phone numbers: e.g. +12345678901 for an example number in OH, US. See the national format at https://libphonenumber.appspot.com/phonenumberparser?number=...

We only use ISO 3166-1 country codes and ISO 3166-2 region codes, and translate at the view layer.


This, one million times this. Use SI units. Don't measure distance in hotdogs, time in fortnights and speed in hotdogs per fortnight! It is as stupid as it sounds.

If you do, be explicit about it in either the parameter or the function name. I'm not going to put you on my shitlist if you name your function `microsleep`, but if I have to go look into the implementation to see that you count your database timeout in microseconds (looking at you, couchbase, like you ever could return something from a larger dataset in microseconds, lol) or, even worse, your cache expiry time in minutes (hello, unknown developer), I am going to go on the internet and complain about you.
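
What I mean by explicit, as a rough sketch (all names here are made up):

    # Ambiguous: seconds? milliseconds? microseconds? Go read the implementation.
    def set_query_timeout(timeout):
        ...

    # Explicit: the unit lives in the name, no archaeology required.
    def set_query_timeout_us(timeout_us: int) -> None:
        ...

    CACHE_EXPIRY_SECONDS = 15 * 60  # not a bare "15" with the unit hidden in a wiki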


This is actually an issue with thermal imaging cameras. Typically you'll get calibrated readings back in kelvin, not in Celsius. Usually it's well documented by the camera manufacturer, but if you're providing an API to users you need to make them aware of what the units are and make a decision on what you're going to return. For example, this crops up if the sensor only provides counts, which you need to convert into a temperature.

From a hardware perspective it makes sense to use K because you can encode the image directly using unsigned numbers plus some gain to allow for fractional measurements.
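
As a rough sketch of what that pipeline might look like (the gain and frame values below are made up, not taken from any particular camera):

    import numpy as np

    # Made-up calibration constant; the real value comes from the camera vendor.
    GAIN_K_PER_COUNT = 0.04       # kelvin per raw count
    KELVIN_TO_CELSIUS = 273.15    # offset between the two scales

    def counts_to_kelvin(raw_counts: np.ndarray) -> np.ndarray:
        # Raw frames are unsigned integers; the gain allows fractional resolution.
        return raw_counts.astype(np.float64) * GAIN_K_PER_COUNT

    def kelvin_to_celsius(temp_k: np.ndarray) -> np.ndarray:
        return temp_k - KELVIN_TO_CELSIUS

    frame = np.array([[7475, 7500], [7525, 7550]], dtype=np.uint16)
    print(kelvin_to_celsius(counts_to_kelvin(frame)))  # roughly room temperature in Celsius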


But you're still including the units in your identifier names (or encoded in type system), right?


No. Typically SI is implicit. Everything else is explicit.


SI doesn't prescribe that you have to use a single unit for all measurements. Are distances in meters or kilometers? Weights in kilograms or grams?

I assume you always just use the base units? kg, m, s, etc.? (I always find it odd that the kilogram is the base.) I feel like that could get unwieldy for applications at different scales, where milligrams, millimeters, kilometers, days, etc. could be clearer. And even if you use "standard" units, if you aren't clear about which standard you use and what units that implies, people won't always guess the correct option.


Which is great when you're writing from scratch, but as soon as you have to start calling a library whose functions are based on non-SI units, you've got some ambiguity.


There should be an exception in every code standard that says SI unit prefixes are OK where all lowercase is otherwise enforced, for example to distinguish mega (M) from milli (m).


MKS or CGS?


CGS is so the sixties!


I'm confused, because a temperature difference of one degree Celsius is exactly the same as a difference of one kelvin. And you can convert between them with an offset.


In thermodynamic calculations you likely need absolute (Kelvin) values. But in many calculations where only temperature difference is used, either unit works equally well.


That sounds like a case for Kelvin. I still don't see why Celsius is the one exception that isn't SI.


I'll take a survey tomorrow to see how many people know the kelvin-to-Celsius conversion off the top of their head.

The other issue is when you get to Candelas!


How often are you doing the conversion manually? It seems like the sort of thing that should happen in the presentation layer so people never see the actual kelvin value. If you have a system where you always use SI then it's strange to have a single exception for temperature.


Hm, is it wrong to dream of a world where this, along with other basic science, would be considered basic knowledge?

Not blaming anyone who does not know it, but I would argue for more and better science education ..


Ramona Optics | Embedded Systems Engineer | Durham, NC | Onsite

At Ramona Optics, we’re combining machine learning with novel optics to reinvent what’s possible with a microscope. Founded by Caltech graduates, Roarke Horstmeyer and Mark Harfouche, Ramona Optics is developing the first gigapixel microscope that can capture cellular-level detail over an entire square foot.

We are looking for an Embedded Systems Engineer to help bring Ramona Optics’ next generation of smart gigapixel microscopes to life. The person in this role will work closely with the company founders to push the limits of Ramona’s existing technology and help the business grow and evolve as we continue to develop our products and enter new markets. Your role will be to create the bridge between the information-acquisition hardware and the high-level analysis software. You will also be intimately involved in high-level decisions about how the company utilizes the hardware to manage the large volumes of acquired data -- tens of gigabytes per second!

Email: hiring@ramonaoptics.com for more information

