j-cheong's comments

Good question. Actually it's not possible for a power loss to occur after a change is made to the database but before the record is marked in the WAL. This is because Postgres ensures that all changes are written to the WAL and flushed to disk before they are applied to the database. This write-ahead mechanism guarantees that even if a power outage occurs immediately after a change is applied, the transaction's record is already safely stored in the WAL.
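The write-ahead principle described above can be sketched in a few lines. This is a toy illustration of the idea, not Postgres's actual implementation: the change record is appended and fsync'd to a log file before the in-memory state is mutated, so after a crash the log can be replayed to reconstruct every acknowledged change.

```python
import json
import os


class ToyWAL:
    """Toy write-ahead log: records are durable before changes apply."""

    def __init__(self, path):
        self.path = path
        self.data = {}  # stand-in for the "database"

    def put(self, key, value):
        # 1. Append the change record to the log...
        with open(self.path, "a") as log:
            log.write(json.dumps({"key": key, "value": value}) + "\n")
            log.flush()
            os.fsync(log.fileno())  # ...and force it to disk (the crucial step)
        # 2. Only then apply the change. A crash between steps 1 and 2 loses
        #    nothing: replay() rebuilds the state from the log.
        self.data[key] = value

    def replay(self):
        # Crash recovery: rebuild state by replaying every logged record.
        self.data = {}
        with open(self.path) as log:
            for line in log:
                rec = json.loads(line)
                self.data[rec["key"]] = rec["value"]
        return self.data
```

You can simulate the power-loss scenario by throwing away `self.data` and calling `replay()`; every `put()` that returned is recoverable because its record hit disk first.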


i thought this was pretty cool. am i alone? lol

how hard is it to steal your eye scan?


From you? I imagine it would be difficult. From them? Less so.


Changing a PostgreSQL column type by ignoring the author's instructions and just running the following command is a serious anti-pattern. I'm confused why people do this in the first place.

ALTER TABLE table_name ALTER COLUMN column_name [SET DATA] TYPE new_data_type

>you need to make sure the source system has enough disk space to hold the WAL files for a long enough time

If the asynchronous replication process buffers changes externally instead of relying on the WAL, that addresses this issue.


> Confused why people do this in the first place

Probably because every tutorial on the Internet, along with the docs, recommends doing it this way. All the gotchas are buried in the footnotes.


The unsafe ALTER COLUMN is one step.

The safe option is four steps minimum.

It's not hard to see why people would be tempted by the unsafe option.
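For reference, the multi-step safe pattern usually looks something like the sketch below. The table and column names are hypothetical, and the exact steps vary by write-up (in particular, how you keep the two columns in sync during the backfill):

```sql
-- 1. Add a nullable column with the new type (cheap, no rewrite).
ALTER TABLE orders ADD COLUMN amount_new bigint;

-- 2. Backfill in batches so no single UPDATE holds a long lock
--    (repeat until no rows remain; a trigger typically keeps the
--    two columns in sync for concurrent writes, omitted here).
UPDATE orders SET amount_new = amount::bigint
WHERE id IN (
    SELECT id FROM orders WHERE amount_new IS NULL LIMIT 10000
);

-- 3. Swap the columns inside a transaction.
BEGIN;
ALTER TABLE orders DROP COLUMN amount;
ALTER TABLE orders RENAME COLUMN amount_new TO amount;
COMMIT;

-- 4. Clean up: re-add any NOT NULL / DEFAULT / indexes that lived
--    on the old column.
ALTER TABLE orders ALTER COLUMN amount SET NOT NULL;
```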


> Confused why people do this in the first place.

Because you lose a significant amount of performance if you start adding NULL and variable-length columns just because you're afraid of a table rewrite.

Because the resulting table won't be carrying a full table's worth of update-induced bloat at the end of the operation.

Because the modification is applied atomically: you as the user can be sure the migration from A to B either goes through as expected or rolls back gracefully to the old data, rather than getting stuck or failing halfway through.

Because TOASTed data from dropped columns is not removed from storage by the DROP COLUMN statement, but only after the row that refers to that TOASTed value is updated or deleted.

...

Every column you "DROP" remains in the catalogs so that old tuples' data can still be read from disk. That's overhead you'll have to carry around until the table itself is dropped. I'm not someone who likes carrying that bloat around.
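You can see this leftover catalog entry yourself: after a DROP COLUMN, the column still appears in pg_attribute, renamed and flagged as dropped (table name hypothetical):

```sql
SELECT attname, attisdropped
FROM pg_attribute
WHERE attrelid = 'my_table'::regclass AND attnum > 0;
-- Dropped columns show up with a name like "........pg.dropped.2........"
-- and attisdropped = true until the table is rewritten or dropped.
```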


Not everyone has a billion rows in their tables ¯\_(ツ)_/¯


>I don't understand why x-day free trials haven't been replaced with usage-based free trials.

Hmm, I would say usage-based free trials are problematic because a small company might only use the product 10 times, while an enterprise might need to run 10k files to fully trial it. So what usage level would you set? Too high and the small companies can stay on a free trial for years, which is effectively a freemium model.


Kind of unrelated but I'm curious if there are any really robust usage-based billing solutions out there. Curious how they're architected to solve usage-based billing across their customers/various use cases.

I'm always concerned about automating the billing process and risking accuracy/trust.
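One common pattern for the accuracy/trust concern is making usage ingestion idempotent: every meter event carries a client-generated idempotency key, so retries and duplicate deliveries can never double-bill. A minimal in-memory sketch (the class and method names are my own invention; a real system would persist this in a database and enforce the key with a unique constraint):

```python
from collections import defaultdict


class UsageMeter:
    """Idempotent usage-event ingestion: duplicate events count once."""

    def __init__(self):
        self.seen = set()               # idempotency keys already processed
        self.totals = defaultdict(int)  # per-customer usage counters

    def record(self, customer_id, units, idempotency_key):
        # A retried or duplicated event is silently ignored, so network
        # retries can't inflate the bill.
        if idempotency_key in self.seen:
            return False
        self.seen.add(idempotency_key)
        self.totals[customer_id] += units
        return True
```

The same key can then be echoed back in invoices, which gives customers an audit trail from every billed unit to a concrete event.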


Zuora is a well-known enterprise-scale commercial option that displaced many others as SaaS (and its associated accounting standards) took off several years ago.

Depending on complexity, Netsuite can address some moderate scale use cases. Stripe, Chargebee, etc address more of the SMB-scale needs.


Avoid Chargebee at all costs. I've never made an architectural decision I regretted more.


If you live with a smoking man, then they shouldn't be counted in the nonsmoker category?


I think the anger comes from the rug pull, not that they chose to be source available, right?

If they chose to be source available from day 1, then no harm done?


Yep; though HashiCorp would be a much smaller company had they not started with an open source license.


I was under the impression that workers earning less than $151,164 annually usually don't have noncompetes anyway? Sounds like a lot of people will get bucketed into the "senior executives" group. At least new noncompetes can't be created.


Noncompetes are everywhere. There was a famous case with Prudential Security[0], where they had everyone sign noncompetes, including minimum-wage workers, and they enforced them, which put an outsized strain on those minimum-wage workers in particular.

It's a harmful practice across the board.

[0]: https://www.cbsnews.com/news/noncompete-agreement-feds-sue-3...


That's the motivation behind this rule. About one in six food outlets were demanding non-compete terms in employment, to prevent their employees from quitting to work for higher-paying outlets.[1] (Not McD and Burger King; mostly the smaller ones.)

[1] https://thecounter.org/biden-targeting-non-compete-agreement...


I've known places that pay 1/3 of that and have noncompetes.

Although, someone in this type of a role can often get away with ignoring noncompetes as long as they're smart about how they exit.


>Although, someone in this type of a role can often get away with ignoring noncompetes as long as they're smart about how they exit.

Simply put though, they shouldn't have to.


I absolutely agree, but I make it a point to mention their limits of enforceability whenever I can because it is information worth spreading for those worried about one.


lmao


This! Swap out AI for any other big topic and the result is similar.

