Good question. Actually, it's not possible for a power loss to occur after a change is made durable in the database but before that change is recorded in the WAL. Postgres writes every change to the WAL and flushes the WAL to disk before the corresponding data pages are written out (and before a commit is acknowledged). This write-ahead mechanism guarantees that even if a power outage occurs immediately after a change is applied, the transaction's record is already safely stored in the WAL and can be replayed during recovery.
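The ordering described above can be sketched as a toy write-ahead log in Python. This is only an illustration of the invariant (log and fsync the change before applying it), not the real Postgres mechanism; the file path and record format here are made up:

```python
import os

class ToyWAL:
    """Toy write-ahead store: every change is appended to a log and
    flushed to disk BEFORE it is applied to the in-memory data."""

    def __init__(self, log_path):
        self.log_path = log_path
        self.data = {}      # stand-in for table pages
        self._replay()      # crash recovery: re-apply logged changes

    def _replay(self):
        if os.path.exists(self.log_path):
            with open(self.log_path) as f:
                for line in f:
                    key, _, value = line.rstrip("\n").partition("=")
                    self.data[key] = value

    def put(self, key, value):
        # 1. Write the intent to the log and force it to stable storage.
        with open(self.log_path, "a") as f:
            f.write(f"{key}={value}\n")
            f.flush()
            os.fsync(f.fileno())   # the "flushed to disk" step
        # 2. Only now apply the change. A crash between steps 1 and 2
        #    loses nothing: replay on restart reconstructs it.
        self.data[key] = value

# Simulate a power loss: make a change, drop all in-memory state,
# then recover purely from the log.
log_path = "/tmp/toy.wal"
if os.path.exists(log_path):
    os.remove(log_path)            # start the demo from a clean slate
wal = ToyWAL(log_path)
wal.put("balance", "100")
del wal                            # "power loss": in-memory state gone
recovered = ToyWAL(log_path)
print(recovered.data["balance"])   # prints 100: the change survived
```

Real WAL records are binary, LSN-ordered, and cover page-level changes, but the recovery story is the same shape: replay the log to rebuild anything that hadn't reached the data files.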
Changing a PostgreSQL column type without following the author's instructions and just running the following command is a serious anti-pattern. I'm confused why people do this in the first place.
ALTER TABLE table_name
ALTER COLUMN column_name
[SET DATA] TYPE new_data_type;
>you need to make sure the source system has enough disk space to hold the WAL files for a long enough time
If the asynchronous replication process buffers changes in an external store instead of relying on WAL retained on the source, then that addresses this issue.
Because you lose a significant amount of performance if you start adding nullable and variable-length columns just because you're afraid of a table rewrite.
Because the resulting table will not be carrying a full table's worth of update-induced bloat at the end of the operation.
Because the modification is applied atomically: you as the user can be sure the migration from A to B either goes through as expected or rolls back gracefully to the old data, rather than getting stuck or failing halfway through.
Because toasted data from dropped columns is not removed from storage by the DROP COLUMN statement, but only after the row that refers to that toasted value is updated or deleted.
...
Every column you "DROP" remains in the catalogs so that old tuples' data can still be read from disk. That's overhead you will now have to carry around until the table is dropped. I'm not someone who likes having to carry that bloat around.
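The atomicity argument above comes from Postgres running DDL like ALTER TABLE inside a transaction, so a failed migration rolls back cleanly. SQLite also has transactional DDL, so the behavior can be demonstrated without a Postgres instance; this is a toy illustration of the property, not the Postgres implementation:

```python
import sqlite3

# isolation_level=None disables the sqlite3 module's implicit
# transaction handling so we can issue BEGIN/ROLLBACK ourselves.
conn = sqlite3.connect(":memory:", isolation_level=None)
cur = conn.cursor()
cur.execute("CREATE TABLE t (id INTEGER)")

def columns():
    # PRAGMA table_info rows are (cid, name, type, ...); take the names.
    return [row[1] for row in cur.execute("PRAGMA table_info(t)")]

cur.execute("BEGIN")
cur.execute("ALTER TABLE t ADD COLUMN extra TEXT")
assert columns() == ["id", "extra"]   # visible inside the transaction
cur.execute("ROLLBACK")               # simulate a migration that fails
print(columns())                      # the DDL was undone cleanly
```

Many databases (MySQL, notably) auto-commit DDL instead, which is exactly how you end up "stuck halfway through" a migration; transactional DDL is what makes the all-or-nothing guarantee possible.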
>I don't understand why x-day free trials haven't been replaced with usage-based free trials.
Hmm, I would say usage-based free trials are problematic because a small company might only use it 10 times while an enterprise might need to run 10k files to fully trial the product. So what usage level would you set it at? If you go too high, the small companies can be on a free trial for years, effectively a freemium model.
Kind of unrelated, but I'm curious if there are any really robust usage-based billing solutions out there, and how they're architected to handle usage-based billing across their customers' various use cases.
I'm always concerned about automating the billing process and risking accuracy/trust.
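On the accuracy/trust worry: one standard defense in usage-based billing pipelines is idempotent event ingestion, so retries and duplicate deliveries can never double-bill a customer. A minimal in-memory sketch (the event shape and field names here are hypothetical, not any vendor's API):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class UsageEvent:
    idempotency_key: str   # unique per logical event, supplied by the sender
    customer_id: str
    quantity: int

class Meter:
    """Aggregates billable usage per customer, ignoring duplicates."""

    def __init__(self):
        self._seen = set()     # idempotency keys already processed
        self._totals = {}      # customer_id -> billable quantity

    def record(self, event: UsageEvent) -> bool:
        if event.idempotency_key in self._seen:
            return False       # duplicate (e.g. a retried webhook): no-op
        self._seen.add(event.idempotency_key)
        self._totals[event.customer_id] = (
            self._totals.get(event.customer_id, 0) + event.quantity
        )
        return True

    def total(self, customer_id: str) -> int:
        return self._totals.get(customer_id, 0)

meter = Meter()
meter.record(UsageEvent("evt-1", "acme", 5))
meter.record(UsageEvent("evt-1", "acme", 5))   # retry: safely ignored
meter.record(UsageEvent("evt-2", "acme", 3))
print(meter.total("acme"))                     # 8, not 13
```

In a real system the seen-set and totals live in a database with a unique constraint on the key, so the dedupe survives restarts and concurrent writers; the in-memory version just shows the invariant that makes automated billing trustworthy.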
Zuora is a well-known enterprise-scale commercial option that displaced many others as “SaaS” (and its associated accounting standards) took off several years ago.
Depending on complexity, NetSuite can address some moderate-scale use cases. Stripe, Chargebee, etc. address more of the SMB-scale needs.
I was under the impression that workers earning less than $151,164 annually usually don't have noncompetes anyway? Sounds like a lot of people will get bucketed into the "senior executives" group. At least new noncompetes can't be created.
Noncompetes are everywhere. There's a famous case with Prudential Security[0], where they had everyone sign noncompetes, including minimum-wage workers, and they enforced them, which put an outsized strain on the minimum-wage workers in particular.
That's the motivation behind this rule. About one in six food outlets were demanding non-compete terms in employment, to prevent their employees from quitting to work for higher-paying outlets.[1] (Not McD and Burger King; mostly the smaller ones.)
I absolutely agree, but I make it a point to mention their limits of enforceability whenever I can, because it's information worth spreading for anyone worried about one.