This seems like an overly cynical take. Is there no value in empirically confirming an assumption? Especially in the exercise world where other long held assumptions ended up being bro-science nonsense?
> although inter-personal coefficient of variation is up to 28.3%
Why does that matter? Isn't the entire point of this study's design to eliminate the impact of the inherent variability between test subjects?
For starters, small companies are paying 15%, not 30%.
I'm also not sure where a small company can find a payment processor that will only charge 1%. Stripe charges 2.9% plus 30 cents per transaction.
If you have a $4.99 in-app purchase, that will cost you 44 cents per transaction with Stripe vs. 75 cents with Apple's IAP.
But Stripe does not act as a merchant of record so you are responsible for remitting sales tax yourself. Registering for and remitting sales tax in every jurisdiction where you have nexus adds huge administrative overhead to a small company.
If you want to avoid this overhead, Paddle will act as a merchant of record for you, but then you're paying 5% plus 50 cents, which adds up to 75 cents on a $4.99 purchase anyway.
Taken all together, depending on their pricing structure, small companies may very well be financially better off sticking with IAP rather than linking to external payments anyway.
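For concreteness, here's a quick back-of-the-envelope comparison in Python using the rates quoted above (Stripe 2.9% + $0.30, Paddle 5% + $0.50, Apple's small-business IAP rate of 15%); these rates will drift over time, so check current pricing before relying on them:

    # Illustrative per-transaction fee comparison; the rates are the ones
    # quoted in this thread, not authoritative current pricing.
    def fee(price, percent, fixed=0.0):
        """Return the processor's cut of a single transaction."""
        return price * percent + fixed

    price = 4.99
    print(f"Stripe (2.9% + $0.30): ${fee(price, 0.029, 0.30):.2f}")  # ~$0.44
    print(f"Paddle (5% + $0.50):   ${fee(price, 0.05, 0.50):.2f}")   # ~$0.75
    print(f"Apple IAP (15%):       ${fee(price, 0.15):.2f}")         # ~$0.75

At lower price points the fixed per-transaction fees dominate, so Apple's percentage-only cut can come out cheaper, which is why the conclusion depends on the pricing structure.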
In the grand scheme of this case, the 15% rate arose out of these proceedings. It was 30% back in 2018.
But yes, overall most people will stick with Apple regardless. I still see it as a win that companies who want to put the work in to go around Apple can. That simply seems reasonable in my eyes.
> In the grand scheme of this case, the 15% rate arose out of these proceedings. It was 30% back in 2018.
I'm not sure about the timeline, but in general the reduction to 15% for small developers was due to market signals as much as it was anything else. Both Apple and Google need small developers to continue to create new apps and if the 30% is onerous to small developers (which I think it probably is) they'll lower it to attract more products and services.
> But yes, overall most people will stick with Apple regardless. I still see it as a win that companies who want to put the work in to go around Apple can. That simply seems reasonable in my eyes.
When you think about it, there are maybe half a dozen companies that could truly put in work comparable to Apple or Google in creating and maintaining these stores and platforms at that scale and with the features and security they have built. Most people are going to stick with Apple and Google, except when a large competitor like Meta decides to bypass those stores, create its own, and keep nudging folks toward it for various features or downloads or whatever. It introduces friction for no obvious benefit to customers.
You can argue that third-party app stores will be more permissive in what they allow, but most of the things people complain about ("scary surveillance" or other onerous regulations, for example) also have to be followed by any legitimate app store. So all you've really done is create worse versions of the Apple or Google app stores that siphon away applications. It reduces Apple's or Google's profit margins, but it doesn't benefit customers.
I've always wanted to start some small business, maybe an app, but getting started feels so daunting. The information you provided is great and makes me feel like there's room to learn more.
Are there any good places to grow this kind of knowledge?
How to use payment processors?
How to actually setup a business and get paid yourself?
I don't want to get into the whole founder ethos, I just want to make something and get paid for it.
I think the author of the article really misses the point here. While "true multitasking" might be a neat technical feature, it's not something that the end user really cares about or would base a buying decision on, especially if running multiple apps in the background at the same time came at the expense of battery life. Those early versions of iOS employed a lot of tricks to squeeze performance and battery life out of underpowered devices.
I once did something similar with a recipe from a cookbook where the recipe started at the bottom of one page and continued onto the next page. It correctly identified the first few ingredients present in the photo of the first page but then proceeded to hallucinate another half-dozen or so ingredients in order to generate a complete recipe.
> If I try to do an UPDATE ... SET x = x + 1, that will always increment correctly in SQL. But if read x from an ORM object and write back x + 1, that looks like I'm just writing a constant, right?
This is not specific to ORMs... you can run into the same problem without one.
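Here's a minimal sketch of the two patterns using sqlite3 so it runs anywhere; the table and column names are made up, and the race is the same whether the stale write comes from hand-written SQL or from an ORM flushing an object back:

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE counters (id INTEGER PRIMARY KEY, x INTEGER)")
    conn.execute("INSERT INTO counters VALUES (1, 0)")

    # Atomic increment: the database applies the arithmetic, so concurrent
    # callers cannot overwrite each other's updates.
    conn.execute("UPDATE counters SET x = x + 1 WHERE id = 1")

    # Read-modify-write: the value read can be stale by the time the UPDATE
    # runs, so a concurrent increment issued between these two statements
    # would be silently lost.
    (x,) = conn.execute("SELECT x FROM counters WHERE id = 1").fetchone()
    conn.execute("UPDATE counters SET x = ? WHERE id = 1", (x + 1,))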
> Extra magic: if you've read a class from the db, pass it around, and then modify a field in that class, will that perform a db update: now? later? never?
In every ORM I've used you have specific control over when this happens.
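For example, here's roughly what that control looks like in SQLAlchemy 2.x (the model is made up): mutating the object only changes it in memory, and the UPDATE is emitted when you flush or commit the session.

    from sqlalchemy import create_engine, Integer, String
    from sqlalchemy.orm import DeclarativeBase, Mapped, mapped_column, Session

    class Base(DeclarativeBase):
        pass

    class User(Base):
        __tablename__ = "users"
        id: Mapped[int] = mapped_column(Integer, primary_key=True)
        name: Mapped[str] = mapped_column(String)

    engine = create_engine("sqlite://")
    Base.metadata.create_all(engine)

    with Session(engine) as session:
        session.add(User(id=1, name="alice"))
        session.commit()

        user = session.get(User, 1)
        user.name = "bob"    # only the in-memory object changes here
        session.flush()      # now the UPDATE is emitted, inside the open transaction
        session.commit()     # and here it becomes durable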
I've never understood the ORM hate because a good ORM will get out of the way and let you write raw SQL when necessary while still offering all of the benefits you get out of an ORM when working with query results:
1. Mapping result rows back to objects, especially from joins where you will get back multiple rows per "object" that need to be collated.
2. Automatic handling of many-to-many relationships so you don't have to track which ids to add/remove from the join table yourself.
3. Identity mapping so if you query for the same object in different parts of your UI you always get the same underlying instance back.
4. Unit of work tracking so if you modify two properties of one object and one property of another the correct SQL is issued to only update those three particular columns.
5. Object change events so if you fetch a list of objects to display in the UI and some other part of your UI (or a background thread) add/updates/deletes an object, your list is automatically updated.
6. And finally, in cases where your SQL is dynamic, having a query builder is way cleaner than concatenating strings together (see the sketch after this list).
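As a rough illustration of point 6, here's a self-contained SQLAlchemy Core sketch (the table and filters are made up) that composes WHERE clauses instead of concatenating SQL strings:

    from sqlalchemy import MetaData, Table, Column, Integer, String, select

    metadata = MetaData()
    users = Table(
        "users", metadata,
        Column("id", Integer, primary_key=True),
        Column("name", String),
        Column("age", Integer),
    )

    def build_user_query(name=None, min_age=None):
        # Each condition is added only when the caller supplies it,
        # and parameters are bound safely rather than interpolated.
        stmt = select(users)
        if name is not None:
            stmt = stmt.where(users.c.name == name)
        if min_age is not None:
            stmt = stmt.where(users.c.age >= min_age)
        return stmt

    print(build_user_query(name="alice", min_age=30))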
For those who are against ORMs I am curious how you deal with these problems instead.
You are describing data mapper ORMs, a.k.a. the good ORM. I think all the other ORM-loathing guys here had bad experiences with active record ORMs, a.k.a. the bad ORM.
Also, infrastructure guys and DBA types tend not to like ORMs. But they are not the ones trying to manage the complexity of the business logic. They just see that our queries are not optimal, and that is all that matters to them.
Right! They should really be considered two different things. I've worked a lot with Django (the bad type) which people tend to love, but I've seen the horrors that it can produce. What they seem to love about it is being able to write ridiculously complicated SQL using ridiculously complicated Python. I don't get it. These types of ORMs don't even fully map to objects. The "objects" it gives you are nothing more than database rows, so it's all at the same abstraction level as SQL, but it just looks like Python. It's crazy.
SQLAlchemy is the real deal, but it's more difficult and people prefer easy.
Oh was I enthusiastic when I first got my hands on an active record ORM: "I can use all my usual objects and it'll manage the SQL for me? Wow!". That enthusiasm reached rock bottom rather quickly as soon as I wanted to fine tune things. Turns out I'm not a fan of mutating hierarchical objects and then calling a magical .commit()-method on it, or worse: letting the ORM do it implicitly. That abstraction is just not for me and I'd rather get my hands "dirty" writing SQL, I guess.
Yes I suspect a lot of the ORM hate comes 1) from people using poorly designed ones or 2) from people working on projects that don't really require the features I mentioned. Like if you are generating reports that just issue a bunch of queries and then dump the results to the screen you probably don't care that much about the lifetime of what you've retrieved. But just because an ORM might not be the right tool for your project doesn't make it a bad tool overall, that would be like saying hammers are bad tools because they can't be used to screw in screws.
Before becoming too overconfident in SQLite note that Rebello et al. (https://ramalagappan.github.io/pdfs/papers/cuttlefs.pdf) tested SQLite (along with Redis, LMDB, LevelDB, and PostgreSQL) using a proxy file system to simulate fsync errors and found that none of them handled all failure conditions safely.
In practice I believe I've seen SQLite databases corrupted due to what I suspect are two main causes:
1. The device powering off during the middle of a write, and
2. The device running out of space during the middle of a write.
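For what it's worth, SQLite's documented knobs here are the journal mode and the synchronous level; a minimal sketch below. They improve crash safety for power loss, but they don't address the fsync-error handling that the paper tests.

    import sqlite3

    conn = sqlite3.connect("app.db")
    # Write-ahead logging: the main database file stays valid even if a write
    # to the log is torn; recovery replays or discards the log on next open.
    conn.execute("PRAGMA journal_mode=WAL")
    # FULL asks SQLite to fsync at the critical points before acknowledging a
    # commit; NORMAL trades some of that durability for speed.
    conn.execute("PRAGMA synchronous=FULL")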
I'm pretty sure that's not where I originally saw his comments. I remember his criticisms being a little more pointed. Although I guess "This is a bunch of academic speculation, with a total absence of real world modeling to validate the failure scenarios they presented" is pretty pointed.
I believe it is impossible to prevent data loss if the device powers off during a write. The point about corruption still stands, and the term appears to be used correctly from what I skimmed of the paper. Nice reference.
> I believe it is impossible to prevent data loss if the device powers off during a write.
Most devices write sectors atomically, and so you can build a system on top of that that does not lose committed data. (Of course if the device powers off during a write then you can lose the uncommitted data you were trying to write, but the point is you don't ever have corruption, you get either the data that was there before the write attempt or the data that is there after).
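A file-level analogy of that old-or-new guarantee is the classic write-temp-then-rename pattern; databases do the same thing at the page/journal level with their write-ahead log or rollback journal. A rough POSIX-flavored sketch (the function name is made up, and it assumes rename is atomic on the filesystem in question):

    import os

    def atomic_write(path: str, data: bytes) -> None:
        """Readers see either the old contents or the new contents,
        never a partially written file."""
        tmp = path + ".tmp"
        with open(tmp, "wb") as f:
            f.write(data)
            f.flush()
            os.fsync(f.fileno())   # push the new bytes to stable storage
        os.replace(tmp, path)      # atomic rename: old-or-new, never torn
        # fsync the directory so the rename itself survives power loss
        dir_fd = os.open(os.path.dirname(os.path.abspath(path)), os.O_DIRECTORY)
        try:
            os.fsync(dir_fd)
        finally:
            os.close(dir_fd)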
Only way I know of is if you have e.g. a RAID controller with a battery-backed write cache. Even that may not be 100% reliable but it's the closest I know of. Of course that's not a software solution at all.
That's uh, not running out of power in the middle of the write. That's having extra special backup power to finish the write. If your battery dies mid cache-write-out, you're still screwed.
> In the case of Google OAuth, it's possible to forego this in order to allow any Google user from any Google workspace to login to your application.
There are plenty of use cases where this is appropriate. If you wanted to allow users to login to Hacker News with their Google accounts you would use this option because you do not care what workspace they belong to.
> Some applications (e.g. Tailscale) take advantage of the public Google OAuth API to provide private internal corporate accounts.
This is a misuse of the public Google OAuth API. Your first link clearly states: "A public application allows access to users outside of your organization (@your-organization.com). Access can be from consumer accounts, like @gmail.com, or other organizations, like @partner-organization.com." In other words it is intended for scenarios where you want to allow access to users outside your workspace.
> Instead, Google instructs you to look at the "hd" parameter, specific to Google, to determine the Google Workspace a given user belongs to for security purposes.
According to your second link the "hd" parameter only tells you what domain the user belongs to, it does not tell you what workspace the user belongs to.
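For what it's worth, the check itself is straightforward with the google-auth library; here's a rough sketch (the function and parameter names are illustrative), and per the point above it is a domain check, not proof of membership in one specific workspace:

    from google.oauth2 import id_token
    from google.auth.transport import requests

    def verify_domain_user(token: str, client_id: str, allowed_domain: str) -> dict:
        # Validates the signature, expiry, and audience of the Google ID token.
        claims = id_token.verify_oauth2_token(token, requests.Request(), client_id)
        # "hd" is only present for Google-hosted domains; absence means a
        # consumer account, and its value is just the user's domain name.
        if claims.get("hd") != allowed_domain:
            raise ValueError("account is not in the expected hosted domain")
        return claims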
> You can avoid this issue by using a custom Google OIDC IdP configured for internal access only in your applications, rather than using a pre-configured public Google OIDC IdP
So Google offers an OAuth integration option that actually restricts access to your specific workspace. Choosing to ignore this option and instead integrating with the option designed for public access from all Google accounts, and then calling it a vulnerability when someone can log in with an account from another workspace, is, frankly, absurd.
> This is a misuse of the public Google OAuth API. Your first link clearly states: "A public application allows access to users outside of your organization (@your-organization.com). Access can be from consumer accounts, like @gmail.com, or other organizations, like @partner-organization.com." In other words it is intended for scenarios where you want to allow access to users outside your workspace.
> According to your second link the "hd" parameter only tells you what domain the user belongs to, it does not tell you what workspace the user belongs to.
From the docs:
> If you need to validate that the ID token represents a Google Workspace or Cloud organization account, you can check the `hd` claim, which indicates the hosted domain of the user. This must be used when restricting access to a resource to only members of certain domains. The absence of this claim indicates that the account does not belong to a Google hosted domain.
Note also that even Google conflates "domains" with "Google hosted domains" with "Google Workspace or Cloud organization accounts."
> Choosing to ignore this option and instead integrating with the option designed for public access from all Google accounts, and then calling it a vulnerability when someone can log in with an account from another workspace, is, frankly, absurd.
At this point, if you still believe calling this a vulnerability is absurd, I don't think there's anything more I can say to convince you. Google paid out the bounty because they didn't believe it was absurd.
I personally think that the best counterargument to calling it a vulnerability is: "well, sure, Google is reusing the Google Workspace identifier for different workspaces, which could be used to impersonate a user; but if you own the domain, you can also receive email as that user and reset the account that way."
I suppose this comes down to the interpretation of the documentation. Note that it only says "a workspace", not "a specific workspace" or "which workspace".
1) The "hd" claim tells you that the user is a member of a workspace. If the user is a member of a workspace it tells you the domain name of that workspace.
2) The "hd" claim tells you which specific workspace the user is a member of.
You are taking interpretation (2) whereas I am taking interpretation (1). I believe interpretation (1) is correct given the next sentence says you can use the "hd" claim to restrict access to only members of certain domains. If interpretation (2) was intended, they could have instead said you can use the "hd" claim to restrict access to only members of a certain workspace.
If Google is at fault for anything here it is for writing confusing documentation, however given the totality of the documentation where:
a) Google describes public applications as intended for logins from all Google accounts regardless of workspace, and
b) Google offers the internal application option for situations where you want to restrict logins to users of a specific workspace,
I'm going to stand by my conclusion that the real fault lies with service providers choosing the wrong integration option in the first place and then making invalid assumptions about what information the "hd" claim supplies in the public option.