I'm thinking along similar lines, but via USB rather than PCIe: there are plenty of USB->VGA and USB->HDMI adapters that contain a dumb graphics card. So, embed one of these and grab the video signal internally.
That way, plugging in a single USB cable would deliver the needed power, keyboard, video, and mouse. As a bonus, it could emulate a USB stick/DVD drive.
What I don't know is whether these USB video cards are initialized during early boot and usable during the UEFI/BIOS phase. Is that why they grab the HDMI?
The OS of PS4 and PS5 is apparently based on FreeBSD. Netflix uses FreeBSD for its CDN servers. pfSense and OPNsense are popular firewalls that are based on FreeBSD.
JunOS from Juniper is also based on FreeBSD (I think they're moving to Linux, though), as (were?) NetApp filers (they made heavy use of the Berkeley FFS snapshots back in the day).
FreeBSD was popular for many appliances, especially in the late 1990s and early 2000s, as it was generally rock-solid, had very mature networking, and the legal departments at the time liked the more permissive licence.
It's getting less and less common to see it, though. Sheer market-share numbers mean better performance, driver support, and user familiarity, and with companies no longer being afraid of the GPL, Linux has pretty much taken over.
It makes me a bit sad, but the OS on most Juniper gear is just a control plane for ASICs nowadays and NetApp has moved on to more advanced filesystems. Finding developers to write drivers/software for Linux is probably an order of magnitude easier.
That's related to the anti-tivoization clause in GPLv3: Apple was basically forced to stop shipping anything that updated to GPLv3. It's not just about being scared, if "scared" implies they're irrational and could have adopted it if they wanted. Legally they cannot ship it unless they change their business model.
They could though. They're just being overly cautious.
All it forbids is blocking users from running modified FOSS code, which macOS doesn't do. You can compile what you want and run it in Xcode. Even on iOS you can do this.
What TiVo did was ship FOSS code while not giving users any access to their device.
But Apple enforces code signing, and that prevents them from shipping those binaries under GPLv3 in the OS. Users can always compile them on their own (or via their favorite package manager), but Apple just can't ship them (without fundamentally changing how they operate).
There are a couple more among public Forgejo instances [0]. For instance, though it is not very clear what they offer exactly, CodeFloe [1] mentions priced tiers in their FAQ.
> The priced tiers only exist to cover additional hardware costs of individual users which require extended resources for storage and CI/CD (CPU/Mem) in a transparent manner.
Sadly, as soon as I open the site in a private window with access to Notifications denied, a full-page error screen about "Odoo Client Error" appears, asking you to report... something... to someone...? Not a good look at all.
I think it's good advice not to pass secrets through environment variables. Env vars leak a lot: think php_info, Sentry, Java VM dumps, etc. They also leak into sub-processes if you don't pay extra attention. Instead, read secrets from a vault or from the file system from _inside_ your process. See also [1] (or [2], which discusses [1]). Dotnet does this pretty well with user secrets [3].
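To illustrate the "read from inside your process" pattern, here's a minimal Python sketch. The `/run/secrets` path follows the Docker/Kubernetes convention of mounting secrets as individual files, and the secret name is made up:

```python
from pathlib import Path

# /run/secrets is where Docker/Kubernetes commonly mount secrets as files;
# adjust for your environment.
SECRETS_DIR = Path("/run/secrets")

def read_secret(name: str) -> str:
    # Read at the point of use, inside the process, instead of inheriting
    # the value via the environment, where it leaks into child processes,
    # crash dumps, and diagnostics pages.
    return (SECRETS_DIR / name).read_text().strip()

db_password = read_secret("db_password")  # hypothetical secret name
```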
> Instead, read secrets from a vault or from a file-system from _inside_ your process.
I’ve never liked making secrets available on the filesystem. Lots of security vulnerabilities have turned up over the years that let an attacker read an arbitrary file. If retrieving secrets is a completely different API from normal file IO (e.g. inject a Unix domain socket into each container, and the software running on that container sends a request to that socket to get secrets), that is much less likely to happen.
God, this is such a prime example of how we just don't do security well enough industry-wide, and then you end up with weird, stupid stuff like encryption being an enterprise paid feature.
Secrets have to be somewhere. Environment variables are not a good place for them, but if you can't trust your filesystem to be secure, you're already screwed. There's nowhere else to go. The only remaining place is memory, and it's the same story.
If you can't trust memory isolation, you're screwed.
As a counterintuitive example from a former insider: virtually no one is storing secrets for financial software on an HSM. Almost no one does it, period.
> Secrets have to be somewhere. Environment variables are not a good place for them, but if you can't trust your filesystem to be secure, you're already screwed. There's nowhere else to go. The only remaining place is memory, and it's the same story.
There's a whole class of security vulnerabilities that let you read from arbitrary files on the filesystem. So if you end up with one of those vulnerabilities and your secret is in a file, the vulnerability lets the attacker read the secret. And on Linux, if you have such a vulnerability, you can use it to read /proc/PID/environ and get the environment variables, hence getting secrets in environment variables too.
However, the same isn’t necessarily true for memory. /proc/PID/mem isn’t an ordinary file, and naive approaches to reading it fail. You normally read a file starting at position 0; reading /proc/PID/mem requires first seeking to a mapped address (which you can get from /proc/PID/maps); if you just open the file and start reading it from the start, you’ll be trying to read the unmapped zero page, and you’ll get an IO error. Many (I suspect the majority) of arbitrary-file read vulnerabilities only let you read from the start of the file and won’t let you seek past the initial unreadable portion, so they won’t let you read /proc/PID/mem.
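A quick Python sketch of that difference, inspecting the current process for the demo (reading another PID additionally needs ptrace-level permissions):

```python
import os

pid = os.getpid()  # demo against our own process

# environ behaves like a normal file: a sequential read from offset 0 works
with open(f"/proc/{pid}/environ", "rb") as f:
    env_vars = f.read().split(b"\0")

# mem does not: find a mapped range in /proc/PID/maps and seek there first;
# a naive read from offset 0 fails with an IO error on the unmapped zero page
with open(f"/proc/{pid}/maps") as f:
    start = int(f.readline().split("-")[0], 16)

with open(f"/proc/{pid}/mem", "rb") as f:
    f.seek(start)
    page = f.read(4096)  # only works after seeking to a mapped address
```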
Additionally, there are hardening features to lock down access to /proc/PID/mem, such as kernel.yama.ptrace_scope or prctl(PR_SET_DUMPABLE). That kind of hardening can interfere with debugging, but one option is to leave it on most of the time and only temporarily disable it when you have an issue to diagnose.
Also, memfd_secret supports allocating special memory for secret storage, which even the kernel can't read, so it shouldn't be accessible via /proc/PID/mem.
> There's a whole class of security vulnerabilities that let you read from arbitrary files on the filesystem.
This is maybe putting the cart before the horse a little bit. The reason there's a class of vulnerabilities that allow arbitrary reads is that we, as an industry, have decided to classify file access as a vulnerability. It's not that file access is somehow materially different from or easier than any other security issue; it's just that we set it as one of the goals of an attack.
If you decide that an attack is successful when it reads a file, then you'll obviously get a clustering of successful attacks that read files.
It isn’t just about preventing vulnerabilities, it is also about limiting the damage they can cause. Suppose you have a web app, with customer data in a remote relational database. An arbitrary file read vulnerability, in itself, might not actually help an attacker in stealing your customer data, since it is in a remote DB not the web app’s filesystem. But if that vulnerability enables them to exfiltrate database credentials, that gets them one step closer to actually stealing your customer data, which can be an enormously costly legal and PR headache. (By itself, those credentials won’t be that useful, since hopefully your firewall will block direct public access to the DB - but a lot of successful attacks involve chaining multiple vulnerabilities/weaknesses - e.g. they compromise some employee laptop that lets them talk to the DB but they don’t have credentials, and now they have the credentials too.)
Whereas, if all they manage to steal using a file-read vulnerability is the code of your web app (possibly even just the binaries, if you are using a compiled language like Go or Java), that's not good either, but it is a much smaller headache. You'd much rather be telling the CEO "attackers stole the binaries of our app" than "attackers stole all the PII of our customers". Both are bad, but the second is a lot worse. The first kind of attack you possibly won't be obliged to disclose; the second you legally will be.
It strikes me that those environments might be particularly prone to corporate inertia, e.g. "the current way passed the security audit, don't change it or we need to requalify".
It's possibly also harder to rely on an HSM when your software is in a container? (I'm guessing here, though.)
It's a useless, unprovable generalisation from a supposedly omniscient "insider". I know of at least one finance organisation using HSMs as you'd expect.
Yeah, you don't have to trust me; there are plenty of software engineers working in finance who can tell you the same. Or they're using outdated ciphers, or they're storing information in plaintext or in logs, or they have no security playbooks.
It's irrelevant to me whether you believe it; it's happening today, with some of the top financial institutions and their subsidiaries, and it's the same bureaucratic nonsense to get those teams to do something about it as it is anywhere else.
There isn't a right answer. It's just that people don't understand that one doesn't provide any meaningful benefit over the other (in the context of storing secrets), but the security "experts" are always eager to claim "X is insecure, do Y instead, it's best practice btw"
Unless I'm missing something, there are three scenarios where this comes up:
1. You are using a .env file to store secrets that will then be passed to the program through env vars. There's literally no difference in this case; you end up storing secrets in the FS anyway.
2. You are manually setting an env var with the secret when launching a program, e.g. SECRET=foo ./bar. The secret can still be easily obtained by inspecting /proc/PID/environ. It can't be read by other users, but neither can the files in your user's directory (.env/secrets.json/whatever).
3. A program obtains the secret via some other means (network, user input, etc). You can still access /proc/PID/mem and extract the secret from process memory.
So I'm assuming that what people really want is passing the secret to a program and having that secret not be readable by anything other than that program. The proper way to do this is using some OS-provided mechanism, like memfd_secret in Linux. The program can ask for the secret on startup via stdin, then store that secret in the special memory region designed for storing secrets.
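A rough Python sketch of that flow, using ctypes because the syscall isn't wrapped by the standard library. The syscall number is for x86_64 (arch-specific), and memfd_secret needs Linux 5.14+ with secretmem enabled, so treat this as a sketch under those assumptions:

```python
import ctypes
import ctypes.util
import mmap
import os
import sys

libc = ctypes.CDLL(ctypes.util.find_library("c"), use_errno=True)
SYS_memfd_secret = 447  # x86_64 syscall number; differs on other architectures

fd = libc.syscall(SYS_memfd_secret, 0)
if fd < 0:
    raise OSError(ctypes.get_errno(),
                  "memfd_secret unavailable (needs 5.14+, secretmem enabled)")

secret = sys.stdin.readline().rstrip("\n").encode()  # secret arrives on stdin
os.ftruncate(fd, len(secret))

# These pages are removed from the kernel's direct map: they won't show up
# in /proc/PID/mem, core dumps, or hibernation images.
buf = mmap.mmap(fd, len(secret))
buf[:] = secret
```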
The main security benefit of byzantine paranoid security best practices is that they massively hinder productivity. If you can't make a system, the system will have no vulnerabilities.
I'd wager that, in the context of web apps, over time there have been many more (or more readily exploitable) arbitrary file read/directory traversal/file inclusion vulnerabilities than remote code execution ones, so the preference for having secrets in memory as env vars may stem from that. You're also probably not reading from /proc/self/mem without code execution either.
Well, if there's an arbitrary file read, shouldn't the attacker be able to just read /proc/PID/environ anyway? It behaves like a regular file in that regard, unlike /proc/PID/mem, which requires seek operations to read data.
Well, I’d be the first to admit that we have a gap here, the solution that I personally would consider ideal doesn’t seem to actually exist, at least on the server-side.
If we are running under something like K8s or Docker, then I think there should be some component that runs on the host, provides access to secrets over a Unix domain socket, and then we mount that socket into each container. (The reason I say a Unix domain socket is that the component can then use SCM_CREDENTIALS/SO_PEERCRED/etc to authenticate the containers.) I'd also suggest not using HTTP, to reduce the potential impact of any SSRF vulnerabilities (although maybe that's less of a risk given many HTTP clients don't work with Unix domain sockets, or at least not without special config). (Can we pass memfd_secret using SCM_RIGHTS?)
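A minimal Python sketch of such a broker, assuming a host-side daemon; the socket path, secrets directory, and uid policy are all made up. SO_PEERCRED gives you the connecting process's pid/uid/gid:

```python
import os
import socket
import struct

SOCK_PATH = "/run/secrets-broker.sock"  # hypothetical path
ALLOWED_UID = 1000                      # hypothetical policy: one permitted uid

def lookup_secret(name: str) -> str:
    # Hypothetical store: a root-only directory of secret files.
    with open(f"/etc/broker-secrets/{name}") as f:
        return f.read().strip()

def peer_creds(conn: socket.socket):
    # struct ucred on Linux is three native ints: pid, uid, gid
    data = conn.getsockopt(socket.SOL_SOCKET, socket.SO_PEERCRED,
                           struct.calcsize("3i"))
    return struct.unpack("3i", data)

srv = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
if os.path.exists(SOCK_PATH):
    os.unlink(SOCK_PATH)
srv.bind(SOCK_PATH)
srv.listen(8)

while True:
    conn, _ = srv.accept()
    with conn:
        pid, uid, gid = peer_creds(conn)
        name = conn.recv(256).decode()
        if uid == ALLOWED_UID:          # authenticate by kernel-verified uid
            conn.sendall(lookup_secret(name).encode())
```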
For desktop and native mobile, I think the best practice is to use the platform secret store (Keychain on macOS/iOS, Freedesktop Secret Service for desktop Linux, Android Keystore, Windows Credential Manager API, etc). But for server-side apps, those APIs generally aren’t available (Windows excepted). Server-side Linux often lacks desktop Linux components such as Freedesktop APIs (and even when they’re present, they aren’t the best fit for server-side use cases)
The problem with .env files is that you are doing both.
You have a .env file that is in the same directory as your code, and you just copy it to env vars at some point. This does not even meet the security principles that dotenv is supposed to implement!
I think people are blindly following the advice "put secrets in env vars" without understanding that the point of it is to keep secrets out of files your app can read, because then a vulnerability or misconfiguration that lets people read those files doesn't leak the secrets.
What you can do is have environment vars set outside your code, preferably by another user. You do it in your init system or process supervisor. Someone mentioned passing them in from outside a docker container in another comment.
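For the init-system route, a minimal systemd sketch (unit name and paths are hypothetical):

```ini
# /etc/systemd/system/myapp.service (sketch)
[Service]
User=myapp
# The env file is read by systemd before the service runs as the app user,
# so it can be root-owned with mode 0600; the app user never reads the file.
EnvironmentFile=/etc/myapp/secrets.env
ExecStart=/usr/local/bin/myapp
```

Newer systemd also offers LoadCredential=, which exposes the secret as a file under $CREDENTIALS_DIRECTORY instead of the environment, avoiding the environ-leak problem entirely.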
> people are blindly following the advice "put secrets in env vars" without understanding that the point of it is to keep secrets out of files your app can read, because then a vulnerability or misconfiguration that lets people read those files doesn't leak the secrets.
The problem with this is that, on Linux, the environment is a file: /proc/self/environ.
And yes, as has been mentioned in some other comments, the process memory is also a file, /proc/self/mem, but it is a special file that can only be read using special procedures, whereas /proc/self/environ behaves much more like a normal file. So a lot of vulnerabilities that enable reading /proc/self/environ wouldn't enable reading /proc/self/mem.
Technically, one workaround on Linux is to not mount /proc (or at least not in your app's container), but doing that breaks a lot of things.
I think dotenv would be fine as long as it doesn't raise exceptions if no .env file is found, i.e. if it works just as a helper for local dev and as a no-op for production
Not using env vars is security through obscurity. If someone has SSH access to your container, it doesn't matter whether the secrets are in a file or in memory. The attacker has as much access as the app itself.
On the other hand, .env files can leak in different ways, like a developer mistakenly committing secrets to git or making the file available to the world wide web.
The filesystem is fine, but we really shouldn't be using .env files that get loaded into environment variables due to them leaking in a few different ways.
Disagree here. Basically, if you use Docker (which, for most of the stuff you mention, you should), environment variables are pretty much how you configure your containers, and a lot of server software packaged up as Docker containers expects to be configured this way.
Building a lot of assumptions into your containers about where and how they are being deployed kind of defeats the point of using containers. You should inject configuration, including secrets, from the outside. The right time to access secret stores is just before you start the container, as part of the deploy process or VM startup in cloud environments. And then you use environment variables to pass the information on to the container.
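A sketch of that deploy-time flow using the Docker SDK for Python (pip install docker); fetch_secret() is a stand-in for whatever secret store you actually use, and the image name and path are made up:

```python
from pathlib import Path

import docker  # Docker SDK for Python

def fetch_secret(name: str) -> str:
    # Hypothetical: replace with a call to your real secret store
    # (Vault, AWS Secrets Manager, ...). Here it reads a root-only file.
    return Path(f"/etc/deploy-secrets/{name}").read_text().strip()

client = docker.from_env()
client.containers.run(
    "myorg/myapp:latest",  # hypothetical image
    environment={"DB_PASSWORD": fetch_secret("db_password")},
    detach=True,
)
```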
Of course that does make some assumptions about the environment where you run your containers not being compromised. But then if that assumption breaks you are in big trouble anyway.
Of course this tool is designed for developer machines and for that it seems useful. But I hope to never find this in a production environment.
Bouncing things is often unacceptably expensive; caches, consensus, the cost of data redistribution, etc. are all good reasons to have hot configuration for secrets.
When you launch Docker containers you can pass in process env vars or do it via a file. Nowadays people do this via Kubernetes config YAMLs, which pass the env to Docker. Or rather, they used to. Most people now use Helm charts, which pass the env to the k8s YAML, which passes it to Docker. But then they feel it's not secure enough... so a lot of people have the env split halfway between GitHub Actions secrets and AWS secrets. The YAML for your GitHub Actions config sends the AWS secret URI to the runner, which runs CDK, which grabs the AWS secret and passes it to Helm, which makes the k8s YAMLs, which pass the env to Docker, which passes it to the process.
Then I killed myself and was reborn. Now I just use an env file.
Remember we are mainly talking about dev envs here. If you put the secret key in a file...where do you put the file? In a common location for all the dotenv instances? One per dotenv instance? What if people start putting it as a dotfile in the same project directory?
Secrets are nasty and there are tradeoffs in every direction.
Environment vars propagate from process to process _by design_ and generally last the entire lifetime of the process(es). They are observable from many OS tools unless you've hardened your config, and they will appear in core files etc. Secrets imply scope and lifetime, so env variables feel very much at odds with them. Conversely, env variables are nearly perfect for config for the same reasons that make them concerning for secrets.
TL;DR: in low-stakes environments, the fact that secrets are a special type of config means you will see them being used with env vars, which are great for most config but poor for secrets. And frankly, if you can stomach the risks, it's not that bad.
Storing secrets on the filesystem, you immediately need to answer where on the filesystem and how to restrict access (and whether your rules are being followed). Is your media encrypted at rest? Do you have SELinux configured? Are you sure the secrets are cleaned up after you no longer need them? Retrieving secrets or elevated perms via sockets/local IPC has very similar problems (but perhaps at that point you're compartmentalizing all the secrets into a centralized, local point).
A step beyond this are secrets that are locked behind cloud key management APIs or systems like spiffe/spire. At this point you still have to tackle workload identity, which is also a very nuanced problem.
With secrets, every solution has challenges, and the only clear answer is to have a threat model and design an architecture and appropriate mitigations that let you feel comfortable, while acknowledging the balancing act between cost and the user, developer, and operator experience.
Yes, and I like to combine two established concepts instead of rolling my own: URI and UUIDv7. So my IDs become `uri:customer_shortname:product_or_project_name:entity_type:uuid`. An example ID could be `uri:cust:super_duper_erp:invoice:018fe87b-b1fc-7b6f-a09c-74b9ef7f4196`.
It's even possible to cascade such IDs, for example: `uri:cust:super_duper_erp:invoice:018fe87b-b1fc-7b6f-a09c-74b9ef7f4196:line_item:018fe882-43b2-77bb-8050-a1139303bb65`.
It's immediately clear, when I see an ID in a log somewhere or when a customer sends me an ID to debug something, which customer, system, and entity the ID belongs to.
UUIDv7 is time-sortable, so it's nice for the database. These IDs are not as 'human-readable' for the average Joe, but for me as an engineer it's bliss.
Often I also encode IDs I retrieve from external systems this way: `uri:3rd_party_vendor:system_name:entity_type:external_id` (e.g. `uri:ycombinator:hackernews:item:40580549:comment:40582365` might refer to this comment).
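A minimal Python sketch of the scheme. Newer Python versions are adding uuid.uuid7() to the standard library; until then you can construct one by hand per RFC 9562, as below (the customer/system/entity names are the made-up ones from above):

```python
import os
import time
import uuid

def uuid7() -> uuid.UUID:
    # RFC 9562 layout: 48-bit Unix ms timestamp | version 7 | 12 random bits
    # | variant | 62 random bits
    ts_ms = time.time_ns() // 1_000_000
    rand_a = int.from_bytes(os.urandom(2), "big") & 0x0FFF
    rand_b = int.from_bytes(os.urandom(8), "big") & ((1 << 62) - 1)
    value = (ts_ms & ((1 << 48) - 1)) << 80
    value |= 0x7 << 76        # version 7
    value |= rand_a << 64
    value |= 0b10 << 62       # RFC variant bits
    value |= rand_b
    return uuid.UUID(int=value)

def make_id(customer: str, system: str, entity: str) -> str:
    return f"uri:{customer}:{system}:{entity}:{uuid7()}"

print(make_id("cust", "super_duper_erp", "invoice"))
# e.g. uri:cust:super_duper_erp:invoice:018fe87b-b1fc-7b6f-a09c-74b9ef7f4196
```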
> And another one! I actually used this regularly on my PC to remember places I'd gone to on trips, or back in time.
Me too. High time to engage Google Takeout [1], so that I at least have a copy of all that data.
Off-topic: TIL that Google Takeout can do regular backups automatically for up to one year when you link it to a cloud storage account (GDrive, Dropbox, OneDrive, Box).
They didn't give me GPX files a few years ago either. I seem to remember it being JSON or CSV, I can't remember which right now. It was easy to parse to get the information I was after.
What the hell?? Seriously? There's no way to get my historical location data anymore? I always assumed I could rely on my google location history as a way to look back at where I've been throughout my life..
> EU will smack them with fines for it. And probably California, too.
Kind and respectful reminder that HN does not permit users to delete or download their data. That's against the law in the EU (and probably California), but it hasn't been legally tested yet.
HN is an interesting one because there's not even an email address associated with an account. Though I imagine they have logs of IPs and such. Still, a close equivalent is ad networks, which are everywhere yet don't seem to offer a way to delete data.
Yes, I know, Europeans think their laws automatically apply globally, but laws are meaningless without enforcement. Hence, 99% of US companies can safely ignore the GDPR unless they're operating in the EU (which HN is not).
So, this raises the question of when we'll see ML put in place to evade ad-blocker detection. Or ads as we know them will just disappear from the web, replaced by other kinds of ML-enabled ads. I imagine deepfake models used for interchangeable product placement in videos or pictures, or something like that.