Well, Mint messes up all the time: transactions are not updated even though Mint says it's up to date, transactions are listed multiple times, etc. It's mildly inconvenient, but not really "risky." Are Plaid and other bank-scraping services used for anything important?
> "I'm partial to Iran because a multi-lateral treaty was signed. Iran stopped nuclear enrichment. In exchange, western countries pledged to provide economic relief and stop sanctions."
Iran stopped its nuclear enrichment program, so the sanctions were waived. Iran continued working on a ballistic missile program (capable of carrying nuclear warheads) and continued destabilizing countries throughout the Middle East (through proxy terrorism) - so the sanctions came back.
In other words - temporarily halting the nuclear enrichment program alone should not excuse Iran from being held accountable.
Pot, kettle, black. The US are continuously destabilising countries they don’t like, Iran included; in fact, most powerful countries do, all the time, from Russia to France to Germany. That doesn’t justify breaking treaties on a whim, like Trump has done.
The US had built up a reputation as a violent and aggressive bully after 2003. Obama worked hard at correcting that, but the new guy has thrown it all away again. The US at the moment look seriously bipolar, from the outside.
> "The first fix was to avoid calling CommandLineToArgvW by manually parsing the command-line."
> "The second fix was to delay load shell32.dll."
If your build pipeline is continuously spawning processes all over, to the point that "delay loading" makes a significant difference, it's time to start re-evaluating the entire pipeline and the practices employed.
Do you know of a build system that can handle a source tree as large as an entire web browser without spawning a lot of processes?
It's hard to tell what, if anything, you are recommending here. Pass thousands of files to a single compiler invocation? Ignore the problems and stop trying to make process creation and clean-up faster?
> Are they employing caching of binaries/object files? Are they running a continuous build? Which parts of the build actually take the most time to go through? Can they benefit from building concurrently on multiple machines?
So you're suggesting things they already do.
And if you're using multiple machines, you probably want to max out each machine, so it matters a lot if there's code that falls over and dies when applied to a large number of cores.
> Pass thousands of files to a single compiler invocation?
Sure. Or pass it a file with all the filenames. Or have the compiler work as a server that takes compilation requests over a socket. It's not like passing thousands of filenames between two processes is a deep unsolved problem.
Technically, it doesn't have to. You can put your whole test suite into a single executable, which makes it run extremely fast for C/C++ projects, where process startup is often much slower than running a single test unit. This approach is used in some OSS projects I've worked on, but it also has its downsides.
Some tests (e.g. unit tests) can run like this, yes. Other tests, including some benchmarks, are not meaningful when wrapped in a single process. Invocation speed and specifics matter.
Did you read the entire article? The whole point was that "delay loading" a particular DLL prevents a static analysis in the compiler from inserting hooks to perform expensive operations.
> "Did you read the entire article? The whole point was that "delay loading" a particular DLL prevents a static analysis in the compiler from inserting hooks to perform expensive operations."
I actually have read the entire article. Have you?
Your explanation has absolutely nothing to do with the performance gains observed. Moreover, in the context of delay-loaded DLLs, your explanation actually makes no sense whatsoever.
Delay-loaded DLLs, a linker/loader optimization Microsoft has offered since the days of Visual C++ 6.0 (1998), simply mean most process invocations in OP's case won't actually end up loading said DLLs, reducing the amount of time spent in DLL_PROCESS_ATTACH/DLL_PROCESS_DETACH (and specifically during destruction, in the kernel).
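(For reference, enabling this in the MSVC toolchain looks roughly like the following; /DELAYLOAD plus the delayimp.lib helper is the documented mechanism, though a real build would typically set this in the project's linker options rather than on the command line:)

```shell
# Mark shell32.dll as delay-loaded: the loader maps it on first call
# into the DLL instead of at process startup.
cl main.cpp /link /DELAYLOAD:shell32.dll delayimp.lib
```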
You provide no evidence of it, and ample evidence to the contrary.
> Delay loaded DLLs, a linker/loader optimization Microsoft has offered since the days of C++ 6.0 (1998), simply means most process invocations in OP's case won't actually end up loading said DLLs, reducing the amount of time spent in DLL_PROCESS_ATTACH/DLL_PROCESS_DETACH.
It also avoids loading gdi32.dll, which avoids creating a bunch of GDI objects, which avoids taking the "destroy GDI object" codepath on process termination… which is the bit that is both slow and globally serialised.
TFA's final section even demonstrates the difference it makes: a 30% increase in start time (including a 300% increase in lock-contention time) but a 200% increase in shutdown time (including a 400% increase in lock-contention time). The process shutdown is almost entirely serialised due to (as TFA and its predecessor explain) a system-wide lock held during GDI cleanup.
> "SRI has been available for two years and it still isn't being used enough."
Two years is relatively fresh. There's probably a significant number of customers with browsers that don't support this.
Can't we just implement a simple poor-man's SRI ourselves? Download the third-party script, hash it, check the hash, and proceed to exec() if all is well? This should be supported by much older browsers.
> "A teacher (also called a school teacher or, in some contexts, an educator) is a person who helps others to acquire knowledge, competences or values."
The target audience is a crucial part of the feedback loop. Removing the students from the equation sounds counter intuitive. What are the proposed alternative metrics? Who delivers the feedback? And what is the feedback based on, if not on the direct opinions of the people on the receiving end of the service?
I think the main idea here is that student evaluations only make sense if we assume that students are actually interested in acquiring knowledge, competences, or values. If their main aim in a course is a good grade, however, then their evaluations suddenly hold little value, since they will rate "easy" instructors highest.
When I was a college instructor I found that when I tried hard - putting a lot of thought into pedagogy, having weekly (open notes) quizzes, assigning challenging but fair homework - I got terrible evaluations, even though the average grade in my class was higher than other teachers who were teaching the same course. When I phoned it in, didn't really try, was very lax with the homework and often made the quizzes take-home, I got excellent reviews but the average grade in the class was worse.
> "When I was a college instructor I found that when I tried hard - putting a lot of thought into pedagogy, having weekly (open notes) quizzes, assigning challenging but fair homework - I got terrible evaluations, even though the average grade in my class was higher than other teachers who were teaching the same course. When I phoned it in, didn't really try, was very lax with the homework and often made the quizzes take-home, I got excellent reviews but the average grade in the class was worse."
It must've been a frustrating experience.
At the same time - you're making a very broad statement here based on a rather personal experience. You went from a certain regimen yielding certain results, to a different regimen yielding different results. There are way too many parameters here to draw conclusions.
Sure, but that's the sort of data we have to deal with in this arena. Who's going to run a large randomized trial where students are purposely assigned to different classes (keeping in mind that schedule conflicts already add additional constraints to this which may bias these assignments) and then, furthermore, have the instruction fixed apart from how easy the assignments are? Is it even fair to the students to knowingly assign students to relatively poorer teaching? Clearly it happens all the time, every department has that professor who is known for being a bad teacher yet they still have to assign classes.
In speaking with other grad students this seemed to be a well-known phenomenon, to the point that most other grad students intentionally didn't put much time into their teaching and basked in the positive reviews as a result. It was suggested many times to me that I was spending too much time thinking about my teaching. In my case, the lax teaching was not intentional, I simply was overcommitted that semester and had less time to prepare.
There are a bunch of ratings 1-10, and then a place to optionally add comments. It's hard to consolidate information from 200+ comments, but, then again, most students do not fill out the comment section anyway.
(In a sense, I guess the whole thing is optional - after all, it is anonymous, and we don't check that every student has filled the whole thing out. It also switched from in-class to an online form during the time I was teaching, and the response rate went down quite a bit as a result, but this was after the situation I mentioned above.)
Isn’t it up to the universities to provide the majority of their value in the form of education rather than in the form of a credential?
Not only do such institutions seem unconcerned about cancerously growing credentialism, they’ve embraced it as a money-making scheme. See e.g. the explosion of terminal master's programs.
> "isn’t it up to the universities to provide the majority of their value in the form of education"
I'm not sure how this statement addresses my claims.
How do we measure/quantify the quality of education? I claim that the way students feel about the staff and the institution should be taken into account, rather than dismissed.
You say that the evaluations of students that are only looking for good grades are useless. What colleges are selling and what students are buying, at very high prices, are credentials.
If colleges, and those that work at them, want to be in the education business rather than the credential-selling business, they ought to take a good long look at where and how they went wrong instead of constantly trying to push the blame onto people that, again, are just buying what they are selling.
> "gene is for CCR5, which currently doesn’t have known function"
It's in the article:
> "Even if editing worked perfectly, people without normal CCR5 genes face higher risks of getting certain other viruses, such as West Nile, and of dying from the flu. Since there are many ways to prevent HIV infection and it's very treatable if it occurs, those other medical risks are a concern, Musunuru said."
People with HIV just have to pop a couple pills a day now to take their viral loads to literally undetectable levels. With treatment, people with HIV can expect to live normal lives, and be healthy and active. It's also almost impossible to transmit HIV with such low viral loads.
Truvada, which is an HIV prevention drug, was priced at ~$1500/mo. I mention it because I know the chemicals in Truvada are similar, but IDK the price comparison. Most people on HIV meds are probably hitting their out-of-pocket max pretty early in the year.
I have had two high-deductible plans in the last two years (work changed providers), and here are the things I remember:
Old Insurance:
Deductible: $3,000
Tier: 2
Price: 100%/$60
New Insurance:
Deductible: $3,000
Tier: 3 (maybe 4, I can't remember)
Price: 100%/25%
It was effectively free to me on my old insurance with the copay card (covered $3600), because I would hit my deductible and then have enough to cover the co-pays after. On my new insurance it's not feasible for me to pay ~$400/mo until I hit my OOP max ($6000). You might be asking why I didn't look at a PPO plan instead of high deductible: because of the drug tier, there was no difference in the price.
Let's just say you want to have insurance. ARVs are something that needs to be free, even more so than many other health treatments. There is a huge negative externality to having untreated HIV, since it can be transmitted. If we got everyone tested consistently, and got anyone with HIV on immediate treatment, infection rates would plummet.
HIV tests are still not included in routine bloodwork, even though other diseases like hepatitis are.
If you're on modern treatments, life expectancy is near that of a person without HIV.
The biggest problem is modern medicines aren't available to everyone across the world. So it's still a much more serious problem in developing nations.
According to the OED[0], "treatable" is a boolean that means "able to be cured". "Very" is superfluous here - something is able to be cured or not, I'm not sure what room there is for degree. As far as I know, there is no cure for HIV (but I'd be happy to learn otherwise).
So unless "very treatable" is medical jargon with a different meaning from how the laity use it, calling HIV "very treatable" does seem to be a stretch.
codewarrior
5a8eb7db8f0e
162.243.103.238
PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.
64 bytes from 8.8.8.8: icmp_seq=1 ttl=123 time=0.668 ms
--- 8.8.8.8 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.668/0.668/0.668/0.000 ms
162.243.103.238 is a DigitalOcean address. My server's log indicates the curl command actually pulled the file. Please secure your services or they will be abused by wrongdoers. In all honesty, I would advise taking the entire service down until this is fully mitigated.
For anyone else who runs into this: you can restrict the set of capabilities each container can use. This can, for example, deny mount operations, socket access, etc. You do this via "docker run" with --cap-add or --cap-drop [2]. This type of stuff is great for running docker-in-docker for these types of learning tools or Jenkins builds. You'll need to play around with it, though, to make sure it'll work for you.
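A hypothetical invocation, just to sketch the idea (the image name and script path are made up; the flags are standard Docker CLI options):

```shell
# Start with no capabilities at all and lock down the obvious abuse vectors:
# no network, a process cap against fork bombs, and CPU/memory limits.
docker run --rm \
  --cap-drop=ALL \
  --network=none \
  --pids-limit=64 \
  --memory=256m \
  --cpus=0.5 \
  sandbox-image python3 /code/main.py
```

Cutting network access alone would have stopped both the curl download and the traffic-relay worry mentioned elsewhere in the thread.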
Responsible disclosure is meant for cases where disclosure could jeopardize user data or user devices. It's reasonable to assume none of that applies to a brand-new service. Especially now, everyone is learning from it.
Seems like a lot of the damage could obviously have been avoided by just blocking anything like os.system(), which for the purpose (not the effect here) of Duolingo-style education should have been just fine.
I get your point and the other guy’s too. I line up on the side that disclosures should be messy and embarrassing sometimes, as incentive to really think about what you are doing. The danger here is low.
Chances are, they are a beginner. This is someone’s side project not a site backed by a huge corporation. They deserve responsible disclosure so they can learn from it.
On the flip side, if such an embarrassing exploit is found in the first hour, maybe it’s the equivalent to making the student read their note out to the entire class as punishment?
All code is executed in a docker container and destroyed after runtime completes (or times out). It's also contained on a remote server completely unrelated to the functioning of the web app itself. Over the past year and a half I've hired two developers familiar with how docker works to find security exploits. None could find any. Can you still access the file you created?
It's obviously running in a container. I'm not sure your code really shows anything too concerning if they are taking precautions outside of the container to mitigate things like DDoS etc.
I'm just not sure what you are alleging. Just because you have full "shell" access to the container doesn't necessarily imply anything needs to be mitigated.
What specifically are your concerns? What about what you've learned will create an exorbitant bill?
> "What specifically are your concerns? What about what you've learned will create an exorbitant bill?"
Abusing the containers to send large amounts of outgoing traffic would do just that. Downloading files would do that too. How about sending a "while(true) { }" to hog some CPU? It doesn't take much to cause significant monetary damage.
Depending on their set-up, those containers could contain credentials or some other means to compromise the rest of the website. Perhaps it is possible to re-use the containers across different "sessions", serving multiple clients with malicious traffic. Those are plausible scenarios.
I'm not carrying out a full PT right now. Demonstrating the platform has been compromised is more than enough. Any other questions?
Running ps shows the timeout command as PID 1, and evidently an infinite loop gets killed after some point. In fact, there aren't any other processes besides sh, node, and the Python interpreter. I'm not familiar enough with containers to know the details, but presumably this is a PID namespace at work: inside the container's own namespace, timeout really is PID 1 as far as ps can see.
But you don't know that they aren't using cgroups or a proxy to throttle traffic or cpu access, right? To me, it seemed that your message was overly dramatic when you didn't really prove anything. Depending on their set-up indeed. I just don't see the compromise in your analysis.
You're right but the fact that he was able to curl a file from the outside does seem pretty bad. It means that you can effectively proxy traffic through the website and use it to target 3rd parties.
Just because you have full "shell" access to the container doesn't necessarily imply anything needs to be mitigated.
What do you mean? It's an arbitrary RCE - a scenario that's generally treated as game over. What specifically are the concerns you don't have if that happens to a system of yours?
In a very carefully configured container regime, you might not be getting anything other than the access the API already had to run code submitted by users. You'd have RCE, but your code can only do the limited set of things permitted by the container; it might not have access to any meaningful filesystem, or to the network, or to the container engine.
It's very tricky to create and maintain those kinds of container systems, but there are services (for instance, the cloud CI providers) that do it.
On applications like these, where the premise is that you give them some kind of code and they evaluate it for you, escaping the "user interface" sandbox isn't game over; you'd have to finish the exercise of escalating to the container host or getting access to an internal network with internal APIs on it.
It's a little like getting SQL injection, but confined to some kind of SQL view. Chances are the application is doomed, but you still have to prove it.
Right, I understand that but you'd not pick a container as your primary line of isolation for running arbitrary Go code. The GP is saying - it's cool, because containers. Yes, perhaps in exceptionally skilled hands, maybe this is sane but generally it's not. The official go playground's first sandbox is Native Client.
Remote code execution doesn't really mean much in an unprivileged container. They could be using cgroup limits, capability drops, MAC, seccomp, etc.
Now, I'm not saying that containers are super tight by default. It is entirely possible this particular container env is wide open, but I didn't really see anything too concerning in the parent's analysis.
Remote code execution doesn't really mean much in an unprivileged container.
It means pretty much everything. Have you ever heard of someone reporting an RCE in a major service and it being treated as no big deal? They're invariably treated as catastrophic compromises, because they are. The jump from RCE to privilege escalation, escape, etc. is nothing compared to the actual RCE-ing.
when the platform is itself about RCE... I think you have to evaluate what specific dangerous code can be executed before you start lighting the torches.
Breaking out of a container isn't the only security worry.
They have a "Register" button at the top right where you can enter a username and password. If you have shell access to the box (no matter how virtual the box is), there's a good chance you can alter the site's code and capture the passwords people enter. And knowing that, in the real world, people do reuse passwords, this could easily lead to compromising accounts on other sites.
They also have a privacy policy (linked at the bottom) in which they make all kinds of promises about not leaking your personal data. If someone can take over their machine, and they know it, and they don't shut it down, it seems like that would violate the promises made in that privacy policy.
Also, of course, an attacker could alter the site to exploit any vulnerabilities in the users' browsers, so it opens up an attack vector there. Obviously users need to keep browsers patched, but people expect the risk to be lower when visiting legitimate sites.
Even if the containers were running on the same physical machine as the webapp you'd have to break out of the container your code is in first... If you know of a container breakout exploit then you should definitely publish it!
I didn't mention anything about docker, seeing that containers are a linux kernel feature, but if you know of container escape vulnerabilities in the kernel you should publish them.
To others thinking about doing this: keep in mind this is against the law in the United States. Even if it's for a "good cause", you can't just "pen-test" (hack) anyone you want.
Granted, I really doubt anyone would prosecute over something like this, but a bigger company? Absolutely possible.
I wonder if WebAssembly could help accomplish that.
Still, even Rust has a compile-and-execute web service call accessible from the rust-lang home page. If Rust people (who tend to emphasize security) feel it is possible to secure that web service, then I'm inclined to believe them. It may be difficult though.
Actually I think the Rust folks would appreciate any feedback on the security of the service, as long as you don't DoS it or use it to do anything bad. :-)
> "He needs to be running a client-side service, not running whatever someone enters on his machine"
I would say that, for the most part, websites such as this don't actually need a real, full-blown %s-lang compiler/VM that actually executes real code on a backend server. It would be enough to tokenize and parse things on the client side and validate against an ABNF grammar via JS. This would reduce the costs involved with running such a website, and the attack surface. If you want to get fancy, you could host an in-browser Python VM, but that's overkill for a website such as this. Also, they're trying to support a fair number of languages here, not all of which have browser-targeted tooling that could compile and run the code.
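As a toy illustration of the client-side approach (everything here is invented; a real exercise checker would want a proper tokenizer/parser per language):

```typescript
// Validate a "call print with a string literal" exercise purely by
// tokenizing the student's answer: nothing is ever executed.
function isValidPrintCall(src: string): boolean {
  // Crude tokenizer: identifiers, string literals, and parentheses.
  const tokens = src.trim().match(/[A-Za-z_]\w*|"[^"]*"|'[^']*'|[()]/g) ?? [];
  return (
    tokens.length === 4 &&
    tokens[0] === "print" &&
    tokens[1] === "(" &&
    /^["'].*["']$/.test(tokens[2]) &&
    tokens[3] === ")"
  );
}
```

Anything outside the expected shape, os.system() included, is simply rejected rather than run.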
Then you end up with a system where someone can arrive at the right answer via an AST that you didn't expect, which was a frustration when I helped students with a service like (IIRC) Codecademy.
In this case, `redacted` itself is the TLD. The company that owns `.redacted` (Redacted, Inc.) has chosen to serve A records for it, which is rather uncommon.
For a while, the owners of `.ai` had a similar arrangement, but it seems to have since been taken down.
Maybe I don't see your point, but an iOS app could execute code locally. The only risk is that the device owner could compromise the device. There is no [additional] risk of another user doing so.
Because JavaScript run locally can connect to the internet, and if it's pasted into the console within a page on a domain that stores secrets in local storage/cookies, it can scoop up all your credentials or other private information and send them to some other server. Unrestricted local execution can give up full access to the local user's accounts, so it is not good. Server execution can do that and also maybe impact other users.
Apple forbids that because it wants to be able to control and validate applications on their store. If they allowed self-modifying code apps could auto-update and change their features post-install. This is not really related.
Except that hasn't been true for years in certain circumstances, particularly where the value of an app running user-created code is educational in nature. See Pythonista, Codea, Swift Playgrounds, or hell, Shortcuts.
This has much less to do with C than with the fact that SQLite is a huge codebase.
Software can be vulnerable regardless of the programming languages used.