
I think I should write more about this, but I have been feeling very similar. I've recently been exploring using Claude Code/Codex as the "default", so I decided to implement a side project.

My gripe with AI tools in the past was that the kind of work I do is large and complex, and with previous models it just wasn't efficient to provide enough context or to deal with context rot when working on a large application - especially when that application doesn't have a million examples online.

I've been trying to implement a multiplayer game with server-authoritative networking in Rust with Bevy. I specifically chose Bevy because the latest version came out after Claude's cutoff, it had a number of breaking changes, and there aren't a lot of deep examples online.

Overall it's going well, but one downside is that I don't really understand the code "in my bones". If you told me tomorrow that I had to optimize latency, or that there was a 1-in-100 edge case, not only would I not know where to look, I don't think I could tell you how the game engine works.

In the past, I could never have gotten this far without really understanding my tools. Today, I have a semi-functional game and, truth be told, I don't even know what an ECS is or what advantages it provides. I really consider this a huge problem: if I had to maintain this in production and there was a SEV0 bug, am I confident enough that I could fix it? Or am I confident the model could figure it out? Or is the model good enough that it could scan the entire code base and intuit a solution? One of these three questions has to be answered, or else brain atrophy is a real risk.


I'm worried about that too. If the error is reproducible, the model can eventually figure it out from experience. But a ghost bug that I can't find a pattern for? The model ends up in a "you're absolutely right" loop as it incorrectly guesses different solutions.

Are ghost bugs even real?

My first job had the devs working front-line support years ago. Due to that, I learnt an important lesson in bug fixing.

Always be able to re-create the bug first.

There is no such thing as ghost bugs; you just need to ask the reporter the right questions.

Unless your code is multi-threaded, to which I say, good luck!


They're real at scale. Plenty of bugs don't surface until you're running under heavy load on distributed infrastructure. Often the culprit is low in the stack. Asking the reporter the right questions may not help in this case. You have full traces, but can't reproduce in a test environment.

When the cause is difficult to source or fix, it's sometimes easier to address the effect by coding around the problem, which is why mature code tends to have some unintuitive warts to handle edge cases.


> Unless your code is multi-threaded, to which I say, good luck!

What isn't multi-threaded these days? Kinda hard to serve HTTP without concurrency, and practically every new business needs to be on the web (or to serve multiple mobile clients; same deal).

All you need is a database and web form submission and now you have a full distributed system in your hands.


nginx is single-threaded, but you're absolutely right -- any concurrency leads to the same ghost bugs.

nginx is also from the era when fast static file serving was still a huge challenge, and "enough to run a business" for many purposes -- most software written today has more mutable state, and much more potential for edge cases.

Only superficially so; await/async isn't usually like the old spaghetti multi-threaded code people used to write.

You mean in a single-threaded context like JavaScript? (Or with Python's GIL giving the impression of the same.) That removes some memory-corruption races, but leaves all the logical problems in place. The biggest change is that you only have fixed points where interleaving can happen, limiting the possibilities -- but in either scenario, the number of possible paths is so big it's typically not human-accessible.

Webdevs not aware of race conditions -> complex page fails to load. They're lucky in how the domain sandboxes their bugs into affecting just that one page.
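
To make the "fixed points where interleaving can happen" part concrete, here's a minimal sketch (Rust with tokio, purely as an illustration with a made-up bank-balance example; the same shape exists in JS or Python) of a check-then-act bug that needs zero threads:

  use std::cell::RefCell;

  // One task suspends at the .await; the other runs and passes the same
  // check against the stale balance. No threads, no data race -- just a
  // logical race at a fixed interleaving point.
  async fn withdraw(balance: &RefCell<i64>, amount: i64) {
      if *balance.borrow() >= amount {
          tokio::task::yield_now().await; // suspension point
          *balance.borrow_mut() -= amount;
      }
  }

  #[tokio::main(flavor = "current_thread")]
  async fn main() {
      let balance = RefCell::new(100);
      tokio::join!(withdraw(&balance, 80), withdraw(&balance, 80));
      println!("balance = {}", balance.borrow()); // prints -60
  }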


Historically I would have agreed with you. But since the rise of LLM-assisted coding, I've encountered an increasing number of things I'd call clear "ghost bugs" in single-threaded code. I found a fun one today where invoking a process four times with a very specific access pattern would cause a key result of the second invocation to be overwritten. (It is not a coincidence, I don't think, that these are exactly the kind of bugs a genAI-as-a-service provider might never notice in production.)

> I've been trying to implement a multiplayer game with server-authoritative networking in Rust with Bevy. I specifically chose Bevy because the latest version came out after Claude's cutoff, it had a number of breaking changes, and there aren't a lot of deep examples online.

I am interested in doing something similar (Bevy, not multiplayer).

I had the thought that you ought to be able to provide a cargo doc or rust-analyzer equivalent over MCP? This... must exist?

I'm also curious how you test if the game is, um... fun? Maybe it doesn't apply so much for a multiplayer game, I'm thinking of stuff like the enemy patterns and timings in a soulslike, Zelda, etc.

I did use ChatGPT to get some rendering code for a retro RCT/SimCity-style terrain mesh in Bevy and it basically worked, though several times I had to tell it "yeah uh nothing shows up", at which point it said "of course! the problem is..." and then I learned about mesh winding, fine, okay... felt like I was in over my head and decided to go to a 2D game instead, so I didn't pursue that further.


>I had the thought that you ought to be able to provide a cargo doc or rust-analyzer equivalent over MCP? This... must exist?

I've found that there are two issues that arise that I'm not sure how to solve. You can give it docs and point to them and it can generally figure out syntax, but the next issue I see is that without examples, it kind of just brute-forces problems like a 14-year-old.

For example, the input system originally just let you move left and right, and it popped that into an observer function. As I added more and more controls, it got littered with more and more code, until it was a ~600-line function responsible for a large chunk of game logic.

While trying to parse it I then had it refactor the code - but I don't know if the current code is idiomatic. What would be the cargo doc or rust-analyzer equivalent for good architecture?

I'm running into this same problem when trying to use Claude Code for internal projects. Some parts of the codebase just have really intuitive internal frameworks, and Claude Code can rip through them and provide great idiomatic code. Others are bogged down by years of tech debt and performance hacks, and Claude Code can't be trusted with anything other than multi-paragraph prompts.

>I'm also curious how you test if the game is, um... fun?

Luckily for me this is a learning exercise, so I'm not optimizing for fun. I guess you could ask Claude Code to inject more fun.


> What would be the cargo doc or rust-analyzer equivalent for good architecture?

Well, this is where you still need to know your tools. You should understand what ECS is and why it is used in games, so that you can push the LLM to use it in the right places. You should understand idiomatic patterns in the languages the LLM is using. Understand YAGNI, SOLID, DDD, etc etc.

Those are where the LLMs fall down, so that's where you come in. Writing the individual lines of code, after being told what architecture to use and what is idiomatic, is where the LLM shines.
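
To make that concrete for the input example above: the idiomatic ECS shape is many small systems querying only the components they need, rather than one giant handler. A rough sketch from memory of recent Bevy (names like ButtonInput and delta_seconds shuffle around between releases, so treat the details as approximate):

  use bevy::prelude::*;

  #[derive(Component)]
  struct Player;

  #[derive(Component)]
  struct Speed(f32);

  // Each concern is its own small system over components, instead of one
  // ~600-line function owning a large chunk of the game logic.
  fn movement(
      keys: Res<ButtonInput<KeyCode>>,
      time: Res<Time>,
      mut players: Query<(&mut Transform, &Speed), With<Player>>,
  ) {
      for (mut transform, speed) in &mut players {
          let mut dir = Vec3::ZERO;
          if keys.pressed(KeyCode::KeyA) { dir.x -= 1.0; }
          if keys.pressed(KeyCode::KeyD) { dir.x += 1.0; }
          transform.translation += dir * speed.0 * time.delta_seconds();
      }
  }

  fn jump(keys: Res<ButtonInput<KeyCode>>) {
      if keys.just_pressed(KeyCode::Space) {
          // send a jump event / flag a component here
      }
  }

  fn main() {
      App::new()
          .add_plugins(DefaultPlugins)
          .add_systems(Update, (movement, jump))
          .run();
  }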


What you describe is how I use LLM tools today, but the reason I am approaching my project in this way is that I feel I need to brace myself for a future where developers are expected to "know your tools".

When I look around today, it's clear more and more people are diving head first into fully agentic workflows, and I simply don't believe they can churn out 10k+ lines of code today and be intimately familiar with the code base. Therefore you are left with two futures:

* Agentic-heavy SWEs will eventually blow up under the weight of all their tech debt

* Coding models are going to continue to get better, to the point where tech debt won't matter.

If the answer is (1), then I do not need to change anything today. If the answer is (2), then you need to prepare for a world where almost all code is written by an agent, but almost all responsibility is shouldered by you.

In kind of an ignorant way, I'm actually avoiding trying to properly learn what an ECS is and how the engine is structured, as a sort of handicap. If in the future I'm managing a team of engineers (however that looks) who are building a metaphorical tower of Babel, I'd like to develop a heuristic for navigating that mountain.


I ran into similar issues with context rot on a larger backend project recently. I ended up writing a tool that parses the AST to strip out function bodies and only feeds the relevant signatures and type definitions into the prompt.

It cuts down the input tokens significantly, which is nice for the monthly bill, but I found the main benefit is that it actually stops the model from getting distracted by existing implementation details. It feels a bit like overengineering, but it makes reasoning about the system architecture much more reliable when you don't have to dump the whole codebase into the context window.
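
Not my actual tool, but the core of the idea looks something like this in Rust with syn and prettyplease (the real thing would target whatever language the backend is written in, and also handle impl methods, imports, and so on):

  use syn::visit_mut::VisitMut;

  struct StripBodies;

  impl VisitMut for StripBodies {
      // Replace every free function's body with a stub so only the
      // signature survives (impl methods need visit_impl_item_fn_mut too).
      fn visit_item_fn_mut(&mut self, item: &mut syn::ItemFn) {
          item.block = Box::new(syn::parse_quote!({ unimplemented!() }));
      }
  }

  fn main() -> Result<(), Box<dyn std::error::Error>> {
      let path = std::env::args().nth(1).expect("usage: strip <file.rs>");
      let mut ast: syn::File = syn::parse_file(&std::fs::read_to_string(&path)?)?;
      StripBodies.visit_file_mut(&mut ast);
      // Print back just signatures and type definitions for the prompt.
      print!("{}", prettyplease::unparse(&ast));
      Ok(())
  }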


> I don't really understand the code "in my bones".

Man, I absolutely hate this feeling.


>Even if it is, do we really need to tolerate it?

People not only tolerate it, but I'd argue most people prefer it. I think, unlike in Singapore or Tokyo, Americans in cities largely prefer a little lived-in grime.

Mission Bay is a relatively new neighborhood in San Francisco - mostly free of graffiti and pretty much sterile - and most people would prefer to live in the Mission rather than Mission Bay. OpenAI likely pays a huge premium to HQ in the Mission rather than settling into the more corporate offices of Mission Bay or even the Financial District.

I also noticed the same in Berlin - Kreuzberg, Neukölln, and other neighborhoods in East Berlin attract the most people, despite being drenched in graffiti.

If you ever move to a city in America and tell people you live in the generally clean, spick-and-span neighborhood in that city, half the people will look at you like you have 3 heads or simply assume you have no personality. Graffiti has largely become an accepted, or even valued, feature of a neighborhood. I believe internally it separates the "cool" city inhabitants from the "losers" out in the suburbs.

Edit: I just looked through all the images in the OP and one of them is a Banksy. It's been there for over a decade. Graffiti isn't just tolerated, it's practically protected.


What do you mean? OpenAI's main offices have been in Mission Bay since 2024

Can someone explain how protobuf ended up in the middle here? I'm just totally confused; the C ABI exists in almost every language, why did they need protobuf here?

I don't know, but I have a guess. Someone didn't want to deal with the unsafety of memory allocated in C code. Serialize/deserialize makes it easy: no need for unsafe, no need to learn all the quirks of the C library allocating the memory.

I've had experience writing safe bindings to structures created in a C library, and it is a real pain. You spend a lot of time reverse engineering C code to get an idea of the intent of those who wrote it. You need to know which pointers can address the same memory. You need to know which pointers can be NULL or just plain invalid. You need to know which pointers you get from C code or pass to it along with ownership, and which are just borrowed. It may be (and often is) unclear from the documentation, so you are going to read a lot of C code, trying to guess what the authors were thinking when writing it. You generate hypotheses about the library's behavior (like 'the library never does THIS with the pointer') and try to prove them by finding all the code dealing with the pointer.

It can be easy in simple situations, or it can be really tricky and time-consuming. So it can make sense to just insert serialization/deserialization to avoid dealing with the C code directly.
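
To illustrate with a completely made-up example (none of these names come from a real library): even a trivial safe wrapper forces you to commit to hypotheses the C header usually doesn't state, which is exactly the work a serialize/deserialize boundary lets you skip.

  #[repr(C)]
  pub struct Widget { _private: [u8; 0] } // opaque C type

  extern "C" {
      // NULL on failure? Does the library keep an alias? Who frees it,
      // and exactly once? The header rarely says.
      fn widget_new(len: usize) -> *mut Widget;
      fn widget_free(w: *mut Widget);
  }

  /// Safe wrapper encoding one hypothesis: the pointer is either NULL or
  /// valid, uniquely owned by us, and must be freed exactly once.
  pub struct OwnedWidget(std::ptr::NonNull<Widget>);

  impl OwnedWidget {
      pub fn new(len: usize) -> Option<Self> {
          // SAFETY: relies on the hypothesis above, which in practice you
          // end up "proving" by reading the library's source.
          std::ptr::NonNull::new(unsafe { widget_new(len) }).map(OwnedWidget)
      }
  }

  impl Drop for OwnedWidget {
      fn drop(&mut self) {
          unsafe { widget_free(self.0.as_ptr()) }
      }
  }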


The problem, as I understand it, is that this hypothetical network where there is NAT but no firewall just does not exist.

>In commercial grade routers, the same applies except even if the external IP knew to direct the router to the right internal IP, or if the route knew to direct the traffic to the right external IP for outbound connections, unless you configure a default route, or a more explicit route, it won't forward such traffic.

This is typically handled by the firewall, not the NAT. You can easily come up with scenarios in which, without the firewall, the NAT could be trivially defeated, e.g. by port scanning.


It is not; you guys are talking from a specific American ISP perspective where you have these modem+router+gateway+firewall combo devices. Not everyone gets that.

Many get just a modem and buy a cheap router which may not have a firewall. MANY more get just a modem and their laptops are directly exposed to the internet (!!!); those you can't do much about, but many put in a "router" that's just a cheap wifi access point with layer 3 routing and NAT. If you choose to "bridge" a device (like those internet-exposed laptops) or port-forward, it will just work (even with ISP routers!!); there is no firewall rule change required.

I've worked in this space supporting consumer-grade routers, and then worked in enterprise networking. But don't take my word for it; you can all take a trip on a Shodan safari: how many devices are listening on ports 3389 and 445 with consumer-grade laptop names?

But it isn't a popular thing to say for whatever reason. I guess IPv6 is a political ideology now lol.


>Many get just a modem and buy a cheap router which may not have a firewall

What cheap router are you buying that doesn't have a firewall? I think the problem is that when people hear "firewall" they think the router is running pfSense or something. Even cheap routers will have a basic, non-configurable firewall that will block inbound connections. That is separate from NAT and has nothing to do with IPv4/IPv6.


What most people call a "router" in that context is an AP. Good ones are proper router/AP/firewall combos, but my cheap ones aren't.

Here is a good example with the user guide: https://www.tp-link.com/us/document/107360/

It's an AP that serves DHCP addresses on the LAN port. That's it. It has some port forwarding too if you set it up; no firewalling there. For modems, most cable ISPs let you buy a DOCSIS modem; there is no router, and whatever device you connect gets a DHCP lease right on the internet (and IPv6). Most people buy cheap "routers" like that one to add "wifi" to it, and it works great for the money. And honestly, I have yet to see one that does have a firewall, but then again I've never tried the $500 router options or seen someone who did.

These devices are not meant to firewall; they have no need to firewall. If you do "bridge" or "port forward" they assume you want everything forwarded; they don't let you configure any firewalling by design, and they don't have any firewalling because it isn't needed. They have a dedicated WAN port, the management interface doesn't listen on that port, and LAN devices are NAT'ed with IPv4, so there is no need to firewall anything even behind the scenes. Their main use is to either extend wifi coverage or add wifi capability to modems.

Most people with fiber or *DSL get an ISP-provided gateway which has a firewall; that's not the same as what I'm talking about.

I hate to complain about downvotes, but you all need to realize that it is the poorest and most vulnerable around the world that get hurt over this stuff. Yes, IPv6 can cause unintended internet exposure of internal devices. Period. That's not a dismissal or disapproval of IPv6; it is what it is, and that needs to be considered when deploying it. It assumes you'll configure your network properly; unfortunately the people who made IPv6 didn't consider consumers or people who screw up. They wanted to force people to configure firewalls, which works for corporations (until it doesn't) but not for most regular internet users.


The Archer BE3600 Pro you linked definitely has a stateful packet inspection (SPI) firewall https://www.tp-link.com/us/home-networking/wifi-router/arche... and the capabilities go well beyond state tracking (HomeShield Security enables more on it, plus a few userspace tools). The Archer BE3600 Pro is also not a particularly cheap device in the first place, certainly well out of reach of the poorest in the world; it's just low cost for having such a high speed.

Regardless, even with actually cheap devices, you'll find they also have the same. This is because nearly everyone, particularly the cheapest piece-of-crap CPU-forwarding 100 Mbit routers, implements NAT using netfilter https://www.netfilter.org/ on Linux. Netfilter is most commonly known for being the firewall backend of iptables/nftables rules, but the conntrack database of nf_conntrack is also what drives the NAT state of nf_nat. It's a similar story in BSD, but it's all contained in what's called "pf" (packet filter) instead of netfilter.

I.e. one, literally, cannot implement NAT on these types of devices without first invoking a firewall and populating said firewall with the connection state. The _only_ difference in defaults between IPv4 and IPv6, on even the cheapest home routers, is whether or not the NAT is enabled on top of the stateful firewall. In no case is NAT able to be enabled on these types of boxes without having the stateful firewall functionality in place. The port forwarding is also done via netfilter. I.e., an entry in the firewall.

High end devices (most people in the US do not have home routers better than the one you linked) tend to have hardware offloads for these systems (i.e. the netfilter rules can be accelerated by dedicated hardware in either the SoC or the NIC) but otherwise are identical in implementation to the cheap ones, barring the additional crap they might bundle with the device too. It's not until you get into enterprise firewalls from companies like Fortinet you start seeing truly unique custom implementations, and even then they build it the same way at the end of the day (because why would you implement state tracking twice just to be able to build NAT with less security than normal?).

There is a common conflation that a firewall is this big beefy high-end appliance which has all sorts of high-end features and a dedicated config interface because it's so honkin' complex. The reality is that a firewall is just a network stack implementation which tracks connection state and lets you perform actions on it (drop, deny, rewrite, send to userspace for an app to handle). NAT relies on the rewrite capabilities combined with the state table, and ticking the NAT box just implements a bunch of firewall rules on your behalf. Similarly, a port forward is just another rule entry which gets added to the firewall. The same ruleset which gets you NAT on home routers, minus the address & port rewriting, is what gets you a normal firewall which denies inbound.


It's possible to do NAT without firewalling in netfilter. I gave the rules for it in this comment: https://news.ycombinator.com/item?id=46709150 -- you literally only need the first one for NAT. Inserting it will make netfilter track connections, but you need the other, separate rules to do firewalling based on that state.

Most home routers will ship with those firewall rules in place, because not doing so is a security vulnerability, so in practice you're going to have the firewall, but it's not a strict requirement and routers have been discovered to not have them in the past.

At least with v6 it's more obvious that you need these rules, so it's more likely people will be checking for them.


> It's possible to do NAT without firewalling in netfilter.

That's not the claim I was making, which is that if you have netfilter/pf you are already using a device which ships a stateful firewall (and if you have NAT on a cheap home router, you have netfilter/pf). This is in response to GP's claim that there are cheap home routers which can NAT but cannot be configured as a stateful firewall, whereas your response seems to be more about how NAT can be configured.

Whether or not netfilter/pf is configured with NATs, port forwards, or block entries is a separate topic all together, somewhat split between vendor default config and what the user has changed. Regardless of what rules it's configured with at a given moment, netfilter/pf doesn't stop having the capabilities of a stateful firewall already bundled.


The NAT is a belt-and-braces approach - especially when combined with RPF. How will your packet reach 192.168.0.1 from the internet without a NAT rule to translate the packet, even if there is a firewall rule allowing all traffic?

(If you control the next hop and the router doesn't have RPF checks on the WAN interfaces, you can forge a packet with a destination of 192.168.0.1 and route it via the public IP of 40.50.60.70.)


The trigger architecture is actually quite interesting, especially because cleanup is relatively cheap. As far as compliance goes, it's also simple to declare that "after 45 days, deletions are permanent" as a catch-all, and then you get to keep restores. For example, I think (IANAL) the CCPA gives you a 45-day buffer for right-to-erasure requests.

Now, instead of chasing down different systems and backups, you can simply ensure your archival process runs regularly and you should be good.


The subcommunity that would have tweets on HN has stayed on Twitter. There are entire separate subcommunities on Twitter that have just died in the past year.

It's like saying you don't see any Instagram posts on HN, so Instagram must be tiny. It's more likely the subcommunities that post on Threads don't have overlap with HN.


I think that's if you are on PC. But on Mac, with the Photos app:

Cmd+A > File > Export Unmodified Originals


There is no straightforward way to download photos from iCloud to a Mac.


The comment you're replying to just gave one; I did it recently for 70k photos. Select all > export. That's it. It just took 30 hours.


Photos > Settings > iCloud > Download Originals to this Mac.


Given the context of the article, I think "Rust specific" here means "it couldn't be done in Python".

For example, "No interpreter startup" is not specific to Rust either.


>We don't have to wait for singular companies or foundations to fix ecosystem problems.

Geohot has been working on this for about a year, and at every roadblock he's encountered, he has had to damn near pester Lisa Su to get drivers fixed. If you want the CUDA replacement that would work on AMD, you need to wait on AMD. If there is a bug in the AMD microcode, you are effectively "stopped by AMD".


We have to platform and organize people, not rely on lone individuals. If there is a deep well of aligned interest, that interest needs a way to represent itself so that AMD has something to talk to, on a similar footing to a B2B relationship. When you work with other companies with hundreds or thousands of employees, it's natural that emails from individuals get drowned out or misunderstood as they circulate around.


Geohot isn't working by himself - this is part of his B2B company, tinygrad, which sells AMD systems and is VC-funded.

https://tinygrad.org/#tinybox

You can see in his table that he calls out his AMD system as having "Good" GPU support, vs. "Great" for Nvidia. So, yes, I would argue he is doing the work to platform and organize people, on a professional level, to sell AMD systems in a sustainable manner - everything you claim needs to be done - and he is still bottlenecked by AMD.


> everything you claim that needs to be done

A single early-stage company is not ecosystem-scale organization. It is instead the legacy benchmark to beat. This is what we do today because the best tools in our toolbox are a corporation or a foundation.

Whether AMD stands to benefit from doing more or less, we are likely in agreement that Tinygrad is a small fraction of the exposed interest, and that if AMD were in conversation with a more organized, larger fraction of that interest, AMD would do more.

I'm not defending AMD doing less. I am insisting that ecosystems can do more, and that the only reason they don't is that we haven't properly analyzed the problems or developed the tools.


I'm very familiar with the stack and the pain of trying to livestream video to a browser. If JPEG screenshots work for your clients, then I would just stick with that.

The problem with Wolf, GStreamer, Moonlight, $third party is that you need to be familiar with how the underlying stack handles backpressure and error propagation, or else things will just "not work" and you will have no idea why. I've worked on 3 projects in the last 3 years where I started with GStreamer and got up and running - and while things worked in the happy path, the unhappy path was incredibly brittle and painful to debug. All 3 times I opted to just use the lower-level libraries myself.

Given all of OP's requirements, I think something like the NVIDIA Video Codec SDK, to a WebSocket, to Media Source Extensions would work.
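
Roughly this shape on the server side (a sketch with tokio-tungstenite, not code from any of those projects; the FIFO path and chunking are made up, and in practice you'd send fragmented MP4 that MediaSource can append):

  use futures_util::{SinkExt, StreamExt};
  use tokio::{io::AsyncReadExt, net::TcpListener};
  use tokio_tungstenite::{accept_async, tungstenite::Message};

  #[tokio::main]
  async fn main() -> Result<(), Box<dyn std::error::Error>> {
      let listener = TcpListener::bind("0.0.0.0:9000").await?;
      loop {
          let (stream, _) = listener.accept().await?;
          tokio::spawn(async move {
              let ws = accept_async(stream).await?;
              let (mut tx, _rx) = ws.split();
              // Hypothetical source: the encoder writes fMP4 fragments here.
              let mut src = tokio::fs::File::open("/tmp/encoder.fifo").await?;
              let mut buf = vec![0u8; 64 * 1024];
              loop {
                  let n = src.read(&mut buf).await?;
                  if n == 0 { break; }
                  // `send` awaits until the socket drains: backpressure you
                  // can see, instead of it hiding inside a deep pipeline.
                  tx.send(Message::binary(buf[..n].to_vec())).await?;
              }
              Ok::<_, Box<dyn std::error::Error + Send + Sync>>(())
          });
      }
  }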

However, given that even this post seems to be LLM-generated, I don't think the author would care to learn about the actual internals. I don't think this is a solution that could be vibe-coded.


This is where LLMs shine, where you need to dip your toes into really complex systems but basically just to do one thing with pretty straightforward requirements.


The peak of irony, because you know that how these people arrived at their 40 Mbit H264 bitrate in the first place, and their ineffective tinkering with it, is guaranteed to be some LLM's expert suggestions. As is often the case, because they had no understanding of the really complex system subject matter whatsoever, they were unable to guide the LLM and ended up with... slop. Which then turned into a slop blog post.

God knows what process led them to do video streaming for showing their AI agent working in the first place. Some fool must have put "I want to see video of the agent working" in... and well, the LLM obliged!


  > God knows what process led them to do video streaming for showing their AI agent work in the first place.

This was my first thought, too.


How else are they going to sell it to all those micromanagers who micromanage things?


>As is often the case, because they had no understanding of the really complex system subject matter whatsoever

Something I want to harp on because people keep saying this:

Video streaming is not complicated. Every YouTuber, Twitch streamer, and influencer can manage it. By this I mean the actual act of tweaking your encoding settings to get good quality at a low bitrate.

In 3 months with an LLM, they learned less about video streaming than you can learn from a 12-year-old's 10-minute YouTube video about how to set up HyperCam 2.

Millions and millions of literal children figured this out.

Keep this in mind next time anyone says LLMs are good for learning new things!


... have you ever tried to do anything with ffmpeg? Tried to back up a DVD to a compressed file?

Video codecs are some of the most complex software I've ever encountered, with the largest number of options and the most opaque ones.

It's easy for streamers because they don't have options; Twitch et al. give you about three total choices, so there's nothing to figure out.


Video streaming has surprisingly little overlap with video codecs. Once you choose input/output options, there's little to change about the codec. The vast majority of options available to ffmpeg aren't supported in the browser. Streamers don't have options for precisely the same reason OP doesn't have options - you are limited entirely to what the browser supports.

I've built the exact pipeline OP has built - video, over TCP, over WebSockets - precisely because I had to deliver video through a corporate firewall. Wolf, Moonlight, and maybe even GStreamer just show they didn't even try to understand what they were doing, and just threw every buzzword into an LLM.

To give you some perspective, 40 Mbps is an incredible amount of bandwidth. Blu-ray is 40 Mbps. This video, in 8K on YouTube, is 20 Mbps: https://www.youtube.com/watch?v=1La4QzGeaaQ

There's really no explanation for this.


I have done a bit with ffmpeg and video encoding. I've been encoding videos using ffmpeg (from a GUI) since I was a child. I hate ffmpeg though; the UX is just insane, so I tend more towards tools that produce the arcane command structures for me.

I had a situation where I wanted to chop one encoded video into multiple parts without re-encoding (I had a deadline), and the difficulty getting ffmpeg to do sensible things in that context was insane. One way of splitting the video without re-encoding just left the first GOP without an I-frame, so the first seconds of video were broken. Another attempt left me with video that just got re-timed, and the audio was desynced entirely. I know encoding some frames will be necessary to fix where cuts would break P- and B-frames, but why is it so hard to get it to "smartly" re-encode only those broken GOPs when trying to splice and cut video? Clearly I was missing some other parameter or piece of knowledge or incantation that would have done exactly that.

The few knobs that actual video encoder users need to tweak are clearly exposed and usable in every application I have ever used.

>Twitch et al. give you about three total choices

You don't configure your video encoding through Twitch; you do it in OBS. OBS has a lot of configuration available. Also, those options (bitrate type, bitrate value, profile, "how much encoding time to take" and the """quality""" magic number) are the exact knobs they should have been tweaking to come up with an intuition about what was happening.

Regardless, my entire point is that they were screwing around with video encoding pipelines despite having absolutely no intuition at all about video encoding.

They weren't even using ffmpeg. They were using an open-source implementation of a video game streaming encoder. Again, they demonstrably have no freaking clue about even the basics of the space. Even that encoder should be capable of better than what they ended up with.

We've been doing this exact thing for decades. None of this is new. None of this is novel. There's immense literature and expertise and tons of entry-level content to build up intuition and experience with how much bandwidth you should expect encoded video to take. Worse, Microsoft RDP and old-fashioned X apps were doing this over shitty dial-up connections decades ago, mostly by avoiding video encoding entirely. Like, we made video with readable text work off CDs in a 2x drive!

Again, Twitch has a max bandwidth much lower than 40 Mb/s and people stream coding on it all the time with no issue. That they never noticed how obscenely off the mark they are is sad.

It would be like if a car company wrote a blog post about how "We replaced the tires on our car with legs and it works so much better" and mentioned all the trouble they had with their glass tires in the post.

They are charging people money for this, and don't seem to have any desire to fix massive gaps in their knowledge, or even to wonder if someone else has done this before. It's lame. At any point, did they even say "Okay, we did some research, and in the market we are targeting we should expect a bandwidth budget of X Mb/s"?

"AI" people often say they are super helpful for research, and then stuff like this shows up.


...and apparently waste 3 months doing it wrong thanks to it, without doing anything as basic as "maybe fix your bitrate, it's far higher than any gameplay streaming site uses, and that's for video games, stuff with more movement".

40 Mbit is 1080p Blu-ray bitrate level.

