Do you use TCP or UDP?
We just do the standard Cloudflare API call to purge an outdated file, so it's TCP.
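For reference, this is roughly what that call looks like; a minimal sketch of a purge-by-URL request against the Cloudflare API (HTTPS, hence TCP), with the zone ID, API token and file URLs as placeholders:

```python
# Minimal sketch of a Cloudflare purge-by-URL call.
# ZONE_ID, API_TOKEN and the example URL are placeholders.
import requests

ZONE_ID = "your_zone_id"
API_TOKEN = "your_api_token"

def purge_files(urls):
    """Tell Cloudflare these cached URLs are outdated and must be refetched."""
    resp = requests.post(
        f"https://api.cloudflare.com/client/v4/zones/{ZONE_ID}/purge_cache",
        headers={"Authorization": f"Bearer {API_TOKEN}"},
        json={"files": urls},  # purge by exact URL
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()

purge_files(["https://cdn.example.com/installer/part01.bin"])
```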
Also, if files are missing when they reach the node, why don't you have any validation to check that all changed files have been received before allowing a user to use that node?
We cannot check the node; we just make a very simple API call to tell Cloudflare "these files are outdated, please resync", and it's then all done automatically by Cloudflare. OF COURSE we have a hash validation check to verify all files have reached the server, but the only thing we see is our origin server, not what's on the nodes.
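To give an idea, that check is conceptually something like this; a minimal sketch assuming a SHA-256 manifest of the changed files (the manifest format here is hypothetical):

```python
# Hedged sketch of the hash check described above: verify that every changed
# file arrived intact on the origin server by comparing SHA-256 digests
# against a manifest mapping file name -> expected digest (hypothetical format).
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):  # read in 1 MiB chunks
            h.update(chunk)
    return h.hexdigest()

def verify_upload(manifest: dict[str, str], root: Path) -> list[str]:
    """Return the files whose digest does not match the manifest."""
    return [
        name for name, expected in manifest.items()
        if sha256_of(root / name) != expected
    ]
```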
In fact, I oversimplified: the structure is not as simple as "our server -> local node", there are servers IN BETWEEN that also need to be replicated. This is how it really works:
https://developers.cloudflare.com/_astro/tiered_cache_topology.sy3gfwwc_ZdD2IV.webp

So, nobody downloads from our server directly except the main Upper Tier Cloudflare servers, which are the first to get the updated files. However, they won't download them *immediately*: they just know (because of the API call we did, with a list of outdated files) that, in case somebody needs one of those files, it's outdated, so it must be downloaded again from our origin server.
So, when a user somewhere in the world requests a file from his local Cloudflare node, the local node should know it's outdated, so it will check the Upper Tier server on Cloudflare to see if the new version is there. If it's there, the local node will download it from the Upper Tier server and serve it to users of that local node. If it's not, the Upper Tier server, which should also know that file is outdated, will finally download it from our server, the local node will download it from the now-updated Upper Tier server, and so on.
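If it helps, here's an illustrative sketch of that lookup flow; this is NOT Cloudflare's code, just the logic described above in miniature:

```python
# Illustrative sketch of tiered cache lookup: a node that misses, or knows
# its copy is stale, asks the tier above it; the top tier is the origin.
class Node:
    def __init__(self, name, parent=None):
        self.name = name
        self.parent = parent   # upper tier, or None for the origin itself
        self.cache = {}        # url -> content
        self.stale = set()     # urls marked outdated by a purge

    def fetch(self, url):
        if url in self.cache and url not in self.stale:
            return self.cache[url]      # fresh local copy: serve immediately
        content = self.parent.fetch(url)  # stale or missing: ask the tier above
        self.cache[url] = content
        self.stale.discard(url)
        return content

origin = Node("origin")
origin.cache["/installer/part01.bin"] = b"v2"       # new version on origin
upper = Node("upper-tier", parent=origin)
melbourne = Node("melbourne", parent=upper)
melbourne.cache["/installer/part01.bin"] = b"v1"    # old copy on the node
melbourne.stale.add("/installer/part01.bin")        # purge marked it outdated
print(melbourne.fetch("/installer/part01.bin"))     # b'v2', via the upper tier
```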
This is a black box for us; the only thing we do is provide a list of outdated files, and the rest is done automatically by Cloudflare. It should be clear that, with that system in place, updates cannot be "immediate": there will always be some kind of replication delay, because Upper Tier servers and local nodes both host thousands, if not millions, of websites.
It has already happened that, when Cloudflare went down, millions of websites were affected:
2019:
https://www.bbc.com/news/technology-48841815

2022:
https://www.techmonitor.ai/hardware/cloud/cloudflare-outage-disrupts-sites-google-aws-twitter

This outage affected Google, AWS and Twitter, so much for "Switch to Google or AWS"...
2024:
https://www.bleepingcomputer.com/news/technology/cloudflare-outage-cuts-off-access-to-websites-in-some-regions/

Note that we have a Pro account, so we don't have the "Regional Tier" layer of servers: for us, the chain is just "origin server (FSDT) -> Upper Tier server -> Lower Tier (your local node)". The Regional Tier is reserved for Cloudflare Enterprise accounts, which *start* at $5,000/month, so there's no way we could possibly afford it.
As I've said, the only way to really TEST what would happen on users' systems is to use a VPN, select a different country, and just use the FSDT Installer to see if the newer files are up.
Which I obviously do, but it's not like I could possibly test each and every country in the world. If I test a couple of countries on different continents (Europe, America and Asia) and I see the newer files are up, I can be reasonably sure that Cloudflare got the CORRECT list of outdated files, so it surely synced at least the Upper Tier servers and the local nodes I tested. But again, it's not really feasible to test all of them: see my previous example of the user having a problem in Melbourne. I tried ALL nodes in Australia, and Melbourne was the ONLY one not updated yet, but of course it got updated a few hours later.
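For what it's worth, the spot check itself is simple; a minimal sketch, assuming you know the latest hash of a file and run it from behind a VPN exit in the country you want to test (the URL and expected hash are placeholders):

```python
# Sketch of a node freshness spot check: download a file through the CDN
# and compare its digest with the known-latest hash of that file.
import hashlib
import requests

def node_is_updated(url: str, expected_sha256: str) -> bool:
    resp = requests.get(url, timeout=60)
    resp.raise_for_status()
    # Cloudflare reports whether the response came from its cache (HIT/MISS/...)
    print("CF-Cache-Status:", resp.headers.get("CF-Cache-Status"))
    return hashlib.sha256(resp.content).hexdigest() == expected_sha256
```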
Sure, there are odd occurrences in deployment, but with FSDT there are so many updates with issues that it's the norm and not the exception. So if Cloudflare aren't able to acknowledge and solve your problems, then maybe move to Azure, Amazon or Google?
We already use S3, but without Cloudflare in between as a CDN, the bandwidth bill would be outrageous. It happened a few years ago: because I didn't know that Cloudflare never caches files larger than 500MB, we had a very old version of the full GSX Installer as a single .EXE, which was about 3GB. In a month, we spent $5,000 in AWS bandwidth fees, so far out of the ordinary that I even had an Amazon representative calling me by phone to talk to what they must have imagined was a "new large enterprise customer", when in fact it was just my fault for not having read the fine print in the Cloudflare docs saying that files larger than 500MB are never cached. That's why the GSX full installer downloads in several parts of 490MB each...
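Conceptually, the splitting is nothing fancy; a minimal sketch of cutting a big installer into 490MB parts so each one stays under that caching limit (the part-naming scheme here is just for illustration):

```python
# Hedged sketch of why the installer ships in parts: split a large file
# into chunks that stay under the 500MB cacheable-size limit noted above.
from pathlib import Path

PART_SIZE = 490 * 1024 * 1024  # 490 MB, safely under the limit

def split_file(src: Path) -> list[Path]:
    parts = []
    with src.open("rb") as f:
        index = 1
        while chunk := f.read(PART_SIZE):
            part = src.with_name(src.name + f".part{index:02d}")
            part.write_bytes(chunk)
            parts.append(part)
            index += 1
    return parts
```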
That's a side effect of the GSX installer always being freely available to anybody, without any account or registration, with no limit on downloads and no limit on bandwidth, and we have NEVER, EVER experienced a loss of access due to our server collapsing after a big release, not even on GSX for MSFS release day (something that happened to many other devs), because we spread the load over the big Cloudflare network.
Also, what Cloudflare does is prevent DDoS attacks and other threats against our servers. I just opened the Cloudflare stats page now and, in the last 24 hours, they have blocked more than 8,000 threats (7,137 from China, 767 from Singapore, 158 from Vietnam, if you like statistics), 4,970 of them in just the past 6 hours.
Now, while we DO NOT store any of our customers' data on our servers (we don't process payments, and we don't run the activation servers ourselves), there's at least this forum, where we have emails for all forum users, so Cloudflare is helping to protect that data as well. It's another reason why we manually approve new forum registrations: we have a plugin in the forum software with a database of known spammers, so each new registration is checked manually, and *still* somebody managed to get through, by manually creating new email accounts to post spam here.
That is to say, we take great care about users' security and privacy, and Cloudflare helps us do that without having to hire dedicated people just to defend our servers from all kinds of attacks, so even if it has its annoyances, it works.