The original impetus for making a change here was a typo in --add-header
causing the script to fail. However, upon inspection, I was alarmed that
we were making a --recursive upload to the *root directory* of
ziglang.org. This could result in garbage files being uploaded to the
website, or important files being overwritten. As I addressed this concern,
I decided to take on file compression as well.
Removed compression prior to sending to S3. I am vetoing pre-compressing
objects for the following reasons:
* It prevents clients that do not support gzip encoding from working.
* It breaks the premise that objects on S3 are stored 1-to-1 with what is
on disk.
* It prevents Cloudflare from using a more efficient encoding, such as
brotli, which they have recently started doing.
Systems such as Cloudflare or Fastly already do compression on the fly,
and we should interoperate with them instead of fighting them.
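As a minimal sketch of what that interop looks like at the deploy step,
assuming an s3cmd-style uploader (the --add-header and --recursive flags
mentioned earlier suggest one), the idea is to upload objects as-is and
let the CDN pick the encoding per request. The bucket, prefix, local
path, and Cache-Control value below are illustrative assumptions, not
the script's actual arguments:

    # Upload uncompressed: no Content-Encoding is set, so the CDN remains
    # free to serve gzip or brotli on the fly depending on the client.
    # (local path and s3://bucket/prefix are assumptions)
    s3cmd put --recursive \
      --add-header="Cache-Control: max-age=0, must-revalidate" \
      ./out/ s3://ziglang.org/documentation/master/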
CloudFront has an arbitrary limit of 9.5 MiB for auto-compression. I looked
and did not see a way to increase this limit. The data.js file is currently
16 MiB. In order to fix this problem, we need to do one of the following things:
* Reduce the size of data.js to less than 9.5 MiB.
* Figure out how to adjust the CloudFront settings to increase the max size
for auto-compressed objects.
* Migrate to Fastly. Fastly appears to not have this limitation. Note
that we already plan to migrate to Fastly for the website.
* CMakeLists: pass `-Dstrip` for release zig builds
* pass -target and -mcpu to zig1; this works around LLVM on FreeBSD
incorrectly detecting "freestanding" instead of "freebsd" for the
native OS (see the sketch after this list).
* ci.ziglang.org is now responsible for creating aarch64-macos tarballs
rather than Azure.
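As an illustration of the -target/-mcpu workaround above: an explicit
target triple and CPU make zig1 skip LLVM's native detection entirely.
The real invocation is driven by CMake; the file names and values in
this sketch are assumptions, not the literal CMakeLists contents:

    # Without -target, zig1 asks LLVM for the native triple, which on
    # FreeBSD can come back as "freestanding". Passing the triple and CPU
    # explicitly sidesteps that detection. (paths and values illustrative)
    ./zig1 build-exe src/main.zig -target x86_64-freebsd -mcpu=baseline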
This simplifies the cmake build script by introducing a new "stage3"
target, built by default, which builds and installs a stage3 zig.
It greatly simplifies the build instructions for Zig, making them conform
to the regular cmake routine, while still producing a stage3 artifact.
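For reference, the regular cmake routine this conforms to is the usual
out-of-tree configure/build/install sequence; the build directory name
and default install prefix below are conventions, not something this
change mandates:

    # Standard out-of-tree cmake build. With the stage3 target built by
    # default, "make install" now produces and installs a stage3 zig.
    mkdir build
    cd build
    cmake ..
    make install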
Empirically, the ReleaseSmall std lib tests take about 55 minutes on the
CI and are the bottleneck causing timeouts. So this commit disables full
coverage in favor of running a smaller set of ReleaseSmall std lib tests.