Some users are hitting this limit. I think it's primarily due to not
deduplicating (solved in the previous commit) but this seems like a
better limit regardless.
The Zig way is to let the compiler provide errors, rather than trying to
implement the compiler in the standard library.
I played around with this and found the compile errors to be easier to
comprehend without this logic.
1. Entirely rewrote frexp with generics, reducing the implementation to a single function and enabling parameters of types f80 and f16
2. Expanded upon the tests, making them more descriptive and comprehensive, and automatically generating the test bodies for each floating point type
3. Added a doctest for frexp
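To illustrate item 1, here is a minimal generic sketch of the frexp idea
(the `frexpSketch` name and the naive loop-based scaling are illustrative
only, not the actual std.math implementation):

```zig
const std = @import("std");

/// Split `x` into a significand in [0.5, 1) and an exponent such that
/// x == significand * 2^exponent, for any float type via `anytype`.
fn frexpSketch(x: anytype) struct { significand: @TypeOf(x), exponent: i32 } {
    var s = x;
    var exp: i32 = 0;
    if (s != 0 and std.math.isFinite(s)) {
        while (s >= 1.0 or s <= -1.0) : (exp += 1) s /= 2.0;
        while (s > -0.5 and s < 0.5) : (exp -= 1) s *= 2.0;
    }
    return .{ .significand = s, .exponent = exp };
}

test "frexpSketch" {
    const r = frexpSketch(@as(f32, 1.3));
    try std.testing.expectApproxEqAbs(@as(f32, 0.65), r.significand, 1e-6);
    try std.testing.expectEqual(@as(i32, 1), r.exponent);
}
```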
For symmetry with parse_float, and to hide the implementation from the user.
Additionally, we expose the entire namespace and provide some aliases so
everything is available to the user.
Closes #19366
NetBSD fix:
- `Futex.zig:542:56: error: expected error union type, found 'c_int'`
OpenBSD fix:
- `emutls.zig:10:21: error: root struct of file 'os' has no member named 'abort'`
- `Thread.zig:627:22: error: expected 6 argument(s), found 5`
This was a mistake from day one. This is the wrong abstraction layer to
do this in.
My alternate plan for this is to make all I/O operations require an IO
interface parameter, similar to how allocations require an Allocator
interface parameter today.
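A hypothetical sketch of what that could look like (the `Io` interface and
`readWholeFile` names below are made up for illustration; this is not an
existing std API):

```zig
const std = @import("std");

/// Hypothetical I/O interface, mirroring how std.mem.Allocator works:
/// a type-erased pointer plus a vtable of operations.
const Io = struct {
    ptr: *anyopaque,
    vtable: *const VTable,

    const VTable = struct {
        readWholeFile: *const fn (*anyopaque, std.mem.Allocator, []const u8) anyerror![]u8,
    };

    fn readWholeFile(io: Io, gpa: std.mem.Allocator, path: []const u8) anyerror![]u8 {
        // The vtable decides how the read is actually performed
        // (blocking, event loop, io_uring, ...).
        return io.vtable.readWholeFile(io.ptr, gpa, path);
    }
};

/// Under this plan, every I/O operation receives the interface explicitly,
/// just as allocating functions receive an Allocator today.
fn loadConfig(io: Io, gpa: std.mem.Allocator) ![]u8 {
    return try io.readWholeFile(gpa, "config.json");
}
```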
A pointer type already has an alignment, so this information does not
need to be duplicated on the function type. There is already precedent
for this with addrspace, which is already disallowed on function types
for the same reason. Also fixes `@TypeOf(&func)` to have the correct
addrspace and alignment.
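A rough illustration of the model (a sketch under the assumption that the
pointer type-info field carries the alignment; not code from the commit):

```zig
const std = @import("std");

fn f() align(16) void {}

comptime {
    // The declared alignment is carried by the pointer type obtained via
    // @TypeOf(&f), so the function type itself does not need to store it.
    std.debug.assert(@typeInfo(@TypeOf(&f)).Pointer.alignment == 16);
}
```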
This adds std.debug.SafetyLock and uses it in std.HashMapUnmanaged by
adding lockPointers() and unlockPointers().
This provides a way to detect when an illegal modification has happened and
panic rather than invoke undefined behavior.
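A hedged usage sketch of the new methods (assuming `AutoHashMapUnmanaged`
picks them up from `HashMapUnmanaged`):

```zig
const std = @import("std");

test "lock pointers while iterating" {
    const gpa = std.testing.allocator;
    var map: std.AutoHashMapUnmanaged(u32, u32) = .{};
    defer map.deinit(gpa);
    try map.put(gpa, 1, 10);

    map.lockPointers(); // from here on, operations that may move entries panic
    defer map.unlockPointers();

    var it = map.iterator();
    while (it.next()) |entry| {
        entry.value_ptr.* += 1; // mutating values in place stays legal
    }
}
```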
The error unions for WindowsDynLib and ElfDynLib do not contain all the
possible errors, so user code that relies on DynLib.Error will fail to
compile.
This implementation is now a direct replacement for the `kernel32` one.
New bitflags for named pipes and other generic ones were added based on
browsing the ReactOS sources.
`UNICODE_STRING.Buffer` has also been changed to be nullable, as
this is what makes the implementation work.
This required some changes to places accessing the buffer after a
`SUCCESS`ful return, most notably `QueryObjectName`, which even referred
to it being nullable.
* io_uring: ring mapped buffers
Ring mapped buffers are a newer implementation of ring provided buffers,
supported since kernel 5.19. They are best described in Jens Axboe's
[post](https://github.com/axboe/liburing/wiki/io_uring-and-networking-in-2023#provided-buffers).
This commit implements the low-level io_uring_*_buf_ring_* functions as a
mostly direct translation from liburing. It also adds a BufferGroup
abstraction over those low-level functions.
* io_uring: add multishot recv to BufferGroup
Once we have the ring mapped provided buffers functionality, it is
possible to use the multishot recv operation. A multishot receive is
submitted once, and completions are posted whenever data arrives on the
socket. Received data is placed in a new buffer from the buffer group.
Reference: [io_uring and networking in 2023](https://github.com/axboe/liburing/wiki/io_uring-and-networking-in-2023#multi-shot)
Getting NOENT for the cancel completion result, meaning:
-ENOENT
The request identified by user_data could not be located.
This could be because it completed before the cancelation
request was issued, or if an invalid identifier is used.
https://man7.org/linux/man-pages/man3/io_uring_prep_cancel.3.html
https://github.com/ziglang/zig/actions/runs/6801394000/job/18492139893?pr=17806
The results in the cancel/recv cqes differ depending on the kernel.
On older kernels (tested with v6.0.16, v6.1.57, v6.2.12, v6.4.16):
  cqe_cancel.err() == .NOENT
  cqe_crecv.err() == .NOBUFS
On newer kernels (tested with v6.5.0, v6.5.7):
  cqe_cancel.err() == .SUCCESS
  cqe_crecv.err() == .CANCELED
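A hedged sketch of how a test can accept either behavior (the helper
function below is illustrative, using the `err()` accessor on
`std.os.linux.io_uring_cqe`):

```zig
const std = @import("std");
const linux = std.os.linux;

/// Accept both the old-kernel and new-kernel result combinations.
fn checkCancelResults(cqe_cancel: linux.io_uring_cqe, cqe_crecv: linux.io_uring_cqe) !void {
    switch (cqe_cancel.err()) {
        .SUCCESS, .NOENT => {}, // newer kernels / older kernels
        else => return error.UnexpectedCancelResult,
    }
    switch (cqe_crecv.err()) {
        .CANCELED, .NOBUFS => {}, // newer kernels / older kernels
        else => return error.UnexpectedRecvResult,
    }
}
```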
There is no reason to perform this detection during semantic analysis.
In fact, doing so is problematic, because we wish to utilize detection
of existing decls in a namespace in incremental compilation.
It's been seen on Windows 11 (22H2) Build 22621.3155 that NtCreateFile
will return the OBJECT_NAME_INVALID error code with certain path names.
The path name we saw this with started with `C:Users` (rather than
`C:\Users`) and also contained a `$` character. This PR updates our
OpenFile wrapper to propagate this error code as `error.BadPathName`
instead of making it `unreachable`.
see https://github.com/marler8997/zigup/issues/114#issuecomment-1994420791
I think of lerp() as a way to change coordinate systems, essentially
remapping the input number line onto a shifted+rescaled number line. In
my mind the full number line is remapped, not just the 0-1 segment.
An example of how this is useful: in a game, you can write:
`myPos = lerp(pos0, pos1, easeOutBack(u))`
for some `u` that changes from 0 to 1 over time.
(see https://easings.net/#easeOutBack)
This will animate `myPos` between `pos0` and `pos1`, overshooting the
goal position `pos1` in a nicely-animated way.
`easeOutBack(float)->float` is a pure function that overshoots 1,
and by combining it with `lerp()` we can remap coordinates in other
coordinate systems, making them overshoot in the same way.
However, this overshooting is only possible because `easeOutBack(t)`
sometimes exceeds the range 0-1 (e.g. `easeOutBack(0.5)` is 1.0877),
which is not allowed by the current `math.lerp` implementation.
This commit removes the asserts that prevented this use case. Now, any
value can be passed for t. For example, `lerp(10, 20, 2.0)` will now
return 30 instead of failing an assertion.
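A minimal sketch of the unclamped behavior (illustrative; not necessarily
the exact std.math.lerp body):

```zig
const std = @import("std");

fn lerpSketch(a: f64, b: f64, t: f64) f64 {
    // a + (b - a) * t with a single rounding; no assert on t, so
    // extrapolation (t outside [0, 1]) is allowed.
    return @mulAdd(f64, b - a, t, a);
}

test "lerp allows extrapolation" {
    try std.testing.expectEqual(@as(f64, 30), lerpSketch(10, 20, 2.0));
    try std.testing.expectEqual(@as(f64, 15), lerpSketch(10, 20, 0.5));
}
```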
In the example from issue #19052, to_read holds 213_315_584
uncompressed bytes. Calling read with a small output buffer results in
many shifts of that big buffer.
This removes the need to shift to_read after each read.
These systems write the number of *bits* of their inputs as a u64.
However, if `@sizeOf(usize) == 4`, the bit count of an input message or
associated data whose size is > 512 MiB could overflow, since
2^29 bytes * 8 = 2^32 bits does not fit in a 32-bit usize.
On 64-bit systems, it is safe to assume that no machine has more than
2 EiB of memory.
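A small sanity check of that arithmetic (illustrative only): widening the
byte count to u64 before converting to bits avoids the 32-bit overflow.

```zig
const std = @import("std");

test "bit count must be computed in 64 bits on 32-bit targets" {
    const len: u32 = 1 << 29; // 512 MiB in bytes
    // 2^29 bytes * 8 = 2^32 bits: does not fit in a 32-bit integer,
    // but is fine once widened to u64 first.
    try std.testing.expectEqual(@as(u64, 1) << 32, @as(u64, len) * 8);
}
```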
This takes the code that was previously in src/Compilation.zig to turn resinator diagnostics into Zig error bundles and puts it in resinator/main.zig, and then makes resinator emit the resulting error bundles via std.zig.Server (which is used by the build runner, etc). Also adds support for turning Aro diagnostics into ErrorBundles.
This moves .rc/.manifest compilation out of the main Zig binary, contributing towards #19063
Also:
- Make resinator use Aro as its preprocessor instead of clang
- Sync resinator with upstream
Make std.tar look better in docs. Remove from the public interface what
is not necessary. Add comments to the public methods. Add doctests as
usage examples for iterator and pipeToFileSystem.
The problem manifested only on Windows with `-target
aarch64-windows-gnu`.
I was creating new files but not closing any of them, so tmp dir cleanup
hung, looping in deleteTree forever.
Fixing CI error:
error: 'tar.test.test.pipeToFileSystem' failed: slices differ. first difference occurs at index 2 (0x2)
============ expected this output: ============= len: 9 (0x9)
2E 2E 2F 61 2F 66 69 6C 65 ../a/file
============= instead found this: ============== len: 9 (0x9)
2E 2E 5C 61 5C 66 69 6C 65 ..\a\file
After #19136, dir.symlink changes path separators to \ on Windows.
Fixing error from CI:
std/tar.zig:423:54: error: expected type 'usize', found 'u64'
std/tar.zig:423:54: note: unsigned 32-bit int cannot represent all possible unsigned 64-bit values
For the end users:
1. Don't require the user to call file.skip() on a file returned from
iterator.next if the file is not read; the iterator now handles this.
Previously this returned a header parsing error, and without knowing
some tar internals it is hard to understand what is required from the
user.
2. Use the iterator.File.kind enum, which is similar to fs.File.Kind,
something familiar. The internal Header.Kind has many types which are
not exposed, but the user would need an else branch in a kind switch to
cover those cases.
3. Add a reader interface to iterator.File.
The tar header stores the name in at most 256 bytes and the link name in
at most 100 bytes. Those are the minimum sizes for the provided buffers.
An error is raised during iterator init if the buffers are not long
enough.
Pax and GNU extensions can store longer names. If such an extension is
encountered during unpacking and the name doesn't fit into the provided
buffer, an error is returned.
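A hedged usage sketch of the updated iterator described above (the option
and field names `file_name_buffer`, `link_name_buffer`, `kind`, and
`reader` follow this description and may not match this std.tar revision
exactly):

```zig
const std = @import("std");

fn listTar(underlying_reader: anytype) !void {
    // Minimum buffer sizes per the tar header limits described above.
    var file_name_buffer: [256]u8 = undefined;
    var link_name_buffer: [100]u8 = undefined;
    var it = std.tar.iterator(underlying_reader, .{
        .file_name_buffer = &file_name_buffer,
        .link_name_buffer = &link_name_buffer,
    });
    while (try it.next()) |file| {
        switch (file.kind) {
            .file => {
                // Read contents through the file's reader interface; if we
                // chose not to read, no explicit skip() would be needed.
                var buf: [4096]u8 = undefined;
                while (true) {
                    const n = try file.reader().read(&buf);
                    if (n == 0) break;
                }
            },
            else => {}, // directories, symlinks, ...
        }
    }
}
```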
Fixes compilation errors in functions that are syntactic sugar
to operate on serialized scalars.
Also make it explicit that square roots in fields whose size is
not congruent to 3 modulo 4 are not an error; they are just
not implemented yet.
Reported by @vitalonodo - Thanks!
A lot of these "shorthand" doc comments were redundant, low quality
filler content. Better to let the actual modules speak for themselves
with top level doc comments rather than trying to document their
aliases.
ML-KEM is the Kyber post-quantum secure key encapsulation mechanism,
currently being standardized by NIST.
Too bad, they decided to rename it; the "Kyber" name was so much
better!
This implements the current draft (NIST FIPS-203), which is already
being deployed even though the specification is not finalized.
This replaces the errol backend with one based on ryu. Only the 128-bit
backend is implemented. This supports all floating-point types and
does not use fp logic to print.
Closes #1181.
Closes #1299.
Closes #3612.
* `linux.IO_Uring` -> `linux.IoUring` to align with naming conventions.
* All functions `io_uring_prep_foo` are now methods `prep_foo` on `io_uring_sqe`, which is in a file of its own.
* `SubmissionQueue` and `CompletionQueue` are namespaced under `IoUring`.
This is a breaking change.
The new file and namespace layouts are more idiomatic, and allow us to
eliminate one more usage of `usingnamespace` from the standard library.
2 remain.
Some of the structs I shuffled around might be better namespaced under
CONTEXT, I'm not sure. However, for now, this approach preserves
backwards compatibility.
Eliminates one more usage of `usingnamespace` from the standard library.
3 remain.
Thanks to Zig's lazy analysis, it's fine for these symbols to be
declared on platforms they won't exist on. This is already done in
several places in this file; e.g. `pthread` functions are declared
unconditionally.
Eliminates one more usage of `usingnamespace` from the standard library.
4 remain.
Searching GitHub indicated that the only use of these types in the wild is
support in getty-zig, and testing for that support. This eliminates 3 more uses
of usingnamespace from the standard library, and removes some unnecessarily
complex generic code.
This usage of `usingnamespace` was removed fairly trivially - the
resulting code is, IMO, more clear.
Eliminates one more usage of `usingnamespace` from the standard library.
This is a trivial change - this code did `usingnamespace` into an
otherwise empty namespace, so the outer namespace was just unnecessary.
Eliminates one more usage of `usingnamespace` from the standard library.
This fixes an issue with the implementation of #18816. Consider the
following code:
```zig
pub fn Wrap(comptime T: type) type {
return struct {
pub const T1 = T;
inner: struct { x: T1 },
};
}
```
Previously, the type of `inner` was not considered to be "capturing" any
value, as `T1` is a decl. However, since it is declared within a generic
function, this decl reference depends on the context, and thus should be
treated as a capture.
AstGen has been augmented to tunnel references to decls through closure
when the decl was declared in a potentially-generic context (i.e. within
a function).
This changes the representation of closures in Zir and Sema. Rather than
a pair of instructions `closure_capture` and `closure_get`, the system
now works as follows:
* Each ZIR type declaration (`struct_decl` etc) contains a list of
captures in the form of ZIR indices (or, for efficiency, direct
references to parent captures). This is an ordered list; indexes into
it are used to refer to captured values.
* The `extended(closure_get)` ZIR instruction refers to a value in this
list via a 16-bit index (limiting this index to 16 bits allows us to
store this in `extended`).
* `Module.Namespace` has a new field `captures` which contains the list
of values captured in a given namespace. This is initialized based on
the ZIR capture list whenever a type declaration is analyzed.
This change eliminates `CaptureScope` from semantic analysis, which is a
nice simplification; but the main motivation here is that this change is
a prerequisite for #18816.
fill(0) will fill all bytes in the bit reader. If the bit reader is
aligned to the byte, as it is at the end of the stream, this ensures no
overshoot when reading the footer. The footer is 4 bytes (zlib) or 8
bytes (gzip). For zlib we will use a 4-byte BitReader and an 8-byte one
for gzip. After align and fill we will read those bytes and leave the
BitReader empty after that.
* Introduce `-Ddebug-extensions` for enabling compiler debug helpers
* Replace safety mode checks with `std.debug.runtime_safety`
* Replace debugger helper checks with `!builtin.strip_debug_info`
Sometimes, you just have to debug optimized compilers...
Thanks to jacobly0 for figuring this out. The chain of events causing
the failure is as follows.
* As of a recent commit, certain bodies no longer emit a redundant
`block`, meaning there are more likely to be "interesting"
instructions (i.e. not blocks) at the end of parent GenZir scopes.
* When emitting the first `dbg_stmt` in such a body, the elision logic
incorrectly looks at a tag from an instruction in an enclosing scope.
* The tag of this instruction may be `undefined`, meaning that in unsafe
builds it may be incorrectly identified as a `dbg_stmt` instruction.
* This instruction from another body is clobbered rather than emitting
an actual `dbg_stmt` instruction. Note that this does not produce
invalid ZIR, since the creator of the undefined instruction replaces
the previously-undefined payload later.
Now, all the tests that use `testWithAllSupportedPathTypes` will also run each test with both `/` and `\` as the path separator on Windows.
Also removes the now-redundant "Dir.symLink with relative target that has a / path separator" test, since the same thing is now tested in the "Dir.readLink" test.
In the code `if (cond) { ... }`, the "then body" of the `if` is
technically a block. However, we don't need to emit a real ZIR `block`
corresponding to it, because we are already within a condbr body; we
have a separate gz, and appropriate scoping for allocs and debug
variables. In this case, and many like it, we can trivially elide the
block here, instead emitting the block statements directly into the
current `GenZir`. This results in a significant decrease in ZIR bytes
for real code.
Just to pass CI for regression fix #19126.
I'll return to this later.
Currently I can't reproduce this on my Windows VM; here I'm failing on
symlink creation, while CI fails later in the process.
In the iterator function, which is the low-level API, don't depend on
`std.fs.MAX_PATH_BYTES` because this is not defined on all operating
systems, such as freestanding.
However, in such environments it still makes sense to be able to extract
from a tar file.
An even more flexible solution would be to accept the buffers as
arguments to iterator(), which I think is a good idea, but for now let's
just set the same upper limit across all operating systems.
Part of #19063.
Primarily, this moves Aro from deps/ to lib/compiler/ so that it can be
lazily compiled from source. src/aro_translate_c.zig is moved to
lib/compiler/aro_translate_c.zig and some of Zig CLI logic moved to a
main() function there.
aro_translate_c.zig becomes the "common" import for clang-based
translate-c.
Not all of the compiler was able to be detangled from Aro, however, so
it still, for now, remains being compiled with the main compiler
sources due to the clang-based translate-c depending on it. Once
aro-based translate-c achieves feature parity with the clang-based
translate-c implementation, the clang-based one can be removed from Zig.
Aro made it unnecessarily difficult to depend on with these .def files
and all these Zig module requirements. I looked at the .def files and
made these observations:
- The canonical source is llvm .def files.
- Therefore there is an update process to sync with llvm that involves
regenerating the .def files in Aro.
- Therefore you might as well just regenerate the .zig files directly
and check those into Aro.
- Also with a small amount of tinkering, the file size on disk of these
generated .zig files can be made many times smaller, without
compromising type safety in the usage of the data.
This would make things much easier on Zig as a downstream project; in
particular, we could remove those pesky stubs when bootstrapping.
I have gone ahead with these changes since they unblock me and I will
have a chat with Vexu to see what he thinks.
InvalidHandle in OpenError is no longer a possible error on any platform. In the past it could be returned from `openOptionsFromFlagsWasi`, but the implementation was changed in 7680c5330c to make that no longer possible.
InvalidHandle in RealPathError was a holdover from before d5312d53a0, which made realpath a compile error on WASI. However, InvalidHandle was also a possible error in the FreeBSD fallback implementation added in 537624734c. This commit changes the FreeBSD fallback implementation to return FileNotFound instead of InvalidHandle which matches how EBADF is handled in all the other `realpath` implementations (including the FreeBSD non-fallback implementation).
Closes #19084
* Support different keep-alive defaults with different HTTP versions.
* Fix incorrect usage of `copyBackwards`, which copies in a backwards
direction, allowing data to be moved forward in a buffer, not
backwards in a buffer.
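For reference, a small sketch of the direction semantics from the second
bullet (`std.mem.copyBackwards` iterates from the end, so it is the right
call when shifting data toward higher indices in an overlapping buffer):

```zig
const std = @import("std");

test "copyBackwards moves data forward within a buffer" {
    var buf = [_]u8{ 'a', 'b', 'c', 'd', 0, 0 };
    // Shift "abcd" two slots to the right; source and destination overlap.
    std.mem.copyBackwards(u8, buf[2..6], buf[0..4]);
    try std.testing.expectEqualSlices(u8, "ababcd", &buf);
}
```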
Follow-up to #19079, which made test names fully qualified.
This fixes tests that had now-redundant information in their test names. For example, here's a fully qualified test name before the changes in this commit:
"priority_queue.test.std.PriorityQueue: shrinkAndFree"
and the same test's name after the changes in this commit:
"priority_queue.test.shrinkAndFree"
* make test names contain the fully qualified name
* make test filters match the fully qualified name
* allow multiple test filters, where a test is skipped if it does not
match any of the specified filters
Found while fuzzing. Previously 1.1897314953572317650857593266280070162E4932
was parsed as +inf, which caused issues for round-trip serialization of
floats. Only f128 had issues, but tests for large normals have been added
for all floating point types.
The max_exponent for f128 was wrong; it is subtly different in the
decimal code-path as it is based on where the decimal digit should go.
This needs to be 2 greater than the max exponent (e.g. 308 or 4932) to
work correctly (greater by 1, then we use a >= comparison).
In addition, I've removed the redundant `optimize` constant which was
only used for testing the slow path locally.
std.heap.c_allocator was already doing this; however,
std.heap.raw_c_allocator, which asserts that no allocation is aligned to
more than 16 bytes, was not.
The Zig compiler uses std.heap.raw_c_allocator, so it is affected by
this.
Like in issue #18089, this tar contains the same file name in two
versions that differ only by case. Unpacking should fail on
case-insensitive file systems and succeed on case-sensitive ones.
$ tar tvf 18089.tar
18089/
18089/alacritty/
18089/alacritty/darkermatrix.yml
18089/alacritty/Darkermatrix.yml
Windows paths now use WTF-16 <-> WTF-8 conversion everywhere, which is lossless. Previously, conversion of ill-formed UTF-16 paths would either fail or invoke illegal behavior.
WASI paths must be valid UTF-8, and the relevant function calls have been updated to handle the possibility of failure due to paths not being encoded/encodable as valid UTF-8.
Closes #18694. Closes #1774. Closes #2565.
Ill-formed UTF-8 byte sequences are replaced by the replacement character (U+FFFD) according to "U+FFFD Substitution of Maximal Subparts" from Chapter 3 of the Unicode standard, and as specified by https://encoding.spec.whatwg.org/#utf-8-decoder
i.e.:
C:\zig\current\lib\std\debug.zig:726:23: error: no field or member function named 'getDwarfInfoForAddress' in 'dwarf.DwarfInfo'
if (try module.getDwarfInfoForAddress(unwind_state.debug_info.allocator, unwind_state.dwarf_context.pc)) |di| {
~~~~~~^~~~~~~~~~~~~~~~~~~~~~~
C:\zig\current\lib\std\dwarf.zig:663:23: note: struct declared here
pub const DwarfInfo = struct {
^~~~~~
referenced by:
next_internal: C:\zig\current\lib\std\debug.zig:737:29
next: C:\zig\current\lib\std\debug.zig:654:31
remaining reference traces hidden; use '-freference-trace' to see all reference traces
C:\zig\current\lib\std\debug.zig:970:31: error: no field or member function named 'getSymbolAtAddress' in 'dwarf.DwarfInfo'
const symbol_info = module.getSymbolAtAddress(debug_info.allocator, address) catch |err| switch (err) {
~~~~~~^~~~~~~~~~~~~~~~~~~
C:\zig\current\lib\std\dwarf.zig:663:23: note: struct declared here
pub const DwarfInfo = struct {
Found by fuzzing. The previous numeric function assumed that it is
getting a buffer of size 12, but mode is size 8. Fuzzing found an
overflow.
Fixing and adding test cases.