Clarify that `astgen.advanceSourceCursor` already increments the absolute
line and column numbers; `GenZir.calcLine` is therefore not only obsolete
but wrong by design.
Incidentally, this cleanup makes it possible to correctly specify the
`FnDecl` line numbers used for DWARF as values relative to the start of
the parent `Decl`. That `Decl` in turn has its line number information
specified relative to its parent `Decl`, and so on, until we reach the
global scope.
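For illustration, a minimal sketch of the relative encoding; the `Decl` layout and function name here are hypothetical, not the actual AstGen data structures:

```zig
const Decl = struct {
    relative_line: u32,
    parent: ?*const Decl,
};

/// Recover the absolute line of a `Decl` by summing the relative line
/// deltas up the parent chain until the global scope (null parent).
fn absoluteLine(decl: *const Decl) u32 {
    var line: u32 = decl.relative_line;
    var parent = decl.parent;
    while (parent) |p| : (parent = p.parent) {
        line += p.relative_line;
    }
    return line;
}
```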
This commit fixes two related things:
1. If the loop goes all the way through the slice without a match, then on
the last iteration `mid == symbols.len - 1`, which makes
`&symbols[mid + 1]` out of bounds. End the loop one step earlier
instead.
2. If the address we're looking for is greater than the address of the
last symbol in the slice, we now match it to that symbol. Previously we
missed this case, since we only matched when the address fell _in
between_ the addresses of two symbols (both fixes are sketched below).
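A minimal sketch of the fixed lookup, using a hypothetical `Symbol` type rather than the actual symbol-search code:

```zig
const Symbol = struct {
    address: u64,
    name: []const u8,
};

/// Find the symbol whose address range contains `address`, assuming
/// `symbols` is sorted by address.
fn findSymbol(symbols: []const Symbol, address: u64) ?*const Symbol {
    if (symbols.len == 0) return null;
    // Stop one element early so `symbols[i + 1]` is always in bounds.
    var i: usize = 0;
    while (i < symbols.len - 1) : (i += 1) {
        if (address >= symbols[i].address and address < symbols[i + 1].address)
            return &symbols[i];
    }
    // Address is at or past the last symbol: match the last symbol.
    if (address >= symbols[symbols.len - 1].address)
        return &symbols[symbols.len - 1];
    return null;
}
```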
I hit the "quotes in an RSP file" issue when trying to compile gRPC using
"zig cc". As a fun exercise, I decided to see if I could fix it myself.
I'm fully open to this code being flat-out rejected, or I can take feedback
and fix it up.
This modifies (and renames) _ArgIteratorWindows_ in process.zig such that
it works with arbitrary strings (or the contents of an RSP file).
In main.zig, this new _ArgIteratorGeneral_ is used to address the "TODO"
listed in _ClangArgIterator_.
This change closes #4833.
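Roughly what the harmonized caller-side usage looks like after this change; this is a sketch using current std names, and exact signatures may differ:

```zig
const std = @import("std");

pub fn main() !void {
    var gpa = std.heap.GeneralPurposeAllocator(.{}){};
    defer _ = gpa.deinit();

    // Errors (e.g. invalid UTF-16 in the Windows command line) surface
    // here, rather than on every next() call.
    var it = try std.process.ArgIterator.initWithAllocator(gpa.allocator());
    defer it.deinit();

    while (it.next()) |arg| {
        std.debug.print("{s}\n", .{arg});
    }
}
```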
**Pros:**
- It has the nice attribute of handling "RSP file" arguments in the same way it
handles "cmd_line" arguments.
- High performance, minimal allocations
- Fixed a bug in the previous _ArgIteratorWindows_ where trailing backslashes
  at the end of a command line were dropped entirely
- Added a test case for the above bug
- Harmonized the _ArgIteratorXxxx.initWithAllocator()_ and _next()_ interface
  across Windows/Posix/Wasi (moved Windows errors to _initWithAllocator()_
  rather than _next()_)
- Likely perf benefit on Windows by doing _utf16leToUtf8AllocZ()_ only once
for the entire cmd_line
**Cons:**
- Breaking change in the std library on Windows: call
  _ArgIterator.initWithAllocator()_ instead of _ArgIterator.init()_
- PhaseMage is new with contributions to Zig, might need a lot of hand-holding
- PhaseMage is a Windows person, non-Windows stuff will need to be double-checked
**Testing Done:**
- Wrote a few new test cases in process.zig
- zig.exe build test -Dskip-release (no new failures seen)
- zig cc now builds gRPC without error
Fixes https://github.com/ziglang/zig/issues/10719
compiler_rt already provides __muloti4, but libc++ also provides it, and when linking libc++ this causes a crash on my Windows x86_64 machine.
* A comptime-known 0 numerator returns comptime 0, independent of the
denominator.
* A negative numerator and denominator are allowed when the remainder is
zero, because in that case the modulus is also zero (see the sketch after
this list).
* Organize the math behavior tests.
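A couple of behavior-test style examples of the rules above, written as a sketch assuming the semantics described in this change:

```zig
const expect = @import("std").testing.expect;

test "comptime division and modulus edge cases" {
    // A comptime-known 0 numerator yields comptime 0, regardless of the
    // denominator (even a negative one).
    try expect(0 / (-7) == 0);
    try expect(0 % (-7) == 0);
    // A negative numerator and denominator are accepted when the
    // remainder is zero, because @rem and @mod agree (both are 0).
    try expect((-9) % 3 == 0);
    try expect((-9) / 3 == -3);
}
```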
The buffer `buf` contains N (= `slice_sizes.len`) slices followed by the
N null-terminated arguments. The null-terminated arguments are stored
in the `contents` array list. Thus, the size of `buf` should be:
`@sizeOf([]u8) * slice_sizes.len + contents_slice.len`
instead of:
`@sizeOf([]u8) * slice_sizes.len + contents_slice.len + slice_sizes.len`
This bug was found thanks to the general-purpose allocator (GPA), which
checks that the freed size matches the allocated size for large
allocations.
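In other words, a hypothetical sketch of the corrected size computation (names invented for illustration):

```zig
/// Each argument's NUL terminator is already part of `contents`, so
/// `contents_len` covers all of the string bytes; only the N slice
/// headers are added on top.
fn argvBufSize(num_args: usize, contents_len: usize) usize {
    return @sizeOf([]u8) * num_args + contents_len;
}
```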
stage1: fix an issue in to_twos_complement that caused a compile error when performing wrapping addition on two signed integers that both have the minimum possible value
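For example, the kind of expression that exercised this path (illustrative test):

```zig
const std = @import("std");

test "wrapping add of two minimum-value signed ints" {
    const min: i32 = std.math.minInt(i32); // -2147483648
    // -2147483648 +% -2147483648 wraps around to 0 in i32.
    try std.testing.expect(min +% min == 0);
}
```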
I found that after switching from my custom GUID parser to the one in std, zigwin32 build times increased substantially (from 40 seconds to over 10 minutes). More information can be found in the benchmark PR I created here: https://github.com/ziglang/gotta-go-fast/pull/21 . This PR ports my GUID parser to std so all projects can leverage the faster comptime performance.
This commit fixes an out-of-bounds write that can occur when
formatting certain float values. The write messes up the stack and
causes incorrect results, segfaults, or nothing at all, depending on the
optimization mode used.
The `errol` function writes the digits of the float into `buffer`
starting from index 1, leaving index 0 untouched, and returns `buffer[1..]`
and the exponent. This is because `roundToPrecision` relies on index 0 being
unused in case the rounding adds a digit (e.g. rounding 999.99
to 1000.00). When this happens, pointer arithmetic is used
[here](0e6d2184ca/lib/std/fmt/errol.zig (L61-L65))
to access index 0 and put the ones digit in the right place.
However, `errol3u` contains two special cases: `errolInt` and `errolFixed`,
which return from the function early. For these two special cases
index 0 was never reserved, and the return value contains `buffer`
instead of `buffer[1..]`. This causes the pointer arithmetic in
`roundToPrecision` to write out of bounds, which in the case of
`std.fmt.formatFloatDecimal` messes up the stack and causes undefined behavior.
The fix is to move the slicing of `buffer` to `buffer[1..]` from `errol3u`
to `errol` so that both the default and the special cases operate on the sliced
buffer.
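To illustrate why the reserved slot matters, here is a hypothetical sketch of a rounding carry, not the actual errol/roundToPrecision code:

```zig
/// The digit string starts at `start` (normally 1). When rounding
/// carries out of the most significant digit, the new leading '1' is
/// written one position to the left, i.e. at `start - 1`.
fn roundUpAllNines(buffer: []u8, start: usize, len: usize) usize {
    var i = start + len;
    while (i > start) {
        i -= 1;
        if (buffer[i] != '9') {
            buffer[i] += 1;
            return start; // no extra digit needed
        }
        buffer[i] = '0';
    }
    // Every digit was '9': the new leading '1' lands at start - 1,
    // which is out of bounds unless that slot was reserved.
    buffer[start - 1] = '1';
    return start - 1;
}
```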
This is a patch to glibc's features.h that leaves
`_DYNAMIC_STACK_SIZE_SOURCE` undefined unless the glibc version is >= 2.34.
This feature was introduced in glibc 2.34; without the patch, code built
against these headers but run on an older glibc ends up calling
`sysconf()`, which returns -1 for the values of SIGSTKSZ and MINSIGSTKSZ.
Closes #10713
Instead of defaulting to false, just keep the option optional to
communicate the default to the build system.
Fixes one problem with building the compiler for single-threaded
targets.
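Roughly, the build-script side of the idea; this is a sketch, and the `single-threaded` option name and the `std.build.Builder` API used here are illustrative assumptions:

```zig
const std = @import("std");

pub fn build(b: *std.build.Builder) void {
    // b.option already returns ?bool; keeping the value optional
    // (instead of `orelse false`) lets "not specified" travel to the
    // place where a target-appropriate default can be picked.
    const single_threaded: ?bool = b.option(bool, "single-threaded", "Build in single-threaded mode");
    _ = single_threaded; // forwarded to the relevant steps/options
}
```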
LLVM bitcast requires integer types whose bit counts match, so the const
bitcast has to use an i80, not an i128.
This commit makes the behavior tests fail for me, so it seems I did not
correctly construct the type. But it gets rid of the LLVM segfault.
I noticed that the strategy of memcpy'ing the buf worked if I simply did an
LLVMConstTrunc() on the i128 to turn it into an i80 before the
LLVMConstBitCast().
But is that correct in the face of different endianness? I'm not sure.