Fixes a performance regression introduced by
https://github.com/denoland/deno/pull/20223 and
https://github.com/denoland/deno/pull/20314. It's enough to have one
"shared" buffer per socket, and no locking mechanism is required.
|
Fixes https://github.com/denoland/deno/issues/19816
In that issue, I suggest switching the other brotli functionality over
to the Rust API provided by the `brotli` crate. Here, I only do that
for the `brotli_decompress` function, to fix the bug with buffers longer
than 4096 bytes.
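A minimal repro sketch for the boundary in question, using the standard
`node:zlib` brotli API (the harness here is illustrative, not the
original bug report):
```js
import { Buffer } from "node:buffer";
import { brotliCompressSync, brotliDecompressSync } from "node:zlib";

// Anything comfortably past the 4096-byte internal buffer size.
const input = Buffer.alloc(10_000, "a");
const compressed = brotliCompressSync(input);

// Previously this failed for inputs longer than 4096 bytes; with the
// `brotli` crate backing `brotli_decompress`, it round-trips correctly.
const output = brotliDecompressSync(compressed);
console.log(output.equals(input)); // true
```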
|
Extracted from https://github.com/denoland/deno/pull/17815
Optimise Buffer's string operations, most significantly when dealing
with ASCII and UTF-16. Base64 and hex encodings are affected to a much
lesser degree.
## Performance
### String length 15
With very small strings we're at break-even, or sometimes even lose a
bit of performance, because creating a `DataView` ends up not paying
for itself.
**This PR:**
```
benchmark time (avg) iter/s (min … max) p75 p99 p995
-------------------------------------------------------------------------------------------------------------- -----------------------------
Buffer.from ascii string 1.15 µs/iter 871,388.6 (728.78 ns … 1.56 µs) 1.23 µs 1.56 µs 1.56 µs
Buffer.from base64 string 1.63 µs/iter 612,790.9 (1.31 µs … 1.96 µs) 1.77 µs 1.96 µs 1.96 µs
Buffer.from utf16 string 1.41 µs/iter 707,396.3 (915.24 ns … 1.93 µs) 1.61 µs 1.93 µs 1.93 µs
Buffer.from hex string 1.87 µs/iter 535,357.9 (1.56 µs … 2.19 µs) 2 µs 2.19 µs 2.19 µs
Buffer.toString ascii string 154.58 ns/iter 6,469,162.8 (149.69 ns … 198 ns) 154.51 ns 182.89 ns 191.91 ns
Buffer.toString base64 string 161.65 ns/iter 6,186,189.6 (150.91 ns … 181.15 ns) 165.18 ns 171.87 ns 174.94 ns
Buffer.toString utf16 string 292.74 ns/iter 3,415,959.8 (285.43 ns … 312.47 ns) 295.25 ns 310.47 ns 312.47 ns
Buffer.toString hex string 89.61 ns/iter 11,159,315.6 (81.09 ns … 123.77 ns) 91.09 ns 113.62 ns 119.28 ns
```
**Main:**
```
benchmark time (avg) iter/s (min … max) p75 p99 p995
-------------------------------------------------------------------------------------------------------------- -----------------------------
Buffer.from ascii string 1.26 µs/iter 794,875.8 (1.07 µs … 1.46 µs) 1.31 µs 1.46 µs 1.46 µs
Buffer.from base64 string 1.65 µs/iter 607,853.3 (1.38 µs … 2.01 µs) 1.69 µs 2.01 µs 2.01 µs
Buffer.from utf16 string 1.34 µs/iter 744,894.6 (1.09 µs … 1.55 µs) 1.45 µs 1.55 µs 1.55 µs
Buffer.from hex string 2.01 µs/iter 496,345.8 (1.54 µs … 2.6 µs) 2.26 µs 2.6 µs 2.6 µs
Buffer.toString ascii string 150.16 ns/iter 6,659,630.5 (144.99 ns … 166.68 ns) 152.4 ns 157.26 ns 159.14 ns
Buffer.toString base64 string 164.73 ns/iter 6,070,692.0 (158.77 ns … 185.63 ns) 168.48 ns 175.74 ns 176.68 ns
Buffer.toString utf16 string 150.61 ns/iter 6,639,864.0 (148.2 ns … 168.29 ns) 150.93 ns 157.21 ns 168.15 ns
Buffer.toString hex string 94.21 ns/iter 10,614,972.9 (86.21 ns … 98.75 ns) 95.43 ns 97.99 ns 98.21 ns
```
### String length 1500
With moderate lengths, we already see great upsides for `Buffer.from()`
with ASCII and UTF-16.
**This PR:**
```
benchmark time (avg) iter/s (min … max) p75 p99 p995
-------------------------------------------------------------------------------------------------------------- -----------------------------
Buffer.from ascii string 5.79 µs/iter 172,562.6 (4.72 µs … 4.71 ms) 5.04 µs 10.3 µs 11.67 µs
Buffer.from base64 string 5.08 µs/iter 196,678.9 (4.97 µs … 5.76 µs) 5.08 µs 5.76 µs 5.76 µs
Buffer.from utf16 string 9.68 µs/iter 103,316.5 (7.14 µs … 3.44 ms) 10.32 µs 13.42 µs 15.21 µs
Buffer.from hex string 53.7 µs/iter 18,620.2 (49.37 µs … 2.2 ms) 54.74 µs 72.2 µs 81.07 µs
Buffer.toString ascii string 6.63 µs/iter 150,761.3 (5.59 µs … 1.11 ms) 6.08 µs 15.68 µs 24.77 µs
Buffer.toString base64 string 460.57 ns/iter 2,171,224.4 (448.33 ns … 511.73 ns) 465.05 ns 495.54 ns 511.73 ns
Buffer.toString utf16 string 6.52 µs/iter 153,287.0 (6.47 µs … 6.66 µs) 6.53 µs 6.66 µs 6.66 µs
Buffer.toString hex string 3.68 µs/iter 271,965.4 (3.64 µs … 3.82 µs) 3.68 µs 3.82 µs 3.82 µs
```
**Main:**
```
benchmark time (avg) iter/s (min … max) p75 p99 p995
-------------------------------------------------------------------------------------------------------------- -----------------------------
Buffer.from ascii string 11.46 µs/iter 87,298.1 (8.53 µs … 834.1 µs) 9.61 µs 83.31 µs 87.3 µs
Buffer.from base64 string 5.4 µs/iter 185,027.8 (5.07 µs … 7.49 µs) 5.44 µs 7.49 µs 7.49 µs
Buffer.from utf16 string 20.3 µs/iter 49,270.8 (13.55 µs … 649.11 µs) 18.8 µs 113.93 µs 125.17 µs
Buffer.from hex string 52.03 µs/iter 19,218.9 (48.74 µs … 2.59 ms) 52.84 µs 67.05 µs 73.56 µs
Buffer.toString ascii string 6.46 µs/iter 154,822.5 (6.32 µs … 6.69 µs) 6.52 µs 6.69 µs 6.69 µs
Buffer.toString base64 string 440.19 ns/iter 2,271,764.6 (427 ns … 490.77 ns) 444.74 ns 484.64 ns 490.77 ns
Buffer.toString utf16 string 6.89 µs/iter 145,106.7 (6.81 µs … 7.24 µs) 6.91 µs 7.24 µs 7.24 µs
Buffer.toString hex string 3.66 µs/iter 273,456.5 (3.6 µs … 4.02 µs) 3.64 µs 4.02 µs 4.02 µs
```
### String length 2^20
With massive lengths, the difference in ASCII and UTF-16 parsing
performance is enormous.
**This PR:**
```
benchmark time (avg) iter/s (min … max) p75 p99 p995
-------------------------------------------------------------------------------------------------------------------- -----------------------------
Buffer.from ascii string 4.1 ms/iter 243.7 (2.64 ms … 6.74 ms) 4.43 ms 6.26 ms 6.74 ms
Buffer.from base64 string 3.74 ms/iter 267.6 (2.91 ms … 4.92 ms) 3.96 ms 4.31 ms 4.92 ms
Buffer.from utf16 string 7.72 ms/iter 129.5 (5.91 ms … 11.03 ms) 7.97 ms 11.03 ms 11.03 ms
Buffer.from hex string 35.72 ms/iter 28.0 (34.71 ms … 38.42 ms) 35.93 ms 38.42 ms 38.42 ms
Buffer.toString ascii string 78.92 ms/iter 12.7 (42.72 ms … 94.13 ms) 91.64 ms 94.13 ms 94.13 ms
Buffer.toString base64 string 833.62 µs/iter 1,199.6 (638.05 µs … 5.97 ms) 826.86 µs 2.45 ms 2.48 ms
Buffer.toString utf16 string 79.35 ms/iter 12.6 (69.72 ms … 88.9 ms) 86.66 ms 88.9 ms 88.9 ms
Buffer.toString hex string 31.04 ms/iter 32.2 (4.3 ms … 46.9 ms) 37.21 ms 46.9 ms 46.9 ms
```
**Main:**
```
benchmark time (avg) iter/s (min … max) p75 p99 p995
-------------------------------------------------------------------------------------------------------------------- -----------------------------
Buffer.from ascii string 18.66 ms/iter 53.6 (15.61 ms … 23.26 ms) 20.62 ms 23.26 ms 23.26 ms
Buffer.from base64 string 4.7 ms/iter 212.9 (2.94 ms … 9.07 ms) 4.65 ms 9.06 ms 9.07 ms
Buffer.from utf16 string 33.49 ms/iter 29.9 (31.24 ms … 35.67 ms) 34.08 ms 35.67 ms 35.67 ms
Buffer.from hex string 39.38 ms/iter 25.4 (38.66 ms … 42.36 ms) 39.58 ms 42.36 ms 42.36 ms
Buffer.toString ascii string 77.68 ms/iter 12.9 (67.46 ms … 95.68 ms) 84.71 ms 95.68 ms 95.68 ms
Buffer.toString base64 string 825.53 µs/iter 1,211.3 (655.38 µs … 6.69 ms) 816.62 µs 3.07 ms 3.13 ms
Buffer.toString utf16 string 76.54 ms/iter 13.1 (66.9 ms … 85.26 ms) 83.63 ms 85.26 ms 85.26 ms
Buffer.toString hex string 38.56 ms/iter 25.9 (33.83 ms … 46.56 ms) 45.33 ms 46.56 ms 46.56 ms
```
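For reference, the operations these benchmarks exercise boil down to
round-trips like the following (a sketch; the benchmark harness itself
is not shown):
```js
import { Buffer } from "node:buffer";

// Buffer.from: decode a string into bytes under a given encoding.
const ascii = Buffer.from("hello", "ascii");
const utf16 = Buffer.from("hello", "utf16le"); // the "utf16" rows above
const b64 = Buffer.from("aGVsbG8=", "base64");
const hex = Buffer.from("68656c6c6f", "hex");

// Buffer.toString: encode bytes back into a string.
console.log(ascii.toString("ascii")); // "hello"
console.log(utf16.toString("utf16le")); // "hello"
console.log(b64.toString("base64")); // "aGVsbG8="
console.log(hex.toString("hex")); // "68656c6c6f"
```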
|
Adds support for AES-GCM 128/256-bit keys in `node:crypto`, along with
`setAAD()`, `setAuthTag()` and `getAuthTag()`.
Uses https://github.com/littledivy/aead-gcm-stream
Fixes https://github.com/denoland/deno/issues/19836 and
https://github.com/denoland/deno/issues/20353
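In practice this enables the standard Node.js GCM flow; a usage sketch
of the public `node:crypto` API (not the internal implementation):
```js
import { Buffer } from "node:buffer";
import { createCipheriv, createDecipheriv, randomBytes } from "node:crypto";

const key = randomBytes(32); // 32 bytes = AES-256-GCM; 16 = AES-128-GCM
const iv = randomBytes(12);
const aad = Buffer.from("additional authenticated data");

const cipher = createCipheriv("aes-256-gcm", key, iv);
cipher.setAAD(aad);
const ciphertext = Buffer.concat([cipher.update("secret"), cipher.final()]);
const tag = cipher.getAuthTag(); // valid only after final()

const decipher = createDecipheriv("aes-256-gcm", key, iv);
decipher.setAAD(aad);
decipher.setAuthTag(tag);
const plaintext = Buffer.concat([
  decipher.update(ciphertext),
  decipher.final(), // throws if the tag or AAD doesn't match
]);
console.log(plaintext.toString()); // "secret"
```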
|
(#20378)
Fixes #20373
---------
Co-authored-by: Bartek Iwańczuk <biwanczuk@gmail.com>
|
When trying to run
```
deno run -A --unstable npm:astro dev
```
in my Astro project it fails with:
```
Node.js v18.12.1 is not supported by Astro!
Please upgrade Node.js to a supported version: ">=18.14.1"
```
My current version is:
```
~ ❯ node --version
v20.5.1
```
Bumping the version to the latest stable release of Node in
`ext/node/polyfills/_process/process.ts` fixes this.
I don't know if this causes any conflicts, so please feel free to
correct me here.
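For context, frameworks typically gate on the reported version roughly
like this (an illustrative sketch, not Astro's actual code), which is
why an outdated `process.version` breaks them:
```js
// process.versions.node is the version string without the leading "v".
const [major, minor] = process.versions.node.split(".").map(Number);
if (major < 18 || (major === 18 && minor < 14)) {
  throw new Error(
    `Node.js v${process.versions.node} is not supported. ` +
      'Please upgrade Node.js to a supported version: ">=18.14.1"',
  );
}
```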
|
Co-authored-by: denobot <33910674+denobot@users.noreply.github.com>
|
`getpriority` and `setpriority` on musl libc accept `int` / `c_int` /
`i32` as the first argument, not `u32`.
Since the `PRIO_PROCESS` constant is imported from the same crate (libc)
as the `getpriority` and `setpriority` functions, this type cast seems
completely unnecessary here.
It was introduced in aa8078b6888ee4d55ef348e336e076676dffc25f by
@crowlKats.
Relevant sources:
-
https://github.com/rust-lang/libc/blob/835661543db1ec42a6d9a809d69c3c5b5b978b81/src/unix/linux_like/linux/musl/mod.rs#L739-L740
- https://git.musl-libc.org/cgit/musl/tree/src/misc/setpriority.c
- https://git.musl-libc.org/cgit/musl/tree/src/misc/getpriority.c
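The user-facing surface these ops back is the standard `node:os`
priority API (usage sketch):
```js
import os from "node:os";

// pid 0 refers to the current process; raising niceness (lowering
// priority) requires no special privileges.
os.setPriority(0, 10);
console.log(os.getPriority(0)); // 10
```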
Co-authored-by: Aapo Alasuutari <aapo.alasuutari@gmail.com>
|
Closes https://github.com/denoland/deno/issues/19828
|
The fix for #20188 was not entirely correct -- we were unlocking the
global buffer incorrectly. This PR introduces a lock state that ensures
we only unlock a lock we have taken out.
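The idea, as an illustrative sketch (hypothetical names, not the actual
patch): remember whether this call path took the lock, and release it
only in that case:
```js
// Only the lock-state pattern is the point here.
let haveLock = false;

function lockBuf() {
  // ...acquire the shared global buffer...
  haveLock = true;
}

function unlockBuf() {
  if (!haveLock) return; // never release a lock we did not take out
  haveLock = false;
  // ...release the shared global buffer...
}
```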
|
`Transfer-Encoding: chunked` (#20127)
Fix #20063.
|
As the title.
---------
Co-authored-by: Matt Mastracci <matthew@mastracci.com>
|
Fixes https://github.com/denoland/deno/issues/19002
|
Reference from the FreeBSD port
https://github.com/freebsd/freebsd-ports/blob/3afa24c6e301832f76304bb55f4e9ee72858c254/www/deno/files/patch-ext_node_ops_os.rs
|
Co-authored-by: Matt Mastracci <matthew@mastracci.com>
|
Co-authored-by: Bartek Iwańczuk <biwanczuk@gmail.com>
|
Reported in #20188
This was caused by re-use of a global buffer `BUF` during simultaneous
async reads.
|
Closes https://github.com/denoland/deno/issues/20186
|
Closes #19922
|
The goal of this PR is to address issue #20106, where a `TypeError`
occurs when the variables `uid` and `gid` from `userInfo()` in `node:os`
are reassigned if the user is on Windows. Both `uid` and `gid` are
declared as `const`, therefore producing a `TypeError` when the two are
reassigned.
This PR achieves that goal by declaring `uid` and `gid` with `let`.
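The failing call was the plain `node:os` API (usage sketch):
```js
import os from "node:os";

// On Windows, Node reports uid and gid as -1. Assigning those values
// inside the polyfill is what used to hit the const-reassignment
// TypeError this PR fixes.
const { username, uid, gid } = os.userInfo();
console.log(username, uid, gid);
```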
|
To fix bugs around detecting when node emulation is required, we will
just eagerly initialize it. The improvements we make to reduce the
impact on startup time:
- [x] Process stdin/stdout/stderr are lazily created (sketched after
this list)
- [x] node.js global proxy no longer allocates on each access check
- [x] Process checks for `beforeExit` listeners before doing expensive
shutdown work
- [x] Process should avoid adding global event handlers until listeners
are added
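An illustrative sketch of the first item (hypothetical names, not the
actual polyfill code): install a getter that builds the stream on first
access and then caches it:
```js
function lazyGetter(obj, name, init) {
  Object.defineProperty(obj, name, {
    configurable: true,
    get() {
      const value = init();
      // Replace the getter with the computed value so init() runs once.
      Object.defineProperty(obj, name, { value });
      return value;
    },
  });
}

const fakeProcess = {};
lazyGetter(fakeProcess, "stdout", () => {
  console.log("constructing stdout now"); // only on first access
  return { write: (s) => console.log(s) };
});

fakeProcess.stdout.write("hello"); // triggers construction
fakeProcess.stdout.write("again"); // reuses the cached stream
```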
Benchmarking this PR (`89de7e1ff`) vs main (`41cad2179`)
```
12:36 $ third_party/prebuilt/mac/hyperfine --warmup 100 -S none './deno-41cad2179 run ./empty.js' './deno-89de7e1ff run ./empty.js'
Benchmark 1: ./deno-41cad2179 run ./empty.js
Time (mean ± σ): 24.3 ms ± 1.6 ms [User: 16.2 ms, System: 6.0 ms]
Range (min … max): 21.1 ms … 29.1 ms 115 runs
Benchmark 2: ./deno-89de7e1ff run ./empty.js
Time (mean ± σ): 24.0 ms ± 1.4 ms [User: 16.3 ms, System: 5.6 ms]
Range (min … max): 21.3 ms … 28.6 ms 126 runs
```
Fixes https://github.com/denoland/deno/issues/20142
Fixes https://github.com/denoland/deno/issues/15826
Fixes https://github.com/denoland/deno/issues/20028
|
This PR optimizes Node's `IncomingMessageForServer.headers` by
replacing `Object.fromEntries()` with a loop, and `headers.entries` with
`headersEntries`, which returns the internal array directly instead of
an iterator (see the sketch after the benchmarks below).
## Benchmarks
Using `wrk` with 5 headers
```
wrk -d 10s --latency -H "X-Deno: true" -H "Accept: application/json" -H "X-Foo: bar" -H "User-Agent: wrk" -H "Accept-Encoding: gzip, br" http://127.0.0.1:3000
```
**this PR**
```
Running 10s test @ http://127.0.0.1:3000
2 threads and 10 connections
Thread Stats Avg Stdev Max +/- Stdev
Latency 167.53us 136.89us 2.75ms 97.33%
Req/Sec 31.98k 1.38k 36.39k 70.30%
Latency Distribution
50% 134.00us
75% 191.00us
90% 234.00us
99% 544.00us
642548 requests in 10.10s, 45.96MB read
Requests/sec: 63620.36
Transfer/sec: 4.55MB
```
**main**
```
Running 10s test @ http://127.0.0.1:3000
2 threads and 10 connections
Thread Stats Avg Stdev Max +/- Stdev
Latency 181.31us 132.54us 3.79ms 97.13%
Req/Sec 29.21k 1.45k 32.93k 79.21%
Latency Distribution
50% 148.00us
75% 198.00us
90% 261.00us
99% 545.00us
586939 requests in 10.10s, 41.98MB read
Requests/sec: 58114.01
Transfer/sec: 4.16MB
```
The Express server used for both runs:
```js
import express from "npm:express";
const app = express();
app.get("/", function (req, res) {
req.headers;
res.end();
});
app.listen(3000);
```
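A sketch of the change described above; `headersEntries` is stubbed
here since its real shape is internal to ext/node:
```js
// Stub standing in for the internal helper that returns the raw
// [key, value] array directly (no iterator allocation).
const headersEntries = (headers) => [
  ["host", "127.0.0.1:3000"],
  ["user-agent", "wrk"],
];

function toNodeHeaders(innerHeaders) {
  // Before: Object.fromEntries(innerHeaders.entries()), which allocates
  // an iterator plus an entry pair per header.
  // After: a plain loop over the internal array.
  const out = {};
  for (const [key, value] of headersEntries(innerHeaders)) {
    out[key] = value;
  }
  return out;
}

console.log(toNodeHeaders({}));
```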
|
This PR adds caching to node's `req.headers`; a sketch of the idea
follows the benchmark results below.
```js
import express from "npm:express";
const app = express();
app.get("/", function (req, res) {
const ua = req.header("User-Agent");
const auth = req.header("Authorization");
const type = req.header("Content-Type");
const ip = req.header("X-Forwarded-For");
res.end();
});
app.listen(3000);
```
**this PR**
```
wrk -d 10s --latency http://127.0.0.1:3000
Running 10s test @ http://127.0.0.1:3000
2 threads and 10 connections
Thread Stats Avg Stdev Max +/- Stdev
Latency 155.64us 152.14us 5.74ms 97.39%
Req/Sec 35.00k 1.97k 39.10k 80.69%
Latency Distribution
50% 123.00us
75% 172.00us
90% 214.00us
99% 563.00us
703420 requests in 10.10s, 50.31MB read
Requests/sec: 69648.45
Transfer/sec: 4.98MB
```
**main**
```
wrk -d 10s --latency http://127.0.0.1:3000
Running 10s test @ http://127.0.0.1:3000
2 threads and 10 connections
Thread Stats Avg Stdev Max +/- Stdev
Latency 217.95us 786.89us 26.26ms 98.23%
Req/Sec 32.32k 2.54k 37.19k 87.13%
Latency Distribution
50% 130.00us
75% 191.00us
90% 232.00us
99% 1.88ms
649411 requests in 10.10s, 46.45MB read
Requests/sec: 64300.44
Transfer/sec: 4.60MB
```
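An illustrative sketch of the caching (a hypothetical class, not the
actual `IncomingMessageForServer` code): build the headers object once
and memoize it:
```js
class IncomingMessageSketch {
  #cachedHeaders = null;

  constructor(rawEntries) {
    this.rawEntries = rawEntries;
  }

  get headers() {
    if (this.#cachedHeaders === null) {
      this.#cachedHeaders = {};
      for (const [key, value] of this.rawEntries) {
        this.#cachedHeaders[key] = value;
      }
    }
    return this.#cachedHeaders;
  }
}

const req = new IncomingMessageSketch([["user-agent", "wrk"]]);
console.log(req.headers === req.headers); // true -- built only once
```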
|
Closes https://github.com/denoland/deno/issues/19983
Closes https://github.com/denoland/deno/issues/18303
Closes https://github.com/denoland/deno/issues/16681
Closes https://github.com/denoland/deno/issues/19978
|
Fixes https://github.com/denoland/deno/issues/19540
|
Fixes https://github.com/denoland/deno/issues/19935
|
Closes https://github.com/denoland/deno/issues/20075
|
Co-authored-by: denobot <33910674+denobot@users.noreply.github.com>
Co-authored-by: littledivy <littledivy@users.noreply.github.com>
|
Closes https://github.com/denoland/deno/issues/20076
|
Rename some of the helper methods on the Fs trait to be suffixed with
`_sync` / `_async`, in preparation for the introduction of more async
methods for some helpers.
Also adds a `read_text_file_async` helper to complement the renamed
`read_text_file_sync` helper.
|
(#20093)
This is for when `semiColons: false` is set.
Closes #20089
|
Closes #19399 (running without snapshots at all was suggested as an
alternative solution).
Adds a `__runtime_js_sources` pseudo-private feature to load extension
JS sources at runtime for faster development, instead of building and
loading snapshots or embedding sources in the binary. It will only work
in a development environment, obviously.
Try running `cargo test --features __runtime_js_sources
integration::node_unit_tests::os_test`. Then break some behaviour in
`ext/node/polyfills/os.ts`, e.g. make `function cpus() {}` return an
empty array, and run it again. Fix it and run it again. No more build
time in between.
|
Closes https://github.com/denoland/deno/issues/19777
|
Ref https://github.com/denoland/deno/issues/19733
|
This is a prerequisite for the fast-streams work -- this particular
resource used a custom `mpsc`-style stream, and this work will allow us
to unify it with the streams in `ext/http` in time.
Instead of using `Option` as an internal semaphore for "correctly
completed EOF", we allow code to propagate errors into the channel,
which can be picked up by downstream sinks like Hyper. EOF is signalled
by a more standard sender drop.
|
Co-authored-by: bartlomieju <bartlomieju@users.noreply.github.com>
|
Also removed some noisy output that caused test flakiness.
|
This commit provides a basic polyfill for the "node:test" module.
Currently only the top-level "test" function is polyfilled; all
remaining functions from that module throw "not implemented" errors.
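What the polyfill covers today, as a usage sketch:
```js
import test from "node:test";

// The top-level test() function works...
test("basic addition", () => {
  if (1 + 1 !== 2) throw new Error("math is broken");
});

// ...while the rest of the module (describe/it, mocks, etc.) still
// throws "not implemented".
```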
|
Takes over #4202
Closes #17850
---------
Co-authored-by: ecyrbe <ecyrbe@gmail.com>
|
Co-authored-by: Bartek Iwańczuk <biwanczuk@gmail.com>
|
extensions (#19966)
postcss was importing `./index.js` from `./index.d.mts`, where a
`./index.d.ts` also existed.
Closes #19575
|