Age | Commit message | Author |
|
managed `CliNpmResolver` (#20777)
Part of https://github.com/denoland/deno/issues/18967
|
|
The `process` global is not defined in this file.
Fixes #20441
---------
Co-authored-by: Bartek Iwańczuk <biwanczuk@gmail.com>
|
|
This PR optimizes the `DOMException` constructor, increasing the performance of all Web APIs that throw a `DOMException` (e.g. `AbortSignal`).
**main**
```
cpu: 13th Gen Intel(R) Core(TM) i9-13900H
runtime: deno 1.37.1 (x86_64-unknown-linux-gnu)
benchmark time (avg) iter/s (min … max) p75 p99 p995
----------------------------------------------------------------------------- -----------------------------
new DOMException() 9.66 µs/iter 103,476.8 (8.47 µs … 942.71 µs) 9.62 µs 11.29 µs 14.04 µs
abort writeTextFileSync 16.45 µs/iter 60,775.5 (13.65 µs … 1.33 ms) 16.39 µs 20.59 µs 24.12 µs
abort readFile 16.25 µs/iter 61,542.2 (15.12 µs … 621.14 µs) 16.18 µs 19.59 µs 22.33 µs
```
**this PR**
```
cpu: 13th Gen Intel(R) Core(TM) i9-13900H
runtime: deno 1.37.1 (x86_64-unknown-linux-gnu)
benchmark time (avg) iter/s (min … max) p75 p99 p995
----------------------------------------------------------------------------- -----------------------------
new DOMException() 2.37 µs/iter 421,657.0 (2.33 µs … 2.58 µs) 2.37 µs 2.58 µs 2.58 µs
abort writeTextFileSync 7.1 µs/iter 140,760.1 (6.94 µs … 7.68 µs) 7.13 µs 7.68 µs 7.68 µs
abort readFile 5.48 µs/iter 182,648.2 (5.3 µs … 5.69 µs) 5.56 µs 5.69 µs 5.69 µs
```
```js
Deno.bench("new DOMException()", () => {
new DOMException();
});
Deno.bench("abort writeTextFileSync", () => {
const ac = new AbortController();
ac.abort();
try {
Deno.writeTextFileSync("/tmp/out", "x", { signal: ac.signal });
} catch {}
});
Deno.bench("abort readFile", async () => {
const ac = new AbortController();
ac.abort();
try {
await Deno.readFile("/tmp/out", { signal: ac.signal });
} catch {}
});
```
|
|
Use the `core.byteLength` op for string UTF-8 length calculation in `node:buffer`.
```
# This patch
file:///Users/divy/gh/deno/buffer.mjs
benchmark time (avg) iter/s (min … max) p75 p99 p995
----------------------------------------------------------------- -----------------------------
Buffer#from 272.11 ns/iter 3,675,029.3 (268.41 ns … 301.15 ns) 271.62 ns 295.5 ns 301.15 ns
# Deno 1.37.1
file:///Users/divy/gh/deno/buffer.mjs
benchmark time (avg) iter/s (min … max) p75 p99 p995
----------------------------------------------------------------- -----------------------------
Buffer#from 411.28 ns/iter 2,431,428.8 (393.82 ns … 439.92 ns) 418.85 ns 434.4 ns 439.92 ns
```
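The benchmarked operation is presumably of this shape (a reconstruction for context; the original `buffer.mjs` is not shown, and the input string is illustrative):
```js
import { Buffer } from "node:buffer";

// Buffer.from(string) must compute the UTF-8 byte length of the input
// before allocating; that calculation now goes through core.byteLength.
const input = "hello world \u{1F600}".repeat(8);

Deno.bench("Buffer#from", () => {
  Buffer.from(input);
});
```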
|
|
|
|
Fixes #20454
The current KV queues implementation assumes that `enqueue` and `listenQueue` are called on the same instance of `Deno.Kv`. It's possible for the same Deno process to open multiple KV instances pointing to the same fs path, and in that case `listenQueue` should still be notified of messages enqueued through a different KV instance.
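A minimal sketch of the scenario this fixes (the file path is illustrative):
```js
// Two Deno.Kv handles backed by the same SQLite file.
const kvA = await Deno.openKv("./queues.db");
const kvB = await Deno.openKv("./queues.db");

kvA.listenQueue((msg) => {
  console.log("received:", msg);
});

// With this fix, the listener registered on kvA is notified even though
// the message was enqueued through a different instance.
await kvB.enqueue({ hello: "world" });
```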
|
|
This is required for BYONM (bring your own node_modules).
Part of #18967
|
|
|
|
In the following example, setting the encoding to `base64url` throws an unexpected TypeError:
```ts
import { Buffer } from "node:buffer";
Buffer.from("IntcImhlbGxvXCI6XCJoZGQvZStpXCJ9Ig", "base64url").toString();
// error: Uncaught TypeError: src.subarray is not a function
// const buf = Buffer.from(
// ^
// at blitBuffer (ext:deno_node/internal/buffer.mjs:1779:15)
// at Uint8Array.base64urlWrite (ext:deno_node/internal/buffer.mjs:691:10)
// at Object.write (ext:deno_node/internal/buffer.mjs:2195:11)
// at Uint8Array.write (ext:deno_node/internal/buffer.mjs:794:14)
// at fromString (ext:deno_node/internal/buffer.mjs:214:22)
// at _from (ext:deno_node/internal/buffer.mjs:119:12)
// at Function.from (ext:deno_node/internal/buffer.mjs:157:10)
// at file:///Users/foodieats/temp/buffer1.ts:3:20
```
The error is caused by the `base64urlWrite` function: it should call `forgivingBase64UrlDecode`, not `forgivingBase64UrlEncode`.
Also fixes #20563.
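With the decode call corrected, a base64url round-trip works as expected (a minimal check using a different input string):
```js
import { Buffer } from "node:buffer";

// "aGVsbG8" is the unpadded base64url encoding of "hello".
console.log(Buffer.from("aGVsbG8", "base64url").toString()); // "hello"
```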
|
|
This is the release commit being forwarded back to main for 1.37.1
Co-authored-by: littledivy <littledivy@users.noreply.github.com>
|
|
This patch adds support for [key
expiration](https://docs.deno.com/kv/manual/key_expiration) in the
remote backend.
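Usage matches the local backend (a sketch; the database URL is a placeholder, and `expireIn` is in milliseconds):
```js
// Connecting to a remote KV database (authentication is expected via the
// DENO_KV_ACCESS_TOKEN environment variable).
const kv = await Deno.openKv(
  "https://api.deno.com/databases/<database-id>/connect",
);

// The entry is automatically deleted roughly one minute after being set.
await kv.set(["session", "abc123"], { user: "alice" }, { expireIn: 60_000 });
```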
|
|
This fixes the `TypeError: Database closed` error during shutdown.
|
|
|
|
|
|
Use the [scopeguard](https://docs.rs/scopeguard/) defer macro to run
cleanup code for `new_slab_future`.
This means it can be a single async function, avoiding the need to create a struct and implement `PinnedDrop`.
Async cleanup in Rust is awkward because async functions may be
cancelled at any await point when their Future is dropped.
The scopeguard approach comes from the following articles:
* [How to think about `async`/`await` in
Rust](http://cliffle.com/blog/async-inversion/)
* [Async Cancellation
I](https://blog.yoshuawuyts.com/async-cancellation-1/) (Reddit
[discussion](https://www.reddit.com/r/rust/comments/qrhg39/blog_post_async_cancellation/))
|
|
(#20641)
Builds on top of #20622 to fix #10854
|
|
|
|
|
|
`Array.from` has an optional second argument (a mapping function), so calling `map` separately is not required in this case.
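For illustration:
```js
// Array.from accepts a mapping function as its second argument, so a
// separate .map() pass (and its intermediate array) is unnecessary.
const viaMap = Array.from([1, 2, 3]).map((n) => n * 2); // extra array
const direct = Array.from([1, 2, 3], (n) => n * 2); // single pass
console.log(viaMap, direct); // [ 2, 4, 6 ] [ 2, 4, 6 ]
```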
|
|
Co-authored-by: Matt Mastracci <matthew@mastracci.com>
|
|
|
|
resourceForReadableStream (#20622)
We can go one level down in abstraction and avoid using the public `ReadableStream` APIs.
This patch gives a ~5% perf boost on small ReadableStream responses (a handler sketch follows the numbers below):
```
Running 10s test @ http://localhost:8080/
2 threads and 10 connections
Thread Stats Avg Stdev Max +/- Stdev
Latency 148.32us 108.95us 3.88ms 95.71%
Req/Sec 33.24k 2.68k 37.94k 73.76%
668188 requests in 10.10s, 77.74MB read
Requests/sec: 66162.91
Transfer/sec: 7.70MB
```
main:
```
Running 10s test @ http://localhost:8080/
2 threads and 10 connections
Thread Stats Avg Stdev Max +/- Stdev
Latency 150.23us 67.61us 4.39ms 94.80%
Req/Sec 31.81k 1.55k 35.56k 83.17%
639078 requests in 10.10s, 74.36MB read
Requests/sec: 63273.72
Transfer/sec: 7.36MB
```
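For reference, the benchmark exercises a handler of roughly this shape (a sketch, not the exact test server):
```js
Deno.serve({ port: 8080 }, () => {
  // A small ReadableStream body: the case this patch speeds up.
  const body = new ReadableStream({
    start(controller) {
      controller.enqueue(new TextEncoder().encode("hello"));
      controller.close();
    },
  });
  return new Response(body, { headers: { "content-type": "text/plain" } });
});
```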
|
|
|
|
Fixes https://github.com/denoland/deno/issues/20634
|
|
|
|
Fixes #20558
Implementation: when the package.json `exports` field is `null`, treat it as if it were not set.
|
|
This PR optimizes the `fromInner*` methods of `Request` / `Headers` / `Response` used by `Deno.serve` and `fetch` by using `new` instead of `ObjectCreate` from `createBranded`.
The "brand" is created by passing `webidl.brand` to the constructor
instead.
https://github.com/denoland/deno/blob/142449ecab20006c5cfd15462814650596bc034d/ext/webidl/00_webidl.js#L1001-L1005
### Benchmark
```js
const createBranded = Symbol("create branded");
const brand = Symbol("brand");
class B {
constructor(init) {
if (init === createBranded) {
this[brand] = brand;
}
}
}
Deno.bench("Object.create(protoype)", () => {
Object.create(B.prototype);
});
Deno.bench("new Class", () => {
new B(createBranded);
});
```
```
cpu: 13th Gen Intel(R) Core(TM) i9-13900H
runtime: deno 1.37.0 (x86_64-unknown-linux-gnu)
benchmark time (avg) iter/s (min … max) p75 p99 p995
----------------------------------------------------------------------------- -----------------------------
Object.create(prototype) 8.74 ns/iter 114,363,610.3 (7.32 ns … 26.02 ns) 8.65 ns 13.39 ns 14.47 ns
new Class 3.05 ns/iter 328,271,012.2 (2.78 ns … 9.1 ns) 3.06 ns 3.46 ns 3.5 ns
```
|
|
|
|
Fixes https://github.com/denoland/deno/issues/20590
|
|
ReadableStream (#20570)
Fixes #20569 by introducing a custom replacement for the tokio mpsc channel that is byte-size backpressure-aware.
Using the test case in the linked bug, we see all the small writes aggregated into a single packet and HTTP frame (a reconstructed handler sketch follows the `nc` session below).
```
10:39 $ nc localhost 8000
GET / HTTP/1.1
HTTP/1.1 200 OK
content-type: text/plain
vary: Accept-Encoding
transfer-encoding: chunked
date: Tue, 19 Sep 2023 16:39:13 GMT
A
0
1
2
3
4
```
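The handler in that test case is presumably of this shape (a reconstruction: five two-byte writes, matching the single 10-byte chunk `A` above):
```js
Deno.serve({ port: 8000 }, () => {
  const body = new ReadableStream({
    start(controller) {
      // Five tiny writes that previously produced one HTTP frame each.
      for (let i = 0; i < 5; i++) {
        controller.enqueue(new TextEncoder().encode(`${i}\n`));
      }
      controller.close();
    },
  });
  return new Response(body, { headers: { "content-type": "text/plain" } });
});
```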
This patch:
```
Running 10s test @ http://localhost:8080/
2 threads and 10 connections
Thread Stats Avg Stdev Max +/- Stdev
Latency 157.47us 194.89us 9.53ms 98.97%
Req/Sec 31.37k 1.56k 34.73k 85.15%
630407 requests in 10.10s, 73.35MB read
Requests/sec: 62428.12
Transfer/sec: 7.26MB
```
main:
```
Running 10s test @ http://localhost:8080/
2 threads and 10 connections
Thread Stats Avg Stdev Max +/- Stdev
Latency 343.75us 200.48us 10.41ms 98.25%
Req/Sec 14.64k 806.52 16.98k 84.65%
294018 requests in 10.10s, 39.82MB read
Requests/sec: 29109.91
Transfer/sec: 3.94MB
```
---------
Co-authored-by: Bert Belder <bertbelder@gmail.com>
|
|
Co-authored-by: Matt Mastracci <matthew@mastracci.com>
|
|
Co-authored-by: David Sherret <dsherret@gmail.com>
|
|
getOwnPropertyDescriptor('constructor') doesn't break Deno.inspect (#20568)
Fixes #20561
|
|
This PR optimizes the `ReadableStream` async iterator.
### Benchmarks
```js
Deno.bench("Stream - iterator", async () => {
const stream = new ReadableStream({
start(controller) {
controller.enqueue(new Uint8Array([97]));
controller.enqueue(new Uint8Array([97]));
controller.close();
},
});
for await (const chunk of stream) {}
});
```
**main**
`2 chunks`
```
cpu: 13th Gen Intel(R) Core(TM) i9-13900H
runtime: deno 1.36.4 (x86_64-unknown-linux-gnu)
benchmark time (avg) iter/s (min … max) p75 p99 p995
----------------------------------------------------------------------- -----------------------------
Stream - iterator 12.45 µs/iter 80,295.5 (10.5 µs … 281.12 µs) 12.13 µs 26.71 µs 33.63 µs
```
`20 chunks`
```
benchmark time (avg) iter/s (min … max) p75 p99 p995
----------------------------------------------------------------------- -----------------------------
Stream - iterator 32.99 µs/iter 30,312.2 (28.13 µs … 1.21 ms) 31.8 µs 81.82 µs 179.93 µs
```
---
**this PR**
`2 chunks`
```
cpu: 13th Gen Intel(R) Core(TM) i9-13900H
runtime: deno 1.36.4 (x86_64-unknown-linux-gnu)
benchmark time (avg) iter/s (min … max) p75 p99 p995
----------------------------------------------------------------------- -----------------------------
Stream - iterator 9.37 µs/iter 106,700.8 (8.35 µs … 730.71 µs) 9.15 µs 13.12 µs 18.17 µs
```
`20 chunks`
```
benchmark time (avg) iter/s (min … max) p75 p99 p995
----------------------------------------------------------------------- -----------------------------
Stream - iterator 16.59 µs/iter 60,270.0 (12.08 µs … 1.37 ms) 15.06 µs 83.03 µs 123.52 µs
```
|
|
This PR introduces an optimization to `set_response` that reduces the overhead for responses with a payload smaller than 64 bytes.
Performance gains are more noticeable when `is_request_compressible` enters the slow path, i.e. `-H 'Accept-Encoding: unknown'`.
### Benchmarks
```js
Deno.serve({ port: 3000 }, () => new Response("hello"));
```
```
wrk -d 10s --latency -H 'Accept-Encoding: slow' http://127.0.0.1:3000
```
---
**main**
```
Running 10s test @ http://127.0.0.1:3000
2 threads and 10 connections
Thread Stats Avg Stdev Max +/- Stdev
Latency 44.72us 28.12us 3.10ms 97.95%
Req/Sec 112.73k 8.25k 123.66k 91.09%
2264092 requests in 10.10s, 308.77MB read
Requests/sec: 224187.08
Transfer/sec: 30.57MB
```
**this PR**
```
Running 10s test @ http://127.0.0.1:3000
2 threads and 10 connections
Thread Stats Avg Stdev Max +/- Stdev
Latency 42.91us 20.57us 2.04ms 97.36%
Req/Sec 116.61k 7.95k 204.81k 88.56%
2330970 requests in 10.10s, 317.89MB read
Requests/sec: 230806.32
Transfer/sec: 31.48MB
```
|
|
This commit improves async op sanitizer speed by only delaying metrics collection when there are pending ops. This results in a speedup of around 30% for small CPU-bound unit tests.
The check (and possible delay) is now performed on every collection, fixing an issue where pending ops from a parent test leaked into its steps.
|
|
This commit adds "deno jupyter" subcommand which
provides a Deno kernel for Jupyter notebooks.
The implementation is mostly based on Deno's REPL and
reuses large parts of it (though there's some clean up that
needs to happen in follow up PRs). Not all functionality of
Jupyter kernel is implemented and some message type
are still not implemented (eg. "inspect_request") but
the kernel is fully working and provides all the capatibilities
that the Deno REPL has; including TypeScript transpilation
and npm packages support.
Closes https://github.com/denoland/deno/issues/13016
---------
Co-authored-by: Adam Powers <apowers@ato.ms>
Co-authored-by: Kyle Kelley <rgbkrk@gmail.com>
|
|
This commit improves compatibility of the "node:http2" module by polyfilling the "connect" method and the "ClientHttp2Session" class. Basic operations like streaming, header handling, and trailer handling work correctly.
Refing/unrefing is still a TODO, and "npm:grpc-js/grpc" does not yet work correctly.
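A minimal sketch of the newly polyfilled API (host and path are illustrative):
```js
import http2 from "node:http2";

const session = http2.connect("http://localhost:8000");
const stream = session.request({ ":method": "GET", ":path": "/" });

stream.on("response", (headers) => console.log(headers[":status"]));
stream.on("data", (chunk) => console.log(chunk.toString()));
stream.on("end", () => session.close());
```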
---------
Co-authored-by: Matt Mastracci <matthew@mastracci.com>
|
|
Fixes #20502 -- ensure that Hyper errors make it through to JS.
|
|
|
|
|
|
Fixes https://github.com/denoland/deno/issues/20414
|
|
|
|
Bump deno_core, pulling in new rusty_v8. Requires some op2/deprecation
fixes.
|
|
|
|
|
|
|
|
This PR implements a graceful shutdown API for Deno.serve, allowing all
current connections to drain from the server before shutting down, while
preventing new connections from being started or new transactions on
existing connections from being created.
We split the cancellation handle into two parts: a listener handle, and
a connection handle. A graceful shutdown cancels the listener only,
while allowing the connections to drain. The connection handle aborts
all futures. If the listener handle is cancelled, we put the connections
into graceful shutdown mode, which disables keep-alive on http/1.1 and
uses http/2 mechanisms for http/2 connections.
In addition, we now guarantee that all connections are complete or
cancelled, and all resources are cleaned up when the server `finished`
promise resolves -- we use a Rust-side server refcount for this.
Performance impact: does not appear to affect basic serving performance
by more than 1% (~126k -> ~125k)
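A minimal sketch of the resulting API surface (assuming the handle returned by `Deno.serve` exposes the `shutdown()` method and `finished` promise described above):
```js
const server = Deno.serve({ port: 8080 }, () => new Response("hello"));

// After 10 seconds, stop the listener; in-flight connections drain
// instead of being aborted.
setTimeout(() => server.shutdown(), 10_000);

// Resolves once every connection has completed or been cancelled and
// all resources have been cleaned up.
await server.finished;
console.log("server shut down gracefully");
```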
---------
Co-authored-by: Bartek Iwańczuk <biwanczuk@gmail.com>
|
|
Functions should generally be annotated with `#[allow]` blocks rather
than using inner `#![allow]` annotations.
|
|
Fixes a performance regression introduced by https://github.com/denoland/deno/pull/20223 and https://github.com/denoland/deno/pull/20314. It's enough to have one "shared" buffer per socket, and no locking mechanism is required.
|