Age | Commit message | Author |
|
|
|
This commit refactors how we access the "core", "internals" and
"primordials" objects coming from `deno_core` in our internal JavaScript code.
Instead of capturing them from the "globalThis.__bootstrap" namespace, we
import them from the recently added "ext:core/mod.js" file.
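A minimal sketch of the new access pattern inside Deno's internal extension
code (the specifier and the three names come from the description above; the
destructured primordial is illustrative):
```js
// New pattern: import the objects from the dedicated internal module.
import { core, internals, primordials } from "ext:core/mod.js";

// Previous pattern, now removed:
// const { core, internals, primordials } = globalThis.__bootstrap;

const { ArrayPrototypePush } = primordials;
```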
|
|
This commit stabilizes the "Deno.HttpServer.shutdown" API as well as
Unix socket support in the "Deno.serve" API.
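A hedged usage sketch of both now-stable features (the socket path and
handler are illustrative):
```ts
// Serve over a Unix domain socket via the `path` option instead of a TCP port.
const server = Deno.serve(
  { path: "/tmp/example.sock" }, // illustrative socket path
  () => new Response("hello over a Unix socket"),
);

// Stable graceful shutdown: stop accepting new connections and let
// in-flight requests finish.
await server.shutdown();
```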
---------
Co-authored-by: Yoshiya Hinosawa <stibium121@gmail.com>
|
|
Co-authored-by: denobot <33910674+denobot@users.noreply.github.com>
|
|
Co-authored-by: denobot <33910674+denobot@users.noreply.github.com>
|
|
Rust 1.74 may have temporarily made this code valid in [#113126 Replace
old private-in-public diagnostic with type privacy
lints](https://github.com/rust-lang/rust/pull/113126), so we didn't
catch it at build time.
It fails on 1.73 and on +nightly, however.
|
|
|
|
Follow-up to #20822. cc @lrowe
The `httpServerExplicitResourceManagement` tests were randomly failing
on CI because of a race.
The `drain` waker was missing wakeup events if the listeners shut down
after the last HTTP response finished. If we lost the race (rare), the
server Rc would be dropped and we wouldn't poll it again.
This replaces the drain waker system with a signalling Rc that always
resolves when the refcount is about to become 1.
Fix verified by running serve tests in a loop:
```
for i in {0..100}; do
  cargo run --features=__http_tracing -- test -A --unstable \
    '/Users/matt/Documents/github/deno/deno/cli/tests/unit/serve_test.ts' \
    --filter httpServerExplicitResourceManagement
done
```
|
|
Syncs to main the changes needed for a deno_http version bump.
`deno_http` v1.20 was released from the v1.38 branch.
|
|
Fixes #21250
We were attempting to recycle dropped resource responses too early.
|
|
Co-authored-by: Yoshiya Hinosawa <stibium121@gmail.com>
|
|
|
|
Fixes #21121 and #19498
Migrates fully to rustls_tokio_stream. We no longer need to maintain our
own TlsStream implementation to properly support duplex.
This should fix a number of TLS errors in websockets, HTTP, and "other"
places where it was failing.
|
|
completion (#20822)
Use HttpRecord as response body so requests can be tracked all the way
to response body completion.
This allows Request properties to be accessed while the response body is
streaming.
Graceful shutdown now awaits a future instead of async spinning waiting
for requests to finish.
On the minimal benchmark this refactor improves performance by an
additional 2% over pooling alone, for a net 3% increase over the
previous deno main branch.
Builds upon https://github.com/denoland/deno/pull/20809 and
https://github.com/denoland/deno/pull/20770.
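An illustrative sketch (not from the PR) of what this tracking enables: the
`Request` stays usable while its response body is still streaming.
```ts
Deno.serve((req) => {
  const body = new ReadableStream<Uint8Array>({
    pull(controller) {
      // `req` is still tracked here, after the handler has returned and
      // while the response body is being streamed.
      const ua = req.headers.get("user-agent") ?? "unknown";
      controller.enqueue(new TextEncoder().encode(ua));
      controller.close();
    },
  });
  return new Response(body);
});
```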
---------
Co-authored-by: Matt Mastracci <matthew@mastracci.com>
|
|
Reuse existing allocations for HttpRecord and response
HeaderMap where possible.
At request end, used allocations are returned to the pool, and the pool
is sized to 1/8th the current number of in-flight requests.
For HTTP/1, hyper will reuse the response HeaderMap for the following
request on the connection.
Builds upon https://github.com/denoland/deno/pull/20770
---------
Co-authored-by: Matt Mastracci <matthew@mastracci.com>
|
|
Makes the JavaScript Request use a v8::External opaque pointer to
directly refer to the Rust HttpRecord.
The HttpRecord is now reference counted. To avoid leaks, the strong count
is checked at request completion.
Performance seems unchanged on the minimal benchmark: 118614 req/s on
this branch vs 118564 req/s on main, but variance between runs on my
laptop is pretty high.
---------
Co-authored-by: Matt Mastracci <matthew@mastracci.com>
|
|
This is the release commit being forwarded back to main for 1.38.1
Co-authored-by: Divy Srivastava <dj.srivastava23@gmail.com>
Co-authored-by: littledivy <littledivy@users.noreply.github.com>
|
|
We can move all promise ID knowledge to deno_core, allowing us to better
experiment with promise implementation in deno_core.
`{un,}refOpPromise(promise)` is equivalent to
`{un,}refOp(promise[promiseIdSymbol])`
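A hedged sketch of the equivalence, assuming these helpers live on deno_core's
`core` object and using a hypothetical op name purely for illustration:
```js
import { core } from "ext:core/mod.js";

// Hypothetical async op, purely for illustration.
const promise = core.opAsync("op_example_sleep");

// New form: operate on the promise itself...
core.refOpPromise(promise);
core.unrefOpPromise(promise);

// ...instead of reaching for the internal promise ID:
// core.refOp(promise[promiseIdSymbol]);
// core.unrefOp(promise[promiseIdSymbol]);
```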
|
|
not a Response class (#21099)
---------
Co-authored-by: Matt Mastracci <matthew@mastracci.com>
|
|
Co-authored-by: bartlomieju <bartlomieju@users.noreply.github.com>
Co-authored-by: Bartek Iwańczuk <biwanczuk@gmail.com>
|
|
This commit implements Symbol.dispose and Symbol.asyncDispose for
the relevant resources.
Closes #20839
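A brief usage sketch, assuming the TC39 explicit resource management syntax
(`await using`) is available:
```ts
{
  // `await using` invokes [Symbol.asyncDispose]() when the block exits,
  // which shuts the server down.
  await using server = Deno.serve({ port: 8000 }, () => new Response("ok"));
  const res = await fetch("http://localhost:8000/");
  console.log(await res.text()); // "ok"
} // server[Symbol.asyncDispose]() runs here
```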
---------
Signed-off-by: Bartek Iwańczuk <biwanczuk@gmail.com>
Co-authored-by: Bartek Iwańczuk <biwanczuk@gmail.com>
|
|
I'm not sure what the purpose of trying to be so clever with the args
was (maybe an optimization?), but it breaks variadic args, as pointed
out in #20054.
Signed-off-by: Matt Mastracci <matthew@mastracci.com>
Co-authored-by: Matt Mastracci <matthew@mastracci.com>
|
|
|
|
Signed-off-by: Matt Mastracci <matthew@mastracci.com>
Co-authored-by: Matt Mastracci <matthew@mastracci.com>
|
|
Co-authored-by: Bartek Iwańczuk <biwanczuk@gmail.com>
|
|
Towards https://github.com/denoland/deno/issues/20779.
|
|
Otherwise you cannot return `Deno.Server` from async functions.
Co-authored-by: Yoshiya Hinosawa <stibium121@gmail.com>
Co-authored-by: Bartek Iwańczuk <biwanczuk@gmail.com>
|
|
|
|
|
|
This is the release commit being forwarded back to main for 1.37.1
Co-authored-by: littledivy <littledivy@users.noreply.github.com>
|
|
Use the [scopeguard](https://docs.rs/scopeguard/) defer macro to run
cleanup code for `new_slab_future`.
This means it can be a single async function, avoiding the need to
create a struct and implement `PinnedDrop`.
Async cleanup in Rust is awkward because async functions may be
cancelled at any await point when their Future is dropped.
The scopeguard approach comes from the following articles:
* [How to think about `async`/`await` in
Rust](http://cliffle.com/blog/async-inversion/)
* [Async Cancellation
I](https://blog.yoshuawuyts.com/async-cancellation-1/) (Reddit
[discussion](https://www.reddit.com/r/rust/comments/qrhg39/blog_post_async_cancellation/))
|
|
(#20641)
Builds on top of #20622 to fix #10854
|
|
|
|
Co-authored-by: Matt Mastracci <matthew@mastracci.com>
|
|
Co-authored-by: David Sherret <dsherret@gmail.com>
|
|
This PR introduces an optimization to `set_response` to reduce the
overhead for responses with a payload size less than 64 bytes.
Performance gains are more noticeable when `is_request_compressible`
enters the slow path, i.e. `-H 'Accept-Encoding: unknown'`.
### Benchmarks
```js
Deno.serve({ port: 3000 }, () => new Response("hello"));
```
```
wrk -d 10s --latency -H 'Accept-Encoding: slow' http://127.0.0.1:3000
```
---
**main**
```
Running 10s test @ http://127.0.0.1:3000
2 threads and 10 connections
Thread Stats Avg Stdev Max +/- Stdev
Latency 44.72us 28.12us 3.10ms 97.95%
Req/Sec 112.73k 8.25k 123.66k 91.09%
2264092 requests in 10.10s, 308.77MB read
Requests/sec: 224187.08
Transfer/sec: 30.57MB
```
**this PR**
```
Running 10s test @ http://127.0.0.1:3000
2 threads and 10 connections
Thread Stats Avg Stdev Max +/- Stdev
Latency 42.91us 20.57us 2.04ms 97.36%
Req/Sec 116.61k 7.95k 204.81k 88.56%
2330970 requests in 10.10s, 317.89MB read
Requests/sec: 230806.32
Transfer/sec: 31.48MB
```
|
|
Fixes #20502 -- ensure that Hyper errors make it through to JS.
|
|
|
|
Bump deno_core, pulling in new rusty_v8. Requires some op2/deprecation
fixes.
|
|
This PR implements a graceful shutdown API for Deno.serve, allowing all
current connections to drain from the server before shutting down, while
preventing new connections from being started or new transactions on
existing connections from being created.
We split the cancellation handle into two parts: a listener handle, and
a connection handle. A graceful shutdown cancels the listener only,
while allowing the connections to drain. The connection handle aborts
all futures. If the listener handle is cancelled, we put the connections
into graceful shutdown mode, which disables keep-alive on http/1.1 and
uses http/2 mechanisms for http/2 connections.
In addition, we now guarantee that all connections are complete or
cancelled, and all resources are cleaned up when the server `finished`
promise resolves -- we use a Rust-side server refcount for this.
Performance impact: does not appear to affect basic serving performance
by more than 1% (~126k -> ~125k)
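A hedged usage sketch of the API described above:
```ts
const server = Deno.serve({ port: 8000 }, () => new Response("hello"));

// Stop the listener, but let in-flight connections drain instead of
// aborting them.
await server.shutdown();

// Resolves once every connection has completed or been cancelled and
// all resources have been cleaned up.
await server.finished;
```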
---------
Co-authored-by: Bartek Iwańczuk <biwanczuk@gmail.com>
|
|
|
|
Co-authored-by: denobot <33910674+denobot@users.noreply.github.com>
|
|
When a TCP connection is force-closed (e.g. a browser refresh), the
underlying future we pass to Hyper is dropped which may cause us to try
to drop the body resource while the OpState lock is still held.
Preconditions for this bug to trigger:
- The body resource must have been taken
- The response must return a resource (which requires us to take the
OpState lock)
- The TCP connection must have been dropped before this
Fixes #20315 and #20298
|
|
As the title.
---------
Co-authored-by: Matt Mastracci <matthew@mastracci.com>
|
|
Co-authored-by: Matt Mastracci <matthew@mastracci.com>
|
|
Co-authored-by: Bartek Iwańczuk <biwanczuk@gmail.com>
|
|
alive (#20206)
Deno.serve's fast streaming implementation was not keeping the request
body resource ID alive. We were taking the `Rc<Resource>` from the
resource table during the response, so a hairpin duplex response that
fed back the request body would work.
However, if any JS code attempted to read from the request body (which
requires the resource ID to be valid), the response would fail with a
difficult-to-diagnose "EOF" error.
This was affecting more complex duplex uses of `Deno.fetch` (though as
far as I can tell it was unreported).
Simple test:
```ts
const reader = request.body.getReader();
return new Response(
  new ReadableStream({
    async pull(controller) {
      const { done, value } = await reader.read();
      if (done) {
        controller.close();
      } else {
        controller.enqueue(value);
      }
    },
  }),
);
```
And then attempt to use the stream in duplex mode:
```ts
async function testDuplex(
  reader: ReadableStreamDefaultReader<Uint8Array>,
  writable: WritableStreamDefaultWriter<Uint8Array>,
) {
  await writable.write(new Uint8Array([1]));
  const chunk1 = await reader.read();
  assert(!chunk1.done);
  assertEquals(chunk1.value, new Uint8Array([1]));
  await writable.write(new Uint8Array([2]));
  const chunk2 = await reader.read();
  assert(!chunk2.done);
  assertEquals(chunk2.value, new Uint8Array([2]));
  await writable.close();
  const chunk3 = await reader.read();
  assert(chunk3.done);
}
```
In older versions of Deno, this would just lock up. I believe after
23ff0e722e3c4b0827940853c53c5ee2ede5ec9f, it started throwing a more
explicit error:
```
httpServerStreamDuplexJavascript => ./cli/tests/unit/serve_test.ts:1339:6
error: TypeError: request or response body error: error reading a body from connection: Connection reset by peer (os error 54)
at async Object.pull (ext:deno_web/06_streams.js:810:27)
```
|
|
Extracted from fast streams work.
This is a resource wrapper for `ReadableStream`, allowing us to treat
all `ReadableStream` instances as resources, and remove special paths in
both `fetch` and `serve`.
Performance with a ReadableStream response shows an ~18% improvement:
```
return new Response(new ReadableStream({
  start(controller) {
    controller.enqueue(new Uint8Array([104, 101, 108, 108, 111, 32, 119, 111, 114, 108, 100]));
    controller.close();
  },
}));
```
This patch:
```
12:36 $ third_party/prebuilt/mac/wrk http://localhost:8080
Running 10s test @ http://localhost:8080
2 threads and 10 connections
Thread Stats Avg Stdev Max +/- Stdev
Latency 99.96us 100.03us 6.65ms 98.84%
Req/Sec 47.73k 2.43k 51.02k 89.11%
959308 requests in 10.10s, 117.10MB read
Requests/sec: 94978.71
Transfer/sec: 11.59MB
```
main:
```
Running 10s test @ http://localhost:8080
2 threads and 10 connections
Thread Stats Avg Stdev Max +/- Stdev
Latency 163.03us 685.51us 19.73ms 99.27%
Req/Sec 39.50k 3.98k 66.11k 95.52%
789582 requests in 10.10s, 82.83MB read
Requests/sec: 78182.65
Transfer/sec: 8.20MB
```
|
|
|
|
This PR improves the performance of `Deno.serve` when the `info`
argument is used, by creating a `ServeHandlerInfo` class instead of
creating an object literal with a getter on every request.
```js
Deno.serve((_req, info) => new Response(info.remoteAddr.transport));
```
### Benchmarks
```
wrk -d 10s --latency http://127.0.0.1:4500
Running 10s test @ http://127.0.0.1:4500
2 threads and 10 connections
Thread Stats Avg Stdev Max +/- Stdev
Latency 42.34us 16.30us 1.66ms 95.88%
Req/Sec 118.17k 2.95k 127.38k 76.73%
Latency Distribution
50% 38.00us
75% 41.00us
90% 56.00us
99% 83.00us
2375298 requests in 10.10s, 319.40MB read
Requests/sec: 235177.04
Transfer/sec: 31.62MB
```
**main**
```
wrk -d 10s --latency http://127.0.0.1:4500
Running 10s test @ http://127.0.0.1:4500
2 threads and 10 connections
Thread Stats Avg Stdev Max +/- Stdev
Latency 78.86us 211.06us 3.58ms 96.52%
Req/Sec 105.90k 4.35k 117.41k 78.22%
Latency Distribution
50% 41.00us
75% 53.00us
90% 62.00us
99% 1.18ms
2127534 requests in 10.10s, 286.09MB read
Requests/sec: 210647.49
Transfer/sec: 28.33MB
```
```
cpu: 13th Gen Intel(R) Core(TM) i9-13900H
runtime: deno 1.36.0 (x86_64-unknown-linux-gnu)
benchmark time (avg) iter/s (min … max) p75 p99 p995
-------------------------------------------------------------------------- -----------------------------
new ServeHandlerInfo 3.43 ns/iter 291,508,889.3 (3.07 ns … 12.21 ns) 3.42 ns 3.84 ns 3.87 ns
{} with getter 133.84 ns/iter 7,471,528.9 (92.9 ns … 458.95 ns) 132.45 ns 364.96 ns 429.43 ns
```
----
### Drawbacks:
`.remoteAddr` is now not enumerable
```
ServeHandlerInfo {}
```
vs
```
{ remoteAddr: [Getter] }
```
This will break any code trying to iterate through `info` keys (though
I doubt anyone is doing that).
```js
Deno.serve((req, info) => {
  console.log(Object.keys(info).length === 0); // true
  return new Response("yes");
});
```
|