Age | Commit message | Author |
|
When the response has been successfully sent, we abort the
`Request.signal` property to indicate that all resources associated with
this transaction may be torn down.
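A hedged usage sketch of what this enables: a handler can subscribe to `req.signal` and release per-request resources when it fires.
```ts
Deno.serve((req) => {
  req.signal.addEventListener("abort", () => {
    // Fired once the response has been fully sent (or the request aborted);
    // tear down anything tied to this request here.
  });
  return new Response("ok");
});
```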
|
|
Adds an `addr` field to `HttpServer` to simplify the pattern
`Deno.serve({ onListen: ({ port }) => listenPort = port })`. This becomes:
`const server = Deno.serve({}); port = server.addr.port`.
Changes:
- Refactors `serve` overloads to split TLS out (in preparation for
landing a place for the TLS SNI information)
- Adds an `addr` field to `HttpServer` that matches the `addr` field of
the corresponding `Deno.Listener`s.
|
|
Pass the certificates and key files as CPPGC objects.
Towards #23233
|
|
First pass of migrating away from `Deno.core.ensureFastOps()`.
A few "tricky" ones have been left for a follow-up.
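A rough sketch of the direction of this migration, assuming the replacement is direct imports from `ext:core/ops` (the op name below is hypothetical):
```ts
// Before: ops were bound lazily through ensureFastOps().
//   const { op_example } = Deno.core.ensureFastOps();

// After: internal code imports the op directly (op_example is a hypothetical name).
import { op_example } from "ext:core/ops";
```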
|
|
Co-authored-by: Matt Mastracci <matthew@mastracci.com>
|
|
Co-authored-by: Asher Gomez <ashersaupingomez@gmail.com>
|
|
Closes https://github.com/denoland/deno/issues/21828.
This API is a huge footgun. And given that "Deno.serveHttp" is a
deprecated API whose use is discouraged (use "Deno.serve()"
instead), it makes no sense to keep this API around.
This is a step towards fully migrating to Hyper 1.
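A hedged migration sketch of the recommendation above: the deprecated `Deno.serveHttp` accept loop next to its `Deno.serve()` equivalent.
```ts
// Deprecated pattern: accept connections and serve each one with Deno.serveHttp.
async function oldServer() {
  const listener = Deno.listen({ port: 8000 });
  for await (const conn of listener) {
    (async () => {
      const httpConn = Deno.serveHttp(conn);
      for await (const requestEvent of httpConn) {
        await requestEvent.respondWith(new Response("hello"));
      }
    })();
  }
}

// Preferred replacement: Deno.serve handles the accept loop internally.
function newServer() {
  return Deno.serve({ port: 8000 }, (_req) => new Response("hello"));
}
```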
|
|
|
|
Deno v1.39 introduces `vm.runInNewContext`. This may cause problems when
using `Object.prototype.isPrototypeOf` to check built-in types.
```js
import vm from "node:vm";
const err = new Error();
const crossErr = vm.runInNewContext(`new Error()`);
console.assert( !(crossErr instanceof Error) );
console.assert( Object.getPrototypeOf(err) !== Object.getPrototypeOf(crossErr) );
```
This PR changes the checks to use internal slots, which fixes these cases.
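Illustrative only (the PR itself relies on Deno's internal helpers): a brand check through an internal slot keeps working across realms, unlike a prototype-chain check.
```ts
import vm from "node:vm";

// A prototype-chain check fails for cross-realm values...
const crossDate = vm.runInNewContext(`new Date(0)`);
console.assert(!(crossDate instanceof Date));

// ...while a brand check via an internal slot ([[DateValue]] here) still works.
function isDate(value: unknown): boolean {
  try {
    Date.prototype.getTime.call(value as Date);
    return true;
  } catch {
    return false;
  }
}
console.assert(isDate(crossDate));
```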
---
current:
```
> import vm from "node:vm";
undefined
> vm.runInNewContext(`new Error("message")`)
Error {}
> vm.runInNewContext(`new Date("2018-12-10T02:26:59.002Z")`)
Date {}
```
this PR:
```
> import vm from "node:vm";
undefined
> vm.runInNewContext(`new Error("message")`)
Error: message
at <anonymous>:1:1
> vm.runInNewContext(`new Date("2018-12-10T02:26:59.002Z")`)
2018-12-10T02:26:59.002Z
```
---------
Co-authored-by: Bartek Iwańczuk <biwanczuk@gmail.com>
|
|
|
|
This commit refactors how we access "core", "internals" and
"primordials" objects coming from `deno_core`, in our internal JavaScript code.
Instead of capturing them from "globalThis.__bootstrap" namespace, we
import them from recently added "ext:core/mod.js" file.
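Roughly, the change in each internal module looks like this sketch (the "before" line is shown commented out):
```ts
// Before: captured off the global bootstrap namespace.
//   const { core, internals, primordials } = globalThis.__bootstrap;

// After: imported from the dedicated module.
import { core, internals, primordials } from "ext:core/mod.js";
```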
|
|
completion (#20822)
Use HttpRecord as response body so requests can be tracked all the way
to response body completion.
This allows Request properties to be accessed while the response body is
streaming.
Graceful shutdown now awaits a future instead of async spinning waiting
for requests to finish.
On the minimal benchmark this refactor improves performance an
additional 2% over pooling alone for a net 3% increase over the previous
deno main branch.
Builds upon https://github.com/denoland/deno/pull/20809 and
https://github.com/denoland/deno/pull/20770.
---------
Co-authored-by: Matt Mastracci <matthew@mastracci.com>
|
|
Makes the JavaScript Request use a v8::External opaque pointer to
directly refer to the Rust HttpRecord.
The HttpRecord is now reference counted. To avoid leaks the strong count
is checked at request completion.
Performance seems unchanged on the minimal benchmark: 118614 req/s on this
branch vs 118564 req/s on main, though variance between runs on my laptop
is pretty high.
---------
Co-authored-by: Matt Mastracci <matthew@mastracci.com>
|
|
We can move all promise ID knowledge to deno_core, allowing us to better
experiment with promise implementation in deno_core.
`{un,}refOpPromise(promise)` is equivalent to
`{un,}refOp(promise[promiseIdSymbol])`
|
|
not a Response class (#21099)
---------
Co-authored-by: Matt Mastracci <matthew@mastracci.com>
|
|
This commit implements Symbol.dispose and Symbol.asyncDispose for
the relevant resources.
Closes #20839
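A hedged usage sketch, assuming file handles are among the resources covered: with `Symbol.dispose` implemented, explicit resource management (`using`) can close them automatically at the end of the scope.
```ts
{
  using file = Deno.openSync("log.txt", { write: true, create: true });
  file.writeSync(new TextEncoder().encode("hello\n"));
} // file[Symbol.dispose]() runs here and closes the handle
```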
---------
Signed-off-by: Bartek Iwańczuk <biwanczuk@gmail.com>
Co-authored-by: Bartek Iwańczuk <biwanczuk@gmail.com>
|
|
I'm not sure what the purpose of being so clever with the args was
(maybe an optimization?), but it breaks variadic args as
pointed out in #20054.
Signed-off-by: Matt Mastracci <matthew@mastracci.com>
Co-authored-by: Matt Mastracci <matthew@mastracci.com>
|
|
Otherwise you cannot return `Deno.Server` from async functions.
Co-authored-by: Yoshiya Hinosawa <stibium121@gmail.com>
Co-authored-by: Bartek Iwańczuk <biwanczuk@gmail.com>
|
|
|
|
This PR implements a graceful shutdown API for Deno.serve, allowing all
current connections to drain from the server before shutting down, while
preventing new connections from being started or new transactions on
existing connections from being created.
We split the cancellation handle into two parts: a listener handle, and
a connection handle. A graceful shutdown cancels the listener only,
while allowing the connections to drain. The connection handle aborts
all futures. If the listener handle is cancelled, we put the connections
into graceful shutdown mode, which disables keep-alive on http/1.1 and
uses http/2 mechanisms for http/2 connections.
In addition, we now guarantee that all connections are complete or
cancelled, and all resources are cleaned up when the server `finished`
promise resolves -- we use a Rust-side server refcount for this.
Performance impact: this does not appear to affect basic serving performance
by more than 1% (~126k -> ~125k).
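A usage sketch under the assumption that the graceful shutdown is exposed as a `shutdown()` method on the server handle:
```ts
const server = Deno.serve((_req) => new Response("ok"));

// Stop the listener; existing connections are allowed to drain.
await server.shutdown();

// Per the guarantee above, once `finished` resolves every connection is
// complete or cancelled and all resources are cleaned up.
await server.finished;
```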
---------
Co-authored-by: Bartek Iwańczuk <biwanczuk@gmail.com>
|
|
|
|
alive (#20206)
Deno.serve's fast streaming implementation was not keeping the request
body resource ID alive. We were taking the `Rc<Resource>` from the
resource table during the response, so a hairpin duplex response that
fed back the request body would work.
However, if any JS code attempted to read from the request body (which
requires the resource ID to be valid), the response would fail with a
difficult-to-diagnose "EOF" error.
This was affecting more complex duplex uses of `Deno.fetch` (though as
far as I can tell it was unreported).
Simple test:
```ts
const reader = request.body.getReader();
return new Response(
new ReadableStream({
async pull(controller) {
const { done, value } = await reader.read();
if (done) {
controller.close();
} else {
controller.enqueue(value);
}
},
  }),
);
```
And then attempt to use the stream in duplex mode:
```ts
async function testDuplex(
reader: ReadableStreamDefaultReader<Uint8Array>,
writable: WritableStreamDefaultWriter<Uint8Array>,
) {
await writable.write(new Uint8Array([1]));
const chunk1 = await reader.read();
assert(!chunk1.done);
assertEquals(chunk1.value, new Uint8Array([1]));
await writable.write(new Uint8Array([2]));
const chunk2 = await reader.read();
assert(!chunk2.done);
assertEquals(chunk2.value, new Uint8Array([2]));
await writable.close();
const chunk3 = await reader.read();
assert(chunk3.done);
}
```
In older versions of Deno, this would just lock up. I believe after
23ff0e722e3c4b0827940853c53c5ee2ede5ec9f, it started throwing a more
explicit error:
```
httpServerStreamDuplexJavascript => ./cli/tests/unit/serve_test.ts:1339:6
error: TypeError: request or response body error: error reading a body from connection: Connection reset by peer (os error 54)
at async Object.pull (ext:deno_web/06_streams.js:810:27)
```
|
|
Extracted from fast streams work.
This is a resource wrapper for `ReadableStream`, allowing us to treat
all `ReadableStream` instances as resources, and remove special paths in
both `fetch` and `serve`.
A ReadableStream response yields a ~18% performance improvement:
```
return new Response(new ReadableStream({
start(controller) {
controller.enqueue(new Uint8Array([104, 101, 108, 108, 111, 32, 119, 111, 114, 108, 100]));
controller.close();
}
}));
```
This patch:
```
12:36 $ third_party/prebuilt/mac/wrk http://localhost:8080
Running 10s test @ http://localhost:8080
2 threads and 10 connections
Thread Stats Avg Stdev Max +/- Stdev
Latency 99.96us 100.03us 6.65ms 98.84%
Req/Sec 47.73k 2.43k 51.02k 89.11%
959308 requests in 10.10s, 117.10MB read
Requests/sec: 94978.71
Transfer/sec: 11.59MB
```
main:
```
Running 10s test @ http://localhost:8080
2 threads and 10 connections
Thread Stats Avg Stdev Max +/- Stdev
Latency 163.03us 685.51us 19.73ms 99.27%
Req/Sec 39.50k 3.98k 66.11k 95.52%
789582 requests in 10.10s, 82.83MB read
Requests/sec: 78182.65
Transfer/sec: 8.20MB
```
|
|
This PR improves the performance of `Deno.serve` when the `info`
argument is used, by creating a `ServeHandlerInfo` class instead of creating an
object literal with a getter on every request.
```js
Deno.serve((_req, info) => new Response(info.remoteAddr.transport));
```
### Benchmarks
```
wrk -d 10s --latency http://127.0.0.1:4500
Running 10s test @ http://127.0.0.1:4500
2 threads and 10 connections
Thread Stats Avg Stdev Max +/- Stdev
Latency 42.34us 16.30us 1.66ms 95.88%
Req/Sec 118.17k 2.95k 127.38k 76.73%
Latency Distribution
50% 38.00us
75% 41.00us
90% 56.00us
99% 83.00us
2375298 requests in 10.10s, 319.40MB read
Requests/sec: 235177.04
Transfer/sec: 31.62MB
```
**main**
```
wrk -d 10s --latency http://127.0.0.1:4500
Running 10s test @ http://127.0.0.1:4500
2 threads and 10 connections
Thread Stats Avg Stdev Max +/- Stdev
Latency 78.86us 211.06us 3.58ms 96.52%
Req/Sec 105.90k 4.35k 117.41k 78.22%
Latency Distribution
50% 41.00us
75% 53.00us
90% 62.00us
99% 1.18ms
2127534 requests in 10.10s, 286.09MB read
Requests/sec: 210647.49
Transfer/sec: 28.33MB
```
```
cpu: 13th Gen Intel(R) Core(TM) i9-13900H
runtime: deno 1.36.0 (x86_64-unknown-linux-gnu)
benchmark time (avg) iter/s (min … max) p75 p99 p995
-------------------------------------------------------------------------- -----------------------------
new ServeHandlerInfo 3.43 ns/iter 291,508,889.3 (3.07 ns … 12.21 ns) 3.42 ns 3.84 ns 3.87 ns
{} with getter 133.84 ns/iter 7,471,528.9 (92.9 ns … 458.95 ns) 132.45 ns 364.96 ns 429.43 ns
```
----
### Drawbacks:
`.remoteAddr` is now not enumerable
```
ServeHandlerInfo {}
```
vs
```
{ remoteAddr: [Getter] }
```
It'll break any code trying to iterate through `info` keys (I doubt
anyone is doing that, though):
```js
Deno.serve((req, info) => {
console.log(Object.keys(info).length === 0) // true;
return new Response("yes");
});
```
|
|
Ref #19915
---------
Co-authored-by: Matt Mastracci <matthew@mastracci.com>
|
|
This PR fixes #19818. The problem was that the new InnerRequest class did not initialize the fields `urlList` and `urlListProcessed` that are used during a request clone. The solution aims to be straightforward: simply initialize the missing properties during the clone process. I also implemented a "cache" for the `url` getter of the new InnerRequest, avoiding the cost of calling `op_http_get_request_method_and_url`.
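A hypothetical reproduction sketch of the failure mode (the exact trigger in #19818 may differ):
```ts
Deno.serve((req) => {
  // Cloning relied on urlList/urlListProcessed, which the new InnerRequest
  // did not initialize before this fix.
  const copy = req.clone();
  return new Response(copy.url);
});
```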
|
|
Throws an error when user code attempts to use unsupported options (may
help reduce confusion when migrating to Deno.serve)
|
|
This commit stabilizes "Deno.serve()", which becomes the
preferred way to create HTTP servers in Deno.
Documentation was adjusted for each overload of the "Deno.serve()"
API, and the API now always binds to "127.0.0.1:8000" by default.
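For reference, the minimal stable form (per the default binding described above):
```ts
// Serves on 127.0.0.1:8000 unless a hostname/port is supplied.
Deno.serve((_req) => new Response("Hello, world!"));
```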
|
|
promise rejections (#19691)
Fixes #19687 by adding a rejection handler to the write inside the
setTimeout. There is a small window where the promise is actually not
awaited and may reject without a handler.
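A hedged sketch of the pattern being fixed; the writer and chunk names here are illustrative, not the actual code:
```ts
function scheduleWrite(
  writer: WritableStreamDefaultWriter<Uint8Array>,
  chunk: Uint8Array,
  delayMs: number,
) {
  setTimeout(() => {
    // Attach a handler so a rejection in the unawaited window is not unhandled.
    writer.write(chunk).catch(() => {
      // The peer may already be gone; dropping the error here is intentional.
    });
  }, delayMs);
}
```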
|
|
Closes #19470.
|
|
(#19485)
We have a bunch of these to clean up after we changed the API.
|
|
|
|
Further improves preact SSR and express benches by about 2k RPS.
Ref https://github.com/denoland/deno/issues/19451
|
|
|
|
|
|
This commit adds basic support for the "node:http2" module. Not
all APIs have been implemented yet, but this change already
allows the module to be used for some basic functionality.
The "grpc" package is still not working, but it's a good stepping
stone.
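A sketch of the kind of basic client usage this change targets (standard `node:http2` API; whether every event is covered by this first pass isn't guaranteed):
```ts
import http2 from "node:http2";

const client = http2.connect("https://example.com");
const req = client.request({ ":path": "/" });

req.setEncoding("utf8");
let body = "";
req.on("response", (headers) => console.log("status:", headers[":status"]));
req.on("data", (chunk) => (body += chunk));
req.on("end", () => {
  console.log("received", body.length, "characters");
  client.close();
});
req.end();
```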
---------
Co-authored-by: Bartek Iwańczuk <biwanczuk@gmail.com>
|
|
Startup benchmark shows no changes (within 1ms, identical system/user
times).
|
|
|
|
This PR attempts to resolve the first item on the list from
https://github.com/denoland/deno/issues/19330, which is about using a
flat list of interleaved key/value pairs instead of a nested array of
tuples, as illustrated below.
I can tackle some more if you can provide a quick example of using raw
v8 arrays, cc @mmastrac
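Illustrative only (not the actual op signature): the two layouts being compared, using headers as the example payload.
```ts
// Nested array of tuples: one inner array allocated per pair.
const nested: [string, string][] = [
  ["content-type", "text/plain"],
  ["x-foo", "bar"],
];

// Flat list of interleaved key/value pairs: a single array, read in steps of two.
const flat: string[] = nested.flat();
console.log(flat); // ["content-type", "text/plain", "x-foo", "bar"]

for (let i = 0; i < flat.length; i += 2) {
  console.log(`${flat[i]}: ${flat[i + 1]}`);
}
```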
|
|
For the first implementation of node:http2, we'll use the internal
version of `Deno.serve` which allows us to listen on a raw TCP
connection rather than a listener.
This is mostly a refactoring, and hooking up of `op_http_serve_on` that
was never previously exposed (but designed for this purpose).
|
|
Under heavy load, we often have requests queued up that don't need an
async call to retrieve. We can use a fast path sync op to drain this set
of ready requests, and then fall back to the async op once we run out of
work.
This is a 0.5-1% bump in req/s on an M2 Mac. About 90% of the handlers go
through this sync phase (based on a simple instrumentation that is not
included in this PR) and skip the async machinery entirely.
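A hedged sketch of the loop shape described above; the function names are illustrative stand-ins for the real sync and async ops.
```ts
async function serveLoop(
  tryNextRequestSync: () => Request | null, // fast, non-blocking op
  nextRequestAsync: () => Promise<Request | null>, // parks until work arrives
  handle: (req: Request) => void,
) {
  while (true) {
    // Fast path: drain everything already queued without the async machinery.
    let req = tryNextRequestSync();
    while (req !== null) {
      handle(req);
      req = tryNextRequestSync();
    }
    // Slow path: nothing ready, fall back to the async op.
    req = await nextRequestAsync();
    if (req === null) break; // server is shutting down
    handle(req);
  }
}
```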
|
|
Add `ref` and `unref` to the return value of `Deno.serve`. Unblocks #3326.
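Usage sketch of the new methods:
```ts
const server = Deno.serve((_req) => new Response("ok"));

server.unref(); // the server alone no longer keeps the process alive
// ...
server.ref(); // opt back in: keep the event loop running while serving
```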
|
|
Necessary for #3326.
Requested in #10214 as well.
|
|
This commit changes the unstable `Deno.serve()` API to return
a `Deno.Server` object that has a `finished` field.
This change is done in preparation to be able to ref/unref the HTTP
server.
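Usage sketch, assuming `finished` is a promise that resolves when the server stops:
```ts
const server = Deno.serve((_req) => new Response("ok"));
server.finished.then(() => console.log("server has shut down"));
```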
|
|
Fixes for various `Attempted to access invalid request` bugs (#19058,
#15427, #17213).
We did not wait for both a drop event and a completion event before
removing items from the slab table. This ensures that we do so.
In addition, the slab methods are refactored out into `slab.rs` for
maintainability.
|
|
Merges `op_http_upgrade_next` and `op_ws_server_create`, significantly
simplifying websocket construction in ext/http (next), and removing one
JS -> Rust call. Also WS server now doesn't bypass
`HttpPropertyExtractor`.
|
|
Co-authored-by: Bartek Iwańczuk <biwanczuk@gmail.com>
|
|
Migrates some of the existing async ops to the generated wrappers introduced in
https://github.com/denoland/deno/pull/18887. As a result, "core.opAsync2"
was removed.
I will follow up with more PRs that migrate all the async ops to
generated wrappers.
|
|
I would like to get this change into Deno before merging
https://github.com/denoland/deno_lint/pull/1152
|
|
Performance:
```
async_ops.js: 760k -> 1030k (!)
async_ops_deferred.js: 730k -> 770k
Deno.serve bench: 118k -> 124k
WS test w/ third_party/prebuilt/mac/load_test 100 localhost 8000 0 0: unchanged
Startup time: approx 0.5ms slower (13.7 -> 14.2ms)
```
|