path: root/cli/cache/cache_db.rs
Age | Commit message | Author
2024-11-01 | fix: improved support for cjs and cts modules (#26558) | David Sherret
* cts support
* better cjs/cts type checking
* deno compile cjs/cts support
* more efficient cjs detection (going towards stabilization)
* determination of whether .js, .ts, .jsx, or .tsx is cjs or esm is only done after loading
* support for `import x = require(...);`

Co-authored-by: Bartek Iwańczuk <biwanczuk@gmail.com>
2024-08-02 | docs: fix typos (#24820) | Andreas Deininger
This PR fixes various typos I spotted in the project.
2024-05-29 | fix: bump cache sqlite dbs to v2 for WAL journal mode change (#24030) | David Sherret
In https://github.com/denoland/deno/pull/23955 we changed the sqlite db journal mode to WAL. This causes issues when someone runs an old version of Deno (which uses TRUNCATE) alongside a new version, because the two journal modes fight against each other.
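(Aside: the bump presumably works by embedding the version in the cache file name, so a TRUNCATE-mode Deno and a WAL-mode Deno never open the same database. A hypothetical sketch of that scheme; the naming is a guess based on the `*_v1*` pattern visible in the benchmark in the next entry:)

```rust
use std::path::{Path, PathBuf};

// Hypothetical: versioned file names keep old and new Deno versions from
// sharing a database whose journal modes would fight with each other.
const CACHE_DB_VERSION: u32 = 2;

fn cache_db_path(deno_dir: &Path, name: &str) -> PathBuf {
    deno_dir.join(format!("{name}_v{CACHE_DB_VERSION}"))
}
```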
2024-05-23 | perf(startup): use WAL journal for sqlite databases in DENO_DIR (#23955) | Bert Belder
While investigating poor cold start performance on my GCP VM (32 cores, 130GB SSD), I found that writing to the various sqlite databases in DENO_DIR was quite slow. The slowness seems to primarily be caused by excessive latency from a number of `fsync()` calls.

The performance difference is best demonstrated by deleting the sqlite databases from DENO_DIR while leaving the downloaded sources in place. The benchmark (see notes below):

```
piscisaureus@bert-us:~/erofs/source$ export DENO_DIR=./.deno
piscisaureus@bert-us:~/erofs/source$ hyperfine --warmup 3 \
  --prepare "rm -rf .deno/*_v1*" \
  "deno run -A --cached-only demo.ts" \
  "eatmydata deno run -A --cached-only demo.ts" \
  "~/deno/target/release/deno run -A --cached-only demo.ts"
Benchmark 1: deno run -A --cached-only demo.ts
  Time (mean ± σ):      1.174 s ±  0.037 s    [User: 0.153 s, System: 0.184 s]
  Range (min … max):    1.104 s …  1.212 s    10 runs

Benchmark 2: eatmydata deno run -A --cached-only demo.ts
  Time (mean ± σ):     265.5 ms ±   3.6 ms    [User: 138.5 ms, System: 135.1 ms]
  Range (min … max):   260.6 ms … 271.2 ms    11 runs

Benchmark 3: ~/deno/target/release/deno run -A --cached-only demo.ts
  Time (mean ± σ):     226.2 ms ±   9.2 ms    [User: 136.7 ms, System: 93.3 ms]
  Range (min … max):   218.8 ms … 247.1 ms    13 runs

Summary
  ~/deno/target/release/deno run -A --cached-only demo.ts ran
    1.17 ± 0.05 times faster than eatmydata deno run -A --cached-only demo.ts
    5.19 ± 0.27 times faster than deno run -A --cached-only demo.ts
```

Notes:

* Benchmark 1: unmodified Deno 1.43.6
* Benchmark 2: unmodified Deno 1.43.6 wrapped with `eatmydata` (which is a tool to neuter `fsync()` calls)
* Benchmark 3: this PR applied on top of Deno 1.43.6

The script that got benchmarked:

```typescript
// demo.ts
import * as express from "npm:express@4.16.3";
import * as postgres from "https://deno.land/x/postgres/mod.ts";
let _dummy = [express, postgres]; // Force use of imports.
console.log("hello world");
```
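(Aside: for readers wanting the mechanics, switching a `rusqlite` connection to WAL looks roughly like the sketch below. The `synchronous=NORMAL` pragma is an assumption on my part about how the fsync savings are realized; the exact pragmas Deno sets may differ.)

```rust
use rusqlite::Connection;

fn enable_wal(conn: &Connection) -> rusqlite::Result<()> {
    // `PRAGMA journal_mode=WAL` returns the resulting mode as a row,
    // so query it rather than execute it.
    let mode: String =
        conn.query_row("PRAGMA journal_mode=WAL", [], |row| row.get(0))?;
    assert_eq!(mode.to_lowercase(), "wal");
    // Assumption: with WAL, `synchronous=NORMAL` is durable enough for a
    // cache and avoids an fsync on every transaction commit.
    conn.execute_batch("PRAGMA synchronous=NORMAL;")?;
    Ok(())
}
```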
2024-04-01 | fix: prevent cache db errors when deno_dir not exists (#23168) | David Sherret
Closes #20202
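(Aside: a minimal sketch of the fix shape this implies, assuming `rusqlite` and that the database file lives under DENO_DIR; the helper name is made up:)

```rust
use rusqlite::Connection;

fn open_with_parent(path: &std::path::Path) -> rusqlite::Result<Connection> {
    // SQLite cannot create intermediate directories itself, so make sure
    // the DENO_DIR subtree exists before opening the database file.
    if let Some(parent) = path.parent() {
        let _ = std::fs::create_dir_all(parent);
    }
    Connection::open(path)
}
```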
2024-01-01 | chore: update copyright to 2024 (#21753) | David Sherret
2023-08-25 | chore(cli): remove atty crate (#20275) | Matt Mastracci
Removes a crate with an outstanding vulnerability.
2023-08-23 | fix(ext/web): add stream tests to detect v8slice split bug (#20253) | Matt Mastracci
Co-authored-by: Bartek Iwańczuk <biwanczuk@gmail.com>
2023-05-30 | fix: do not show cache initialization errors if stderr is piped (#18920) | David Sherret
Closes #18918
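(Aside: a minimal sketch of that kind of check using the standard library's `std::io::IsTerminal`; the actual implementation and names in Deno may differ:)

```rust
use std::io::IsTerminal;

fn maybe_report_cache_error(err: impl std::fmt::Display) {
    // Only surface the warning when stderr is attached to a terminal;
    // when stderr is piped into another process, stay quiet.
    if std::io::stderr().is_terminal() {
        eprintln!("Failed to initialize cache database: {err}");
    }
}
```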
2023-05-25 | fix(compile): handle when DENO_DIR is readonly (#19257) | David Sherret
Closes #19253
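(Aside: one plausible shape for this fix, sketched with `rusqlite`; not necessarily how the PR implements it:)

```rust
use rusqlite::Connection;

fn open_cache_db(path: &std::path::Path) -> rusqlite::Result<Connection> {
    // Opening read-write fails if DENO_DIR sits on a read-only filesystem;
    // fall back to a process-lifetime in-memory cache instead of erroring.
    Connection::open(path).or_else(|_| Connection::open_in_memory())
}
```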
2023-05-14 | refactor(core): bake single-thread assumptions into spawn/spawn_blocking (#19056) | Matt Mastracci

Partially supersedes #19016. This migrates `spawn` and `spawn_blocking` to `deno_core`, and removes the requirement for `spawn` tasks to be `Send` given our single-threaded executor. While we technically don't need to do anything with `spawn_blocking`, this allows us to have a single `JoinHandle` type that works for both cases, and allows us to more easily experiment with alternative `spawn_blocking` implementations that do not require tokio (i.e. rayon).

Async ops (+~35%):

Before:

```
time 1310 ms rate 763358
time 1267 ms rate 789265
time 1259 ms rate 794281
time 1266 ms rate 789889
```

After:

```
time 956 ms rate 1046025
time 954 ms rate 1048218
time 924 ms rate 1082251
time 920 ms rate 1086956
```

HTTP serve (+~4.4%):

Before:

```
Running 10s test @ http://localhost:4500
  2 threads and 10 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    68.78us   19.77us   1.43ms   86.84%
    Req/Sec    68.78k     5.00k    73.84k    91.58%
  1381833 requests in 10.10s, 167.36MB read
Requests/sec: 136823.29
Transfer/sec:     16.57MB
```

After:

```
Running 10s test @ http://localhost:4500
  2 threads and 10 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    63.12us   17.43us   1.11ms   85.13%
    Req/Sec    71.82k     3.71k    77.02k    79.21%
  1443195 requests in 10.10s, 174.79MB read
Requests/sec: 142921.99
Transfer/sec:     17.31MB
```

Suggested-By: alice@ryhl.io
Co-authored-by: Bartek Iwańczuk <biwanczuk@gmail.com>
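(Aside: to see why dropping the `Send` bound matters, here is a minimal sketch using plain tokio, not the actual `deno_core` API, of running a non-`Send` future on a single-threaded executor:)

```rust
use std::rc::Rc;

fn main() {
    // A current-thread runtime mirrors the single-threaded executor assumption.
    let runtime = tokio::runtime::Builder::new_current_thread()
        .enable_all()
        .build()
        .unwrap();
    let local = tokio::task::LocalSet::new();
    local.block_on(&runtime, async {
        // Rc is !Send, so this future could not be passed to tokio::spawn,
        // but spawn_local accepts it on a single-threaded executor.
        let value = Rc::new(42);
        let handle = tokio::task::spawn_local(async move { *value + 1 });
        assert_eq!(handle.await.unwrap(), 43);
    });
}
```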
2023-03-28 | fix(core): restore cache journal mode to TRUNCATE and tweak tokio test in CacheDB (#18469) | Matt Mastracci

Fast-follow on #18401 -- the reason that some tests were panicking in the `CacheDB` `impl Drop` was that the cache itself was being dropped during a panic, and the runtime may or may not still exist at that point. We can reduce the actual tokio runtime testing to where it's needed. In addition, we return the journal mode to `TRUNCATE` to avoid the risk of data corruption.
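(Aside: a hedged sketch of the kind of drop guard this describes; the type and the checks are illustrative, not the actual `CacheDB` code:)

```rust
// Illustrative only; not the actual deno_core/CacheDB implementation.
struct CacheDb {
    // connection handle, etc., omitted
}

impl Drop for CacheDb {
    fn drop(&mut self) {
        // When a panic is unwinding the stack, the tokio runtime that owns
        // this cache may already be gone, so skip any async cleanup.
        if std::thread::panicking() {
            return;
        }
        // Only schedule cleanup if a runtime is still reachable.
        if let Ok(handle) = tokio::runtime::Handle::try_current() {
            handle.spawn(async {
                // flush/close work would go here
            });
        }
    }
}
```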
2023-03-27 | feat(core): initialize SQLite off-main-thread (#18401) | Matt Mastracci
This gets SQLite off the flamegraph and reduces initialization time by somewhere between 0.2ms and 0.5ms. In addition, I took the opportunity to move all the cache management code to a single place and reduce duplication. While the PR has a net gain of lines, much of that is just being a bit more deliberate with how we're recovering from errors.

The existing caches had various policies for dealing with cache corruption, so I've unified them and tried to isolate the decisions we make for recovery in a single place (see `open_connection` in `CacheDB`). The policy I chose was:

1. Retry twice to open on-disk caches.
2. If that fails, try to delete the file and recreate it on-disk.
3. If we fail to delete the file or re-create a new cache, use a fallback strategy that can be chosen per-cache: InMemory (temporary cache for the process run), BlackHole (ignore writes, return empty reads), or Error (fail on every operation).

The caches all use the same general code now, and share the cache failure recovery policy. In addition, it cleans up a TODO in the `NodeAnalysisCache`.
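(Aside: the described three-step policy maps naturally onto a small retry loop. A sketch in Rust with `rusqlite`; names like `CacheFailure` are simplified guesses at the real code, and `Ok(None)` stands in for the BlackHole case:)

```rust
use std::path::{Path, PathBuf};

use rusqlite::Connection;

/// Per-cache fallback choices, mirroring the policy described above.
/// (Names are illustrative; the real Deno types may differ.)
enum CacheFailure {
    InMemory,  // temporary cache for the process run
    BlackHole, // ignore writes, return empty reads
    Error,     // fail on every operation
}

/// `Ok(None)` represents the BlackHole case: callers treat it as
/// "ignore writes, return empty reads".
fn open_connection(
    path: &Path,
    on_failure: CacheFailure,
) -> rusqlite::Result<Option<Connection>> {
    // 1. Retry twice to open the on-disk cache.
    for _ in 0..2 {
        if let Ok(conn) = Connection::open(path) {
            return Ok(Some(conn));
        }
    }
    // 2. If that fails, try to delete the file and recreate it on disk.
    if std::fs::remove_file(path).is_ok() {
        if let Ok(conn) = Connection::open(path) {
            return Ok(Some(conn));
        }
    }
    // 3. Otherwise fall back per-cache.
    match on_failure {
        CacheFailure::InMemory => Connection::open_in_memory().map(Some),
        CacheFailure::BlackHole => Ok(None),
        // Placeholder error; the real code would surface the original failure.
        CacheFailure::Error => Err(rusqlite::Error::InvalidPath(PathBuf::from(path))),
    }
}
```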