astral-uv:charlie/pre

Last commit made on 2024-02-04
Get this branch:
git clone -b charlie/pre https://git.launchpad.net/astral-uv

Branch information

Name:
charlie/pre
Repository:
lp:astral-uv

Recent commits

945326d... by Charlie Marsh <email address hidden>

Pre-fetch

586eeb6... by Andrew Gallant <email address hidden>

tests: update snapshot for new `pip` release (#1245)

See: https://pypi.org/project/pip/#history

0199ad6... by konstin

Fix bootstrap install script on windows (#1162)

Windows doesn't support symlinks, doesn't use a `bin` directory, and all
Pythons are called `python.exe`.

Note that this is still broken: `.\bin\python3.10.13` is missing its
`.exe` extension, and renaming it to `.\bin\python3.10.13.exe` makes it
complain about not finding `python310.dll`.

bc2f8f5... by Zanie Blue <email address hidden>

Fix direct use of `range.simplify` (#1236)

6db9db0... by Zanie Blue <email address hidden>

Invert display of "no versions" incompatibilities with multiple ranges (#1233)

Closes #884

e.g.

```
❯ cargo run -q -- pip compile --python-version 3.12 requirements.in
  × No solution found when resolving dependencies:
  ╰─▶ Because the requested Python version (3.12) does not satisfy Python>=3.6,<3.10 and recommenders==1.0.0 depends on Python>=3.6,<3.9, we can conclude that recommenders==1.0.0 cannot be used.
      And because only the following versions of recommenders are available:
          recommenders<=0.7
          recommenders==1.0.0
          recommenders==1.1.0
          recommenders==1.1.1
      we can conclude that recommenders>0.7,<1.1.0 cannot be used. (1)

      Because the requested Python version (3.12) does not satisfy Python>=3.6,<3.10 and recommenders>=1.1.0 depends on Python>=3.6,<3.10, we can conclude that recommenders>=1.1.0 cannot be used.
      And because we know from (1) that recommenders>0.7,<1.1.0 cannot be used, we can conclude that recommenders>0.7 cannot be used.
      And because you require recommenders>0.7, we can conclude that the requirements are unsatisfiable.
```

e3c1586... by konstin

gitignore packse scenarios folder (#1232)

Remove a directory I accidentally committed, and gitignore it for the
future.

f10f902... by konstin

Yield after channel send and move cpu tasks to thread (#1163)

## Summary

Previously, we were blocking operations that could run in parallel. We
would send requests through our main requests channel but not yield, so
the receiver could only start processing them much later than
necessary. We solve this by switching to the async
`tokio::sync::mpsc::channel`, whose `send` is an async function that
yields.
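
A minimal sketch of the idea (illustrative only, not the actual puffin code): because `send` is awaited, the sender yields to the scheduler on every enqueue, so the receiver can begin work immediately instead of only after the sender's whole batch.

```rust
use tokio::sync::mpsc;

#[tokio::main]
async fn main() {
    // Bounded async channel: `send` awaits (and yields) when queueing.
    let (tx, mut rx) = mpsc::channel::<String>(32);

    // The receiver can start processing as soon as the first send yields,
    // rather than only after the sender has finished its whole batch.
    let receiver = tokio::spawn(async move {
        while let Some(request) = rx.recv().await {
            println!("processing {request}");
        }
    });

    for i in 0..4 {
        // Awaiting `send` is the key difference from a blocking,
        // non-yielding send on a sync channel.
        tx.send(format!("request {i}")).await.unwrap();
    }
    drop(tx); // close the channel so the receiver loop exits
    receiver.await.unwrap();
}
```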

Due to the increased parallelism, cache deserialization and the
conversion from simple API request to version map became bottlenecks, so
I moved them to `spawn_blocking` (see the sketch below). Together, these
changes result in a 30-60% speedup for larger warm-cache resolutions.
Small cases such as black already resolve in 5.7 ms on my machine, so
there's no speedup to be gained; refresh and no-cache runs were too
noisy to get a signal from.

Note for the future: Revisit the bounded channel if we want to produce
requests from `process_request` too (this would be good for
prefetching), to avoid deadlocks.

## Details

We can look at the behavior change through the spans:

```
RUST_LOG=puffin=info TRACING_DURATIONS_FILE=target/traces/jupyter-warm-branch.ndjson cargo run --features tracing-durations-export --bin puffin-dev --profile profiling -- resolve jupyter 2> /dev/null
```

Below, you can see how on main we have discrete phases: all (cached)
simple API requests in parallel, then all (cached) metadata requests in
parallel, repeated until done. The solver is mostly waiting until it has
its version map from the simple API query to be able to choose a
version. The main thread is blocked processing requests.

In the PR branch, the simple API requests succeed much earlier,
allowing the solver to advance and also to schedule more prefetching.
Because of that, `parse_cache` and `from_metadata` became bottlenecks, so
I moved them off the main thread (green color; their spans can now
overlap because they run on multiple threads in parallel). The main
thread isn't blocked on `process_request` anymore; instead it has
frequent idle times. The spans are all much shorter, which indicates
that on main they could have finished much earlier, but a task didn't
yield, so they weren't scheduled to finish (though I haven't dug deep
enough to understand the exact scheduling between the process request
stream and the solver here).

**main**

![jupyter-warm-main](https://github.com/astral-sh/puffin/assets/6826232/693c53cc-1090-41b7-b02a-a607fcd2cd99)

**PR**

![jupyter-warm-branch](https://github.com/astral-sh/puffin/assets/6826232/33435f34-b39b-4b0a-a9d7-4bfc22f55f05)

## Benchmarks

```
$ hyperfine --warmup 3 "target/profiling/main-dev resolve jupyter" "target/profiling/branch-dev resolve jupyter"
Benchmark 1: target/profiling/main-dev resolve jupyter
  Time (mean ± σ): 29.1 ms ± 0.7 ms [User: 22.9 ms, System: 11.1 ms]
  Range (min … max): 27.7 ms … 32.2 ms 103 runs

Benchmark 2: target/profiling/branch-dev resolve jupyter
  Time (mean ± σ): 18.8 ms ± 1.1 ms [User: 37.0 ms, System: 22.7 ms]
  Range (min … max): 16.5 ms … 21.9 ms 154 runs

Summary
  target/profiling/branch-dev resolve jupyter ran
    1.55 ± 0.10 times faster than target/profiling/main-dev resolve jupyter

$ hyperfine --warmup 3 "target/profiling/main-dev resolve meine_stadt_transparent" "target/profiling/branch-dev resolve meine_stadt_transparent"
Benchmark 1: target/profiling/main-dev resolve meine_stadt_transparent
  Time (mean ± σ): 37.8 ms ± 0.9 ms [User: 30.7 ms, System: 14.1 ms]
  Range (min … max): 36.6 ms … 41.5 ms 79 runs

Benchmark 2: target/profiling/branch-dev resolve meine_stadt_transparent
  Time (mean ± σ): 24.7 ms ± 1.5 ms [User: 47.0 ms, System: 39.3 ms]
  Range (min … max): 21.5 ms … 28.7 ms 113 runs

Summary
  target/profiling/branch-dev resolve meine_stadt_transparent ran
    1.53 ± 0.10 times faster than target/profiling/main-dev resolve meine_stadt_transparent

$ hyperfine --warmup 3 "target/profiling/main pip compile scripts/requirements/home-assistant.in" "target/profiling/branch pip compile scripts/requirements/home-assistant.in"
Benchmark 1: target/profiling/main pip compile scripts/requirements/home-assistant.in
  Time (mean ± σ): 229.0 ms ± 2.8 ms [User: 197.3 ms, System: 63.7 ms]
  Range (min … max): 225.8 ms … 234.0 ms 13 runs

Benchmark 2: target/profiling/branch pip compile scripts/requirements/home-assistant.in
  Time (mean ± σ): 91.4 ms ± 5.3 ms [User: 289.2 ms, System: 176.9 ms]
  Range (min … max): 81.0 ms … 104.7 ms 32 runs

Summary
  target/profiling/branch pip compile scripts/requirements/home-assistant.in ran
    2.50 ± 0.15 times faster than target/profiling/main pip compile scripts/requirements/home-assistant.in
```

3771f66... by konstin

Allow additional assertions on command output (#1226)

In the scenario tests, we want to make sure we're actually conforming to
the scenario's expectations, so we now assert both whether resolution
failed or succeeded and that the output includes the given packages (see
the sketch below).

Closes #1112
Closes #1030
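
The shape of such a check might look like the following sketch; the name and signature are hypothetical, not the actual test helpers:

```rust
use std::process::Output;

// Hypothetical helper: assert both the exit status and that the output
// mentions each expected package.
fn assert_resolution(output: &Output, expect_success: bool, packages: &[&str]) {
    assert_eq!(
        output.status.success(),
        expect_success,
        "unexpected resolution outcome"
    );
    let stdout = String::from_utf8_lossy(&output.stdout);
    for package in packages {
        assert!(stdout.contains(package), "output is missing {package}");
    }
}
```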

b16422a... by konstin

Remove insta_cmd (#1225)

We need more flexible filters than those `insta` offers, and `insta_cmd`
makes it impossible to plug in programmatic filters. At the same time,
we use barely any of `insta_cmd`'s features. We can replace the subset we
need in about 50 lines of code.
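
A hedged sketch of what such a small replacement could look like, assuming `std::process::Command` plus plain `(from, to)` string filters; the real helper and its filter format may differ:

```rust
use std::process::Command;

// Run a command and render status, stdout, and stderr into one snapshot
// string, applying programmatic filters along the way.
fn run_and_filter(cmd: &mut Command, filters: &[(&str, &str)]) -> String {
    let output = cmd.output().expect("failed to run command");
    let mut text = format!(
        "exit status: {}\n----- stdout -----\n{}\n----- stderr -----\n{}",
        output.status,
        String::from_utf8_lossy(&output.stdout),
        String::from_utf8_lossy(&output.stderr),
    );
    // Filters let tests scrub environment-specific details, e.g. mapping
    // a temp directory to a stable token before snapshotting.
    for (from, to) in filters {
        text = text.replace(from, to);
    }
    text
}
```

The result can then be passed straight to `insta::assert_snapshot!`.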

0925e44... by konstin

Refactor remaining integration tests (#1220)

Mostly a mechanical refactor to use the `puffin_snapshot!` and
`TestContext` infrastructure in the add, remove, venv, and pip uninstall
tests, in preparation for adding programmatic Windows testing filters.
There is only one remaining usage of `assert_cmd_snapshot!` now, inside
the `puffin_snapshot!` macro.
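
The wrapper might be shaped roughly like this sketch (hypothetical; the real macro's settings will differ), using `insta::with_settings!` to apply shared filters before delegating to the one remaining `assert_cmd_snapshot!` call:

```rust
// Hypothetical shape of the wrapper macro described above.
macro_rules! puffin_snapshot {
    ($filters:expr, $cmd:expr, @$snapshot:literal) => {
        insta::with_settings!({ filters => $filters }, {
            insta_cmd::assert_cmd_snapshot!($cmd, @$snapshot);
        });
    };
}
```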