Merge ~lvoytek/ubuntu/+source/psycopg3:update-to-3.1.8 into ubuntu/+source/psycopg3:ubuntu/devel
Proposed by: Lena Voytek
Status | Merged
---|---
Approved by | git-ubuntu bot
Approved revision | not available
Merged at revision | f1ad1f321f07e5f156e569e45ebd789b6c02c233
Proposed branch | ~lvoytek/ubuntu/+source/psycopg3:update-to-3.1.8
Merge into | ubuntu/+source/psycopg3:ubuntu/devel
Diff against target: 2143 lines (+699/-372), 45 files modified

- .github/workflows/lint.yml (+1/-0)
- .github/workflows/packages-bin.yml (+1/-72)
- .github/workflows/packages-pool.yml (+9/-17)
- .github/workflows/packages-src.yml (+66/-0)
- .github/workflows/tests.yml (+69/-108)
- debian/changelog (+14/-0)
- debian/control (+2/-1)
- docs/api/cursors.rst (+9/-0)
- docs/news.rst (+9/-0)
- psycopg/psycopg/_adapters_map.py (+8/-2)
- psycopg/psycopg/_typeinfo.py (+81/-47)
- psycopg/psycopg/cursor.py (+14/-19)
- psycopg/psycopg/cursor_async.py (+3/-3)
- psycopg/psycopg/pq/pq_ctypes.py (+8/-6)
- psycopg/psycopg/version.py (+1/-1)
- psycopg/setup.cfg (+2/-0)
- psycopg/setup.py (+8/-5)
- psycopg_c/MANIFEST.in (+3/-0)
- psycopg_c/build_backend/cython_backend.py (+38/-0)
- psycopg_c/psycopg_c/_psycopg.pyx (+6/-0)
- psycopg_c/psycopg_c/_psycopg/waiting.pyx (+3/-9)
- psycopg_c/psycopg_c/pq/pgconn.pyx (+12/-6)
- psycopg_c/psycopg_c/types/array.pyx (+1/-1)
- psycopg_c/psycopg_c/version.py (+1/-1)
- psycopg_c/pyproject.toml (+12/-2)
- psycopg_c/setup.cfg (+2/-1)
- psycopg_pool/psycopg_pool/version.py (+1/-1)
- psycopg_pool/pyproject.toml (+3/-0)
- psycopg_pool/setup.cfg (+2/-0)
- tests/constraints.txt (+13/-6)
- tests/crdb/test_cursor.py (+10/-1)
- tests/crdb/test_cursor_async.py (+10/-1)
- tests/fix_crdb.py (+4/-0)
- tests/scripts/bench-411.py (+123/-37)
- tests/test_client_cursor.py (+10/-2)
- tests/test_client_cursor_async.py (+10/-2)
- tests/test_cursor.py (+4/-2)
- tests/test_cursor_async.py (+4/-2)
- tests/test_pipeline.py (+2/-1)
- tests/test_pipeline_async.py (+2/-1)
- tests/test_typeinfo.py (+49/-8)
- tests/types/test_net.py (+3/-0)
- tests/types/test_numeric.py (+1/-1)
- tools/build/ci_install_libpq.sh (+47/-0)
- tools/bump_version.py (+18/-6)
Related bugs:
Reviewer | Review Type | Date Requested | Status
---|---|---|---
git-ubuntu bot | Approve | |
Athos Ribeiro (community) | Approve | |
Canonical Server | Pending | |
Canonical Server Reporter | Pending | |

Review via email:
Commit message
Description of the change
Update from 3.1.7 to 3.1.8.
This is a bug-fix release that makes psycopg compatible with Django 4.2. If an FFe is still needed, though, I can create one.
See https:/
This update will also help fix the autopkgtest issues with python-
PPA: https:/
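As a rough illustration of why the version bump matters: Django 4.2's PostgreSQL backend requires a minimum psycopg 3 version (assumed here to be 3.1.8, per the Django 4.2 release notes), so 3.1.7 is rejected while 3.1.8 passes. The helpers below are hypothetical stand-ins for that check, not Django's actual code:

```python
# Minimal sketch of a Django-4.2-style minimum-version check.
# The 3.1.8 minimum is an assumption based on the Django 4.2 release
# notes; parse_version/compatible_with_django_42 are hypothetical helpers.

def parse_version(version: str) -> tuple:
    """Turn a dotted version string like '3.1.8' into (3, 1, 8)."""
    return tuple(int(part) for part in version.split("."))

DJANGO_42_MINIMUM = (3, 1, 8)

def compatible_with_django_42(psycopg_version: str) -> bool:
    """True if the given psycopg 3 version meets the assumed minimum."""
    return parse_version(psycopg_version) >= DJANGO_42_MINIMUM

print(compatible_with_django_42("3.1.7"))  # False: the version currently in the archive
print(compatible_with_django_42("3.1.8"))  # True: the version this merge introduces
```

Tuple comparison makes the check robust for multi-digit components (e.g. `(3, 1, 10) > (3, 1, 8)`), which a plain string comparison would get wrong.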
Revision history for this message
Athos Ribeiro (athos-ribeiro) wrote:
Uploaded
Revision history for this message
git-ubuntu bot (git-ubuntu-bot) wrote:
Approvers: athos-ribeiro, lvoytek
Uploaders: athos-ribeiro
MP auto-approved
review: Approve
Preview Diff
1 | diff --git a/.github/workflows/lint.yml b/.github/workflows/lint.yml |
2 | index 4527551..2160326 100644 |
3 | --- a/.github/workflows/lint.yml |
4 | +++ b/.github/workflows/lint.yml |
5 | @@ -15,6 +15,7 @@ concurrency: |
6 | jobs: |
7 | lint: |
8 | runs-on: ubuntu-latest |
9 | + if: true |
10 | |
11 | steps: |
12 | - uses: actions/checkout@v3 |
13 | diff --git a/.github/workflows/packages.yml b/.github/workflows/packages-bin.yml |
14 | similarity index 67% |
15 | rename from .github/workflows/packages.yml |
16 | rename to .github/workflows/packages-bin.yml |
17 | index 18a2817..f8283e9 100644 |
18 | --- a/.github/workflows/packages.yml |
19 | +++ b/.github/workflows/packages-bin.yml |
20 | @@ -1,4 +1,4 @@ |
21 | -name: Build packages |
22 | +name: Build binary packages |
23 | |
24 | on: |
25 | workflow_dispatch: |
26 | @@ -7,77 +7,6 @@ on: |
27 | |
28 | jobs: |
29 | |
30 | - sdist: # {{{ |
31 | - runs-on: ubuntu-latest |
32 | - if: true |
33 | - |
34 | - strategy: |
35 | - fail-fast: false |
36 | - matrix: |
37 | - include: |
38 | - - {package: psycopg, format: sdist, impl: python} |
39 | - - {package: psycopg, format: wheel, impl: python} |
40 | - - {package: psycopg_c, format: sdist, impl: c} |
41 | - |
42 | - steps: |
43 | - - uses: actions/checkout@v3 |
44 | - |
45 | - - uses: actions/setup-python@v4 |
46 | - with: |
47 | - python-version: 3.9 |
48 | - |
49 | - - name: Create the sdist packages |
50 | - run: |- |
51 | - python ${{ matrix.package }}/setup.py sdist -d `pwd`/dist/ |
52 | - if: ${{ matrix.format == 'sdist' }} |
53 | - |
54 | - - name: Create the wheel packages |
55 | - run: |- |
56 | - pip install wheel |
57 | - python ${{ matrix.package }}/setup.py bdist_wheel -d `pwd`/dist/ |
58 | - if: ${{ matrix.format == 'wheel' }} |
59 | - |
60 | - - name: Install the Python package and test requirements |
61 | - run: |- |
62 | - pip install `ls dist/*`[test] |
63 | - pip install ./psycopg_pool |
64 | - if: ${{ matrix.package == 'psycopg' }} |
65 | - |
66 | - - name: Install the C package and test requirements |
67 | - run: |- |
68 | - pip install dist/* |
69 | - pip install ./psycopg[test] |
70 | - pip install ./psycopg_pool |
71 | - if: ${{ matrix.package == 'psycopg_c' }} |
72 | - |
73 | - - name: Test the sdist package |
74 | - run: pytest -m 'not slow and not flakey' --color yes |
75 | - env: |
76 | - PSYCOPG_IMPL: ${{ matrix.impl }} |
77 | - PSYCOPG_TEST_DSN: "host=127.0.0.1 user=postgres" |
78 | - PGPASSWORD: password |
79 | - |
80 | - - uses: actions/upload-artifact@v3 |
81 | - with: |
82 | - path: ./dist/* |
83 | - |
84 | - services: |
85 | - postgresql: |
86 | - image: postgres:14 |
87 | - env: |
88 | - POSTGRES_PASSWORD: password |
89 | - ports: |
90 | - - 5432:5432 |
91 | - # Set health checks to wait until postgres has started |
92 | - options: >- |
93 | - --health-cmd pg_isready |
94 | - --health-interval 10s |
95 | - --health-timeout 5s |
96 | - --health-retries 5 |
97 | - |
98 | - |
99 | - # }}} |
100 | - |
101 | linux: # {{{ |
102 | runs-on: ubuntu-latest |
103 | if: true |
104 | diff --git a/.github/workflows/packages-pool.yml b/.github/workflows/packages-pool.yml |
105 | index e9624e7..fdbb9e1 100644 |
106 | --- a/.github/workflows/packages-pool.yml |
107 | +++ b/.github/workflows/packages-pool.yml |
108 | @@ -14,8 +14,8 @@ jobs: |
109 | fail-fast: false |
110 | matrix: |
111 | include: |
112 | - - {package: psycopg_pool, format: sdist, impl: python} |
113 | - - {package: psycopg_pool, format: wheel, impl: python} |
114 | + - {package: psycopg_pool, format: sdist} |
115 | + - {package: psycopg_pool, format: wheel} |
116 | |
117 | steps: |
118 | - uses: actions/checkout@v3 |
119 | @@ -24,26 +24,18 @@ jobs: |
120 | with: |
121 | python-version: 3.9 |
122 | |
123 | - - name: Create the sdist packages |
124 | - run: |- |
125 | - python ${{ matrix.package }}/setup.py sdist -d `pwd`/dist/ |
126 | - if: ${{ matrix.format == 'sdist' }} |
127 | + - name: Install the build package |
128 | + run: pip install build |
129 | |
130 | - - name: Create the wheel packages |
131 | - run: |- |
132 | - pip install wheel |
133 | - python ${{ matrix.package }}/setup.py bdist_wheel -d `pwd`/dist/ |
134 | - if: ${{ matrix.format == 'wheel' }} |
135 | + - name: Create the package |
136 | + run: python -m build -o dist --${{ matrix.format }} ${{ matrix.package }} |
137 | |
138 | - name: Install the Python pool package and test requirements |
139 | - run: |- |
140 | - pip install dist/* |
141 | - pip install ./psycopg[test] |
142 | + run: pip install ./psycopg[test] dist/* |
143 | |
144 | - - name: Test the sdist package |
145 | - run: pytest -m 'not slow and not flakey' --color yes |
146 | + - name: Test the package |
147 | + run: pytest -m 'pool and not slow and not flakey' --color yes |
148 | env: |
149 | - PSYCOPG_IMPL: ${{ matrix.impl }} |
150 | PSYCOPG_TEST_DSN: "host=127.0.0.1 user=postgres" |
151 | PGPASSWORD: password |
152 | |
153 | diff --git a/.github/workflows/packages-src.yml b/.github/workflows/packages-src.yml |
154 | new file mode 100644 |
155 | index 0000000..8dc119d |
156 | --- /dev/null |
157 | +++ b/.github/workflows/packages-src.yml |
158 | @@ -0,0 +1,66 @@ |
159 | +name: Build source packages |
160 | + |
161 | +on: |
162 | + workflow_dispatch: |
163 | + schedule: |
164 | + - cron: '37 6 * * sun' |
165 | + |
166 | +jobs: |
167 | + |
168 | + sdist: |
169 | + runs-on: ubuntu-latest |
170 | + if: true |
171 | + |
172 | + strategy: |
173 | + fail-fast: false |
174 | + matrix: |
175 | + include: |
176 | + - {package: psycopg, format: sdist, impl: python} |
177 | + - {package: psycopg, format: wheel, impl: python} |
178 | + - {package: psycopg_c, format: sdist, impl: c} |
179 | + |
180 | + steps: |
181 | + - uses: actions/checkout@v3 |
182 | + |
183 | + - uses: actions/setup-python@v4 |
184 | + with: |
185 | + python-version: 3.9 |
186 | + |
187 | + - name: Install the build package |
188 | + run: pip install build |
189 | + |
190 | + - name: Create the package |
191 | + run: python -m build -o dist --${{ matrix.format }} ${{ matrix.package }} |
192 | + |
193 | + - name: Install the Python package and test requirements |
194 | + run: pip install `ls dist/*`[test] ./psycopg_pool |
195 | + if: ${{ matrix.package == 'psycopg' }} |
196 | + |
197 | + - name: Install the C package and test requirements |
198 | + run: pip install dist/* ./psycopg[test] ./psycopg_pool |
199 | + if: ${{ matrix.package == 'psycopg_c' }} |
200 | + |
201 | + - name: Test the sdist package |
202 | + run: pytest -m 'not slow and not flakey' --color yes |
203 | + env: |
204 | + PSYCOPG_IMPL: ${{ matrix.impl }} |
205 | + PSYCOPG_TEST_DSN: "host=127.0.0.1 user=postgres" |
206 | + PGPASSWORD: password |
207 | + |
208 | + - uses: actions/upload-artifact@v3 |
209 | + with: |
210 | + path: ./dist/* |
211 | + |
212 | + services: |
213 | + postgresql: |
214 | + image: postgres:14 |
215 | + env: |
216 | + POSTGRES_PASSWORD: password |
217 | + ports: |
218 | + - 5432:5432 |
219 | + # Set health checks to wait until postgres has started |
220 | + options: >- |
221 | + --health-cmd pg_isready |
222 | + --health-interval 10s |
223 | + --health-timeout 5s |
224 | + --health-retries 5 |
225 | diff --git a/.github/workflows/tests.yml b/.github/workflows/tests.yml |
226 | index 9f6a7f5..7660f40 100644 |
227 | --- a/.github/workflows/tests.yml |
228 | +++ b/.github/workflows/tests.yml |
229 | @@ -35,22 +35,21 @@ jobs: |
230 | - {impl: c, python: "3.7", postgres: "postgres:15", libpq: newest} |
231 | - {impl: c, python: "3.8", postgres: "postgres:13"} |
232 | - {impl: c, python: "3.9", postgres: "postgres:14"} |
233 | - - {impl: c, python: "3.10", postgres: "postgres:13", libpq: oldest} |
234 | + - {impl: c, python: "3.10", postgres: "postgres:11", libpq: oldest} |
235 | - {impl: c, python: "3.11", postgres: "postgres:10", libpq: newest} |
236 | |
237 | - {impl: python, python: "3.9", ext: dns, postgres: "postgres:14"} |
238 | - {impl: python, python: "3.9", ext: postgis, postgres: "postgis/postgis"} |
239 | |
240 | + # Test with minimum dependencies versions |
241 | + - {impl: c, python: "3.7", ext: min, postgres: "postgres:15"} |
242 | + |
243 | env: |
244 | PSYCOPG_IMPL: ${{ matrix.impl }} |
245 | DEPS: ./psycopg[test] ./psycopg_pool |
246 | - PSYCOPG_TEST_DSN: "host=127.0.0.1 user=postgres" |
247 | - PGPASSWORD: password |
248 | + PSYCOPG_TEST_DSN: "host=127.0.0.1 user=postgres password=password" |
249 | MARKERS: "" |
250 | |
251 | - # Enable to run tests using the minimum version of dependencies. |
252 | - # PIP_CONSTRAINT: ${{ github.workspace }}/tests/constraints.txt |
253 | - |
254 | steps: |
255 | - uses: actions/checkout@v3 |
256 | |
257 | @@ -58,70 +57,48 @@ jobs: |
258 | with: |
259 | python-version: ${{ matrix.python }} |
260 | |
261 | - - name: Install the newest libpq version available |
262 | - if: ${{ matrix.libpq == 'newest' }} |
263 | + - name: Start PostgreSQL service |
264 | + # Note: this would love to be a service, but I don't see a way to pass |
265 | + # the args to the docker run command line. |
266 | run: | |
267 | - set -x |
268 | - |
269 | - curl -sL https://www.postgresql.org/media/keys/ACCC4CF8.asc \ |
270 | - | gpg --dearmor \ |
271 | - | sudo tee /etc/apt/trusted.gpg.d/apt.postgresql.org.gpg > /dev/null |
272 | - |
273 | - # NOTE: in order to test with a preview release, add its number to |
274 | - # the deb entry. For instance, to test on preview Postgres 16, use: |
275 | - # "deb http://apt.postgresql.org/pub/repos/apt ${rel}-pgdg main 16" |
276 | - rel=$(lsb_release -c -s) |
277 | - echo "deb http://apt.postgresql.org/pub/repos/apt ${rel}-pgdg main" \ |
278 | - | sudo tee -a /etc/apt/sources.list.d/pgdg.list > /dev/null |
279 | - sudo apt-get -qq update |
280 | - |
281 | - pqver=$(apt-cache show libpq5 | grep ^Version: | head -1 \ |
282 | - | awk '{print $2}') |
283 | - sudo apt-get -qq -y install "libpq-dev=${pqver}" "libpq5=${pqver}" |
284 | - |
285 | - - name: Install the oldest libpq version available |
286 | - if: ${{ matrix.libpq == 'oldest' }} |
287 | + docker pull ${{ matrix.postgres }} |
288 | + docker run --rm -d --name postgres -p 5432:5432 \ |
289 | + -e POSTGRES_PASSWORD=password ${{ matrix.postgres }} \ |
290 | + -c max_prepared_transactions=10 |
291 | + |
292 | + - name: Install the wanted libpq version |
293 | + run: sudo ./tools/build/ci_install_libpq.sh ${{ matrix.libpq }} |
294 | + |
295 | + - name: Include psycopg-c to the packages to install |
296 | + if: ${{ matrix.impl == 'c' }} |
297 | run: | |
298 | - set -x |
299 | - pqver=$(apt-cache show libpq5 | grep ^Version: | tail -1 \ |
300 | - | awk '{print $2}') |
301 | - sudo apt-get -qq -y --allow-downgrades install \ |
302 | - "libpq-dev=${pqver}" "libpq5=${pqver}" |
303 | + echo "DEPS=$DEPS ./psycopg_c" >> $GITHUB_ENV |
304 | |
305 | - - if: ${{ matrix.ext == 'dns' }} |
306 | + - name: Include dnspython to the packages to install |
307 | + if: ${{ matrix.ext == 'dns' }} |
308 | run: | |
309 | echo "DEPS=$DEPS dnspython" >> $GITHUB_ENV |
310 | echo "MARKERS=$MARKERS dns" >> $GITHUB_ENV |
311 | |
312 | - - if: ${{ matrix.ext == 'postgis' }} |
313 | + - name: Include shapely to the packages to install |
314 | + if: ${{ matrix.ext == 'postgis' }} |
315 | run: | |
316 | echo "DEPS=$DEPS shapely" >> $GITHUB_ENV |
317 | echo "MARKERS=$MARKERS postgis" >> $GITHUB_ENV |
318 | |
319 | - - if: ${{ matrix.impl == 'c' }} |
320 | + - name: Configure to use the oldest dependencies |
321 | + if: ${{ matrix.ext == 'min' }} |
322 | run: | |
323 | - echo "DEPS=$DEPS ./psycopg_c" >> $GITHUB_ENV |
324 | + echo "DEPS=$DEPS dnspython shapely" >> $GITHUB_ENV |
325 | + echo "PIP_CONSTRAINT=${{ github.workspace }}/tests/constraints.txt" \ |
326 | + >> $GITHUB_ENV |
327 | |
328 | - - name: Install Python dependencies |
329 | + - name: Install Python packages |
330 | run: pip install $DEPS |
331 | |
332 | - name: Run tests |
333 | run: ./tools/build/ci_test.sh |
334 | |
335 | - services: |
336 | - postgresql: |
337 | - image: ${{ matrix.postgres }} |
338 | - env: |
339 | - POSTGRES_PASSWORD: password |
340 | - ports: |
341 | - - 5432:5432 |
342 | - # Set health checks to wait until postgres has started |
343 | - options: >- |
344 | - --health-cmd pg_isready |
345 | - --health-interval 10s |
346 | - --health-timeout 5s |
347 | - --health-retries 5 |
348 | - |
349 | |
350 | # }}} |
351 | |
352 | @@ -152,29 +129,26 @@ jobs: |
353 | # Don't run timing-based tests as they regularly fail. |
354 | # pproxy-based tests fail too, with the proxy not coming up in 2s. |
355 | NOT_MARKERS: "timing proxy mypy" |
356 | - # PIP_CONSTRAINT: ${{ github.workspace }}/tests/constraints.txt |
357 | |
358 | steps: |
359 | - uses: actions/checkout@v3 |
360 | |
361 | + - uses: actions/setup-python@v4 |
362 | + with: |
363 | + python-version: ${{ matrix.python }} |
364 | + |
365 | - name: Install PostgreSQL on the runner |
366 | run: brew install postgresql@14 |
367 | |
368 | - - name: Start PostgreSQL service for test |
369 | + - name: Start PostgreSQL service |
370 | run: brew services start postgresql |
371 | |
372 | - - uses: actions/setup-python@v4 |
373 | - with: |
374 | - python-version: ${{ matrix.python }} |
375 | - |
376 | - - if: ${{ matrix.impl == 'c' }} |
377 | - # skip tests failing on importing psycopg_c.pq on subprocess |
378 | - # they only fail on Travis, work ok locally under tox too. |
379 | - # TODO: check the same on GitHub Actions |
380 | + - name: Include psycopg-c to the packages to install |
381 | + if: ${{ matrix.impl == 'c' }} |
382 | run: | |
383 | echo "DEPS=$DEPS ./psycopg_c" >> $GITHUB_ENV |
384 | |
385 | - - name: Install Python dependencies |
386 | + - name: Install Python packages |
387 | run: pip install $DEPS |
388 | |
389 | - name: Run tests |
390 | @@ -208,40 +182,47 @@ jobs: |
391 | PSYCOPG_TEST_DSN: "host=127.0.0.1 dbname=postgres" |
392 | # On windows pproxy doesn't seem very happy. Also a few timing test fail. |
393 | NOT_MARKERS: "timing proxy mypy" |
394 | - # PIP_CONSTRAINT: ${{ github.workspace }}/tests/constraints.txt |
395 | + |
396 | + defaults: |
397 | + run: |
398 | + shell: bash |
399 | |
400 | steps: |
401 | - uses: actions/checkout@v3 |
402 | |
403 | - - name: Start PostgreSQL service for test |
404 | + - uses: actions/setup-python@v4 |
405 | + with: |
406 | + python-version: ${{ matrix.python }} |
407 | + |
408 | + - name: Start PostgreSQL service |
409 | run: | |
410 | $PgSvc = Get-Service "postgresql*" |
411 | Set-Service $PgSvc.Name -StartupType manual |
412 | $PgSvc.Start() |
413 | + shell: pwsh |
414 | |
415 | - - uses: actions/setup-python@v4 |
416 | - with: |
417 | - python-version: ${{ matrix.python }} |
418 | - |
419 | - # Build a wheel package of the C extensions. |
420 | - # If the wheel is not delocated, import fails with some dll not found |
421 | - # (but it won't tell which one). |
422 | - name: Build the C wheel |
423 | if: ${{ matrix.impl == 'c' }} |
424 | run: | |
425 | pip install delvewheel wheel |
426 | - $env:Path = "C:\Program Files\PostgreSQL\14\bin\;$env:Path" |
427 | - python ./psycopg_c/setup.py bdist_wheel |
428 | - &"delvewheel" repair ` |
429 | - --no-mangle "libiconv-2.dll;libwinpthread-1.dll" ` |
430 | - @(Get-ChildItem psycopg_c\dist\*.whl) |
431 | - &"pip" install @(Get-ChildItem wheelhouse\*.whl) |
432 | + |
433 | + # The windows runner is a total mess, with random copies of the libpq |
434 | + # scattered all over the places. Give precedence to the one under our |
435 | + # control (or the illusion of it). |
436 | + export PATH="/c/Program Files/PostgreSQL/14/bin/:$PATH" |
437 | + |
438 | + # If the wheel is not delocated, import fails with some dll not found |
439 | + # (but it won't tell which one). |
440 | + pip wheel -v -w ./psycopg_c/dist/ ./psycopg_c/ |
441 | + delvewheel repair --no-mangle "libiconv-2.dll;libwinpthread-1.dll" \ |
442 | + -w ./wheelhouse/ psycopg_c/dist/*.whl |
443 | + echo "DEPS=$DEPS $(ls ./wheelhouse/*.whl)" >> $GITHUB_ENV |
444 | + |
445 | + - name: Install Python packages |
446 | + run: pip install $DEPS |
447 | |
448 | - name: Run tests |
449 | - run: | |
450 | - pip install $DEPS |
451 | - ./tools/build/ci_test.sh |
452 | - shell: bash |
453 | + run: ./tools/build/ci_test.sh |
454 | |
455 | |
456 | # }}} |
457 | @@ -268,7 +249,7 @@ jobs: |
458 | with: |
459 | python-version: ${{ matrix.python }} |
460 | |
461 | - - name: Run CockroachDB |
462 | + - name: Start CockroachDB service |
463 | # Note: this would love to be a service, but I don't see a way to pass |
464 | # the args to the docker run command line. |
465 | run: | |
466 | @@ -276,39 +257,19 @@ jobs: |
467 | docker run --rm -d --name crdb -p 26257:26257 \ |
468 | cockroachdb/cockroach:${{ matrix.crdb }} start-single-node --insecure |
469 | |
470 | - - name: Install the newest libpq version available |
471 | - if: ${{ matrix.libpq == 'newest' }} |
472 | - run: | |
473 | - set -x |
474 | - |
475 | - curl -sL https://www.postgresql.org/media/keys/ACCC4CF8.asc \ |
476 | - | gpg --dearmor \ |
477 | - | sudo tee /etc/apt/trusted.gpg.d/apt.postgresql.org.gpg > /dev/null |
478 | - |
479 | - # NOTE: in order to test with a preview release, add its number to |
480 | - # the deb entry. For instance, to test on preview Postgres 16, use: |
481 | - # "deb http://apt.postgresql.org/pub/repos/apt ${rel}-pgdg main 16" |
482 | - rel=$(lsb_release -c -s) |
483 | - echo "deb http://apt.postgresql.org/pub/repos/apt ${rel}-pgdg main" \ |
484 | - | sudo tee -a /etc/apt/sources.list.d/pgdg.list > /dev/null |
485 | - sudo apt-get -qq update |
486 | - |
487 | - pqver=$(apt-cache show libpq5 | grep ^Version: | head -1 \ |
488 | - | awk '{print $2}') |
489 | - sudo apt-get -qq -y install "libpq-dev=${pqver}" "libpq5=${pqver}" |
490 | + - name: Install the wanted libpq version |
491 | + run: sudo ./tools/build/ci_install_libpq.sh ${{ matrix.libpq }} |
492 | |
493 | - - if: ${{ matrix.impl == 'c' }} |
494 | + - name: Include psycopg-c to the packages to install |
495 | + if: ${{ matrix.impl == 'c' }} |
496 | run: | |
497 | echo "DEPS=$DEPS ./psycopg_c" >> $GITHUB_ENV |
498 | |
499 | - - name: Install Python dependencies |
500 | + - name: Install Python packages |
501 | run: pip install $DEPS |
502 | |
503 | - name: Run tests |
504 | run: ./tools/build/ci_test.sh |
505 | |
506 | - - name: Stop CockroachDB |
507 | - run: docker kill crdb |
508 | - |
509 | |
510 | # }}} |
511 | diff --git a/debian/changelog b/debian/changelog |
512 | index 91c9a7d..2e125c0 100644 |
513 | --- a/debian/changelog |
514 | +++ b/debian/changelog |
515 | @@ -1,3 +1,17 @@ |
516 | +psycopg3 (3.1.8-0ubuntu1) mantic; urgency=medium |
517 | + |
518 | + * New upstream release 3.1.8 (LP: #2022089) |
519 | + - This release update makes psycopg3 compatible with Django 4.2, see: |
520 | + https://docs.djangoproject.com/en/4.2/releases/4.2/#psycopg-3-support |
521 | + - Updates: |
522 | + + Don't pollute server logs when types looked for by TypeInfo.fetch() |
523 | + are not found. |
524 | + + Set Cursor.rowcount to the number of rows of each result set from |
525 | + Cursor.executemany() when called with !returning=True. |
526 | + + Fix TypeInfo.fetch() when used with ClientCursor |
527 | + |
528 | + -- Lena Voytek <lena.voytek@canonical.com> Wed, 30 Aug 2023 07:23:06 -0700 |
529 | + |
530 | psycopg3 (3.1.7-4) unstable; urgency=medium |
531 | |
532 | * Add *.pydist files to deal with dh_python psycopg names overrides. |
533 | diff --git a/debian/control b/debian/control |
534 | index d719c72..f0f9c2b 100644 |
535 | --- a/debian/control |
536 | +++ b/debian/control |
537 | @@ -23,7 +23,8 @@ Build-Depends-Indep: |
538 | python3-sphinx <!nodoc>, |
539 | python3-sphinx-autodoc-typehints <!nodoc>, |
540 | furo <!nodoc>, |
541 | -Maintainer: Tomasz Rybak <serpent@debian.org> |
542 | +Maintainer: Ubuntu Developers <ubuntu-devel-discuss@lists.ubuntu.com> |
543 | +XSBC-Original-Maintainer: Tomasz Rybak <serpent@debian.org> |
544 | Uploaders: Debian Python Team <team+python@tracker.debian.org> |
545 | Standards-Version: 4.6.2 |
546 | Rules-Requires-Root: no |
547 | diff --git a/docs/api/cursors.rst b/docs/api/cursors.rst |
548 | index 9c5b478..78c336b 100644 |
549 | --- a/docs/api/cursors.rst |
550 | +++ b/docs/api/cursors.rst |
551 | @@ -105,6 +105,11 @@ The `!Cursor` class |
552 | methods. Each input parameter will produce a separate result set: use |
553 | `nextset()` to read the results of the queries after the first one. |
554 | |
555 | + The value of `rowcount` is set to the cumulated number of rows |
556 | + affected by queries; except when using `!returning=True`, in which |
557 | + case it is set to the number of rows in the current result set (i.e. |
558 | + the first one, until `nextset()` gets called). |
559 | + |
560 | See :ref:`query-parameters` for all the details about executing |
561 | queries. |
562 | |
563 | @@ -239,6 +244,10 @@ The `!Cursor` class |
564 | a successful command, such as ``CREATE TABLE`` or ``UPDATE 42``. |
565 | |
566 | .. autoattribute:: rowcount |
567 | + |
568 | + From `executemany()`, unless called with `!returning=True`, this is |
569 | + the cumulated number of rows affected by executed commands. |
570 | + |
571 | .. autoattribute:: rownumber |
572 | |
573 | .. attribute:: _query |
574 | diff --git a/docs/news.rst b/docs/news.rst |
575 | index 46dfbe7..d20c993 100644 |
576 | --- a/docs/news.rst |
577 | +++ b/docs/news.rst |
578 | @@ -10,6 +10,15 @@ |
579 | Current release |
580 | --------------- |
581 | |
582 | +Psycopg 3.1.8 |
583 | +^^^^^^^^^^^^^ |
584 | + |
585 | +- Don't pollute server logs when types looked for by `TypeInfo.fetch()` |
586 | + are not found (:ticket:`#473`). |
587 | +- Set `Cursor.rowcount` to the number of rows of each result set from |
588 | + `~Cursor.executemany()` when called with `!returning=True` (:ticket:`#479`). |
589 | +- Fix `TypeInfo.fetch()` when used with `ClientCursor` (:ticket:`#484`). |
590 | + |
591 | Psycopg 3.1.7 |
592 | ^^^^^^^^^^^^^ |
593 | |
594 | diff --git a/psycopg/psycopg/_adapters_map.py b/psycopg/psycopg/_adapters_map.py |
595 | index a3a6ef8..70bf4cc 100644 |
596 | --- a/psycopg/psycopg/_adapters_map.py |
597 | +++ b/psycopg/psycopg/_adapters_map.py |
598 | @@ -197,9 +197,15 @@ class AdaptersMap: |
599 | use the last one of the dumpers registered on `!cls`. |
600 | """ |
601 | try: |
602 | - dmap = self._dumpers[format] |
603 | + # Fast path: the class has a known dumper. |
604 | + return self._dumpers[format][cls] |
605 | except KeyError: |
606 | - raise ValueError(f"bad dumper format: {format}") |
607 | + if format not in self._dumpers: |
608 | + raise ValueError(f"bad dumper format: {format}") |
609 | + |
610 | + # If the KeyError was caused by cls missing from dmap, let's |
611 | + # look for different cases. |
612 | + dmap = self._dumpers[format] |
613 | |
614 | # Look for the right class, including looking at superclasses |
615 | for scls in cls.__mro__: |
616 | diff --git a/psycopg/psycopg/_typeinfo.py b/psycopg/psycopg/_typeinfo.py |
617 | index 2f1a24d..52d5ca7 100644 |
618 | --- a/psycopg/psycopg/_typeinfo.py |
619 | +++ b/psycopg/psycopg/_typeinfo.py |
620 | @@ -12,13 +12,13 @@ from typing import Sequence, Tuple, Type, TypeVar, Union, TYPE_CHECKING |
621 | from typing_extensions import TypeAlias |
622 | |
623 | from . import errors as e |
624 | -from .abc import AdaptContext |
625 | +from .abc import AdaptContext, Query |
626 | from .rows import dict_row |
627 | |
628 | if TYPE_CHECKING: |
629 | - from .connection import Connection |
630 | + from .connection import BaseConnection, Connection |
631 | from .connection_async import AsyncConnection |
632 | - from .sql import Identifier |
633 | + from .sql import Identifier, SQL |
634 | |
635 | T = TypeVar("T", bound="TypeInfo") |
636 | RegistryKey: TypeAlias = Union[str, int, Tuple[type, int]] |
637 | @@ -62,34 +62,39 @@ class TypeInfo: |
638 | @overload |
639 | @classmethod |
640 | async def fetch( |
641 | - cls: Type[T], |
642 | - conn: "AsyncConnection[Any]", |
643 | - name: Union[str, "Identifier"], |
644 | + cls: Type[T], conn: "AsyncConnection[Any]", name: Union[str, "Identifier"] |
645 | ) -> Optional[T]: |
646 | ... |
647 | |
648 | @classmethod |
649 | def fetch( |
650 | - cls: Type[T], |
651 | - conn: "Union[Connection[Any], AsyncConnection[Any]]", |
652 | - name: Union[str, "Identifier"], |
653 | + cls: Type[T], conn: "BaseConnection[Any]", name: Union[str, "Identifier"] |
654 | ) -> Any: |
655 | """Query a system catalog to read information about a type.""" |
656 | from .sql import Composable |
657 | + from .connection import Connection |
658 | from .connection_async import AsyncConnection |
659 | |
660 | if isinstance(name, Composable): |
661 | name = name.as_string(conn) |
662 | |
663 | - if isinstance(conn, AsyncConnection): |
664 | + if isinstance(conn, Connection): |
665 | + return cls._fetch(conn, name) |
666 | + elif isinstance(conn, AsyncConnection): |
667 | return cls._fetch_async(conn, name) |
668 | + else: |
669 | + raise TypeError( |
670 | + f"expected Connection or AsyncConnection, got {type(conn).__name__}" |
671 | + ) |
672 | |
673 | + @classmethod |
674 | + def _fetch(cls: Type[T], conn: "Connection[Any]", name: str) -> Optional[T]: |
675 | # This might result in a nested transaction. What we want is to leave |
676 | # the function with the connection in the state we found (either idle |
677 | # or intrans) |
678 | try: |
679 | with conn.transaction(): |
680 | - with conn.cursor(binary=True, row_factory=dict_row) as cur: |
681 | + with conn.cursor(row_factory=dict_row) as cur: |
682 | cur.execute(cls._get_info_query(conn), {"name": name}) |
683 | recs = cur.fetchall() |
684 | except e.UndefinedObject: |
685 | @@ -101,14 +106,9 @@ class TypeInfo: |
686 | async def _fetch_async( |
687 | cls: Type[T], conn: "AsyncConnection[Any]", name: str |
688 | ) -> Optional[T]: |
689 | - """ |
690 | - Query a system catalog to read information about a type. |
691 | - |
692 | - Similar to `fetch()` but can use an asynchronous connection. |
693 | - """ |
694 | try: |
695 | async with conn.transaction(): |
696 | - async with conn.cursor(binary=True, row_factory=dict_row) as cur: |
697 | + async with conn.cursor(row_factory=dict_row) as cur: |
698 | await cur.execute(cls._get_info_query(conn), {"name": name}) |
699 | recs = await cur.fetchall() |
700 | except e.UndefinedObject: |
701 | @@ -146,17 +146,43 @@ class TypeInfo: |
702 | register_array(self, context) |
703 | |
704 | @classmethod |
705 | - def _get_info_query( |
706 | - cls, conn: "Union[Connection[Any], AsyncConnection[Any]]" |
707 | - ) -> str: |
708 | - return """\ |
709 | + def _get_info_query(cls, conn: "BaseConnection[Any]") -> Query: |
710 | + from .sql import SQL |
711 | + |
712 | + return SQL( |
713 | + """\ |
714 | SELECT |
715 | typname AS name, oid, typarray AS array_oid, |
716 | oid::regtype::text AS regtype, typdelim AS delimiter |
717 | FROM pg_type t |
718 | -WHERE t.oid = %(name)s::regtype |
719 | +WHERE t.oid = {regtype} |
720 | ORDER BY t.oid |
721 | """ |
722 | + ).format(regtype=cls._to_regtype(conn)) |
723 | + |
724 | + @classmethod |
725 | + def _has_to_regtype_function(cls, conn: "BaseConnection[Any]") -> bool: |
726 | + # to_regtype() introduced in PostgreSQL 9.4 and CockroachDB 22.2 |
727 | + info = conn.info |
728 | + if info.vendor == "PostgreSQL": |
729 | + return info.server_version >= 90400 |
730 | + elif info.vendor == "CockroachDB": |
731 | + return info.server_version >= 220200 |
732 | + else: |
733 | + return False |
734 | + |
735 | + @classmethod |
736 | + def _to_regtype(cls, conn: "BaseConnection[Any]") -> "SQL": |
737 | + # `to_regtype()` returns the type oid or NULL, unlike the :: operator, |
738 | + # which returns the type or raises an exception, which requires |
739 | + # a transaction rollback and leaves traces in the server logs. |
740 | + |
741 | + from .sql import SQL |
742 | + |
743 | + if cls._has_to_regtype_function(conn): |
744 | + return SQL("to_regtype(%(name)s)") |
745 | + else: |
746 | + return SQL("%(name)s::regtype") |
747 | |
748 | def _added(self, registry: "TypesRegistry") -> None: |
749 | """Method called by the `!registry` when the object is added there.""" |
750 | @@ -181,17 +207,19 @@ class RangeInfo(TypeInfo): |
751 | self.subtype_oid = subtype_oid |
752 | |
753 | @classmethod |
754 | - def _get_info_query( |
755 | - cls, conn: "Union[Connection[Any], AsyncConnection[Any]]" |
756 | - ) -> str: |
757 | - return """\ |
758 | + def _get_info_query(cls, conn: "BaseConnection[Any]") -> Query: |
759 | + from .sql import SQL |
760 | + |
761 | + return SQL( |
762 | + """\ |
763 | SELECT t.typname AS name, t.oid AS oid, t.typarray AS array_oid, |
764 | t.oid::regtype::text AS regtype, |
765 | r.rngsubtype AS subtype_oid |
766 | FROM pg_type t |
767 | JOIN pg_range r ON t.oid = r.rngtypid |
768 | -WHERE t.oid = %(name)s::regtype |
769 | +WHERE t.oid = {regtype} |
770 | """ |
771 | + ).format(regtype=cls._to_regtype(conn)) |
772 | |
773 | def _added(self, registry: "TypesRegistry") -> None: |
774 | # Map ranges subtypes to info |
775 | @@ -201,8 +229,7 @@ WHERE t.oid = %(name)s::regtype |
776 | class MultirangeInfo(TypeInfo): |
777 | """Manage information about a multirange type.""" |
778 | |
779 | - # TODO: expose to multirange module once added |
780 | - # __module__ = "psycopg.types.multirange" |
781 | + __module__ = "psycopg.types.multirange" |
782 | |
783 | def __init__( |
784 | self, |
785 | @@ -219,21 +246,24 @@ class MultirangeInfo(TypeInfo): |
786 | self.subtype_oid = subtype_oid |
787 | |
788 | @classmethod |
789 | - def _get_info_query( |
790 | - cls, conn: "Union[Connection[Any], AsyncConnection[Any]]" |
791 | - ) -> str: |
792 | + def _get_info_query(cls, conn: "BaseConnection[Any]") -> Query: |
793 | + from .sql import SQL |
794 | + |
795 | if conn.info.server_version < 140000: |
796 | raise e.NotSupportedError( |
797 | "multirange types are only available from PostgreSQL 14" |
798 | ) |
799 | - return """\ |
800 | + |
801 | + return SQL( |
802 | + """\ |
803 | SELECT t.typname AS name, t.oid AS oid, t.typarray AS array_oid, |
804 | t.oid::regtype::text AS regtype, |
805 | r.rngtypid AS range_oid, r.rngsubtype AS subtype_oid |
806 | FROM pg_type t |
807 | JOIN pg_range r ON t.oid = r.rngmultitypid |
808 | -WHERE t.oid = %(name)s::regtype |
809 | +WHERE t.oid = {regtype} |
810 | """ |
811 | + ).format(regtype=cls._to_regtype(conn)) |
812 | |
813 | def _added(self, registry: "TypesRegistry") -> None: |
814 | # Map multiranges ranges and subtypes to info |
815 | @@ -263,15 +293,16 @@ class CompositeInfo(TypeInfo): |
816 | self.python_type: Optional[type] = None |
817 | |
818 | @classmethod |
819 | - def _get_info_query( |
820 | - cls, conn: "Union[Connection[Any], AsyncConnection[Any]]" |
821 | - ) -> str: |
822 | - return """\ |
823 | + def _get_info_query(cls, conn: "BaseConnection[Any]") -> Query: |
824 | + from .sql import SQL |
825 | + |
826 | + return SQL( |
827 | + """\ |
828 | SELECT |
829 | t.typname AS name, t.oid AS oid, t.typarray AS array_oid, |
830 | t.oid::regtype::text AS regtype, |
831 | - coalesce(a.fnames, '{}') AS field_names, |
832 | - coalesce(a.ftypes, '{}') AS field_types |
833 | + coalesce(a.fnames, '{{}}') AS field_names, |
834 | + coalesce(a.ftypes, '{{}}') AS field_types |
835 | FROM pg_type t |
836 | LEFT JOIN ( |
837 | SELECT |
838 | @@ -282,15 +313,16 @@ LEFT JOIN ( |
839 | SELECT a.attrelid, a.attname, a.atttypid |
840 | FROM pg_attribute a |
841 | JOIN pg_type t ON t.typrelid = a.attrelid |
842 | - WHERE t.oid = %(name)s::regtype |
843 | + WHERE t.oid = {regtype} |
844 | AND a.attnum > 0 |
845 | AND NOT a.attisdropped |
846 | ORDER BY a.attnum |
847 | ) x |
848 | GROUP BY attrelid |
849 | ) a ON a.attrelid = t.typrelid |
850 | -WHERE t.oid = %(name)s::regtype |
851 | +WHERE t.oid = {regtype} |
852 | """ |
853 | + ).format(regtype=cls._to_regtype(conn)) |
854 | |
855 | |
856 | class EnumInfo(TypeInfo): |
857 | @@ -311,10 +343,11 @@ class EnumInfo(TypeInfo): |
858 | self.enum: Optional[Type[Enum]] = None |
859 | |
860 | @classmethod |
861 | - def _get_info_query( |
862 | - cls, conn: "Union[Connection[Any], AsyncConnection[Any]]" |
863 | - ) -> str: |
864 | - return """\ |
865 | + def _get_info_query(cls, conn: "BaseConnection[Any]") -> Query: |
866 | + from .sql import SQL |
867 | + |
868 | + return SQL( |
869 | + """\ |
870 | SELECT name, oid, array_oid, array_agg(label) AS labels |
871 | FROM ( |
872 | SELECT |
873 | @@ -323,11 +356,12 @@ FROM ( |
874 | FROM pg_type t |
875 | LEFT JOIN pg_enum e |
876 | ON e.enumtypid = t.oid |
877 | - WHERE t.oid = %(name)s::regtype |
878 | + WHERE t.oid = {regtype} |
879 | ORDER BY e.enumsortorder |
880 | ) x |
881 | GROUP BY name, oid, array_oid |
882 | """ |
883 | + ).format(regtype=cls._to_regtype(conn)) |
884 | |
885 | |
886 | class TypesRegistry: |
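The `_to_regtype()` dispatch in the hunk above can be sketched outside the driver. A minimal sketch, assuming plain strings for the vendor and an integer server version; `server_supports_to_regtype` and `regtype_fragment` are illustrative names, not psycopg API:

```python
def server_supports_to_regtype(vendor: str, server_version: int) -> bool:
    # to_regtype() appeared in PostgreSQL 9.4 and CockroachDB 22.2.
    if vendor == "PostgreSQL":
        return server_version >= 90400
    if vendor == "CockroachDB":
        return server_version >= 220200
    return False


def regtype_fragment(vendor: str, server_version: int) -> str:
    # to_regtype() returns NULL for an unknown name; the ::regtype cast
    # raises an error instead, forcing a transaction rollback.
    if server_supports_to_regtype(vendor, server_version):
        return "to_regtype(%(name)s)"
    return "%(name)s::regtype"
```

The same fragment is then interpolated into every `_get_info_query()` via `SQL(...).format(regtype=...)`.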
887 | diff --git a/psycopg/psycopg/cursor.py b/psycopg/psycopg/cursor.py |
888 | index 42c3804..7c32f29 100644 |
889 | --- a/psycopg/psycopg/cursor.py |
890 | +++ b/psycopg/psycopg/cursor.py |
891 | @@ -219,7 +219,8 @@ class BaseCursor(Generic[ConnectionType, Row]): |
892 | assert pipeline |
893 | |
894 | yield from self._start_query(query) |
895 | - self._rowcount = 0 |
896 | + if not returning: |
897 | + self._rowcount = 0 |
898 | |
899 | assert self._execmany_returning is None |
900 | self._execmany_returning = returning |
901 | @@ -251,8 +252,9 @@ class BaseCursor(Generic[ConnectionType, Row]): |
902 | Generator implementing `Cursor.executemany()` with pipelines not available. |
903 | """ |
904 | yield from self._start_query(query) |
905 | + if not returning: |
906 | + self._rowcount = 0 |
907 | first = True |
908 | - nrows = 0 |
909 | for params in params_seq: |
910 | if first: |
911 | pgq = self._convert_query(query, params) |
912 | @@ -266,17 +268,15 @@ class BaseCursor(Generic[ConnectionType, Row]): |
913 | self._check_results(results) |
914 | if returning: |
915 | self._results.extend(results) |
916 | - |
917 | - for res in results: |
918 | - nrows += res.command_tuples or 0 |
919 | + else: |
920 | + # In the non-returning case, set rowcount to the cumulative |
921 | + # number of rows of the executed queries. |
922 | + for res in results: |
923 | + self._rowcount += res.command_tuples or 0 |
924 | |
925 | if self._results: |
926 | self._select_current_result(0) |
927 | |
928 | - # Override rowcount for the first result. Calls to nextset() will change |
929 | - # it to the value of that result only, but we hope nobody will notice. |
930 | - # You haven't read this comment. |
931 | - self._rowcount = nrows |
932 | self._last_query = query |
933 | |
934 | for cmd in self._conn._prepared.get_maintenance_commands(): |
935 | @@ -545,16 +545,11 @@ class BaseCursor(Generic[ConnectionType, Row]): |
936 | self._results.extend(results) |
937 | if first_batch: |
938 | self._select_current_result(0) |
939 | - self._rowcount = 0 |
940 | - |
941 | - # Override rowcount for the first result. Calls to nextset() will |
942 | - # change it to the value of that result only, but we hope nobody |
943 | - # will notice. |
944 | - # You haven't read this comment. |
945 | - if self._rowcount < 0: |
946 | - self._rowcount = 0 |
947 | - for res in results: |
948 | - self._rowcount += res.command_tuples or 0 |
949 | + else: |
950 | + # In the non-returning case, set rowcount to the cumulative |
951 | + # number of rows of the executed queries. |
952 | + for res in results: |
953 | + self._rowcount += res.command_tuples or 0 |
954 | |
955 | def _send_prepare(self, name: bytes, query: PostgresQuery) -> None: |
956 | if self._conn._pipeline: |
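The rowcount bookkeeping that replaces the old "you haven't read this comment" hack can be mimicked with plain objects. A sketch under stated assumptions: `FakeResult` is an invented stand-in for a libpq result, and the returning branch is reduced to "leave rowcount alone":

```python
class FakeResult:
    # Stand-in for a libpq result; command_tuples may be None (e.g. for
    # statements that report no affected-row count).
    def __init__(self, command_tuples):
        self.command_tuples = command_tuples


def executemany_rowcount(batches, returning: bool) -> int:
    # Mirrors the hunk above: reset rowcount only when not returning,
    # then cumulate command_tuples over every executed batch.
    rowcount = 0 if not returning else -1
    for results in batches:
        if not returning:
            for res in results:
                rowcount += res.command_tuples or 0
    return rowcount


batches = [[FakeResult(2)], [FakeResult(3), FakeResult(None)]]
```

In the returning case the real cursor instead exposes the rowcount of the currently selected result, which `nextset()` advances.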
957 | diff --git a/psycopg/psycopg/cursor_async.py b/psycopg/psycopg/cursor_async.py |
958 | index 8971d40..42cd8e5 100644 |
959 | --- a/psycopg/psycopg/cursor_async.py |
960 | +++ b/psycopg/psycopg/cursor_async.py |
961 | @@ -173,10 +173,10 @@ class AsyncCursor(BaseCursor["AsyncConnection[Any]", Row]): |
962 | async def fetchone(self) -> Optional[Row]: |
963 | await self._fetch_pipeline() |
964 | self._check_result_for_fetch() |
965 | - rv = self._tx.load_row(self._pos, self._make_row) |
966 | - if rv is not None: |
967 | + record = self._tx.load_row(self._pos, self._make_row) |
968 | + if record is not None: |
969 | self._pos += 1 |
970 | - return rv |
971 | + return record |
972 | |
973 | async def fetchmany(self, size: int = 0) -> List[Row]: |
974 | await self._fetch_pipeline() |
975 | diff --git a/psycopg/psycopg/pq/pq_ctypes.py b/psycopg/psycopg/pq/pq_ctypes.py |
976 | index 8b87c19..204e384 100644 |
977 | --- a/psycopg/psycopg/pq/pq_ctypes.py |
978 | +++ b/psycopg/psycopg/pq/pq_ctypes.py |
979 | @@ -262,7 +262,7 @@ class PGconn: |
980 | self._ensure_pgconn() |
981 | rv = impl.PQexec(self._pgconn_ptr, command) |
982 | if not rv: |
983 | - raise MemoryError("couldn't allocate PGresult") |
984 | + raise e.OperationalError(f"executing query failed: {error_message(self)}") |
985 | return PGresult(rv) |
986 | |
987 | def send_query(self, command: bytes) -> None: |
988 | @@ -286,7 +286,7 @@ class PGconn: |
989 | self._ensure_pgconn() |
990 | rv = impl.PQexecParams(*args) |
991 | if not rv: |
992 | - raise MemoryError("couldn't allocate PGresult") |
993 | + raise e.OperationalError(f"executing query failed: {error_message(self)}") |
994 | return PGresult(rv) |
995 | |
996 | def send_query_params( |
997 | @@ -427,7 +427,7 @@ class PGconn: |
998 | self._ensure_pgconn() |
999 | rv = impl.PQprepare(self._pgconn_ptr, name, command, nparams, atypes) |
1000 | if not rv: |
1001 | - raise MemoryError("couldn't allocate PGresult") |
1002 | + raise e.OperationalError(f"preparing query failed: {error_message(self)}") |
1003 | return PGresult(rv) |
1004 | |
1005 | def exec_prepared( |
1006 | @@ -477,7 +477,9 @@ class PGconn: |
1007 | result_format, |
1008 | ) |
1009 | if not rv: |
1010 | - raise MemoryError("couldn't allocate PGresult") |
1011 | + raise e.OperationalError( |
1012 | + f"executing prepared query failed: {error_message(self)}" |
1013 | + ) |
1014 | return PGresult(rv) |
1015 | |
1016 | def describe_prepared(self, name: bytes) -> "PGresult": |
1017 | @@ -486,7 +488,7 @@ class PGconn: |
1018 | self._ensure_pgconn() |
1019 | rv = impl.PQdescribePrepared(self._pgconn_ptr, name) |
1020 | if not rv: |
1021 | - raise MemoryError("couldn't allocate PGresult") |
1022 | + raise e.OperationalError(f"describe prepared failed: {error_message(self)}") |
1023 | return PGresult(rv) |
1024 | |
1025 | def send_describe_prepared(self, name: bytes) -> None: |
1026 | @@ -504,7 +506,7 @@ class PGconn: |
1027 | self._ensure_pgconn() |
1028 | rv = impl.PQdescribePortal(self._pgconn_ptr, name) |
1029 | if not rv: |
1030 | - raise MemoryError("couldn't allocate PGresult") |
1031 | + raise e.OperationalError(f"describe portal failed: {error_message(self)}") |
1032 | return PGresult(rv) |
1033 | |
1034 | def send_describe_portal(self, name: bytes) -> None: |
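The `MemoryError` to `OperationalError` change follows one pattern at every call site: a NULL result plus the connection's error message becomes a descriptive exception. A sketch with a local `OperationalError` stand-in (not the psycopg class) and a hypothetical `check_result` helper:

```python
class OperationalError(Exception):
    # Local stand-in for psycopg.errors.OperationalError.
    pass


def check_result(rv, error_message: str, action: str = "executing query"):
    # PQexec and friends return NULL both on out-of-memory and on
    # connection failures, so a generic OperationalError carrying the
    # libpq error message is more accurate than MemoryError.
    if not rv:
        raise OperationalError(f"{action} failed: {error_message}")
    return rv


try:
    check_result(None, "server closed the connection unexpectedly")
except OperationalError as exc:
    msg = str(exc)
```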
1035 | diff --git a/psycopg/psycopg/version.py b/psycopg/psycopg/version.py |
1036 | index a98bc35..f600f16 100644 |
1037 | --- a/psycopg/psycopg/version.py |
1038 | +++ b/psycopg/psycopg/version.py |
1039 | @@ -8,7 +8,7 @@ psycopg distribution version file. |
1040 | # https://www.python.org/dev/peps/pep-0440/ |
1041 | |
1042 | # STOP AND READ! if you change: |
1043 | -__version__ = "3.1.7" |
1044 | +__version__ = "3.1.8" |
1045 | # also change: |
1046 | # - `docs/news.rst` to declare this as the current version or an unreleased one |
1047 | # - `psycopg_c/psycopg_c/version.py` to the same version. |
1048 | diff --git a/psycopg/setup.cfg b/psycopg/setup.cfg |
1049 | index fdcb612..f1c211a 100644 |
1050 | --- a/psycopg/setup.cfg |
1051 | +++ b/psycopg/setup.cfg |
1052 | @@ -8,6 +8,8 @@ license = GNU Lesser General Public License v3 (LGPLv3) |
1053 | |
1054 | project_urls = |
1055 | Homepage = https://psycopg.org/ |
1056 | + Documentation = https://psycopg.org/psycopg3/docs/ |
1057 | + Changes = https://psycopg.org/psycopg3/docs/news.html |
1058 | Code = https://github.com/psycopg/psycopg |
1059 | Issue Tracker = https://github.com/psycopg/psycopg/issues |
1060 | Download = https://pypi.org/project/psycopg/ |
1061 | diff --git a/psycopg/setup.py b/psycopg/setup.py |
1062 | index 90d4380..9b18e86 100644 |
1063 | --- a/psycopg/setup.py |
1064 | +++ b/psycopg/setup.py |
1065 | @@ -14,11 +14,14 @@ here = os.path.abspath(os.path.dirname(__file__)) |
1066 | if os.path.abspath(os.getcwd()) != here: |
1067 | os.chdir(here) |
1068 | |
1069 | -# Only for release 3.1.7. Not building binary packages because Scaleway |
1070 | +# Only for release 3.1.7-3.1.8. Not building binary packages because Scaleway |
1071 | # has no runner available, but psycopg-binary 3.1.6 should work as well |
1072 | -# as the only change is in rows.py. |
1073 | -version = "3.1.7" |
1074 | -ext_versions = ">= 3.1.6, <= 3.1.7" |
1075 | +# as there was no change in the interface between psycopg and psycopg_c. |
1076 | +# |
1077 | +# This is getting frustrating, dangerous, stupid. We need to find a way |
1078 | +# out of Scaleway. |
1079 | +version = "3.1.8" |
1080 | +ext_versions = ">= 3.1.6, <= 3.1.8" |
1081 | |
1082 | extras_require = { |
1083 | # Install the C extension module (requires dev tools) |
1084 | @@ -40,7 +43,7 @@ extras_require = { |
1085 | "pytest >= 6.2.5", |
1086 | "pytest-asyncio >= 0.17", |
1087 | "pytest-cov >= 3.0", |
1088 | - "pytest-randomly >= 3.10", |
1089 | + "pytest-randomly >= 3.5", |
1090 | ], |
1091 | # Requirements needed for development |
1092 | "dev": [ |
1093 | diff --git a/psycopg_c/MANIFEST.in b/psycopg_c/MANIFEST.in |
1094 | new file mode 100644 |
1095 | index 0000000..117298e |
1096 | --- /dev/null |
1097 | +++ b/psycopg_c/MANIFEST.in |
1098 | @@ -0,0 +1,3 @@ |
1099 | +# Include the build backend in the distributed files. |
1100 | +# It doesn't seem it can be specified in setup.cfg |
1101 | +include build_backend/*.py |
1102 | diff --git a/psycopg_c/build_backend/cython_backend.py b/psycopg_c/build_backend/cython_backend.py |
1103 | new file mode 100644 |
1104 | index 0000000..97bd2af |
1105 | --- /dev/null |
1106 | +++ b/psycopg_c/build_backend/cython_backend.py |
1107 | @@ -0,0 +1,38 @@ |
1108 | +""" |
1109 | +Build backend to build a Cython-based project only if needed. |
1110 | + |
1111 | +This backend adds a build dependency on Cython if pxd files are available; |
1112 | +otherwise it relies on the c files having been precompiled. |
1113 | +""" |
1114 | + |
1115 | +# Copyright (C) 2023 The Psycopg Team |
1116 | + |
1117 | +import os |
1118 | +from typing import Any, List |
1119 | + |
1120 | +import tomli |
1121 | +from setuptools import build_meta |
1122 | + |
1123 | + |
1124 | +def get_requires_for_build_wheel(config_settings: Any = None) -> List[str]: |
1125 | + if not os.path.exists("psycopg_c/_psycopg.pyx"): |
1126 | + # Cython files don't exist: we must be in an sdist and we can trust |
1127 | + # that the .c files we have packaged exist. |
1128 | + return [] |
1129 | + |
1130 | + # Cython files exist: we must be in a git checkout and we need Cython |
1131 | + # to build. Get the version from the pyproject itself to keep things in the |
1132 | + # same place. |
1133 | + with open("pyproject.toml", "rb") as f: |
1134 | + pyprj = tomli.load(f) |
1135 | + |
1136 | + rv: List[str] = pyprj["cython-backend"]["cython-requires"] |
1137 | + return rv |
1138 | + |
1139 | + |
1140 | +get_requires_for_build_sdist = get_requires_for_build_wheel |
1141 | + |
1142 | +# For everything else, behave like setuptools.build_meta |
1143 | +prepare_metadata_for_build_wheel = build_meta.prepare_metadata_for_build_wheel |
1144 | +build_wheel = build_meta.build_wheel |
1145 | +build_sdist = build_meta.build_sdist |
1146 | diff --git a/psycopg_c/psycopg_c/_psycopg.pyx b/psycopg_c/psycopg_c/_psycopg.pyx |
1147 | index 9d2b8ba..5bbde2b 100644 |
1148 | --- a/psycopg_c/psycopg_c/_psycopg.pyx |
1149 | +++ b/psycopg_c/psycopg_c/_psycopg.pyx |
1150 | @@ -28,6 +28,12 @@ PG_BINARY = _py_Format.BINARY |
1151 | |
1152 | cdef extern from *: |
1153 | """ |
1154 | +/* Include this early to avoid a warning about redefined ARRAYSIZE in winnt.h */ |
1155 | +#ifdef MS_WINDOWS |
1156 | +#define WIN32_LEAN_AND_MEAN |
1157 | +#include <winsock2.h> |
1158 | +#endif |
1159 | + |
1160 | #ifndef ARRAYSIZE |
1161 | #define ARRAYSIZE(a) ((sizeof(a) / sizeof(*(a)))) |
1162 | #endif |
1163 | diff --git a/psycopg_c/psycopg_c/_psycopg/waiting.pyx b/psycopg_c/psycopg_c/_psycopg/waiting.pyx |
1164 | index 0af6c57..baaead6 100644 |
1165 | --- a/psycopg_c/psycopg_c/_psycopg/waiting.pyx |
1166 | +++ b/psycopg_c/psycopg_c/_psycopg/waiting.pyx |
1167 | @@ -72,12 +72,9 @@ wait_c_impl(int fileno, int wait, float timeout) |
1168 | select_rv = poll(&input_fd, 1, timeout_ms); |
1169 | Py_END_ALLOW_THREADS |
1170 | |
1171 | + if (select_rv < 0) { goto error; } |
1172 | if (PyErr_CheckSignals()) { goto finally; } |
1173 | |
1174 | - if (select_rv < 0) { |
1175 | - goto error; |
1176 | - } |
1177 | - |
1178 | if (input_fd.events & POLLIN) { rv |= SELECT_EV_READ; } |
1179 | if (input_fd.events & POLLOUT) { rv |= SELECT_EV_WRITE; } |
1180 | |
1181 | @@ -120,12 +117,9 @@ wait_c_impl(int fileno, int wait, float timeout) |
1182 | select_rv = select(fileno + 1, &ifds, &ofds, &efds, tvptr); |
1183 | Py_END_ALLOW_THREADS |
1184 | |
1185 | + if (select_rv < 0) { goto error; } |
1186 | if (PyErr_CheckSignals()) { goto finally; } |
1187 | |
1188 | - if (select_rv < 0) { |
1189 | - goto error; |
1190 | - } |
1191 | - |
1192 | if (FD_ISSET(fileno, &ifds)) { rv |= SELECT_EV_READ; } |
1193 | if (FD_ISSET(fileno, &ofds)) { rv |= SELECT_EV_WRITE; } |
1194 | |
1195 | @@ -168,7 +162,7 @@ def wait_c(gen: PQGen[RV], int fileno, timeout = None) -> RV: |
1196 | if timeout is None: |
1197 | ctimeout = -1.0 |
1198 | else: |
1199 | - ctimeout = float(timeout) |
1200 | + ctimeout = <float>float(timeout) |
1201 | if ctimeout < 0.0: |
1202 | ctimeout = -1.0 |
1203 | |
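The C wait loop above has a close Python analog: map readiness from `select()` to read/write event flags. A sketch on a socket pair; the `SELECT_EV_*` values are assumed here, and Python's `select.select` raises on error rather than returning a negative value, which is what the reordered `select_rv < 0` check handles in C:

```python
import select
import socket

SELECT_EV_READ = 1   # assumed flag values, for illustration
SELECT_EV_WRITE = 2


def wait_ready(fd, want_read: bool, want_write: bool, timeout: float) -> int:
    # Like wait_c_impl(): report which of the requested events are ready.
    r, w, _ = select.select(
        [fd] if want_read else [], [fd] if want_write else [], [], timeout
    )
    rv = 0
    if r:
        rv |= SELECT_EV_READ
    if w:
        rv |= SELECT_EV_WRITE
    return rv


a, b = socket.socketpair()
b.sendall(b"x")  # make `a` readable; a fresh socket is also writable
ready = wait_ready(a, want_read=True, want_write=True, timeout=1.0)
a.close()
b.close()
```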
1204 | diff --git a/psycopg_c/psycopg_c/pq/pgconn.pyx b/psycopg_c/psycopg_c/pq/pgconn.pyx |
1205 | index 4a60530..5c5a911 100644 |
1206 | --- a/psycopg_c/psycopg_c/pq/pgconn.pyx |
1207 | +++ b/psycopg_c/psycopg_c/pq/pgconn.pyx |
1208 | @@ -207,7 +207,7 @@ cdef class PGconn: |
1209 | with nogil: |
1210 | pgresult = libpq.PQexec(self._pgconn_ptr, command) |
1211 | if pgresult is NULL: |
1212 | - raise MemoryError("couldn't allocate PGresult") |
1213 | + raise e.OperationalError(f"executing query failed: {error_message(self)}") |
1214 | |
1215 | return PGresult._from_ptr(pgresult) |
1216 | |
1217 | @@ -244,7 +244,7 @@ cdef class PGconn: |
1218 | <const char *const *>cvalues, clengths, cformats, result_format) |
1219 | _clear_query_params(ctypes, cvalues, clengths, cformats) |
1220 | if pgresult is NULL: |
1221 | - raise MemoryError("couldn't allocate PGresult") |
1222 | + raise e.OperationalError(f"executing query failed: {error_message(self)}") |
1223 | return PGresult._from_ptr(pgresult) |
1224 | |
1225 | def send_query_params( |
1226 | @@ -353,7 +353,7 @@ cdef class PGconn: |
1227 | self._pgconn_ptr, name, command, <int>nparams, atypes) |
1228 | PyMem_Free(atypes) |
1229 | if rv is NULL: |
1230 | - raise MemoryError("couldn't allocate PGresult") |
1231 | + raise e.OperationalError(f"preparing query failed: {error_message(self)}") |
1232 | return PGresult._from_ptr(rv) |
1233 | |
1234 | def exec_prepared( |
1235 | @@ -382,14 +382,18 @@ cdef class PGconn: |
1236 | |
1237 | _clear_query_params(ctypes, cvalues, clengths, cformats) |
1238 | if rv is NULL: |
1239 | - raise MemoryError("couldn't allocate PGresult") |
1240 | + raise e.OperationalError( |
1241 | + f"executing prepared query failed: {error_message(self)}" |
1242 | + ) |
1243 | return PGresult._from_ptr(rv) |
1244 | |
1245 | def describe_prepared(self, const char *name) -> PGresult: |
1246 | _ensure_pgconn(self) |
1247 | cdef libpq.PGresult *rv = libpq.PQdescribePrepared(self._pgconn_ptr, name) |
1248 | if rv is NULL: |
1249 | - raise MemoryError("couldn't allocate PGresult") |
1250 | + raise e.OperationalError( |
1251 | + f"describe prepared failed: {error_message(self)}" |
1252 | + ) |
1253 | return PGresult._from_ptr(rv) |
1254 | |
1255 | def send_describe_prepared(self, const char *name) -> None: |
1256 | @@ -404,7 +408,9 @@ cdef class PGconn: |
1257 | _ensure_pgconn(self) |
1258 | cdef libpq.PGresult *rv = libpq.PQdescribePortal(self._pgconn_ptr, name) |
1259 | if rv is NULL: |
1260 | - raise MemoryError("couldn't allocate PGresult") |
1261 | + raise e.OperationalError( |
1262 | + f"describe portal failed: {error_message(self)}" |
1263 | + ) |
1264 | return PGresult._from_ptr(rv) |
1265 | |
1266 | def send_describe_portal(self, const char *name) -> None: |
1267 | diff --git a/psycopg_c/psycopg_c/types/array.pyx b/psycopg_c/psycopg_c/types/array.pyx |
1268 | index 9abaef9..280d33c 100644 |
1269 | --- a/psycopg_c/psycopg_c/types/array.pyx |
1270 | +++ b/psycopg_c/psycopg_c/types/array.pyx |
1271 | @@ -164,7 +164,7 @@ cdef object _parse_token( |
1272 | if has_quotes: |
1273 | end -= 1 |
1274 | |
1275 | - cdef int length = (end - start) |
1276 | + cdef Py_ssize_t length = (end - start) |
1277 | if length == 4 and not has_quotes \ |
1278 | and start[0] == b'N' and start[1] == b'U' \ |
1279 | and start[2] == b'L' and start[3] == b'L': |
1280 | diff --git a/psycopg_c/psycopg_c/version.py b/psycopg_c/psycopg_c/version.py |
1281 | index 5c989c2..cb0609c 100644 |
1282 | --- a/psycopg_c/psycopg_c/version.py |
1283 | +++ b/psycopg_c/psycopg_c/version.py |
1284 | @@ -6,6 +6,6 @@ psycopg-c distribution version file. |
1285 | |
1286 | # Use a versioning scheme as defined in |
1287 | # https://www.python.org/dev/peps/pep-0440/ |
1288 | -__version__ = "3.1.7" |
1289 | +__version__ = "3.1.8" |
1290 | |
1291 | # also change psycopg/psycopg/version.py accordingly. |
1292 | diff --git a/psycopg_c/pyproject.toml b/psycopg_c/pyproject.toml |
1293 | index f0d7a3f..48958ce 100644 |
1294 | --- a/psycopg_c/pyproject.toml |
1295 | +++ b/psycopg_c/pyproject.toml |
1296 | @@ -1,3 +1,13 @@ |
1297 | [build-system] |
1298 | -requires = ["setuptools>=49.2.0", "wheel>=0.37", "Cython>=3.0.0a11"] |
1299 | -build-backend = "setuptools.build_meta" |
1300 | +requires = ["setuptools >= 49.2.0", "wheel >= 0.37", "tomli >= 2.0.1"] |
1301 | + |
1302 | +# The cython_backend is a build backend that adds a Cython dependency if the |
1303 | +# c sources must be built from pxd files (when building from a git checkout), |
1304 | +# and doesn't require Cython when building from c files (when building from |
1305 | +# the sdist bundle). |
1306 | +build-backend = "cython_backend" |
1307 | +backend-path = ["build_backend"] |
1308 | + |
1309 | +[cython-backend] |
1310 | +# These packages are only installed if there are pyx files to compile. |
1311 | +cython-requires = ["Cython >= 3.0.0a11"] |
1312 | diff --git a/psycopg_c/setup.cfg b/psycopg_c/setup.cfg |
1313 | index 6c5c93c..bf12b96 100644 |
1314 | --- a/psycopg_c/setup.cfg |
1315 | +++ b/psycopg_c/setup.cfg |
1316 | @@ -8,6 +8,8 @@ license = GNU Lesser General Public License v3 (LGPLv3) |
1317 | |
1318 | project_urls = |
1319 | Homepage = https://psycopg.org/ |
1320 | + Documentation = https://psycopg.org/psycopg3/docs/ |
1321 | + Changes = https://psycopg.org/psycopg3/docs/news.html |
1322 | Code = https://github.com/psycopg/psycopg |
1323 | Issue Tracker = https://github.com/psycopg/psycopg/issues |
1324 | Download = https://pypi.org/project/psycopg-c/ |
1325 | @@ -36,7 +38,6 @@ license_files = LICENSE.txt |
1326 | |
1327 | [options] |
1328 | python_requires = >= 3.7 |
1329 | -setup_requires = Cython >= 3.0.0a11 |
1330 | packages = find: |
1331 | zip_safe = False |
1332 | |
1333 | diff --git a/psycopg_pool/psycopg_pool/version.py b/psycopg_pool/psycopg_pool/version.py |
1334 | index fc99bbd..3645d78 100644 |
1335 | --- a/psycopg_pool/psycopg_pool/version.py |
1336 | +++ b/psycopg_pool/psycopg_pool/version.py |
1337 | @@ -8,6 +8,6 @@ psycopg pool version file. |
1338 | # https://www.python.org/dev/peps/pep-0440/ |
1339 | |
1340 | # STOP AND READ! if you change: |
1341 | -__version__ = "3.1.5" |
1342 | +__version__ = "3.1.6.dev1" |
1343 | # also change: |
1344 | # - `docs/news_pool.rst` to declare this version current or unreleased |
1345 | diff --git a/psycopg_pool/pyproject.toml b/psycopg_pool/pyproject.toml |
1346 | new file mode 100644 |
1347 | index 0000000..21e410c |
1348 | --- /dev/null |
1349 | +++ b/psycopg_pool/pyproject.toml |
1350 | @@ -0,0 +1,3 @@ |
1351 | +[build-system] |
1352 | +requires = ["setuptools>=49.2.0", "wheel>=0.37"] |
1353 | +build-backend = "setuptools.build_meta" |
1354 | diff --git a/psycopg_pool/setup.cfg b/psycopg_pool/setup.cfg |
1355 | index 1a3274e..876e09d 100644 |
1356 | --- a/psycopg_pool/setup.cfg |
1357 | +++ b/psycopg_pool/setup.cfg |
1358 | @@ -8,6 +8,8 @@ license = GNU Lesser General Public License v3 (LGPLv3) |
1359 | |
1360 | project_urls = |
1361 | Homepage = https://psycopg.org/ |
1362 | + Documentation = https://www.psycopg.org/psycopg3/docs/advanced/pool.html |
1363 | + Changes = https://psycopg.org/psycopg3/docs/news_pool.html |
1364 | Code = https://github.com/psycopg/psycopg |
1365 | Issue Tracker = https://github.com/psycopg/psycopg/issues |
1366 | Download = https://pypi.org/project/psycopg-pool/ |
1367 | diff --git a/tests/constraints.txt b/tests/constraints.txt |
1368 | index ef03ba1..5c86d5a 100644 |
1369 | --- a/tests/constraints.txt |
1370 | +++ b/tests/constraints.txt |
1371 | @@ -6,27 +6,34 @@ |
1372 | # From install_requires |
1373 | backports.zoneinfo == 0.2.0 |
1374 | typing-extensions == 4.1.0 |
1375 | +importlib-metadata == 1.4 |
1376 | |
1377 | # From the 'test' extra |
1378 | -mypy == 0.981 |
1379 | +mypy == 0.990 |
1380 | pproxy == 2.7.0 |
1381 | pytest == 6.2.5 |
1382 | pytest-asyncio == 0.17.0 |
1383 | pytest-cov == 3.0.0 |
1384 | -pytest-randomly == 3.10.0 |
1385 | +pytest-randomly == 3.5.0 |
1386 | |
1387 | # From the 'dev' extra |
1388 | black == 22.3.0 |
1389 | dnspython == 2.1.0 |
1390 | flake8 == 4.0.0 |
1391 | -mypy == 0.981 |
1392 | types-setuptools == 57.4.0 |
1393 | wheel == 0.37 |
1394 | |
1395 | # From the 'docs' extra |
1396 | -Sphinx == 4.2.0 |
1397 | -furo == 2021.11.23 |
1398 | +Sphinx == 5.0 |
1399 | +furo == 2022.6.21 |
1400 | sphinx-autobuild == 2021.3.14 |
1401 | sphinx-autodoc-typehints == 1.12.0 |
1402 | -dnspython == 2.1.0 |
1403 | + |
1404 | +# Build tools |
1405 | +setuptools == 49.2.0 |
1406 | +wheel == 0.37 |
1407 | +Cython == 3.0.0a11 |
1408 | +tomli == 2.0.1 |
1409 | + |
1410 | +# Undeclared extras to "unblock" extra features |
1411 | shapely == 1.7.0 |
1412 | diff --git a/tests/crdb/test_cursor.py b/tests/crdb/test_cursor.py |
1413 | index 991b084..d3c10e5 100644 |
1414 | --- a/tests/crdb/test_cursor.py |
1415 | +++ b/tests/crdb/test_cursor.py |
1416 | @@ -61,5 +61,14 @@ def test_changefeed(conn_cls, dsn, conn, testfeed, fmt_out): |
1417 | cur.execute("cancel query %s", [qid]) |
1418 | assert cur.statusmessage == "CANCEL QUERIES 1" |
1419 | |
1420 | - assert q.get(timeout=1) is None |
1421 | + # We often find the record with {"after": null} at least one more |
1422 | + # time in the queue. Let's tolerate an extra one. |
1423 | + for i in range(2): |
1424 | + row = q.get(timeout=1) |
1425 | + if row is None: |
1426 | + break |
1427 | + assert json.loads(row.value)["after"] is None, row.value |
1428 | + else: |
1429 | + pytest.fail("keep on receiving messages") |
1430 | + |
1431 | t.join() |
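The for/else tolerance pattern used in the changefeed test is easy to isolate. A sketch with `queue.Queue`, where at most one extra `{"after": None}` record is accepted before the terminating `None`:

```python
import queue


def drain_expecting_none(q: "queue.Queue", tolerance: int = 2) -> bool:
    # Accept up to tolerance - 1 extra records before the sentinel; each
    # extra must itself carry {"after": None}.  The for/else branch runs
    # only if the loop never breaks, i.e. we keep on receiving messages.
    for _ in range(tolerance):
        row = q.get(timeout=1)
        if row is None:
            break
        if row.get("after") is not None:
            return False
    else:
        return False
    return True


q = queue.Queue()
q.put({"after": None})  # the extra record often seen in the queue
q.put(None)
drained = drain_expecting_none(q)
```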
1432 | diff --git a/tests/crdb/test_cursor_async.py b/tests/crdb/test_cursor_async.py |
1433 | index 229295d..fcc7760 100644 |
1434 | --- a/tests/crdb/test_cursor_async.py |
1435 | +++ b/tests/crdb/test_cursor_async.py |
1436 | @@ -57,5 +57,14 @@ async def test_changefeed(aconn_cls, dsn, aconn, testfeed, fmt_out): |
1437 | await cur.execute("cancel query %s", [qid]) |
1438 | assert cur.statusmessage == "CANCEL QUERIES 1" |
1439 | |
1440 | - assert await asyncio.wait_for(q.get(), 1.0) is None |
1441 | + # We often find the record with {"after": null} at least one more |
1442 | + # time in the queue. Let's tolerate an extra one. |
1443 | + for i in range(2): |
1444 | + row = await asyncio.wait_for(q.get(), 1.0) |
1445 | + if row is None: |
1446 | + break |
1447 | + assert json.loads(row.value)["after"] is None, row.value |
1448 | + else: |
1449 | + pytest.fail("keep on receiving messages") |
1450 | + |
1451 | await asyncio.gather(t) |
1452 | diff --git a/tests/fix_crdb.py b/tests/fix_crdb.py |
1453 | index 88ab504..32ec457 100644 |
1454 | --- a/tests/fix_crdb.py |
1455 | +++ b/tests/fix_crdb.py |
1456 | @@ -106,6 +106,7 @@ _crdb_reasons = { |
1457 | "encoding": 35882, |
1458 | "geometric types": 21286, |
1459 | "hstore": 41284, |
1460 | + "inet": 94192, |
1461 | "infinity date": 41564, |
1462 | "interval style": 35807, |
1463 | "json array": 23468, |
1464 | @@ -121,11 +122,14 @@ _crdb_reasons = { |
1465 | "server-side cursor": 41412, |
1466 | "severity_nonlocalized": 81794, |
1467 | "stored procedure": 1751, |
1468 | + "to_regtype": None, |
1469 | } |
1470 | |
1471 | _crdb_reason_version = { |
1472 | "backend pid": "skip < 22", |
1473 | + "inet": "skip == 22.2.1", |
1474 | "cancel": "skip < 22", |
1475 | "server-side cursor": "skip < 22.1.3", |
1476 | "severity_nonlocalized": "skip < 22.1.3", |
1477 | + "to_regtype": "skip < 22.2", |
1478 | } |
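The `skip < 22.2` / `skip == 22.2.1` specs above are matched against the CockroachDB server version. A minimal, hypothetical parser (not the actual fixture code) comparing dotted versions as padded tuples:

```python
def parse_version(s: str) -> tuple:
    # "22.2.1" -> (22, 2, 1)
    return tuple(int(p) for p in s.split("."))


def should_skip(spec: str, server_version: str) -> bool:
    # spec looks like "skip < 22.2" or "skip == 22.2.1"
    _, op, ver = spec.split()
    sv, tv = parse_version(server_version), parse_version(ver)
    # Pad the shorter tuple so "22" compares like "22.0".
    n = max(len(sv), len(tv))
    sv += (0,) * (n - len(sv))
    tv += (0,) * (n - len(tv))
    return {"<": sv < tv, "==": sv == tv, ">=": sv >= tv}[op]
```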
1479 | diff --git a/tests/scripts/bench-411.py b/tests/scripts/bench-411.py |
1480 | index 82ea451..a4d1af3 100644 |
1481 | --- a/tests/scripts/bench-411.py |
1482 | +++ b/tests/scripts/bench-411.py |
1483 | @@ -8,6 +8,7 @@ from enum import Enum |
1484 | from typing import Any, Dict, List, Generator |
1485 | from argparse import ArgumentParser, Namespace |
1486 | from contextlib import contextmanager |
1487 | +from concurrent.futures import ThreadPoolExecutor |
1488 | |
1489 | logger = logging.getLogger() |
1490 | logging.basicConfig( |
1491 | @@ -18,6 +19,7 @@ logging.basicConfig( |
1492 | |
1493 | class Driver(str, Enum): |
1494 | psycopg2 = "psycopg2" |
1495 | + psycopg2_green = "psycopg2_green" |
1496 | psycopg = "psycopg" |
1497 | psycopg_async = "psycopg_async" |
1498 | asyncpg = "asyncpg" |
1499 | @@ -58,6 +60,12 @@ def main() -> None: |
1500 | |
1501 | run_psycopg2(psycopg2, args) |
1502 | |
1503 | + elif name == Driver.psycopg2_green: |
1504 | + import psycopg2 |
1505 | + import psycopg2.extras # type: ignore |
1506 | + |
1507 | + run_psycopg2_green(psycopg2, args) |
1508 | + |
1509 | elif name == Driver.psycopg: |
1510 | import psycopg |
1511 | |
1512 | @@ -134,15 +142,61 @@ def run_psycopg2(psycopg2: Any, args: Namespace) -> None: |
1513 | cursor.executemany(insert, data) |
1514 | conn.commit() |
1515 | |
1516 | - logger.info(f"running {args.ntests} queries") |
1517 | - to_query = random.choices(ids, k=args.ntests) |
1518 | - with psycopg2.connect(args.dsn) as conn: |
1519 | - with time_log("psycopg2"): |
1520 | - for id_ in to_query: |
1521 | - with conn.cursor() as cursor: |
1522 | - cursor.execute(select, {"id": id_}) |
1523 | - cursor.fetchall() |
1524 | - # conn.rollback() |
1525 | + def run(i): |
1526 | + logger.info(f"thread {i} running {args.ntests} queries") |
1527 | + to_query = random.choices(ids, k=args.ntests) |
1528 | + with psycopg2.connect(args.dsn) as conn: |
1529 | + with time_log("psycopg2"): |
1530 | + for id_ in to_query: |
1531 | + with conn.cursor() as cursor: |
1532 | + cursor.execute(select, {"id": id_}) |
1533 | + cursor.fetchall() |
1534 | + # conn.rollback() |
1535 | + |
1536 | + if args.concurrency == 1: |
1537 | + run(0) |
1538 | + else: |
1539 | + with ThreadPoolExecutor(max_workers=args.concurrency) as executor: |
1540 | + list(executor.map(run, range(args.concurrency))) |
1541 | + |
1542 | + if args.drop: |
1543 | + logger.info("dropping test records") |
1544 | + with psycopg2.connect(args.dsn) as conn: |
1545 | + with conn.cursor() as cursor: |
1546 | + cursor.execute(drop) |
1547 | + conn.commit() |
1548 | + |
1549 | + |
1550 | +def run_psycopg2_green(psycopg2: Any, args: Namespace) -> None: |
1551 | + logger.info("Running psycopg2_green") |
1552 | + |
1553 | + psycopg2.extensions.set_wait_callback(psycopg2.extras.wait_select) |
1554 | + |
1555 | + if args.create: |
1556 | + logger.info(f"inserting {args.ntests} test records") |
1557 | + with psycopg2.connect(args.dsn) as conn: |
1558 | + with conn.cursor() as cursor: |
1559 | + cursor.execute(drop) |
1560 | + cursor.execute(table) |
1561 | + cursor.executemany(insert, data) |
1562 | + conn.commit() |
1563 | + |
1564 | + def run(i): |
1565 | + logger.info(f"thread {i} running {args.ntests} queries") |
1566 | + to_query = random.choices(ids, k=args.ntests) |
1567 | + with psycopg2.connect(args.dsn) as conn: |
1568 | + with time_log("psycopg2"): |
1569 | + for id_ in to_query: |
1570 | + with conn.cursor() as cursor: |
1571 | + cursor.execute(select, {"id": id_}) |
1572 | + cursor.fetchall() |
1573 | + # conn.rollback() |
1574 | + |
1575 | + if args.concurrency == 1: |
1576 | + run(0) |
1577 | + else: |
1578 | + with ThreadPoolExecutor(max_workers=args.concurrency) as executor: |
1579 | + list(executor.map(run, range(args.concurrency))) |
1580 | |
1581 | if args.drop: |
1582 | logger.info("dropping test records") |
1583 | @@ -151,6 +205,8 @@ def run_psycopg2(psycopg2: Any, args: Namespace) -> None: |
1584 | cursor.execute(drop) |
1585 | conn.commit() |
1586 | |
1587 | + psycopg2.extensions.set_wait_callback(None) |
1588 | + |
1589 | |
1590 | def run_psycopg(psycopg: Any, args: Namespace) -> None: |
1591 | logger.info("Running psycopg sync") |
1592 | @@ -164,15 +220,22 @@ def run_psycopg(psycopg: Any, args: Namespace) -> None: |
1593 | cursor.executemany(insert, data) |
1594 | conn.commit() |
1595 | |
1596 | - logger.info(f"running {args.ntests} queries") |
1597 | - to_query = random.choices(ids, k=args.ntests) |
1598 | - with psycopg.connect(args.dsn) as conn: |
1599 | - with time_log("psycopg"): |
1600 | - for id_ in to_query: |
1601 | - with conn.cursor() as cursor: |
1602 | - cursor.execute(select, {"id": id_}) |
1603 | - cursor.fetchall() |
1604 | - # conn.rollback() |
1605 | + def run(i): |
1606 | + logger.info(f"thread {i} running {args.ntests} queries") |
1607 | + to_query = random.choices(ids, k=args.ntests) |
1608 | + with psycopg.connect(args.dsn) as conn: |
1609 | + with time_log("psycopg"): |
1610 | + for id_ in to_query: |
1611 | + with conn.cursor() as cursor: |
1612 | + cursor.execute(select, {"id": id_}) |
1613 | + cursor.fetchall() |
1614 | + # conn.rollback() |
1615 | + |
1616 | + if args.concurrency == 1: |
1617 | + run(0) |
1618 | + else: |
1619 | + with ThreadPoolExecutor(max_workers=args.concurrency) as executor: |
1620 | + list(executor.map(run, range(args.concurrency))) |
1621 | |
1622 | if args.drop: |
1623 | logger.info("dropping test records") |
1624 | @@ -196,15 +259,22 @@ async def run_psycopg_async(psycopg: Any, args: Namespace) -> None: |
1625 | await cursor.executemany(insert, data) |
1626 | await conn.commit() |
1627 | |
1628 | - logger.info(f"running {args.ntests} queries") |
1629 | - to_query = random.choices(ids, k=args.ntests) |
1630 | - async with await psycopg.AsyncConnection.connect(args.dsn) as conn: |
1631 | - with time_log("psycopg_async"): |
1632 | - for id_ in to_query: |
1633 | - cursor = await conn.execute(select, {"id": id_}) |
1634 | - await cursor.fetchall() |
1635 | - await cursor.close() |
1636 | - # await conn.rollback() |
1637 | + async def run(i): |
1638 | + logger.info(f"task {i} running {args.ntests} queries") |
1639 | + to_query = random.choices(ids, k=args.ntests) |
1640 | + async with await psycopg.AsyncConnection.connect(args.dsn) as conn: |
1641 | + with time_log("psycopg_async"): |
1642 | + for id_ in to_query: |
1643 | + cursor = await conn.execute(select, {"id": id_}) |
1644 | + await cursor.fetchall() |
1645 | + await cursor.close() |
1646 | + # await conn.rollback() |
1647 | + |
1648 | + if args.concurrency == 1: |
1649 | + await run(0) |
1650 | + else: |
1651 | + tasks = [run(i) for i in range(args.concurrency)] |
1652 | + await asyncio.gather(*tasks) |
1653 | |
1654 | if args.drop: |
1655 | logger.info("dropping test records") |
1656 | @@ -232,16 +302,23 @@ async def run_asyncpg(asyncpg: Any, args: Namespace) -> None: |
1657 | await conn.executemany(a_insert, [tuple(d.values()) for d in data]) |
1658 | await conn.close() |
1659 | |
1660 | - logger.info(f"running {args.ntests} queries") |
1661 | - to_query = random.choices(ids, k=args.ntests) |
1662 | - conn = await asyncpg.connect(args.dsn) |
1663 | - with time_log("asyncpg"): |
1664 | - for id_ in to_query: |
1665 | - tr = conn.transaction() |
1666 | - await tr.start() |
1667 | - await conn.fetch(a_select, id_) |
1668 | - # await tr.rollback() |
1669 | - await conn.close() |
1670 | + async def run(i): |
1671 | + logger.info(f"task {i} running {args.ntests} queries") |
1672 | + to_query = random.choices(ids, k=args.ntests) |
1673 | + conn = await asyncpg.connect(args.dsn) |
1674 | + with time_log("asyncpg"): |
1675 | + for id_ in to_query: |
1676 | + # tr = conn.transaction() |
1677 | + # await tr.start() |
1678 | + await conn.fetch(a_select, id_) |
1679 | + # await tr.rollback() |
1680 | + await conn.close() |
1681 | + |
1682 | + if args.concurrency == 1: |
1683 | + await run(0) |
1684 | + else: |
1685 | + tasks = [run(i) for i in range(args.concurrency)] |
1686 | + await asyncio.gather(*tasks) |
1687 | |
1688 | if args.drop: |
1689 | logger.info("dropping test records") |
1690 | @@ -263,12 +340,21 @@ def parse_cmdline() -> Namespace: |
1691 | |
1692 | parser.add_argument( |
1693 | "--ntests", |
1694 | + "-n", |
1695 | type=int, |
1696 | default=10_000, |
1697 | help="number of tests to perform [default: %(default)s]", |
1698 | ) |
1699 | |
1700 | parser.add_argument( |
1701 | + "--concurrency", |
1702 | + "-c", |
1703 | + type=int, |
1704 | + default=1, |
1705 | + help="number of parallel tasks [default: %(default)s]", |
1706 | + ) |
1707 | + |
1708 | + parser.add_argument( |
1709 | "--dsn", |
1710 | default=os.environ.get("PSYCOPG_TEST_DSN", ""), |
1711 | help="database connection string" |
1712 | diff --git a/tests/test_client_cursor.py b/tests/test_client_cursor.py |
1713 | index b355604..5ac793b 100644 |
1714 | --- a/tests/test_client_cursor.py |
1715 | +++ b/tests/test_client_cursor.py |
1716 | @@ -9,6 +9,7 @@ import psycopg |
1717 | from psycopg import sql, rows |
1718 | from psycopg.adapt import PyFormat |
1719 | from psycopg.postgres import types as builtins |
1720 | +from psycopg.types import TypeInfo |
1721 | |
1722 | from .utils import gc_collect, gc_count |
1723 | from .test_cursor import my_row_factory |
1724 | @@ -326,9 +327,10 @@ def test_executemany_returning(conn, execmany): |
1725 | [(10, "hello"), (20, "world")], |
1726 | returning=True, |
1727 | ) |
1728 | - assert cur.rowcount == 2 |
1729 | + assert cur.rowcount == 1 |
1730 | assert cur.fetchone() == (10,) |
1731 | assert cur.nextset() |
1732 | + assert cur.rowcount == 1 |
1733 | assert cur.fetchone() == (20,) |
1734 | assert cur.nextset() is None |
1735 | |
1736 | @@ -352,12 +354,13 @@ def test_executemany_no_result(conn, execmany): |
1737 | [(10, "hello"), (20, "world")], |
1738 | returning=True, |
1739 | ) |
1740 | - assert cur.rowcount == 2 |
1741 | + assert cur.rowcount == 1 |
1742 | assert cur.statusmessage.startswith("INSERT") |
1743 | with pytest.raises(psycopg.ProgrammingError): |
1744 | cur.fetchone() |
1745 | pgresult = cur.pgresult |
1746 | assert cur.nextset() |
1747 | + assert cur.rowcount == 1 |
1748 | assert cur.statusmessage.startswith("INSERT") |
1749 | assert pgresult is not cur.pgresult |
1750 | assert cur.nextset() is None |
1751 | @@ -853,3 +856,8 @@ def test_message_0x33(conn): |
1752 | assert cur.fetchone() == ("test",) |
1753 | |
1754 | assert not notices |
1755 | + |
1756 | + |
1757 | +def test_typeinfo(conn): |
1758 | + info = TypeInfo.fetch(conn, "jsonb") |
1759 | + assert info is not None |
1760 | diff --git a/tests/test_client_cursor_async.py b/tests/test_client_cursor_async.py |
1761 | index 0cf8ec6..50f08f7 100644 |
1762 | --- a/tests/test_client_cursor_async.py |
1763 | +++ b/tests/test_client_cursor_async.py |
1764 | @@ -6,6 +6,7 @@ from typing import List |
1765 | import psycopg |
1766 | from psycopg import sql, rows |
1767 | from psycopg.adapt import PyFormat |
1768 | +from psycopg.types import TypeInfo |
1769 | |
1770 | from .utils import alist, gc_collect, gc_count |
1771 | from .test_cursor import my_row_factory |
1772 | @@ -316,9 +317,10 @@ async def test_executemany_returning(aconn, execmany): |
1773 | [(10, "hello"), (20, "world")], |
1774 | returning=True, |
1775 | ) |
1776 | - assert cur.rowcount == 2 |
1777 | + assert cur.rowcount == 1 |
1778 | assert (await cur.fetchone()) == (10,) |
1779 | assert cur.nextset() |
1780 | + assert cur.rowcount == 1 |
1781 | assert (await cur.fetchone()) == (20,) |
1782 | assert cur.nextset() is None |
1783 | |
1784 | @@ -342,12 +344,13 @@ async def test_executemany_no_result(aconn, execmany): |
1785 | [(10, "hello"), (20, "world")], |
1786 | returning=True, |
1787 | ) |
1788 | - assert cur.rowcount == 2 |
1789 | + assert cur.rowcount == 1 |
1790 | assert cur.statusmessage.startswith("INSERT") |
1791 | with pytest.raises(psycopg.ProgrammingError): |
1792 | await cur.fetchone() |
1793 | pgresult = cur.pgresult |
1794 | assert cur.nextset() |
1795 | + assert cur.rowcount == 1 |
1796 | assert cur.statusmessage.startswith("INSERT") |
1797 | assert pgresult is not cur.pgresult |
1798 | assert cur.nextset() is None |
1799 | @@ -725,3 +728,8 @@ async def test_message_0x33(aconn): |
1800 | assert (await cur.fetchone()) == ("test",) |
1801 | |
1802 | assert not notices |
1803 | + |
1804 | + |
1805 | +async def test_typeinfo(aconn): |
1806 | + info = await TypeInfo.fetch(aconn, "jsonb") |
1807 | + assert info is not None |
1808 | diff --git a/tests/test_cursor.py b/tests/test_cursor.py |
1809 | index a667f4f..a39ed67 100644 |
1810 | --- a/tests/test_cursor.py |
1811 | +++ b/tests/test_cursor.py |
1812 | @@ -308,9 +308,10 @@ def test_executemany_returning(conn, execmany): |
1813 | [(10, "hello"), (20, "world")], |
1814 | returning=True, |
1815 | ) |
1816 | - assert cur.rowcount == 2 |
1817 | + assert cur.rowcount == 1 |
1818 | assert cur.fetchone() == (10,) |
1819 | assert cur.nextset() |
1820 | + assert cur.rowcount == 1 |
1821 | assert cur.fetchone() == (20,) |
1822 | assert cur.nextset() is None |
1823 | |
1824 | @@ -334,12 +335,13 @@ def test_executemany_no_result(conn, execmany): |
1825 | [(10, "hello"), (20, "world")], |
1826 | returning=True, |
1827 | ) |
1828 | - assert cur.rowcount == 2 |
1829 | + assert cur.rowcount == 1 |
1830 | assert cur.statusmessage.startswith("INSERT") |
1831 | with pytest.raises(psycopg.ProgrammingError): |
1832 | cur.fetchone() |
1833 | pgresult = cur.pgresult |
1834 | assert cur.nextset() |
1835 | + assert cur.rowcount == 1 |
1836 | assert cur.statusmessage.startswith("INSERT") |
1837 | assert pgresult is not cur.pgresult |
1838 | assert cur.nextset() is None |
1839 | diff --git a/tests/test_cursor_async.py b/tests/test_cursor_async.py |
1840 | index ac3fdeb..fdc3d89 100644 |
1841 | --- a/tests/test_cursor_async.py |
1842 | +++ b/tests/test_cursor_async.py |
1843 | @@ -295,9 +295,10 @@ async def test_executemany_returning(aconn, execmany): |
1844 | [(10, "hello"), (20, "world")], |
1845 | returning=True, |
1846 | ) |
1847 | - assert cur.rowcount == 2 |
1848 | + assert cur.rowcount == 1 |
1849 | assert (await cur.fetchone()) == (10,) |
1850 | assert cur.nextset() |
1851 | + assert cur.rowcount == 1 |
1852 | assert (await cur.fetchone()) == (20,) |
1853 | assert cur.nextset() is None |
1854 | |
1855 | @@ -321,12 +322,13 @@ async def test_executemany_no_result(aconn, execmany): |
1856 | [(10, "hello"), (20, "world")], |
1857 | returning=True, |
1858 | ) |
1859 | - assert cur.rowcount == 2 |
1860 | + assert cur.rowcount == 1 |
1861 | assert cur.statusmessage.startswith("INSERT") |
1862 | with pytest.raises(psycopg.ProgrammingError): |
1863 | await cur.fetchone() |
1864 | pgresult = cur.pgresult |
1865 | assert cur.nextset() |
1866 | + assert cur.rowcount == 1 |
1867 | assert cur.statusmessage.startswith("INSERT") |
1868 | assert pgresult is not cur.pgresult |
1869 | assert cur.nextset() is None |
1870 | diff --git a/tests/test_pipeline.py b/tests/test_pipeline.py |
1871 | index 56fe598..3ef4014 100644 |
1872 | --- a/tests/test_pipeline.py |
1873 | +++ b/tests/test_pipeline.py |
1874 | @@ -326,9 +326,10 @@ def test_executemany(conn): |
1875 | [(10,), (20,)], |
1876 | returning=True, |
1877 | ) |
1878 | - assert cur.rowcount == 2 |
1879 | + assert cur.rowcount == 1 |
1880 | assert cur.fetchone() == (10,) |
1881 | assert cur.nextset() |
1882 | + assert cur.rowcount == 1 |
1883 | assert cur.fetchone() == (20,) |
1884 | assert cur.nextset() is None |
1885 | |
1886 | diff --git a/tests/test_pipeline_async.py b/tests/test_pipeline_async.py |
1887 | index 2e743cf..1dc6110 100644 |
1888 | --- a/tests/test_pipeline_async.py |
1889 | +++ b/tests/test_pipeline_async.py |
1890 | @@ -327,9 +327,10 @@ async def test_executemany(aconn): |
1891 | [(10,), (20,)], |
1892 | returning=True, |
1893 | ) |
1894 | - assert cur.rowcount == 2 |
1895 | + assert cur.rowcount == 1 |
1896 | assert (await cur.fetchone()) == (10,) |
1897 | assert cur.nextset() |
1898 | + assert cur.rowcount == 1 |
1899 | assert (await cur.fetchone()) == (20,) |
1900 | assert cur.nextset() is None |
1901 | |
1902 | diff --git a/tests/test_typeinfo.py b/tests/test_typeinfo.py |
1903 | index d0e57e6..f715539 100644 |
1904 | --- a/tests/test_typeinfo.py |
1905 | +++ b/tests/test_typeinfo.py |
1906 | @@ -4,6 +4,10 @@ import psycopg |
1907 | from psycopg import sql |
1908 | from psycopg.pq import TransactionStatus |
1909 | from psycopg.types import TypeInfo |
1910 | +from psycopg.types.composite import CompositeInfo |
1911 | +from psycopg.types.enum import EnumInfo |
1912 | +from psycopg.types.multirange import MultirangeInfo |
1913 | +from psycopg.types.range import RangeInfo |
1914 | |
1915 | |
1916 | @pytest.mark.parametrize("name", ["text", sql.Identifier("text")]) |
1917 | @@ -44,29 +48,66 @@ async def test_fetch_async(aconn, name, status): |
1918 | assert info.array_oid == psycopg.adapters.types["text"].array_oid |
1919 | |
1920 | |
1921 | -@pytest.mark.parametrize("name", ["nosuch", sql.Identifier("nosuch")]) |
1922 | -@pytest.mark.parametrize("status", ["IDLE", "INTRANS"]) |
1923 | -def test_fetch_not_found(conn, name, status): |
1924 | +_name = pytest.mark.parametrize("name", ["nosuch", sql.Identifier("nosuch")]) |
1925 | +_status = pytest.mark.parametrize("status", ["IDLE", "INTRANS"]) |
1926 | +_info_cls = pytest.mark.parametrize( |
1927 | + "info_cls", |
1928 | + [ |
1929 | + pytest.param(TypeInfo), |
1930 | + pytest.param(RangeInfo, marks=pytest.mark.crdb_skip("range")), |
1931 | + pytest.param( |
1932 | + MultirangeInfo, |
1933 | + marks=(pytest.mark.crdb_skip("range"), pytest.mark.pg(">= 14")), |
1934 | + ), |
1935 | + pytest.param(CompositeInfo, marks=pytest.mark.crdb_skip("composite")), |
1936 | + pytest.param(EnumInfo), |
1937 | + ], |
1938 | +) |
1939 | + |
1940 | + |
1941 | +@_name |
1942 | +@_status |
1943 | +@_info_cls |
1944 | +def test_fetch_not_found(conn, name, status, info_cls, monkeypatch): |
1945 | + |
1946 | + if TypeInfo._has_to_regtype_function(conn): |
1947 | + exit_orig = psycopg.Transaction.__exit__ |
1948 | + |
1949 | + def exit(self, exc_type, exc_val, exc_tb): |
1950 | + assert exc_val is None |
1951 | + return exit_orig(self, exc_type, exc_val, exc_tb) |
1952 | + |
1953 | + monkeypatch.setattr(psycopg.Transaction, "__exit__", exit) |
1954 | status = getattr(TransactionStatus, status) |
1955 | if status == TransactionStatus.INTRANS: |
1956 | conn.execute("select 1") |
1957 | |
1958 | assert conn.info.transaction_status == status |
1959 | - info = TypeInfo.fetch(conn, name) |
1960 | + info = info_cls.fetch(conn, name) |
1961 | assert conn.info.transaction_status == status |
1962 | assert info is None |
1963 | |
1964 | |
1965 | @pytest.mark.asyncio |
1966 | -@pytest.mark.parametrize("name", ["nosuch", sql.Identifier("nosuch")]) |
1967 | -@pytest.mark.parametrize("status", ["IDLE", "INTRANS"]) |
1968 | -async def test_fetch_not_found_async(aconn, name, status): |
1969 | +@_name |
1970 | +@_status |
1971 | +@_info_cls |
1972 | +async def test_fetch_not_found_async(aconn, name, status, info_cls, monkeypatch): |
1973 | + |
1974 | + if TypeInfo._has_to_regtype_function(aconn): |
1975 | + exit_orig = psycopg.AsyncTransaction.__aexit__ |
1976 | + |
1977 | + async def aexit(self, exc_type, exc_val, exc_tb): |
1978 | + assert exc_val is None |
1979 | + return await exit_orig(self, exc_type, exc_val, exc_tb) |
1980 | + |
1981 | + monkeypatch.setattr(psycopg.AsyncTransaction, "__aexit__", aexit) |
1982 | status = getattr(TransactionStatus, status) |
1983 | if status == TransactionStatus.INTRANS: |
1984 | await aconn.execute("select 1") |
1985 | |
1986 | assert aconn.info.transaction_status == status |
1987 | - info = await TypeInfo.fetch(aconn, name) |
1988 | + info = await info_cls.fetch(aconn, name) |
1989 | assert aconn.info.transaction_status == status |
1990 | |
1991 | assert info is None |
1992 | diff --git a/tests/types/test_net.py b/tests/types/test_net.py |
1993 | index 8739398..d5fe346 100644 |
1994 | --- a/tests/types/test_net.py |
1995 | +++ b/tests/types/test_net.py |
1996 | @@ -6,9 +6,11 @@ from psycopg import pq |
1997 | from psycopg import sql |
1998 | from psycopg.adapt import PyFormat |
1999 | |
2000 | +crdb_skip_inet = pytest.mark.crdb_skip("inet") |
2001 | crdb_skip_cidr = pytest.mark.crdb_skip("cidr") |
2002 | |
2003 | |
2004 | +@crdb_skip_inet |
2005 | @pytest.mark.parametrize("fmt_in", PyFormat) |
2006 | @pytest.mark.parametrize("val", ["192.168.0.1", "2001:db8::"]) |
2007 | def test_address_dump(conn, fmt_in, val): |
2008 | @@ -22,6 +24,7 @@ def test_address_dump(conn, fmt_in, val): |
2009 | assert cur.fetchone()[0] is True |
2010 | |
2011 | |
2012 | +@crdb_skip_inet |
2013 | @pytest.mark.parametrize("fmt_in", PyFormat) |
2014 | @pytest.mark.parametrize("val", ["127.0.0.1/24", "::ffff:102:300/128"]) |
2015 | def test_interface_dump(conn, fmt_in, val): |
2016 | diff --git a/tests/types/test_numeric.py b/tests/types/test_numeric.py |
2017 | index a27bc84..fffc32f 100644 |
2018 | --- a/tests/types/test_numeric.py |
2019 | +++ b/tests/types/test_numeric.py |
2020 | @@ -77,7 +77,7 @@ class MyEnum(enum.IntEnum): |
2021 | foo = 42 |
2022 | |
2023 | |
2024 | -class MyMixinEnum(enum.IntEnum): |
2025 | +class MyMixinEnum(int, enum.Enum): |
2026 | foo = 42000000 |
2027 | |
2028 | |
2029 | diff --git a/tools/build/ci_install_libpq.sh b/tools/build/ci_install_libpq.sh |
2030 | new file mode 100755 |
2031 | index 0000000..55525e3 |
2032 | --- /dev/null |
2033 | +++ b/tools/build/ci_install_libpq.sh |
2034 | @@ -0,0 +1,47 @@ |
2035 | +#!/bin/bash |
2036 | + |
2037 | +# Install the desired libpq in github action (Linux runner) |
2038 | +# |
2039 | +# Specify `oldest` or `newest` as first argument in order to choose the oldest |
2040 | +# available to the debian distro or the newest available from the pgdg ppa. |
2041 | + |
2042 | +set -euo pipefail |
2043 | +set -x |
2044 | + |
2045 | +libpq=${1:-} |
2046 | +rel=$(lsb_release -c -s) |
2047 | + |
2048 | +setup_repo () { |
2049 | + version=${1:-} |
2050 | + curl -sL -o /etc/apt/trusted.gpg.d/apt.postgresql.org.asc \ |
2051 | + https://www.postgresql.org/media/keys/ACCC4CF8.asc |
2052 | + echo "deb http://apt.postgresql.org/pub/repos/apt ${rel}-pgdg main ${version}" \ |
2053 | + >> /etc/apt/sources.list.d/pgdg.list |
2054 | + apt-get -qq update |
2055 | +} |
2056 | + |
2057 | +case "$libpq" in |
2058 | + "") |
2059 | + # Assume a libpq is already installed in the system. We don't care about |
2060 | + # the version. |
2061 | + exit 0 |
2062 | + ;; |
2063 | + |
2064 | + oldest) |
2065 | + setup_repo 10 |
2066 | + pqver=$(apt-cache show libpq5 | grep ^Version: | tail -1 | awk '{print $2}') |
2067 | + apt-get -qq -y --allow-downgrades install "libpq-dev=${pqver}" "libpq5=${pqver}" |
2068 | + ;; |
2069 | + |
2070 | + newest) |
2071 | + setup_repo |
2072 | + pqver=$(apt-cache show libpq5 | grep ^Version: | head -1 | awk '{print $2}') |
2073 | + apt-get -qq -y install "libpq-dev=${pqver}" "libpq5=${pqver}" |
2074 | + ;; |
2075 | + |
2076 | + *) |
2077 | + echo "Unexpected wanted libpq: '${libpq}'" >&2 |
2078 | + exit 1 |
2079 | + ;; |
2080 | + |
2081 | +esac |
2082 | diff --git a/tools/bump_version.py b/tools/bump_version.py |
2083 | index 50dbe0b..53810db 100755 |
2084 | --- a/tools/bump_version.py |
2085 | +++ b/tools/bump_version.py |
2086 | @@ -241,9 +241,12 @@ def main() -> int | None: |
2087 | bumper = Bumper(packages[opt.package], bump_level=opt.level) |
2088 | logger.info("current version: %s", bumper.current_version) |
2089 | logger.info("bumping to version: %s", bumper.want_version) |
2090 | - if not opt.dry_run: |
2091 | + |
2092 | + if opt.actions is None or Action.UPDATE in opt.actions: |
2093 | bumper.update_files() |
2094 | + if opt.actions is None or Action.COMMIT in opt.actions: |
2095 | bumper.commit() |
2096 | + if opt.actions is None or Action.TAG in opt.actions: |
2097 | if opt.level != BumpLevel.DEV: |
2098 | bumper.create_tag() |
2099 | |
2100 | @@ -257,18 +260,26 @@ class BumpLevel(str, Enum): |
2101 | DEV = "dev" |
2102 | |
2103 | |
2104 | +class Action(str, Enum): |
2105 | + UPDATE = "update" |
2106 | + COMMIT = "commit" |
2107 | + TAG = "tag" |
2108 | + |
2109 | + |
2110 | def parse_cmdline() -> Namespace: |
2111 | parser = ArgumentParser(description=__doc__) |
2112 | |
2113 | parser.add_argument( |
2114 | + "-l", |
2115 | "--level", |
2116 | - choices=[level.value for level in BumpLevel], |
2117 | + choices=[m.value for m in BumpLevel], |
2118 | default=BumpLevel.PATCH.value, |
2119 | type=BumpLevel, |
2120 | help="the level to bump [default: %(default)s]", |
2121 | ) |
2122 | |
2123 | parser.add_argument( |
2124 | + "-p", |
2125 | "--package", |
2126 | choices=list(packages.keys()), |
2127 | default="psycopg", |
2128 | @@ -276,10 +287,11 @@ def parse_cmdline() -> Namespace: |
2129 | ) |
2130 | |
2131 | parser.add_argument( |
2132 | - "-n", |
2133 | - "--dry-run", |
2134 | - help="Just pretend", |
2135 | - action="store_true", |
2136 | + "-a", |
2137 | + "--actions", |
2138 | + help="The actions to perform [default: all]", |
2139 | + nargs="*", |
2140 | + choices=[m.value for m in Action], |
2141 | ) |
2142 | |
2143 | g = parser.add_mutually_exclusive_group() |
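The `bench-411.py` changes in the diff above introduce a `--concurrency`/`-c` option and wrap each benchmark's query loop in a `run(i)` worker, dispatched directly when `-c 1` or fanned out via `ThreadPoolExecutor` (sync drivers) or `asyncio.gather` (async drivers). A minimal standalone sketch of that dispatch pattern, with illustrative names not taken from the psycopg tree:

```python
import asyncio
from concurrent.futures import ThreadPoolExecutor


def dispatch_sync(run, concurrency: int) -> list:
    # Mirrors the benchmark: skip the pool entirely when concurrency is 1,
    # so the single-worker path has no executor overhead.
    if concurrency == 1:
        return [run(0)]
    with ThreadPoolExecutor(max_workers=concurrency) as executor:
        # map() preserves input order; each worker gets its index as an id.
        return list(executor.map(run, range(concurrency)))


async def dispatch_async(run, concurrency: int) -> list:
    # Async counterpart: one coroutine per worker, gathered concurrently.
    if concurrency == 1:
        return [await run(0)]
    return await asyncio.gather(*(run(i) for i in range(concurrency)))
```

In the benchmark each `run(i)` opens its own connection, which is what makes the workers independent enough to run in parallel threads or tasks.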
The only new feature included here is in the package's internal development tools, which are not shipped in our package.
+1 on not needing a FFe;
+1 on going ahead of Debian here to support django.
Approved.