A blog about Python and the Internet


Tests aren’t enough: Case study after adding type hints to urllib3

Published 2021-10-18

Since Python 3.5 was released in 2015, including PEP 484 and the typing module, type hints have grown from a nice-to-have into an expectation for popular packages. To fulfill this expectation our team committed to shipping type hints for the v2.0 milestone. What we didn’t realize was how much value we’d derive from this project in terms of code correctness.

We wanted to share the journey, what we learned, and problems we encountered along the way as well as celebrate this enormous milestone for the urllib3 project.

Hasan Ramezani took on the mantle of primary contributor to urllib3’s type hints, with help from other team members and community contributors Ran Benita, Ezzeri Esa, Franek Magiera, and Quentin Pradet. Thanks to them for all their work!

Why type hints?

Python doesn’t have a concept of “type safety”: if a function accepts a string parameter and you pass an integer instead, the language will not complain (although the function likely won’t work as intended!). Even when using type hints, the “type safety” they provide is completely opt-in via tools like Mypy or Pyright. You can continue incorrectly passing an integer parameter; your tools just won’t be very happy about it.

Tests aren't a substitute for type hints

When we originally started this journey our primary goal was to provide a good experience for users who wanted to use type checking tools with urllib3. We weren’t expecting to find and fix many defects.

After all, urllib3 has over 1800 test cases, 100% test line coverage, and is likely some of the most pervasive third-party Python code in the solar system.

Despite all of the above, the inclusion of type checking to our CI has yielded many improvements to our code that likely wouldn't have been discovered otherwise. For this reason we recommend projects evaluate adding type hints and strict type checking to their development workflow instead of relying on testing alone.

For a great visual explanation of how types help catch issues differently than tests see this PyCon Cleveland 2018 talk by Carl Meyer.

How we developed and iterated on type hints

This section should serve as a guide to developers looking to add type hints to a medium-to-large size project. This effort took many months for our team to complete so make sure to allocate enough time.

Incrementally adding types to existing projects

We wanted to add Mypy to our continuous integration to ensure new contributors wouldn’t accidentally break type hints but we also wanted to add Mypy incrementally so it wouldn’t grind all our development to a halt.

If you’ve ever tried running Mypy on a single file in a project without any type hints you’re unlikely to only receive errors from the file you ran Mypy on. Instead you’re likely to receive type errors from files where you imported other APIs, both within the project and third-party modules.

This makes adding type hints seem like a daunting task, either needing to be added all at once or with tons of temporary # type: ignore annotations along the way to ensure Mypy continues to pass in CI.

Our solution to this was to maintain a list of files in our project that we knew had correct type hints. Mypy would be run on every file, one by one, and the issues it detected would be gathered up, de-duplicated, and, crucially, filtered to exclude files that weren’t on the “known-good” list.

This meant that once a file was complete we’d add the path to the known-good list to ensure that future contributions wouldn’t regress on our type hints.
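A minimal sketch of this approach (the file paths and helper names below are hypothetical, not urllib3's actual script): Mypy's output is filtered against the known-good list so only errors from vetted files fail CI.

```python
import subprocess
import sys

# Hypothetical allowlist of files whose type hints are known to be correct.
KNOWN_GOOD = {
    "src/mypackage/exceptions.py",
    "src/mypackage/util/timeout.py",
}


def filter_errors(mypy_output: str, known_good: set) -> list:
    """Keep only de-duplicated errors that point at known-good files."""
    errors = set()
    for line in mypy_output.splitlines():
        path = line.split(":", 1)[0]
        if path in known_good:
            errors.add(line)
    return sorted(errors)


def check(files) -> list:
    """Run Mypy over the given files and report only allowlisted errors."""
    result = subprocess.run(
        [sys.executable, "-m", "mypy", "--strict", *files],
        capture_output=True,
        text=True,
    )
    return filter_errors(result.stdout, KNOWN_GOOD)
```

Completing a file then becomes a one-line change: add its path to `KNOWN_GOOD` and its errors start counting.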

Reviewing type hint additions

Reviewing large diffs in GitHub, especially ones where very small changes are made to large numbers of lines, is a difficult task because of how little context is given within the GitHub UI by default (usually 2-3 lines above and below the diff). Take your time here as a mistake may leak out to users if you aren’t adding types to your test suite too.

If it wasn’t obvious why a # type: ignore annotation was added, the author would add a comment on GitHub to let reviewers know why the decision was made. This made for less back-and-forth on individual changed lines.

Type your tests!

Once we completed the addition of type hints to the source code of urllib3 we set our sights on the test suite. This may seem strange as our users are very unlikely to directly benefit from us adding type hints there, however there’s one big benefit: we’re able to find issues with types from a user perspective!

In our case the test suite contained more use-cases than what we originally thought for multiple APIs which meant we had to change or loosen the type hints in those cases. Doing this work up-front meant we were less likely to release type hints that were too strict which would likely cause issues for users.

Backwards-compatibility types

Many times we had to make a decision about whether to advertise support for a certain type, especially types which were allowed for backwards compatibility but not what we want users to start using in newly written code. Consciously excluding a type is a good way to push users in the right direction without introducing breaking changes.
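A hypothetical illustration of the pattern (not urllib3 code): the public hint advertises only the modern type, while the implementation quietly keeps accepting the legacy form.

```python
from typing import Optional

# Hypothetical example: the hint advertises only float/None, so type
# checkers steer new code toward the modern form, while the runtime
# still accepts the legacy string form for backwards compatibility.
def parse_timeout(value: Optional[float]) -> Optional[float]:
    if isinstance(value, str):  # legacy path, deliberately not advertised
        return float(value)
    return value
```

Old code passing a string keeps working at runtime, but a type checker will flag it, nudging users to migrate without a breaking change.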

Use strict mode if possible

Mypy includes a --strict parameter which enables all the optional error checking flags. This provides the best coverage of typing errors and also means when you upgrade Mypy you’ll automatically start checking errors that were added in the latest version.

Remember to pin the version of Mypy you’re using for your project so you won’t be caught flat-footed when CI starts failing due to new type errors being checked in a new Mypy release.
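A minimal sketch of this setup, assuming your configuration lives in mypy.ini (recent Mypy versions accept strict in the config file; older ones only accept --strict on the command line):

```ini
# mypy.ini: run Mypy in strict mode project-wide
[mypy]
strict = True
```

The Mypy version itself can then be pinned in your development requirements (for example a line like mypy==0.910) so upgrades happen deliberately rather than in the middle of an unrelated CI run.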

Specify error codes on type ignore annotations

Instead of adding a blanket annotation to ignore all type issues, every # type: ignore annotation should specify the error code(s) to narrow down exactly which error Mypy is ignoring. This means the line will continue to be checked for all other errors, and if the underlying error ever changes Mypy will be able to signal the situation.

You can see which error code to use by using the --show-error-codes option with Mypy.

Reference: urllib3#2363
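An illustrative example (not from urllib3): re.search() returns Optional[Match], so accessing .group() on the result triggers Mypy's union-attr error. Narrowing the ignore to that one code keeps every other check on the line active.

```python
import re

def first_digit(text: str) -> str:
    match = re.search(r"\d", text)
    # We "know" the pattern matched here, but Mypy sees Optional[Match[str]].
    # Only the union-attr error is suppressed; anything else on this line
    # would still be reported.
    return match.group(0)  # type: ignore[union-attr]
```

If a later refactor made the value non-Optional, Mypy (with --warn-unused-ignores) would flag the now-unneeded ignore.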

Anything but Any

Using typing.Any in a particularly complicated type situation is a tempting option. Resist the easy out of reaching for Any, because complicated situations for you are likely to translate into complicated situations for your users.

Python typing has come a long way since it was introduced: read up on the new features available for modeling complex types and try your best to keep Any out of your code.
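One such feature is typing.Protocol, which often replaces Any for "anything with the right shape" parameters. A sketch (names are illustrative):

```python
from typing import Protocol

class SupportsRead(Protocol):
    """Structural type: anything with a bytes-returning .read() method."""
    def read(self, amt: int = ...) -> bytes: ...

# Instead of annotating body as Any, the Protocol documents exactly
# what we need and lets Mypy check callers structurally.
def read_chunk(body: SupportsRead, amt: int = 4) -> bytes:
    return body.read(amt)
```

Any object with a compatible read() method, such as io.BytesIO or an open binary file, satisfies the protocol without inheriting from it.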

What we found and learned

Our whole team learned a bunch about how Mypy and Python typing work during this project. Below are some of the interesting issues and features we found that are worth sharing:

Bytes comparison warnings

Python includes a warning called “BytesWarning” which among other uses can warn against using equality (==) to compare bytes and string types. This warning can help you find subtle type issues in your own code and third-party libraries.

Quentin attempted to enable this feature for urllib3 and we immediately saw some issues with urllib3 code and the brotlicffi package (brotlicffi#177). The fixes in urllib3 (urllib3#2145) were mostly related to how we handle headers: we accept both bytes and strings for header names, but this leads to issues when retrieving headers.

After fixing all the issues we were able to enable the -bb option which raises an error for bytes comparisons instead of only issuing a warning.
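You can see the difference the flags make by running a bytes-to-str comparison in a subprocess (the comparison itself is illustrative):

```python
import subprocess
import sys

# Comparing bytes to str is always False, which silently hides type mix-ups:
code = 'print(b"content-length" == "content-length")'

# Without any flags the comparison just evaluates to False.
plain = subprocess.run(
    [sys.executable, "-c", code], capture_output=True, text=True
)

# With -bb the same comparison raises BytesWarning as an error,
# surfacing the bug immediately.
strict = subprocess.run(
    [sys.executable, "-bb", "-c", code], capture_output=True, text=True
)

print(plain.stdout.strip())  # False
print(strict.returncode)     # non-zero: BytesWarning raised as an error
```

A single -b only prints the warning, which is a useful intermediate step while you're still fixing occurrences.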

Adding type hints to trustme

urllib3 uses the package trustme for generating realistic CA certificates on-demand for our test suite. As a part of the effort to add type hints to our test suite we also wanted to add type hints to packages used by our test suite to avoid using # type: ignore.

The trustme package at the time supported Python 2 so we got to use Mypy’s Python 2 mode and Python 2 compatible type hints using comments.

Ran Benita created a pull request (trustme#341) adding the types which was reviewed, merged, and released by Quentin Pradet. It’s a small world!

Untested but documented feature with Retry.allowed_methods

Comparing the types to the documentation for a parameter helped us discover a feature that didn’t have a test case but was documented within a docstring. After we discovered this we added the case to our test suite to ensure we never regressed on our advertised feature.

Reference: urllib3#2215

Bad default parameter values

This case shows how few people are using our PoolManager.connection_from_X() APIs as the default value for the only parameter would immediately cause an exception. Mypy helped us find and fix this issue which had been silently missed in our codebase for some time.

Reference: urllib3#2232

Mypy providing better protections for method=None case

Mypy alerted us to a missing check to see that method was non-None before calling a function with a parameter. Previously this behavior was only protected by knowing how the function should be called.

Reference: urllib3#2215
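An illustrative sketch of the kind of check Mypy enforces here (hypothetical function, not urllib3's):

```python
from typing import Optional

def normalize_method(method: Optional[str]) -> str:
    # Without this guard Mypy flags method.upper(), since method may be
    # None. The explicit check documents the contract and protects us at
    # runtime too, rather than relying on callers knowing the rules.
    if method is None:
        method = "GET"
    return method.upper()
```

After the `is None` branch Mypy narrows the type to `str`, so the call to `.upper()` type-checks cleanly.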

Boolean literals and context managers

Mypy has special handling for returning boolean literals from the context manager __exit__() method. If your __exit__() method returns a raw True or False you must annotate with Literal[True] or Literal[False] as using bool signals that the context manager may or may not swallow exceptions.

Reference: urllib3#2232
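A sketch of what that annotation looks like in practice (illustrative class):

```python
from types import TracebackType
from typing import Literal, Optional, Type

class NonSwallowing:
    def __enter__(self) -> "NonSwallowing":
        return self

    # Literal[False] tells Mypy this context manager never swallows
    # exceptions; annotating plain bool would make Mypy assume code
    # after a raise inside the with-block might still be reachable.
    def __exit__(
        self,
        exc_type: Optional[Type[BaseException]],
        exc: Optional[BaseException],
        tb: Optional[TracebackType],
    ) -> Literal[False]:
        return False
```

If your __exit__() can return either value depending on the exception, bool is the honest annotation.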

Mypy-friendly code is also human-friendly

Mypy signaled on a line of code that was difficult for even a human to understand from a quick glance. By simplifying the code it became easier for both humans and Mypy alike to understand what was intended.

Reference: urllib3#2251

Function signatures not matching mimicked API

Mypy found a mismatch between the APIs of our custom SSLTransport class, which is meant to be used like a socket during TLS-in-TLS tunneling, and the socket API.

Reference: urllib3#2443

Making types more general should require additional test cases

Whenever loosening a type from strict to general make sure your test suite grows to cover that case as well. In our example we loosened socket_options from List[...] to Sequence[...] as technically passing a tuple was acceptable and being done by some users.

Reference: urllib3#2232
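A sketch of the loosened signature (the helper is illustrative, not urllib3's implementation):

```python
import socket
from typing import Sequence, Tuple

SocketOption = Tuple[int, int, int]

# Sequence[...] admits both lists and tuples of options, matching what
# users actually pass; List[...] would have rejected the tuple form.
def apply_options(sock: socket.socket, socket_options: Sequence[SocketOption]) -> None:
    for level, optname, value in socket_options:
        sock.setsockopt(level, optname, value)
```

With the loosened hint, both `apply_options(sock, [opt])` and `apply_options(sock, (opt,))` type-check, and the test suite should exercise both forms.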

Use @overload for filtered values types

Instead of using f(x: Union[int, str]) -> Union[int, str] take advantage of the @overload decorator to define the instances where certain input types always result in a certain output type for each case. This allows Mypy to give much better results when the interface is being used.

Reference: urllib3#2251
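A small illustrative example of the pattern (hypothetical function): with overloads, a caller passing a literal flag gets a precise return type instead of a Union it must narrow itself.

```python
from typing import Literal, Union, overload

@overload
def decode(data: bytes, as_text: Literal[True]) -> str: ...
@overload
def decode(data: bytes, as_text: Literal[False]) -> bytes: ...

def decode(data: bytes, as_text: bool) -> Union[str, bytes]:
    # Each overload pins down the return type for a given input, so
    # callers don't need casts or isinstance checks on the result.
    return data.decode("utf-8") if as_text else data
```

Mypy now knows `decode(b"hi", True)` is `str` and `decode(b"hi", False)` is `bytes`, with no narrowing needed at the call site.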

Explicit return None when a function can return other values

It’s always good practice to explicitly return None when you intend to assign the result of the function to a variable. Mypy helpfully enforces this good practice when values are returned in other parts of the function but the default “drop through” return None isn’t explicitly added.

Reference: urllib3#2255
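For example (illustrative function): without the final line the function still returns None implicitly, but strict Mypy asks for the intent to be spelled out.

```python
from typing import Optional

def find_header(headers: dict, name: str) -> Optional[str]:
    for key, value in headers.items():
        if key.lower() == name.lower():
            return value
    # Explicit "not found" result; omitting this line would behave the
    # same at runtime but fails strict Mypy's return-value checking.
    return None
```

The explicit return makes it clear to readers that None is a deliberate part of the contract, not an oversight.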

Don’t expose Generators unless you want Generator functionality

Generators have additional behaviors over iterables so if the API isn’t meant to be used like a generator then it’s best to keep this fact a secret and annotate with Iterable[X] instead of Generator[X, None, None].

Reference: urllib3#2255
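A sketch of the idea (hypothetical function): the body is a generator, but the annotation only promises iteration.

```python
from typing import Iterable

def chunks(data: bytes, size: int) -> Iterable[bytes]:
    # Implemented as a generator, but advertised as Iterable[bytes] so
    # callers can't start depending on generator-only methods like
    # .send() or .throw(), keeping the implementation free to change.
    for i in range(0, len(data), size):
        yield data[i : i + size]
```

Callers can loop over or list() the result, and you retain the freedom to later return, say, a plain list without breaking anyone.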

Missing type hints in the standard library

Some of the less-traveled parts of the standard library don't have complete type coverage. The standard library's type hints are distributed in a project called typeshed, so if you uncover a missing or incorrect type hint for the standard library it should be fixed there.

References: typeshed#6176, urllib3#2458


Adding type hints to urllib3 was clearly a huge amount of work, hundreds of engineer hours across several months. What we once thought would be a purely developer-facing change ended up making the codebase more robust than ever. Several non-trivial logic errors were fixed and our team is more confident reviewing and merging PRs. This is a big win for our users and a very worthy investment.

Portions of this work, including writing this case study, were funded by generous support on GitHub Sponsors, GitCoin Grants, and Open Collective. Thank you for your support!

The problem with Flask async views and async globals

Published 2021-08-01

Starting in v2.0 Flask has added async views which allow using async and await within a view function. This allows you to use other async APIs when building a web application with Flask.

If you're planning on using Flask's async views there's a consideration to be aware of for using globally defined API clients or fixtures that are async.

Creating an async Flask application

In this example we're using the async Elasticsearch Python client as our global fixture. We initialize a simple Flask application with a single async view that makes a request with the async Elasticsearch client:

from flask import Flask, jsonify
from elasticsearch import AsyncElasticsearch

app = Flask(__name__)

# Create the AsyncElasticsearch instance in the global scope.
# (Connection options elided from the original post.)
es = AsyncElasticsearch()

@app.route("/", methods=["GET"])
async def async_view():
    return jsonify(**(await es.info()))

Run the app with gunicorn via $ gunicorn app:app and then visit it at http://localhost:8000. After the first request everything looks fine:

{
  "cluster_name": "d31d9d6abb334a398210484d7ac8567b",
  "cluster_uuid": "K5uyniiMT9u2grNBmsSt_Q",
  "name": "instance-0000000001",
  "tagline": "You Know, for Search",
  "version": {
    "build_date": "2021-04-20T20:56:39.040728659Z",
    "build_flavor": "default",
    "build_hash": "3186837139b9c6b6d23c3200870651f10d3343b7",
    "build_snapshot": false,
    "build_type": "docker",
    "lucene_version": "8.8.0",
    "minimum_index_compatibility_version": "6.0.0-beta1",
    "minimum_wire_compatibility_version": "6.8.0",
    "number": "7.13.1"
  }
}

However when you refresh the page to send a second request you receive an InternalError and the following traceback:

Traceback (most recent call last):
  File "/app/", line 13, in async_route
    return jsonify(**(await es.info()))
  File "/app/venv/lib/...", line 288, in info
    return await self.transport.perform_request(
  File "/app/venv/lib/...", line 327, in perform_request
    raise e
  File "/app/venv/lib/...", line 296, in perform_request
    status, headers, data = await connection.perform_request(
  File "/app/venv/lib/...", line 312, in perform_request
    raise ConnectionError("N/A", str(e), e)

ConnectionError(Event loop is closed) caused by:
  RuntimeError(Event loop is closed)

Why is this happening?

The error message mentions the event loop is closed, huh? To understand why this is happening you need to know how AsyncElasticsearch is implemented and how async views in Flask work.

No global event loops

Async code relies on something called an event loop. So any code using async or await can't execute without an event loop that is "running". The unfortunate thing is that there's no running event loop right when you start Python (ie, the global scope).

This is why you can't have code that looks like this:

async def f():
    print("I'm async!")

# Can't do this!
await f()

instead you typically have to use asyncio.run() and an async main/entrypoint function to use await, like so:

import asyncio

async def f():
    print("I'm async!")

async def main():
    await f()

# asyncio starts an event loop here:
asyncio.run(main())

(There's an exception to this via python -m asyncio / IPython, but really this is running the REPL after starting an event loop)

So if you need an event loop to run any async code, how can you define an AsyncElasticsearch instance in the global scope?

How AsyncElasticsearch allows global definitions

The magic that allows global definitions of AsyncElasticsearch is delaying the full initialization (calling asyncio.get_running_loop(), creating the aiohttp session, sniffing, etc.) until after we've received our first async call. Once an async call is made we can almost guarantee that there's a running event loop, because if there wasn't a running event loop the request wouldn't work out anyways.
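A minimal sketch of this lazy-initialization pattern (this is an illustration of the idea, not the actual elasticsearch-py implementation):

```python
import asyncio

class LazyAsyncClient:
    """Safe to construct in the global scope: nothing touches the
    event loop until the first awaited call."""

    def __init__(self) -> None:
        self._loop = None  # resolved lazily on first use

    async def _ensure_init(self) -> None:
        if self._loop is None:
            # By the time this coroutine runs, a loop must be running,
            # otherwise we couldn't be executing at all.
            self._loop = asyncio.get_running_loop()

    async def info(self) -> dict:
        await self._ensure_init()
        return {"loop": repr(self._loop)}

client = LazyAsyncClient()  # fine at import time: no loop needed yet
```

The constructor only stores configuration; everything loop-dependent happens inside the first awaited method call.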

This is especially great for async programs, as typically a single event loop is used throughout a single execution of the program, meaning you can create your AsyncElasticsearch instance in the global scope just like users create their synchronous Elasticsearch client in the global scope.

Using multiple event loops is tricky and would likely break many other libraries like aiohttp in the process for no (?) benefit, so we don't support this configuration. Now how does this break when used with Flask's new async views?

New event loop per async request

The simple explanation is that Flask uses WSGI to service HTTP requests and responses which doesn't support asynchronous I/O. Asynchronous code requires a running event loop to execute, so Flask needs to get a running event loop from somewhere in order to execute an async view.

To do so, Flask will create a new event loop and start running the view within this new event loop for every execution of the async view. This means all the async and await calls within the view will see the same event loop, but any other request before or after this view will see a different event loop.

The trouble comes when you want to use async fixtures that are in the global scope, which in my experience is common in small to medium Flask applications. Very unfortunate situation! So what can we do?
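You can observe the per-run loop behavior directly with asyncio.run(), which is essentially what happens per request here:

```python
import asyncio

async def current_loop() -> asyncio.AbstractEventLoop:
    return asyncio.get_running_loop()

# Each asyncio.run() creates (and then closes) a brand-new event loop,
# mirroring what Flask does for each execution of an async view.
loop_a = asyncio.run(current_loop())
loop_b = asyncio.run(current_loop())

print(loop_a is loop_b)    # False: two different loops
print(loop_a.is_closed())  # True: the first loop is already closed
```

A client that bound itself to the first loop would hit exactly the "Event loop is closed" error above when used from the second.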

Fixing the problem

The problem isn't with Flask or the Python Elasticsearch client, the problem is the incompatibility between WSGI and async globals. There are a couple of solutions, both of which involve Async Server Gateway Interface (ASGI), WSGI's async-flavored cousin which was designed with async programs in mind.

Use an ASGI framework and server

One way to avoid the problem with WSGI completely is to simply use a native ASGI web application framework instead. There are a handful of popular and widely used ASGI frameworks you can choose from:

If you're looking for an experience that's very similar to Flask you can use Quart which is inspired by Flask. Quart even has a guide about how to migrate from a Flask application to using Quart! Flask's own documentation for async views actually recommends using Quart in some cases due to the performance hit from using a new event loop per request.

If you're looking to learn something new you can check out FastAPI which includes a bunch of builtin functionality for documenting APIs, strict model declarations, and data validation.

Something to keep in mind when developing an ASGI application is you need an ASGI-compatible server. Common choices include Uvicorn, Hypercorn, and Daphne. Another option is to use Gunicorn with Uvicorn workers.

All the options mentioned above function pretty similarly so pick whichever one you like. My personal choice has historically been Gunicorn with Uvicorn workers because of how widely used and mature Gunicorn is relative to how new the other libraries are.

You can do so like this:

$ gunicorn app:app -k uvicorn.workers.UvicornWorker

Use WsgiToAsgi from asgiref

If you really love Flask and want to continue using it you can also use the asgiref package, which provides an easy wrapper called WsgiToAsgi that converts a WSGI application into an ASGI application.

from flask import Flask, jsonify
from elasticsearch import AsyncElasticsearch

# Same definition as above...
# (Connection options elided from the original post.)
wsgi_app = Flask(__name__)
es = AsyncElasticsearch()

@wsgi_app.route("/", methods=["GET"])
async def async_view():
    return jsonify(**(await es.info()))

# Convert the WSGI application to ASGI
from asgiref.wsgi import WsgiToAsgi

asgi_app = WsgiToAsgi(wsgi_app)

In this example we're converting the WSGI application wsgi_app into an ASGI application asgi_app which means when we run the application a single event loop will be used for every request instead of a new event loop per request.

This approach will still require you to use an ASGI-compatible server.

Everything to know about Requests v2.26.0

Published 2021-07-13

Requests v2.26.0 is a large release which changes and removes many features and dependencies that you should know about when upgrading. Read on to find out all about the changes and what you should do if you're a user of Requests.

Summary of the release

What changes are important?

  • Changed the requests[security] extra into a no-op. Can be safely removed for v2.24.0+ for almost all users (OpenSSL 1.1.1+ and not relying on specific features in pyOpenSSL)
  • Dropped support for Python 3.5
  • Changed encoding detection library for Python 3.6+ from chardet to charset_normalizer
  • Added support for Brotli compressed response bodies

What should you do now?

  • Upgrade to Requests v2.26.0 if you're not using Python 3.5
  • Stop using requests[security] and instead install just requests
  • Regenerate your lock files and pinned dependencies if you're using pip-tools, poetry, or pipenv
  • Read the full set of changes for v2.26.0

Encoding detection with charset_normalizer

The following section has a brief discussion of licensing issues. Please remember that I am not a lawyer and don't claim to understand anything about open source licenses.

Requests uses character detection on response bodies in order to reliably decode bytes to str when responses don't declare what encoding to use via Content-Type. This feature only gets used when you access the Response.text API.

The library that Requests uses for content encoding detection has for the past 10 years been chardet which is licensed LGPL-2.1.

The LGPL-2.1 license is not a problem for almost all users, but an issue arises with statically linked Python applications which are pretty rare but becoming more common. When Requests is bundled with a static application users can no longer "switch out" chardet for a different library which causes a problem with LGPL.

Starting in v2.26.0 for Python 3 the new default library for encoding detection will be charset_normalizer which is MIT licensed. The library itself is relatively young so a lot of work has gone into making sure users aren't broken with this change including extensive tests against real-life websites and comparing the results against chardet to ensure better performance and accuracy in every case.

Requests will continue to use chardet if the library is installed in your environment. To take advantage of charset_normalizer you must uninstall chardet from your environment. If you want to continue using chardet with Requests on Python 3 you can install chardet or install Requests using requests[use_chardet_on_py3]:

$ python -m pip install requests chardet

- OR -

$ python -m pip install requests[use_chardet_on_py3]

Removed the deprecated [security] extra

Before Requests v2.24.0 the pyOpenSSL implementation of TLS was used by default if it was available. This pyOpenSSL code is packaged along with urllib3 as a way to use Server Name Indication (SNI) when Python was compiled against super-old OpenSSL versions that didn't support it yet.

Thankfully these super-old versions of OpenSSL aren't common at all anymore! The pyOpenSSL code that urllib3 provides is therefore a lot less useful and is now a maintenance burden for our team, so we have a long-term plan to eventually remove it. The biggest dependency using this code was Requests, a logical first place to start the journey.

Starting in Requests v2.24.0 pyOpenSSL wouldn't be used unless Python wasn't compiled with TLS support (ie, no ssl module) or if the OpenSSL version that Python was compiled against didn't support SNI. Basically the two rare scenarios where pyOpenSSL was actually useful!

The release of v2.24.0 came and went quietly which signaled to our team that our long-term plan of actually removing pyOpenSSL will likely go smoothly. So in Requests v2.25.0 we officially deprecated the requests[security] extra and in v2.26.0 the [security] extra will be a no-op. Instead of installing pyOpenSSL and cryptography no dependencies will be installed.

What this means for you is if you've got a list of dependencies that previously used requests[security] you can remove the [security] and only install requests. If you have a lock file via a tool like pip-tools or poetry you can regenerate the lock file and potentially see pyOpenSSL and cryptography removed from your lock file. Woo!

Dropped support for Python 3.5

Starting in Requests v2.25.0 there was a notice for Python 3.5's deprecation in the changelog. Now that 2.26.0 has arrived Requests will only be supported with Python 2.7.x and 3.6+.

This is a big win for Requests maintainers as it progressively becomes more and more difficult to maintain a codebase that supports a wide range of Python versions.

Brotli support via urllib3

Since v1.25 urllib3 has supported automatically decoding Brotli-compressed HTTP response bodies using either Google's brotli library or the brotlicffi library (previously named brotlipy).

Before v2.26.0 Requests would never emit an Accept-Encoding header with br signaling Brotli support even if urllib3 would have been able to decode the response. Now Requests will use urllib3's feature detection for Brotli and emit Accept-Encoding: gzip, deflate, br. This is great news for servers that support Brotli on pre-compressed static resources like fonts, CSS, and JavaScript. Woo!

To take advantage of Brotli decoding you need to install one of the Brotli libraries mentioned above. You can ensure you're getting the right library for your Python implementation by installing like so:

$ python -m pip install urllib3[brotli]
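A sketch of feature detection similar in spirit to urllib3's (the variable names are illustrative, not urllib3's internals): advertise "br" only when a Brotli decoder is importable.

```python
# Try the available Brotli decoders in turn; absence of both simply
# means "br" won't be advertised, never an error.
try:
    import brotli  # Google's brotli bindings  # noqa: F401
    HAS_BROTLI = True
except ImportError:
    try:
        import brotlicffi  # formerly brotlipy  # noqa: F401
        HAS_BROTLI = True
    except ImportError:
        HAS_BROTLI = False

ACCEPT_ENCODING = "gzip, deflate"
if HAS_BROTLI:
    ACCEPT_ENCODING += ", br"

print(ACCEPT_ENCODING)
```

The same code produces a correct Accept-Encoding header whether or not a Brotli library is installed, which is why installing one is all users need to do.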

urllib3 Newsletter #5

Published 2021-06-29

Fifth newsletter, commence! If you'd like to discuss this edition of our newsletter you can join our community Discord.

Thanks to our Sponsors

The urllib3 team is very grateful for all of our sponsors and supporters. If you'd like to support our team we have a GitHub Sponsors, GitCoin Grants, and Open Collective.

Notable updates to our sponsors include:

GitCoin Grant Round 10 included urllib3 which has raised >$2000 so far! 🎉

NewRelic started sponsoring our team on GitHub Sponsors 👏

We paid someone to work on Open Source

David Lord, who is known for his work on Flask, Jinja, and other Pallets projects, worked on one of our v2.0 issues related to how we encode fields into the URL. We wanted to modernize how urllib3 does things; you'd think that wouldn't be too tough... However, it took a ton of time to unravel what urllib3 was doing and why it had deviated from the current WHATWG HTML standard. You can read all of the discussion and discoveries that went into untangling this pile of standard spaghetti and code archaeology.

The most exciting part of all this is that this is the first time we've paid a contributor who's not a part of our team to work on Open Source, woohoo! 🥳

If you're interested in getting paid to work on urllib3 v2.0 issues you can join our Discord or reach out to the team and we'll walk you through everything. We're also working on making issues which we're willing to pay for work much more visible.

urllib3 v1.26.6 released

We've released another patch for the urllib3 v1.26.x series. This release included a few fixes for small bugs but also a larger change: deprecating the urllib3.contrib.ntlmpool module. More on that below.

Quentin has been working on migrating the downstream integration tests that are run before every urllib3 release from Travis CI, which has been defunct for us for some time now, to GitHub Actions. This will greatly reduce the amount of manual work required to release urllib3 and drastically reduce maintainer stress. Thanks, Quentin! 🙇

Quentin and I also did the release together this time around and we've created a complete checklist to make executing releases by other collaborators easier.

Deprecating NTLMConnectionPool in v1.26.6

The urllib3.contrib.ntlmpool module will now unconditionally raise a DeprecationWarning pointing users to a specific issue where we justify this change and we'd like for users to comment if they're actually relying on the module.

The module itself was contributed a long time ago and hasn't had many issues, pull requests, or maintenance and we actually don't have any test cases so we're not even sure how well it works anymore...

Given that NTLM has been deprecated for 10 years we'd like to remove the module in v2.0 but aren't sure if it should live somewhere else or if it should be deleted completely. Please let us know!


A security vulnerability was reported by Nariyoshi Chida in our URL parser. We coordinated with Nariyoshi and our Tidelift security contact to verify the vulnerability and provide a suitable fix for the issue and released v1.26.5 which included the fix.

Read the full GitHub Security Advisory for more information.

New collaborators and contributors

We've invited a few of our contributors to become collaborators on the project after consistent high-quality contributions. Welcome Bastian Venthur and Ran Benita! Thank you for everything you've done so far for urllib3 👏

We also had many first time contributors in the past month after a couple of tweets brought in a bunch of new faces. Thanks to everyone who contributed! If you're interested in getting started contributing to urllib3 we announce all the new "Contributor Friendly" issues in the community Discord.

urllib3 Newsletter #4

Published 2021-05-03

Welcome to our fourth newsletter! If you'd like to join our community you can find us on Discord.

Thanks to all of our Sponsors!

If you'd like to support our team we have a GitHub Sponsors, GitCoin Grant, and Open Collective.

Big thank you to the generous individuals who are lending their financial support:

If you or your organization uses Python: consider sponsoring our team's effort to keep urllib3 maintained and to ship urllib3 v2.0 in 2021, we really appreciate it.

Unreasonable effectiveness of investing in Open Source

Fellow urllib3 maintainer Quentin Pradet recently set aside time for extended work on urllib3 v2.0, specifically to complete a complex issue regarding urllib3 using Python's built-in ssl.SSLContext for certificate hostname verification instead of using our current method of verifying certificates via our own vendored ssl.match_hostname.

Quentin wrote on his blog about the work that was completed and about the unreasonable effectiveness of financial contributions to Open Source.

In summary about 20 hours of work was able to uncover a security vulnerability in urllib3, a bug in CPython related to ssl.SSLContext.hostname_check_common_name, and a fix in OpenSSL along with completing the original task of making urllib3 use SSLContext for hostname verification. Wow!

HTTP on Mars

urllib3 is officially running on two planets! 🚀

GitHub recently announced a list of Open Source projects hosted on GitHub that were running on the Mars Helicopter Ingenuity and urllib3 was among them. This announcement has been a super exciting achievement for our team and we're all excited to see the future of Open Source being used within the realm of space exploration.


urllib3 1.26.4 included a fix for CVE-2021-28363. Thanks to Quentin Pradet and Jorge Lopez-Silva for their work here! Versions of urllib3 that are vulnerable are 1.26.0 to 1.26.3. Versions prior to 1.26.0 are not affected.

Welcoming a new collaborator!

After multiple impactful contributions to the project our team welcomes Franek Magiera. Franek has been contributing to the effort of completely type-hinting the urllib3 API which is one of the highlighted improvements coming to urllib3 v2.0. Thanks for all the hard work, Franek!

If you enjoyed these posts there's more where that came from!
You can subscribe via Email and RSS