Hacker News

I don't have strong opinions about these PEPs, but there's some background on how python is governed that the pydantic maintainer omits (as does most of the discussion here).

I know people don't like the specific decisions python has made recently (me too!), but the process for deprecation and implementation once those decisions are made is at least clear and unambiguous, thanks to the backwards compatibility policy adopted in PEP 387, i.e. way back in 2009.

"Wait for the warning to appear in at least two major Python versions. It's fine to wait more than two releases."

PEP 563, the one that apparently breaks pydantic, was accepted during Python 3.7. This was before pydantic reached 1.0 and before fastapi even existed. It was eligible for full implementation in 3.9 according to the backward compatibility policy, but the plan right now is for it to be implemented in Python 3.10. Python 3.10 is due to code freeze in three weeks. The maintainers of these projects have had over three years to raise these issues.

pydantic and fastapi both have outstanding documentation that is a credit to their projects and to the python community. One side-effect of this, however, is they have some inexperienced and frankly excitable users, who perhaps aren't aware how python itself is developed. These are the users their maintainers (see also https://twitter.com/tiangolo/status/1382800928982642692) are unleashing on the python core maintainers and release managers literally days before a release that will implement a change they've known about for over three years.

I hope this can be worked out. PEP 649 looks promising. Maybe PEP 563 should wait for python 3.11. And pydantic and fastapi are great! But this is, simply put, a shitty way to treat volunteer open source maintainers at the absolute busiest and most stressful time of the release cycle.



The fact you are warned in advance that something bad is going to happen doesn't make it less bad. Just more manageable.

The elephant in the room is that the core devs, after Python 2.7, have taken up the very bad habit of breaking compat. For such popular and core tech as Python, this is not responsible. It erodes trust. It also raises the cost of producing good software tremendously.

There are so many moving parts in tech nowadays that you can't spend your time, as an open source dev, tracking everything that breaks all the time. Nor fixing it. We have limited resources, energy and spare time.

And what if I miss it? That's only human, after all.

A project of the scale of Python, that is so central, so essential, should not break on me more than once in a decade, at least willingly.

The way the issue is dealt with currently is very short sighted. It's not just about the short-term cost of some teams having to pay the price of a rewrite.

It's about how bad it is for the Python community, ecosystem and project in the long run.


Compatibility is great if you're building on a solid, or at least good-enough, foundation. But that's not Python: the foundation is dubious, or even bad. An extensive C API that prevents advanced JIT efforts, tons of dead libraries in the standard library.

Also, even with a solid foundation and big company backing, Java is regularly deprecating and removing things too. Full, unquestionable compatibility is good only if you're a committee wizard in your tower and don't care about users beyond big corporate at all, aka the C++ model.

Oh, and one thing about Python: it's very understaffed. They have, what, like three fully paid devs? For a project of this impact, that's really bad. Compare this to the tens of thousands of man-years spent on JS.


I agree, and I'm not advocating freezing the python language; I do understand how understaffed the python core devs are.

But there are strategies that are or have been in use that we could draw inspiration from:

- async/await didn't need to become keywords so fast. True/False weren't made keywords for some 15 years.

- changing the default encoding is behind an env var flag.

- changing the way the C API works is currently being done in the separate HPy project. It's not breaking anything.
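On the encoding point: assuming the comment refers to PEP 540's UTF-8 mode (an assumption on my part, since it isn't named), the opt-in nature is easy to verify from Python itself:

```python
import subprocess
import sys

# PEP 540's UTF-8 mode is opt-in via the PYTHONUTF8 env var or the -X utf8
# flag, rather than silently changing the default encoding for everyone.
out = subprocess.run(
    [sys.executable, "-X", "utf8", "-c",
     "import sys; print(sys.flags.utf8_mode)"],
    capture_output=True, text=True,
)
print(out.stdout.strip())  # "1" when UTF-8 mode is enabled
```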

So there were instances where people intelligently decided to move the language forward, but stopped and remembered Linus's wisdom of "we don't break userspace". And they did improve things without breaking others.

Guido used to be very strict on that too. But after so many years, we can't expect him to spend all his energy still policing every aspect of the project, especially after he stepped down as BDFL.


I looked at the PSF budget for the last normal year; iirc it was like five million in income. They chose to spend most profits on outreach and teaching. That's their decision, but then they also complain about being under-resourced, leading to compat breaks on minor versions.

I don’t believe it is necessary, compatibility should come before most other activities.


1. 3.x to 3.x+1 is referred to as a major version change in Python. It's the same in Go, where 1.x to 1.x+1 is also a major version change (https://golang.org/doc/devel/release.html#policy). Python does not use semantic versioning, which may be why your expectations do not align with reality. And they do not introduce breaking changes without two major version bumps (i.e. two years) of public warnings about their plans (in that sense they are more conservative than semver).

2. This breaking change has not happened yet.

3. If it does happen, it won't be because the PSF is under-resourced. It will be because a long formal process occurred back in 2017 to ratify PEP 563, and that process's success started a timer on its incorporation into python 3.9 or later, consistent with Python's deprecation policy. You can argue that this was a bad decision, but "spending most profits on outreach and teaching" is not the reason it happened.

4. The reason resources are a problem is because the original link here is someone encouraging his users to harass the small (too small!) group of uncompensated volunteers who manage python releases, three weeks before the release, behavior for which he has since apologized.


> why your expectations do not align with reality

My expectations align with industry practice, and so should yours.

Python is welcome to play games with their version numbers, it doesn't change expectations however. That they made mistakes with unicode at 3.0 doesn't mean there should be breaks every X.Y release instead.


> A project of the scale of Python, that is so central, so essential, should not break on me more than once in a decade, at least willingly.

They have been unambiguous about their breaking changes policy for twelve years: you get at least two major versions of warning (i.e. two to three years), possibly more. And old major versions are supported (bugfixes, security fixes, and occasional non-breaking backports) for five years.

If that's not what you're looking for (which is totally fine!) then python is probably not a good fit for your use case.


Human beings have the right to have an opinion and question the decisions of the people designing the products they use.

FOSS is not a religion.


I didn't say anything about your rights? You are free to continue to use python while criticizing it.

p.s. Python is not a "product". Treating it like one is guaranteed to result in disappointment.


> The elephant in the room is that the core devs, after Python 2.7, have taken the very bad habit of breaking compat. For such a popular and core tech a Python, this is not responsible. It erodes trust. It also raises the cost of producing good software tremendously.

You know, the core devs cannot make it right for everyone. For each it's-a-breaking-change complaint there is a similar number of we-want-shiny-new-things-and-my-teammates-want-to-move-on-to-shiny-new-language-X complaints.

So it's clear to me that Python, in order to stay relevant, needs to evolve, add features, and break occasionally with new releases.

There was once the idea of combining a couple of breaking changes in one release (I would call this the 'once-in-a-decade' breaking change you seem to find acceptable): that was Py2->Py3, and it was a shitshow, in part because too many changes happened at once.

So I am grateful that we now get the occasional breaking change, one change at a time. I think it is the right way if Python aims to still be relevant 10 years from now.


See my answer to your sibling. It's not an "or" proposition.

The 2 to 3 transition was hard because we did an abrupt cut. We could have had a few transitional versions that supported both worlds, then disabled the old ways but with a switch to turn them back on if needed for a while, allowing for a more peaceful transition.


It’s very much an “or” situation from a practical standpoint. Maybe not exclusively or, but it’s unrealistic to say one does not significantly affect the other. Python core development (CPython or otherwise) has been thin on resources for a long time, and you can’t just keep adding things without cleaning up old cruft. Parties most involved in development of the Python language (CPython devs, the Steering Council, etc.) value sustainability most, so it makes sense they decide to drop old stuff when they see no need of it. On the other side, parties that value stability more (e.g. library authors) need to actively lobby their goals to the core devs so they can assess the situation correctly and find the right balance between the two. You can’t just sit there and expect people to understand your needs; you have to tell them, which is what pydantic failed to do in this situation.


There was an even smaller team in 2.X, and 2.X broke less stuff while still providing new features.

So no.

It's a matter of product vision here.


> 2.X broke less stuff while still providing new features.

By putting off all the cleanups to Python 3. I think I like their current approach better.


Python 3 was necessary for cleanup. Having default new-style classes, super() and so on are goodies, not necessities.

It was necessary for the things that were not easy to do without breaking compat, such as unicode, fixing comparison bugs, etc.

Now, none of the stuff that broke compat in 3 after the initial release was a necessity. They were goodies. You don't need to make `from __future__ import annotations` the default. You don't need to make async/await keywords.


> You don't need to make `from __future__ import annotations` the default. You don't need to make async/await keywords.

By the same logic unicode_literals and print_function should never have become the default? Yeah I guess you can make that case, but I’m not in your camp. Sorry, goodbye.


I see the point about unicode_literals being quite important, but what about print_function? How does it make coding more difficult or awkward?


I literally wrote "It was necessary for the things that were not easy to do without breaking compat such as unicode"


I don’t agree, because it was the Unicode change that made it difficult; everything else was relatively trivial. No further huge breaks are even being considered. So kind of a moot point anyway.


> The fact you are warned in advance that something bad is going to happen doesn't make it less bad. Just more manageable.

Exactly, it was very manageable. They could have avoided writing a whole library (FastAPI) knowing it was inevitably going to break. They could have raised concerns at any point during those three years.


1 - FastAPI was created before the change was announced.

2 - You assume they knew about it. It's a lot of work to create FOSS; it's very easy to miss those things given the huge amount of information you have to check for compat and security. Hell, I certainly don't assume my main dependency, a rock-solid language that isn't changing its major version and is used by millions, is going to break my project. I don't read each line of the changelog of each version upgrade of each of my dependencies.

3 - Even if they knew about it, they may never have realized it would break something before people told them so.

4 - They may even have noticed, 3 years ago, but forgot about it. It was 3 years ago!

It's only human. It's FOSS.


> 1 - FastAPI was created before the change was announced.

No, it wasn't. You might be thinking of pydantic.

FastAPI's first commit was December 2018: https://github.com/tiangolo/fastapi/commit/406c092a3bf65bbd4.... PEP 563 was the first item in the Python 3.7.0 release notes earlier that year.


Indeed, my mistake.


It is surprising that they missed this change, though, given how core type hints are for their projects and how broadly publicised the changes and developments in type hints were. I'd be surprised if there weren't any issues raised against their projects about those. It feels like it'd be harder to miss it than to stumble upon it.

Also, it's weird for them to forget that there is a change that will break the foundations of their libraries. You'd think they would have had that in the back of their minds and, being stakeholders, even been part of the discussions. Like you said, it's FOSS and, most importantly, python is a community project.


> And what if I miss it?

Irrelevant here, AIUI, since pydantic has had bugs filed about this since 2018 but only raised the issue with core devs 3 weeks before the 3.10 feature freeze.

They knew it was coming, they were able to test it because it was available behind a __future__ import, they knew it didn’t work for them, and they sat on it until it was virtually impossible to resolve well.


I must say I've just spent quite some time looking at both PEPs and discussions and I'm strongly in favor of PEP 563 right now... it does improve the performance of typed modules substantially (which is a big downside for me in typed codebases) -- PEP 649 improves the situation but still adds a considerable overhead to the import time.

I think that the numbers given by Inada Naoki https://mail.python.org/archives/list/python-dev@python.org/... sum it up well.

Also, I don't see what will be made impossible on the pydantic side... from the bug list that's shown at https://github.com/samuelcolvin/pydantic/issues/2678, yes, the code needs to be adapted, but I don't see what's made impossible (maybe at some point the pydantic devs will actually say what the real problems are, as so far no concrete info on that has been given).

Heck, they could probably not even use `typing.get_type_hints()` (which currently apparently does an internal `eval()`) and instead treat the annotation string however they'd like -- maybe more work, but then, absolute freedom to treat it as they see fit.
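For readers unfamiliar with the mechanics, here is a minimal sketch (Python 3.7+, with illustrative class names) of what PEP 563's future import does and how `typing.get_type_hints()` undoes it:

```python
from __future__ import annotations  # PEP 563 semantics, slated to be the default in 3.10

import typing

class User:  # illustrative stand-in for a pydantic-style model
    id: int
    name: str

# Under PEP 563, raw annotations are stored as plain strings...
print(User.__annotations__)         # {'id': 'int', 'name': 'str'}

# ...and typing.get_type_hints() eval()s them back into real objects.
print(typing.get_type_hints(User))  # {'id': <class 'int'>, 'name': <class 'str'>}
```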

So, my plea goes the other way around: please don't make all projects which don't require runtime info from annotations pay the price for the few use cases which do use them (but don't make those impossible either -- my take so far is that this is not the case).


> I hope this can be worked out. PEP 649 looks promising. Maybe PEP 563 should wait for python 3.11. And pydantic and fastapi are great! But this is, simply put, a shitty way to treat volunteer open source maintainers at the absolute busiest and most stressful time of the release cycle.

Updates from the fastapi maintainer (https://twitter.com/tiangolo/status/1383041351160369154):

"Quick update, the Python steering council, and core devs are fully aware of this and supportively helping figure it out. You can read all the conversations here and in the issue, but please restrain from commenting, tagging, or writing to them directly about this, it's already being handled with care. And [my] Tweet is already causing more extra work for them than helping."

And the pydantic maintainer (https://github.com/samuelcolvin/pydantic/issues/2678#issueco...)

"I only heard about the debate about PEP 649 on Wednesday night, and given the extremely narrow window to get my point heard, I banged the drum in every way I could think of. The lesson here is "careful what you wish for", my message has been amplified more than I expected. I'm sorry if that has wasted anyone's time, even more so if it reduces the chance my request gets a positive reception. You're also right that i should have engaged with python-dev long ago about this, I'm sorry I did not. Lesson (hopefully) learnt."

(More detail and explanation in his comment, but that's the bit that's salient to the point I was making in this thread.)


Both authors saw PEP-649 as a solution, with no need to raise alarm bells. If you follow the discussion of the last proposal, you’ll see that its acceptance is now in doubt, and its inclusion in 3.10 seems unlikely. Both authors have been active in that discussion.

The issue is not that they had no warning, it’s that the proposed solution is being pulled out from under them.


> I know people don't like the specific decisions python has made recently (me too!), but the process for deprecation and implementation once those decisions are made is at least clear and unambiguous, thanks to backwards compatibility policy adopted in PEP 387, i.e. way back in 2009.

> "Wait for the warning to appear in at least two major Python versions. It's fine to wait more than two releases."

> PEP 563, the one that apparently breaks pydantic, was accepted during Python 3.7. [..] It was eligible for full implementation in 3.9 according to the backward compatibility policy

Can you point me to the warning that was raised in Python 3.7, 3.8, and 3.9? I don't find it on a quick look.


> Can you point me to the warning that was raised in Python 3.7, 3.8, and 3.9?

AIUI, no warning was raised because the PEP 484 functionality that was changed was at the time identified as provisional, and not subject to the deprecation policy, which excludes provisional features.

The lack of warning, though not the rationale, is explicitly noted in PEP 563.


If you're looking for runtime warnings, me neither. I think the definition of "appear in" is vague. My understanding (not a python core dev, or even someone who follows PEPs closely) is that a PEP has a corresponding python version, but it does not necessarily result in changes to cpython (or other implementations) that issue runtime warnings.


Ah, we disagree on that reading of PEP 387 then. I read it to say the API or behavior being altered must have a deprecation warning for two major releases.


No. I agree with your reading, but the reality is that the PSF doesn't appear to interpret it that way and it's sufficiently vague to allow them to get away with that :(

I can't remember the last time (if ever) I saw a runtime deprecation warning from cpython itself (I see plenty from libraries).

I don't understand PEP 563 (or the python type system and ecosystem of tooling) well enough to know whether it would even be possible for cpython itself to issue a deprecation warning in this case.

In any case, it's clear that the pydantic and fastapi authors were aware of this issue.


Like I say, it's clear the pydantic and fastapi authors were aware of this issue, but for what it's worth I confirmed that no runtime warnings were issued for PEP 563.

The behavior has been available via `from __future__ import annotations` since Python 3.7. It was the very first change mentioned in the 3.7 release notes (alongside an explicit statement that it would become default four years later in 3.10):

https://github.com/python/cpython/blob/a41782cc84bcd813209a0...
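Since the behavior has been testable behind the future import for years, a library could have checked its assumptions at any point. A rough sketch (the function is a hypothetical example) of both the stringified storage and the manual resolution a runtime-typing library has to do:

```python
from __future__ import annotations  # opt-in since 3.7, per the release notes

import sys

def greet(name: str) -> str:  # hypothetical annotated function
    return f"hello {name}"

# With the future import active, annotations are stored as strings:
print(greet.__annotations__)  # {'name': 'str', 'return': 'str'}

# Resolving them back to objects -- roughly what typing.get_type_hints()
# does internally -- means evaluating each string in the right namespace:
module_globals = vars(sys.modules[greet.__module__])
resolved = {k: eval(v, module_globals) for k, v in greet.__annotations__.items()}
print(resolved)  # {'name': <class 'str'>, 'return': <class 'str'>}
```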


Ha! PEP 387 was edited yesterday in a way that does not make any of the above wrong, but does mean you'll be confused if you go looking for my quote: https://github.com/python/peps/commit/307b9cdf8897cbade62773....




