The more I use Python the more I hate it. It’s genuinely a bad language, with a stellar ecosystem. Ironically, the most valuable parts of the ecosystem are often written in C (NumPy).
It’d be interesting to see how much of the Python ecosystem is actually necessary to move PyTorch to a better language.
I’m afraid we’re stuck with Python for the next 20 years. That makes me very, very sad.
This is one of the nicer aspects of Julia. It starts out being a great language to work in. It's easy to implement algorithms that are generally difficult in other languages.
It's important to remember that most of the Python ecosystem isn't written in Python. The functions are often thin wrappers/objects around the real computation, which is often written in a faster language: C/C++/Fortran.
Julia excels in composability, performance, and ease of development. Thanks to Julia's performance, you don't need to recode your algorithms in another language, as you do in Python's case.
Generally speaking, I see Julia's biggest fault, time to first plot, being addressed far sooner than Python being redesigned to have Julia's capabilities. For the record, I use Python daily in the day job, and I use Julia there for analytics, often with Jupyter notebooks. Easy for users to consume and interact with.
Let’s not ignore the giant elephant in the room: 1-based indexing. I don’t particularly care, since I use R and Python, but Java, C, C++, and C# all use 0-based indexing. It’s truly a bizarre choice Julia made there.
I barfed at 1-based indexing for about a week, but now it is as natural as anything.
I would compare 0-based and 1-based indexing with whether you put semicolons at the end of each line or not. Either way doesn't really change the feel (semantics) of the language.
Also, Fortran is 1-based, IIRC, and a lot of numerical code is in Fortran.
Oh, and many, many beginning programmers and scientists have a hard time with 0-based indexing. Not sure why, but that's what you hear, so the choice is really not that odd.
The reason beginners have a hard time with 0-based indexing is that humans count from 1. Seriously, I've spent weeks trying to tell people "yeah, we want rows 4 and 5, so that's, uh, rows 3 and 4..." and they think it's nuts, and I now think they're right.
Right. Zero based indexing makes zero sense, unless you explain the underlying technical reason, that it’s an offset relative to a memory pointer (spend a week teaching pointers first!).
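The off-by-one translation tax is easy to see in a 0-based language like Python. A minimal sketch, with made-up toy data purely for illustration:

```python
rows = ["header", "alice", "bob", "carol", "dave"]

# A colleague asks for "rows 4 and 5" (counting from 1, as people do).
# In 0-based Python that means indices 3 and 4 -- and because a slice's
# end bound is exclusive, the slice is written 3:5, not 3:4.
wanted = rows[3:5]
print(wanted)  # ['carol', 'dave']
```

Two mental translations (subtract one, then add one back for the exclusive bound) for a request a human states in a single breath.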
It makes sense in certain contexts (and in languages like C that have a low-level mental model). For scientific computing at a higher level of abstraction, where the mental model of a multidimensional array is a tensor and not a memory offset, zero-based indices really get in the way.
Precisely. Indexing makes sense in a context, and it is trivial in general to switch. That said, telling people that the first element they want is the "0th" is completely unnatural.
It really is a rather small elephant. It can be jarring at first (unless you think of R, MATLAB, and Bash), but then you just stop thinking about it, because it legit doesn't matter.
People should stop wasting time bikeshedding this insignificant detail, but for some reason it is to programmers like a red rag to a bull.
Really, it's only an issue to some programmers, who prefer specific languages.
When you have to deal with a wide range of languages, stuff like this is small potatoes compared to, say, indentation-based structure. The latter can result in a completely non-obvious change of program flow due to a single errant space or tab.
1-based indexing makes sense to computational scientists (the target audience). Fortran, Matlab, etc. make very good use of it. Moreover, you can change to zero-based if you wish.
So this "very big elephant" is, in reality, a nothingburger.
For me, the very big elephant in the room is semantic formatting. It has tripped me up time and again, and continues to. A touch of the space bar can change the flow of a Python program. That is a huge asteroid of a problem.
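A minimal sketch of the kind of whitespace hazard being described, where moving one statement a single indentation level silently changes a function's meaning:

```python
def total_even(numbers):
    """Sum the even numbers in a list."""
    total = 0
    for n in numbers:
        if n % 2 == 0:
            total += n
    return total


def total_even_buggy(numbers):
    """Same code, but `return` is indented one level too deep."""
    total = 0
    for n in numbers:
        if n % 2 == 0:
            total += n
        return total  # now inside the loop: exits after the first element


print(total_even([1, 2, 3, 4]))        # 6
print(total_even_buggy([1, 2, 3, 4]))  # 0
```

In a braces language the same mistake is a visible syntax change; here both versions parse and run without complaint.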
I have had the opposite experience with Python: the more I use it and learn the standard library and ecosystem, the more I love it. What exactly makes you think it's a bad language?
For me, the packaging ecosystem is bad: we need one built-in package management tool, like Poetry. We need a built-in typing system like TypeScript's. Lastly, we need to remove the GIL.
I’m pretty sure all of these are currently being addressed by the community.
I switch languages a lot and things like functools, itertools, dunder methods, list comprehensions, dict comprehensions are things I sorely miss especially in typescript. In particular list and dict comprehensions when used with care are a great deal easier to work with and reason about when transforming data.
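For example, a data-reshaping step like the following (made-up records, purely illustrative) stays compact and readable with comprehensions:

```python
orders = [
    {"id": 1, "total": 30.0, "shipped": True},
    {"id": 2, "total": 15.5, "shipped": False},
    {"id": 3, "total": 99.9, "shipped": True},
]

# List comprehension: totals of shipped orders only.
shipped_totals = [o["total"] for o in orders if o["shipped"]]

# Dict comprehension: build an id -> total lookup in one readable line.
totals_by_id = {o["id"]: o["total"] for o in orders}

print(shipped_totals)  # [30.0, 99.9]
print(totals_by_id)    # {1: 30.0, 2: 15.5, 3: 99.9}
```

The filter, the projection, and the resulting shape are all visible in a single expression, which is what makes these easy to reason about.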
I'd be moderately happy with Python if it had full static typing, improved error handling, fixed packaging, fixed deployment, and removed the GIL.
I like to think that containers only exist because deploying a Python application is so %^#(&*# complicated that the easiest way to do it is to deploy an entire runtime image. It's an absolute nightmare and travesty. So bad. So very very bad. https://xkcd.com/1987/
I'm not optimistic on TypeScript for Python. That'd be great if such a thing existed! I'm not optimistic on packaging or deployment. There is recent progress on GIL removal which is exciting! There is hope, but I'm proceeding with very cautious optimism.
Comprehensions are kinda great, but also hideous and backwards. Rust iterators are a significant improvement imho. The fact that no recent languages have chosen to copy Python's syntax for comprehensions is telling!
Oh, and I think the standard library's API design is pretty poor. The filesystem APIs have caused me immense pain and suffering. Runtime errors are the worst.
> I'm not optimistic on TypeScript for Python. That'd be great if such a thing existed
MyPy exists, and Python officially supports type annotations.
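A small sketch of what that gradual typing looks like in practice. The annotations below are ordinary Python syntax (the built-in `list[float]` form needs 3.9+); the interpreter ignores them, while a checker like mypy enforces them statically:

```python
def mean(values: list[float]) -> float:
    # The interpreter ignores these annotations entirely at runtime;
    # mypy flags a call such as mean("oops") before the code ever runs.
    return sum(values) / len(values)


print(mean([1.0, 2.0, 3.0]))  # 2.0
```

That opt-in quality is both the appeal and the complaint: nothing forces a codebase to be annotated, so the guarantees are only as strong as the team's discipline.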
I do think comprehensions are a weird feature for Python, particularly from the “one way to do it” perspective. And also because the language so strongly discourages FP patterns outside of comprehensions.
Overall I would say the language is a lot better than the ecosystem, and that it suffers a lot from having a bad packaging design. I’m not a fan, suffice it to say. It’s best if you can stick to the standard library.
Bad languages like Python, JavaScript, and PHP are responsible for powering a large part of the tech revolution. The ability to write bad code easily is, IMO, a large part of why they're so popular. A low barrier to entry helps build a huge ecosystem.
I would say that those are not bad languages. People are just elitist and think that if your language isn't strictly typed, functional, and capable of giving first-year CS students a headache, it's a bad language that "creates spaghetti code." The only thing wrong with dynamic typing is that it's slower and harder to debug, but people are able to be way more productive in these languages you call bad.
> The only thing wrong with dynamic typing is that it's slower and harder to debug, but people are able to be way more productive in these languages you call bad.
Not even close to the "only thing". Dynamic languages are a net loss of productivity once you reach a certain scale. Refactoring a codebase with millions of lines of code in a dynamic language is an absolute nightmare.
Opinions vary on the level of scale at which this happens. My personal experience is that once you hit just a few thousand lines of code, dynamic typing is a drag on productivity.
There's a reason things like TypeScript are increasingly popular.
I don't care what language first year CS students use. I care what languages I'm forced to deal with on a day-to-day basis!
I agree; this is without a doubt the number one thing that Julia needs.
Nearly every dynamic language has trended towards this in the last 7-8 years. It's probably had the most profound effect in modern JavaScript (TypeScript) and Python. It's incredible how different production JavaScript and production Python are compared to, say, 2014 JavaScript or 2014 Python.
More so than the goals of JET itself, the abstract interpretation framework it exposes should allow for TypeScript-like gradual typing.
This is important, IMHO, if Julia is ever to break out of its niche, because most of the rest of the world has already moved on to see the benefit of the improvement in application reliability that static analysis allows.
I would claim the tech revolution happened despite those terrible languages rather than because of them. The languages are popular because of inertia, not because they're good.
Python is popular because of the ML revolution. If ML hadn't taken off, neither would Python's popularity. Is ML successful because of Python or despite Python? Well, the world is probably further along with Python than if it simply didn't exist. But if a different language that sucked less had existed, we would, IMHO, be further along than we are.
I'm not annoyed Python exists. I'm annoyed that its inertia is so monumental it's inhibiting further progress. We're at a local maximum and the cost to take the next step is really really really high. That's not Python's fault mind you, just the way things work.
As someone who has used Python since 2004: you've got it backwards.
Python was popular before that because it's a very nice language. People wanted to use it for science too, so they wrote very good scientific libraries for it.
R was very popular for non-neural-network ML some years ago, yet it wasn't picked up for NNs, because R kind of sucks for general programming. As the joke goes: the best part of R is that it's a language written by statisticians. The worst part of R is that it's a language written by statisticians.
Python was growing at an accelerating rate year on year well before neural networks.
The runtime of R is its biggest downfall. Its single-threaded nature kills it. I say this having used future::promise in a Plumber API to get some semblance of concurrency.
Python was popular, including for scientific use, before the ML revolution; in fact, the only reason it is associated with the ML revolution in the first place is its preexisting broad adoption in scientific computing.
No, these languages are the drivers of the tech revolution. PHP was widely adopted because it allowed people to rapidly build personal websites, and those people then went on to build companies with it. Python is popular in ML because its syntax is close to pseudo-code, allowing people who aren't programmers to interact with all the old math libraries you used to have to write C and Fortran to use. JavaScript is popular because it made the web more interactive, and thus a ton of people who made websites learned it as their first programming language when they were young.
Python and PHP are so big because of the languages themselves and their implementations; JS is a bit different in that regard, I'll admit.
It's true that the tech revolution would not have happened if all code had to be written in assembly. And it's true the machine learning revolution would not have happened if all code had to be written in C.
Is Python better than assembly and C/C++ for ML? Absolutely. Is Python good? I don't think so. Other people might use that term; I do not. I think Python is a bad language that would be designed very differently if it were built today. And we're stuck with its decisions because the inertia needed to change them is monumental.
It's not really the language's fault. As an industry we've learned a lot in the past 30 years! It would be a travesty if we hadn't made language design progress! Unfortunately, we haven't figured out how to effectively apply those lessons to encumbered ecosystems. Case in point: the transition from Python 2 to Python 3 was an absolute disaster. And it only made tiny, incremental improvements!
Sure, but I get the feeling that the pendulum is swinging toward a period where being able to write code easily is devalued relative to being able to read it easily. Ironically, Python itself benefited from the last swing of that pendulum, as it was widely regarded as an "easier to read" PL than, say, C (well, yeah) or Perl (well, super yeah) or PHP (super-duper yeah).
Python is not a bad language; programming languages do not have to be unreadable or have a steep learning curve to be good. The problem with Python is that its implementation is slow and the language has a ton of hang-ups that you have to know it inside and out to even notice. There's a post here about once a year that details some of the funnier ones.
What do you think the major drawbacks are? Speed would be at the top of my list, but most projects do not need anything more than what Python can currently pump out.
Aside from speed, one thing that really eats at me is that it makes any sort of functional programming overly verbose, unfun, and just not very idiomatic. Also, the vast majority of Python programmers simply don't understand the best practices in their own ecosystem.
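A small illustration of the complaint: the lambda-based functional style below is perfectly legal Python but noisy, and the idiomatic fix is to abandon it for a comprehension rather than compose functions:

```python
data = [1, 2, 3, 4, 5]

# Functional style: works, but the lambdas and nesting read backwards
# (the last operation applied is the first one written).
squares_of_evens = list(map(lambda x: x * x,
                            filter(lambda x: x % 2 == 0, data)))

# The idiomatic escape hatch is a comprehension; beyond this, FP
# patterns (currying, composition, pipelines) have no lightweight syntax.
squares_of_evens_alt = [x * x for x in data if x % 2 == 0]

print(squares_of_evens)      # [4, 16]
print(squares_of_evens_alt)  # [4, 16]
```

Both produce the same result; the point is that the language nudges you away from the functional form rather than supporting it well.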
I was recently writing code using Reactor/RxJava in Java 11 with Lombok. I don't think I've ever been so productive, or led a team as productive, as when we were going ham on functional/reactive Java. Now that I'm back in Python land, I am constantly frustrated with both the language and the runtime at every turn. Even the asyncio work we are doing feels like the absolute minimum viable product compared to the Java, Node, or Rust I have done.
There are some fantastic Python enhancements that bridge some of the gaps, like PEP 484 (type hints) and PEP 517 (holy crap, an actual build system that is not dogcrap), but it feels like the Python community does not care.
I've never seen Python described as verbose. And Rx has Python libraries.
If you think scientific programmers give a damn about build systems, you don't know what you're talking about.
You can't dismiss the fact that hiring for Python is hard. You think you're getting a good programmer, because they know all the leetcode tricks, but that person turns out to be a dud.
> You can't dismiss the fact that hiring for Python is hard.
I deny that hiring for Python is hard beyond “hiring is hard”.
> You think you're getting a good programmer, because they know all the leetcode tricks,
Unless I'm hiring for a role that consists largely of reproducing leetcode tricks, I wouldn't conclude someone is good for it because they're good at those. In fact, leetcode is mercilessly mocked as being almost completely irrelevant as a positive signal for hiring. It may be useful as a filtering tool to reduce an overwhelming volume of applicants to a manageable one, in situations where high rates of both false negatives and false positives are tolerable so long as there's some slight advantage over just randomly discarding applicants.
Well, considering I made the initial statement: no, it's not. A company I used to work for once hired exclusively from the must-have-Python-experience pool (not my decision), and the CTO fawned over how well one candidate solved this "well-known leetcode problem", while he utterly failed my problem, which tests for actually useful competency. Of course he was hired, and he turned out to be a complete lemon. I remember that hiring round distinctly: everyone we interviewed for that position (n ~ 10) was competent on the leetcode problem, but never did basic things in my interview like "write tests" or "don't try to make a complicated algorithm", even when told explicitly to do (or not do) those things.
Outside of that, I interviewed several of my friends (I know them from a non-programming context, so I didn't know their competency) who were predominantly Python devs, and completely noped out of them for the same reasons (and these were my friends).
Can you elaborate? If someone can pump out leetcode, I would assume they would be a half-decent programmer, and it would just take some time for them to be as productive as you wanted. Then again, I'm mostly self-taught and have never done leetcode, and I still manage to be a good programmer, according to those I've worked with.
I personally think that the leetcode-interview-passers I have had to work with do poorly on actually useful tasks like writing organized code, writing tests, and documenting.
Probably, but those are things they can pick up on the job pretty quickly, while it's a lot harder to teach leetcode-style problem solving while solving actual business problems.
Maybe you have a different experience, but none of the actual business problems I deal with are leetcode-style. Then again, I have recently stopped calling myself a software developer (and have long since stopped calling myself a software engineer) in favor of the more accurate "software plumber". But I imagine even coding tasks that require algorithmic knowledge and insight benefit from well-organized and well-designed code. Doing that correctly is often a matter of insight and judgement, and I think those are often not "easily learned".
> Why are you hiring for language skills rather than problem solving aptitude and conceptual fundamentals?
All I'm saying is that the signal-to-noise ratio of the common tests for 'problem solving aptitude and conceptual fundamentals' is much lower when you are hiring for a Python position. You think you're hiring for those things, but you're actually hiring leetcode-optimizers.
I mean, I'm not trying to hire like that, and I think I have an interview that tests for it effectively, but I have had to deal with the downstream effects of people who do hire like this, and that has been a real problem for me.
If you instead hire for "engineering" positions, without caring about what languages the candidate knows, you can interview for their ability to solve practical programming exercises [1] in whatever language they are most familiar/comfortable with. Maybe this only works at FAANG-level hiring, but in those contexts, top-tier candidates can get things done in any language, and that's really what matters, no? But more to your point, I've generally found that candidates who pick Python (or Ruby/Perl/etc.) can actually accomplish more, and therefore prove their capabilities, in the space of an interview, simply because they're picking a more expressive language. Bad candidates will prove they are bad candidates no matter what language they choose.
[1]: E.g., reading/manipulating files, merging/filtering/sorting data, generating and processing multi-dimensional data structures, etc.
I personally have an interview that essentially tests whether you can produce well-organized code for a challenging algorithm that is presented as-is (the steps of the algorithm are provided, so no cleverness is necessary; in fact, you are dinged in my interview for cleverness if it introduces errors). But I do not control hiring for everyone I have to work with.
I don't think Python is a bad language, it just often gets used where it shouldn't.
Python is one of the few languages that has a balance of ease of use, ecosystem, ubiquity, and useable type system. It's a fantastic glue language and it's extremely flexible.