
Nah, it's not nicer. dplyr is way better than pandas. But there's no end to the supply of Python fanbois who only know Python and assume that whatever's in Python just has to be better.


I don't mind pandas so much, although dplyr is quite nice IMO (feels like natural language and declarative/SQL like, whereas pandas ends up with lots of procedural idioms).

ggplot is something that I don't think matplotlib is comparable to at all, though. I am so much faster at iterating on a visualization with R/ggplot than Python/matplotlib. Maybe it is my tooling, though. How about others who have used both? What are your experiences?


No, same here. I tried to recreate some COVID rate graphs in Python. The ggplot code did faceting and fitted a LOESS to the data. Nothing groundbreaking, but it really hit the limits of what seaborn was able to do, and I wasn't able to tinker with it much further. It got to the point where, to make it look good, I needed to calculate all the curves manually.


ggplot >> matplotlib and dplyr >> pandas. It's not even close, IMO.


Pandas is used in some top 10 banks for analytics. Its performance is abysmal at the scale used there. Nobody wants to invest resources in training analysts to write high performance code so here we are. I have never viewed SQL more highly after seeing the mess that analysts make when writing imperative code.


No surprise there - pandas encourages ugly, inefficient code with its bloated, unintuitive API.

Once I was a lead on a new project and asked the intern to write some basic ETL code for data in some spreadsheets. I said she could write it in Python if she wanted, because "Python is good for ETL", right?

This intern was not dumb by any means, but she wrote code that took 5 minutes to do something that can be done in <1 second with the obvious dplyr approach.
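For illustration only (a hypothetical sketch, not the intern's actual code): the minutes-vs-sub-second gap usually comes down to algorithmic shape rather than language. Hand-rolled ETL code often re-scans the data once per group, while dplyr's `group_by()` + `summarise()` (or a SQL `GROUP BY`) does a single pass.

```python
from collections import defaultdict
import random

# Toy stand-in for the spreadsheet rows: (group, value) pairs.
rows = [(random.randrange(100), random.random()) for _ in range(50_000)]

# Anti-pattern: re-scan the whole table once per group, the shape of
# much hand-rolled "loop over groups, filter, aggregate" ETL code.
def slow_totals(rows):
    groups = {g for g, _ in rows}
    return {g: sum(v for gg, v in rows if gg == g) for g in groups}

# Single pass over the data, the shape of what group_by/summarise or
# a SQL GROUP BY does for you.
def fast_totals(rows):
    totals = defaultdict(float)
    for g, v in rows:
        totals[g] += v
    return dict(totals)
```

With 100 groups the first version touches every row 100 times; the gap widens with group count, which is how a small job balloons into minutes.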

Also, if your bank analysts pick up dplyr, they can use dbplyr to write SQL for them :)


R’s meta programming facilities are head and shoulders above Python’s, which I think explains the brilliance of dplyr and dbplyr. But I feel like with R you have to scrape back a bunch of layers to get to the Schemey parts. I’ve always wondered what Hadley and Co would have done with dplyr and dbplyr had they had something like Racket at their disposal.


Unfortunately, R's success killed xlisp-stat: http://homepage.divms.uiowa.edu/~luke/xls/xlsinfo/

Edit: or maybe it's not dead? I just found http://www.user2019.fr/static/pres/t246174.pdf


I was offended the first time I encountered R's nonstandard evaluation, but it didn't take long to accept it. Now I wonder why anyone would want to write `mytable.column` a million times when it's obvious from context which `column` is being referred to, and the computer can reliably figure it out for you with some simple scoping rules. It's a superior notation that facilitates focus on the real underlying problem, and data analysts love that.
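A rough Python sketch of what that notation buys (a toy, not how R actually implements nonstandard evaluation): evaluate an expression with the table's columns injected into scope, so bare names resolve to columns first.

```python
def with_columns(table, expr):
    """Toy 'nonstandard evaluation': bare names in expr resolve to
    the table's columns (table is assumed to map name -> list)."""
    return eval(expr, {}, dict(table))

mytable = {"price": [10, 20, 30], "qty": [2, 1, 5]}

# No mytable.price / mytable.qty repetition needed:
revenue = with_columns(mytable, "[p * q for p, q in zip(price, qty)]")
# revenue == [20, 20, 150]
```

The scoping rule is exactly the "look in the data first, then the enclosing environment" idea; R's version is far more capable (it captures unevaluated expressions), but the ergonomic payoff is the same.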


IMO they should just bite the bullet and learn proper SQL. I say this as a data scientist who learned SQL later than C, Matlab, R, Python/Pandas (though earlier than PySpark).


I agree. SQL is nothing to be afraid of, and there's no happier place to be analyzing huge tabular datasets than in a modern columnar database.
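The barrier to entry really is low. A toy sketch using Python's stdlib `sqlite3` (row-oriented, not columnar, but the SQL reads the same on DuckDB or Postgres); the table and column names here are made up for the example:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE trades (sym TEXT, px REAL, qty INTEGER)")
con.executemany(
    "INSERT INTO trades VALUES (?, ?, ?)",
    [("AAPL", 150.0, 10), ("AAPL", 151.0, 5), ("MSFT", 300.0, 2)],
)

# A declarative GROUP BY instead of hand-written aggregation loops.
notional = dict(con.execute(
    "SELECT sym, SUM(px * qty) FROM trades GROUP BY sym ORDER BY sym"
))
# notional == {"AAPL": 2255.0, "MSFT": 600.0}
```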


R’s data.table package is faster at these things out of the box than any single instance of a database server I’ve encountered. This is frustrating because I’m trying to explain some systemic issues we suffer by not using a relational database, but it’s really hard to make my case when data.table is one install.packages away and a version upgrade from Postgres 9 to something a little faster is gatekept by bureaucracy. I’ve been trying for months!


You need a columnar database for good performance. Try DuckDB to ease them into it, it's a columnar SQLite.


Thanks, I’m checking it out; it seems like one to keep an eye on. Lots of properties that would be useful in our shared computing environment, like not requiring root or Docker.


Might also be worth running a local instance of Postgres 13. Super easy to do on Windows without administrator rights.


Pandas/Python is amazingly prevalent at trading firms. And every day, we bitch about the performance, we bitch about the stupid API, we bitch about the GIL, the lack of expressiveness. The list goes on and on. But for some braindead reason, we never switch to Julia. It's masochistic.


I do think Julia is a far better language for numerics than python, but compared to DataFrames.jl, pandas can be quite fast. I know, "but it's easier to make it faster in Julia". Last I checked `sort(df, :col)` was significantly slower than `df[sortperm(df[:col])]`. Someone actually has to go through and make these libraries fast.
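For readers who don't write Julia, the two forms being compared do the same thing. Roughly, in Python terms (a toy column-store, not DataFrames.jl's actual implementation):

```python
# A tiny column-store: each column is a plain list.
df = {"col": [3, 1, 2], "val": ["c", "a", "b"]}

# The analogue of Julia's sortperm(df[:col]): the permutation of row
# indices that sorts the key column.
order = sorted(range(len(df["col"])), key=df["col"].__getitem__)

# Apply that permutation to every column, like df[sortperm(...)].
sorted_df = {name: [vals[i] for i in order] for name, vals in df.items()}
# sorted_df == {"col": [1, 2, 3], "val": ["a", "b", "c"]}
```

The point of the complaint is that the convenient front door (`sort(df, :col)`) and the manual permutation should cost the same, and someone has to do the library work to make that true.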

Second issue, in my field (bioinformatics) the script is still a pretty common unit of code. Without cached compilation being a simple flag, Julia often is slower.


Yeah, that's a good point. DataFrames.jl starts to really shine when the cookie-cutter pandas functions aren't adequate for what you need to do. DataFrames.jl can certainly be slower in some cases, but you should expect a consistent level of performance no matter what you do. This is a far cry from pandas, which tanks by large factors when you start calling Python code vs C code.

In regards to Julia's compilation problem, you can use https://github.com/JuliaLang/PackageCompiler.jl to precompile an image, allowing you to avoid paying the JIT performance penalty over and over again.


I use Python at work, but R is the uber weapon.


Yeah. Pick one, learn it, and you'll be fine, no matter if you choose Python or R.

Personally, I prefer R for my use case which is longitudinal analysis of experimental data.



