No surprise I use python, but I’ve recently started experimenting with polars instead of pandas. I’ve enjoyed it so far, but I’m not sure whether the benefits for my team’s work will be enough to outweigh the cost of migrating our existing pandas/numpy code to polars.

I’ve also started playing with grafana, as a quick dashboarding utility to make some basic visualizations on some live production databases.

  • Kache@lemm.ee · 3 days ago

    What kind of query optimization can it do for scanning data that’s already in memory?

    • rutrum@lm.paradisus.day (OP) · 2 days ago

      A big feature of polars is only loading applicable data from disk. But during exploratory data analysis (EDA) you often already have the whole dataset in memory, so those filters won’t help much there. Polars has a good page in their docs covering all the optimizations it’s capable of: https://docs.pola.rs/user-guide/lazy/optimizations/
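      As a minimal sketch of what that looks like (the file name and toy data are just for illustration), `scan_parquet` plus the lazy API lets the optimizer push the filter and column selection down into the reader, so only the matching rows and columns ever leave disk:

      ```python
      import polars as pl

      # Toy dataset written to parquet so the lazy scan has something to read.
      pl.DataFrame({
          "region": ["EU", "US", "EU", "APAC"],
          "revenue": [100, 200, 150, 90],
          "cost": [60, 120, 80, 50],
      }).write_parquet("sales.parquet")

      # Lazy scan: nothing is read yet. The optimizer pushes the filter
      # and the column selection into the parquet reader, so non-EU rows
      # and the unused "cost" column are never materialized.
      query = (
          pl.scan_parquet("sales.parquet")
          .filter(pl.col("region") == "EU")
          .select("region", "revenue")
      )

      print(query.explain())   # optimized plan shows the pushdowns
      result = query.collect()  # 2 rows, 2 columns
      ```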

      One I see off the top is projection pushdown, which only selects the columns needed for the final transformation. In pandas, if you perform a group by with aggregation and then only look at a few columns, you still performed the aggregation across all the data. In polars’ lazy API, you define the entire process upfront, so it knows not to aggregate the columns you never use.
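      A small sketch of that pandas-vs-polars difference (toy data, and the exact plan output depends on your polars version): because the whole pipeline is declared before `collect()`, the optimizer can drop the unused column before the group-by runs.

      ```python
      import polars as pl

      df = pl.DataFrame({
          "group": ["a", "a", "b", "b"],
          "x": [1, 2, 3, 4],
          "y": [10.0, 20.0, 30.0, 40.0],
      })

      # Aggregate both columns, but only keep the x sum at the end.
      # In eager pandas the y aggregation would run anyway; here the
      # optimizer's projection pushdown can prune it from the plan.
      lazy = (
          df.lazy()
          .group_by("group")
          .agg(pl.col("x").sum(), pl.col("y").sum())
          .select("group", "x")
      )

      print(lazy.explain())  # inspect the optimized plan
      out = lazy.collect()
      ```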

      • Kache@lemm.ee · 1 day ago

        Hm, that’s kind of interesting

        But my first reaction is that optimizations at the “Python processing level” alone are going to be pretty limited, since it won’t have metadata/statistics, and it’d depend heavily on the source data layout, e.g. CSV vs parquet