
BUG: groupby.sum() is inconsistent with df.sum() for large integers #34681

Open
1 task
tom-weiss opened this issue Jun 9, 2020 · 21 comments
Assignees
Labels
Bug Dtype Conversions Unexpected or buggy dtype conversions good first issue Groupby Needs Tests Unit test(s) needed to prevent regressions Reduction Operations sum, mean, min, max, etc.

Comments

@tom-weiss

  • [x] I have checked that this issue has not already been reported.

  • [x] I have confirmed this bug exists on the latest version of pandas.

  • (optional) I have confirmed this bug exists on the master branch of pandas.


Note: Please read this guide detailing how to provide the necessary information for us to reproduce your bug.

Code Sample, a copy-pastable example

import pandas as pd


for i in range(100):
    n = 2 ** i

    print(f"Trying n = 2 ** {i} '{n}'...")

    df = pd.DataFrame([['A', 14], ['A', n]], columns=['gb', 'val'])

    gb_sum = df.groupby('gb').sum().values[0][0]
    df_sum = df.sum().values[1]

    if gb_sum != df_sum:
        print(f"df.sum().values[1] '{df_sum}' != df.groupby('gb').sum().values[0][0] '{gb_sum}'")
        break

    del df

print(f"Running pandas {pd.__version__}")

Problem description

groupby.sum() currently gives different results than df.sum() for large integers.

It is expected that they give the same results.

Expected Output

When grouping by a column with a single value, the groupby().sum() result should always equal the df.sum() result.

Output of pd.show_versions()

INSTALLED VERSIONS

commit : None
python : 3.7.7.final.0
python-bits : 64
OS : Darwin
OS-release : 19.5.0
machine : x86_64
processor : i386
byteorder : little
LC_ALL : None
LANG : None
LOCALE : en_US.UTF-8

pandas : 1.0.4
numpy : 1.18.2
pytz : 2019.3
dateutil : 2.8.1
pip : 19.0.3
setuptools : 40.8.0
Cython : 0.29.16
pytest : None
hypothesis : None
sphinx : None
blosc : None
feather : None
xlsxwriter : None
lxml.etree : None
html5lib : None
pymysql : None
psycopg2 : 2.8.3 (dt dec pq3 ext lo64)
jinja2 : None
IPython : None
pandas_datareader: None
bs4 : None
bottleneck : None
fastparquet : None
gcsfs : None
lxml.etree : None
matplotlib : None
numexpr : None
odfpy : None
openpyxl : 2.6.2
pandas_gbq : None
pyarrow : None
pytables : None
pytest : None
pyxlsb : None
s3fs : 0.4.2
scipy : None
sqlalchemy : None
tables : None
tabulate : None
xarray : None
xlrd : 1.2.0
xlwt : None
xlsxwriter : None
numba : None
Running pandas None

Process finished with exit code 0

@tom-weiss tom-weiss added Bug Needs Triage Issue that has not been reviewed by a pandas team member labels Jun 9, 2020
@TomAugspurger
Contributor

Thanks for the report. There's a cast to float64

> /Users/taugspurger/sandbox/pandas/pandas/core/groupby/generic.py(1043)_cython_agg_blocks()
-> try:
(Pdb) l
1038            for block in data.blocks:
1039                # Avoid inheriting result from earlier in the loop
1040                result = no_result
1041                locs = block.mgr_locs.as_array
1042                breakpoint()
1043 ->             try:
1044                    result, _ = self.grouper.aggregate(
1045                        block.values, how, axis=1, min_count=min_count
1046                    )
1047                except NotImplementedError:
1048                    # generally if we have numeric_only=False
(Pdb) block.values
array([[               14, 18014398509481984]])
(Pdb) how
'add'
(Pdb) n
> /Users/taugspurger/sandbox/pandas/pandas/core/groupby/generic.py(1044)_cython_agg_blocks()
-> result, _ = self.grouper.aggregate(
(Pdb) n
> /Users/taugspurger/sandbox/pandas/pandas/core/groupby/generic.py(1045)_cython_agg_blocks()
-> block.values, how, axis=1, min_count=min_count
(Pdb) n
> /Users/taugspurger/sandbox/pandas/pandas/core/groupby/generic.py(1091)_cython_agg_blocks()
-> assert not isinstance(result, DataFrame)
(Pdb) result
array([[1.80143985e+16]])
(Pdb) result.dtype
dtype('float64')
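The pdb output above shows the int64 block (containing 18014398509481984, i.e. 2**54) being summed into a float64 result. float64 has a 53-bit significand, so not every integer above 2**53 can be represented exactly; a minimal sketch of the precision loss:

```python
# float64 has a 53-bit significand: above 2**53, not every integer is exact
n = 14 + 2 ** 54
assert int(float(n)) != n  # the round trip through float64 changes the value
```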

I think it occurs somewhere in groupby/ops.py

> /Users/taugspurger/sandbox/pandas/pandas/core/groupby/ops.py(519)_cython_operation()
-> func, values = self._get_cython_func_and_vals(kind, how, values, is_numeric)

@tom-weiss are you interested in debugging any further?

@TomAugspurger
Contributor

Oh, perhaps it's as simple as us just not having an int64-dtype group_add function in libgroupby. Does that sound right @WillAyd?

@TomAugspurger TomAugspurger added Dtype Conversions Unexpected or buggy dtype conversions Groupby and removed Needs Triage Issue that has not been reviewed by a pandas team member labels Jun 10, 2020
@WillAyd
Member

WillAyd commented Jun 10, 2020

At first glance seems logical; not sure why it's not there to begin with

@tom-weiss
Author

I am happy to help debug and assist as I can but am entirely unfamiliar with the inner workings of the pandas codebase.

@WillAyd
Member

WillAyd commented Jun 10, 2020

The line of interest is here:

ctypedef fused complexfloating_t:

I don't know the history as to why only complex / floating are listed here. This may be better served just using Cython.numeric instead of a custom fused type

In any case if you wanted to try, I would suggest adding a test for the issue in pandas/tests/groupby/aggregate/test_aggregate.py and changing the lines above as described to see if you can get it to work

@tom-weiss
Author

tom-weiss commented Jun 10, 2020

So the issue appears to be how to handle NaN values. It's easy enough to switch to a numeric type, but then I get issues on line 482:

                if nobs[i, j] < min_count:
                    out[i, j] = NAN

I can't assign type 'long double' to 'uint64_t'

@WillAyd
Member

WillAyd commented Jun 10, 2020

You can qualify the preceding condition with something like if numeric is not cython.integral_t to prevent assigning NA values to integer types

@WillAyd
Member

WillAyd commented Jun 10, 2020

@tom-weiss
Author

So it's quite easy to fix this to work with integers, but the required pandas functionality with min_count in the groupby() method is to return NaN for groups that don't reach min_count:
https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.sum.html

Does anyone know how I can return the NaN in the case where I want to do integer arithmetic?

@jreback
Contributor

jreback commented Jun 13, 2020

We always cast to float64 because sum can overflow int64 with large inputs; we do try to cast back if possible, I think.
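For illustration, this is the overflow that motivates the cast (a sketch using NumPy's wrapping int64 arithmetic; the values are chosen arbitrarily):

```python
import numpy as np

# summing two int64 values whose total exceeds 2**63 - 1 wraps around
a = np.array([2 ** 62, 2 ** 62], dtype=np.int64)
assert a.sum() < 0  # 2**63 wraps to -2**63 instead of raising
```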

@tom-weiss
Author

@jreback yes I can see the results cast back to int64 but we're getting rounding errors that are throwing the results out. These are not present in the standard df.sum() path.

@jreback
Contributor

jreback commented Jun 13, 2020

The solution, of course, is to add a groupby int64 sum kernel.

@tom-weiss
Author

Is there any easy way to make int64 nullable in cython?

The _group_add function in pandas/_libs/groupby.pyx uses numpy types, which are non-nullable for integers.

I'm not clear how I could implement the new nullable integer type into this:
https://pandas.pydata.org/pandas-docs/stable/user_guide/integer_na.html.

My current code is here:


ctypedef fused complexfloating_t:
    int16_t
    int32_t
    int64_t
    int8_t
    uint16_t
    uint32_t
    uint64_t
    uint8_t
    float64_t
    float32_t
    complex64_t
    complex128_t


@cython.wraparound(False)
@cython.boundscheck(False)
def _group_add(complexfloating_t[:, :] out,
               int64_t[:] counts,
               complexfloating_t[:, :] values,
               const int64_t[:] labels,
               Py_ssize_t min_count=0):
    """
    Only aggregates on axis=0
    """
    cdef:
        Py_ssize_t i, j, N, K, lab, ncounts = len(counts)
        complexfloating_t val, count
        complexfloating_t[:, :] sumx
        int64_t[:, :] nobs

    if len(values) != len(labels):
        raise ValueError("len(index) != len(labels)")

    nobs = np.zeros((<object>out).shape, dtype=np.int64)
    sumx = np.zeros_like(out)

    N, K = (<object>values).shape

    with nogil:
        for i in range(N):
            lab = labels[i]
            if lab < 0:
                continue

            counts[lab] += 1
            for j in range(K):
                val = values[i, j]

                # not nan
                if val == val:
                    nobs[lab, j] += 1
                    if (complexfloating_t is complex64_t or
                            complexfloating_t is complex128_t):
                        # clang errors if we use += with these dtypes
                        sumx[lab, j] = sumx[lab, j] + val
                    else:
                        sumx[lab, j] += val

        for i in range(ncounts):
            for j in range(K):
                if nobs[i, j] < min_count:
                    if  (complexfloating_t is int8_t or
                            complexfloating_t is int16_t or
                            complexfloating_t is int32_t or
                            complexfloating_t is int64_t or
                            complexfloating_t is uint64_t or
                            complexfloating_t is uint32_t or
                            complexfloating_t is uint16_t or
                            complexfloating_t is uint8_t):
                        out[i, j] = 0 # this needs to be pd.NA. How to do?
                    else:
                        out[i, j] = NAN
                else:
                    out[i, j] = sumx[i, j]


group_add_int8 = _group_add['int8_t']
group_add_int16 = _group_add['int16_t']
group_add_int32 = _group_add['int32_t']
group_add_int64 = _group_add['int64_t']
group_add_uint64 = _group_add['uint64_t']
group_add_uint32 = _group_add['uint32_t']
group_add_uint16 = _group_add['uint16_t']
group_add_uint8 = _group_add['uint8_t']
group_add_float32 = _group_add['float32_t']
group_add_float64 = _group_add['float64_t']
group_add_complex64 = _group_add['float complex']
group_add_complex128 = _group_add['double complex']

@jreback
Contributor

jreback commented Jun 14, 2020

Nullables are not supported directly in Cython. This can be worked around by actually passing the mask to the Cython functions, but it will require some reworking of the code and calling conventions.
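For reference, this is essentially how the nullable Int64 extension array is laid out already: an int64 data buffer plus a separate boolean mask. The `_data` and `_mask` attributes below are internal pandas details, shown here only to illustrate the representation:

```python
import pandas as pd

arr = pd.array([1, None, 3], dtype="Int64")
# an int64 buffer plus a boolean mask marking the missing slots
assert str(arr._data.dtype) == "int64"
assert arr._mask.tolist() == [False, True, False]
```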

@TomAugspurger
Contributor

For this specific case (int64), we might not need to worry about min_count in the cython level. That can be done outside of it, since we know that the number of valid values equals the length of the array.

However a masked groupby sum would be helpful for the nullable integer dtype.
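A sketch of what handling min_count outside the kernel could look like for plain int64 input (hypothetical post-processing, not the actual pandas code path):

```python
import numpy as np
import pandas as pd

# suppose the kernel returned pure-int64 group sums plus per-group counts
sums = np.array([15, 7], dtype=np.int64)
counts = np.array([3, 1], dtype=np.int64)
min_count = 2

# apply min_count afterwards by moving to the nullable Int64 dtype,
# so groups below the threshold can hold a true missing value
result = pd.array(sums, dtype="Int64")
result[counts < min_count] = pd.NA
```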

@tom-weiss
Author

If someone can point me in the right direction I am happy to try and fix or help validate a fix.

@jbrockmendel jbrockmendel added the Reduction Operations sum, mean, min, max, etc. label Sep 21, 2020
@rhshadrach
Member

The OP example now has gb_sum always equal to df_sum. It would be good to run a git bisect to see which PR fixed this and whether it needs tests.
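A regression test along the lines suggested above might look like this (a sketch; the test name and the chosen value of n are placeholders, and it would live somewhere under pandas/tests/groupby/):

```python
import pandas as pd

def test_groupby_sum_matches_frame_sum_large_int():
    # GH#34681: the groupby sum must not round-trip int64 through float64
    n = 2 ** 60
    df = pd.DataFrame({"gb": ["A", "A"], "val": [14, n]})
    gb_sum = df.groupby("gb")["val"].sum().iloc[0]
    assert gb_sum == df["val"].sum() == 14 + n
```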

@rhshadrach rhshadrach added good first issue Needs Tests Unit test(s) needed to prevent regressions labels Jul 15, 2023
@deejlucas

take

@deejlucas

Update: It's been slow going as it took me a while to build an environment where I could perform git bisect for versions 1.x.

I have results from the bisect now that say that the bug has been fixed between d3f0856 and 1ce1c3c. I will double check that result, write some tests, and hopefully submit a PR soon.

@jahn96
Contributor

jahn96 commented Jul 25, 2024

@deejlucas happy to take on this issue if you aren't working on it

@jahn96
Contributor

jahn96 commented Jul 26, 2024

take


8 participants