Handling missing data

Considerations for managing missing data, and a look at how Pandas tools can address it in Python.

By Jake VanderPlas
August 3, 2015

The difference between data found in many tutorials and data in the real world is that real-world data is rarely clean and homogeneous. In particular, many interesting datasets will have some amount of data missing. To make matters even more complicated, different data sources may indicate missing data in different ways.

In this section, we will discuss some general considerations for missing data, discuss how Pandas chooses to represent it, and demonstrate some built-in Pandas tools for handling missing data in Python. Here and throughout the book, we’ll refer to missing data in general as “null”, “NaN”, or “NA” values.

Tradeoffs in Missing Data Conventions

There are a number of schemes that have been developed to indicate the presence of missing data in an array of data. Generally, they revolve around one of two strategies: using a mask which globally indicates missing values, or choosing a sentinel value which indicates a missing entry.

In the masking approach, the mask might be an entirely separate boolean array, or it may involve appropriation of one bit in the data representation to locally indicate the null status of a value.

In the sentinel approach, the sentinel value could be some data-specific convention, such as indicating a missing integer value with -9999 or some rare bit pattern, or it could be a more global convention, such as indicating a missing floating point value with NaN (Not a Number), a special value which is part of the IEEE floating point specification.
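To make the two strategies concrete, here is a minimal sketch in NumPy; the array values and the -9999 sentinel are illustrative conventions, not fixed standards:

import numpy as np

# Masking approach: a separate boolean array records which entries are missing
values = np.array([1, -9999, 3, 4])
mask = np.array([False, True, False, False])  # True marks a missing entry

# Sentinel approach: a reserved in-band value marks missingness directly
SENTINEL = -9999
is_missing = (values == SENTINEL)  # recovers the same mask from the data itself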

None of these approaches is without tradeoffs: use of a separate mask array requires allocation of an additional boolean array which adds overhead in both storage and computation. A sentinel value reduces the range of valid values which can be represented, and may require extra (often non-optimized) logic in CPU & GPU arithmetic. Common special values like NaN are not available for all data types.

As in most cases where no universally optimal choice exists, different languages and systems use different conventions. For example, the R language uses reserved bit patterns within each data type as sentinel values indicating missing data, while the SciDB system uses an extra byte attached to every cell which indicates a NA state.

Missing Data in Pandas

Pandas’ choice for how to handle missing values is constrained by its reliance on the NumPy package, which does not have a built-in notion of NA values for non-floating-point datatypes.

Pandas could have followed R’s lead in specifying bit patterns for each individual data type to indicate nullness, but this approach turns out to be rather unwieldy in Pandas’ case. While R contains four basic data types, NumPy supports far more than this: for example, while R has a single integer type, NumPy supports fourteen basic integer types once you account for available precisions, signedness, and endianness of the encoding. Reserving a specific bit pattern in all available NumPy types would lead to an unwieldy amount of overhead in special-casing various operations for various types, and the implementation would probably require a new fork of the NumPy package.

NumPy does have support for masked arrays, i.e., arrays that carry a separate boolean mask marking data as “good” or “bad”. Pandas could have derived from this, but the overhead in storage, computation, and code maintenance makes that an unattractive choice.
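As a brief illustration, NumPy’s np.ma module implements exactly this masked-array strategy:

# A masked array pairs the data with a boolean mask; True marks entries as "bad"
m = np.ma.array([1, 2, 3, 4], mask=[False, True, False, False])
m.sum()  # 8: masked entries are ignored in aggregations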

With these constraints in mind, Pandas chose to use sentinels for missing data, and further chose to use two already-existing Python null values: the special floating-point NaN value, and the Python None object. This choice has some side-effects, as we will see, but in practice ends up being a good compromise in most cases of interest.

None: Pythonic Missing Data

The first sentinel value used by Pandas is None, a Python singleton object that is often used for missing data in Python code. Because it is a Python object, None cannot be used in arbitrary NumPy/Pandas arrays, but only in arrays with data type 'object' (i.e., arrays of Python objects):

import numpy as np
import pandas as pd

vals1 = np.array([1, None, 3, 4])
vals1  # array([1, None, 3, 4], dtype=object)

This dtype=object means that the best common type representation NumPy could infer for the contents of the array is that they are Python objects. While this kind of object array is useful for some purposes, any operations on the data will be done at the Python level, with much more overhead than the typically fast operations seen for arrays with native types.

# %timeit is an IPython magic, so this snippet runs in an IPython/Jupyter session
for dtype in ['object', 'int']:
    print("dtype =", dtype)
    %timeit np.arange(1E6, dtype=dtype).sum()
    print()

The use of Python objects in an array also means that if you perform aggregations like sum() or min() across an array with a None value, you will generally get an error.

vals1.sum()  # raises TypeError: unsupported operand type(s) for +: 'int' and 'NoneType'

This is because addition in Python between an integer and None is undefined.

NaN: Missing Numerical Data

The other missing data representation, NaN (acronym for Not a Number) is different: it is a special floating-point value that is recognized by all systems which use the standard IEEE floating-point representation.

vals2 = np.array([1, np.nan, 3, 4])
vals2.dtype  # dtype('float64')

Notice that NumPy chose a native floating-point type for this array: this means that unlike the object array above, this array supports fast operations pushed into compiled code. You should be aware that NaN is a bit like a data virus which infects any other object it touches. Regardless of the operation, the result of arithmetic with NaN will be another NaN:

1 + np.nan  # nan
0 * np.nan  # nan
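A related IEEE quirk: NaN compares unequal to everything, including itself, so equality tests cannot be used to detect it:

np.nan == np.nan  # False: NaN is never equal to anything, even itself
np.isnan(np.nan)  # True: use np.isnan() to test for NaN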

Note that this propagation means that the sum or maximum of the values is well-defined (it doesn’t result in an error), but not very useful:

vals2.sum(), vals2.min(), vals2.max()  # (nan, nan, nan)

Keep in mind that NaN is specifically a floating-point value; there is no equivalent NaN value for integers, strings, or other types.
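NumPy does provide special NaN-aware versions of these aggregations that simply ignore the missing values:

np.nansum(vals2), np.nanmin(vals2), np.nanmax(vals2)  # (8.0, 1.0, 4.0)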

NaN and None in Pandas

Each of the above sentinel representations has its place, and Pandas is built to handle the two of them nearly interchangeably, and will convert between the two sentinel values where appropriate:

data = pd.Series([1, np.nan, 2, None])
data  # the Series is upcast to float64, and None is converted to NaN

Keep in mind, though, that because None is a Python object type and NaN is a floating-point type, there is no in-type NA representation in Pandas for string, boolean, or integer values. Pandas gets around this by type-casting in cases where NA values are present. For example, if we set a value in an integer array to np.nan, it will automatically be up-cast to a floating point type to accommodate the NA:

x = pd.Series(range(2), dtype=int)
x[0] = None
x  # the dtype is now float64, and the None has become NaN

Notice that in addition to casting the integer array to floating point, Pandas automatically converts the None to a NaN value. Though this type of magic may feel a bit hackish compared to the more unified approach to NA values in domain-specific languages like R, the Pandas sentinel/casting approach works well in practice and in my experience only rarely causes issues.

Here is a short table of the upcasting conventions in Pandas when NA values are introduced:

Typeclass   Promotion when storing NAs   NA sentinel value
floating    no change                    np.nan
object      no change                    None or np.nan
integer     cast to float64              np.nan
boolean     cast to object               None or np.nan

Keep in mind that in Pandas, string data is always stored with an object dtype.
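A quick check confirms this:

pd.Series(['a', 'b', 'c']).dtype  # dtype('O'), i.e. object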

Operating on Null Values

As we have seen, Pandas treats None and NaN as essentially interchangeable for indicating missing or null values. To facilitate this convention, there are several useful methods for detecting, removing, and replacing null values in Pandas data structures. They are:

  • isnull(): generate a boolean mask indicating missing values
  • notnull(): opposite of isnull()
  • dropna(): return a filtered version of the data
  • fillna(): return a copy of the data with missing values filled or imputed

We will finish this section with a brief discussion and demonstration of these routines:

Detecting Null Values

Pandas data structures have two useful methods for detecting null data: isnull() and notnull(). Either one will return a boolean mask over the data, for example:

data = pd.Series([1, np.nan, 'hello', None])
data.isnull()  # True for the NaN and None entries

As mentioned in section X.X, boolean masks can be used directly as a Series or DataFrame index:

data[data.notnull()]  # keeps only the non-null entries

The isnull() and notnull() methods produce similar boolean results for DataFrames.
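For example, calling isnull() on a small DataFrame (constructed here just for illustration) returns an element-wise boolean DataFrame of the same shape:

df_demo = pd.DataFrame({'A': [1, np.nan], 'B': ['x', None]})
df_demo.isnull()  # True wherever a value is NaN or None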

Dropping Null Values

In addition to the masking used above, there are the convenience methods dropna() (which removes NA values) and fillna() (which fills them in). For a Series, the result is straightforward:

data.dropna()  # returns the Series with the null entries removed

For a DataFrame, there are more options. Consider the following DataFrame:

df = pd.DataFrame([[1,      np.nan, 2],
                   [2,      3,      5],
                   [np.nan, 4,      6]])
df

We cannot drop single values from a DataFrame; we can only drop full rows or full columns. Depending on the application, you might want one or the other, so dropna() gives a number of options for a DataFrame.

By default, dropna() will drop all rows in which any null value is present:

df.dropna()  # keeps only the middle row, the only row with no nulls

Alternatively, you can drop NA values along a different axis: axis=1 drops all columns containing a null value:

df.dropna(axis=1)  # keeps only column 2, the only column with no nulls

But this drops some good data as well; you might rather be interested in dropping rows or columns with all NA values, or a majority of NA values. This can be specified through the how or thresh parameters, which allow fine control of the number of nulls to allow through.

The default is how='any', such that any row or column (depending on the axis keyword) containing a null value will be dropped. You can also specify how='all', which will only drop rows/columns which are all null values:

df[3] = np.nan  # add a new column of all-NaN values
df
df.dropna(axis=1, how='all')  # drops only the all-NaN column

For clarity, you can use axis='rows' rather than axis=0, and axis='columns' rather than axis=1.

For finer-grained control, the thresh parameter lets you specify a minimum number of non-null values for the row/column to be kept:

df.dropna(thresh=3)  # keeps only rows with at least three non-null values

Here the first and last rows have been dropped, because each contains only two non-null values.

Filling Null Values

Sometimes rather than dropping NA values, you’d rather replace them with a valid value. This value might be a single number like zero, or it might be some sort of imputation or interpolation from the good values. You could do this in-place using the isnull() method as a mask, but because it is such a common operation Pandas provides the fillna() method, which returns a copy of the array with the null values replaced.

Consider the following Series:

data = pd.Series([1, np.nan, 2, None, 3], index=list('abcde'))
data

We can fill NA entries with a single value, such as zero:

data.fillna(0)

We can specify a forward-fill to propagate the previous value forward:

# forward-fill
data.fillna(method='ffill')

Or we can specify a back-fill to propagate the next values backward:

# back-fill
data.fillna(method='bfill')

For DataFrames, the options are similar, but we can also specify an axis along which the fills take place:

df
df.fillna(method='ffill', axis=1)  # forward-fill along each row, left to right

Notice that if a previous value is not available during a forward fill, the NA value remains.

Summary

Here we have seen how Pandas handles null/NA values, and seen a few DataFrame and Series methods specifically designed to handle these missing values in a uniform way. Missing data is a fact of life in real-world datasets, and we’ll see these tools often in the following chapters.
