Data Cleaning and Preprocessing
75 min
Data cleaning involves handling missing values, duplicates, and data type conversions to prepare raw data for analysis. Real-world data is often messy: incomplete, inconsistent, or incorrectly formatted. Cleaning transforms it into a usable format, and is the foundation of any quality analysis of real-world data.
Handling missing values uses methods like df.dropna() (remove rows or columns containing missing values), df.fillna() (fill with a constant or computed value), and df.interpolate() (estimate gaps from neighboring values). Unhandled missing values can bias analysis, so choose a strategy based on the data: drop rows when only a few values are missing, fill or interpolate when dropping would discard too much data.
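A minimal sketch of these three strategies (the column names and values are made up for illustration):

```python
import pandas as pd
import numpy as np

df = pd.DataFrame({
    'temp': [20.5, np.nan, 22.1, np.nan, 23.0],
    'city': ['A', 'B', None, 'D', 'E'],
})

# Drop any row with a missing value: only rows 0 and 4 survive here
dropped = df.dropna()

# Fill: numeric columns with a statistic, text columns with a sentinel
filled = df.copy()
filled['temp'] = filled['temp'].fillna(filled['temp'].mean())
filled['city'] = filled['city'].fillna('unknown')

# Interpolate: estimate each gap linearly from its neighbors
interp = df['temp'].interpolate()  # row 1 becomes (20.5 + 22.1) / 2 = 21.3
```

Note that dropna() discards three of the five rows here, which illustrates why dropping is best reserved for data with few gaps.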
Removing duplicates uses df.drop_duplicates() to eliminate repeated rows and preserve data integrity. Duplicates can skew counts and statistics, so identify them first with df.duplicated() before removing anything.
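For example, on a toy DataFrame (column names are hypothetical):

```python
import pandas as pd

df = pd.DataFrame({
    'name': ['Ann', 'Bob', 'Ann', 'Cy'],
    'score': [90, 85, 90, 70],
})

mask = df.duplicated()          # flags row 2, an exact repeat of row 0
deduped = df.drop_duplicates()  # keeps the first occurrence by default

# subset= restricts the comparison to certain columns;
# keep='last' retains the last occurrence instead of the first
by_name = df.drop_duplicates(subset=['name'], keep='last')
```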
Data type conversions use methods like pd.to_datetime() (dates), astype() (general type conversion), and pd.to_numeric() (numbers), ensuring each column's type matches its content. Correct types enable proper operations such as date arithmetic and numeric calculations, and surface malformed values early.
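A small illustration with made-up values; errors='coerce' is a useful option that converts unparseable entries to NaN instead of raising:

```python
import pandas as pd

df = pd.DataFrame({
    'date': ['2023-01-01', '2023-06-15'],
    'count': ['10', '20'],
    'price': ['9.99', 'n/a'],
})

df['date'] = pd.to_datetime(df['date'])
df['count'] = df['count'].astype(int)
df['price'] = pd.to_numeric(df['price'], errors='coerce')  # 'n/a' becomes NaN

# Date arithmetic works once the column is a real datetime type
days = (df['date'].iloc[1] - df['date'].iloc[0]).days
```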
Preprocessing includes normalization (scaling values to a common range), encoding categorical variables (one-hot or label encoding), and outlier detection (identifying and handling extreme values). These steps turn cleaned data into model-ready input for machine learning.
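Each step can be sketched in a few lines of plain pandas (column names are hypothetical; scikit-learn offers equivalents such as MinMaxScaler and OneHotEncoder):

```python
import pandas as pd

df = pd.DataFrame({
    'income': [30000, 45000, 60000, 250000],
    'color': ['red', 'blue', 'red', 'green'],
})

# Min-max normalization: rescale income to the [0, 1] range
inc = df['income']
df['income_scaled'] = (inc - inc.min()) / (inc.max() - inc.min())

# One-hot encoding: one indicator column per category
encoded = pd.get_dummies(df, columns=['color'])

# IQR rule: flag values beyond 1.5 * IQR from the quartiles
q1, q3 = inc.quantile(0.25), inc.quantile(0.75)
iqr = q3 - q1
df['is_outlier'] = (inc < q1 - 1.5 * iqr) | (inc > q3 + 1.5 * iqr)
```

Here the 250000 income is the only value flagged as an outlier by the IQR rule.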
Best practices include exploring the data before cleaning, documenting every cleaning step, handling missing values with a deliberate strategy, removing duplicates, converting data types correctly, and validating the cleaned data before analysis.
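Validating the cleaned data can be as simple as a few assertions over the final frame. A minimal sketch (the specific checks and value ranges are illustrative, not prescriptive):

```python
import pandas as pd

df = pd.DataFrame({'name': ['Ann', 'Bob', 'Cy'], 'age': [25, 30, 28]})

# Sanity checks a cleaned dataset should pass
assert df.isnull().sum().sum() == 0     # no missing values remain
assert not df.duplicated().any()        # no duplicate rows
assert df['age'].between(0, 120).all()  # values within a plausible range
print("validation passed")
```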
Key Concepts
- Data cleaning involves handling missing values, duplicates, and type conversions.
- Missing values can be dropped (dropna) or filled (fillna).
- Duplicates are removed with drop_duplicates().
- Data type conversions ensure correct data types.
- Preprocessing includes normalization, encoding, and outlier detection.
Learning Objectives
Master
- Handling missing values with appropriate strategies
- Removing duplicates and handling inconsistencies
- Converting data types correctly
- Preprocessing data for analysis and modeling
Develop
- Understanding data quality principles
- Designing data cleaning workflows
- Appreciating cleaning's role in analysis quality
Tips
- Always check for missing values: df.isnull().sum().
- Use fillna() with appropriate strategy (mean, median, forward fill).
- Remove duplicates: df.drop_duplicates().
- Convert types: pd.to_datetime(), astype(), pd.to_numeric().
Common Pitfalls
- Not handling missing values, causing errors in calculations.
- Removing too many rows with dropna(), losing data.
- Not converting data types, causing unexpected behavior.
- Not handling outliers, skewing analysis results.
Summary
- Data cleaning involves handling missing values, duplicates, and type conversions.
- Preprocessing includes normalization, encoding, and outlier detection.
- Clean data is essential for accurate analysis and modeling.
- Mastering these techniques makes messy, real-world data workable.
Exercise
Clean a dataset with missing values, duplicates, and inconsistent data.
import pandas as pd
import numpy as np
# Create a messy dataset
messy_data = {
    'Name': ['Alice', 'Bob', 'Charlie', 'Diana', 'Eve', 'Alice', 'Frank'],
    'Age': [25, 30, np.nan, 28, 32, 25, 40],
    'City': ['New York', 'Los Angeles', 'Chicago', 'Houston', 'Phoenix', 'New York', 'Boston'],
    'Salary': [50000, 60000, 70000, 55000, 65000, 50000, 80000],
    'Department': ['IT', 'HR', 'IT', 'Finance', 'IT', 'IT', 'Marketing'],
    'Start_Date': ['2020-01-15', '2019-03-20', '2021-06-10', '2020-11-05', '2018-09-12', '2020-01-15', '2022-02-28']
}
df = pd.DataFrame(messy_data)
print("Original messy data:")
print(df)
print("\nShape:", df.shape)
# 1. Handle missing values
print("\n=== Handling Missing Values ===")
print("Missing values before cleaning:")
print(df.isnull().sum())
# Fill missing age with median
df['Age'] = df['Age'].fillna(df['Age'].median())
print("\nAfter filling missing ages:")
print(df['Age'])
# 2. Remove duplicates
print("\n=== Removing Duplicates ===")
print("Duplicates found:", df.duplicated().sum())
df_clean = df.drop_duplicates().copy()  # .copy() avoids SettingWithCopyWarning on later column assignments
print("Shape after removing duplicates:", df_clean.shape)
# 3. Convert data types
print("\n=== Converting Data Types ===")
df_clean['Start_Date'] = pd.to_datetime(df_clean['Start_Date'])
df_clean['Department'] = df_clean['Department'].astype('category')
print("Data types after conversion:")
print(df_clean.dtypes)
# 4. Handle outliers in Salary
print("\n=== Handling Outliers ===")
Q1 = df_clean['Salary'].quantile(0.25)
Q3 = df_clean['Salary'].quantile(0.75)
IQR = Q3 - Q1
lower_bound = Q1 - 1.5 * IQR
upper_bound = Q3 + 1.5 * IQR
outliers = df_clean[(df_clean['Salary'] < lower_bound) | (df_clean['Salary'] > upper_bound)]
print("Outliers in Salary:")
print(outliers[['Name', 'Salary']])
# Cap outliers
df_clean['Salary'] = df_clean['Salary'].clip(lower_bound, upper_bound)
print("\nFinal cleaned data:")
print(df_clean)
print("\nFinal shape:", df_clean.shape)