Performance Optimization and Best Practices

📚 Lesson 8 of 10 ⏱ 70 min

Performance optimization is crucial when working with large datasets: it is the difference between an analysis that runs in seconds and one that stalls for minutes or exhausts memory. In Pandas, the main levers are choosing appropriate data types, using vectorized operations, and avoiding known inefficient patterns such as row-by-row loops. These habits are what make Pandas code viable for production data science.

Core best practices include efficient data types (category for repeated strings, smaller numeric types where the value range allows), vectorized operations instead of Python loops, proper indexing (a single .loc[] or .iloc[] call rather than chained indexing), and chunked processing for files too large to load at once. The indexing point is subtle enough to deserve the sketch shown below.
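As a quick illustration of the indexing point, a minimal sketch (the column names and values are invented for the example): chained indexing performs two separate selections and may assign into a temporary copy, while a single .loc[] call is guaranteed to modify the original frame.

import pandas as pd

df = pd.DataFrame({'Status': ['Active', 'Inactive'], 'Score': [10, 20]})

# Chained indexing: the assignment may land on a temporary copy and
# triggers SettingWithCopyWarning
# df[df['Status'] == 'Active']['Score'] = 0   # unreliable

# A single .loc[] call takes the row selector and the column selector
# together and modifies df itself
df.loc[df['Status'] == 'Active', 'Score'] = 0
print(df)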

Understanding memory usage and execution time is what lets you optimize the right things: use df.memory_usage(deep=True) to measure how much each column actually holds, time or profile code to find the real bottlenecks, and downcast data types to shrink the frame. Measuring first prevents wasted effort on parts of the pipeline that were never the problem.
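A minimal sketch of both measurements (the frame here is invented for illustration):

import time

import numpy as np
import pandas as pd

df = pd.DataFrame({
    'city': np.random.choice(['NY', 'LA', 'SF'], 1_000_000),
    'value': np.random.randn(1_000_000),
})

# deep=True counts the actual bytes behind object (string) columns,
# not just the 8-byte pointers to them
print(df.memory_usage(deep=True))

# Simple wall-clock timing to locate a bottleneck
start = time.perf_counter()
df.groupby('city')['value'].mean()
print(f"groupby took {time.perf_counter() - start:.4f} s")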

Vectorized operations are orders of magnitude faster than Python loops because the work happens in compiled NumPy code rather than the interpreter. Prefer Pandas/NumPy expressions over row-by-row iteration, reach for .apply() only when no vectorized equivalent exists, and lean on broadcasting to combine arrays and scalars without writing the loop yourself.
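A small sketch of that hierarchy (the series size is chosen arbitrarily): the same per-element computation written three ways, slowest to fastest.

import numpy as np
import pandas as pd

s = pd.Series(np.random.randn(100_000))

# 1. Python loop: interpreted, one element at a time (slowest)
out_loop = pd.Series([x ** 2 + 1 for x in s])

# 2. .apply(): still invokes a Python function per element
out_apply = s.apply(lambda x: x ** 2 + 1)

# 3. Vectorized: one compiled NumPy operation over the whole array;
#    the scalars 2 and 1 are broadcast across every element
out_vec = s ** 2 + 1

assert np.allclose(out_vec, out_apply)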

Data type optimization reduces memory usage directly: the category dtype stores each distinct string once and refers to it by a small integer code, smaller integer types (int8, int16) hold limited-range values in a fraction of the space of int64, and float32 halves the footprint of float64 when the lower precision is acceptable.
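A minimal sketch of all three conversions (the columns are invented), with pd.to_numeric(..., downcast=...) choosing the smallest integer type automatically:

import numpy as np
import pandas as pd

df = pd.DataFrame({
    'grade': np.random.choice(['A', 'B', 'C'], 100_000),  # few distinct strings
    'age': np.random.randint(0, 100, 100_000),            # fits in int8
    'score': np.random.randn(100_000),                    # float64 by default
})
before = df.memory_usage(deep=True).sum()

df['grade'] = df['grade'].astype('category')              # store each string once
df['age'] = pd.to_numeric(df['age'], downcast='integer')  # int64 -> int8
df['score'] = df['score'].astype('float32')               # half the float width

after = df.memory_usage(deep=True).sum()
print(f"{before / 1024:.0f} KB -> {after / 1024:.0f} KB")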

Best practices summary: use appropriate data types, prefer vectorized operations, use .query() for complex filtering, process very large files in chunks, monitor memory usage, index with .loc[] and .iloc[] rather than chained indexing, and avoid modifying data in place when other code may still hold a reference to it. The sketch below illustrates that last point.
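A minimal sketch of the in-place pitfall, using an invented add_tax helper: a function that mutates its argument silently changes the caller's data, while an explicit .copy() keeps the original intact.

import pandas as pd

def add_tax_inplace(df):
    # Mutates the caller's frame: a hidden side effect
    df['price'] = df['price'] * 1.2
    return df

def add_tax(df):
    # Works on an explicit copy; the caller's frame is untouched
    out = df.copy()
    out['price'] = out['price'] * 1.2
    return out

orders = pd.DataFrame({'price': [10.0, 20.0]})
taxed = add_tax(orders)
print(orders['price'].tolist())  # [10.0, 20.0] - original preserved
print(taxed['price'].tolist())   # [12.0, 24.0]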

Key Concepts

  • Performance optimization is crucial for large datasets.
  • Best practices include efficient data types and vectorized operations.
  • Understanding memory usage and execution time helps optimization.
  • Vectorized operations are much faster than loops.
  • Data type optimization reduces memory usage.

Learning Objectives

Master

  • Optimizing data types for memory efficiency
  • Using vectorized operations instead of loops
  • Profiling and measuring performance
  • Applying best practices for efficient Pandas code

Develop

  • Understanding performance optimization principles
  • Designing efficient data processing workflows
  • Appreciating performance's role in data science

Tips

  • Use category dtype for repeated strings to save memory.
  • Use vectorized operations instead of loops—they're much faster.
  • Use df.memory_usage() to check memory consumption.
  • Process very large files in chunks using the chunksize parameter of pd.read_csv().

Common Pitfalls

  • Using loops instead of vectorized operations, losing performance.
  • Not optimizing data types, wasting memory.
  • Not profiling code, optimizing wrong parts.
  • Modifying data in place unnecessarily, causing hidden side effects for other code that references it.

Summary

  • Performance optimization is crucial for large datasets.
  • Best practices include efficient types and vectorized operations.
  • Understanding memory and execution time enables optimization.
  • Following best practices ensures efficient, maintainable code.
  • Performance optimization is essential for production data science.

Exercise

Optimize Pandas operations for performance and implement best practices.

import pandas as pd
import numpy as np
import time
import psutil  # third-party: pip install psutil
import os

# Create a large dataset for performance testing
np.random.seed(42)
n_rows = 100000

large_data = {
    'ID': range(n_rows),
    'Category': np.random.choice(['A', 'B', 'C', 'D', 'E'], n_rows),
    'Value': np.random.randn(n_rows),
    'Status': np.random.choice(['Active', 'Inactive'], n_rows),
    'Date': pd.date_range('2020-01-01', periods=n_rows, freq='h')  # 'h' = hourly ('H' is deprecated in newer pandas)
}

df = pd.DataFrame(large_data)
print(f"Dataset size: {df.shape}")
print(f"Memory usage: {df.memory_usage(deep=True).sum() / 1024**2:.2f} MB")

# 1. Memory optimization
print("\n=== Memory Optimization ===")
print("Original data types:")
print(df.dtypes)

# Optimize data types
df_optimized = df.copy()
df_optimized['Category'] = df_optimized['Category'].astype('category')
df_optimized['Status'] = df_optimized['Status'].astype('category')
df_optimized['Value'] = df_optimized['Value'].astype('float32')

print("\nOptimized data types:")
print(df_optimized.dtypes)

print(f"\nMemory usage before optimization: {df.memory_usage(deep=True).sum() / 1024**2:.2f} MB")
print(f"Memory usage after optimization: {df_optimized.memory_usage(deep=True).sum() / 1024**2:.2f} MB")

# 2. Performance comparison - loops vs vectorized operations
print("\n=== Performance Comparison ===")

# Slow method - row-by-row loop with .loc (timed on a 10,000-row sample;
# looping over all 100,000 rows would take considerably longer)
sample = df.head(10000).copy()
start_time = time.time()
sample['Value_Squared_Loop'] = 0.0
for i in range(len(sample)):
    sample.loc[i, 'Value_Squared_Loop'] = sample.loc[i, 'Value'] ** 2
loop_time = time.time() - start_time

# Fast method - one vectorized operation on the same sample
start_time = time.time()
sample['Value_Squared_Vectorized'] = sample['Value'] ** 2
vectorized_time = time.time() - start_time

print(f"Loop method time: {loop_time:.4f} seconds")
print(f"Vectorized method time: {vectorized_time:.4f} seconds")
print(f"Speed improvement: {loop_time/vectorized_time:.1f}x faster")

# 3. Filtering: boolean mask vs .query()
print("\n=== Filtering Comparison ===")

# Boolean mask - the standard approach, often fastest for simple conditions
start_time = time.time()
filtered_mask = df[df['Category'] == 'A']
mask_time = time.time() - start_time

# .query() - more readable for complex conditions; can be faster on large
# frames when the optional numexpr engine is installed
start_time = time.time()
filtered_query = df.query('Category == "A"')
query_time = time.time() - start_time

print(f"Boolean mask time: {mask_time:.4f} seconds")
print(f".query() time: {query_time:.4f} seconds")

# 4. GroupBy optimization
print("\n=== GroupBy Optimization ===")

# Standard groupby
start_time = time.time()
grouped_standard = df.groupby('Category')['Value'].agg(['mean', 'std', 'count'])
standard_time = time.time() - start_time

# Groupby on a categorical column - group codes are precomputed integers
df_cat = df.copy()
df_cat['Category'] = df_cat['Category'].astype('category')
start_time = time.time()
# observed=True restricts output to categories actually present and
# avoids the FutureWarning about the changing default in pandas 2.x
grouped_optimized = df_cat.groupby('Category', observed=True)['Value'].agg(['mean', 'std', 'count'])
optimized_time = time.time() - start_time

print(f"Standard groupby time: {standard_time:.4f} seconds")
print(f"Optimized groupby time: {optimized_time:.4f} seconds")

# 5. Chunked processing for very large datasets
print("\n=== Chunked Processing ===")

def process_chunk(chunk):
    """Process a chunk of data."""
    chunk['Processed_Value'] = chunk['Value'] * 2 + 1
    return chunk

# Write the dataset to disk first so there is a file to stream back in
df.to_csv('large_data.csv', index=False)

chunk_size = 10000
chunks = []
start_time = time.time()

# chunksize makes read_csv yield DataFrames of chunk_size rows at a time,
# so the whole file never has to sit in memory at once
for chunk in pd.read_csv('large_data.csv', chunksize=chunk_size):
    chunks.append(process_chunk(chunk))

result = pd.concat(chunks, ignore_index=True)
chunked_time = time.time() - start_time
print(f"Chunked processing time: {chunked_time:.4f} seconds")
print(f"Processed rows: {len(result)}")

# 6. Memory-efficient operations
print("\n=== Memory-Efficient Operations ===")

# Measure resident memory around a deliberately wasteful operation
start_time = time.time()
memory_usage_before = psutil.Process(os.getpid()).memory_info().rss / 1024**2

# Create a large intermediate result: copying every group duplicates
# the entire frame in memory
large_result = df.groupby('Category').apply(lambda x: x.copy())

memory_usage_after = psutil.Process(os.getpid()).memory_info().rss / 1024**2
memory_time = time.time() - start_time

print(f"Memory usage before: {memory_usage_before:.2f} MB")
print(f"Memory usage after: {memory_usage_after:.2f} MB")
print(f"Memory increase: {memory_usage_after - memory_usage_before:.2f} MB")

# 7. Best practices summary
print("\n=== Best Practices Summary ===")
print("1. Use appropriate data types (category for strings, int8/int16 for small integers)")
print("2. Prefer vectorized operations over loops")
print("3. Use .query() for complex filtering")
print("4. Use categorical data types for repeated values")
print("5. Process large datasets in chunks")
print("6. Monitor memory usage during operations")
print("7. Use .loc[] and .iloc[] instead of chained indexing")
print("8. Avoid modifying data in place when possible")
