Data Loading and Exploration

📚 Lesson 2 of 10 ⏱️ 60 min

Pandas can read data from many sources, including CSV, Excel, JSON, SQL databases, and Parquet, letting you work with data from diverse origins. Each format has a dedicated reader function (pd.read_csv(), pd.read_excel(), pd.read_json(), pd.read_sql()) with options for customizing how the data is parsed. Loading data correctly is the first step in any analysis.
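As a minimal sketch of the CSV reader, the snippet below parses an in-memory buffer so it runs without any files on disk; the file names in the comments are placeholders, not files from this lesson.

```python
import pandas as pd
from io import StringIO

# pd.read_csv accepts a file path or any file-like object;
# an in-memory buffer keeps this example self-contained.
csv_text = "name,score\nAda,91\nGrace,88\n"
df = pd.read_csv(StringIO(csv_text))
print(df.shape)  # (2, 2)

# The same pattern applies to the other readers (hypothetical file names):
# pd.read_excel("report.xlsx")                    # needs openpyxl installed
# pd.read_json("records.json")
# pd.read_sql("SELECT * FROM sales", connection)  # needs a DB connection
```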

Understanding your data's structure—its columns, data types, shape, and basic statistics—is essential for planning an effective analysis. Use df.info() to see data types and memory usage, df.describe() for statistical summaries, df.shape for dimensions, and df.head()/df.tail() for a quick preview of the rows.
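These inspection methods can be tried on a small invented DataFrame:

```python
import pandas as pd

# A tiny DataFrame to inspect (values invented for illustration)
df = pd.DataFrame({
    "city": ["Oslo", "Lima", "Pune", "Kyoto"],
    "temp_c": [4.5, 19.0, 31.2, 12.8],
})

print(df.shape)       # (4, 2): four rows, two columns
df.info()             # dtypes, non-null counts, memory usage
print(df.describe())  # summary statistics for the numeric column
print(df.head(2))     # first two rows
```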

Data exploration surfaces patterns, missing values, and potential problems before analysis begins. It includes checking for missing values (df.isnull().sum()), examining distributions (df.describe()), identifying outliers, verifying data types, and understanding relationships between variables. Exploring early prevents errors later.
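A short sketch of these checks on invented data with deliberate gaps and one suspicious value:

```python
import pandas as pd
import numpy as np

# Invented data: one missing value per column, one suspicious price
df = pd.DataFrame({
    "price": [10.0, np.nan, 12.5, 999.0],  # 999.0 looks like an outlier
    "qty": [1, 2, np.nan, 4],
})

print(df.isnull().sum())  # one missing value in each column
print(df.describe())      # the max of price flags the outlier
print(df.dtypes)          # NaN forces both columns to float64
```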

Pandas provides several quick exploration methods: df.info() (data types, non-null counts), df.describe() (statistical summary), value_counts() (value frequencies), unique() (distinct values), and df.corr() (pairwise correlations). Note that value_counts() and unique() are typically called on a single column, e.g. df['col'].value_counts(). Together these methods give fast insight into a dataset.
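For example, on a small invented set of student records:

```python
import pandas as pd

# Invented student records
df = pd.DataFrame({
    "grade": ["A", "B", "A", "C", "A"],
    "hours": [10, 7, 9, 4, 11],
    "score": [92, 78, 88, 60, 95],
})

print(df["grade"].value_counts())     # A appears 3 times
print(df["grade"].unique())           # distinct labels: A, B, C
print(df[["hours", "score"]].corr())  # hours and score move together
```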

Each file format has its own loading options: CSV (delimiters, encoding, headers), Excel (sheet names, cell ranges), JSON (orient, lines), and SQL (queries, connections). Knowing these format-specific options lets you load each kind of file correctly.
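A minimal sketch of one such option: a semicolon-delimited CSV, parsed from an in-memory buffer. The other readers are shown only as commented option names with hypothetical file names.

```python
import pandas as pd
from io import StringIO

# Semicolon delimiters are common in European-locale CSV exports
raw = "id;name\n1;Ana\n2;Bo\n"
df = pd.read_csv(StringIO(raw), sep=";")
print(list(df.columns))  # ['id', 'name']

# Option names for other formats (hypothetical files, not run here):
# pd.read_excel("book.xlsx", sheet_name="Sales")  # select one sheet
# pd.read_json("log.json", lines=True)            # newline-delimited JSON
# pd.read_sql("SELECT * FROM orders", conn)       # query plus connection
```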

Best practices: always explore data before analyzing it, check for missing values and data types, understand the data's structure, save exploration results, and document the dataset's characteristics. Careful loading and exploration are the foundation of quality analysis.

Key Concepts

  • Pandas can read data from various sources (CSV, Excel, JSON, databases).
  • Understanding data structure is crucial for effective analysis.
  • Data exploration identifies patterns, missing values, and issues.
  • Pandas provides methods for data exploration (info, describe, value_counts).
  • Different file formats require specific loading options.

Learning Objectives

Master

  • Loading data from various sources (CSV, Excel, JSON, SQL)
  • Exploring data structure and characteristics
  • Identifying missing values and data quality issues
  • Using exploration methods effectively

Develop

  • Understanding data quality assessment
  • Designing data exploration workflows
  • Appreciating exploration's role in analysis

Tips

  • Always explore data first: use df.info(), df.describe(), df.head().
  • Check for missing values: df.isnull().sum() or df.isna().sum().
  • Use df.value_counts() to understand categorical distributions.
  • Save exploration results for documentation.

Common Pitfalls

  • Skipping exploration before analysis, which hides data quality issues.
  • Failing to check for missing values, causing errors in calculations.
  • Misunderstanding data types, leading to unexpected behavior.
  • Ignoring format-specific options, so data loads incorrectly.

Summary

  • Pandas can read data from various sources with format-specific functions.
  • Understanding data structure is crucial for effective analysis.
  • Data exploration identifies patterns, missing values, and issues.
  • Exploration methods provide quick insights into data.
  • Careful loading and exploration are the foundation of effective analysis.

Exercise

Load data from different sources and explore the dataset.

import pandas as pd
import numpy as np

# Create sample data and save to CSV
sample_data = {
    'Date': pd.date_range('2024-01-01', periods=100, freq='D'),
    'Product': np.random.choice(['A', 'B', 'C'], 100),
    'Sales': np.random.randint(100, 1000, 100),
    'Region': np.random.choice(['North', 'South', 'East', 'West'], 100),
    'Customer_ID': range(1, 101)
}

df = pd.DataFrame(sample_data)
df.to_csv('sales_data.csv', index=False)

# Load data from CSV
df_loaded = pd.read_csv('sales_data.csv')
print("Loaded data shape:", df_loaded.shape)
print("\nFirst few rows:")
print(df_loaded.head())

# Convert Date column to datetime
df_loaded['Date'] = pd.to_datetime(df_loaded['Date'])
print("\nData types after conversion:")
print(df_loaded.dtypes)

# Basic exploration
print("\nUnique values in Product column:")
print(df_loaded['Product'].unique())
print("\nValue counts for Region:")
print(df_loaded['Region'].value_counts())
print("\nSales statistics by region:")
print(df_loaded.groupby('Region')['Sales'].describe())

# Check for missing values
print("\nMissing values:")
print(df_loaded.isnull().sum())
