Introduction
cURL is one of the most powerful and versatile tools for downloading files from the internet. Whether you need to download a single file, batch download multiple files, or implement resumable downloads, cURL provides comprehensive functionality for all file transfer needs.
This guide covers everything from basic file downloads to advanced techniques including authentication, progress monitoring, and error handling.
Basic File Downloads
Simple File Download
The most basic way to download a file with cURL:
# Download and save with original filename
curl -O https://example.com/file.zip
# Download and save with custom filename
curl -o myfile.zip https://example.com/file.zip
# Download without saving (print content to stdout)
curl https://example.com/file.txt
Download with Progress Bar
# Show progress bar instead of progress meter
curl -# -O https://example.com/largefile.zip
# Silent mode (no progress, no errors)
curl -s -O https://example.com/file.zip
# Show only errors
curl -S -s -O https://example.com/file.zip
Download to Specific Directory
# Create directory if it doesn't exist and download
curl --create-dirs -o downloads/files/document.pdf https://example.com/document.pdf
# Download multiple files to specific directory
cd downloads
curl -O https://example.com/file1.zip
curl -O https://example.com/file2.zip
Advanced Download Techniques
Resumable Downloads
Resume interrupted downloads:
# Resume download if file partially exists
curl -C - -O https://example.com/largefile.zip
# Resume from specific byte position
curl -C 1024 -O https://example.com/largefile.zip
# Automatic resume with retry
curl --retry 3 --retry-delay 2 -C - -O https://example.com/largefile.zip
Download with Authentication
# Basic HTTP authentication
curl -u username:password -O https://secure.example.com/file.zip
# Bearer token authentication
curl -H "Authorization: Bearer your-token-here" -O https://api.example.com/file.zip
# Using .netrc file for credentials
curl -n -O https://secure.example.com/file.zip
# API key authentication
curl -H "X-API-Key: your-api-key" -O https://api.example.com/data.json
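The -n flag reads credentials from a ~/.netrc file, which keeps passwords out of your shell history and process list. A minimal entry looks like this (the hostname and credentials below are placeholders); restrict access with chmod 600 ~/.netrc:

```
machine secure.example.com
login username
password secret
```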
Download with Custom Headers
# Custom User-Agent
curl -H "User-Agent: MyApp/1.0" -O https://example.com/file.zip
# Multiple custom headers
curl -H "Accept: application/octet-stream" \
-H "X-Custom-Header: value" \
-H "Referer: https://referrer.com" \
-O https://example.com/file.zip
Batch Downloads
Download Multiple Files
# Download multiple URLs in one command
curl -O https://example.com/file1.zip -O https://example.com/file2.zip
# Download files with pattern
curl -O https://example.com/file[1-10].zip
# Download files with different extensions
curl -O https://example.com/document.{pdf,doc,txt}
Download from File List
# Create a URL list file
echo "https://example.com/file1.zip" > urls.txt
echo "https://example.com/file2.zip" >> urls.txt
echo "https://example.com/file3.zip" >> urls.txt
# Download all files from list
xargs -n 1 curl -O < urls.txt
# Download with parallel processing
cat urls.txt | xargs -n 1 -P 4 curl -O
# Using while loop for more control
while IFS= read -r url; do
echo "Downloading: $url"
curl -O "$url"
done < urls.txt
Parallel Downloads
# Download files in parallel (GNU parallel)
parallel -j 4 curl -O {} :::: urls.txt
# Using xargs for parallel downloads
cat urls.txt | xargs -n 1 -P 8 curl -O
# Background downloads with bash
curl -O https://example.com/file1.zip &
curl -O https://example.com/file2.zip &
curl -O https://example.com/file3.zip &
wait # Wait for all downloads to complete
Download Monitoring and Control
Progress Monitoring
# Show a progress bar (long form of -#)
curl --progress-bar -O https://example.com/largefile.zip
# Write progress to file
curl -# -o file.zip https://example.com/file.zip 2> progress.log
# Custom progress format
curl -w "Downloaded %{size_download} bytes in %{time_total} seconds\n" \
-o file.zip https://example.com/file.zip
# Show download speed and time
curl -w "@curl-format.txt" -o file.zip https://example.com/file.zip
# curl-format.txt content:
# Total time: %{time_total}\n
# Download speed: %{speed_download} bytes/sec\n
# Downloaded: %{size_download} bytes\n
Speed Limiting and Timeouts
# Limit download speed (1MB/s)
curl --limit-rate 1M -O https://example.com/largefile.zip
# Set connection timeout (30 seconds)
curl --connect-timeout 30 -O https://example.com/file.zip
# Set maximum time for entire operation (10 minutes)
curl --max-time 600 -O https://example.com/file.zip
# Retry on failure
curl --retry 5 --retry-delay 3 --retry-max-time 300 -O https://example.com/file.zip
Specialized Download Scenarios
FTP Downloads
# Anonymous FTP download
curl -O ftp://ftp.example.com/path/to/file.zip
# FTP with authentication
curl -u username:password -O ftp://ftp.example.com/file.zip
# FTP passive mode
curl --ftp-pasv -O ftp://ftp.example.com/file.zip
# Download entire FTP directory (requires additional tools)
wget -r ftp://username:password@ftp.example.com/directory/
SFTP Downloads
# SFTP with password
curl -u username:password -O sftp://server.example.com/path/file.zip
# SFTP with private key (username is still supplied with -u)
curl -u username: --key ~/.ssh/id_rsa -O sftp://server.example.com/path/file.zip
# SFTP with public key authentication
curl -u username: --pubkey ~/.ssh/id_rsa.pub --key ~/.ssh/id_rsa \
-O sftp://server.example.com/path/file.zip
Downloads with Cookies
# Use cookie jar file
curl -b cookies.txt -c cookies.txt -O https://example.com/protected/file.zip
# Send specific cookie
curl -H "Cookie: sessionid=abc123; token=xyz789" -O https://example.com/file.zip
# Login and download with session
# First, login and save cookies
curl -c cookies.txt -d "username=user&password=pass" https://example.com/login
# Then use cookies for download
curl -b cookies.txt -O https://example.com/protected/file.zip
Error Handling and Validation
Download Verification
# Check if download was successful
if curl -f -O https://example.com/file.zip; then
echo "Download successful"
else
echo "Download failed with status: $?"
fi
# Verify file size matches expected
expected_size=$(curl -sI https://example.com/file.zip | grep -i content-length | awk '{print $2}' | tr -d '\r')
actual_size=$(stat -f%z file.zip 2>/dev/null || stat -c%s file.zip 2>/dev/null)
if [ "$expected_size" = "$actual_size" ]; then
echo "File size verification passed"
else
echo "File size mismatch: expected $expected_size, got $actual_size"
fi
Checksum Verification
# Download file and its checksum
curl -O https://example.com/file.zip
curl -O https://example.com/file.zip.sha256
# Verify SHA256 checksum (Linux; on macOS use: shasum -a 256 -c file.zip.sha256)
sha256sum -c file.zip.sha256
# Alternative verification
expected_hash=$(awk '{print $1}' file.zip.sha256)
actual_hash=$(sha256sum file.zip | awk '{print $1}')
if [ "$expected_hash" = "$actual_hash" ]; then
echo "Checksum verification passed"
else
echo "Checksum verification failed"
exit 1
fi
Robust Download Script
#!/bin/bash
download_file() {
local url="$1"
local output="$2"
local max_retries=3
local retry_delay=5
for ((i=1; i<=max_retries; i++)); do
echo "Attempt $i of $max_retries: Downloading $url"
if curl -f -C - --retry 2 --retry-delay 2 \
--connect-timeout 30 --max-time 3600 \
-o "$output" "$url"; then
echo "Download successful: $output"
# Verify file is not empty
if [ -s "$output" ]; then
return 0
else
echo "Downloaded file is empty, retrying..."
rm -f "$output"
fi
else
echo "Download attempt $i failed"
fi
if [ $i -lt $max_retries ]; then
echo "Waiting $retry_delay seconds before retry..."
sleep $retry_delay
fi
done
echo "All download attempts failed for: $url"
return 1
}
# Usage
download_file "https://example.com/largefile.zip" "largefile.zip"
Best Practices
Security Considerations
Security best practices:
- Always use HTTPS when possible
- Verify SSL certificates (avoid --insecure unless necessary)
- Store credentials securely (use .netrc or environment variables)
- Validate downloaded files with checksums
- Be cautious with executable downloads
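A minimal sketch combining several of these practices: refuse non-HTTPS URLs and verify a SHA-256 checksum after download, deleting the file on mismatch. The function names, URL, and hash are illustrative, not part of any curl API.

```shell
#!/bin/bash
# Compare a file's SHA-256 hash against an expected value
verify_sha256() {
  local file="$1" expected="$2"
  [ "$(sha256sum "$file" | awk '{print $1}')" = "$expected" ]
}

# Download over HTTPS only, then verify the checksum; remove the file on failure
secure_fetch() {
  local url="$1" output="$2" expected="$3"
  case "$url" in
    https://*) ;;                                   # allow HTTPS only
    *) echo "Refusing non-HTTPS URL: $url" >&2; return 1 ;;
  esac
  curl -fsS -o "$output" "$url" || return 1         # -f: fail on HTTP errors
  if ! verify_sha256 "$output" "$expected"; then
    echo "Checksum mismatch, removing $output" >&2
    rm -f "$output"
    return 1
  fi
}
```

Keeping the checksum comparison in its own function makes it reusable for files obtained by other means, not just curl downloads.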
Performance Optimization
Performance tips:
- Use parallel downloads for multiple files
- Implement resumable downloads for large files
- Set appropriate timeouts to avoid hanging
- Use HTTP/2 when available (--http2)
- Consider compression (--compressed)
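One way to apply the parallelism tip in a reusable form: a small wrapper that downloads every URL in a list file with a capped number of concurrent curl processes. The function name and default job count are illustrative; flags like --compressed or -C - can be added to the curl invocation as needed.

```shell
#!/bin/bash
# Download each URL in a list file, running up to $jobs curl processes at once.
# -n 1: one URL per curl invocation; -P: maximum parallel processes.
parallel_fetch() {
  local list="$1" jobs="${2:-4}"   # default to 4 concurrent transfers
  xargs -n 1 -P "$jobs" curl -sS -O < "$list"
}
```

Usage: `parallel_fetch urls.txt 8` downloads into the current directory with eight workers. Note that -P is a widely supported xargs extension rather than a POSIX guarantee.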
Storage Management
# Check available disk space before download
# (df -k reports 1K blocks; convert to bytes before comparing)
available_space=$(( $(df -k . | tail -1 | awk '{print $4}') * 1024 ))
file_size=$(curl -sI https://example.com/largefile.zip | grep -i content-length | awk '{print $2}' | tr -d '\r')
if [ "$file_size" -gt "$available_space" ]; then
echo "Insufficient disk space for download"
exit 1
fi
# Download to a temporary file and move when complete
# (create it in the current directory so the mv stays on one filesystem)
temp_file=$(mktemp ./download.XXXXXX)
curl -o "$temp_file" https://example.com/file.zip
mv "$temp_file" "final_location.zip"
Troubleshooting Common Issues
SSL/TLS Issues
# Skip SSL verification (not recommended for production)
curl --insecure -O https://example.com/file.zip
# Use specific SSL/TLS version
curl --tlsv1.2 -O https://example.com/file.zip
# Specify custom CA certificate bundle
curl --cacert custom-ca.pem -O https://example.com/file.zip
# Debug SSL issues
curl -v --trace-ascii trace.txt https://example.com/file.zip
Network Issues
# Use specific network interface
curl --interface eth0 -O https://example.com/file.zip
# Use IPv4 only
curl -4 -O https://example.com/file.zip
# Use IPv6 only
curl -6 -O https://example.com/file.zip
# Use proxy
curl --proxy http://proxy.example.com:8080 -O https://example.com/file.zip
# Debug network issues
curl -v --trace-time --trace trace.log -O https://example.com/file.zip
File Permission Issues
# Create directories with specific permissions
curl --create-dirs -o downloads/restricted/file.zip https://example.com/file.zip
chmod 755 downloads/restricted
# Download with specific file permissions
curl -o file.zip https://example.com/file.zip
chmod 644 file.zip
# Handle permission denied errors
if ! curl -o /restricted/path/file.zip https://example.com/file.zip 2>/dev/null; then
echo "Permission denied, trying alternative location..."
curl -o ~/Downloads/file.zip https://example.com/file.zip
fi
Integration Examples
Download with Logging
#!/bin/bash
log_download() {
local url="$1"
local output="$2"
local log_file="download.log"
echo "[$(date)] Starting download: $url" >> "$log_file"
if curl -w "Time: %{time_total}s, Speed: %{speed_download} bytes/s, Size: %{size_download} bytes\n" \
-o "$output" "$url" >>"$log_file" 2>&1; then
echo "[$(date)] Download completed: $output" >> "$log_file"
return 0
else
echo "[$(date)] Download failed: $url" >> "$log_file"
return 1
fi
}
# Usage
log_download "https://example.com/file.zip" "file.zip"
Integration with Backup Scripts
#!/bin/bash
backup_and_download() {
local backup_url="$1"
local backup_dir="/backups/$(date +%Y%m%d)"
mkdir -p "$backup_dir"
cd "$backup_dir"
# Download backup files
curl -f -O "$backup_url/database.sql.gz" || {
echo "Failed to download database backup"
exit 1
}
curl -f -O "$backup_url/files.tar.gz" || {
echo "Failed to download files backup"
exit 1
}
# Verify backups
if [ -s "database.sql.gz" ] && [ -s "files.tar.gz" ]; then
echo "Backup download completed successfully"
echo "Database: $(wc -c < database.sql.gz) bytes"
echo "Files: $(wc -c < files.tar.gz) bytes"
else
echo "Backup verification failed"
exit 1
fi
}
# Usage
backup_and_download "https://backups.example.com/daily"
Conclusion
cURL is an incredibly powerful tool for file downloads, offering extensive options for customization, error handling, and automation. By mastering the techniques covered in this guide, you can handle virtually any file download scenario efficiently and reliably.
Remember to always consider security, implement proper error handling, and verify your downloads to ensure data integrity. For production environments, always test your download scripts thoroughly and implement appropriate logging and monitoring.