When you’re working with Amazon S3 and need to move lots of files around, there’s a handy flag (--recursive) that makes the job way easier.
Instead of copying files one by one, you can use a special flag that handles everything in one go, including all the folders and subfolders inside.
This guide walks you through everything step by step. You’ll learn what --recursive does, see copy-pastable examples, filter files, understand command differences, and get tips to avoid mistakes.
Even if you’re just getting started with AWS or looking to sharpen your skills, this breakdown keeps things simple and practical so you can get the job done confidently.
What Does --recursive Mean in aws s3 cp?
Think of --recursive as the “go deep” switch for your copy command. It tells AWS CLI to not just grab top-level files but to dig into all subfolders, copying everything it finds.
How it descends into subdirectories
Picture a filing cabinet with multiple drawers, and each drawer has folders, and some of those folders have more folders inside. Without --recursive, the command only looks at whatever’s sitting right in front.
With --recursive turned on, it opens every drawer, checks every folder, and keeps going deeper until there’s nothing left to explore. Every file at every level gets copied.
Single file vs. directory copying
Here’s where the difference really matters:
- Copying a Single File: No --recursive needed. You’re just pointing at one specific file and saying, “copy this.” Simple and direct.
- Copying a Directory or Folder: Now you must use --recursive. Otherwise, the command sees a folder, shrugs, and does nothing. The flag is what unlocks the ability to handle groups of files (see the sketch just below).
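To make the difference concrete, here’s a minimal sketch of both cases, assuming a hypothetical bucket named my-bucket, a single local file report.pdf, and a local folder named docs:
# One specific file: no flag required
aws s3 cp report.pdf s3://my-bucket/report.pdf
# A whole folder and everything inside it: the flag is required
aws s3 cp docs s3://my-bucket/docs/ --recursive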
A Quick Note About S3 “Folders.”
Here’s something that trips people up: S3 doesn’t actually have folders the way your computer does. What looks like a folder is really just part of the file’s name, called a “prefix.”
When you see my-folder/file.txt, S3 treats my-folder/ as text at the start of the filename, not as a separate container.
But don’t worry, --recursive handles these prefixes just like real folders, so you can work with them the same way.
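If you’re curious, you can watch prefixes in action by listing what sits under one with aws s3 ls (the bucket name here is just a placeholder):
# Lists every object whose key starts with the my-folder/ prefix
aws s3 ls s3://my-bucket/my-folder/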
The official AWS CLI reference for aws s3 cp documents every available option.
aws s3 cp recursive Command Syntax (Official AWS Format)
Here’s the basic structure you’ll be working with:
aws s3 cp <source> <destination> --recursive
It looks straightforward, but getting the source and destination right makes all the difference between a successful copy and a head-scratching error message.
Understanding source options
The source is where your files are coming from, and you’ve got two choices:
- Local path: This is a folder on your computer, like /home/user/documents/project or C:\Users\Photos. Just point to wherever the files live on your machine.
- S3 path: This starts with s3:// followed by your bucket name and any prefix, like s3://my-bucket/data/reports. This tells AWS you’re pulling from cloud storage.
Understanding destination options
The destination works the same way, it’s just where the files are going:
- Local path – Download files to a specific spot on your computer
- S3 path – Upload or copy files to a bucket in the cloud
You can mix and match. Local to S3, S3 to local, or even S3 to another S3 bucket—all fair game.
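As a rough sketch, the three directions look like this, with placeholder bucket and folder names:
# Local to S3 (upload)
aws s3 cp ./my-folder s3://my-bucket/my-folder/ --recursive
# S3 to local (download)
aws s3 cp s3://my-bucket/my-folder/ ./my-folder --recursive
# S3 to another S3 bucket
aws s3 cp s3://my-bucket/my-folder/ s3://my-other-bucket/my-folder/ --recursive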
When Trailing Slashes Matter
This one catches a lot of people. That little / at the end of a path? It actually changes what happens.
- With a trailing slash (s3://my-bucket/folder/): The contents of the folder get copied into the destination
- Without a trailing slash (s3://my-bucket/folder): The folder itself gets copied as a new folder at the destination
Think of it like moving boxes. With the slash, you’re unpacking the box and putting the items inside. Without it, you’re moving the whole box.
Common Beginner Mistakes
Here are the slip-ups that happen all the time:
- Forgetting the Folder Name at The Destination: If you want files to land in a specific folder, you need to include that folder name in the path. Otherwise, everything dumps into the root.
- Wrong Path Format: Typing s3://my-bucket-name when you mean s3://my-bucket-name/, or vice versa, can send files to unexpected places
- Mixing up Source and Destination: Double-check which one comes first. The source always goes before the destination.
- Missing the --recursive Flag Entirely: The command will just sit there and do nothing if you’re trying to copy a directory without it
Common aws s3 cp recursive Use Cases

Let’s look at the three scenarios people run into most often. These examples cover the bread-and-butter tasks you’ll actually use in real work.
Upload a Local Directory to an S3 Bucket
aws s3 cp /path/to/local/directory s3://your-bucket-name/directory-name/ --recursive
This is probably the most common use case, taking a folder from your computer and pushing it up to S3.
How the Directory Structure Is Preserved
Good news: the folder structure stays exactly the way it is. If your local directory has subfolders three levels deep with files scattered throughout, that entire tree gets recreated in S3.
A file /project/assets/images/logo.png on your computer will land at s3://your-bucket/directory-name/assets/images/logo.png. Nothing gets flattened or reorganized.
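An easy way to confirm the tree arrived intact is to list the destination recursively once the copy finishes (the paths reuse the example above):
# Shows every object under the prefix, at every depth
aws s3 ls s3://your-bucket/directory-name/ --recursive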
Tip about destination path naming
Pay attention to what you name the destination. In the example above, directory-name/ becomes the parent folder in S3.
If you want your files to live directly in the bucket root, just use s3://your-bucket-name/ instead. And that trailing slash? It matters; keep it there to avoid confusion.
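For instance, the bucket-root version of the same upload would look something like this:
# Files land directly in the bucket root instead of under directory-name/
aws s3 cp /path/to/local/directory s3://your-bucket-name/ --recursive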
Download an Entire S3 Folder to Local
aws s3 cp s3://your-bucket-name/directory-name/ /local/path/ --recursive
Now flip it around—pulling everything from S3 down to your machine.
When to Use This Instead of Zip Downloads
Sure, you could zip up files in S3 and download the archive, but that adds extra steps and eats up time. If you need the files in their original structure, ready to use right away, --recursive gets you there faster.
No unzipping, no waiting for compression, just direct file transfers.
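A common variation is pulling everything straight into the directory you’re currently sitting in; the dot simply means “right here”:
# Downloads the whole prefix into the current working directory
aws s3 cp s3://your-bucket-name/directory-name/ . --recursive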
Large Dataset Advantage
This really shines when dealing with big collections of files. Instead of clicking through the AWS console and selecting files manually (which gets old fast), one command grabs thousands of files while you grab coffee.
The CLI handles it all in the background, and you can even walk away; it’ll keep running until everything’s downloaded.
Copy Files Between Two S3 Buckets
aws s3 cp s3://source-bucket/prefix/ s3://destination-bucket/prefix/ --recursive
Sometimes files need to be moved from one bucket to another, maybe for backups, moving between environments, or reorganizing storage.
Cross-Bucket and Same-Region Behavior
This works whether both buckets are in the same AWS region or different ones. Same region? Transfers are typically faster and don’t rack up data transfer fees.
Different regions? It’ll still work, but expect it to take longer and potentially cost more depending on your AWS pricing plan.
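When the buckets live in different regions, you can spell that out explicitly; the bucket names and regions below are only examples. Roughly speaking, --source-region points at the source bucket’s region, while --region covers the destination:
# Cross-region bucket-to-bucket copy with both regions stated explicitly
aws s3 cp s3://source-bucket/prefix/ s3://destination-bucket/prefix/ --recursive --source-region us-east-1 --region eu-west-1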
IAM Permission Reminder
Here’s where people hit walls: you need the right permissions on both buckets. The AWS credentials you’re using must have read access to the source bucket and write access to the destination bucket.
If the command fails with an access denied error, that’s usually the culprit. Double-check those IAM policies before assuming something else is broken.
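A quick first check when you hit an access denied error is confirming which identity the CLI is actually using, since the wrong profile is a surprisingly common cause:
# Prints the account and IAM identity behind your current credentials
aws sts get-caller-identity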
aws s3 cp recursive with Include and Exclude Filters
Sometimes you don’t want to copy everything, maybe just certain file types, or everything except a few folders. That’s where include and exclude filters come in handy.
Filter Basics
Before jumping into examples, there are two critical things to understand about how filters work.
Order matters (--exclude First, Then --include)
This trips up a lot of people. The sequence you put these flags in actually changes what gets copied.
AWS processes filters in order, so if you want to be selective, the pattern is almost always: exclude everything first, then include what you want.
Think of it like saying “ignore all of this… except for these specific things.”
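Here’s a small sketch of why order matters; when two filters match the same file, the one that appears later in the command wins, so putting --exclude "*" last silently cancels the include (the bucket name is a placeholder):
# Copies nothing: the later --exclude "*" overrides the earlier --include "*.txt"
aws s3 cp s3://my-bucket/ . --recursive --include "*.txt" --exclude "*"
The correct ordering for this goal appears in the .txt example below.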
Supported Wildcard Patterns
Filters use wildcards to match files:
- * matches everything (like *.jpg for all JPG files)
- ? matches a single character
- */ matches directories
You can get pretty specific with these patterns to target exactly what you need.
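As a quick illustration, the ? wildcard is handy for files that differ by a single character; the file names here are hypothetical:
# Matches report-1.csv through report-9.csv, but not report-10.csv
aws s3 cp s3://my-bucket/ . --recursive --exclude "*" --include "report-?.csv"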
Example: Copy Only .txt Files
aws s3 cp s3://mybucket/ . --recursive --exclude "*" --include "*.txt"
Here’s what’s happening: first, --exclude "*" tells the command to ignore absolutely everything. Then --include "*.txt" says, “actually, go ahead and grab any file ending in .txt.”
The result? Only text files get copied, and everything else gets left behind.
This is perfect when you’re working with a bucket full of mixed file types but only need one kind—like pulling logs, reports, or configuration files.
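The same pattern extends to more than one file type; just chain extra --include flags (the extensions here are only examples):
# Grabs logs and CSV reports in a single pass, skipping everything else
aws s3 cp s3://mybucket/ . --recursive --exclude "*" --include "*.log" --include "*.csv"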
Example: Exclude .git Directory
aws s3 cp . s3://your-bucket-name/ --recursive --exclude ".git/*"
Now flip the approach. Instead of choosing what to include, this excludes a specific folder: in this case, the .git directory, which contains version control data.
This is super useful when uploading project folders. You want all your code and assets in S3, but there’s no reason to copy over Git metadata, node_modules, or other development-specific folders that just bloat your storage.
You can stack multiple excludes, too; just add an --exclude for each pattern you want to skip:
aws s3 cp . s3://your-bucket-name/ --recursive --exclude ".git/*" --exclude "node_modules/*" --exclude "*.log"
Now you’re skipping Git files, dependencies, and log files all in one command. Clean and efficient.
Dry Run – Test aws s3 cp recursive Without Copying Files
aws s3 cp . s3://your-bucket-name/ --recursive --dryrun
The --dryrun flag is like a rehearsal before the real performance. It shows exactly what would happen without actually moving a single file.
Why --dryrun is Critical for Production
Production environments don’t forgive mistakes easily. One wrong command can overwrite critical files, fill up storage with duplicates, or trigger unexpected costs.
Running a dry run first is like checking your work before hitting submit on a big test; it catches problems while there’s still time to fix them.
In production, the stakes are high. Maybe hundreds of users depend on those files, or compliance rules mean certain data can’t be touched.
A dry run lets you see the exact list of files that will be affected, which paths they’ll land in, and whether anything looks off.
If the output shows files going to the wrong bucket or more items than expected, you can stop and adjust before any damage is done.
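The output itself is easy to scan: each planned transfer is printed with a (dryrun) prefix and nothing actually moves. It looks roughly like this, with illustrative paths:
(dryrun) upload: docs/readme.txt to s3://your-bucket-name/docs/readme.txt
(dryrun) upload: assets/logo.png to s3://your-bucket-name/assets/logo.png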
Prevents Accidental Overwrites
Here’s a common scenario: there are already files in the destination with the same names. Without checking first, the copy command will happily overwrite them, no warning, no confirmation prompt. Those old versions? Gone.
A dry run reveals this ahead of time. The output will show which files are about to be replaced, giving you a chance to decide if that’s actually what should happen.
Maybe those existing files need to be backed up first, or maybe the whole operation needs to target a different folder. Either way, you’ll know before it’s too late.
Safe verification step
Think of --dryrun as a safety net. It answers questions like:
- Are all the files going to the right place?
- Is the filter working the way it should?
- Are there way more (or fewer) files than expected?
- Did that exclude pattern actually catch the folders it’s supposed to skip?
Once the dry run output looks good, just remove the --dryrun flag and run the command for real. It takes an extra 30 seconds but saves hours of cleanup or recovery work when something goes sideways.
Especially with --recursive touching potentially thousands of files, that verification step is absolutely worth it.
Wrapping Up
Now you’ve got the full picture of how --recursive works and when to use it.
From uploading entire project folders to downloading datasets or moving files between buckets, this command handles bulk operations that would otherwise take forever to do manually.
Remember to use filters when you only need specific files, always run a dry run before big production moves, and double-check those IAM permissions to avoid frustrating access errors.
The difference between cp and sync matters too: cp copies everything you point it at, while sync only transfers files that are new or have changed, so pick the right tool for the job. With practice, these commands become second nature, and you’ll wonder how you ever managed S3 without them.
Ready to put this into action? Open your terminal, start with a small test folder, and try out these commands yourself. Hands-on practice beats reading every time.
