11 Useful Python Scripts to Automate Boring Everyday Tasks in Windows 10/11

Let's be honest for a second. How much of your day is actually spent doing real work? And how much is spent on the same mind-numbing, repetitive tasks over and over again? You know what I'm talking about—renaming files, cleaning up downloads, backing stuff up, finding duplicates, organizing screenshots. None of this is hard. It's just... boring. And it steals time from things that actually matter.

I've been there. Staring at a downloads folder with 800 files wondering where my life went wrong. But here's the thing: these tasks are perfect for automation. Python makes it stupidly easy to write scripts that handle all this grunt work for you. You don't need to be a programming genius. You just need to copy, paste, and run.

Before We Start: What You'll Need

These scripts work on Windows 10 and 11. You'll need Python installed—if you don't have it, grab it from python.org. Make sure you check "Add Python to PATH" during installation. That's it.

For each script, I'll list any extra libraries you need. Most use only built-in modules; a few need a quick pip install. I'll tell you exactly what to type.

Alright, let's dive in.

#1: Automatic File Organizer

The pain point: Your Downloads folder is a disaster. I'm not judging, mine is too. PDFs, images, installers, random Word docs, ZIP files—everything thrown together like a yard sale. Finding anything means scrolling through hundreds of files. Cleaning it manually? That's a Saturday afternoon you'll never get back.

What the script does: Watches a folder (like Downloads) and automatically sorts files into organized subfolders based on file type. Images go to an Images folder, documents to Documents, videos to Videos, etc. It can also sort by date—like putting all files from November into a "2025-11" folder. It runs continuously in the background or you can trigger it manually.

How it works: The script uses the watchdog library to monitor folder changes. When a new file appears, it checks the extension, figures out which category it belongs to (using a mapping dictionary you can customize), creates the target folder if needed, and moves the file. It handles duplicates by appending numbers—so "resume.pdf" becomes "resume_1.pdf" if a file with that name already exists. It also preserves original timestamps so you don't lose metadata.

The script:

import os
import shutil
import time
from watchdog.observers import Observer
from watchdog.events import FileSystemEventHandler

# Define file categories and their extensions
FILE_CATEGORIES = {
    'Images': ['.jpg', '.jpeg', '.png', '.gif', '.bmp', '.svg', '.webp'],
    'Documents': ['.pdf', '.docx', '.doc', '.txt', '.xlsx', '.pptx', '.csv'],
    'Archives': ['.zip', '.rar', '.7z', '.tar', '.gz'],
    'Videos': ['.mp4', '.avi', '.mkv', '.mov', '.wmv'],
    'Music': ['.mp3', '.wav', '.flac', '.aac'],
    'Programs': ['.exe', '.msi', '.bat', '.sh'],
    'Code': ['.py', '.js', '.html', '.css', '.cpp', '.java']
}

class FileOrganizerHandler(FileSystemEventHandler):
    def __init__(self, watch_folder):
        self.watch_folder = watch_folder
        
    def on_created(self, event):
        if not event.is_directory:
            self.organize_file(event.src_path)
    
    def on_modified(self, event):
        if not event.is_directory:
            self.organize_file(event.src_path)
    
    def organize_file(self, file_path):
        time.sleep(2)  # Wait for file to finish downloading/writing
        
        if not os.path.exists(file_path):
            return  # Already moved (on_created and on_modified can both fire)
        
        filename = os.path.basename(file_path)
        file_ext = os.path.splitext(filename)[1].lower()
        
        # Find category for this extension
        destination = None
        for category, extensions in FILE_CATEGORIES.items():
            if file_ext in extensions:
                destination = os.path.join(self.watch_folder, category)
                break
        
        if destination is None:
            destination = os.path.join(self.watch_folder, 'Other')
        
        # Create destination folder if it doesn't exist
        os.makedirs(destination, exist_ok=True)
        
        # Handle duplicate filenames
        dest_path = os.path.join(destination, filename)
        counter = 1
        while os.path.exists(dest_path):
            name, ext = os.path.splitext(filename)
            dest_path = os.path.join(destination, f"{name}_{counter}{ext}")
            counter += 1
        
        # Move the file
        shutil.move(file_path, dest_path)
        print(f"Moved: {filename} -> {os.path.basename(destination)}")

def start_monitoring(folder_to_watch):
    event_handler = FileOrganizerHandler(folder_to_watch)
    observer = Observer()
    observer.schedule(event_handler, folder_to_watch, recursive=False)
    observer.start()
    print(f"Monitoring folder: {folder_to_watch}")
    print("Press Ctrl+C to stop")
    
    try:
        while True:
            time.sleep(1)
    except KeyboardInterrupt:
        observer.stop()
    observer.join()

if __name__ == "__main__":
    folder = r"C:\Users\YourUsername\Downloads"
    start_monitoring(folder)

How to use: Save as organize_downloads.py. Install watchdog if needed: pip install watchdog. Change the folder path to wherever you want to monitor. Run it. Leave it running in the background and watch your chaos disappear.
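
The intro mentioned triggering it manually: the watcher only reacts to new files, so here's a minimal one-shot sweep for files already sitting there. A sketch, assuming you saved the script above as organize_downloads.py; the folder path is a placeholder:

import os
import shutil
from organize_downloads import FILE_CATEGORIES  # the category map from the script above

def organize_once(folder):
    """One-time pass over files already in the folder."""
    for filename in os.listdir(folder):
        file_path = os.path.join(folder, filename)
        if not os.path.isfile(file_path):
            continue  # skip the category subfolders themselves
        ext = os.path.splitext(filename)[1].lower()
        category = next((cat for cat, exts in FILE_CATEGORIES.items() if ext in exts), 'Other')
        dest_folder = os.path.join(folder, category)
        os.makedirs(dest_folder, exist_ok=True)
        dest_path = os.path.join(dest_folder, filename)
        counter = 1
        while os.path.exists(dest_path):  # same duplicate handling as the watcher
            name, e = os.path.splitext(filename)
            dest_path = os.path.join(dest_folder, f"{name}_{counter}{e}")
            counter += 1
        shutil.move(file_path, dest_path)
        print(f"Moved: {filename} -> {category}")

organize_once(r"C:\Users\YourUsername\Downloads")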

#2: Batch File Renamer

The pain point: You've got 400 vacation photos named "IMG_4829.jpg" through "IMG_5229.jpg". Or a folder of client documents with inconsistent naming like "report final (3).docx". Renaming them one by one is soul-crushing. Windows bulk rename is weak. You need flexibility.

What the script does: Renames hundreds of files at once using patterns. Add prefixes like "ProjectX_", add suffixes, replace text (change "draft" to "final"), add sequential numbers, insert dates, combine patterns—whatever you need. It shows you a preview before actually renaming so you don't screw up.

How it works: Scans the folder, gets all files (or filtered by extension), applies your renaming rules using Python string manipulation and regular expressions, generates a preview list showing old names and new names, waits for your confirmation, then performs the rename. It includes rollback capability—it saves the original names so you can undo if something goes sideways.

The script:

import os
import re
from datetime import datetime

def batch_rename(folder_path, pattern_type, **kwargs):
    """
    pattern_type: 'prefix', 'suffix', 'replace', 'number', 'date', or 'regex'
    kwargs: depends on pattern type
    """
    files = [f for f in os.listdir(folder_path) if os.path.isfile(os.path.join(folder_path, f))]
    files.sort()
    
    changes = []
    
    for i, filename in enumerate(files):
        name, ext = os.path.splitext(filename)
        new_name = name
        
        if pattern_type == 'prefix':
            new_name = kwargs['prefix'] + name
        elif pattern_type == 'suffix':
            new_name = name + kwargs['suffix']
        elif pattern_type == 'replace':
            new_name = name.replace(kwargs['old'], kwargs['new'])
        elif pattern_type == 'number':
            new_name = f"{kwargs['base']}_{i+1:0{kwargs.get('digits', 3)}d}"
        elif pattern_type == 'date':
            date_str = datetime.now().strftime(kwargs.get('date_format', '%Y%m%d'))
            new_name = f"{date_str}_{name}"
        elif pattern_type == 'regex':
            new_name = re.sub(kwargs['pattern'], kwargs['replacement'], name)
        
        new_filename = new_name + ext
        
        # Skip files whose name wouldn't change (otherwise the duplicate
        # check below would collide with the file itself and rename it)
        if new_filename == filename:
            continue
        
        # Handle duplicates
        final_new = new_filename
        counter = 1
        while os.path.exists(os.path.join(folder_path, final_new)):
            name_part, ext_part = os.path.splitext(new_filename)
            final_new = f"{name_part}_{counter}{ext_part}"
            counter += 1
        
        old_path = os.path.join(folder_path, filename)
        new_path = os.path.join(folder_path, final_new)
        
        changes.append((old_path, new_path, filename, final_new))
    
    # Preview
    print("\nPreview of changes:")
    for old_path, new_path, old_name, new_name in changes:
        print(f"{old_name} -> {new_name}")
    
    confirm = input("\nProceed with rename? (y/n): ")
    if confirm.lower() == 'y':
        for old_path, new_path, _, _ in changes:
            os.rename(old_path, new_path)
        print("Rename complete!")
        
        # Save undo info in the target folder
        undo_path = os.path.join(folder_path, 'rename_undo.txt')
        with open(undo_path, 'w') as f:
            for old_path, new_path, old_name, new_name in changes:
                f.write(f"{new_path}|{old_path}\n")
        print(f"Undo info saved to {undo_path}")
    else:
        print("Cancelled")

# Example usage
if __name__ == "__main__":
    folder = r"C:\Users\YourUsername\Pictures\Vacation"
    
    print("Choose pattern type:")
    print("1. Add prefix")
    print("2. Add suffix")
    print("3. Replace text")
    print("4. Add sequential numbers")
    print("5. Add date")
    print("6. Regex pattern")
    
    choice = input("Enter choice (1-6): ")
    
    if choice == '1':
        prefix = input("Enter prefix: ")
        batch_rename(folder, 'prefix', prefix=prefix)
    elif choice == '2':
        suffix = input("Enter suffix: ")
        batch_rename(folder, 'suffix', suffix=suffix)
    elif choice == '3':
        old = input("Text to replace: ")
        new = input("Replace with: ")
        batch_rename(folder, 'replace', old=old, new=new)
    elif choice == '4':
        base = input("Base name (e.g., 'image'): ")
        digits = input("Number of digits (default 3): ") or '3'
        batch_rename(folder, 'number', base=base, digits=int(digits))
    elif choice == '5':
        date_format = input("Date format (e.g., %Y%m%d, default %Y%m%d): ") or '%Y%m%d'
        batch_rename(folder, 'date', date_format=date_format)
    elif choice == '6':
        pattern = input("Regex pattern: ")
        replacement = input("Replacement: ")
        batch_rename(folder, 'regex', pattern=pattern, replacement=replacement)

How to use: Save as batch_renamer.py. Change the folder path to your target. Run it, choose a pattern, and watch the magic. The undo file saves in the same folder—if you mess up, you can write a small script to read it and revert.
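
Here's what that small undo script could look like: a sketch that assumes rename_undo.txt sits in the renamed folder and uses the new_path|old_path format the renamer writes:

import os

folder = r"C:\Users\YourUsername\Pictures\Vacation"  # the folder you renamed

# Each line of the undo log is "new_path|old_path"
with open(os.path.join(folder, 'rename_undo.txt'), 'r') as f:
    for line in f:
        line = line.strip()
        if not line:
            continue
        current_path, original_path = line.split('|', 1)
        if os.path.exists(current_path):
            os.rename(current_path, original_path)
            print(f"Reverted: {os.path.basename(current_path)}")
        else:
            print(f"Skipped (not found): {current_path}")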

#3: Smart Backup Manager

The pain point: You know you should backup your important files. But manual copying is tedious. You forget what changed. You end up with multiple copies everywhere and no idea which one is latest. Then your hard drive dies and you lose everything. Don't be that person.

What the script does: Creates intelligent incremental backups. Only copies new or modified files, so it's fast. Compresses backups to save space. Maintains version history—you can go back to how things looked last week. Automatically cleans up old backups based on your rules. And it can run on a schedule so you never think about it.

How it works: Scans your source folder and compares each file's MD5 hash against the last backup. Files that are new or changed get added. It uses Python's zipfile module to create compressed archives with timestamps in the filename. It keeps a manifest of what's in each backup. You set a retention period, like keeping the last 30 days, and older backups get deleted automatically.

The script:

import os
import shutil
import hashlib
import json
from datetime import datetime, timedelta
import zipfile

class BackupManager:
    def __init__(self, source_folder, backup_folder, manifest_file='backup_manifest.json'):
        self.source = source_folder
        self.backup_root = backup_folder
        self.manifest_file = manifest_file
        self.manifest = self.load_manifest()
        
        os.makedirs(backup_folder, exist_ok=True)
    
    def load_manifest(self):
        if os.path.exists(self.manifest_file):
            with open(self.manifest_file, 'r') as f:
                return json.load(f)
        return {'backups': [], 'file_history': {}}
    
    def save_manifest(self):
        with open(self.manifest_file, 'w') as f:
            json.dump(self.manifest, f, indent=2)
    
    def get_file_hash(self, filepath):
        """Calculate MD5 hash of a file"""
        hash_md5 = hashlib.md5()
        with open(filepath, "rb") as f:
            for chunk in iter(lambda: f.read(4096), b""):
                hash_md5.update(chunk)
        return hash_md5.hexdigest()
    
    def find_changed_files(self):
        """Find files that are new or modified since last backup"""
        changed = []
        file_history = self.manifest.get('file_history', {})
        
        for root, dirs, files in os.walk(self.source):
            for file in files:
                filepath = os.path.join(root, file)
                rel_path = os.path.relpath(filepath, self.source)
                
                # Get current file info
                mtime = os.path.getmtime(filepath)
                file_hash = self.get_file_hash(filepath)
                
                # Check if file exists in history
                if rel_path in file_history:
                    last_hash, last_mtime = file_history[rel_path]
                    if file_hash != last_hash:
                        changed.append((rel_path, filepath, 'modified'))
                        file_history[rel_path] = (file_hash, mtime)
                else:
                    changed.append((rel_path, filepath, 'new'))
                    file_history[rel_path] = (file_hash, mtime)
        
        return changed
    
    def create_backup(self):
        """Create a new backup with changed files"""
        changed = self.find_changed_files()
        
        if not changed:
            print("No changes detected. No backup needed.")
            return
        
        timestamp = datetime.now().strftime('%Y%m%d_%H%M%S')
        backup_name = f"backup_{timestamp}.zip"
        backup_path = os.path.join(self.backup_root, backup_name)
        
        print(f"Creating backup: {backup_name}")
        print(f"Files to backup: {len(changed)}")
        
        with zipfile.ZipFile(backup_path, 'w', zipfile.ZIP_DEFLATED) as zipf:
            for rel_path, full_path, status in changed:
                print(f"  {status}: {rel_path}")
                zipf.write(full_path, rel_path)
        
        # Update manifest
        backup_info = {
            'name': backup_name,
            'timestamp': timestamp,
            'file_count': len(changed),
            'files': [rel_path for rel_path, _, _ in changed]
        }
        self.manifest['backups'].append(backup_info)
        self.save_manifest()
        
        print(f"Backup complete: {backup_path}")
        return backup_path
    
    def clean_old_backups(self, days_to_keep=30):
        """Remove backups older than specified days"""
        cutoff = datetime.now() - timedelta(days=days_to_keep)
        kept = []
        removed = []
        
        for backup in self.manifest['backups']:
            backup_time = datetime.strptime(backup['timestamp'], '%Y%m%d_%H%M%S')
            if backup_time < cutoff:
                backup_path = os.path.join(self.backup_root, backup['name'])
                if os.path.exists(backup_path):
                    os.remove(backup_path)
                    removed.append(backup['name'])
            else:
                kept.append(backup)
        
        self.manifest['backups'] = kept
        self.save_manifest()
        
        if removed:
            print(f"Removed {len(removed)} old backups: {', '.join(removed)}")
    
    def restore_backup(self, backup_name, restore_path):
        """Restore a specific backup to a location"""
        backup_file = os.path.join(self.backup_root, backup_name)
        if not os.path.exists(backup_file):
            print(f"Backup not found: {backup_name}")
            return
        
        os.makedirs(restore_path, exist_ok=True)
        
        with zipfile.ZipFile(backup_file, 'r') as zipf:
            zipf.extractall(restore_path)
        
        print(f"Restored {backup_name} to {restore_path}")

if __name__ == "__main__":
    source = r"C:\Users\YourUsername\Documents\Important"
    backup_dest = r"D:\Backups"
    
    manager = BackupManager(source, backup_dest)
    
    print("1. Create backup")
    print("2. Clean old backups")
    print("3. List backups")
    
    choice = input("Choose: ")
    
    if choice == '1':
        manager.create_backup()
    elif choice == '2':
        days = int(input("Days to keep (default 30): ") or 30)
        manager.clean_old_backups(days)
    elif choice == '3':
        for backup in manager.manifest['backups']:
            print(f"{backup['name']} - {backup['file_count']} files")

How to use: Save as backup_manager.py. Set your source and destination folders. Run it. First time does a full backup, after that only changes. Schedule it with Task Scheduler for automatic backups. You'll never lose important files again.
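
Restoring works through the same class. A quick sketch, assuming you saved the script as backup_manager.py; the archive name and restore path here are placeholders (list your real archives with option 3 first):

from backup_manager import BackupManager

source = r"C:\Users\YourUsername\Documents\Important"
backup_dest = r"D:\Backups"

manager = BackupManager(source, backup_dest)

# Pick any archive name printed by option 3 (this one is made up)
manager.restore_backup('backup_20250101_120000.zip', r'C:\Restore\Important')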

#4: Duplicate File Finder

The pain point: Your hard drive is mysteriously full. You've got photos scattered everywhere, duplicate downloads, multiple copies of the same document. Finding duplicates manually is like finding needles in a haystack. You need something that actually compares file contents, not just names.

What the script does: Scans folders and finds exact duplicate files regardless of filename. Groups them together, shows you total wasted space, and lets you safely delete duplicates with multiple safety checks. It's thorough but careful—you won't accidentally delete your only copy.

How it works: First pass groups files by size (if sizes differ, they can't be duplicates). Second pass calculates MD5 hashes for files with the same size—identical hashes mean identical content. It groups duplicates, calculates space savings, and presents an interactive menu for deletion. You choose which copies to keep (oldest, newest, or manually select). Deleted files go to Recycle Bin by default, so you can recover if you mess up.

The script:

import os
import hashlib
from collections import defaultdict
from datetime import datetime
import send2trash  # pip install send2trash

def get_file_hash(filepath, blocksize=65536):
    """Calculate MD5 hash of a file"""
    hasher = hashlib.md5()
    with open(filepath, 'rb') as f:
        buf = f.read(blocksize)
        while buf:
            hasher.update(buf)
            buf = f.read(blocksize)
    return hasher.hexdigest()

def find_duplicates(folders):
    """Find duplicate files in given folders"""
    # First pass: group by size
    size_map = defaultdict(list)
    
    print("Scanning files...")
    total_files = 0
    for folder in folders:
        for root, dirs, files in os.walk(folder):
            for file in files:
                filepath = os.path.join(root, file)
                try:
                    size = os.path.getsize(filepath)
                    size_map[size].append(filepath)
                    total_files += 1
                except (OSError, PermissionError):
                    continue
    
    print(f"Scanned {total_files} files")
    
    # Second pass: hash files with same size
    hash_map = defaultdict(list)
    potential_duplicates = 0
    
    for size, filelist in size_map.items():
        if len(filelist) > 1:
            for filepath in filelist:
                file_hash = get_file_hash(filepath)
                hash_map[(size, file_hash)].append(filepath)
                potential_duplicates += 1
    
    # Filter to actual duplicates (more than one file with same size+hash)
    duplicates = {k: v for k, v in hash_map.items() if len(v) > 1}
    
    return duplicates

def calculate_space_wasted(duplicates):
    """Calculate total wasted space from duplicates"""
    total = 0
    for (size, _), files in duplicates.items():
        total += size * (len(files) - 1)
    return total

def delete_duplicates_interactive(duplicates):
    """Interactive deletion with safety"""
    print(f"\nFound {len(duplicates)} duplicate groups")
    wasted = calculate_space_wasted(duplicates)
    print(f"Total wasted space: {wasted / (1024**3):.2f} GB")
    
    for idx, ((size, hash_val), files) in enumerate(duplicates.items()):
        print(f"\n--- Group {idx+1} ---")
        print(f"Size: {size / (1024**2):.2f} MB")
        print(f"Files ({len(files)}):")
        
        for i, filepath in enumerate(files):
            mtime = os.path.getmtime(filepath)
            print(f"  {i+1}. {filepath}")
            print(f"     Modified: {datetime.fromtimestamp(mtime).strftime('%Y-%m-%d %H:%M')}")
        
        print("\nOptions:")
        print("  k - keep all (skip this group)")
        print("  n - keep newest, delete others")
        print("  o - keep oldest, delete others")
        print("  m - manually choose which to keep")
        
        choice = input("Choose: ").lower()
        
        if choice == 'n':
            newest = max(files, key=os.path.getmtime)
            to_delete = [f for f in files if f != newest]
        elif choice == 'o':
            oldest = min(files, key=os.path.getmtime)
            to_delete = [f for f in files if f != oldest]
        elif choice == 'm':
            keep_indices = input("Enter numbers to keep (comma separated): ")
            keep_indices = [int(x.strip())-1 for x in keep_indices.split(',')]
            to_delete = [f for i, f in enumerate(files) if i not in keep_indices]
        else:
            continue
        
        print(f"\nWill delete {len(to_delete)} files:")
        for f in to_delete:
            print(f"  {f}")
        
        confirm = input("Proceed? (y/n): ")
        if confirm.lower() == 'y':
            for f in to_delete:
                send2trash.send2trash(f)
                print(f"Deleted: {f}")
        else:
            print("Skipped")

if __name__ == "__main__":
    folders = []
    print("Enter folders to scan (blank to finish):")
    while True:
        folder = input("Folder path: ")
        if not folder and folders:
            break
        if os.path.exists(folder):
            folders.append(folder)
        elif folder:
            print("Folder doesn't exist, try again")
    
    if not folders:
        folders = [r"C:\Users\YourUsername"]
    
    duplicates = find_duplicates(folders)
    
    if duplicates:
        delete_duplicates_interactive(duplicates)
    else:
        print("No duplicates found!")

How to use: Save as duplicate_finder.py. Install send2trash: pip install send2trash. Run it, point to folders to scan. It'll find duplicates and walk you through cleanup. Recycle bin safety means you can recover if you delete something important by accident.
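
If you'd rather review everything in a spreadsheet before deleting, here's a small sketch that reuses find_duplicates and dumps the groups to CSV. It assumes you saved the script above as duplicate_finder.py, and the scan path is a placeholder:

import csv
from duplicate_finder import find_duplicates

duplicates = find_duplicates([r"C:\Users\YourUsername\Pictures"])

with open('duplicate_report.csv', 'w', newline='', encoding='utf-8') as f:
    writer = csv.writer(f)
    writer.writerow(['Group', 'Size (bytes)', 'File'])
    for group_num, ((size, _), files) in enumerate(duplicates.items(), start=1):
        for filepath in files:
            writer.writerow([group_num, size, filepath])

print(f"Wrote {len(duplicates)} duplicate groups to duplicate_report.csv")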

#5: Desktop Screenshot Organizer

The pain point: Screenshots pile up like nobody's business. "Screenshot 2025-11-11 192612.png", "Screenshot 2025-11-11 193845.png" — cryptic names, all in one folder, impossible to find anything. They're useful for a week, then they're just clutter. But manually sorting or deleting them is tedious.

What the script does: Automatically organizes screenshots by date into monthly folders (like "Screenshots/2025/November"). Can archive or delete screenshots older than a certain time. Can even extract text from screenshots using OCR so you can search for that error message you screenshotted six months ago.

How it works: Monitors your screenshots folder (usually C:\Users\YourUsername\Pictures\Screenshots). Works out each screenshot's date from its filename or, failing that, from the file's creation time. Creates an organized folder structure and moves files. Optional OCR using pytesseract to extract text and build a searchable index.

The script:

import os
import shutil
import time
from datetime import datetime, timedelta
from watchdog.observers import Observer
from watchdog.events import FileSystemEventHandler
import PIL.Image
import pytesseract  # pip install pytesseract pillow

class ScreenshotHandler(FileSystemEventHandler):
    def __init__(self, watch_folder, organize_by_date=True, enable_ocr=False):
        self.watch_folder = watch_folder
        self.organize_by_date = organize_by_date
        self.enable_ocr = enable_ocr
        self.ocr_index = {}
        
        if enable_ocr:
            self.ocr_file = os.path.join(watch_folder, 'screenshot_index.txt')
            self.load_ocr_index()
    
    def load_ocr_index(self):
        if os.path.exists(self.ocr_file):
            with open(self.ocr_file, 'r') as f:
                for line in f:
                    parts = line.strip().split('|', 1)
                    if len(parts) == 2:
                        self.ocr_index[parts[0]] = parts[1]
    
    def save_ocr_index(self):
        with open(self.ocr_file, 'w') as f:
            for path, text in self.ocr_index.items():
                one_line = text.replace('\n', ' ')  # keep each entry on a single line
                f.write(f"{path}|{one_line}\n")
    
    def extract_date_from_filename(self, filename):
        """Try to extract date from common screenshot filename patterns"""
        # Windows format: Screenshot 2025-11-11 192612.png
        import re
        match = re.search(r'(\d{4}-\d{2}-\d{2})', filename)
        if match:
            return datetime.strptime(match.group(1), '%Y-%m-%d')
        return None
    
    def organize_file(self, file_path):
        time.sleep(1)  # Wait for file to be fully written
        
        if not os.path.exists(file_path):
            return  # Already moved (on_created and on_modified can both fire)
        
        filename = os.path.basename(file_path)
        file_ext = os.path.splitext(filename)[1].lower()
        
        # Only process image files
        if file_ext not in ['.png', '.jpg', '.jpeg', '.gif', '.bmp']:
            return
        
        # Get file creation time
        creation_time = os.path.getctime(file_path)
        file_date = datetime.fromtimestamp(creation_time)
        
        # Try to get date from filename if it's more accurate
        name_date = self.extract_date_from_filename(filename)
        if name_date:
            file_date = name_date
        
        if self.organize_by_date:
            # Create folder structure: Year/Month
            year_folder = os.path.join(self.watch_folder, str(file_date.year))
            month_folder = os.path.join(year_folder, file_date.strftime('%B'))
            os.makedirs(month_folder, exist_ok=True)
            
            # Move file
            new_path = os.path.join(month_folder, filename)
            
            # Handle duplicates
            counter = 1
            while os.path.exists(new_path):
                name, ext = os.path.splitext(filename)
                new_filename = f"{name}_{counter}{ext}"
                new_path = os.path.join(month_folder, new_filename)
                counter += 1
            
            shutil.move(file_path, new_path)
            print(f"Organized: {filename} -> {file_date.year}/{file_date.strftime('%B')}")
            
            # OCR if enabled
            if self.enable_ocr:
                self.extract_text_from_image(new_path)
    
    def extract_text_from_image(self, image_path):
        try:
            img = PIL.Image.open(image_path)
            text = pytesseract.image_to_string(img)
            if text.strip():
                self.ocr_index[image_path] = text.strip()
                self.save_ocr_index()
                print(f"OCR completed for {os.path.basename(image_path)}")
        except Exception as e:
            print(f"OCR failed: {e}")
    
    def on_created(self, event):
        if not event.is_directory:
            self.organize_file(event.src_path)
    
    def on_modified(self, event):
        if not event.is_directory:
            self.organize_file(event.src_path)
    
    def clean_old_screenshots(self, days_old=30, delete=False):
        """Archive or delete screenshots older than specified days"""
        cutoff = datetime.now() - timedelta(days=days_old)
        archive_folder = os.path.join(self.watch_folder, 'Archive')
        
        for root, dirs, files in os.walk(self.watch_folder):
            if os.path.abspath(root).startswith(os.path.abspath(archive_folder)):
                continue  # don't re-archive files already in the Archive folder
            for file in files:
                filepath = os.path.join(root, file)
                file_ext = os.path.splitext(file)[1].lower()
                
                if file_ext in ['.png', '.jpg', '.jpeg']:
                    mtime = datetime.fromtimestamp(os.path.getmtime(filepath))
                    
                    if mtime < cutoff:
                        if delete:
                            os.remove(filepath)
                            print(f"Deleted old screenshot: {filepath}")
                        else:
                            os.makedirs(archive_folder, exist_ok=True)
                            dest = os.path.join(archive_folder, file)
                            counter = 1
                            while os.path.exists(dest):
                                name, ext = os.path.splitext(file)
                                dest = os.path.join(archive_folder, f"{name}_{counter}{ext}")
                                counter += 1
                            shutil.move(filepath, dest)
                            print(f"Archived: {filepath}")

def start_monitoring(folder, enable_ocr=False):
    event_handler = ScreenshotHandler(folder, organize_by_date=True, enable_ocr=enable_ocr)
    observer = Observer()
    observer.schedule(event_handler, folder, recursive=False)
    observer.start()
    
    print(f"Monitoring screenshots folder: {folder}")
    print("Press Ctrl+C to stop")
    
    try:
        while True:
            time.sleep(1)
    except KeyboardInterrupt:
        observer.stop()
    observer.join()

if __name__ == "__main__":
    screenshots_folder = os.path.join(os.path.expanduser("~"), "Pictures", "Screenshots")
    
    if not os.path.exists(screenshots_folder):
        screenshots_folder = input("Screenshots folder not found. Enter path: ")
    
    print("1. Start monitoring (organize new screenshots)")
    print("2. Clean old screenshots")
    
    choice = input("Choose: ")
    
    if choice == '1':
        ocr = input("Enable OCR? (y/n): ").lower() == 'y'
        start_monitoring(screenshots_folder, enable_ocr=ocr)
    elif choice == '2':
        days = int(input("Days old to archive (default 30): ") or 30)
        delete = input("Delete instead of archive? (y/n): ").lower() == 'y'
        handler = ScreenshotHandler(screenshots_folder)
        handler.clean_old_screenshots(days, delete)

How to use: Save as screenshot_organizer.py. Install pytesseract and pillow if you want OCR: pip install pytesseract pillow. You'll also need Tesseract OCR installed from GitHub. Run it, choose monitoring mode, and your screenshots will stay organized forever.
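
And here's the payoff for enabling OCR: a tiny search sketch over the screenshot_index.txt file the organizer writes (one path|text entry per line). The folder path assumes the default Windows screenshots location:

import os

index_file = os.path.join(os.path.expanduser("~"), "Pictures", "Screenshots", "screenshot_index.txt")
query = input("Search screenshots for: ").lower()

with open(index_file, 'r') as f:
    for line in f:
        parts = line.strip().split('|', 1)
        if len(parts) == 2 and query in parts[1].lower():
            print(parts[0])  # path of a matching screenshot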

#6: System Cleaner - Temp Files and Junk

The pain point: Windows accumulates junk. Temp files, cache, logs, old installers. Your disk fills up gradually and you have no idea where the space went. CCleaner exists but do you really trust it? And it's got ads now.

What the script does: Safely cleans temporary files, browser caches, the Recycle Bin, and old downloads. Skips anything that's locked or in use, calculates the space freed, and lets you choose exactly which categories to clean.

The script:

import os
import shutil
import tempfile
import time

def get_size(path):
    if os.path.isfile(path):
        return os.path.getsize(path)
    total = 0
    for root, dirs, files in os.walk(path):
        for f in files:
            fp = os.path.join(root, f)
            if os.path.exists(fp):
                total += os.path.getsize(fp)
    return total

def clean_temp_files():
    """Clean Windows temp folders"""
    temp_folders = [
        tempfile.gettempdir(),
        os.path.join(os.environ.get('WINDIR', 'C:\\Windows'), 'Temp'),
        os.path.join(os.environ.get('LOCALAPPDATA', ''), 'Temp'),
    ]
    
    total_freed = 0
    files_removed = 0
    
    for folder in temp_folders:
        if not os.path.exists(folder):
            continue
        
        print(f"\nScanning: {folder}")
        size_before = get_size(folder)
        
        for root, dirs, files in os.walk(folder, topdown=False):
            for file in files:
                try:
                    filepath = os.path.join(root, file)
                    if os.path.exists(filepath):
                        file_size = os.path.getsize(filepath)
                        os.remove(filepath)
                        total_freed += file_size
                        files_removed += 1
                except (PermissionError, OSError):
                    continue
            
            # Try to remove empty directories
            try:
                if root != folder:  # Don't delete the root temp folder
                    os.rmdir(root)
            except:
                pass
    
    return total_freed, files_removed

def clean_browser_cache():
    """Clean common browser caches"""
    browsers = {
        'Chrome': os.path.join(os.environ.get('LOCALAPPDATA', ''), 'Google', 'Chrome', 'User Data', 'Default', 'Cache'),
        'Edge': os.path.join(os.environ.get('LOCALAPPDATA', ''), 'Microsoft', 'Edge', 'User Data', 'Default', 'Cache'),
        'Firefox': os.path.join(os.environ.get('APPDATA', ''), 'Mozilla', 'Firefox', 'Profiles')
    }
    
    total_freed = 0
    
    for browser, cache_path in browsers.items():
        if os.path.exists(cache_path):
            print(f"\nCleaning {browser} cache...")
            size_before = get_size(cache_path)
            
            try:
                if browser == 'Firefox':
                    # Firefox has profile folders
                    for profile in os.listdir(cache_path):
                        profile_cache = os.path.join(cache_path, profile, 'cache2')
                        if os.path.exists(profile_cache):
                            shutil.rmtree(profile_cache, ignore_errors=True)
                else:
                    shutil.rmtree(cache_path, ignore_errors=True)
                
                size_after = get_size(cache_path)
                freed = size_before - size_after
                total_freed += freed
                print(f"Freed: {freed / (1024**2):.2f} MB")
            except Exception as e:
                print(f"Error cleaning {browser}: {e}")
    
    return total_freed

def clean_recycle_bin():
    """Empty recycle bin (requires winshell)"""
    try:
        import winshell
        winshell.recycle_bin().empty(confirm=False, show_progress=False, sound=False)
        print("Recycle bin emptied.")
        return 0
    except ImportError:
        print("winshell not installed. Install with: pip install winshell")
        return 0
    except Exception as e:
        print(f"Error emptying recycle bin: {e}")
        return 0

def clean_downloads_folder(days_old=30):
    """Clean old files from Downloads folder"""
    downloads = os.path.join(os.path.expanduser("~"), "Downloads")
    cutoff = time.time() - (days_old * 24 * 60 * 60)
    
    freed = 0
    count = 0
    
    for file in os.listdir(downloads):
        filepath = os.path.join(downloads, file)
        if os.path.isfile(filepath):
            mtime = os.path.getmtime(filepath)
            if mtime < cutoff:
                size = os.path.getsize(filepath)
                try:
                    os.remove(filepath)
                    freed += size
                    count += 1
                except:
                    pass
    
    return freed, count

if __name__ == "__main__":
    print("=== System Cleaner ===")
    print("This script will clean temporary files and junk.")
    print("It's safe but you can review what will be deleted.\n")
    
    print("Options:")
    print("1. Clean temp files")
    print("2. Clean browser cache")
    print("3. Empty recycle bin")
    print("4. Clean old downloads")
    print("5. Do all of the above")
    
    choice = input("\nChoose (1-5): ")
    
    total_freed = 0
    
    if choice in ['1', '5']:
        print("\n--- Cleaning Temp Files ---")
        freed, files = clean_temp_files()
        total_freed += freed
        print(f"Removed {files} files, freed {freed / (1024**2):.2f} MB")
    
    if choice in ['2', '5']:
        print("\n--- Cleaning Browser Cache ---")
        freed = clean_browser_cache()
        total_freed += freed
    
    if choice in ['3', '5']:
        print("\n--- Emptying Recycle Bin ---")
        freed = clean_recycle_bin()
        total_freed += freed
    
    if choice in ['4', '5']:
        print("\n--- Cleaning Old Downloads ---")
        days = int(input("Delete files older than how many days? (default 30): ") or 30)
        freed, count = clean_downloads_folder(days)
        total_freed += freed
        print(f"Deleted {count} old files, freed {freed / (1024**2):.2f} MB")
    
    print(f"\nTotal space freed: {total_freed / (1024**3):.2f} GB")

How to use: Save as system_cleaner.py. Install winshell if you want Recycle Bin emptying: pip install winshell. Run as administrator for best results. Locked or in-use files are skipped automatically, and you pick which categories to clean.
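
Want to see the damage before cleaning anything? A quick sketch that only measures the temp folders, reusing the get_size helper (it assumes you saved the script above as system_cleaner.py; nothing is deleted):

import os
import tempfile
from system_cleaner import get_size

temp_folders = [
    tempfile.gettempdir(),
    os.path.join(os.environ.get('WINDIR', 'C:\\Windows'), 'Temp'),
    os.path.join(os.environ.get('LOCALAPPDATA', ''), 'Temp'),
]

for folder in temp_folders:
    if os.path.exists(folder):
        print(f"{folder}: {get_size(folder) / (1024**2):.2f} MB")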

#7: Folder Size Analyzer

The pain point: Your disk is full and you don't know why. You need to see which folders are eating all the space. Windows folder properties are slow and only show one folder at a time. You need a tree view with sizes.

What the script does: Scans a directory and shows you folder sizes in a nice tree view. Sorts by size so the biggest space hogs are at the top. Exports to CSV if you want to analyze further.

The script:

import os
import sys
import argparse

def get_folder_size(folder):
    """Calculate total size of a folder"""
    total = 0
    try:
        for entry in os.scandir(folder):
            if entry.is_file():
                total += entry.stat().st_size
            elif entry.is_dir():
                total += get_folder_size(entry.path)
    except (PermissionError, OSError):
        pass
    return total

def scan_folder(root_path, max_depth=3, current_depth=0):
    """Recursively scan folder and return size info"""
    if current_depth > max_depth:
        return None
    
    result = []
    try:
        items = []
        for entry in os.scandir(root_path):
            if entry.is_dir():
                size = get_folder_size(entry.path)
                items.append({
                    'name': entry.name,
                    'path': entry.path,
                    'size': size,
                    'type': 'folder'
                })
            elif entry.is_file():
                items.append({
                    'name': entry.name,
                    'path': entry.path,
                    'size': entry.stat().st_size,
                    'type': 'file'
                })
        
        # Sort by size descending
        items.sort(key=lambda x: x['size'], reverse=True)
        
        for item in items[:10]:  # Show top 10 at each level
            result.append({
                'name': item['name'],
                'size': item['size'],
                'type': item['type'],
                'depth': current_depth,
                'path': item['path']
            })
            
            # Recursively scan subfolders
            if item['type'] == 'folder' and current_depth < max_depth:
                subfolders = scan_folder(item['path'], max_depth, current_depth + 1)
                if subfolders:
                    result.extend(subfolders)
    
    except (PermissionError, OSError):
        pass
    
    return result

def format_size(size):
    """Format bytes to human readable"""
    for unit in ['B', 'KB', 'MB', 'GB', 'TB']:
        if size < 1024.0:
            return f"{size:.2f} {unit}"
        size /= 1024.0
    return f"{size:.2f} PB"

def print_tree(results, root_path):
    """Print results as tree"""
    print(f"\nScanning: {root_path}")
    print("=" * 60)
    
    for item in results:
        indent = "  " * item['depth']
        size_str = format_size(item['size'])
        item_type = "📁" if item['type'] == 'folder' else "📄"
        print(f"{indent}{item_type} {item['name']} - {size_str}")
    
    # Calculate total
    total = sum(item['size'] for item in results if item['depth'] == 0)
    print("\n" + "=" * 60)
    print(f"Total scanned size: {format_size(total)}")

def export_to_csv(results, output_file):
    """Export results to CSV"""
    import csv
    with open(output_file, 'w', newline='', encoding='utf-8') as f:
        writer = csv.writer(f)
        writer.writerow(['Path', 'Name', 'Size (bytes)', 'Size (human)', 'Type', 'Depth'])
        for item in results:
            writer.writerow([
                item['path'],
                item['name'],
                item['size'],
                format_size(item['size']),
                item['type'],
                item['depth']
            ])
    print(f"Exported to {output_file}")

if __name__ == "__main__":
    parser = argparse.ArgumentParser(description='Analyze folder sizes')
    parser.add_argument('path', nargs='?', default=os.getcwd(), help='Folder to analyze')
    parser.add_argument('--depth', type=int, default=3, help='Max depth to scan')
    parser.add_argument('--export', help='Export to CSV file')
    
    args = parser.parse_args()
    
    if not os.path.exists(args.path):
        print(f"Path not found: {args.path}")
        sys.exit(1)
    
    print("Scanning... this may take a while for large folders")
    results = scan_folder(args.path, max_depth=args.depth)
    
    if results:
        print_tree(results, args.path)
        
        if args.export:
            export_to_csv(results, args.export)
    else:
        print("No results or permission denied")

How to use: Save as folder_analyzer.py. Run from command line: python folder_analyzer.py "C:\YourFolder" --depth 3. Add --export output.csv to save results.
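
You can also call it from another script instead of the command line. A sketch, assuming you saved the file as folder_analyzer.py and with a placeholder path:

from folder_analyzer import scan_folder, print_tree, export_to_csv

target = r"C:\Users\YourUsername"
results = scan_folder(target, max_depth=2)

if results:
    print_tree(results, target)
    export_to_csv(results, 'folder_sizes.csv')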

#8: Clipboard History Manager

The pain point: You copy something, then copy something else, and the first thing is gone forever. Windows doesn't have proper clipboard history by default (Windows+V exists but it's limited and forgets on reboot). You need something that saves everything you copy.

What the script does: Runs in the background and saves everything you copy to a SQLite database. A hotkey brings up the history so you can search it and copy old items back to the clipboard. Items survive reboots.

The script:

import tkinter as tk
from tkinter import ttk
import pyperclip  # pip install pyperclip
import keyboard   # pip install keyboard
import sqlite3
import threading
import time
from datetime import datetime

class ClipboardManager:
    def __init__(self):
        self.db_file = 'clipboard_history.db'
        self.current_text = ''
        self.running = True
        self.window = None
        
        self.init_database()
        self.start_monitoring()
        self.setup_hotkey()
    
    def init_database(self):
        conn = sqlite3.connect(self.db_file)
        c = conn.cursor()
        c.execute('''
            CREATE TABLE IF NOT EXISTS history (
                id INTEGER PRIMARY KEY AUTOINCREMENT,
                content TEXT,
                timestamp DATETIME,
                copied_from TEXT
            )
        ''')
        conn.commit()
        conn.close()
    
    def save_to_history(self, content):
        if not content or content == self.current_text:
            return
        
        self.current_text = content
        conn = sqlite3.connect(self.db_file)
        c = conn.cursor()
        c.execute(
            'INSERT INTO history (content, timestamp) VALUES (?, ?)',
            (content, datetime.now())
        )
        conn.commit()
        conn.close()
        
        # Keep only last 1000 items
        conn = sqlite3.connect(self.db_file)
        c = conn.cursor()
        c.execute('DELETE FROM history WHERE id NOT IN (SELECT id FROM history ORDER BY timestamp DESC LIMIT 1000)')
        conn.commit()
        conn.close()
    
    def monitor_clipboard(self):
        recent = ''
        while self.running:
            try:
                text = pyperclip.paste()
                if text and text != recent and isinstance(text, str):
                    recent = text
                    self.save_to_history(text)
            except:
                pass
            time.sleep(0.5)
    
    def start_monitoring(self):
        thread = threading.Thread(target=self.monitor_clipboard, daemon=True)
        thread.start()
    
    def show_history(self):
        if self.window and self.window.winfo_exists():
            self.window.lift()
            return
        
        self.window = tk.Tk()
        self.window.title("Clipboard History")
        self.window.geometry("600x400")
        self.window.attributes('-topmost', True)
        
        # Search box
        search_frame = ttk.Frame(self.window)
        search_frame.pack(fill='x', padx=5, pady=5)
        
        ttk.Label(search_frame, text="Search:").pack(side='left')
        search_var = tk.StringVar()
        search_entry = ttk.Entry(search_frame, textvariable=search_var)
        search_entry.pack(side='left', fill='x', expand=True, padx=5)
        
        # Listbox with scrollbar
        frame = ttk.Frame(self.window)
        frame.pack(fill='both', expand=True, padx=5, pady=5)
        
        scrollbar = ttk.Scrollbar(frame)
        scrollbar.pack(side='right', fill='y')
        
        self.listbox = tk.Listbox(frame, yscrollcommand=scrollbar.set)
        self.listbox.pack(side='left', fill='both', expand=True)
        
        scrollbar.config(command=self.listbox.yview)
        
        # Load history
        self.load_history()
        
        # Bind events
        def on_search(*args):
            self.load_history(search_var.get())
        
        search_var.trace('w', on_search)
        
        def on_double_click(event):
            selection = self.listbox.curselection()
            if selection:
                idx = selection[0]
                # Copy the full stored content, not the truncated display text
                pyperclip.copy(self.full_items[idx])
                self.window.destroy()
        
        self.listbox.bind('<Double-Button-1>', on_double_click)
        
        # Instructions
        ttk.Label(self.window, text="Double-click to copy, Esc to close", font=('', 9)).pack(pady=5)
        
        # Bind escape key
        self.window.bind('<Escape>', lambda e: self.window.destroy())
        
        search_entry.focus()
        self.window.mainloop()
    
    def load_history(self, search_term=''):
        self.listbox.delete(0, tk.END)
        
        conn = sqlite3.connect(self.db_file)
        c = conn.cursor()
        
        if search_term:
            c.execute(
                'SELECT content, timestamp FROM history WHERE content LIKE ? ORDER BY timestamp DESC LIMIT 100',
                (f'%{search_term}%',)
            )
        else:
            c.execute('SELECT content, timestamp FROM history ORDER BY timestamp DESC LIMIT 100')
        
        items = []
        self.full_items = []  # keep full content so double-click can copy it
        for content, timestamp in c.fetchall():
            self.full_items.append(content)
            # Truncate long content for display
            display = content[:50] + '...' if len(content) > 50 else content
            # Clean newlines
            display = display.replace('\n', ' ')
            time_str = datetime.fromisoformat(timestamp).strftime('%H:%M %d/%m')
            items.append(f"{time_str} - {display}")
        
        conn.close()
        
        for item in items:
            self.listbox.insert(tk.END, item)
    
    def setup_hotkey(self):
        keyboard.add_hotkey('ctrl+shift+v', self.show_history)

if __name__ == "__main__":
    print("Clipboard History Manager started")
    print("Press Ctrl+Shift+V to show history")
    print("Press Ctrl+C to stop")
    
    manager = ClipboardManager()
    
    try:
        # Keep main thread alive
        while True:
            time.sleep(1)
    except KeyboardInterrupt:
        manager.running = False
        print("\nStopped")

How to use: Save as clipboard_manager.py. Install requirements: pip install pyperclip keyboard. Run it and it sits in the background. Press Ctrl+Shift+V anytime to bring up the history. Double-click an item to copy it back to the clipboard.
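
Because everything lands in SQLite, you can also search the history without the GUI. A minimal sketch against the clipboard_history.db file the manager creates (run it from the same folder):

import sqlite3

query = input("Search clipboard history for: ")

conn = sqlite3.connect('clipboard_history.db')
c = conn.cursor()
c.execute(
    'SELECT timestamp, content FROM history WHERE content LIKE ? ORDER BY timestamp DESC LIMIT 20',
    (f'%{query}%',)
)
for timestamp, content in c.fetchall():
    preview = content.replace('\n', ' ')[:70]
    print(f"{timestamp}  {preview}")
conn.close()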

#9: Automated Folder Sync

The pain point: You need to keep two folders in sync—maybe a project folder and a backup, or a local folder and a network drive. Robocopy exists, but the command line is intimidating and you want something simpler, with a preview and a dry-run mode.

What the script does: Synchronizes two folders one-way or two-way. Shows preview of changes before syncing. Can run on schedule. Logs everything. Has dry-run mode to see what would happen.

The script:

import os
import shutil
import filecmp
import argparse
from datetime import datetime

class FolderSync:
    def __init__(self, source, dest, two_way=False, log_file='sync_log.txt'):
        self.source = os.path.abspath(source)
        self.dest = os.path.abspath(dest)
        self.two_way = two_way
        self.log_file = log_file
        
        if not os.path.exists(self.source):
            raise ValueError(f"Source folder not found: {self.source}")
        
        os.makedirs(self.dest, exist_ok=True)
    
    def log(self, message):
        timestamp = datetime.now().strftime('%Y-%m-%d %H:%M:%S')
        log_entry = f"[{timestamp}] {message}"
        print(log_entry)
        
        with open(self.log_file, 'a', encoding='utf-8') as f:
            f.write(log_entry + '\n')
    
    def get_relative_path(self, full_path, base_path):
        return os.path.relpath(full_path, base_path)
    
    def compare_folders(self):
        """Compare source and dest and return lists of differences"""
        source_files = {}
        dest_files = {}
        
        # Walk source
        for root, dirs, files in os.walk(self.source):
            for file in files:
                full_path = os.path.join(root, file)
                rel_path = self.get_relative_path(full_path, self.source)
                source_files[rel_path] = full_path
        
        # Walk dest
        for root, dirs, files in os.walk(self.dest):
            for file in files:
                full_path = os.path.join(root, file)
                rel_path = self.get_relative_path(full_path, self.dest)
                dest_files[rel_path] = full_path
        
        changes = {
            'new_in_source': [],
            'new_in_dest': [],
            'modified': []
        }
        
        # Find files in source but not in dest
        for rel_path, src_path in source_files.items():
            if rel_path not in dest_files:
                changes['new_in_source'].append((rel_path, src_path, None))
            else:
                # Compare files
                if not filecmp.cmp(src_path, dest_files[rel_path], shallow=False):
                    changes['modified'].append((rel_path, src_path, dest_files[rel_path]))
        
        # Find files in dest but not in source
        for rel_path, dst_path in dest_files.items():
            if rel_path not in source_files:
                changes['new_in_dest'].append((rel_path, None, dst_path))
        
        return changes
    
    def preview_changes(self):
        """Show what would be synced"""
        changes = self.compare_folders()
        
        print("\n=== PREVIEW OF CHANGES ===")
        
        if changes['new_in_source']:
            print(f"\nNew files to copy to destination ({len(changes['new_in_source'])}):")
            for rel_path, _, _ in changes['new_in_source']:
                print(f"  + {rel_path}")
        
        if changes['modified']:
            print(f"\nModified files to sync ({len(changes['modified'])}):")
            for rel_path, _, _ in changes['modified']:
                print(f"  * {rel_path}")
        
        if changes['new_in_dest']:
            if self.two_way:
                print(f"\nNew files to copy to source ({len(changes['new_in_dest'])}):")
                for rel_path, _, _ in changes['new_in_dest']:
                    print(f"  + {rel_path} (to source)")
            else:
                print(f"\nFiles in destination not in source ({len(changes['new_in_dest'])}):")
                for rel_path, _, _ in changes['new_in_dest']:
                    print(f"  ? {rel_path} (will be ignored in one-way sync)")
        
        return changes
    
    def sync(self, dry_run=False):
        """Perform sync"""
        changes = self.compare_folders()
        
        if dry_run:
            self.preview_changes()
            return
        
        self.log(f"Starting sync: {self.source} -> {self.dest}")
        
        # Copy new files from source to dest
        for rel_path, src_path, _ in changes['new_in_source']:
            dst_path = os.path.join(self.dest, rel_path)
            os.makedirs(os.path.dirname(dst_path), exist_ok=True)
            shutil.copy2(src_path, dst_path)
            self.log(f"Copied (new): {rel_path}")
        
        # Copy modified files
        for rel_path, src_path, dst_path in changes['modified']:
            shutil.copy2(src_path, dst_path)
            self.log(f"Copied (updated): {rel_path}")
        
        # Two-way sync: copy from dest to source
        if self.two_way:
            for rel_path, _, dst_path in changes['new_in_dest']:
                src_path = os.path.join(self.source, rel_path)
                os.makedirs(os.path.dirname(src_path), exist_ok=True)
                shutil.copy2(dst_path, src_path)
                self.log(f"Copied to source: {rel_path}")
        
        self.log("Sync completed")
        
        # Summary
        total = (len(changes['new_in_source']) + 
                len(changes['modified']) + 
                (len(changes['new_in_dest']) if self.two_way else 0))
        print(f"\nSync complete. {total} files processed.")

if __name__ == "__main__":
    parser = argparse.ArgumentParser(description='Sync two folders')
    parser.add_argument('source', help='Source folder')
    parser.add_argument('dest', help='Destination folder')
    parser.add_argument('--two-way', action='store_true', help='Two-way sync')
    parser.add_argument('--dry-run', action='store_true', help='Preview only')
    
    args = parser.parse_args()
    
    try:
        syncer = FolderSync(args.source, args.dest, two_way=args.two_way)
        
        if args.dry_run:
            syncer.preview_changes()
        else:
            print("\n" + "="*50)
            print("FOLDER SYNC")
            print("="*50)
            print(f"Source: {args.source}")
            print(f"Dest:   {args.dest}")
            print(f"Mode:   {'Two-way' if args.two_way else 'One-way'}")
            print("="*50 + "\n")
            
            syncer.sync()
    
    except Exception as e:
        print(f"Error: {e}")

How to use: Save as folder_sync.py. Run: python folder_sync.py "C:\Source" "D:\Backup". Add --dry-run to preview, --two-way for bidirectional sync.
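
For the scheduled runs mentioned above, importing the class is often easier than wiring up command-line arguments in Task Scheduler. A sketch, assuming you saved the script as folder_sync.py; the paths are placeholders:

from folder_sync import FolderSync

syncer = FolderSync(r"C:\Projects", r"D:\Backup\Projects", two_way=False)
syncer.sync()  # or syncer.sync(dry_run=True) to preview first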

#10: Network Speed Monitor

The pain point: Your internet feels slow but you're not sure. Task Manager shows network usage but not in real-time detail. You want to see upload/download speeds, track usage over time, get alerts when speed drops.

What the script does: Monitors your network interfaces and shows real-time upload/download speeds in the console. Can log speeds to a CSV file and alert you when a threshold is exceeded.

The script:

import psutil  # pip install psutil
import time
import os
from datetime import datetime
import argparse

def get_network_speed(interface=None):
    """Get current network speeds"""
    counters = psutil.net_io_counters(pernic=True)
    
    if interface:
        if interface in counters:
            return counters[interface]
        else:
            print(f"Interface {interface} not found")
            return None
    
    # Sum all interfaces
    total = psutil.net_io_counters()
    return total

def format_speed(bytes_per_sec):
    """Format speed for display"""
    if bytes_per_sec < 1024:
        return f"{bytes_per_sec:.1f} B/s"
    elif bytes_per_sec < 1024 * 1024:
        return f"{bytes_per_sec / 1024:.1f} KB/s"
    elif bytes_per_sec < 1024 * 1024 * 1024:
        return f"{bytes_per_sec / (1024 * 1024):.1f} MB/s"
    else:
        return f"{bytes_per_sec / (1024 * 1024 * 1024):.1f} GB/s"

def monitor_network(interface=None, interval=1, log_file=None, alert_threshold=None):
    """Monitor network speeds"""
    print(f"Monitoring network{' on ' + interface if interface else ''}")
    print(f"Update interval: {interval}s")
    print("Press Ctrl+C to stop\n")
    
    last_recv = 0
    last_sent = 0
    last_time = time.time()
    
    try:
        while True:
            stats = get_network_speed(interface)
            if not stats:
                break
            
            current_time = time.time()
            time_diff = current_time - last_time
            
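            # Skip the first pass: we need two samples before we can compute a rate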
            if last_recv > 0:
                download_speed = (stats.bytes_recv - last_recv) / time_diff
                upload_speed = (stats.bytes_sent - last_sent) / time_diff
                
                timestamp = datetime.now().strftime('%H:%M:%S')
                download_str = format_speed(download_speed)
                upload_str = format_speed(upload_speed)
                
                # Clear line and print
                print(f"\r[{timestamp}] ↓ {download_str}  ↑ {upload_str}    ", end='', flush=True)
                
                # Log if requested
                if log_file:
                    with open(log_file, 'a') as f:
                        f.write(f"{datetime.now().isoformat()},{download_speed},{upload_speed}\n")
                
                # Alert if threshold exceeded
                if alert_threshold:
                    if download_speed > alert_threshold:
                        print(f"\n⚠️ High download: {download_str}")
                    if upload_speed > alert_threshold:
                        print(f"\n⚠️ High upload: {upload_str}")
            
            last_recv = stats.bytes_recv
            last_sent = stats.bytes_sent
            last_time = current_time
            
            time.sleep(interval)
    
    except KeyboardInterrupt:
        print("\n\nMonitoring stopped")

def list_interfaces():
    """List available network interfaces"""
    counters = psutil.net_io_counters(pernic=True)
    print("Available interfaces:")
    for interface in counters.keys():
        print(f"  - {interface}")

if __name__ == "__main__":
    parser = argparse.ArgumentParser(description='Network speed monitor')
    parser.add_argument('--interface', help='Network interface to monitor')
    parser.add_argument('--interval', type=int, default=1, help='Update interval (seconds)')
    parser.add_argument('--log', help='Log file path (CSV format)')
    parser.add_argument('--alert', type=float, help='Alert threshold in bytes/sec')
    parser.add_argument('--list', action='store_true', help='List available interfaces')
    
    args = parser.parse_args()
    
    if args.list:
        list_interfaces()
    else:
        monitor_network(
            interface=args.interface,
            interval=args.interval,
            log_file=args.log,
            alert_threshold=args.alert
        )

How to use: Save as network_monitor.py. Install psutil: pip install psutil. Run: python network_monitor.py. Add --list to see available interfaces, --interface "Wi-Fi" to monitor a specific one, --log speeds.csv to save data.
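Once you've collected some data with --log speeds.csv, a few lines of Python will summarize it. A minimal sketch, assuming the CSV format the script above writes (timestamp, download, upload in bytes per second, no header row):

import csv

downloads, uploads = [], []
with open('speeds.csv', newline='') as f:
    for timestamp, down, up in csv.reader(f):
        downloads.append(float(down))
        uploads.append(float(up))

# Report averages and peaks in MB/s
MB = 1024 * 1024
print(f"Avg download:  {sum(downloads) / len(downloads) / MB:.2f} MB/s")
print(f"Peak download: {max(downloads) / MB:.2f} MB/s")
print(f"Avg upload:    {sum(uploads) / len(uploads) / MB:.2f} MB/s")
print(f"Peak upload:   {max(uploads) / MB:.2f} MB/s")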

#11: Automated Email Sender (with Attachments)

The pain point: You need to send the same email to different people regularly—weekly reports, invoices, updates. Doing it manually is repetitive and error-prone. You forget attachments, miss people, send wrong files.

What the script does: Sends emails from a template, can personalize each recipient, attach files, track what was sent. Reads recipients from CSV. Can run on schedule.

The script:

import smtplib
import csv
import os
import sys
import time
from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText
from email.mime.base import MIMEBase
from email import encoders
from datetime import datetime
import argparse
import getpass

class EmailSender:
    def __init__(self, smtp_server, smtp_port, username, password, use_tls=True):
        self.smtp_server = smtp_server
        self.smtp_port = smtp_port
        self.username = username
        self.password = password
        self.use_tls = use_tls
    
    def send_email(self, to_addr, subject, body, attachments=None, cc=None, bcc=None):
        """Send a single email"""
        msg = MIMEMultipart()
        msg['From'] = self.username
        msg['To'] = to_addr
        msg['Subject'] = subject
        
        if cc:
            msg['Cc'] = cc
        if bcc:
            msg['Bcc'] = bcc
        
        msg.attach(MIMEText(body, 'html'))  # Using HTML for better formatting
        
        # Attach files
        if attachments:
            for filepath in attachments:
                if os.path.exists(filepath):
                    with open(filepath, 'rb') as f:
                        part = MIMEBase('application', 'octet-stream')
                        part.set_payload(f.read())
                        encoders.encode_base64(part)
                        part.add_header(
                            'Content-Disposition',
                            f'attachment; filename="{os.path.basename(filepath)}"'
                        )
                        msg.attach(part)
                else:
                    print(f"Warning: Attachment not found: {filepath}")
        
        # Send email
        try:
            server = smtplib.SMTP(self.smtp_server, self.smtp_port)
            if self.use_tls:
                server.starttls()
            server.login(self.username, self.password)
            
            # send_message reads recipients from the To/Cc/Bcc headers
            # and strips the Bcc header before transmitting
            server.send_message(msg)
            server.quit()
            
            print(f"✓ Sent to {to_addr}")
            return True
        
        except Exception as e:
            print(f"✗ Failed to send to {to_addr}: {e}")
            return False
    
    def send_bulk_from_csv(self, csv_file, subject_template, body_template, attachments=None):
        """Send emails using CSV data for personalization"""
        with open(csv_file, 'r', encoding='utf-8') as f:
            reader = csv.DictReader(f)
            rows = list(reader)
        
        print(f"\nSending {len(rows)} emails...")
        print("-" * 50)
        
        results = {'sent': 0, 'failed': 0}
        
        for row in rows:
            # Personalize subject and body
            subject = subject_template
            body = body_template
            
            for key, value in row.items():
                placeholder = f"{{{{{key}}}}}"
                subject = subject.replace(placeholder, value)
                body = body.replace(placeholder, value)
            
            # Get recipient email
            to_addr = row.get('email')
            if not to_addr:
                print(f"Warning: No email for row {row}, skipping")
                results['failed'] += 1
                continue
            
            # Send
            success = self.send_email(to_addr, subject, body, attachments)
            if success:
                results['sent'] += 1
            else:
                results['failed'] += 1
            
            # Small delay to avoid being flagged as spam
            time.sleep(2)
        
        print("-" * 50)
        print(f"Complete: {results['sent']} sent, {results['failed']} failed")
        
        # Log results
        log_file = f"email_log_{datetime.now().strftime('%Y%m%d_%H%M%S')}.txt"
        with open(log_file, 'w') as f:
            f.write(f"Sent: {results['sent']}, Failed: {results['failed']}\n")
            f.write(f"Time: {datetime.now().isoformat()}\n")
        print(f"Log saved to {log_file}")

def create_sample_csv():
    """Create a sample CSV file"""
    sample = """email,name,project,date
john@example.com,John Doe,Project Alpha,2025-02-12
jane@example.com,Jane Smith,Project Beta,2025-02-12
bob@example.com,Bob Johnson,Project Gamma,2025-02-12"""
    
    with open('sample_recipients.csv', 'w') as f:
        f.write(sample)
    print("Created sample_recipients.csv")

def create_sample_template():
    """Create a sample email template"""
    template = """

Hello {{name}},

This is your weekly update for project {{project}} as of {{date}}.

Your report is attached.

Best regards,
Automated System

""" with open('sample_template.html', 'w') as f: f.write(template) print("Created sample_template.html") if __name__ == "__main__": parser = argparse.ArgumentParser(description='Bulk email sender') parser.add_argument('--setup', action='store_true', help='Create sample files') parser.add_argument('--csv', help='CSV file with recipients') parser.add_argument('--subject', help='Subject template') parser.add_argument('--body', help='Body template file (HTML)') parser.add_argument('--attachments', nargs='+', help='Files to attach') args = parser.parse_args() if args.setup: create_sample_csv() create_sample_template() print("\nEdit these files with your data and template.") print("Then run: python email_sender.py --csv sample_recipients.csv --subject \"Your Subject\" --body sample_template.html") sys.exit(0) if not all([args.csv, args.subject, args.body]): print("Please provide csv, subject, and body file") print("Run with --setup to create sample files") sys.exit(1) # Read body template with open(args.body, 'r', encoding='utf-8') as f: body_template = f.read() # Get email credentials print("\nEmail Configuration") print("=" * 50) smtp_server = input("SMTP Server (e.g., smtp.gmail.com): ") smtp_port = int(input("SMTP Port (e.g., 587): ")) username = input("Username (email): ") password = getpass.getpass("Password (input hidden): ") sender = EmailSender(smtp_server, smtp_port, username, password) # Send emails sender.send_bulk_from_csv( csv_file=args.csv, subject_template=args.subject, body_template=body_template, attachments=args.attachments )

How to use: Save as email_sender.py. Run python email_sender.py --setup to create sample files. Edit the CSV with your recipients and the HTML template. Then run with your actual files. For Gmail, you'll need an app password, not your regular password.
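One more tip: if you plan to schedule this with Task Scheduler, the interactive credential prompts get in the way. One option is a small wrapper that pulls credentials from environment variables instead. A sketch, assuming you saved the script above as email_sender.py (EMAIL_USER and EMAIL_APP_PASS are just example names you'd set once with setx):

import os
from email_sender import EmailSender  # the script above

# Example variable names; set them once in a terminal:
#   setx EMAIL_USER "you@gmail.com"
#   setx EMAIL_APP_PASS "your-gmail-app-password"
sender = EmailSender(
    smtp_server='smtp.gmail.com',
    smtp_port=587,
    username=os.environ['EMAIL_USER'],
    password=os.environ['EMAIL_APP_PASS'],
)

with open('sample_template.html', encoding='utf-8') as f:
    body_template = f.read()

sender.send_bulk_from_csv(
    csv_file='sample_recipients.csv',
    subject_template='Weekly update: {{project}}',
    body_template=body_template,
)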

Wrapping Up

So there you have it—11 Python scripts that'll save you from boring repetitive tasks. Some of these I use every single day. The file organizer alone has saved me hours of manual sorting. The backup manager gives me peace of mind. The clipboard history thing? I can't live without it anymore.

The beauty of these scripts is that they're simple enough to understand and modify. Don't like how something works? Change it. Want to add a feature? Go for it. Python makes this stuff easy.

How to get started:

1. Pick the script that solves your biggest pain point

2. Install Python if you haven't already

3. Install any required libraries (I listed them with each script)

4. Customize the folder paths and settings

5. Run it manually once to make sure it works

6. Set it up with Task Scheduler if you want it to run automatically

FAQs

1. Do I need to be a programmer to use these?

Not at all. If you can copy and paste, you can use these. Just save the code as a .py file and run it. If you want to customize things, it helps to know a little Python, but the scripts are designed to be easy to modify.

2. Will these scripts work on Windows 10 and 11?

Yes, all tested on both. Some paths might be slightly different if your username isn't standard, but the scripts use environment variables to handle that.

3. Are these scripts safe? Will they delete my important files?

They're designed to be safe. Most have preview modes or ask for confirmation before deleting anything. The duplicate finder sends files to Recycle Bin instead of permanent deletion. But always test on non-critical files first.

4. How do I run a Python script?

Open Command Prompt, navigate to the folder where you saved the script, and type python scriptname.py. Replace scriptname with the actual filename. Make sure Python is installed and added to PATH.

5. What if I get an error about missing modules?

Each script lists required libraries. Install them with pip: pip install libraryname. For example, pip install watchdog for the file organizer.

6. Can I make these scripts run automatically?

Yes! Use Windows Task Scheduler. Create a basic task, set the trigger (like at startup or daily), and point it to python.exe with your script as an argument.
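For example, to run the folder sync from #9 every morning at 9 AM, a one-liner like this in Command Prompt does the trick (adjust the paths to your setup; this assumes Python is on your PATH):

schtasks /Create /TN "DailyFolderSync" /TR "python C:\Scripts\folder_sync.py C:\Source D:\Backup" /SC DAILY /ST 09:00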

7. Which script is the most useful?

Honestly, the File Organizer and Clipboard History Manager are the ones people thank me for most. The Backup Manager is a lifesaver if you've ever lost data. Pick based on your biggest headache.

8. Can I modify these scripts for my specific needs?

Absolutely. That's the whole point. Add file types to the organizer, change naming patterns in the renamer, adjust retention periods in the backup script. The code is commented to help you understand what each part does.

9. Will these scripts slow down my computer?

No, they're lightweight. The file organizer uses almost no CPU until a file appears. The network monitor uses a tiny bit of CPU every second. Nothing that'll impact performance.
