Safe destructive operations through a planning phase

Decide first, act later

Written by Timo Rieber on November 22, 2025

A file renaming command in mainzelmen listed files from a directory, computed new names, and renamed them in one pass. It worked fine until I needed to show the user what would happen before it happened.

The command had no seam for that. Planning and execution were the same loop - by the time you knew what new_path would be, the file was already renamed. The only way to add a preview was to duplicate the naming logic into a dry-run variant and keep both in sync. That's a maintenance trap dressed up as a feature.

Decisions without consequences

The fix was splitting the command in two. A PlanEnumerationCommand takes a file list, computes every source-target pair, and returns them as immutable FileOperation objects:

from dataclasses import dataclass
from pathlib import Path


@dataclass(frozen=True)
class PlanEnumerationCommand:
    files: list[Path]
    start_value: int

    def execute(self, ctx: Context) -> list[FileOperation]:
        # Pure planning: compute every rename, touch nothing on disk.
        # Context and FileOperation are the project's own types.
        operations = []
        for i, filepath in enumerate(sorted(self.files), start=self.start_value):
            new_stem = f'{i}.{filepath.stem}'
            new_path = filepath.with_name(f'{new_stem}{filepath.suffix}')
            operations.append(
                FileOperation(source_path=filepath, target_path=new_path)
            )
        return operations

No side effects. The return value is the plan - a list of data objects, each carrying source_path, target_path, and a status defaulting to PENDING. The UI renders this in a table. The user sees every rename before anything moves.
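
FileOperation itself is just data. A minimal sketch consistent with that description - the OperationStatus enum and its DONE value are assumptions, since only PENDING and a failure state are described:

from dataclasses import dataclass
from enum import Enum, auto
from pathlib import Path


class OperationStatus(Enum):
    PENDING = auto()
    DONE = auto()    # assumed name for a successful rename
    FAILED = auto()


@dataclass(frozen=True)
class FileOperation:
    source_path: Path
    target_path: Path
    status: OperationStatus = OperationStatus.PENDING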

The ExecutePlanCommand receives that list and iterates it. For each operation: check whether the target exists, rename or mark as failed, done. The entire method is a flat loop with a try/except. It makes zero decisions about naming. Every interesting question - what gets renamed, what the new name looks like - was already answered during planning.
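
A sketch of that execute command, assuming the FileOperation and OperationStatus shapes above. Because the operations are frozen, outcomes are recorded by replacing the status on a copy:

import dataclasses
from dataclasses import dataclass


@dataclass(frozen=True)
class ExecutePlanCommand:
    operations: list[FileOperation]

    def execute(self, ctx: Context) -> list[FileOperation]:
        # Mechanical application: no naming decisions, just act on the plan.
        results = []
        for op in self.operations:
            try:
                if op.target_path.exists():
                    results.append(dataclasses.replace(op, status=OperationStatus.FAILED))
                    continue
                op.source_path.rename(op.target_path)
                results.append(dataclasses.replace(op, status=OperationStatus.DONE))
            except OSError:
                results.append(dataclasses.replace(op, status=OperationStatus.FAILED))
        return results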

Where the pattern scales

The same split showed up again in an inbox module that processes email attachments. The planning phase is more involved there: it checks filing rules, looks up cached outcomes from previous runs, detects whether the target file already exists on disk. Each attachment gets classified by operation type - import, skip, ignore - based on rules, prior outcomes, and filesystem state. That's a lot of branching logic. But it runs without downloading a single attachment or writing a single file.
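
The attachment objects, rule engine, and outcome cache are the module's own, so the following is only a sketch of the shape of that planning step - every name in it is illustrative:

from enum import Enum, auto
from pathlib import Path


class AttachmentAction(Enum):
    IMPORT = auto()
    SKIP = auto()
    IGNORE = auto()


def classify(attachment, rules, prior_outcomes, target_dir: Path) -> AttachmentAction:
    # Reads rules, cached outcomes, and filesystem state - writes nothing.
    if not rules.match(attachment):
        return AttachmentAction.IGNORE
    if attachment.id in prior_outcomes:
        return AttachmentAction.SKIP
    if (target_dir / attachment.filename).exists():
        return AttachmentAction.SKIP
    return AttachmentAction.IMPORT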

The execute phase iterates the classified list. IMPORT operations download and write. Everything else gets recorded and skipped. The structure is the same: a pure classification step followed by a mechanical application step.
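
The corresponding application step, in the same illustrative terms - the download happens only here, never during planning:

def apply(classified, downloader, target_dir: Path) -> list[str]:
    # Flat loop over (attachment, action) pairs produced by the planning step.
    log = []
    for attachment, action in classified:
        if action is AttachmentAction.IMPORT:
            data = downloader.fetch(attachment)
            (target_dir / attachment.filename).write_bytes(data)
            log.append(f'imported {attachment.filename}')
        else:
            log.append(f'{action.name.lower()}: {attachment.filename}')
    return log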

At the HTTP level, the pattern becomes two endpoints - the same plan-execute split that Terraform made famous. A /plan route returns the operation list. The frontend renders it in a preview modal - a table of current names and proposed names with an "Execute" button. The /execute route receives the confirmed list and runs it. The user always sees the plan before they confirm it.
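
A sketch of those two routes, assuming FastAPI and Pydantic (the actual framework isn't shown here), the command classes from above, a zero-argument Context(), and a source/target string wire format:

from pathlib import Path

from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()
ctx = Context()  # project context; zero-argument construction is an assumption


class PlanRequest(BaseModel):
    files: list[str]
    start_value: int = 1


class Operation(BaseModel):
    source: str
    target: str
    status: str = 'PENDING'


@app.post('/plan')
def plan(req: PlanRequest) -> list[Operation]:
    ops = PlanEnumerationCommand([Path(f) for f in req.files], req.start_value).execute(ctx)
    # This list is what the preview modal renders as the before/after table.
    return [Operation(source=str(o.source_path), target=str(o.target_path)) for o in ops]


@app.post('/execute')
def execute(operations: list[Operation]) -> list[Operation]:
    ops = [FileOperation(Path(o.source), Path(o.target)) for o in operations]
    results = ExecutePlanCommand(ops).execute(ctx)
    return [
        Operation(source=str(r.source_path), target=str(r.target_path), status=r.status.name)
        for r in results
    ]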

The real payoff isn't the preview UI, though. It's testability. The plan command is pure input-to-output. Give it a list of paths, get back operations. No filesystem setup, no temp directories, no cleanup. The execute command needs a filesystem, but its logic is a flat loop - rename, catch, report. The combinatorial complexity lives entirely in the plan, where it's cheapest to test.
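
A pytest-style example of what that buys, using the commands as sketched above and assuming Context can be constructed without arguments - the paths never have to exist:

from pathlib import Path


def test_plan_numbers_sorted_files_from_start_value():
    cmd = PlanEnumerationCommand(
        files=[Path('/photos/b.jpg'), Path('/photos/a.jpg')],
        start_value=3,
    )

    ops = cmd.execute(Context())  # nothing on disk is read or touched

    assert [op.target_path.name for op in ops] == ['3.a.jpg', '4.b.jpg']
    assert all(op.status is OperationStatus.PENDING for op in ops)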