Automating Rendering Workflows with Claude Code Skills
How architects can use Claude Code skills to automate V-Ray, Lumion, Enscape, and Blender rendering pipelines for architectural visualization
Architectural visualization is one of the most time-intensive parts of any project. Between setting up materials, configuring lighting, running batch renders, and post-processing final images, a single presentation set can consume days of work. Most of that time is not spent on creative decisions. It goes to repetitive setup, file management, and manual quality checks that follow the same pattern every single time.
Claude Code skills offer a way to automate these repeatable steps. A skill is a reusable instruction set that Claude Code executes on command, handling everything from generating V-Ray scene files to batch-processing Blender renders to compiling final client deliverables. This post walks through practical skills you can build today, with real code examples and scripts that target the rendering tools architects actually use.
Why Rendering Pipelines Need Automation
The typical architectural rendering workflow involves at least six distinct stages: scene preparation, material assignment, lighting setup, camera placement, rendering, and post-processing. Each stage has its own set of repetitive tasks. Material libraries need to be loaded and mapped to surfaces. Render settings need to be configured for the correct resolution, quality level, and output format. Post-processing requires the same adjustments applied to every image in a set.
The cost of doing this manually is not just time. It is consistency. When you set up 15 camera views by hand, small differences creep in. One view renders at slightly different exposure. Another uses the wrong material override. A third exports as PNG instead of EXR. These inconsistencies create rework during client review, and they compound across project phases.
Automation targets exactly these pain points. A Claude Code skill can enforce consistent render settings across every view, apply the same post-processing pipeline to every output image, and generate batch scripts that queue renders overnight. The creative decisions stay with you. The mechanical execution becomes reliable and fast.
The Rendering Tool Landscape for Architects
Before diving into specific skills, it helps to understand what each major rendering tool exposes for automation.
V-Ray (for SketchUp, Rhino, Revit, and 3ds Max) offers deep scripting access through its scene description files (.vrscene) and the V-Ray Application SDK. You can define materials, lights, cameras, and render settings entirely in text-based .vrscene files, which makes them ideal targets for automated generation.
Lumion takes a different approach. It is a standalone application with limited scripting capabilities, but it does support command-line rendering and scene template files. Automation here focuses on template management, camera path presets, and style library organization.
Enscape integrates tightly with Revit, SketchUp, and Rhino. It exposes view presets and render settings through its configuration files. Automation works best for managing view preset libraries and batch-exporting from the host application.
Blender is the most scriptable option by far. Its entire interface is accessible through Python, and Cycles/EEVEE render settings can be fully controlled via bpy (the Blender Python API). For architects who use Blender for visualization, this opens up comprehensive automation possibilities.
Twinmotion supports command-line export and Datasmith file import from Revit/Rhino, but its automation surface is limited compared to the others. Automation here is primarily about file pipeline management rather than render configuration.
Skill: V-Ray Material Library Manager and Scene Setup
V-Ray’s .vrscene format is plain text, which means Claude Code can generate, modify, and organize scene files directly. A material library manager skill maintains a catalog of architectural materials and inserts them into scenes on demand.
Here is a practical .vrscene snippet for a concrete material that the skill would generate:
// Archgyan Material Library - Exposed Concrete 01
BRDFVRayMtl concrete_exposed_01 {
  diffuse=AColor(0.65, 0.63, 0.60, 1.0);
  roughness=0.75;
  reflect=AColor(0.04, 0.04, 0.04, 1.0);
  reflect_glossiness=0.3;
  option_use_roughness=1;
  bump_map=TexBitmap {
    file="//materials/concrete/exposed_concrete_01_bump.jpg";
    uvwgen=UVWGenChannel { uvw_channel=1; };
  };
  bump_mult=2.0;
}
The skill maintains a JSON catalog of materials organized by category:
{
  "materials": {
    "concrete": {
      "exposed_concrete_01": {
        "vrscene_file": "materials/concrete/exposed_concrete_01.vrscene",
        "preview": "materials/concrete/exposed_concrete_01_preview.jpg",
        "tags": ["exterior", "facade", "raw"],
        "roughness_range": [0.6, 0.9]
      },
      "polished_concrete_01": {
        "vrscene_file": "materials/concrete/polished_concrete_01.vrscene",
        "preview": "materials/concrete/polished_concrete_01_preview.jpg",
        "tags": ["interior", "floor", "polished"],
        "roughness_range": [0.1, 0.3]
      }
    },
    "wood": {},
    "glass": {},
    "metal": {}
  }
}
When you invoke the skill with a request like “set up a residential interior scene with oak flooring, white walls, and steel fixtures,” Claude Code reads the catalog, selects matching materials, and generates a composite .vrscene file with all material definitions ready to import. This eliminates the manual process of opening the asset browser, searching for each material, loading it, and adjusting parameters one at a time.
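The catalog-to-composite step is straightforward to sketch. The following is a minimal illustration of how such a skill might merge matching materials into one include file — the helper name `build_composite_scene` and the assumption that each catalog entry's `vrscene_file` path is relative to the catalog are mine, not part of the post's skill:

```python
import json
from pathlib import Path

def build_composite_scene(catalog_path, tags, output_path):
    """Concatenate every catalog material whose tags overlap the
    requested tags into one composite .vrscene include file."""
    catalog = json.loads(Path(catalog_path).read_text())
    selected = []
    for category, materials in catalog["materials"].items():
        for name, entry in materials.items():
            if set(entry.get("tags", [])) & set(tags):
                selected.append((name, entry["vrscene_file"]))

    lines = ["// Composite material include - generated"]
    for name, rel_path in selected:
        src = Path(catalog_path).parent / rel_path
        lines.append(f"// --- {name} ---")
        # Keep a marker comment rather than failing if a file is missing
        lines.append(src.read_text() if src.exists() else f"// MISSING: {rel_path}")
    Path(output_path).write_text("\n".join(lines) + "\n")
    return [name for name, _ in selected]
```

Invoked with `tags=["interior", "floor"]`, it would pull in `polished_concrete_01` from the catalog above and emit one file ready to import.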
The scene setup portion of the skill also handles render settings. It generates a settings block based on the output type you specify:
// Draft quality - fast iteration
SettingsOutput {
  img_width=1920;
  img_height=1080;
  img_file="output/draft_${view_name}.jpg";
  img_file_needFrameNumber=0;
}
SettingsImageSampler {
  type=3; // Progressive
  min_shade_rate=6;
  progressive_minSubdivs=1;
  progressive_maxSubdivs=50;
  progressive_threshold=0.05;
}
For final production renders, the skill switches to higher subdivision counts, lower noise thresholds, and EXR output with render elements (diffuse, reflection, refraction, lighting passes) for compositing.
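A production counterpart to the draft block might look like the sketch below. The specific values and the render-element `alias` IDs are illustrative placeholders — check the V-Ray render-element reference for your version before relying on them:

```
// Final quality - production output with render elements (values illustrative)
SettingsOutput {
  img_width=4000;
  img_height=2250;
  img_file="output/final_${view_name}.exr";
  img_file_needFrameNumber=0;
}
SettingsImageSampler {
  type=3; // Progressive
  min_shade_rate=6;
  progressive_minSubdivs=1;
  progressive_maxSubdivs=100;   // more refinement than draft
  progressive_threshold=0.005;  // 10x stricter noise threshold
}
// Render elements for compositing (alias IDs vary by V-Ray version)
RenderChannelColor channel_diffuse    { name="Diffuse";    alias=101; }
RenderChannelColor channel_reflection { name="Reflection"; alias=102; }
```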
Skill: Batch Rendering Script Generator
Batch rendering is where automation delivers the most obvious time savings. Instead of manually starting each render and waiting, a batch script queues every view and runs them sequentially or in parallel.
For V-Ray Standalone, the skill generates a batch script like this:
#!/bin/bash
# Generated by Archgyan Rendering Automation Skill
# Project: Riverside Residence
# Date: 2026-04-09
# Views: 8 exterior, 6 interior
VRAY_BIN="/opt/chaos/vray/bin/vray"
SCENE_DIR="./scenes"
OUTPUT_DIR="./output/final"
mkdir -p "$OUTPUT_DIR"
VIEWS=(
  "ext_street_view"
  "ext_garden_elevation"
  "ext_entrance_detail"
  "ext_aerial_context"
  "int_living_room"
  "int_kitchen"
  "int_master_bedroom"
  "int_bathroom"
)

for view in "${VIEWS[@]}"; do
  echo "[$(date +%H:%M:%S)] Starting render: $view"
  "$VRAY_BIN" \
    -sceneFile="$SCENE_DIR/${view}.vrscene" \
    -imgFile="$OUTPUT_DIR/${view}.exr" \
    -imgWidth=4000 \
    -imgHeight=2250 \
    -display=0 \
    -progressIncrement=10
  echo "[$(date +%H:%M:%S)] Completed: $view"
done

echo "All renders complete. Output in: $OUTPUT_DIR"
For Blender Cycles, the skill generates a Python script that can be run headlessly:
"""
Blender Batch Render Script
Generated by Archgyan Rendering Automation Skill
Usage: blender --background project.blend --python batch_render.py
"""
import bpy
import os
import time
OUTPUT_DIR = "//output/final/"
CAMERA_VIEWS = {
"ext_street": {"camera": "Camera_Street", "samples": 512, "resolution": (4000, 2250)},
"ext_garden": {"camera": "Camera_Garden", "samples": 512, "resolution": (4000, 2250)},
"int_living": {"camera": "Camera_Living", "samples": 256, "resolution": (4000, 2250)},
"int_kitchen": {"camera": "Camera_Kitchen", "samples": 256, "resolution": (4000, 2250)},
"int_detail_01": {"camera": "Camera_Detail_01", "samples": 1024, "resolution": (3000, 3000)},
}
def configure_render_settings(samples, resolution):
scene = bpy.context.scene
scene.render.engine = 'CYCLES'
scene.cycles.device = 'GPU'
scene.cycles.samples = samples
scene.cycles.use_denoising = True
scene.cycles.denoiser = 'OPENIMAGEDENOISE'
scene.render.resolution_x = resolution[0]
scene.render.resolution_y = resolution[1]
scene.render.resolution_percentage = 100
scene.render.image_settings.file_format = 'OPEN_EXR_MULTILAYER'
scene.render.image_settings.color_depth = '32'
# Enable render passes for compositing
view_layer = scene.view_layers[0]
view_layer.use_pass_diffuse_color = True
view_layer.use_pass_glossy_color = True
view_layer.use_pass_emit = True
view_layer.use_pass_shadow = True
view_layer.use_pass_ambient_occlusion = True
def set_active_camera(camera_name):
cam_obj = bpy.data.objects.get(camera_name)
if cam_obj is None:
print(f"WARNING: Camera '{camera_name}' not found, skipping.")
return False
bpy.context.scene.camera = cam_obj
return True
def run_batch():
total = len(CAMERA_VIEWS)
for idx, (view_name, settings) in enumerate(CAMERA_VIEWS.items(), 1):
print(f"\n[{idx}/{total}] Rendering: {view_name}")
start_time = time.time()
if not set_active_camera(settings["camera"]):
continue
configure_render_settings(settings["samples"], settings["resolution"])
output_path = os.path.join(OUTPUT_DIR, view_name)
bpy.context.scene.render.filepath = output_path
bpy.ops.render.render(write_still=True)
elapsed = time.time() - start_time
minutes = int(elapsed // 60)
seconds = int(elapsed % 60)
print(f" Completed in {minutes}m {seconds}s")
print(f"\nBatch complete. {total} views rendered to {OUTPUT_DIR}")
if __name__ == "__main__":
run_batch()
The skill adapts the script based on your inputs. Tell it you need “8 exterior views at 4K with 512 samples and 6 interior views at 256 samples,” and it generates the complete configuration. It also adds GPU detection logic and fallback to CPU rendering when needed.
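The device-selection part of that fallback can be reduced to a small pure function. This is a sketch of my own, not the skill's actual code: the preference order and the hypothetical helper name `choose_cycles_device` are assumptions, and the bpy wiring in the docstring is a rough outline rather than a verified recipe for every Blender version:

```python
def choose_cycles_device(available_backends):
    """Pick the strongest available Cycles compute backend, falling
    back to CPU when no GPU backend is detected.

    Inside Blender, `available_backends` would be derived from the
    Cycles add-on preferences (sketch, not version-verified):
        prefs = bpy.context.preferences.addons['cycles'].preferences
        backends = {d.type for d in prefs.devices}
    and the result used to set scene.cycles.device to 'GPU' or 'CPU'.
    """
    # Preference order is an assumption: NVIDIA RT cores first, then
    # generic CUDA, AMD, Apple, Intel backends.
    for backend in ("OPTIX", "CUDA", "HIP", "METAL", "ONEAPI"):
        if backend in available_backends:
            return backend
    return "CPU"  # safe fallback when no GPU backend is present
```

Keeping the decision in one function makes the generated batch script easy to test without launching Blender at all.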
Skill: Lumion Scene Template and Style Automation
Lumion’s automation surface is more limited than V-Ray or Blender, but there are still significant time savings available through template and style management.
The skill maintains a library of Lumion style configurations as JSON presets:
{
  "styles": {
    "warm_golden_hour": {
      "description": "Warm exterior lighting, late afternoon sun",
      "effects": {
        "sun_study": {"month": 6, "hour": 17.5, "heading": 245},
        "sky_light": {"brightness": 1.2, "shadow_softness": 0.7},
        "hyperlight": true,
        "color_correction": {"saturation": 1.1, "contrast": 1.05, "brightness": 1.0}
      },
      "camera": {"fov": 35, "height_offset": 1.6}
    },
    "overcast_scandinavian": {
      "description": "Soft diffused light, neutral tones, Nordic aesthetic",
      "effects": {
        "real_skies": {"preset": "overcast_02"},
        "sky_light": {"brightness": 0.9, "shadow_softness": 1.0},
        "color_correction": {"saturation": 0.85, "contrast": 0.95, "temperature": -0.1}
      },
      "camera": {"fov": 28, "height_offset": 1.5}
    },
    "night_interior": {
      "description": "Warm interior lighting with visible exterior darkness",
      "effects": {
        "sun_study": {"month": 12, "hour": 20.0},
        "sky_light": {"brightness": 0.3},
        "interior_lights_boost": 1.5,
        "color_correction": {"saturation": 1.0, "contrast": 1.1, "temperature": 0.15}
      },
      "camera": {"fov": 32, "height_offset": 1.2}
    }
  }
}
When starting a new project, you tell the skill “set up a Lumion scene for a residential project with golden hour exterior and warm interior styles.” The skill generates a setup checklist with exact parameter values for each effect, organized by view type. This is not full programmatic control of Lumion, but it eliminates the guesswork of configuring effects from scratch every time.
The skill also generates a Lumion camera path template. For walkthroughs, it calculates camera positions based on your floor plan dimensions and generates a keyframe sequence that you paste into Lumion’s animation editor. This turns a 30-minute camera path setup into a 5-minute paste-and-adjust task.
Skill: Enscape View Preset Manager
Enscape stores its view presets and render settings in XML-based configuration files. The skill reads, writes, and organizes these presets across projects.
Here is an example of what the skill generates for a standardized exterior preset:
<?xml version="1.0" encoding="utf-8"?>
<EnscapeViewPreset>
  <Name>Exterior_GoldenHour_Standard</Name>
  <RenderQuality>Ultra</RenderQuality>
  <Resolution>
    <Width>3840</Width>
    <Height>2160</Height>
  </Resolution>
  <WhiteMode>false</WhiteMode>
  <Exposure>
    <AutoExposure>false</AutoExposure>
    <ExposureBrightness>9.5</ExposureBrightness>
  </Exposure>
  <Atmosphere>
    <SunBrightness>120</SunBrightness>
    <SkyBrightness>80</SkyBrightness>
    <ShadowSharpness>0.6</ShadowSharpness>
  </Atmosphere>
  <OutputSettings>
    <FileFormat>PNG</FileFormat>
    <IncludeAlpha>true</IncludeAlpha>
  </OutputSettings>
</EnscapeViewPreset>
The real power of this skill is cross-project consistency. It maintains a master preset library and can push updates to all active projects simultaneously. When you decide that your firm’s standard exterior exposure should change from 9.5 to 10.0, the skill updates every project’s preset file in one operation.
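That push-to-all-projects operation reduces to walking project folders and rewriting one XML node. The sketch below assumes preset files shaped like the example above live somewhere under a projects root; the helper name `push_preset_value` and the `tag_path` convention are mine:

```python
import xml.etree.ElementTree as ET
from pathlib import Path

def push_preset_value(projects_root, preset_name, tag_path, new_value):
    """Update one setting in every matching preset file under
    projects_root. tag_path is a '/'-separated path inside the preset
    XML, e.g. 'Exposure/ExposureBrightness'. Returns updated files."""
    updated = []
    for preset_file in Path(projects_root).rglob("*.xml"):
        tree = ET.parse(preset_file)
        root = tree.getroot()
        # Only touch files holding the named preset
        if root.findtext("Name") != preset_name:
            continue
        node = root.find(tag_path)
        if node is None:
            continue
        node.text = str(new_value)
        tree.write(preset_file, encoding="utf-8", xml_declaration=True)
        updated.append(str(preset_file))
    return updated
```

The firm-wide exposure change described above would then be one call: `push_preset_value("projects/", "Exterior_GoldenHour_Standard", "Exposure/ExposureBrightness", 10.0)`.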
The skill also generates Enscape batch export scripts for Revit. Since Enscape integrates with Revit’s view system, the skill creates a Dynamo script or IronPython snippet that iterates through named views and triggers Enscape renders for each one:
# IronPython script for Revit + Enscape batch export
import clr
clr.AddReference('RevitAPI')
clr.AddReference('RevitAPIUI')
from Autodesk.Revit.DB import FilteredElementCollector, View3D

doc = __revit__.ActiveUIDocument.Document

# Collect all 3D views, skipping view templates
views = FilteredElementCollector(doc).OfClass(View3D).ToElements()
render_views = [v for v in views if not v.IsTemplate and v.Name.startswith("RENDER_")]

for view in render_views:
    # Activate the view
    __revit__.ActiveUIDocument.ActiveView = view
    # Trigger Enscape capture (via Enscape API if available)
    print("Queued for render: {}".format(view.Name))
Skill: Post-Processing Pipeline Automation
After renders are complete, post-processing typically involves the same adjustments applied to every image: exposure correction, color grading, sharpening, vignette, and sometimes adding people, trees, or atmosphere overlays. Doing this manually in Photoshop for 15 images is tedious and error-prone.
The skill generates Photoshop action scripts (JSX) that automate the entire post-processing pipeline:
// Photoshop JSX - Architectural Post-Processing Pipeline
// Generated by Archgyan Rendering Automation Skill
#target photoshop

var inputFolder = Folder.selectDialog("Select render output folder");
var outputFolder = new Folder(inputFolder.fullName + "/processed");
if (!outputFolder.exists) outputFolder.create();

var files = inputFolder.getFiles(/\.(exr|tif|tiff|png|jpg)$/i);

for (var i = 0; i < files.length; i++) {
    var doc = app.open(files[i]);

    // Step 1: Exposure adjustment via a curves lift
    applyCurvesAdjustment(doc, [[0, 0], [64, 58], [128, 140], [192, 210], [255, 255]]);

    // Step 2: Color temperature warm shift
    applyColorBalance(doc, {highlights: [8, 0, -5], midtones: [5, 0, -3]});

    // Step 3: Flatten and sharpen for output
    doc.flatten();
    applySmartSharpen(doc, {amount: 80, radius: 1.2, threshold: 0});

    // Step 4: Add subtle vignette
    applyVignette(doc, {amount: -25, midpoint: 40});

    // Step 5: Resize for presentation (if needed)
    if (doc.width.as("px") > 3840) {
        doc.resizeImage(new UnitValue(3840, "px"), null, 300, ResampleMethod.BICUBICSHARPER);
    }

    // Save processed version
    var outputFile = new File(outputFolder.fullName + "/" + doc.name.replace(/\.[^.]+$/, "_final.jpg"));
    var jpgOptions = new JPEGSaveOptions();
    jpgOptions.quality = 11;
    doc.saveAs(outputFile, jpgOptions, true);
    doc.close(SaveOptions.DONOTSAVECHANGES);
}
alert("Processed " + files.length + " images. Output: " + outputFolder.fullName);
function applyCurvesAdjustment(doc, points) {
    // Curves adjustment via the Action Manager API; applyColorBalance,
    // applySmartSharpen, and applyVignette follow the same pattern.
    var desc = new ActionDescriptor();
    var curvesDesc = new ActionDescriptor();
    var pointList = new ActionList();
    for (var p = 0; p < points.length; p++) {
        var point = new ActionDescriptor();
        point.putDouble(charIDToTypeID("Hrzn"), points[p][0]);
        point.putDouble(charIDToTypeID("Vrtc"), points[p][1]);
        pointList.putObject(charIDToTypeID("Pnt "), point);
    }
    curvesDesc.putList(charIDToTypeID("Crv "), pointList);
    desc.putObject(charIDToTypeID("Adjs"), charIDToTypeID("Crvs"), curvesDesc);
    executeAction(charIDToTypeID("Crvs"), desc, DialogModes.NO);
}
For teams that prefer open-source tools, the skill also generates Python scripts using Pillow or OpenCV for basic post-processing, and ImageMagick command-line pipelines for batch operations:
#!/bin/bash
# ImageMagick batch post-processing pipeline
# Applies: brightness, contrast, warmth, sharpen, vignette
INPUT_DIR="./output/raw"
OUTPUT_DIR="./output/processed"
mkdir -p "$OUTPUT_DIR"
for img in "$INPUT_DIR"/*.{png,jpg,tif}; do
  [ -f "$img" ] || continue
  filename=$(basename "$img")
  output_name="${filename%.*}_final.jpg"
  magick "$img" \
    -brightness-contrast 5x8 \
    -modulate 100,105,102 \
    -unsharp 0x1.2+0.8+0.02 \
    -gravity center \
    -background black \
    -vignette 0x150 \
    -resize "3840x3840>" \
    -quality 92 \
    "$OUTPUT_DIR/$output_name"
  echo "Processed: $output_name"
done
echo "Done. Output in: $OUTPUT_DIR"
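The Pillow variant mentioned above can mirror the same chain. This is a minimal sketch, not the skill's actual output: the function name `post_process` and the enhancement factors are illustrative stand-ins for the ImageMagick parameters:

```python
from PIL import Image, ImageEnhance, ImageFilter

def post_process(in_path, out_path, brightness=1.05, contrast=1.08,
                 saturation=1.05, sharpen_percent=120):
    """Apply a brightness / contrast / saturation / sharpen chain
    roughly analogous to the ImageMagick pipeline, using Pillow."""
    img = Image.open(in_path).convert("RGB")
    img = ImageEnhance.Brightness(img).enhance(brightness)
    img = ImageEnhance.Contrast(img).enhance(contrast)
    img = ImageEnhance.Color(img).enhance(saturation)
    # Unsharp mask approximates ImageMagick's -unsharp operator
    img = img.filter(ImageFilter.UnsharpMask(radius=1.2,
                                             percent=sharpen_percent,
                                             threshold=2))
    img.save(out_path, quality=92)
    return out_path
```

Looping it over a render folder gives the same batch behavior as the shell script, with the advantage that the adjustment values live in one testable Python function.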
Skill: Rendering Checklist and Quality Assurance
One of the most valuable skills is not about rendering itself. It is about catching mistakes before you hit the render button. A QA checklist skill inspects your scene files and flags common problems that waste render time.
The skill runs through a structured checklist:
rendering_qa_checklist:
  scene_integrity:
    - check: "All materials assigned"
      method: "Scan for default/missing material assignments"
      severity: critical
    - check: "No overlapping geometry"
      method: "Check for z-fighting surfaces within 0.1mm"
      severity: warning
    - check: "Lights properly configured"
      method: "Verify light intensity values are within realistic range"
      severity: critical
  render_settings:
    - check: "Output resolution matches deliverable spec"
      method: "Compare render resolution to project requirements"
      severity: critical
    - check: "File format includes alpha channel"
      method: "Verify EXR/PNG alpha for compositing views"
      severity: warning
    - check: "Render passes enabled"
      method: "Check that diffuse, reflection, and shadow passes are active"
      severity: info
  optimization:
    - check: "No hidden high-poly objects consuming memory"
      method: "List hidden objects with polygon count > 100k"
      severity: warning
    - check: "Texture resolution appropriate"
      method: "Flag textures > 8K that are not in close-up views"
      severity: info
    - check: "GI settings match quality tier"
      method: "Verify irradiance map/brute force settings for draft vs final"
      severity: critical
  output_verification:
    - check: "Output folder exists and is writable"
      method: "Test write permissions on output directory"
      severity: critical
    - check: "Sufficient disk space"
      method: "Estimate output size and verify available space"
      severity: critical
    - check: "Naming convention followed"
      method: "Verify output filenames match project naming standard"
      severity: warning
For Blender specifically, the skill generates a Python validation script that runs before the batch render:
"""
Pre-render QA validation for Blender
Run before batch rendering to catch common issues
Usage: blender --background project.blend --python pre_render_qa.py
"""
import bpy
import os
issues = {"critical": [], "warning": [], "info": []}
def check_materials():
"""Flag objects with no material or default material."""
for obj in bpy.data.objects:
if obj.type != 'MESH' or obj.hide_render:
continue
if len(obj.data.materials) == 0:
issues["critical"].append(
f"Object '{obj.name}' has no material assigned"
)
for slot in obj.material_slots:
if slot.material is None:
issues["critical"].append(
f"Object '{obj.name}' has an empty material slot"
)
def check_cameras():
    """Verify all render cameras exist and have valid settings."""
    cameras = [obj for obj in bpy.data.objects if obj.type == 'CAMERA']
    if len(cameras) == 0:
        issues["critical"].append("No cameras found in scene")
        return
    for cam in cameras:
        if cam.data.clip_end < 100:
            issues["warning"].append(
                f"Camera '{cam.name}' clip end is {cam.data.clip_end}m, "
                "may clip distant geometry"
            )
        if cam.data.lens < 18 or cam.data.lens > 85:
            issues["info"].append(
                f"Camera '{cam.name}' focal length {cam.data.lens}mm "
                "is outside typical architectural range (18-85mm)"
            )
def check_render_settings():
    """Validate render configuration."""
    scene = bpy.context.scene
    if scene.render.engine == 'CYCLES':
        if scene.cycles.samples < 64:
            issues["warning"].append(
                f"Render samples ({scene.cycles.samples}) very low, "
                "expect noisy output"
            )
        if not scene.cycles.use_denoising:
            issues["info"].append("Denoising is disabled")
    # Check output path
    output_path = bpy.path.abspath(scene.render.filepath)
    output_dir = os.path.dirname(output_path)
    if not os.path.exists(output_dir):
        issues["critical"].append(
            f"Output directory does not exist: {output_dir}"
        )
def check_memory():
    """Estimate scene complexity."""
    total_polys = sum(
        len(obj.data.polygons)
        for obj in bpy.data.objects
        if obj.type == 'MESH' and not obj.hide_render
    )
    if total_polys > 10_000_000:
        issues["warning"].append(
            f"Scene has {total_polys:,} visible polygons, "
            "may require significant memory"
        )
    # Check for oversized textures
    for img in bpy.data.images:
        if img.size[0] > 8192 or img.size[1] > 8192:
            issues["info"].append(
                f"Texture '{img.name}' is {img.size[0]}x{img.size[1]}, "
                "consider downscaling for non-hero views"
            )
# Run all checks
check_materials()
check_cameras()
check_render_settings()
check_memory()

# Report
print("\n" + "=" * 60)
print("PRE-RENDER QA REPORT")
print("=" * 60)
for severity in ["critical", "warning", "info"]:
    if issues[severity]:
        print(f"\n[{severity.upper()}] ({len(issues[severity])} issues)")
        for issue in issues[severity]:
            print(f"  - {issue}")

total = sum(len(v) for v in issues.values())
critical_count = len(issues["critical"])
if critical_count > 0:
    print(f"\nBLOCKED: {critical_count} critical issues must be resolved before rendering.")
elif total > 0:
    print(f"\nPASS WITH WARNINGS: {total} non-critical issues found. Review before rendering.")
else:
    print("\nALL CLEAR: No issues detected. Ready to render.")
This script catches the mistakes that silently waste hours of render time: missing materials that produce black surfaces, cameras with incorrect clipping, output directories that do not exist, and scenes that are too heavy for available memory.
Skill: Client Presentation Image Set Compiler
The final step in any rendering workflow is assembling the deliverable set. This means collecting rendered images, applying consistent naming, adding watermarks or title blocks, generating contact sheets, and packaging everything for the client. Doing this by hand for every presentation is slow and invites errors.
The skill automates the entire compilation process:
"""
Client Presentation Compiler
Collects rendered images, applies branding, generates contact sheet, packages deliverable.
"""
import os
import json
from PIL import Image, ImageDraw, ImageFont
from datetime import datetime
class PresentationCompiler:
def __init__(self, config_path):
with open(config_path) as f:
self.config = json.load(f)
def apply_title_block(self, image_path, view_name, output_path):
"""Add firm branding and view information to bottom of image."""
img = Image.open(image_path)
width, height = img.size
# Create title block strip (5% of image height)
bar_height = int(height * 0.05)
bar = Image.new('RGBA', (width, bar_height), (30, 30, 30, 230))
draw = ImageDraw.Draw(bar)
font_size = int(bar_height * 0.45)
try:
font = ImageFont.truetype(self.config["font_path"], font_size)
font_small = ImageFont.truetype(self.config["font_path"], int(font_size * 0.7))
except (OSError, KeyError):
font = ImageFont.load_default()
font_small = font
# Left side: project name and view
project = self.config.get("project_name", "Untitled Project")
draw.text((20, bar_height * 0.15), project, fill="white", font=font)
draw.text((20, bar_height * 0.55), view_name, fill=(180, 180, 180), font=font_small)
# Right side: firm name and date
firm = self.config.get("firm_name", "")
date_str = datetime.now().strftime("%B %Y")
draw.text(
(width - 300, bar_height * 0.15),
firm, fill="white", font=font_small
)
draw.text(
(width - 300, bar_height * 0.55),
date_str, fill=(180, 180, 180), font=font_small
)
# Composite title block onto image
result = Image.new('RGB', (width, height + bar_height))
result.paste(img, (0, 0))
result.paste(bar, (0, height), bar)
result.save(output_path, quality=95)
    def generate_contact_sheet(self, image_paths, output_path, cols=3):
        """Create a grid overview of all views."""
        thumb_width = 800
        thumb_height = 450
        padding = 20
        rows = (len(image_paths) + cols - 1) // cols
        sheet_width = cols * (thumb_width + padding) + padding
        sheet_height = rows * (thumb_height + padding + 40) + padding

        sheet = Image.new('RGB', (sheet_width, sheet_height), (245, 245, 245))
        draw = ImageDraw.Draw(sheet)

        for idx, img_path in enumerate(image_paths):
            row = idx // cols
            col = idx % cols
            x = padding + col * (thumb_width + padding)
            y = padding + row * (thumb_height + padding + 40)

            thumb = Image.open(img_path)
            thumb.thumbnail((thumb_width, thumb_height), Image.LANCZOS)
            sheet.paste(thumb, (x, y))

            # Add view name below thumbnail
            view_name = os.path.splitext(os.path.basename(img_path))[0]
            draw.text((x, y + thumb_height + 5), view_name, fill=(60, 60, 60))

        sheet.save(output_path, quality=90)
    def compile(self, render_dir, output_dir):
        """Full compilation pipeline."""
        os.makedirs(output_dir, exist_ok=True)

        # Collect rendered images
        extensions = {'.jpg', '.jpeg', '.png', '.tif', '.tiff'}
        images = sorted([
            os.path.join(render_dir, f)
            for f in os.listdir(render_dir)
            if os.path.splitext(f)[1].lower() in extensions
        ])

        branded_images = []
        for img_path in images:
            view_name = os.path.splitext(os.path.basename(img_path))[0]
            output_path = os.path.join(output_dir, f"{view_name}_branded.jpg")
            self.apply_title_block(img_path, view_name, output_path)
            branded_images.append(output_path)

        # Generate contact sheet
        contact_path = os.path.join(output_dir, "00_contact_sheet.jpg")
        self.generate_contact_sheet(branded_images, contact_path)

        print(f"Compiled {len(branded_images)} images to {output_dir}")
        print(f"Contact sheet: {contact_path}")
The configuration file for the compiler holds project-specific details:
{
  "project_name": "Riverside Residence",
  "firm_name": "Studio Architecture",
  "font_path": "/fonts/Helvetica-Neue.ttf",
  "output_format": "jpg",
  "output_quality": 95,
  "contact_sheet_columns": 3,
  "naming_convention": "{project}_{view_type}_{sequence:02d}"
}
You tell the skill “compile the presentation set for Riverside Residence,” and it collects all rendered images, applies title blocks, generates a contact sheet overview, and organizes everything into a delivery-ready folder. What used to take 45 minutes of Photoshop work happens in seconds.
Working with Blender’s Python API for Architectural Rendering
Blender deserves a deeper look because its Python API (bpy) exposes virtually everything. For architects who use Blender for visualization, this creates opportunities that go beyond simple batch rendering.
Here is a practical example: a script that sets up an architectural lighting rig based on room type and time of day:
"""
Architectural Lighting Setup for Blender
Creates sun + sky lighting based on location and time parameters.
"""
import bpy
import math
def setup_architectural_lighting(
latitude=40.7128,
longitude=-74.0060,
month=6,
hour=17.0,
north_offset=0,
interior=False
):
"""
Configure sun position and world lighting for architectural rendering.
Args:
latitude: Site latitude (default: New York)
longitude: Site longitude
month: Month of year (1-12)
hour: Hour of day (0-24, decimal for minutes)
north_offset: Rotation offset for north direction in degrees
interior: If True, adds area lights for window bounced light
"""
scene = bpy.context.scene
# Enable Sun Position addon
bpy.ops.preferences.addon_enable(module="sun_position")
# Configure sun
sun_props = scene.sun_pos_properties
sun_props.usage_mode = 'NORMAL'
sun_props.latitude = latitude
sun_props.longitude = longitude
sun_props.month = month
sun_props.time = hour
sun_props.UTC_zone = -5 # EST
sun_props.north_offset = math.radians(north_offset)
    # Create sun lamp if it does not exist
    sun = bpy.data.objects.get("Sun_Architectural")
    if sun is None:
        bpy.ops.object.light_add(type='SUN', location=(0, 0, 10))
        sun = bpy.context.active_object
        sun.name = "Sun_Architectural"
    sun.data.energy = 5.0
    sun.data.angle = math.radians(0.545)  # Realistic sun disc size

    # Configure world (sky)
    world = scene.world
    if world is None:
        world = bpy.data.worlds.new("World_Architectural")
        scene.world = world
    world.use_nodes = True
    nodes = world.node_tree.nodes
    links = world.node_tree.links
    nodes.clear()

    # Sky Texture node
    sky_node = nodes.new('ShaderNodeTexSky')
    sky_node.sky_type = 'NISHITA'
    sky_node.sun_elevation = math.radians(max(5, 90 - abs(latitude - 23.5)))
    sky_node.sun_rotation = math.radians(hour * 15 - 180)
    sky_node.altitude = 0
    sky_node.air_density = 1.0
    sky_node.dust_density = 0.5

    bg_node = nodes.new('ShaderNodeBackground')
    bg_node.inputs['Strength'].default_value = 1.0
    output_node = nodes.new('ShaderNodeOutputWorld')

    links.new(sky_node.outputs['Color'], bg_node.inputs['Color'])
    links.new(bg_node.outputs['Background'], output_node.inputs['Surface'])

    # For interiors, add area lights simulating window bounce
    if interior:
        add_window_bounce_lights(scene)

    print(f"Lighting configured: Lat {latitude}, Month {month}, Hour {hour}")
def add_window_bounce_lights(scene):
    """Add soft area lights to simulate light bouncing through windows."""
    # Find objects named 'Window_*' and place area lights at their locations
    windows = [
        obj for obj in bpy.data.objects
        if obj.name.startswith("Window_") and obj.type == 'MESH'
    ]
    for window in windows:
        light_name = f"Bounce_{window.name}"
        existing = bpy.data.objects.get(light_name)
        if existing:
            continue
        bpy.ops.object.light_add(
            type='AREA',
            location=window.location
        )
        light = bpy.context.active_object
        light.name = light_name
        light.data.energy = 50
        light.data.size = 2.0
        light.data.color = (1.0, 0.95, 0.9)  # Slightly warm
        light.rotation_euler = window.rotation_euler
    print(f"Added {len(windows)} window bounce lights")
# Usage
setup_architectural_lighting(
    latitude=48.8566,  # Paris
    month=9,           # September
    hour=16.5,         # 4:30 PM
    north_offset=15,   # North is 15 degrees rotated
    interior=True
)
This script replaces 15 to 20 minutes of manual lighting setup. You specify the site location, time of day, and whether it is an interior or exterior shot, and the script configures everything: sun position, sky texture, light intensity, and supplementary bounce lights for interiors.
Another common task is camera setup. Architects need consistent focal lengths, heights, and compositions across views. Here is a script that creates a standardized camera set:
```python
import bpy
import mathutils

def create_architectural_cameras(views_config):
    """
    Create cameras from a configuration list.
    views_config example:
    [
        {"name": "Ext_Street", "location": (20, -15, 1.7), "target": (0, 0, 4), "lens": 35},
        {"name": "Int_Living", "location": (5, 3, 1.5), "target": (0, 0, 1.2), "lens": 24},
    ]
    """
    for view in views_config:
        # Create camera
        cam_data = bpy.data.cameras.new(name=view["name"])
        cam_data.lens = view.get("lens", 35)
        cam_data.clip_start = 0.1
        cam_data.clip_end = 1000
        cam_data.sensor_width = 36  # Full frame
        cam_obj = bpy.data.objects.new(view["name"], cam_data)
        bpy.context.collection.objects.link(cam_obj)

        # Position camera
        cam_obj.location = view["location"]

        # Point camera at target
        target = view.get("target", (0, 0, 0))
        direction = mathutils.Vector((
            target[0] - view["location"][0],
            target[1] - view["location"][1],
            target[2] - view["location"][2],
        ))
        rot_quat = direction.to_track_quat('-Z', 'Y')
        cam_obj.rotation_euler = rot_quat.to_euler()
        print(f"Created camera: {view['name']} at {view['location']}, lens={cam_data.lens}mm")
```
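Because the camera function quietly fills in defaults for missing keys, it pays to validate the configuration before handing it to Blender, where a typo only surfaces as a misplaced render hours later. A minimal sketch of such a check (the `validate_views_config` helper is hypothetical, not part of the script above, and is plain Python so it runs outside Blender too):

```python
def validate_views_config(views_config, default_lens=35):
    """Return a normalized copy of views_config, raising ValueError on bad entries."""
    normalized = []
    seen = set()
    for i, view in enumerate(views_config):
        # Required keys: without a name and location there is nothing to create
        if "name" not in view or "location" not in view:
            raise ValueError(f"View {i}: 'name' and 'location' are required")
        # Duplicate names would silently overwrite render outputs
        if view["name"] in seen:
            raise ValueError(f"Duplicate view name: {view['name']}")
        seen.add(view["name"])
        if len(view["location"]) != 3:
            raise ValueError(f"View {view['name']}: location must be (x, y, z)")
        normalized.append({
            "name": view["name"],
            "location": tuple(view["location"]),
            "target": tuple(view.get("target", (0, 0, 0))),
            "lens": view.get("lens", default_lens),
        })
    return normalized
```

Running the validator first means the defaults applied by the camera script are explicit in the normalized list, so the printed log matches what was actually rendered.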
Comparison Table: Automation Capabilities Across Rendering Tools
Not every rendering tool offers the same level of automation access. This table summarizes what is possible with each tool, so you can decide where automation skills will deliver the most value for your workflow.
| Capability | V-Ray | Blender Cycles | Lumion | Enscape | Twinmotion |
|---|---|---|---|---|---|
| Scene file scripting | Full (.vrscene text) | Full (Python bpy) | Limited | Partial (XML presets) | Minimal |
| Material automation | Full (text-based defs) | Full (Python nodes) | None | None | None |
| Batch rendering | CLI + scene files | CLI + Python | CLI export only | Via host app | CLI export only |
| Render settings control | Full programmatic | Full programmatic | Template-based | XML presets | Limited |
| Camera path scripting | Via host app SDK | Full Python | Manual only | Via host app | Manual only |
| Post-processing integration | Render elements export | Compositor nodes | Built-in effects | Built-in effects | Built-in effects |
| Headless operation | Yes (V-Ray Standalone) | Yes (background mode) | No | No | Limited |
| API/SDK available | V-Ray App SDK | bpy (Python) | None public | Limited | Datasmith import |
| Best automation target | Material + batch render | Full pipeline | Style templates | View presets | File pipeline |
The key takeaway: if you want end-to-end automation, Blender and V-Ray are your best options. For Lumion, Enscape, and Twinmotion, automation is most effective at the preparation and post-processing stages rather than inside the rendering tool itself.
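To make the "headless operation" row concrete for Blender: a batch script mostly boils down to building one background-mode CLI invocation per camera. A hedged sketch that only constructs the commands without running them (the helper name is illustrative, and it assumes `blender` is on your PATH; `--background` and `--python-expr` are standard Blender CLI flags):

```python
def build_render_commands(blend_file, cameras, output_dir, samples=256):
    """Build headless Blender CLI invocations, one still render per camera."""
    commands = []
    for camera in cameras:
        # Inline Python sets the active camera, sample count, and output
        # path, then triggers a single still render.
        expr = (
            f"import bpy; "
            f"bpy.context.scene.camera = bpy.data.objects['{camera}']; "
            f"bpy.context.scene.cycles.samples = {samples}; "
            f"bpy.context.scene.render.filepath = r'{output_dir}/{camera}'; "
            f"bpy.ops.render.render(write_still=True)"
        )
        commands.append([
            "blender", "--background", blend_file,
            "--python-expr", expr,
        ])
    return commands
```

Each command list can be handed straight to `subprocess.run`. V-Ray Standalone takes a roughly analogous form, with the scene file and output image passed as CLI arguments, which is why those two engines sit at the top of the table.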
Getting Started: Your First Rendering Automation Skill
If you have never built a Claude Code skill before, start with something simple and immediately useful. The best first skill is a batch render script generator, because it has the clearest time savings and the simplest inputs.
Here is how to set it up step by step.
Step 1: Create the skill file. In your project directory, create a file called .claude/skills/batch-render.md:
```markdown
# Batch Render Generator

Generate batch rendering scripts for architectural visualization projects.

## Inputs
- Rendering engine (V-Ray Standalone, Blender Cycles, or both)
- List of view names and their camera names
- Quality tier (draft, review, final)
- Output directory path
- Resolution (default: 4000x2250)

## Quality Presets

### Draft
- V-Ray: Progressive sampler, 50 subdivisions, noise threshold 0.05
- Blender: 64 samples, denoising on, OptiX denoiser

### Review
- V-Ray: Progressive sampler, 200 subdivisions, noise threshold 0.02
- Blender: 256 samples, denoising on, OpenImageDenoise

### Final
- V-Ray: Bucket sampler, adaptive threshold 0.005, full GI
- Blender: 512-1024 samples, denoising on, OpenImageDenoise

## Output
Generate a complete script file (.sh or .py) ready to execute.
Include logging, time estimates, and error handling.
```
Step 2: Test with a real project. Open Claude Code in your project directory and ask: “Generate a batch render script for 5 exterior views and 3 interior views using Blender Cycles at review quality.”
Claude Code reads the skill definition, asks for view names if you have not provided them, and generates a complete Python script with all the configuration discussed earlier in this post.
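One piece such a generated script almost always contains is a mapping from the skill's quality tiers to concrete Cycles settings. A minimal sketch of that mapping, mirroring the presets in the skill file above (the dictionary and function names are illustrative):

```python
# Quality tiers from the skill definition mapped to Cycles settings.
QUALITY_PRESETS = {
    "draft":  {"samples": 64,   "denoiser": "OPTIX"},
    "review": {"samples": 256,  "denoiser": "OPENIMAGEDENOISE"},
    "final":  {"samples": 1024, "denoiser": "OPENIMAGEDENOISE"},
}

def cycles_settings(tier):
    """Return a copy of the Cycles settings for a quality tier."""
    try:
        return dict(QUALITY_PRESETS[tier])
    except KeyError:
        raise ValueError(
            f"Unknown quality tier: {tier!r} "
            f"(expected one of {sorted(QUALITY_PRESETS)})"
        )
```

Keeping the presets in one dictionary means the skill can regenerate draft, review, and final scripts from the same template, which is exactly the consistency problem the automation is meant to solve.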
Step 3: Iterate and expand. Once the basic batch render skill works, add related skills one at a time. The post-processing pipeline is a natural second skill, followed by the QA checklist. Each skill builds on the previous one, and within a week you will have a connected pipeline that takes you from scene setup to client deliverables with minimal manual intervention.
The most important principle is to automate what you repeat, not what you do once. If you configure V-Ray materials from scratch for every project, build the material library skill first. If post-processing is your biggest time sink, start there. The goal is not to automate everything at once. It is to eliminate the specific bottlenecks in your rendering workflow, one skill at a time, so you can spend more time on the design decisions that actually require your expertise.