# Grasshopper and Computational Design Automation with Claude Code Skills
Build Claude Code skills for Grasshopper workflows - Python scripting, definition docs, parametric validation, and data tree management.
## Why Computational Design Workflows Need Claude Code Skills
Grasshopper definitions grow fast. A parametric facade study that starts as a dozen components on a Monday can balloon into a 200-node graph by Friday, with unlabelled clusters, mystery number sliders, and data trees that nobody fully understands. Computational designers spend a surprising amount of time not on design itself but on the overhead that surrounds it - documenting definitions, validating outputs, preparing data for downstream tools, and tracking which parameter combinations actually produced the geometry sitting on the partner’s desk.
Claude Code skills address this overhead directly. A skill is a reusable markdown instruction file that lives in your project’s `.claude/skills/` directory. When invoked, Claude Code reads the instructions and executes a multi-step workflow - reading files, running scripts, writing outputs - all within the context of your project. For computational design, that means you can build skills that generate GHPython boilerplate, auto-document complex definitions, validate data tree structures before they hit a Revit pipeline, and log every design iteration so nothing gets lost.
The payoff is consistency. When every team member invokes the same `/validate-data-trees` skill, the validation rules are identical. When a new designer joins the project, they run `/document-definition` and get an up-to-date map of every cluster, input, and output in the current Grasshopper file - without asking the person who built it.
This guide walks through six complete Claude Code skills purpose-built for Grasshopper and computational design workflows, plus advanced patterns for chaining them together. Every skill is a ready-to-use markdown file you can drop into your project today.
## Setting Up Grasshopper Projects for Claude Code
Before building skills, you need a project structure that Claude Code can read. Grasshopper `.gh` files are binary, so Claude Code cannot parse them directly. The workaround is to establish conventions for exporting metadata alongside your definitions.
Recommended folder structure:
```
project-root/
├── .claude/
│   └── skills/
│       ├── ghpython-generator.md
│       ├── definition-documenter.md
│       ├── data-tree-validator.md
│       ├── constraint-checker.md
│       ├── revit-data-prep.md
│       └── iteration-logger.md
├── gh-definitions/
│   ├── facade-panelization.gh
│   ├── facade-panelization.json   # Exported metadata
│   └── facade-panelization.log    # Iteration log
├── gh-scripts/
│   ├── attractor_field.py
│   ├── mesh_analysis.py
│   └── data_tree_utils.py
├── gh-docs/
│   └── facade-panelization.md     # Generated documentation
├── gh-validation/
│   └── tree-schemas/
│       └── facade-output.json     # Expected data tree structures
└── CLAUDE.md
```
**Exporting definition metadata:** Use a GHPython component inside your definition that writes a JSON summary whenever the definition solves. This summary should include all named parameters, their current values, input/output component names, and cluster descriptions. A minimal export script looks like this:
```python
import json
import os
import datetime

metadata = {
    "definition_name": "facade-panelization",
    "exported_at": str(datetime.datetime.now()),
    "parameters": {
        "panel_width": {"value": panel_width, "type": "float", "range": [0.3, 2.0]},
        "panel_height": {"value": panel_height, "type": "float", "range": [0.3, 3.0]},
        "attractor_strength": {"value": attractor_strength, "type": "float", "range": [0.0, 1.0]},
        "seed": {"value": seed, "type": "int", "range": [0, 9999]}
    },
    "clusters": ["PanelGenerator", "AttractorField", "MeshSubdivision"],
    "outputs": ["panel_geometry", "panel_areas", "color_map"]
}

output_path = os.path.join(os.path.dirname(ghdoc.Path), "facade-panelization.json")
with open(output_path, "w") as f:
    json.dump(metadata, f, indent=2)
```
With this convention in place, every skill in this guide can read the JSON metadata and operate on your definitions without needing to parse the .gh binary.
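To sanity-check the convention, note that the skill side needs nothing beyond the standard library to consume the export. A minimal sketch of a reader - the function and summary fields are illustrative, not part of any skill below:

```python
def summarize_metadata(meta):
    """Summarize an exported definition metadata dict and flag gaps.

    Expects the structure written by the export component above:
    'parameters', 'clusters', and 'outputs' keys.
    """
    params = meta.get("parameters", {})
    warnings = ["parameter '%s' has no range" % name
                for name, spec in sorted(params.items())
                if "range" not in spec]
    summary = {
        "definition": meta.get("definition_name", "unknown"),
        "parameter_count": len(params),
        "cluster_count": len(meta.get("clusters", [])),
        "output_count": len(meta.get("outputs", [])),
    }
    return summary, warnings

# Abbreviated sample of the exported JSON shown above
meta = {
    "definition_name": "facade-panelization",
    "parameters": {
        "panel_width": {"value": 0.8, "type": "float", "range": [0.3, 2.0]},
        "seed": {"value": 42, "type": "int"},
    },
    "clusters": ["PanelGenerator"],
    "outputs": ["panel_geometry"],
}
summary, warnings = summarize_metadata(meta)
# 'seed' has no range, so it is flagged as a warning
```

Because the export is plain JSON, the same few lines work for every skill in this guide.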
## Skill: GHPython Script Generator
This skill generates complete GHPython component scripts for common computational design tasks. Instead of writing boilerplate from scratch every time you need an attractor field, mesh analysis routine, or data tree manipulation, you describe what you need and the skill produces a ready-to-paste script.
````markdown
# GHPython Script Generator
Generate production-ready GHPython scripts for Grasshopper components.
## Trigger
Use when the user says "generate ghpython", "write a grasshopper script",
"create a python component", or describes a computational geometry task.
## Instructions
1. Read the user's description of what the GHPython component should do.
2. Determine the script category:
   - geometry_generation: Creating points, curves, surfaces, meshes
   - data_processing: Manipulating data trees, lists, sorting
   - attractor_systems: Point/curve/surface attractors with falloff
   - mesh_operations: Subdivision, analysis, vertex coloring
   - file_io: Reading/writing CSV, JSON, or external data
   - interop: Preparing data for export to Revit, Tekla, or IFC
3. Generate a complete GHPython script that includes:
   - A docstring header explaining inputs, outputs, and behavior
   - Type hints in comments for all Grasshopper inputs
   - Proper import statements (rhinoscriptsyntax, Rhino.Geometry, ghpythonlib)
   - Input validation with clear error messages
   - The core logic
   - Output assignment to Grasshopper output parameters
4. Follow these conventions:
   - Use `rhinoscriptsyntax` as `rs` for simple geometry operations
   - Use the `Rhino.Geometry` namespace for performance-critical operations
   - Use `ghpythonlib.treehelpers` for data tree input/output
   - Always include a `try/except` block at the top level
   - Name outputs descriptively: `panel_geo`, `area_values`, not `a`, `b`, `c`
   - Add inline comments for any non-obvious math
5. Save the generated script to `gh-scripts/<descriptive_name>.py`.
6. Print a summary: script name, inputs required, outputs produced,
   and which Grasshopper component type to use (GhPython Script).
## Output Format
A `.py` file in `gh-scripts/` with full docstring and inline documentation.
## Example Output
For a request like "generate an attractor field script":

```python
"""
Attractor Field Generator

Inputs:
    points (list of Point3d): Base points to evaluate
    attractors (list of Point3d): Attractor point locations
    max_dist (float): Maximum influence distance
    falloff (float): Falloff exponent (1.0 = linear, 2.0 = quadratic)
Outputs:
    values (list of float): Normalized influence values per point
    colors (list of Color): Gradient colors for visualization
"""
import Rhino.Geometry as rg
import System.Drawing as sd

try:
    if not points or not attractors:
        raise ValueError("Both points and attractors inputs are required")
    if max_dist is None or max_dist <= 0:
        max_dist = 50.0
    if falloff is None or falloff <= 0:
        falloff = 2.0

    values = []
    for pt in points:
        min_distance = float("inf")
        for att in attractors:
            d = pt.DistanceTo(att)
            if d < min_distance:
                min_distance = d
        normalized = max(0.0, 1.0 - (min_distance / max_dist))
        values.append(pow(normalized, falloff))

    # Generate color gradient from blue (low) to red (high)
    colors = []
    for v in values:
        r = int(v * 255)
        b = int((1.0 - v) * 255)
        colors.append(sd.Color.FromArgb(r, 50, b))
except Exception as e:
    print("Error: {}".format(str(e)))
    values = []
    colors = []
```
````
This skill eliminates the blank-page problem that slows down GHPython development. You describe intent, and the skill handles the boilerplate, imports, error handling, and documentation.
## Skill: Definition Documentation Generator
Large Grasshopper definitions become illegible quickly. This skill reads the exported JSON metadata from your definition and produces a structured markdown document that describes every parameter, cluster, input, and output.
```markdown
# Definition Documentation Generator
Auto-generate documentation for Grasshopper definitions from exported metadata.
## Trigger
Use when the user says "document this definition", "generate gh docs",
"write definition documentation", or "describe this grasshopper file".
## Instructions
1. Ask the user which definition to document, or detect from context.
2. Read the metadata JSON file from `gh-definitions/<name>.json`.
3. If no JSON exists, report the error and suggest running the metadata
export component inside Grasshopper first.
4. Generate a markdown document with these sections:
### Definition Overview
- Name, description, last export timestamp
- Total parameter count, cluster count, output count
### Parameters
A table with columns: Name | Type | Current Value | Valid Range | Description
- For each parameter in the metadata, fill in all columns
- If range is missing, note "unconstrained" and flag it as a warning
### Clusters
For each cluster listed in the metadata:
- Name and purpose (infer from name if no description provided)
- Inputs consumed and outputs produced
### Outputs
For each output:
- Name, data type, expected structure (single item, list, data tree)
- Downstream consumers if known
### Warnings
- Parameters without defined ranges
- Clusters without descriptions
- Outputs with ambiguous names
5. Save the document to `gh-docs/<definition-name>.md`.
6. Print a summary of what was documented and any warnings found.
## Output Format
Markdown file in `gh-docs/` with full parameter tables and cluster descriptions.
```
Running this skill after every major definition change keeps your documentation in sync without any manual effort. It is especially valuable when handing off definitions between team members or revisiting a project after several months.
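The Parameters table in step 4 is easy to mechanize. A sketch of the table generator such a skill might use, with the column set abbreviated and the function name illustrative:

```python
def parameter_table(parameters):
    """Render exported parameters as a markdown table.
    A parameter without a 'range' is flagged as unconstrained,
    matching the Warnings section of the skill above."""
    lines = [
        "| Name | Type | Current Value | Valid Range |",
        "| --- | --- | --- | --- |",
    ]
    for name in sorted(parameters):
        spec = parameters[name]
        rng = spec.get("range")
        rng_text = ("%s to %s" % (rng[0], rng[1])) if rng else "unconstrained (warning)"
        lines.append("| %s | %s | %s | %s |" % (
            name, spec.get("type", "?"), spec.get("value", "?"), rng_text))
    return "\n".join(lines)

table = parameter_table({
    "panel_width": {"value": 0.8, "type": "float", "range": [0.3, 2.0]},
    "attractor_strength": {"value": 0.65, "type": "float"},
})
```

Sorting by name keeps the generated docs stable across runs, which makes diffs between documentation versions meaningful.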
## Skill: Data Tree Structure Validator
Data trees are the single most confusing aspect of Grasshopper for new and experienced users alike. This skill validates that the data tree outputs from your definition match an expected structure, catching mismatches before they propagate downstream into panels, surfaces, or Revit geometry.
````markdown
# Data Tree Structure Validator
Validate Grasshopper data tree outputs against expected structure schemas.
## Trigger
Use when the user says "validate data trees", "check tree structure",
"verify grasshopper output", or "tree schema check".
## Instructions
1. Read the target schema from `gh-validation/tree-schemas/<name>.json`.
   Schema format:
   ```json
   {
     "output_name": "panel_geometry",
     "expected_structure": {
       "depth": 2,
       "branch_pattern": "{i;j}",
       "branch_count_range": [10, 500],
       "items_per_branch_range": [1, 50],
       "item_type": "Brep",
       "allow_null_items": false
     }
   }
   ```
2. Read the actual output data from the definition's metadata JSON. Look for
   the `tree_outputs` section which the export component writes:
   ```json
   {
     "tree_outputs": {
       "panel_geometry": {
         "depth": 2,
         "branch_count": 120,
         "items_per_branch": [4, 4, 4, 3, 4],
         "item_type": "Brep",
         "null_count": 0
       }
     }
   }
   ```
3. Compare actual vs expected:
   - Depth matches expected depth
   - Branch count falls within `branch_count_range`
   - Items per branch fall within `items_per_branch_range`
   - Item type matches expected type
   - Null items present only if `allow_null_items` is true
4. Generate a validation report:
   - PASS items in green (prefix with checkmark)
   - FAIL items in red (prefix with X) with explanation
   - WARNING items for near-boundary values (within 10% of range limits)
5. Save the report to `gh-validation/<definition-name>-report.md`.
6. If any FAIL items exist, print them prominently so the user sees the
   problems immediately.
## Schema Creation Helper
If no schema exists yet, offer to create one from the current actual output.
Read the `tree_outputs` from metadata and generate a schema with reasonable
ranges (current value +/- 20% for counts, same type and depth).
Save to `gh-validation/tree-schemas/<output-name>.json`.
````
This skill is particularly powerful in team environments where one designer builds the definition and another consumes its outputs. The schema file becomes a contract - if the upstream definition changes its tree structure, the validator catches it immediately.
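The comparison at the heart of the validator reduces to a handful of range and type checks. A sketch over the schema and `tree_outputs` shapes shown above - the function name is illustrative, and the near-boundary warning interprets "within 10% of range limits" as distance to the nearest limit under 10% of the range span:

```python
def validate_tree(actual, expected):
    """Compare an exported tree summary against a schema.
    Returns (status, check_name, detail) tuples with status PASS/FAIL/WARNING."""
    findings = []

    def check(name, ok, detail):
        findings.append(("PASS" if ok else "FAIL", name, detail))

    check("depth", actual["depth"] == expected["depth"],
          "expected %d, got %d" % (expected["depth"], actual["depth"]))

    lo, hi = expected["branch_count_range"]
    count = actual["branch_count"]
    check("branch_count", lo <= count <= hi,
          "expected %d-%d, got %d" % (lo, hi, count))
    # Warn when the count sits close to either end of the allowed range
    if lo <= count <= hi and min(count - lo, hi - count) < 0.1 * (hi - lo):
        findings.append(("WARNING", "branch_count", "close to range limit"))

    check("item_type", actual["item_type"] == expected["item_type"],
          "expected %s, got %s" % (expected["item_type"], actual["item_type"]))

    if not expected.get("allow_null_items", False):
        check("nulls", actual.get("null_count", 0) == 0,
              "%d null item(s) found" % actual.get("null_count", 0))
    return findings

schema = {"depth": 2, "branch_count_range": [10, 500],
          "item_type": "Brep", "allow_null_items": False}
actual = {"depth": 2, "branch_count": 120, "item_type": "Brep", "null_count": 0}
results = validate_tree(actual, schema)
```

The findings list maps directly onto the PASS/FAIL/WARNING report sections the skill writes out.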
## Skill: Parametric Constraint Checker
Parametric models break silently. A panel width of zero, a negative offset distance, or a subdivision count that creates millions of faces - these errors produce geometry that looks wrong but does not throw an error. This skill checks all parameters against defined constraints and flags violations.
````markdown
# Parametric Constraint Checker
Verify that all parameters in a Grasshopper definition fall within
valid ranges and satisfy inter-parameter relationships.
## Trigger
Use when the user says "check constraints", "validate parameters",
"parameter range check", or "verify definition inputs".
## Instructions
1. Read the definition metadata from `gh-definitions/<name>.json`.
2. For each parameter, check:
   - Value falls within the defined range (min/max)
   - Value is of the correct type (int, float, bool, string)
   - Value is not null or undefined unless explicitly optional
3. Check inter-parameter relationships. Read the `constraints` section
   of the metadata if present:
   ```json
   {
     "constraints": [
       {
         "rule": "panel_width * panel_count <= facade_width",
         "description": "Panels must fit within facade width",
         "severity": "error"
       },
       {
         "rule": "subdivision_u * subdivision_v <= 10000",
         "description": "Total subdivisions must not exceed 10000 for performance",
         "severity": "warning"
       },
       {
         "rule": "min_panel_size <= panel_width",
         "description": "Panel width must be at least the minimum fabrication size",
         "severity": "error"
       }
     ]
   }
   ```
4. Evaluate each constraint by substituting current parameter values.
   Use Python-style expression evaluation.
5. Generate a report with three sections:
   - Range Violations: Parameters outside their min/max
   - Constraint Violations: Failed inter-parameter rules
   - Performance Warnings: Combinations that may cause slow solves
     (e.g., high subdivision counts, large point clouds)
6. Save to `gh-validation/<definition-name>-constraints.md`.
7. If any errors (not just warnings) are found, print a bold summary line:
   "CONSTRAINT CHECK FAILED - X errors found"
## Constraint File Bootstrap
If no constraints section exists in the metadata, analyze the parameters and
suggest reasonable constraints based on common patterns:
- Width/height values: minimum 0.01, maximum 1000.0
- Count values: minimum 1, maximum 10000
- Ratio values: minimum 0.0, maximum 1.0
- Cross-parameter products for subdivision: cap at 50000 total cells
Save suggested constraints back into the metadata JSON.
````
Running this skill before a long batch solve or before sending geometry to fabrication catches the kind of silent failures that waste hours of compute time or produce unusable shop drawings.
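The "Python-style expression evaluation" step can be sketched with `eval` over the current parameter values, with builtins stripped so rules can only reference parameter names. This is a minimal sketch, not a hardened evaluator - a production skill might prefer an `ast`-based expression parser. Function name and return shape are illustrative:

```python
def check_constraints(parameters, constraints):
    """Evaluate each rule against current parameter values.
    Returns (rule, severity, description) for every violated or broken rule."""
    values = {name: spec["value"] for name, spec in parameters.items()}
    violations = []
    for c in constraints:
        try:
            # Empty builtins: the rule can only see the parameter names
            ok = bool(eval(c["rule"], {"__builtins__": {}}, values))
        except Exception as exc:
            violations.append((c["rule"], "error", "could not evaluate: %s" % exc))
            continue
        if not ok:
            violations.append((c["rule"], c.get("severity", "error"),
                               c.get("description", "")))
    return violations

params = {
    "panel_width": {"value": 0.8},
    "panel_count": {"value": 12},
    "facade_width": {"value": 9.0},
}
rules = [{"rule": "panel_width * panel_count <= facade_width",
          "description": "Panels must fit within facade width",
          "severity": "error"}]
# 0.8 * 12 = 9.6 > 9.0, so this rule is reported as violated
violations = check_constraints(params, rules)
```

Treating a rule that fails to evaluate as an error (rather than skipping it) matters: a typo in a constraint should surface loudly, not silently pass.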
## Skill: Grasshopper-to-Revit Data Prep
The bridge between Grasshopper and Revit - whether through Rhino.Inside.Revit or file-based exchange - is where many computational design workflows fail. Data types do not match, units are wrong, or element categories are missing. This skill validates your Grasshopper outputs against what Revit expects before you attempt the transfer.
````markdown
# Grasshopper-to-Revit Data Prep
Validate and prepare Grasshopper output data for Rhino.Inside.Revit
or file-based Revit interop workflows.
## Trigger
Use when the user says "prep for revit", "validate revit data",
"check rhino inside output", or "grasshopper to revit".
## Instructions
1. Read the definition metadata from `gh-definitions/<name>.json`.
2. Check the `revit_mapping` section if present:
   ```json
   {
     "revit_mapping": {
       "panel_geometry": {
         "revit_category": "Walls",
         "revit_family": "Curtain Wall Panel",
         "required_parameters": ["Mark", "Area", "Panel_Type"],
         "unit_system": "millimeters"
       },
       "structural_grid": {
         "revit_category": "Structural Framing",
         "revit_family": "Steel Beam",
         "required_parameters": ["Mark", "Length", "Section_Size"],
         "unit_system": "millimeters"
       }
     }
   }
   ```
3. For each mapped output, validate:
   - Geometry type is compatible with the Revit category
     (e.g., Breps for Walls, Curves for Structural Framing)
   - All required Revit parameters have corresponding data in the output
   - Unit system matches (flag if Grasshopper is in meters but the Revit
     mapping expects millimeters)
   - No null geometry items that would create failed Revit elements
   - Data tree structure matches the expected branch-per-element pattern
4. Generate a preparation checklist:
   - Unit conversion required: yes/no (with conversion factor)
   - Missing parameters: list with suggested default values
   - Geometry compatibility: pass/fail per output
   - Estimated Revit element count from branch/item counts
   - Potential issues: coincident geometry, zero-area panels, etc.
5. If unit conversion is needed, generate a GHPython conversion script and
   save it to `gh-scripts/unit_convert_<definition>.py`.
6. Save the full report to `gh-docs/<definition>-revit-prep.md`.
## Common Revit Mapping Issues
Flag these automatically:
- Rhino surfaces that are not trimmed (Revit needs closed Breps)
- Open curves mapped to wall-based categories (need closed curves)
- Mixed geometry types in a single output branch
- Panel areas below 0.01 sqm (likely degenerate geometry)
- More than 5000 elements in a single transfer (performance warning)
````
This skill acts as a pre-flight checklist for the Grasshopper-to-Revit handoff. Running it before every transfer catches the data mismatches that would otherwise surface as cryptic Revit errors or missing elements.
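Several of the per-output checks reduce to simple list scans over the mapping and tree metadata. A sketch using the thresholds named in the skill above (0.01 sqm, 5000 elements); the function name and argument shapes are illustrative:

```python
def revit_prep_issues(mapping, available_params, element_count, panel_areas):
    """Collect pre-flight issues for one mapped output before a Revit transfer."""
    issues = []
    # Required Revit parameters with no corresponding Grasshopper data
    missing = [p for p in mapping["required_parameters"]
               if p not in available_params]
    if missing:
        issues.append("missing parameters: %s" % ", ".join(missing))
    # Panels below the 0.01 sqm threshold are likely degenerate geometry
    degenerate = sum(1 for a in panel_areas if a < 0.01)
    if degenerate:
        issues.append("%d panel(s) below 0.01 sqm (likely degenerate)" % degenerate)
    # Large transfers are flagged as a performance warning
    if element_count > 5000:
        issues.append("%d elements in one transfer (performance warning)" % element_count)
    return issues

mapping = {"required_parameters": ["Mark", "Area", "Panel_Type"]}
issues = revit_prep_issues(mapping, ["Mark", "Area"], 324, [0.96, 0.005, 1.1])
```

An empty return value maps to the "READY" status in the checklist; any entry means the transfer needs attention first.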
## Skill: Design Iteration Logger
Parametric design involves hundreds of iterations. Without a log, you cannot trace back to the parameter combination that produced the geometry your client approved last Tuesday. This skill captures a snapshot of every parameter state, output metric, and timestamp into a structured log.
````markdown
# Design Iteration Logger
Track parametric design iterations with full parameter snapshots
and output metrics.
## Trigger
Use when the user says "log this iteration", "save design state",
"record parameters", "snapshot this design", or "track iteration".
## Instructions
1. Read the current definition metadata from `gh-definitions/<name>.json`.
2. Read the existing iteration log from `gh-definitions/<name>.log`
   (JSON lines format, one JSON object per line). Create the file
   if it does not exist.
3. Construct a new log entry:
   ```json
   {
     "iteration": 47,
     "timestamp": "2026-04-05T14:23:00",
     "definition": "facade-panelization",
     "parameters": {
       "panel_width": 0.8,
       "panel_height": 1.2,
       "attractor_strength": 0.65,
       "seed": 42
     },
     "metrics": {
       "total_panels": 324,
       "avg_panel_area": 0.96,
       "min_panel_area": 0.41,
       "max_panel_area": 1.44,
       "total_surface_area": 311.04,
       "solve_time_seconds": 2.3
     },
     "notes": "",
     "tags": []
   }
   ```
4. Ask the user if they want to add notes or tags to this iteration.
   Common tags: "client_review", "structural_check", "fabrication_test",
   "baseline", "final_candidate".
5. Append the entry to the log file.
6. Print a summary:
   - Iteration number
   - Key parameter changes from previous iteration (diff)
   - Key metric changes from previous iteration
   - Total iterations logged for this definition
## Log Analysis Commands
If the user asks "show iteration history" or "compare iterations":
- Read the full log file.
- Present a table of iterations with columns:
  Iteration | Timestamp | Key Params Changed | Panel Count | Avg Area
- If asked to compare two specific iterations, show a side-by-side diff
  of all parameters and metrics.
- If asked for "best iteration by X", sort the log by the requested metric
  and return the top entry.
````
The iteration log becomes invaluable during design reviews. Instead of relying on memory or screenshot folders, you have a structured, searchable record of every parameter state and the geometric outcomes it produced. When a client says "go back to that version from two weeks ago," you can find it in seconds.
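The append-and-diff logic behind the logger fits in a few lines. A sketch over an in-memory JSON-lines log - the timestamp and definition fields are omitted for brevity, and the names are illustrative:

```python
import json

def log_iteration(log_lines, parameters, metrics, tags=None):
    """Append an entry to a JSON-lines log (one JSON object per line).
    Returns (entry, parameter diff vs the previous iteration)."""
    prev = json.loads(log_lines[-1]) if log_lines else None
    entry = {
        "iteration": prev["iteration"] + 1 if prev else 1,
        "parameters": parameters,
        "metrics": metrics,
        "tags": tags or [],
    }
    # Diff against the previous snapshot so the summary can show what changed
    diff = {}
    if prev:
        for name, value in parameters.items():
            old = prev["parameters"].get(name)
            if old != value:
                diff[name] = {"from": old, "to": value}
    log_lines.append(json.dumps(entry))
    return entry, diff

log = []
log_iteration(log, {"panel_width": 0.8, "seed": 42}, {"total_panels": 324})
entry, diff = log_iteration(log, {"panel_width": 0.9, "seed": 42},
                            {"total_panels": 301}, tags=["client_review"])
```

The JSON-lines format is deliberate: appending never rewrites earlier entries, so a crashed solve cannot corrupt the history, and the file diffs cleanly in version control.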
## Advanced Patterns: Chaining Computational Design Skills
Individual skills are useful. Chained skills are transformative. Claude Code allows you to invoke multiple skills in sequence within a single conversation, building workflows that mirror your actual design process.
**Example chain: Full design validation before client review**
```
User: Run the pre-review checklist for facade-panelization
```
Claude Code executes, in order:
1. `/check-constraints` - Verify all parameters are within bounds
2. `/validate-data-trees` - Confirm output tree structures are correct
3. `/prep-for-revit` - Validate Revit transfer readiness
4. `/log-iteration` - Snapshot current state with a "client_review" tag
5. `/document-definition` - Generate updated documentation
You can create a **meta-skill** that orchestrates this chain:
```markdown
# Pre-Review Checklist
Run all validation and documentation skills before a client review.
## Instructions
1. Run the Parametric Constraint Checker on the active definition.
If any errors are found, STOP and report them. Do not continue
until constraints pass.
2. Run the Data Tree Structure Validator on all outputs.
Report any FAIL items but continue to step 3.
3. Run the Grasshopper-to-Revit Data Prep if the definition has
a revit_mapping section. Skip if no Revit mapping exists.
4. Run the Design Iteration Logger with the tag "client_review".
5. Run the Definition Documentation Generator to produce fresh docs.
6. Generate a final summary report that combines all results:
- Constraint status: PASS / FAIL (with count)
- Data tree status: PASS / FAIL (with count)
- Revit prep status: READY / NOT READY / N/A
- Iteration logged: #N with timestamp
- Documentation updated: yes/no
   Save to `gh-docs/<definition>-review-checklist.md`.
```
This chain ensures nothing gets missed before a presentation. Every parameter is validated, every output is checked, the current state is logged, and the documentation is current - all from a single command.
## Tips for Computational Design Skill Development
Building effective skills for Grasshopper workflows requires understanding both the tool and the domain. Here are patterns that work well in practice.
**Start with your pain points.** Track what you do repeatedly over a week. If you find yourself manually checking panel counts, copying parameter values into spreadsheets, or explaining your definition structure to colleagues, those are skill candidates.
**Keep metadata exports lightweight.** The JSON export component in your Grasshopper definition should write only what skills need - parameter names, values, ranges, output summaries, and tree structure metadata. Do not try to serialize actual geometry. Skills work with metadata, not mesh vertices.
**Version your schemas.** When you update a data tree schema in `gh-validation/tree-schemas/`, add a `version` field. Skills should check the schema version and warn if the definition was exported with an older schema format.
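As a sketch of that version check - the field name `version` and the cutoff value are illustrative, not a convention from any skill above:

```python
SUPPORTED_SCHEMA_VERSION = 2  # hypothetical current version for this project

def schema_version_warning(schema):
    """Warn when a tree schema predates the version the skills expect.
    A schema without a 'version' field is treated as version 1."""
    version = schema.get("version", 1)
    if version < SUPPORTED_SCHEMA_VERSION:
        return ("schema version %d is older than supported version %d; "
                "re-export the definition" % (version, SUPPORTED_SCHEMA_VERSION))
    return None
```

Treating a missing field as version 1 means old schema files keep working while still triggering the warning.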
**Use tags consistently.** Define a standard set of iteration tags for your team and document them in your project’s `CLAUDE.md`. Common tags include: `baseline`, `client_review`, `structural_ok`, `fabrication_ready`, `archive`, and `rejected`.
**Test skills on small definitions first.** Before running a validation skill on a 500-component definition, test it on a simple 10-component definition with known good and known bad outputs. This helps you tune the skill’s error messages and thresholds.
**Write skills for your GHPython library.** If your team maintains a library of shared GHPython scripts, create a skill that indexes the library, searches by function name or description, and inserts the correct import statements. This saves significant time over manual file browsing.
## Common Mistakes and How to Avoid Them
**Mistake 1: Trying to parse .gh files directly.** Grasshopper `.gh` files are binary archives (the `.ghx` variant stores the same data as XML). Do not write skills that attempt to read them. Instead, use the JSON metadata export approach described above. The metadata is plain text, version-controllable, and easy for Claude Code to process.
**Mistake 2: Hardcoding parameter ranges in skills.** Ranges belong in the definition metadata, not in the skill instructions. If you hardcode "panel_width must be between 0.3 and 2.0" in the skill, you have to update the skill every time the constraint changes. Instead, read ranges from the metadata and let the definition be the single source of truth.
**Mistake 3: Skipping the iteration log before making changes.** It is tempting to adjust parameters and solve immediately. But without logging the previous state, you lose the ability to compare or revert. Make logging a habit - run `/log-iteration` before and after significant parameter changes.
**Mistake 4: Ignoring unit mismatches.** Rhino defaults to the document unit (often meters or millimeters depending on the template). Revit uses internal units (feet) for its API but displays in project units. Skills that prepare data for Revit must always check and convert units. Failing to do this produces geometry that is either microscopic or enormous when it arrives in Revit.
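To make the conversion concrete: Revit's API works in decimal feet, so a millimeter- or meter-based Grasshopper document needs a fixed scale factor on every length. A minimal sketch (the constant is exact by the definition of the international foot):

```python
MM_PER_FOOT = 304.8  # 1 foot = 304.8 mm exactly

def mm_to_revit_internal(length_mm):
    """Convert a millimeter length to Revit internal units (decimal feet)."""
    return length_mm / MM_PER_FOOT

def m_to_revit_internal(length_m):
    """Convert a meter length to Revit internal units (decimal feet)."""
    return length_m * 1000.0 / MM_PER_FOOT
```

Getting this factor wrong by a power of ten is exactly what produces the microscopic or enormous geometry described above.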
**Mistake 5: Creating overly complex skills.** A skill that tries to validate constraints, check data trees, generate documentation, prep for Revit, and log iterations all in one file is hard to maintain and debug. Keep skills focused on a single responsibility and use meta-skills to chain them when you need a multi-step workflow.
**Mistake 6: Not including your project’s CLAUDE.md context.** Your `CLAUDE.md` file should describe the folder structure, naming conventions, and file formats used in your project. Skills inherit this context automatically when Claude Code runs in the project directory. Without it, skills may look for files in the wrong locations or use incorrect naming patterns.
## Getting Started and Next Steps
Computational design is one of the areas where Claude Code skills deliver the highest return. The workflows are repetitive, the validation rules are well-defined, and the cost of undetected errors is high - a single data tree mismatch can waste hours of Revit processing or produce facade panels that do not fit the building.
To get started, pick one skill from this guide that addresses your most pressing pain point. If you spend a lot of time writing GHPython boilerplate, start with the Script Generator. If your team struggles with definition handoffs, start with the Documentation Generator. If your Grasshopper-to-Revit pipeline breaks frequently, start with the Data Prep skill.
Create the `.claude/skills/` directory in your project, add the skill markdown file, set up the metadata export component in your Grasshopper definition, and run the skill. Refine the instructions based on your first few runs - add your specific parameter names, adjust the validation thresholds, and customize the output formats to match your team’s conventions.
Once one skill is working reliably, add the next. Within a few weeks, you will have a suite of automation tools that handle the tedious parts of computational design while you focus on the design itself.
For structured courses on Grasshopper, parametric design, and computational workflows for architecture, visit the Archgyan Academy course catalog. Our courses cover practical, project-based computational design skills that pair well with the automation techniques described in this guide.