Luma AI for Architects: 3D Capture and NeRF for Existing Buildings
How architects use Luma AI for 3D site capture, building documentation, and NeRF-based visualization of existing conditions.
Documenting existing buildings has always been one of the most tedious parts of architectural practice. You show up with a tape measure, a laser distance meter, maybe a camera, and spend hours sketching floor plans on graph paper while trying to capture every dimension, material, and spatial relationship. Even with modern tools like total stations or terrestrial LiDAR scanners, the process demands specialized equipment, trained operators, and significant post-processing time.
Luma AI changes this equation fundamentally. Using nothing more than a smartphone, you can walk through and around a building, capture video footage, and let neural radiance field (NeRF) technology reconstruct the scene as a navigable 3D environment. The result is not a flat panorama or a collection of photographs. It is a volumetric capture that preserves spatial depth, material appearance, and lighting conditions in a way that traditional photography simply cannot.
This guide covers the practical workflow for using Luma AI on architectural projects, from capture technique on site to exporting usable geometry for your design tools.
What Is NeRF Technology and Why It Matters for Architecture
NeRF stands for Neural Radiance Field, a machine learning technique first published by researchers at UC Berkeley in 2020. The core idea is deceptively simple: given a set of photographs taken from different positions around a scene, a neural network learns to predict what the scene looks like from any viewpoint, including positions where no photograph was taken.
Traditional photogrammetry also reconstructs 3D scenes from photographs, but it works by matching individual pixels across images and triangulating their positions in space. This produces point clouds and meshes, but struggles with reflective surfaces, transparent materials, and fine details like foliage or wire fences. NeRF takes a fundamentally different approach. Instead of matching pixels, it trains a neural network to represent the entire scene as a continuous volumetric function. For any point in 3D space and any viewing direction, the network predicts the color and density at that location.
For architects, this distinction matters in several practical ways. NeRF captures handle complex materials like glass, polished stone, and water far better than photogrammetry. They preserve lighting conditions and atmospheric effects that disappear in traditional 3D scanning. And they produce results from casual phone video rather than requiring carefully planned photo sets with specific overlap ratios.
Luma AI built a commercial platform around NeRF technology, making it accessible through a smartphone app and web interface. Their system handles the heavy computation on cloud servers, so you do not need a workstation with multiple GPUs to process your captures.
How Luma AI Works: From Phone Video to 3D Scene
The Luma AI workflow has three stages: capture, processing, and viewing or export. Each stage is straightforward, but understanding what happens at each step helps you capture better source material.
Capture happens through the Luma AI app on iOS or Android. You record video while walking through or around your subject. The app uses your phone’s built-in sensors - accelerometer, gyroscope, and LiDAR on supported iPhones - to track your position and orientation as you move. Some users also upload pre-recorded video or photo sets through the web interface.
Processing happens on Luma’s cloud servers. Your video is uploaded, frames are extracted, camera positions are estimated, and the NeRF model is trained on your specific scene. This typically takes between 10 and 30 minutes depending on the complexity and length of your capture. You receive a notification when processing completes.
Viewing and export happen through the Luma web viewer or through direct file downloads. You can navigate the reconstructed scene freely in your browser, share interactive links with clients or team members, and export the scene as a mesh file for use in other software.
The entire pipeline requires no 3D modeling skill, no specialized equipment beyond a phone, and no manual processing on your part. The quality of the output depends almost entirely on the quality of your capture technique, which is where architectural field experience becomes a genuine advantage.
Capture Techniques for Architectural Exteriors
Exterior captures are where Luma AI shines brightest. Buildings have distinct geometry, varied materials, and clear spatial boundaries that NeRF models handle well. The following techniques will produce significantly better results than casual video recording.
Walk a complete orbit. Start at one corner of the building and walk slowly around the full perimeter. Maintain a consistent distance of roughly 5 to 10 meters from the facade. Keep the camera pointed at the building throughout, not at the ground or the sky. A complete orbit gives the NeRF model views from every angle, which is critical for accurate reconstruction.
Vary your distance. After completing one orbit at medium range, do a second pass closer to the building, focusing on details like entries, window assemblies, material transitions, and ornamentation. Then do a third pass from further back to capture the building in context with its site. These multiple distances give the model information at different scales.
Move slowly and steadily. NeRF reconstruction depends on extracting sharp frames from your video. Fast movement causes motion blur, which degrades results dramatically. Walk at roughly half your normal pace. Avoid sudden direction changes or camera rotations.
Overlap your coverage. Every part of the building should appear in frames taken from at least three different positions. As a practical rule, if you are walking along a facade, the camera should see at least 30 to 40 percent of what it saw in the previous second of video. Insufficient overlap creates gaps or artifacts in the reconstruction.
Handle corners deliberately. Building corners are the most common place for reconstruction failures. When you reach a corner, slow down further and sweep the camera through a wider arc so both adjacent facades are visible simultaneously in several frames. This gives the model the geometric information it needs to connect the two surfaces.
Capture site context. Include surrounding elements like the ground plane, adjacent buildings, landscaping, and street furniture in your video. These elements help anchor the reconstruction spatially and provide useful context for design work.
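As a back-of-envelope check on the overlap rule above, you can estimate how fast you can walk before successive seconds of video stop overlapping. The sketch below is illustrative only: the 70-degree horizontal field of view and the 35 percent overlap target are assumed values for a typical phone camera, not Luma AI requirements.

```python
import math

def max_walking_speed(distance_m, fov_deg=70.0, overlap=0.35):
    """Upper bound on walking speed (m/s) so that each second of video
    still shares `overlap` of its horizontal coverage with the previous
    second. Assumes the camera faces the facade square-on."""
    # Width of facade visible in one frame at this distance.
    coverage = 2.0 * distance_m * math.tan(math.radians(fov_deg) / 2.0)
    # You can advance at most this far per second.
    return coverage * (1.0 - overlap)

print(f"{max_walking_speed(5.0):.1f} m/s at 5 m from the facade")
```

At typical capture distances the overlap rule is easily satisfied at normal walking pace, which is why motion blur, not overlap, is usually the binding constraint. That is the reason for the half-pace advice above.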
Interior Scanning: Room-by-Room Strategy
Interior captures are more challenging than exteriors because spaces are smaller, lighting is mixed, and there are more occluded areas. A systematic approach prevents the frustration of getting halfway through processing only to discover a room was captured poorly.
Work room by room. Rather than walking through an entire building in one continuous take, capture each room as its own video. Start in the doorway facing into the room, then walk slowly along the perimeter, keeping the camera aimed at the center of the room. At each corner, pause and sweep to show how walls connect. Then move to the center and do a slow rotation to capture the ceiling and floor.
Address doorways and transitions. The connection between rooms is critical for spatial understanding. When moving from one room to the next, stand in the doorway and slowly pan from one room to the other, making sure both spaces are visible in the same frames. This gives the model the geometric link between adjacent captures.
Manage lighting carefully. Mixed lighting - daylight from windows combined with artificial interior lighting - creates color inconsistencies that can confuse the reconstruction. If possible, turn on all interior lights and close blinds to create more uniform conditions. If you must work with strong daylight contrast, make separate captures of the bright window wall and the darker interior, ensuring overlap between them.
Handle tight spaces. Bathrooms, closets, and narrow corridors are difficult because you cannot get far enough from surfaces for good coverage. Use your phone’s wide-angle camera if available. Move very slowly and make sure to capture floor-to-ceiling views from multiple positions along the length of the space.
Document ceiling conditions. In renovation work, ceiling conditions matter as much as walls and floors. Point your camera upward deliberately at several positions in each room. Exposed structure, mechanical runs, and ceiling heights all need to appear in enough frames for the model to reconstruct them.
Export Formats: Mesh, Point Cloud, and Gaussian Splats
Luma AI provides several export options, each suited to different downstream workflows.
OBJ mesh export produces a triangulated mesh with an accompanying texture map. This is the most widely compatible format. You can import OBJ files into virtually any 3D application, including SketchUp, Rhino, Blender, and even Revit through intermediary tools. The mesh quality varies with capture quality, but expect roughly architectural-sketch-level detail rather than survey-grade precision. Mesh exports work well for massing studies, context modeling, and visual reference.
PLY point cloud format provides a colored point cloud rather than a continuous surface. Point clouds are useful when you need to take measurements or when you plan to model over the captured data rather than use it directly. Rhino and CloudCompare handle PLY files natively. For Revit workflows, you can convert PLY to RCP format using Autodesk ReCap.
Gaussian splat format is Luma’s native high-fidelity format. Gaussian splats represent the scene as millions of tiny 3D elements, each with position, color, opacity, and shape properties. The visual quality of splat exports far exceeds mesh exports, preserving material appearances and lighting effects that triangulated meshes lose. However, splat files require specialized viewers and are not directly importable into most design tools yet. They are excellent for client presentations and spatial documentation where visual fidelity matters more than geometric editability.
USDZ export works specifically for Apple devices, allowing you to view the capture in AR on an iPhone or iPad. This is surprisingly useful for site visits where you want to compare the captured condition with current reality, or for showing clients the existing building state in context.
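Before pulling an export into a design tool, it is worth sanity-checking the file. The minimal Python sketch below (standard library only; it ignores normals, UVs, and material references) counts vertices and triangles in an OBJ export and reports its bounding box, which helps catch unit or scale mismatches early. The file path is hypothetical.

```python
def obj_stats(path):
    """Count vertices and triangles in an OBJ file and return its
    bounding-box dimensions. Minimal parser: ignores normals, UVs,
    and material references."""
    verts, tris = [], 0
    with open(path) as f:
        for line in f:
            parts = line.split()
            if not parts:
                continue
            if parts[0] == "v":
                verts.append(tuple(float(c) for c in parts[1:4]))
            elif parts[0] == "f":
                tris += len(parts) - 3  # an n-gon triangulates into n-2 triangles
    xs, ys, zs = zip(*verts)
    return len(verts), tris, (max(xs) - min(xs), max(ys) - min(ys), max(zs) - min(zs))

# n_verts, n_tris, (dx, dy, dz) = obj_stats("capture.obj")  # hypothetical file
```

If the bounding box of a two-storey house comes back as a fraction of a unit in each direction, the export is in an arbitrary coordinate scale and needs rescaling before import.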
Integration with Revit, SketchUp, and Rhino
Getting Luma AI captures into your primary design tool requires understanding what each tool can accept and what level of quality to expect.
Revit Workflow
Revit has historically not imported OBJ or PLY files directly (recent releases have begun adding mesh import, but support varies by version), so the reliable workflow uses an intermediate step: convert the Luma mesh export into a format your version of Revit can link or import.
- Export the OBJ mesh from Luma AI
- Open the OBJ in Autodesk ReCap or Blender
- For ReCap: convert to RCP/RCS point cloud format, then link into Revit as a point cloud
- For Blender: clean the mesh, export it in a format your Revit version can import (Revit does not accept FBX directly), and place it as a generic model
The point cloud workflow is generally more useful for Revit projects because you can snap to points while modeling existing conditions. The mesh import works when you need a visual reference in views but do not need to interact with the geometry directly.
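If you want a point cloud but prefer to skip a tool in the chain, the vertex positions in the OBJ export can be written straight out as an ASCII PLY file, which ReCap and CloudCompare both read. A minimal standard-library sketch follows; it writes positions only, on the assumption (verify against your own files) that Luma's OBJ exports carry color in the texture map rather than per vertex. File names are hypothetical.

```python
def obj_to_ply_points(obj_path, ply_path):
    """Write the vertices of an OBJ mesh out as an ASCII PLY point
    cloud (positions only, no color or normals)."""
    points = []
    with open(obj_path) as f:
        for line in f:
            if line.startswith("v "):          # vertex position lines only
                points.append(line.split()[1:4])
    with open(ply_path, "w") as out:
        out.write("ply\nformat ascii 1.0\n")
        out.write(f"element vertex {len(points)}\n")
        out.write("property float x\nproperty float y\nproperty float z\n")
        out.write("end_header\n")
        for x, y, z in points:
            out.write(f"{x} {y} {z}\n")

# obj_to_ply_points("capture.obj", "capture.ply")  # hypothetical files
```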
SketchUp Workflow
SketchUp handles OBJ imports directly, making it the simplest integration path.
- Export the OBJ mesh from Luma AI
- In SketchUp, go to File and then Import, select OBJ format
- The mesh imports as a group that you can position and scale in your model
For large captures, the mesh can be heavy. Consider using a mesh reduction tool like MeshLab to simplify the geometry before importing. Reducing from 500,000 triangles to 50,000 will make SketchUp far more responsive while preserving the overall form.
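MeshLab's quadric decimation is the better tool for this job, but the idea behind mesh reduction can be made concrete in a few lines of Python: snap vertices to a coarse grid, merge the ones that land in the same cell, and drop the triangles that collapse. This vertex-clustering approach is much cruder than quadric decimation and is shown only as an illustrative sketch.

```python
def cluster_decimate(vertices, faces, cell=0.1):
    """Simplify a triangle mesh by snapping vertices to a grid of size
    `cell` (in model units) and merging those that share a grid cell.
    Triangles left with two or more identical corners are dropped."""
    key_to_new = {}   # grid cell -> new vertex index
    remap = []        # old vertex index -> new vertex index
    new_vertices = []
    for x, y, z in vertices:
        key = (round(x / cell), round(y / cell), round(z / cell))
        if key not in key_to_new:
            key_to_new[key] = len(new_vertices)
            new_vertices.append((key[0] * cell, key[1] * cell, key[2] * cell))
        remap.append(key_to_new[key])
    new_faces = []
    for a, b, c in faces:
        ra, rb, rc = remap[a], remap[b], remap[c]
        if ra != rb and rb != rc and ra != rc:   # keep non-degenerate triangles
            new_faces.append((ra, rb, rc))
    return new_vertices, new_faces
```

Larger `cell` values give coarser, lighter meshes; the trade-off is that detail smaller than the cell size disappears.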
Rhino Workflow
Rhino is the most flexible option for working with Luma exports. It handles OBJ meshes, PLY point clouds, and even custom import scripts for splat data.
- OBJ import: File, Import, select the OBJ file. The textured mesh appears directly in Rhino’s viewport.
- PLY import: Use the Import command or drag and drop the PLY file. You get a colored point cloud that you can snap to while modeling.
- For Grasshopper users: point cloud data can feed directly into algorithmic workflows for surface fitting, analysis, or parametric modeling over existing conditions.
Comparison with LiDAR Scanning and Traditional Photogrammetry
Understanding where Luma AI fits among existing capture technologies helps you choose the right tool for each project.
Terrestrial LiDAR (FARO, Leica BLK360) produces survey-grade point clouds with millimeter accuracy. Registration between scan positions is precise, and the resulting data is suitable for construction documents and fabrication. However, LiDAR equipment costs between $15,000 and $100,000, requires training to operate effectively, and scan processing can take hours or days. LiDAR remains the right choice when dimensional accuracy matters for code compliance, structural assessment, or tight-tolerance renovation work.
Photogrammetry (Agisoft Metashape, RealityCapture) reconstructs geometry from overlapping photographs. Quality ranges from adequate to excellent depending on photo count, overlap, and processing settings. Equipment cost is moderate since any good camera works, but processing demands significant computing resources and expertise. Photogrammetry excels at textured surface reconstruction and works well at building scale.
Luma AI / NeRF offers the lowest barrier to entry. A phone, an internet connection, and 15 minutes of walking produce a usable 3D capture. Accuracy is lower than LiDAR or carefully executed photogrammetry, typically within 2 to 5 percent dimensional error for building-scale subjects. But the speed and accessibility make it practical for situations where traditional methods are not justified.
The practical guidance is simple: use LiDAR when you need measurements you can stamp drawings with, use photogrammetry when you need detailed textured models for large or complex subjects, and use Luma AI for quick site documentation, context captures, design reference, and any situation where getting a 3D capture at all is better than getting none because the budget did not allow for specialized equipment.
Use Cases in Architectural Practice
Renovation and Adaptive Reuse
Before renovating an existing building, you need thorough documentation of current conditions. Luma AI lets a single architect walk through the building during an initial site visit and come back to the office with a navigable 3D record. This capture becomes a reference throughout design development, reducing return visits and helping team members who never visited the site understand the existing spatial conditions.
Historic Preservation
Historic buildings often have irregular geometry, ornamental details, and material conditions that are difficult to capture in conventional measured drawings. NeRF captures preserve visual richness that point clouds and meshes lose. The color, texture, and lighting of historic interiors appear in the capture exactly as they exist, making it easier to document character-defining features and communicate preservation priorities.
Site Documentation and Context Modeling
For new construction projects, capturing the surrounding built environment provides context for design studies. Rather than modeling neighboring buildings from scratch, capture them with Luma AI and import the meshes as context geometry. This is especially valuable for urban infill projects where the relationship to adjacent buildings drives design decisions.
Client Communication
Sharing an interactive 3D capture with a client is far more effective than sharing photographs. The client can navigate the space themselves, look at whatever interests them, and develop a genuine spatial understanding before design begins. Luma’s web viewer requires no software installation. You simply send a link.
Construction Progress Documentation
Capturing a site at regular intervals during construction creates a 3D record of progress that goes far beyond standard construction photography. These captures can document concealed conditions before they are covered, record actual as-built positions of structure and MEP rough-ins, and provide evidence for dispute resolution if needed.
Pricing and Plans
Luma AI offers a free tier that is genuinely useful for occasional captures. As of early 2026, the pricing structure includes:
- Free tier: Limited captures per month with standard processing quality and watermarked exports
- Pro plan: Unlimited captures, higher processing quality, full-resolution exports, and priority processing - typically around $30 per month
- API access: For teams building Luma captures into custom workflows or applications, API pricing is usage-based
For a solo practitioner doing a few site captures per month, the free tier may suffice for initial evaluation. For regular use across projects, the Pro plan pays for itself the first time it saves you a return site visit. Check the Luma AI pricing page for current rates, as these change frequently.
Limitations and Challenges
No technology is perfect, and understanding Luma AI’s limitations prevents frustration and wasted time on site.
Reflective and transparent surfaces remain the hardest challenge for NeRF technology. Large glass facades, mirrors, polished marble floors, and bodies of water confuse the reconstruction because reflections change with viewing angle in ways the model struggles to learn. The practical workaround is to capture reflective areas from as many angles as possible, but expect some artifacts.
Large sites require patience. A single building capture takes 5 to 15 minutes of walking. A campus or large commercial complex might require dozens of individual captures that you then need to manage as separate scenes. Luma does not currently offer automated stitching of multiple captures into a unified model.
Dimensional accuracy is limited. Do not rely on Luma captures for construction dimensions. Error rates of 2 to 5 percent at building scale mean that a 10-meter wall might measure anywhere from 9.5 to 10.5 meters in the reconstruction. For schematic design reference this is fine. For construction documents it is not.
Moving objects create artifacts. People walking through your capture, cars driving past, or trees swaying in wind all create ghosting and distortion. Try to capture when the scene is as static as possible. Early mornings work well for exterior captures of occupied buildings.
Indoor lighting extremes. Very dark spaces and very bright windows in the same scene challenge the phone’s camera sensor. The NeRF model inherits whatever the camera recorded, so blown-out windows and dark corners in your video become blown-out and dark in the reconstruction.
Processing requires internet. All computation happens on Luma’s servers. You need to upload your video, which can be several gigabytes for a thorough building capture. On slow or metered connections, this becomes a practical barrier.
Best Practices for Quality Captures
These field-tested practices consistently produce better results:
- Charge your phone fully before a capture session. Video recording with GPS and sensor tracking drains batteries fast. Bring a power bank for extended sessions.
- Clean your lens. This sounds trivial, but a smudged phone lens softens every frame and degrades reconstruction quality. Wipe it before every capture.
- Use the rear camera, not the selfie camera. The rear camera has a better sensor, wider dynamic range, and on supported iPhones, a LiDAR scanner that significantly improves depth estimation.
- Lock exposure and focus if your phone supports it. Automatic exposure changes as you move, causing brightness flicker that confuses the reconstruction. Tap and hold on a mid-tone area to lock settings before you start recording.
- Capture in overcast conditions for exteriors when possible. Even, diffuse lighting eliminates harsh shadows and reduces dynamic range challenges. Direct sunlight creates strong contrasts that are harder for both the camera and the NeRF model to handle.
- Plan your path before recording. Walk the route once without recording to identify obstacles, plan where you will turn, and note areas that need extra coverage. Then record in one smooth take.
- Capture video at the highest quality your phone allows. 4K resolution provides more detail for the reconstruction to work with. Avoid slow-motion modes since they reduce resolution.
- Include a scale reference. Place a meter stick or a known object in the scene so you can calibrate dimensions after reconstruction. This is especially important if you plan to take measurements from the exported model.
- Capture during low-traffic times. Fewer moving objects means fewer artifacts. For public buildings, early morning or weekend captures often produce the cleanest results.
- Record an overlap pass. After your main capture, record a quick supplementary video focused on any areas where walls meet, where ceiling heights change, or where materials transition. These junctions are where reconstruction quality matters most.
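The scale-reference tip pays off back at the desk: once you measure the reference object in the export, rescaling the geometry is a one-line correction. A standard-library sketch follows; the 0.94-unit measurement is an invented example, not a typical Luma value.

```python
def calibrate_scale(vertices, measured_len, true_len):
    """Rescale mesh or point-cloud vertices so that a reference object
    that measures `measured_len` in the reconstruction comes out at its
    known real-world length `true_len` (same units for both)."""
    factor = true_len / measured_len
    return [(x * factor, y * factor, z * factor) for x, y, z in vertices]

# A 1 m scale bar measures 0.94 units in the export (invented numbers):
pts = calibrate_scale([(0.0, 0.0, 0.0), (9.4, 0.0, 0.0)], 0.94, 1.0)
```

Remember that this corrects overall scale only. It does not remove the 2 to 5 percent local distortion discussed under limitations, so calibrated captures are still reference-grade, not survey-grade.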
Getting Started with Your First Capture
The best way to learn Luma AI is to capture a building you know well. Start with your own office or home. You already know the spatial layout, which makes it easy to evaluate how accurately the reconstruction represents reality.
Download the Luma AI app, create an account, and walk through the exterior of the building following the orbit technique described above. Upload and wait for processing. When the result arrives, navigate through it in the web viewer and compare what you see with what you know about the building. This calibration exercise teaches you more about capture technique in 30 minutes than reading about it ever could.
For architects looking to deepen their skills with digital tools for practice, including 3D capture workflows, BIM integration, and computational design, explore the course catalog at Archgyan Academy.
NeRF-based 3D capture is still a young technology, but it has already reached a level of quality and accessibility that makes it practical for everyday architectural work. The barrier to capturing existing conditions in 3D has dropped from tens of thousands of dollars in equipment to a phone you already own. That shift changes how architects can engage with existing buildings, and the practices that figure out how to use this capability effectively will have a real advantage in renovation, preservation, and context-sensitive design work.