Renovato / ai
mode 03 · image → 3D

Scene-ready models from a render.

Image-to-3D AI for architecture is a class of generative models that converts a single render or photograph into a textured 3D mesh — typically exported as GLB, FBX, or USDZ with baked PBR materials. Renovato exposes this as a node in its atlas, ready to drop into Three.js, Unreal Engine, Unity, or ARKit. One source render, one click, one downloadable scene-ready asset.

routed across

GLB · web, AR
FBX · Unreal, Unity
USDZ · ARKit, Reality Composer
PBR · materials baked in
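The routing table above can be sketched as a small lookup — a hypothetical helper (these names are illustrative, not part of any Renovato SDK) that picks the export format for a target engine:

```javascript
// Hypothetical format-routing table mirroring the list above.
// Keys and target names are illustrative, not Renovato's API.
const EXPORT_FORMATS = {
  glb:  { targets: ['web', 'ar'],                 pbr: true },
  fbx:  { targets: ['unreal', 'unity'],           pbr: true },
  usdz: { targets: ['arkit', 'reality-composer'], pbr: true },
};

// Return the format whose target list contains the requested engine,
// or null when no format is routed to it.
function formatFor(engine) {
  const hit = Object.entries(EXPORT_FORMATS)
    .find(([, spec]) => spec.targets.includes(engine));
  return hit ? hit[0] : null;
}
```

For example, `formatFor('unity')` resolves to `'fbx'`, and an unrouted engine such as `'blender'` returns `null`.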

How it works

  1. Drop a render

    Drag in any architectural image. Front-facing volumes work best; multi-view inputs improve geometry quality.

  2. Generate the mesh

    Renovato runs an image-to-3D pipeline tuned for architectural scale, producing a textured mesh with PBR materials baked in. Median wall-clock: 8-15 seconds.

  3. Export to your engine

    Download as GLB (web, AR), FBX (Unreal, Unity), or USDZ (ARKit, Reality Composer). Drop into the engine — materials, normals, and UVs come along.

Presets in this mode

Each preset is named, parameterised, and routed to the engine that fits the task.

3D.01 · Massing model
3D.02 · Façade detail
3D.03 · Interior scene
3D.04 · Site context

What it's for

case 01 · for AR/VR-curious studios

Client AR preview

Send the partner a USDZ link they tap on iOS — the building appears in their living room at scale. No app, no plugin, no rebuild.
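The tap-to-view behaviour relies on Safari's AR Quick Look convention: an anchor with `rel="ar"` whose first child is an image opens the linked USDZ in AR. A minimal sketch — the helper function and file names are illustrative, not a Renovato API:

```javascript
// Build an iOS AR Quick Look link for a USDZ file.
// Safari treats an <a rel="ar"> whose first child is an <img> as an
// AR Quick Look trigger: tapping it opens the model at scale.
function arQuickLookLink(usdzUrl, posterUrl, alt = '3D model') {
  return `<a rel="ar" href="${usdzUrl}">` +
         `<img src="${posterUrl}" alt="${alt}"></a>`;
}
```

Embedding `arQuickLookLink('pavilion.usdz', 'pavilion.jpg')` in any page gives iOS users the one-tap AR preview; other browsers simply follow the link.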

case 02 · for studio portfolios

Three.js portfolio scene

Embed a rotatable 3D version of your hero project on the studio website. GLB drops straight into Three or React Three Fiber.
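A minimal sketch of that embed, assuming Three.js with its bundled `GLTFLoader` and `OrbitControls` addons; `model.glb` is a placeholder path for the exported asset:

```javascript
import * as THREE from 'three';
import { GLTFLoader } from 'three/addons/loaders/GLTFLoader.js';
import { OrbitControls } from 'three/addons/controls/OrbitControls.js';

const scene = new THREE.Scene();
const camera = new THREE.PerspectiveCamera(45, innerWidth / innerHeight, 0.1, 100);
camera.position.set(4, 2, 6);

const renderer = new THREE.WebGLRenderer({ antialias: true });
renderer.setSize(innerWidth, innerHeight);
document.body.appendChild(renderer.domElement);

const controls = new OrbitControls(camera, renderer.domElement);
scene.add(new THREE.AmbientLight(0xffffff, 0.6));

// Load the exported GLB; materials, normals, and UVs come baked in.
new GLTFLoader().load('model.glb', (gltf) => scene.add(gltf.scene));

renderer.setAnimationLoop(() => {
  controls.update();
  renderer.render(scene, camera);
});
```

React Three Fiber users can skip the boilerplate and hand the same GLB to `useGLTF` from `@react-three/drei`.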

case 03 · for real-time studios

Game-engine context model

Drop the FBX into Unreal or Unity to use as a backdrop or context model in a real-time presentation. PBR materials carry over.

case 04 · for early-design phases

Massing study from a sketch

Sketch a volume study, render a single frame, and convert that frame to a 3D model in one click. Useful at the brief stage when the BIM model doesn't exist yet.

Frequently asked

01

What is image-to-3D AI for architecture?

Image-to-3D AI for architecture is a class of generative models that converts a render or photograph into a textured 3D mesh, typically exported as GLB, FBX, or USDZ with baked PBR materials. Renovato exposes this as a node in its atlas, scene-ready for Three.js, Unreal, Unity, and ARKit.

02

What export formats does Renovato support?

GLB (web and AR), FBX (Unreal and Unity), and USDZ (ARKit and Reality Composer). PBR materials — base colour, metallic, roughness, normal, ambient occlusion — are baked into the output.

03

Does Renovato export BIM/IFC?

Not yet. Renovato's image-to-3D pipeline targets visualization formats. BIM/IFC export is on the roadmap. For BIM workflows, export GLB and import into a tool that handles the conversion.

04

How much detail does the output have?

Output is scene-ready for visualization — typically 5k-50k polygons depending on the source complexity, with 2K texture maps. It is not a survey-grade reconstruction; it is a presentation asset.

05

How long does generation take?

Median wall-clock is 8-15 seconds per model. Complex scenes with multiple volumes can take up to 30 seconds. Cost is typically 3-5 credits per run.
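The credit arithmetic is simple enough to sketch — a hypothetical helper (not a Renovato API) that turns a balance into a run-count range at the 3-5 credit cost above:

```javascript
// How many generations a credit balance covers at 3-5 credits per run.
// The cost range comes from the FAQ above; the function is illustrative.
function runsFor(credits, cost = { min: 3, max: 5 }) {
  return {
    best:  Math.floor(credits / cost.min),  // every run at 3 credits
    worst: Math.floor(credits / cost.max),  // every run at 5 credits
  };
}
```

On the 60 free starter credits, `runsFor(60)` gives roughly 12-20 generations.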

06

Can the 3D model be reanimated as a video?

The 3D output is a static mesh. To produce video from it, render frames in your engine — or chain Renovato's image-to-video preset on the source render directly.

07

Does the 3D model stay linked to the source render?

Yes — it sits as a node in the atlas with an edge back to the source. Change the source, regenerate downstream. The atlas keeps the lineage.
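The lineage idea can be sketched as a tiny graph — purely illustrative, not Renovato's internal data model — where each node keeps an edge back to its source, so everything downstream of a changed source is easy to find and regenerate:

```javascript
// Minimal sketch of atlas-style lineage: each node records the node it
// was derived from, so changing a source identifies all stale descendants.
class Atlas {
  constructor() { this.edges = new Map(); }  // node id -> source node id

  addNode(id, sourceId = null) {
    this.edges.set(id, sourceId);
  }

  // Walk edges forward to collect every node derived from `sourceId`.
  downstreamOf(sourceId) {
    const direct = [...this.edges]
      .filter(([, src]) => src === sourceId)
      .map(([id]) => id);
    return direct.concat(...direct.map((id) => this.downstreamOf(id)));
  }
}
```

For a chain like render → mesh → video, `downstreamOf('render')` returns both derived assets, which is exactly the set to regenerate when the source render changes.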

ch. 09 · begin

Drop a render.
See every version.

Open the studio, drop a render, watch Renovato relight, repopulate, animate and rebuild it. 60 free credits to start. No card.