3D Neural Synthesis
Global
2023

This research introduces a novel 3D machine-learning-aided design approach for early design stages. Integrating language within a multimodal framework grants designers greater control and agency in generating 3D forms. The proposed method leverages Stable Diffusion and Runway's Gen-1 to generate 3D Neural Radiance Fields (NeRFs), surpassing the limitations of 2D image-based outcomes in aiding the design process. This paper presents a flexible machine-learning workflow taught to students in a conference workshop and outlines the multimodal methods used between text, image, video, and NeRFs. The resultant NeRF design outcomes are contextualized within an agent-based Unity virtual environment for architectural simulation and are experienced with real-time VFX augmentations. This hybridized design process ultimately highlights the importance of feedback loops and control within machine-learning-aided design processes.
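
A minimal sketch of the text-to-image stage of a workflow like the one described above, using the open-source diffusers library. The checkpoint, prompt, sampler settings, and file name are illustrative assumptions, not the project's exact configuration.

```python
# Text-to-image stage: a prompt gives the designer language-level control
# over the generated form. (Illustrative sketch; checkpoint, prompt, and
# settings are assumptions, not the authors' exact setup.)
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # assumed checkpoint
    torch_dtype=torch.float16,
).to("cuda")

prompt = "organic timber pavilion, early-stage architectural massing study"
# A fixed seed makes each iteration reproducible, which supports the
# feedback loops the paper emphasizes.
generator = torch.Generator("cuda").manual_seed(42)

image = pipe(prompt, num_inference_steps=30, generator=generator).images[0]
image.save("pavilion_concept.png")  # input frame for the video / NeRF stages
```

Frames generated this way would then pass through the video (Gen-1) and NeRF stages outlined in the abstract.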


Read full research here

Collaborators: George Guida, Daniel Escobar & Carlos Navarro
