Diffusions in Architecture: Artificial Intelligence and Image Generators
by Matias del Campo, Lev Manovich
Diffusion Models: A Historical Continuum
Alicia Nahmad Vazquez
In 1985, the first version of Microsoft Windows (Windows 1.0) shipped with a raster graphics editor called Paint. Paint rapidly became one of the most used applications in the early versions of Windows and introduced many people to painting on a computer for the first time.1 Graphics editing software has evolved since. Successive versions from different companies, such as Adobe with its Photoshop suite, have made painting and graphics manipulation on the computer more sophisticated and powerful, allowing for operations of ever-increasing complexity. Architects have incorporated these tools into their repertoire, and they are used today in every architecture school and professional practice without judgment.
Although new functions are added with each release, the tool menu on the left side of Paint in 1985 has seen little evolution compared with the left-hand menu of Photoshop CC in 2023.2 The tools used for graphics manipulation have remained largely constant for almost 40 years (image 01). In 2014, the emergence of GANs3 and their growing popularity allowed artists and architects to engage with image creation differently. Collecting and curating datasets for training became an alternative way of generating new images. Vector operations and feature visualization through the alteration of specific neurons became new image-manipulation tools.
In the lead‐up to diffusion models, which started emerging from research ...