Engineering and Technology
Nature-inspired architected materials using unsupervised deep learning
S. C. Shen and M. J. Buehler
Natural materials achieve outstanding mechanical performance through hierarchical architectures, motivating bioinspired design of engineered materials. While advances in additive manufacturing have enabled complex architected materials, most machine-learning-assisted design methods are supervised and therefore limited by labeled data, pre-defined design spaces, and "black box" outputs that are hard to manipulate directly. Unsupervised generative models such as GANs learn from unlabeled data and produce unlimited novel outputs via a latent space that captures the data distribution and enables controlled exploration. The authors propose using StyleGAN, with its disentangled latent spaces (z and intermediate w), to generate and control hierarchical, nature-inspired microstructures. The research question is twofold: whether an unsupervised StyleGAN trained on leaf micrographs can generate, manipulate, and optimize nature-inspired architected materials in 2D and 3D, and whether latent-space operations (projection, encoding, style-mixing, interpolation) combined with downstream optimization (a CNN surrogate plus a genetic algorithm) can produce functionally improved designs beyond the original dataset.
Prior work has used supervised and probabilistic models (MLPs, CNNs, RNNs, Gaussian Processes) and genetic algorithms to predict or optimize material properties such as strength, toughness, failure mode, yield stress, and yield strain. These approaches require extensive labeled data and constrain outputs to pre-defined spaces. Unsupervised GANs provide unlabeled-data-driven generation with latent spaces enabling novel design synthesis. StyleGAN offers multi-scale control via an intermediate w-space for feature disentanglement and mixing. In architected materials, previous computational frameworks generated tileable microstructures and metamaterials via topology optimization, Voronoi-based methods, star-shaped metrics, discrete growth processes, shape-matching, and combinatorial search for isotropic elastic patterns. While powerful, many rely on pre-computed unit cells, pre-defined families, or mathematical abstractions that may miss statistical nuances found in nature. An example GAN application in materials generated 2D architectures approaching theoretical composite modulus bounds. The present study builds on these by leveraging StyleGAN’s continuous latent space for on-the-fly generation, smooth transitions, gradient stacking, and nature-inspired variability that can improve behaviors like delocalized buckling.
Generative model and dataset: The authors trained Nvidia’s StyleGAN2 on a custom dataset of 95 optical micrographs of leaves collected and imaged at varying magnifications and resolutions. StyleGAN operates in a 512-D z-space and a 512-D intermediate w-space injected at 18 generator layers, enabling scale-specific control and feature disentanglement.
Unit cell creation and 2D tiling: From generated leaf-like images, binary processing, island removal, and XY mirroring (OpenCV) produced symmetric, tileable "leaf-inspired" unit cells emulating 2D open-celled foams. A library of 1000 randomly seeded images and unit cells was generated. Unit cells were tiled in X and Y to create large-area materials; wall thickness could be post-processed to generate graded 2D materials. Mechanical behavior of printed samples was examined via stress–strain tests.
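The binarize → island-removal → mirror pipeline above can be sketched as follows. This is a minimal stand-in, not the authors' code: `scipy.ndimage` replaces the paper's OpenCV calls, and the threshold and minimum-island-size parameters are illustrative.

```python
import numpy as np
from scipy import ndimage

def make_unit_cell(img, threshold=0.5, min_island=20):
    """Binarize a grayscale image, drop small disconnected islands,
    and mirror in X and Y to make a symmetric, tileable unit cell.
    Sketch of the described pipeline; parameters are illustrative."""
    binary = (img > threshold).astype(np.uint8)
    # Island removal: label connected components and zero out any
    # component smaller than min_island pixels.
    labels, n = ndimage.label(binary)
    for lab in range(1, n + 1):
        if (labels == lab).sum() < min_island:
            binary[labels == lab] = 0
    # XY mirroring: concatenating the image with its horizontal and
    # vertical flips makes opposite edges identical, so the cell
    # tiles seamlessly in both directions.
    top = np.concatenate([binary, binary[:, ::-1]], axis=1)
    return np.concatenate([top, top[::-1, :]], axis=0)
```

Because the mirrored cell's left/right and top/bottom edges match by construction, tiling it in X and Y produces a continuous large-area material.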
Image manipulation in latent space: Two methods guided design within the latent space: (1) Projection: backpropagation-based optimization of a single w-vector to match a target image, retaining generator characteristics but with limited fidelity for off-distribution targets. (2) Encoding: layer-wise optimization of different w-vectors (one per layer) to increase fidelity at the cost of drifting from pure latent representations; early stopping maintains leaf-like nuances. Style-mixing: specifying different w-vectors for coarse (early layers) and fine (later layers) features to combine desired structural traits from multiple sources.
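Projection, as described above, amounts to gradient-based optimization of a single latent vector against a target image. The sketch below uses a toy frozen linear map in place of the pretrained StyleGAN2 generator, and the optimizer choice and step counts are illustrative assumptions, not the authors' settings.

```python
import torch

# Toy stand-in generator: a fixed random linear map from a 512-D w-vector
# to a flattened 16x16 "image". In the paper this is the pretrained
# StyleGAN2 generator with frozen weights.
torch.manual_seed(0)
G = torch.nn.Linear(512, 256)
for p in G.parameters():
    p.requires_grad_(False)

def project(target, steps=200, lr=0.1):
    """Projection: optimize a single w-vector so G(w) matches the target
    image under a pixel-wise MSE loss (illustrative settings)."""
    w = torch.zeros(512, requires_grad=True)
    opt = torch.optim.Adam([w], lr=lr)
    for _ in range(steps):
        loss = torch.mean((G(w) - target) ** 2)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return w.detach(), loss.item()

target = G(torch.randn(512))  # an in-distribution target
w_hat, err = project(target)
```

Encoding differs only in scope: instead of one shared w, a separate w-vector per generator layer is optimized, which raises fidelity but drifts from pure latent representations.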
Latent-space interpolation and reduced-dimension coordinate systems: The authors defined a 2D coordinate system spanned by three latent vectors a, b, c, with basis vectors x = (b − a) and y = (c − a). New points were synthesized as d = a + αx + βy, with coefficients α, β ∈ [0, 1], enabling smooth structural gradients between specified unit cells. These interpolations were used to create designed gradients, segmented trajectories, and periodic paths for controlled property distributions. The Img2Architecture tool mapped grayscale pixel values (0–1) to latent interpolations between two chosen unit cells, reconstructing images as spatially varying architectures, with Gaussian interpolation ensuring continuity.
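The plane-coordinate construction can be written out directly. Latent dimensionality 512 matches the paper; the three anchor vectors below are random placeholders standing in for latents of chosen unit cells.

```python
import numpy as np

rng = np.random.default_rng(0)
# Three anchor latent vectors (placeholders for chosen unit cells).
a, b, c = rng.normal(size=(3, 512))

def latent_plane_point(a, b, c, alpha, beta):
    """Point on the 2D latent plane with basis x = (b - a), y = (c - a):
    d = a + alpha*(b - a) + beta*(c - a), alpha, beta in [0, 1]."""
    return a + alpha * (b - a) + beta * (c - a)

# A 5x5 grid of latents: decoding each point yields a smooth
# structural gradient between the three anchor unit cells.
grid = np.stack([[latent_plane_point(a, b, c, i / 4, j / 4)
                  for j in range(5)] for i in range(5)])
```

The corners (α, β) = (0,0), (1,0), (0,1) recover a, b, and c exactly, so a designed gradient interpolates between the specified unit cells rather than drifting away from them.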
3D architectures via inverse tomography: Small-step latent walks produced continuous image slices stacked along Z (and combined with XY tiling) to create 3D graded architectures. Examples included periodic alternation between lowest- and highest-density unit cells and interpolation among major structural families to produce complex connected open-cell structures. Models were exported as STL (trimesh) and 3D-printed.
Simulation pipeline and surrogate modeling: For high-throughput mechanics, 2D coarse-grained models were generated per structure by tiling a hexagonal unit cell of particles for each solid pixel. Uniaxial compression simulations used LAMMPS with Lennard-Jones interactions, 2D periodic boundaries, NVT ensemble, timestep 0.001 ps, strain rate −0.01/ps. Effective modulus was computed from the linear elastic region of stress–strain curves via regression with quality thresholds; structures with insufficient connectivity (failing criteria) were assigned modulus 0. A dataset of 5000 StyleGAN-generated structures was created with density and normalized effective modulus (normalized by a solid square’s modulus). A four-layer CNN (PyTorch; Conv+ReLU+MaxPool x4, then two fully connected layers) was trained on images to predict normalized modulus; data split 70/10/20 train/val/test; trained 100 epochs; achieved R^2=0.971 on test.
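The surrogate's architecture (four Conv+ReLU+MaxPool blocks followed by two fully connected layers) can be sketched in PyTorch. Channel widths, the 64×64 input resolution, and the hidden head size are assumptions for illustration; the paper does not report them in this summary.

```python
import torch
import torch.nn as nn

class ModulusCNN(nn.Module):
    """Surrogate mapping a binary structure image to a normalized
    effective modulus. Sketch only: channel widths and the 64x64
    input size are assumptions, not the authors' exact settings."""
    def __init__(self):
        super().__init__()
        blocks, in_ch = [], 1
        for out_ch in (16, 32, 64, 128):  # four Conv+ReLU+MaxPool blocks
            blocks += [nn.Conv2d(in_ch, out_ch, 3, padding=1),
                       nn.ReLU(),
                       nn.MaxPool2d(2)]
            in_ch = out_ch
        self.features = nn.Sequential(*blocks)
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(128 * 4 * 4, 64),  # 64x64 input -> 4x4 after 4 pools
            nn.ReLU(),
            nn.Linear(64, 1))            # scalar normalized modulus

    def forward(self, x):
        return self.head(self.features(x)).squeeze(-1)
```

Replacing a ~3-minute LAMMPS simulation with a sub-second forward pass through such a network is what makes the 1000-generation GA search over the latent space tractable.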
Genetic algorithm (GA) optimization in latent space: A GA operated directly on StyleGAN’s 18-layer w-space, where each individual comprised 18 vectors (512×1 each). Initialization used 10 "pure" individuals (same w-vector for all layers). Each generation created offspring by (i) linear interpolation between parents’ genes, (ii) gene-level crossover across 18 layers, (iii) Gaussian point mutations of vector entries, and added new randomly seeded pure individuals. Population turnover maintained size 10 (adding 7, removing 3 random and 4 least fit) to emphasize exploration. The CNN surrogate evaluated individuals, enabling sub-second evaluations versus ~3 min per simulation. Four fitness schemes were tested: maximize modulus; equally weighted modulus and density; randomly weighted modulus and density per evaluation (RWGA); maximize modulus/density ratio. Over 1000 generations, the algorithm tracked the 50 top-modulus and 50 Pareto-efficient individuals for each scheme; selected individuals were validated with simulations.
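The individuals and the three offspring operators above can be sketched on 18×512 arrays (one row per generator layer). The mutation rate and scale below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)
N_LAYERS, DIM = 18, 512  # StyleGAN w-space: 18 layers x 512-D vectors

def new_pure_individual():
    """A 'pure' individual repeats one w-vector across all 18 layers."""
    return np.tile(rng.normal(size=(1, DIM)), (N_LAYERS, 1))

def interpolate(p1, p2):
    """Offspring by linear interpolation between parents' genes."""
    t = rng.uniform()
    return t * p1 + (1 - t) * p2

def crossover(p1, p2):
    """Gene-level crossover: each layer vector is taken whole from
    one parent or the other."""
    mask = rng.integers(0, 2, size=N_LAYERS).astype(bool)
    return np.where(mask[:, None], p1, p2)

def mutate(ind, rate=0.01, sigma=0.1):
    """Gaussian point mutations on a small fraction of vector entries
    (rate and sigma are illustrative, not the paper's values)."""
    out = ind.copy()
    mask = rng.uniform(size=ind.shape) < rate
    out[mask] += rng.normal(scale=sigma, size=int(mask.sum()))
    return out
```

Each offspring is decoded by the generator and scored by the CNN surrogate; the population is then culled back to size 10, with random removals included to keep the search exploratory.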
Fabrication and testing: 2D samples were 3D-printed using Stratasys Connex3 Objet500 (FLX-MK-S50-DM), Ultimaker S3 (TPU, PLA), and Anycubic Photon Mono X. Compression tests used a 5 kN Instron at 30 mm/min.
Design space expansion: The StyleGAN was retrained on a mixed dataset of leaf venation and trabecular bone (equalized grayscale to reduce color bias), enabling generation and style-mixing of leaf-like, bone-like, and hybrid images to produce novel microstructures with varying degrees of hierarchy.
• StyleGAN trained on 95 leaf micrographs produced a 512-D latent space enabling generation of diverse, tileable unit cells with multiple hierarchical levels, visually grouping into three major families.
• A library of 1000 unit cells spanned a broad density range (near 0–1), yielding materials at desired relative densities, which largely govern mechanical properties via E = Es(ρ/ρs)^2.
• 2D printed tiles exhibited open-cell foam behavior with three regions in stress–strain curves; example samples (seeds 441 and 863) had relative densities 0.399 and 0.601 with effective moduli 96.6 kPa and 401.7 kPa, respectively.
• Projection vs encoding: projection preserves leaf-like features but has lower fidelity for out-of-distribution targets; encoding achieves higher fidelity but may lose pure leaf nuances; both enable guided design. The model generated colors beyond the training data (e.g., black/red), indicating expansive generative capacity.
• Style-mixing allowed separation and recombination of coarse structural features (early layers) and fine details (later layers), enabling directed control of hierarchical complexity and other attributes.
• Reduced-dimension latent planes and latent walks produced smoothly varying 2D gradients; the Img2Architecture tool mapped images to spatially varying architectures to design patterned buckling regions that deformed as intended under compression.
• 3D inverse-tomography stacking of latent slices yielded complex, connected open-cell architectures, including periodic density variations and long interpolations among major families (e.g., ~250 steps) to control transition smoothness.
• Simulation dataset (n=5000) and CNN surrogate (R^2=0.971) enabled rapid property prediction, supporting GA-driven optimization directly in w-space.
• GA outcomes: the random-weighted scheme (RWGA) found top-modulus individuals exceeding the maximum modulus in the training set (training max normalized modulus 0.732 at density 0.912), with some simulated moduli exceeding the solid reference (likely an artifact of automated modulus extraction). Pareto-front solutions generally aligned with the expected E ∝ (ρ/ρs)^2 behavior; for RWGA populations, a modulus-vs-density^2 fit yielded R^2=0.9819. The surrogate underpredicted top-modulus cases but predicted Pareto sets well.
• Retraining with trabecular bone plus leaves produced novel microstructures and hierarchical hybrids via style-mixing, expanding the generative design space beyond the three leaf families.
The study demonstrates that an unsupervised StyleGAN can learn structural priors from unlabeled biological images (leaf venation) and provide a manipulable, continuous latent space to generate, guide, and optimize architected materials. Latent-space tools (projection, encoding, style-mixing, defined coordinate planes, and smooth walks) make it possible to control hierarchical features, density, and spatial gradients, enabling 2D and 3D designs that capture natural variability beneficial for behaviors like delocalized buckling. The CNN surrogate model and GA efficiently explored the latent space to identify high-modulus structures and Pareto-efficient solutions, validating the capacity of unsupervised generative design to move beyond the initial dataset’s envelope for certain objectives (e.g., modulus) while noting constraints on the Pareto front. Experimental prints and mechanical tests confirmed open-cell foam behavior and demonstrated programmed deformation using image-guided architectures. Incorporating additional structural classes (trabecular bone) broadened the latent design manifold and unlocked hierarchical hybrids, underscoring that dataset diversity and size are critical to expanding generative capabilities. Overall, the approach addresses the need for controllable, non–black-box generative design frameworks that transfer information across manifestations and dimensions (2D to 3D) for functional bioinspired metamaterials.
An unsupervised StyleGAN-based workflow was established to generate and manipulate nature-inspired architected materials, validated on leaf venation images. The work contributed: (1) creation of a library of tileable, hierarchical unit cells covering a wide density range; (2) latent-space tools (projection, encoding, style-mixing, reduced-dimension coordinates) for directed design; (3) 2D/3D gradient materials via latent walks and inverse tomography stacking; (4) a high-throughput simulation pipeline with a CNN surrogate (R^2=0.971) for modulus prediction; and (5) a GA operating directly in StyleGAN’s w-space to optimize microstructures, discovering high-modulus designs and Pareto-efficient sets consistent with cellular solids scaling. Retraining with trabecular bone expanded the design space and produced hierarchical hybrids. Future work should: (i) assemble larger, more diverse and complex structural datasets to increase generative variety and smooth transitions across disparate families; (ii) refine latent-space mining (e.g., conditional generation, improved GA strategies) to better explore beyond training distributions; and (iii) develop multiscale analyses, anisotropy considerations, and broader property objectives (yield strength, toughness, Poisson’s ratio) with higher-fidelity simulations and experimental validation for application-driven inverse design.
• Dataset size and diversity: The initial leaf dataset (95 images) limited StyleGAN's generative space to three main structural families, constraining exploration and the GA's ability to surpass the training Pareto front.
• Surrogate model bias: The CNN overfit slightly and underpredicted moduli for top-performing designs, likely due to a training distribution lacking very high-modulus/density samples.
• Automated modulus extraction artifacts: Some simulated moduli exceeded the solid reference, suggesting artifacts from automated linear-region detection; although overall trends matched expectations, absolute values for certain extremes may be unreliable.
• Connectivity and fabrication constraints: Some generated or printed architectures exhibited imperfect connectivity or deviations during fabrication, affecting mechanical performance and continuity in patterned gradients.
• Objective scope: The study focused primarily on effective modulus; broader multifunctional targets (e.g., toughness, yield strength, Poisson's ratio, dynamic responses) were not optimized here.
• Generalizability: The approach's extrapolation capacity depends on the training data; broader datasets and conditional controls are needed to reliably generate off-distribution structures (e.g., closed-cell foams) and to precisely control features such as wall thickness.