Participatory Evolution of Artificial Life Systems via Semantic Feedback

Presented at
SIGGRAPH Asia 2024

https://doi.org/10.1145/3757369.3767620



Figure 1: Evolutionary Pathways of Digital Lifeforms and Schematic Illustration of Ecological States in a Digital Life System.

Human interaction with complex systems, including ecological environments, social dynamics, and artificial simulations, presents ongoing challenges for both modeling and creative intervention. While agent-based models and artificial life systems provide formal tools for simulating emergent behavior, they often rely on fixed rule structures and limited input modalities. This makes it difficult to embed high-level conceptual intent into evolving system dynamics.

Recent advances in multimodal AI, particularly vision-language models such as Contrastive Language-Image Pretraining (CLIP), have expanded the potential of using natural language to guide generative systems. However, most implementations adopt a static prompt-to-output paradigm in which language acts as a trigger rather than a continuous modulator. This restricts semantic iteration and reduces a system’s ability to adapt behavior in response to evolving user input.
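To make this concrete, the scoring primitive underlying such systems can be sketched as follows: CLIP embeds an image and a prompt into a shared space, and their cosine similarity measures semantic alignment. This is a minimal illustration using the Hugging Face transformers CLIP API; the checkpoint choice and the semantic_alignment helper are our own naming for illustration, not the paper's code.

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

# Publicly available CLIP checkpoint; any CLIP variant would serve here.
model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def semantic_alignment(image: Image.Image, prompt: str) -> float:
    """Cosine similarity between an image and a prompt in CLIP's joint space."""
    inputs = processor(text=[prompt], images=image,
                       return_tensors="pt", padding=True)
    with torch.no_grad():
        out = model(**inputs)
    # The embeddings returned by CLIPModel are already L2-normalized,
    # so their dot product is the cosine similarity.
    return float((out.image_embeds * out.text_embeds).sum())
```

Called once, this realizes the static prompt-to-output paradigm; the framework introduced below instead evaluates such a score repeatedly inside an optimization loop, so that language acts as a continuous regulator rather than a one-shot trigger.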

In this work, we return to artificial life to make generative processes visible and shapeable by people. Generative tools are now common, yet they often hide how results are produced and why certain behaviors appear. From a critical rather than a purely technical perspective, our aim is to keep the act of making in view and to invite negotiation of meaning, not merely the selection of final outputs. We therefore treat language as a medium that regulates behavior over time rather than as a trigger for one-off results. Our system lets audiences grow, steer, and revise an evolving ecosystem, so that authorship becomes shared and procedures remain legible. The focus thus shifts from matching a prompt to cultivating dynamics that can be discussed and adjusted. In cultural settings, this reframes spectatorship as participation and relocates meaning-making into the ongoing regulation of behavior. We understand open-ended evolution not as endless novelty but as the practice of maintaining diversity, legibility, and revisability in a public setting.

Motivated by this view, we introduce a semantic-feedback-driven framework for evolving artificial life systems, in which natural language serves as both an expressive input and a regulatory signal. Grounded in a swarm-based model, our architecture integrates a Bidirectional Encoder Representations from Transformers (BERT) prompt-to-parameter encoder, a Covariance Matrix Adaptation Evolution Strategy (CMA-ES) optimizer, and a CLIP-based semantic evaluation module. This closed-loop pipeline allows language to influence both the visual output and the behavioral rules of the simulation, enabling interpretable, adaptive, and semantically aligned evolution.
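As a rough sketch of this closed loop (under stated assumptions, not the paper's implementation): a pretrained BERT encoder maps a prompt to an initial behavior-parameter vector, CMA-ES proposes perturbations of that vector, each candidate is rendered and scored against the prompt with the CLIP alignment score above, and the best parameters drive the live swarm. The param_head projection and render_swarm renderer are hypothetical placeholders; a real system would train the former and rasterize actual simulation frames in the latter.

```python
import cma                      # pip install cma
import numpy as np
import torch
from PIL import Image, ImageDraw
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
bert = BertModel.from_pretrained("bert-base-uncased")
# Hypothetical prompt-to-parameter head; the paper's encoder would be trained.
param_head = torch.nn.Linear(768, 8)

def prompt_to_params(prompt: str) -> list[float]:
    """Map a prompt to an initial behavior-parameter vector via BERT."""
    tokens = tokenizer(prompt, return_tensors="pt")
    with torch.no_grad():
        pooled = bert(**tokens).pooler_output        # [1, 768] sentence embedding
    return param_head(pooled).squeeze(0).tolist()

def render_swarm(params) -> Image.Image:
    """Placeholder renderer: a real system would simulate the swarm under
    `params` and rasterize a frame; here we merely scatter seeded dots."""
    img = Image.new("RGB", (224, 224), "black")
    draw = ImageDraw.Draw(img)
    rng = np.random.default_rng(abs(hash(tuple(params))) % 2**32)
    for x, y in rng.uniform(4, 220, size=(64, 2)):
        draw.ellipse([x - 2, y - 2, x + 2, y + 2], fill="white")
    return img

def fitness(params, prompt: str) -> float:
    # CMA-ES minimizes, so negate the CLIP alignment score defined above.
    return -semantic_alignment(render_swarm(params), prompt)

prompt = "a slow, drifting school of translucent fish"
es = cma.CMAEvolutionStrategy(prompt_to_params(prompt), 0.3, {"maxiter": 20})
while not es.stop():
    candidates = es.ask()                            # sample parameter vectors
    es.tell(candidates, [fitness(c, prompt) for c in candidates])
best_params = es.result.xbest                        # parameters for the live swarm
```

Because the score can be recomputed at any time, a new prompt can re-seed the loop mid-run, which is what lets language modulate behavior continuously rather than select a single output.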

To demonstrate the framework’s capabilities, we implemented it in an interactive platform that enables users to co-construct a semantic ecosystem. Participants generate and evolve synthetic agents using language prompts. These agents coexist and interact in a shared artificial environment, exhibiting behaviors shaped by both local rules and emergent collective dynamics. As prompts accumulate, the system analyzes linguistic histories and behavioral trajectories to derive meta-rules that recursively influence subsequent evolution. We evaluated the system in two settings: a controlled study assessing its alignment with user intent, and a public deployment supporting open-ended co-creation. Results suggest that semantic input enhances controllability, interpretability, and user engagement, positioning the framework as a novel platform for participatory artificial life and expressive complex system design.
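For concreteness, the local rules mentioned above can be as simple as a boids-style update; the sketch below is one conventional formulation, where the weights w_coh, w_sep, and w_ali stand in for the kind of behavior parameters the evolutionary loop would tune (our naming for illustration, not the paper's).

```python
import numpy as np

def boids_step(pos, vel, w_coh=0.01, w_sep=0.05, w_ali=0.05,
               radius=2.0, dt=1.0, max_speed=1.0):
    """One update of N 2-D agents under cohesion, separation, and alignment."""
    diff = pos[None, :, :] - pos[:, None, :]         # diff[i, j] = pos[j] - pos[i]
    dist = np.linalg.norm(diff, axis=-1)
    near = (dist < radius) & (dist > 0)              # local neighborhoods, self excluded
    for i in range(len(pos)):
        nbrs = near[i]
        if not nbrs.any():
            continue
        cohesion = pos[nbrs].mean(axis=0) - pos[i]                        # toward neighbors
        separation = -(diff[i][nbrs] / dist[i][nbrs][:, None] ** 2).sum(axis=0)  # away from crowding
        alignment = vel[nbrs].mean(axis=0) - vel[i]                       # match headings
        vel[i] += w_coh * cohesion + w_sep * separation + w_ali * alignment
    speed = np.linalg.norm(vel, axis=-1, keepdims=True)
    vel = np.where(speed > max_speed, vel / np.maximum(speed, 1e-9) * max_speed, vel)
    return pos + vel * dt, vel

# Example: 64 agents on a 20x20 plane, stepped forward 100 ticks.
pos = np.random.rand(64, 2) * 20
vel = np.random.randn(64, 2) * 0.1
for _ in range(100):
    pos, vel = boids_step(pos, vel)
```

Emergent collective dynamics arise from iterating such local rules; evolving their weights under semantic feedback is what couples the prompt history to the ecosystem's behavior.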

Video Demo: