Hi,
I’m wondering whether OpenEvolve supports running candidate evaluations in an external execution environment, such as a GPU cluster or a Docker-based server. For example, for a task like “discovering an optimized mixture-of-experts architecture for mathematical reasoning,” the system would need to fine-tune or evaluate candidate model architectures on GPU hardware. Does OpenEvolve support such GPU-backed execution, and is it also possible for the LLM to call external tools or MCP servers during the evolutionary process? A rough sketch of the kind of setup I have in mind is below.
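For context, this is roughly what I’m imagining — a minimal sketch, assuming the evaluator contract is a Python file exposing `evaluate(program_path)` that returns a metrics dict. The image name, the `--metrics-out` convention, and the `combined_score` key are my own placeholders, not anything I’ve confirmed in the OpenEvolve docs:

```python
# Hypothetical evaluator that runs each candidate inside a GPU-enabled
# Docker container and reads back a metrics dict for the evolution loop.
import json
import subprocess
import tempfile
from pathlib import Path

IMAGE = "my-moe-trainer:latest"  # hypothetical CUDA-enabled training image


def evaluate(program_path: str) -> dict:
    """Evaluate one candidate program on the GPU host via Docker."""
    workdir = Path(tempfile.mkdtemp())
    metrics_file = workdir / "metrics.json"
    try:
        subprocess.run(
            [
                "docker", "run", "--rm",
                "--gpus", "all",  # expose host GPUs (needs nvidia-container-toolkit)
                "-v", f"{Path(program_path).resolve()}:/app/candidate.py:ro",
                "-v", f"{workdir}:/out",
                IMAGE,
                "python", "/app/candidate.py",
                "--metrics-out", "/out/metrics.json",  # hypothetical convention
            ],
            check=True,
            timeout=3600,  # guard against hung training runs
        )
        return json.loads(metrics_file.read_text())
    except (subprocess.SubprocessError, FileNotFoundError, json.JSONDecodeError):
        # Failed candidates get the worst score instead of crashing the loop.
        return {"combined_score": 0.0}
```

Is something along these lines the intended way to plug in external infrastructure, or is there a built-in mechanism for this (and for MCP/tool calls) that I’m missing?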
Thanks!