Question: GPU/Docker Execution and Tool/MCP Integration Support #342

@passing2961

Hi,

I’m wondering whether OpenEvolve supports LLM interaction with an external execution environment such as a GPU cluster or a Docker-based server. For example, in a task like “discovering an optimized mixture-of-experts architecture for mathematical reasoning,” the system would need to fine-tune or evaluate candidate model architectures on a GPU environment. Does OpenEvolve support such GPU-backed execution, and is it also possible to call external tools or MCP servers during the evolutionary process?
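To make the question concrete, here is the kind of bridging I have in mind. This is purely a hypothetical sketch, not OpenEvolve's actual API: I'm assuming the evaluator is a plain Python function that takes a candidate program path and returns a metrics dict, and the image name, mount paths, and in-container entrypoint (`run_eval.py`) are all made up for illustration.

```python
import json
import subprocess


def build_docker_command(program_path, image="moe-eval:latest"):
    """Build a docker CLI invocation that runs the candidate on GPUs.

    Hypothetical: 'moe-eval:latest' and '/workspace/run_eval.py' are
    placeholder names. '--gpus all' requires the NVIDIA Container
    Toolkit on the host.
    """
    return [
        "docker", "run", "--rm", "--gpus", "all",
        "-v", f"{program_path}:/workspace/candidate.py:ro",
        image,
        "python", "/workspace/run_eval.py",
    ]


def evaluate(program_path):
    """Hypothetical evaluator: dispatch the candidate program to a
    GPU-enabled container and parse a JSON metrics dict from stdout."""
    result = subprocess.run(
        build_docker_command(program_path),
        capture_output=True,
        text=True,
        timeout=3600,  # fine-tuning/eval jobs can be long-running
    )
    if result.returncode != 0:
        # Penalize candidates whose evaluation crashed in the container.
        return {"score": 0.0, "error": 1.0}
    return json.loads(result.stdout)
```

If something along these lines is the intended integration path (i.e. all external execution happens inside a user-supplied evaluator), that would largely answer the GPU/Docker half of my question; the tool/MCP half would still be open.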

Thanks!

Metadata

Labels: question (further information is requested)
