Conversation

@ChrisRackauckas
Member

This post fully explains the licensing changes

@AayushSabharwal
Member


Mostly just grammatical/phrasing changes.

ModelingToolkit compiler when ready. The ModelingToolkit tearing passes still need to improve their rules around
arrays to remove some scalarization, and this will happen in the v11 time frame as no breaking changes are required
for that. When completed, the generated code for array expressions will be O(1) in code size and compilation time.
This will solve a long-standing issue in the ModelingToolkit compiler, and approximately 80% of the work is now
Member


80% seems a little over the top 😅 considering we still need tensor calculus. We also need to update StateSelection.jl's core data structures to support array equations, of which it remains (rather blissfully) unaware.

Member Author


What's a better guesstimate?

Member


Uhm. I'd say around 60% best case.

@isaacsas
Member

isaacsas commented Dec 2, 2025

This feels very premature to me. We need to make a community post explaining the situation on Discourse and solicit feedback before proceeding here. The community may have a strong feeling that the proposed split of MTK is not something they support, and they would prefer to just continue with an MIT MTK 10 as the SciML version (note, I'm not saying I'd personally advocate this direction).

@ChrisRackauckas
Member Author

I talked with quite a few people, and the issue with that is that it gives a false sense of choice. Continuing with the MIT MTK v10 version as the SciML version just isn't really on the docket. Either we make it the project, in which case to compensate for the losses we need to find a private donor of ~$15mil (both EU and US sources have already been fully checked) plus 2 people, not covered by that funding, who will leave their jobs to go full time on maintenance, or we do this route. The first is such a ridiculous ask that it sounds more like satire, but we have all come to terms with that being the reality of the project by now, as explained in the post. I don't think the community will find something to satisfy that in the next week or two, especially after we've had open calls for donations and maintainers since the start of the project. So either we leave the repo dead and abandoned, which would still probably need some compensation because that too can be a downside in future grant reviews, or we do this approach. At the end of the day, it's a false sense of choice unless some miracle happens. There are grants to submit for the next iteration of all entities, including MIT, and they all have had this requirement for years.

And of course we can always merge backports to v10 (and v9) if anyone does show up with commits.

@isaacsas
Member

isaacsas commented Dec 2, 2025

edit: I have edited this as I thought the post had been merged and was not still open for comments. My apologies.

@isaacsas
Member

isaacsas commented Dec 2, 2025

At least for me, I would not vote to make this post / official announcement without providing an opportunity for community feedback. Even if the feedback post states that this is the only way to realistically continue MTK development.

@ChrisRackauckas
Member Author

What do you think is the best way to share it? A link to this PR?

@isaacsas
Member

isaacsas commented Dec 2, 2025

Chris, I think you are misinterpreting my comments / thoughts on this. I am not trying to push back on a hybrid approach as we previously discussed.

I do, however, think we need to give our users a chance to understand the change and its ramifications, and to comment on it. I don't expect anyone to offer a better solution, but they deserve to be told the situation, what we think is the best resolution we can construct, and to have a chance to at least give us their feedback. Let's give users a heads up about this change and its necessity instead of surprising them with it and leaving them no opportunity to weigh in before we officially make the switch.

On another note, we did get MTK funding from Wellcome and CZI, though the funding ultimately primarily contributed to MethodOfLines and Catalyst. However, in both cases I believe the funders were happy to fund MTK in addition to its ecosystem (isn't MTK part of what pulled Wellcome in originally?). That said, I don't think either of these funders are continuing such programs, so it completely reinforces your comments about lack of funding mechanisms these days.

@devmotion
Member

I don't see why Pumas wouldn't just use ModelingToolkitBase, it doesn't rely on the other functionality.

structural_simplify is part of the SBML/QSP workflow in Pumas, and more generally when integrating external MTK models in a Pumas model. Additionally, being able to initialize a system with a combination of fixed values and eg zero derivatives is of interest to us. I used both features also in a research project recently. However, as far as I understood, both structural_simplify and the more advanced initialization system would be part of the AGPL-licensed ModelingToolkit, not MTKBase.


@TorkelE
Member

TorkelE commented Dec 3, 2025

I think it might be easiest to just have a quick meeting to go through the details, ideally soon so that we don't delay sorting this out too much. I'm mostly available; it probably just hinges on finding a time Chris and Sam can both make.

@AayushSabharwal
Member

structural_simplify is part of the SBML/QSP workflow in Pumas, and more generally when integrating external MTK models in a Pumas model. Additionally, being able to initialize a system with a combination of fixed values and eg zero derivatives is of interest to us. I used both features also in a research project recently. However, as far as I understood, both structural_simplify and the more advanced initialization system would be part of the AGPL-licensed ModelingToolkit, not MTKBase.

A primitive form of structural_simplify exists in the no-GPL MTKBase. It only performs order reduction, and does not eliminate variables as observed. Initialization will also continue to live in MTKBase. Initializing with zero derivatives of differential variables will continue to function in MTKBase.

@devmotion
Member

does not eliminate variables as observed

That's the crucial part in the SBML/QSP workflow (and I guess more generally when integrating MTK models, but that's where it came up repeatedly recently) 🙂

Initializing with zero derivatives of differential variables will continue to function in MTKBase.

That's great to hear and was not clear to me from previous discussions.

@ChrisRackauckas
Member Author

does not eliminate variables as observed

Pumas and SBML do rely on a very primitive version of this though, which is effectively just expressions like x ~ ... that are already written as explicit calculations. I think creating a new function that turns explicit algebraic relations into observed is completely fine. It needs a new transformation function to be written so that it doesn't use the tearing graph, but I can see a reasonable purely symbolic algorithm pretty easily. You scan the left-hand side: every variable is either D(x) or x; if not, error. You then keep all of the D(x) ~ expressions and move all of the x ~ expressions to observed. Because you won't have dependency graph resolution, you might want to just check that all symbolic values on the rhs are differential variables. This quick version requires none of the tearing infrastructure, 50 lines or so?
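A toy sketch of that scan (purely illustrative, not MTK's implementation or API; the `ToyEq` type and function names here are made up for the example, with strings standing in for symbolic expressions) could look like:

```julia
# Illustrative sketch only: a toy equation type standing in for symbolic
# equations, to show the left-hand-side scan described above.
struct ToyEq
    lhs::String   # either "D(x)" (a derivative) or a bare variable like "y"
    rhs::String
end

is_derivative(lhs::String) = startswith(lhs, "D(") && endswith(lhs, ")")
is_bare_variable(lhs::String) = !isempty(lhs) && all(isletter, lhs)

# Keep D(x) ~ rhs equations; move x ~ rhs equations to observed; error otherwise.
function split_explicit_observed(eqs::Vector{ToyEq})
    diff_eqs = ToyEq[]
    observed = ToyEq[]
    for eq in eqs
        if is_derivative(eq.lhs)
            push!(diff_eqs, eq)
        elseif is_bare_variable(eq.lhs)
            push!(observed, eq)
        else
            error("LHS must be D(x) or a bare variable, got $(eq.lhs)")
        end
    end
    return diff_eqs, observed
end

eqs = [ToyEq("D(x)", "x * k + y"), ToyEq("y", "2x + t")]
diffs, obs = split_explicit_observed(eqs)
println(length(diffs), " differential, ", length(obs), " observed")
```

The real version would additionally need the rhs check mentioned above (all symbolic values on the right-hand sides of observed equations resolving to differential variables) since there is no dependency graph to order them.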

@AayushSabharwal
Member

I actually have a pass since early in v10 which does exactly this, since it speeds up mtkcompile in a lot of cases (especially initialization). It uses the incidence graph, but I can probably either build it temporarily in MTKBase or do without it.

@ChrisRackauckas
Member Author

I think that's worth adding.

@isaacsas
Member

isaacsas commented Dec 3, 2025

If someone takes the time and effort to port pieces of the MTK 10 functionality that has moved to the JuliaHub library, will that be accepted into the new base MIT MTK 11?

@ChrisRackauckas
Member Author

It would probably be best as a subpackage/sublibrary so that it stays maintainable. Part of the refactoring is that it's just better to have a flexible pass system in the compiler: this will enable future alternative algorithms for tearing/index reduction (of which there are many possible alternatives with trade-offs that we haven't been able to explore because we've had one hardcoded path). With a flexible pass system in MTKBase there can be many different forms of tearing and index reduction, and if someone wants to contribute a sublibrary that adds new passes, such as the kind you suggest, that is a very reasonable thing to live in the lib folder.
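To illustrate the idea (purely hypothetically; MTKBase's actual pass API may look nothing like this, and every name below is invented for the example), a flexible pass system can be as simple as an ordered list of system-to-system transformations folded over the model:

```julia
# Hypothetical sketch of a compiler pass pipeline, not MTKBase's real API.
# A "system" is stood in for by a named tuple; each pass maps system -> system.
abstract type CompilerPass end

struct OrderReduction <: CompilerPass end
struct MoveExplicitObserved <: CompilerPass end

# Placeholder: a real pass would rewrite higher-order derivatives here.
apply(::OrderReduction, sys) = sys

# Move explicit algebraic equations (tagged :alg) into the observed list.
function apply(::MoveExplicitObserved, sys)
    alg  = filter(eq -> eq.kind == :alg,  sys.eqs)
    diff = filter(eq -> eq.kind == :diff, sys.eqs)
    (eqs = diff, observed = vcat(sys.observed, alg))
end

# The "compiler" is just a fold of passes over the system.
compile(sys, passes::Vector{CompilerPass}) =
    foldl((s, p) -> apply(p, s), passes; init = sys)

sys = (eqs = [(kind = :diff, eq = "D(x) ~ x*k + y"),
              (kind = :alg,  eq = "y ~ 2x + t")],
       observed = Any[])
out = compile(sys, CompilerPass[OrderReduction(), MoveExplicitObserved()])
println(length(out.eqs), " equations, ", length(out.observed), " observed")
```

Swapping in a different tearing or index-reduction algorithm then just means constructing a different pass list, which is the extensibility point the comment above describes.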

Note that I will likely be submitting some grants from the MIT Julia Lab during the spring around work on optimal tearing algorithms, which would be an alternative tearing implementation that uses a SAT solver to directly solve the NP-complete problem of finding the minimal tearing solution. This also gives uniqueness, something that heuristic greedy methods like the one we have don't guarantee (this is where the sorting stuff comes from). However, it necessarily has issues with scaling and requires scalarization of all array equations, so it likely isn't so useful for lots of the things people are doing in Dyad, but as a symbolic-numeric push for research and a drive toward more stable methods with provable characteristics, this would be a very fun and interesting algorithm that would probably take something like 3 years of a PhD student's time to get robust. This, if funded to MIT, would be something that would go into the lib folder as an alternative tearing pass and be MIT licensed. Though as stated above, if grants in this space are at something like a 5% acceptance rate, then even writing two more applications in the spring means it's overwhelmingly likely this never gets funded (and it hasn't gotten funded the last 2 years I've tried).

The MTKBase refactor is simply a better infrastructure for continuing this research and development because it does not favor any pass, and opens up the research and development of others. MTK defaulting to the JuliaHub one makes sense because it's the one that is the fastest, most tested, and most robust, but the whole goal of this is to foster more passes to be created. Again, the whole point is to get more contributors into the ecosystem, to get more people playing with all of this, and to foster more growth in people writing such passes and add-ons.

So, no, it wouldn't go into the core of MTK because nothing should: there should be no hardcoded pass. All should be add-ons, and then MTK just chooses whatever is generally the best for most people to use. If someone spends enough time to build one that is strictly better than the one that has been developed over the last 5 years, MTK should make that the default. I just think that's pretty unlikely given the history of what it has taken to even get to this point, so what will likely happen is that there will be some partial alternatives which are more research focused and solve some interesting problems, but the core recommended ones will likely remain StateSelection.jl and ModelingToolkitTearing.jl.

ModelingToolkit.jl already had some GPL implications. However, with this change, the implications include
the symbolic transformation libraries specifically for handling acausal models and high-index DAEs.

This was a difficult decision to make, and the evolution of ModelingToolkit.jl shows many scars of previous
Member


I think we should make it clearer that this decision would be made and supported by the SciML Steering Council. That way readers know the mechanisms by which things like this are handled.

@ChrisRackauckas
Member Author

On another note, we did get MTK funding from Wellcome and CZI, though the funding ultimately primarily contributed to MethodOfLines and Catalyst. However, in both cases I believe the funders were happy to fund MTK in addition to its ecosystem (isn't MTK part of what pulled Wellcome in originally?). That said, I don't think either of these funders are continuing such programs, so it completely reinforces your comments about lack of funding mechanisms these days.

Not only was the funding primarily to MethodOfLines and Catalyst, about half of CZI was siphoned off to Tim Holy for the work that gave precompile binaries in v1.9, with the other part of the funding there being for Valentin, siphoned out of time on CLIMA grants. The MTK portion of those ended up being rather minimal, as none of it went to funding my time, Yingbo's time, Shashi's time, or Chris Elrod's time, who were doing the maintenance/growth of this area of MTK during those years. I believe it did fund Vincent for a bit? And he did make contributions to MTK (though he hadn't been a core maintainer), so it helped in some great ways around the Catalyst analysis, but afterwards he did most of his MTK contributions (such as the BVP work) as an intern at JuliaHub, and now has a position at Neuroblox where he makes his contributions.

This reinforces the main point: funding agencies don't fund compilers. All of this has relied on the principle of securing double the amount of funding needed and having everyone do double the amount of work (to support all of the extra aims from the extra grants) so that the compiler folks can exist 😅, and they themselves sometimes work double time. It's madness that this has ever worked.

@sebapersson

does not eliminate variables as observed

Pumas and SBML do rely on a very primitive version of this though, which is effectively just expressions like x ~ ... which are already written as explicit calculations. I think creating a new function that turns explicit algebraic relations into observed is completely fine. It needs a new transformation function to be written so that it doesn't use the tearing graph but I can see a reasonable purely symbolic algorithm pretty easily. You scan the left hand side: every variable is either D(x) or x, if not error. So then you keep all of the D(x) ~ expressions, and move all of the x ~ expressions to observed. Because you won't have dependency graph resolution, you might want to just check that all symbolic values on the rhs are differential variables. This version is a rather quick version that requires none of the tearing infrastructure, 50 lines or so?

I just wanted to add that having this supported in structural_simplify in MTKBase would indeed be important for SBML importers (and I suspect importers of other formats like CellML and BioNetGen), but also for parameter estimation workflows. As Chris mentioned, in SBML we frequently have models with so-called assignment rules, i.e. expressions of the form x ~ ... in addition to differential expressions of the form D(x) ~ .... Being able to send expressions like x ~ ... into observables simplifies SBML model import and also benefits parameter estimation, since these assignment rules are often used directly as observables in parameter estimation.


But we do realize this will have some implications to some users. Please feel free to reach out to me, Chris
Rackauckas, via email directly if you want to discuss this in further detail.


This is a call to action, but you don't list an email contact.

sustainable model for the project's future. I hope the complete honesty and transparency of these changes helps
everyone understand the reasoning behind the changes. And I, Chris Rackauckas, am available to talk about not
just the technical aspects but also the community and funding aspects of ModelingToolkit, SciML, and the rest of
the Julia ecosystem at any time. If you have any questions, feel free to reach out. Let's see if this makes the


Here's another call to action that needs a specific contact.

sustainable model for the project's future. I hope the complete honesty and transparency of these changes helps
everyone understand the reasoning behind the changes. And I, Chris Rackauckas, am available to talk about not
just the technical aspects but also the community and funding aspects of ModelingToolkit, SciML, and the rest of
the Julia ecosystem at any time. If you have any questions, feel free to reach out. Let's see if this makes the


Let's see if this makes the best version of the open source ecosystem.

Is this the best way to end such a blog post? It sounds a bit of a downer to my kiwi-lang. I'd either remove it and end with "reach out", or close with something more positive.

@ViralBShah
Contributor

ViralBShah commented Dec 7, 2025

I think it would be valuable to have some before/after timings for the loading times from the type stability and the refactoring work.

A thought - it would be good to have a separate licensing blog post rather than having everything in the MTK V11 release post.

@ChrisRackauckas
Member Author

I think it would be valuable to have some before/after timings for the loading times from the type stability and the refactoring work.

@AayushSabharwal you have some of those?

@AayushSabharwal
Member

Before

Script

import REPL
using ModelingToolkit
using ModelingToolkit: t_nounits as t, D_nounits as D

@variables x(t) = 1 y(t)
@parameters k f = 2.3
eqs = [D(x) ~ x * k + y, y ~ 2x + t]
dvs = Num[x, y]
ps = Num[k, f]
defs = Dict{Num, Float64}()
defs[y] = 2.3
print(Val(:z))
println("System")
@timev System(eqs, t, dvs, ps; defaults = defs, guesses = defs, name = :sys)
# print(Val(:a))
sys = System(eqs, t, dvs, ps; defaults = defs, guesses = defs, name = :sys)
print(Val(:a))
println("Complete")
@timev complete(sys)
print(Val(:b))
println("TearingState")
@timev TearingState(sys)
print(Val(:c))
println("mtkcompile")
@timev mtkcompile(sys)

On Julia 1.11.7 with [email protected], with the command

julia --check-bounds=yes -O 3 --project=@. --startup-file=no --trace-compile=bad.jl workload.jl

Timings

System construction:

  0.243758 seconds (563.80 k allocations: 30.613 MiB, 99.48% compilation time: 3% of which was recompilation)
elapsed time (ns):  2.43757958e8
gc time (ns):       0
bytes allocated:    32099616
pool allocs:        563137
non-pool GC allocs: 16
malloc() calls:     651
free() calls:       0
minor collections:  0
full collections:   0

complete:

  1.795140 seconds (9.76 M allocations: 506.143 MiB, 2.67% gc time, 99.75% compilation time: 71% of which was recompilation)
elapsed time (ns):  1.795140083e9
gc time (ns):       47998414
bytes allocated:    530729216
pool allocs:        9747214
non-pool GC allocs: 111
malloc() calls:     10566
free() calls:       8069
minor collections:  5
full collections:   1

TearingState construction:

  0.374312 seconds (527.01 k allocations: 32.318 MiB, 24.13% gc time, 99.60% compilation time: 85% of which was recompilation)
elapsed time (ns):  3.74312e8
gc time (ns):       90318708
bytes allocated:    33888248
pool allocs:        526440
non-pool GC allocs: 11
malloc() calls:     555
free() calls:       2923
minor collections:  1
full collections:   0

mtkcompile:

  1.772756 seconds (3.81 M allocations: 206.068 MiB, 0.63% gc time, 99.71% compilation time: 71% of which was recompilation)
elapsed time (ns):  1.772755875e9
gc time (ns):       11162292
bytes allocated:    216077752
pool allocs:        3808615
non-pool GC allocs: 61
malloc() calls:     4877
free() calls:       4844
minor collections:  2
full collections:   0

After

Script

import REPL
using ModelingToolkit
using ModelingToolkit: t_nounits as t, D_nounits as D
using Symbolics: SymbolicT

@variables x(t) = 1 y(t)
@parameters k f = 2.3
eqs = [D(x) ~ x * k + y, y ~ 2x + t]
dvs = Num[x, y]
ps = Num[k, f]
ics = Dict{SymbolicT, SymbolicT}()
ics[y] = 2.3

println("System")
print(Val(:z))
@timev System(eqs, t, dvs, ps; initial_conditions = ics, guesses = ics, name = :sys)
print(Val(:zz))
# print(Val(:a))
sys = System(eqs, t, dvs, ps; initial_conditions = ics, guesses = ics, name = :sys)
println("Complete")
print(Val(:a))
@timev complete(sys)
print(Val(:aa))
println("TearingState")
print(Val(:b))
@timev TearingState(sys)
print(Val(:bb))
println("mtkcompile")
print(Val(:c))
@timev mtkcompile(sys)
print(Val(:cc))

On Julia 1.11.7 with SciML/ModelingToolkit.jl#4044 with the command

julia --check-bounds=yes -O 3 --project=@. --startup-file=no --trace-compile=bad.jl workload.jl

Timings

System construction:

  0.000670 seconds (217 allocations: 10.641 KiB)
elapsed time (ns):  669875.0
gc time (ns):       0
bytes allocated:    10896
pool allocs:        217
non-pool GC allocs: 0
minor collections:  0
full collections:   0

complete:

  0.001191 seconds (1.08 k allocations: 2.554 MiB)
elapsed time (ns):  1.190625e6
gc time (ns):       0
bytes allocated:    2678088
pool allocs:        1077
non-pool GC allocs: 0
malloc() calls:     3
free() calls:       0
minor collections:  0
full collections:   0

TearingState:

  0.014592 seconds (3.96 k allocations: 4.456 MiB, 92.30% compilation time)
elapsed time (ns):  1.4592042e7
gc time (ns):       0
bytes allocated:    4672936
pool allocs:        3955
non-pool GC allocs: 0
malloc() calls:     4
free() calls:       0
minor collections:  0
full collections:   0

mtkcompile:

  0.018629 seconds (20.74 k allocations: 932.062 KiB, 89.89% compilation time)
elapsed time (ns):  1.8628542e7
gc time (ns):       0
bytes allocated:    954432
pool allocs:        20727
non-pool GC allocs: 0
malloc() calls:     13
free() calls:       0
minor collections:  0
full collections:   0

With more improvements to come.

Comment on lines +32 to +36
ModelingToolkitBase.jl is thus an MIT-licensed package with no Julia GPL dependencies and contains all
of the core functionality of ModelingToolkit.jl. This for example includes the utilities for building
`System`s, generating code for ODEs, SDEs, etc., and the main symbolic compiler pipeline.
ModelingToolkitBase.jl is thus a fully functional symbolic modeling compiler which can take symbolic
descriptions of systems and generate performant Julia code for them as-is.
Contributor


Suggested change
ModelingToolkitBase.jl is thus an MIT-licensed package with no Julia GPL dependencies and contains all
of the core functionality of ModelingToolkit.jl. This for example includes the utilities for building
`System`s, generating code for ODEs, SDEs, etc., and the main symbolic compiler pipeline.
ModelingToolkitBase.jl is thus a fully functional symbolic modeling compiler which can take symbolic
descriptions of systems and generate performant Julia code for them as-is.
A new package **ModelingToolkitBase.jl** will be split out of what is now MTK, as an MIT-licensed package with no Julia GPL dependencies and contains all
of the core functionality of ModelingToolkit.jl. This for example includes the utilities for building
`System`s, generating code for ODEs, SDEs, etc., and the main symbolic compiler pipeline.
ModelingToolkitBase.jl is thus a fully functional symbolic modeling compiler which can take symbolic
descriptions of systems and generate performant Julia code for them as-is.

Member Author


It's not really a split though, since it's basically ModelingToolkit of today. It's like 99% of what MTK is today.
