
Hello Claude, I come to you today with a bit of a brain teaser. I want to cook up a mod manager for the game RimWorld (please print no code, this is a thought experiment). Mods in RimWorld can have several types of dependencies between them (two, really), and the game loads mods sequentially. Some mods REQUIRE other mods to be loaded first, and some WOULD LIKE other mods to be loaded first.

In other words, some mods have strong dependencies and some have soft dependencies. So in my to-be mod manager, all I'd have to do is create a list of mods, then shuffle them around such that these requirements are satisfied. What is the best way of doing this? Hopefully something O(n); I can't think of a recipe that's better than O(n^2).

Secondly, and more importantly: sometimes you realize your game is fucked. Some of the mods butchered it. Loading the game with many mods enabled can take a very long time (>40 minutes!). So we have a problem: we have 500 mods, one of them is causing issues, and we need to find out which one it is before the heat death of the universe.

One somewhat decent approach is the classic git bisect: disable half and restart, then halve the remaining suspects and restart again. We find our troublemaker in O(log n) restarts. Great!

But the problem is that loading 250 mods might take 25 min, and 125 mods about 12 min. So if we did bisect, we'd be loading for 25 + 12 + 6 + 3 + some small number of minutes, a total of, say, 40 min.

But if we're clever and load only, say, 50 mods at a time, which would take 5 min per pop, we could do it in at most 10 iterations. A little bit worse in the worst case, 50 min, but on average we would find it faster.

Say we find the troublemaker in 3 iterations. That's 15 min for method B and 35 for A!!

But... how would we do that, while remembering to A) respect the interdependencies and B) track which mods we've already tried and which we have not?


Hello! These are fascinating problems related to mod management for RimWorld. Let's think through both issues carefully.

Problem 1: Sorting Mods Based on Dependencies

For sorting mods with both strong and soft dependencies, what you're describing is essentially a topological sort with preference constraints. Here's how I'd approach it:

  1. First, build a directed graph representing the strong dependencies (requires)
  2. Perform a topological sort on this graph, which gives us a valid loading order
  3. Then try to respect soft dependencies where possible without breaking the topological order

The standard topological sort using depth-first search is O(V + E) where V is the number of mods and E is the number of dependencies. This is effectively O(n) if the number of dependencies per mod is bounded.

You could implement this with:

  • A modified DFS-based topological sort
  • Kahn's algorithm (which works by repeatedly removing nodes with no incoming edges)

After getting a valid order from strong dependencies, you could make a second pass to try to satisfy soft dependencies by moving mods earlier where possible without violating the core ordering.
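As a minimal sketch of that idea (mod ids and the adjacency-map shapes are illustrative): Kahn's algorithm runs over the hard edges only, and soft dependencies act as a greedy tie-break when choosing among the mods that are ready to place.

```python
from collections import defaultdict

def sort_mods(mods, hard_deps, soft_deps):
    """Load order where every hard dependency precedes its dependent;
    soft dependencies are honored whenever they don't conflict."""
    mods = list(mods)
    mod_set = set(mods)
    # Kahn's algorithm over the hard edges: O(V + E) for the sort itself.
    waiting_on = {m: set(hard_deps.get(m, ())) & mod_set for m in mods}
    dependents = defaultdict(list)
    for m, deps in waiting_on.items():
        for d in deps:
            dependents[d].append(m)

    placed, order = set(), []
    ready = [m for m in mods if not waiting_on[m]]
    while ready:
        # Greedy tie-break: among mods whose hard deps are satisfied,
        # prefer one with the fewest soft deps still unplaced. The rescan
        # costs extra, so the whole routine is no longer strictly linear,
        # but it stays fast at realistic mod counts.
        ready.sort(key=lambda m: sum(s not in placed for s in soft_deps.get(m, ())))
        m = ready.pop(0)
        order.append(m)
        placed.add(m)
        for dep in dependents[m]:
            waiting_on[dep].discard(m)
            if not waiting_on[dep]:
                ready.append(dep)
    if len(order) != len(mods):
        raise ValueError("cycle among hard dependencies")
    return order
```

For example, `sort_mods(["A", "B", "C"], {"C": {"A"}}, {"C": {"B"}})` yields `["A", "B", "C"]`: A must precede C, and B gets nudged ahead of C because C would like it loaded first.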

Problem 2: Finding the Problematic Mod

This is where it gets interesting! You've identified the trade-off between binary search (which is O(log n) in iterations but has high per-iteration cost) and linear search with smaller batches.
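To put rough numbers on that trade-off (assuming load time scales linearly at about 6 seconds per mod, a single culprit placed uniformly at random, and ignoring any always-on baseline mods): pure bisect over 500 mods pays roughly 25 + 12.5 + 6 + 3 + 1.5 + ... ≈ 50 minutes of loading no matter where the culprit sits. With batches of 50, each test costs about 5 minutes, you expect to hit the bad batch after (10 + 1) / 2 = 5.5 batches (~28 minutes), and bisecting within those 50 mods adds only ~5 minutes more, for an expected total around 33 minutes. The worst case, ~55 minutes, is only slightly worse than bisect's fixed cost.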

Here's how I'd approach this:

  1. Dependency-Aware Testing Groups: Create testing batches that respect dependencies
  2. Tracking System: Keep track of which mods have been tested in working configurations vs. untested
  3. Progressive Elimination: Build a "definitely working" set and a "potentially problematic" set

The algorithm might look like this (a rough code sketch follows the list):

  1. Start with a baseline of essential/core mods that you know work
  2. Add small batches of untested mods (maybe 30-50) that form a valid dependency chain
  3. If the batch works, move all those mods to the "confirmed working" list
  4. If the batch fails, you have a small enough set to bisect quickly
  5. Continue adding new batches of untested mods to your working set
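Here is that loop as a sketch, assuming a single culprit mod (not a pairwise conflict) and with `game_loads_ok(mods)` as a placeholder for the expensive oracle that launches RimWorld with exactly that mod set:

```python
def dependency_closure(batch, hard_deps, available):
    """Expand a batch with everything it transitively requires,
    drawing only from the mods we are allowed to load."""
    out, frontier = set(batch), list(batch)
    while frontier:
        for dep in hard_deps.get(frontier.pop(), ()):
            if dep in available and dep not in out:
                out.add(dep)
                frontier.append(dep)
    return out

def find_culprit(all_mods, hard_deps, game_loads_ok, batch_size=50):
    """Grow a confirmed-working set batch by batch; bisect inside the
    first batch that breaks the game."""
    confirmed, untested = set(), list(all_mods)
    while untested:
        batch = dependency_closure(untested[:batch_size], hard_deps, set(untested))
        if game_loads_ok(confirmed | batch):
            confirmed |= batch
            untested = [m for m in untested if m not in batch]
        else:
            # Bisect inside the failing batch, always loading it on top of
            # the confirmed set and re-closing dependencies at each split.
            # (If hard deps glue the whole suspect set together, a real tool
            # would fall back to one-mod-at-a-time probing.)
            suspects = [m for m in untested if m in batch]
            while len(suspects) > 1:
                half = dependency_closure(suspects[: len(suspects) // 2],
                                          hard_deps, set(suspects))
                if game_loads_ok(confirmed | half):
                    suspects = [m for m in suspects if m not in half]
                else:
                    suspects = [m for m in suspects if m in half]
            return suspects[0]
    return None  # everything loads fine together
```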

This approach:

  • Keeps loading times reasonable (5-6 minutes per test)
  • Allows you to gradually expand your working set
  • Quickly isolates problems to manageable groups
  • Respects all dependencies

To track your progress, you would need (a minimal state container is sketched after this list):

  • A set of confirmed working mods
  • A set of untested mods
  • A set of mods currently being tested
  • A record of all dependency relationships
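Something as small as this would do for the bookkeeping (the field names are just one possible shape):

```python
from dataclasses import dataclass, field

@dataclass
class TriageState:
    confirmed: set = field(default_factory=set)    # known-good mods
    untested: set = field(default_factory=set)     # never in a passing run
    in_flight: set = field(default_factory=set)    # current test batch
    hard_deps: dict = field(default_factory=dict)  # mod -> required mods
    soft_deps: dict = field(default_factory=dict)  # mod -> preferred predecessors
```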

The final touch would be a smart selection algorithm, sketched after this list, that picks the next batch of mods to test based on:

  1. Dependency relationships (all required mods must be in the confirmed set)
  2. Maximizing coverage of popular or important mods first
  3. Potentially grouping mods by author or category to detect related issues
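That selection step might look like the following, with `priority` as a pluggable key function (popularity, size, whatever data you actually have). For brevity it pulls in only direct hard dependencies; a full version would take the transitive closure, as in `dependency_closure` above.

```python
def next_batch(untested, confirmed, hard_deps, priority, batch_size=50):
    """Greedily pick high-priority untested mods, dragging their direct
    hard dependencies into the same batch."""
    untested_set, batch = set(untested), set()
    for m in sorted(untested, key=priority, reverse=True):
        needed = {d for d in hard_deps.get(m, ()) if d not in confirmed} | {m}
        if needed <= untested_set and len(batch | needed) <= batch_size:
            batch |= needed
        if len(batch) >= batch_size:
            break
    return batch
```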

Would you like me to explore either approach in more detail?


Good idea. I could sort the mods by size; mods doing more things are more likely to be cooked.

Ideally it would be by popularity, since popular mods are unlikely to be cooked. But I don't have this data, so we cannot conjure it.

So I would take our N mods, sort them by size, load the first M along with their dependencies... then load the next M, and so on, until we find a troubled set...


That's a smart approach! Sorting by size is a good heuristic since larger mods typically make more changes to the game and thus have more opportunities to create conflicts.

Here's how this could work:

  1. Sort all mods by file size (largest to smallest)
  2. Create batches of M mods, but when adding a mod to a batch, also include all its dependencies
  3. Test these batches in order

The process might look like:

  • Start with your baseline (core game, essential mods known to work)
  • Take the first M largest mods that aren't in the baseline
  • Add all their dependencies to this test group
  • If this group works, mark them all as "confirmed working" and move to the next batch
  • If this group fails, you've found your problematic set and can bisect within it

One refinement: when you find a problematic batch, you could prioritize testing the largest mods within that batch first, since they're more likely to be the culprit.
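Wiring the size heuristic into the earlier `find_culprit` sketch is then a one-liner, since that routine consumes the untested list in order (`sizes` is assumed to map mod id to file size):

```python
def find_culprit_by_size(all_mods, sizes, hard_deps, game_loads_ok):
    # Largest-first ordering makes early batches cover the likeliest culprits.
    return find_culprit(sorted(all_mods, key=lambda m: sizes[m], reverse=True),
                        hard_deps, game_loads_ok)
```

And because `suspects` in that sketch keeps the batch's ordering, the refinement above falls out for free: the largest mods sit in the first halves the inner bisect tests.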

For tracking purposes, you'd maintain:

  • Working set (confirmed good mods)
  • Problem batch (when found)
  • Untested remainder
  • Dependency graph

This approach balances:

  • Testing efficiency (reasonable load times)
  • Problem detection (targeting likely culprits first)
  • Dependency integrity (always loading required mods)

If you find that certain types of mods tend to cause problems more often (like texture replacers, AI overhauls, etc.), you could further refine by categorizing mods and prioritizing suspicious categories.

Does this approach align with what you were thinking?