Layer A: find a new comma to make 37-limit
Layer B, v1: resolve overlapping commas between and .: by shifting their boundary
Layer C, v1: resolve overlapping commas between and .: by invalidating one or the other
Layer B (and C), v2: resolve overlapping commas in 12 different places, potentially requiring the invalidation of some symbols, or deciding that OoBSoE exceptions are not critical
- I propose we compress the Extreme Precision JI notation into the 37 limit, for consistency with other meaningful instances of the 37 limit throughout Sagittal, and also because only a single symbol beyond the 37 limit is flagrantly exceptional, skipping over 41 and 43 straight to 47. That symbol is , the 75th mina, the 47S. @Dave Keenan agrees this is worth looking into. So we begin searching for alternative commas for this symbol: a 37-limit comma inside that symbol's capture zone, 36.326¢ to 36.588¢. Let's call this Layer A.
- @herman.miller finds a comma that meets this criterion, but Dave points out that it wouldn't be common enough per the Scala usage stats. I find a comma which seems to be a good candidate: ~36.529¢ = [ 1 -4 1 1 -1 1 > = 910/891, AKA the 455/11C. However, Dave points out that this comma is a sum-of-subsets for a different symbol, the next one over, , the 75.489th mina, the 11:23S, with a capture zone from 36.588¢ to 36.888¢. He suggests this means that the boundary between these two symbols needs to be shifted. In doing so, we implicitly commit ourselves to experimentally ensuring that symbols' sums-of-subsets are inside their bounds. At this point I, at least, have not yet apprehended the problem precisely enough, and am not distinguishing between different types of SoS and different types of bounds. I agree with this boundary move, but do not immediately attack the issue. Let's call this Layer B.
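(As an aside, the candidate comma's identity and size are easy to verify mechanically. A minimal sketch in Python, converting the monzo [ 1 -4 1 1 -1 1 > to a ratio and to cents, then checking it against the 36.326¢–36.588¢ capture zone; the helper names here are my own, not anything from the Sagittal codebase:)

```python
from fractions import Fraction
from math import log2

# Primes indexed by monzo position: 2, 3, 5, 7, 11, 13, ...
PRIMES = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37]

def monzo_to_ratio(monzo):
    """Convert a monzo (vector of prime exponents) to a frequency ratio."""
    ratio = Fraction(1)
    for prime, exponent in zip(PRIMES, monzo):
        ratio *= Fraction(prime) ** exponent
    return ratio

def cents(ratio):
    """Size of a ratio in cents: 1200 * log2(ratio)."""
    return 1200 * log2(ratio.numerator / ratio.denominator)

# The candidate comma from the discussion, [ 1 -4 1 1 -1 1 >:
comma = monzo_to_ratio([1, -4, 1, 1, -1, 1])
print(comma)                     # 910/891
print(round(cents(comma), 3))    # ~36.529

# Does it fall inside the capture zone 36.326¢ to 36.588¢?
print(36.326 < cents(comma) < 36.588)  # True
```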
- Instead, Dave and I remain focused on Layer A, the problem of choosing a replacement 37-limit comma for . He asks for an exhaustive list of SoS commas for this symbol, including variations using the secondary commas of its elements. Instead of doing the work to fully understand and follow the approach he's suggested, I keep using my own script. Still, no one has gotten this list to Dave. Dave is nice anyway and calls my list "Cool list." The actual choosing of the comma gets paused, though, because...
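(The "variations using secondary commas" idea can be sketched mechanically: if each element of a symbol has a primary comma and possibly one or more secondary commas, then every way of picking one comma per element yields a different total. A toy sketch, with invented element names and cent values; the real values live in the Sagittal comma tables:)

```python
from itertools import product

# Hypothetical elements of a symbol, each with candidate comma sizes in cents
# (primary first, then any secondaries). These numbers are invented purely
# for illustration.
elements = {
    "left_flag": [27.264, 26.841],
    "right_flag": [5.758],
    "accent": [3.378, 3.443],
}

# Every choice of one comma per element gives one sum-of-elements variant.
choices = list(product(*elements.values()))
variants = [sum(choice) for choice in choices]
for choice, total in zip(choices, variants):
    print(choice, round(total, 3))
```

With two options for two of the three elements, this toy symbol already has four variants, which hints at why the exhaustive list Dave asked for is real work.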
- On Layer B, I discover a second boundary that would need shifting (I won't reproduce it here, since it was later proven wrong and is tangential anyway), as well as an issue with our ability to move the boundary between and .: : there is no good place to move it, because there is a comma which is a SoS for both of and .: . I propose this means that one or the other of the symbols should be invalidated, and suggest replacing with . Let's call this Layer C.
- Dave drops bombs of huge emails and spreadsheets. I by no means comprehensively ingest them, but focusing on the most obviously relevant bits, I retract my proposal to replace with , since I've just learned it would constitute a DAFLL exception. I exchange it for a proposal to eliminate the splitting of the 75th mina by shifting over the boundary at the Very High precision level. But I recognize that allowing the higher-complexity stuff to impact the simpler stuff is generally a no-no, so I'm not super stoked about this proposal either.
- Dave and I get into a bit of cross-talk: he's still apparently working on Layer A, since no one has assisted him with it yet, and is still asking about the secondary commas for the mina accents. I'm thinking he's on Layer C with me, dealing with the issue of the mutually invalid symbols, so my response to him is probably a bit confusing and off. Nonetheless, something Dave says helps me realize that not only do we have sums-of-elements, but that each symbol can have many different sums-of-subsets depending on how you slice and dice it. I also realize that I've been using the wrong slicing-and-dicing. So I redo my work. That involves striking out the original Layer B and Layer C, since they were based on incorrect work. I instead discover that we've actually got a much worse problem, if we indeed want to respect these OoBSoE errors: there are 12 exceptions. Let's call this Layer B (and C), v2, since we're not sure whether any of these 12 will lead to an equivalent of Layer C (an SoE being the same for multiple symbols) until more analysis is done.
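(The realization that a symbol has many sums-of-subsets, not just one sum-of-elements, amounts to a powerset enumeration: every non-empty subset of a symbol's elements has its own sum, and each of those sums can be checked against the relevant bounds. A toy sketch, with invented element sizes and an invented capture zone; a three-element symbol already yields seven sums to check:)

```python
from itertools import combinations

def subset_sums(element_cents):
    """Yield (subset, total) for every non-empty subset of a symbol's elements."""
    for size in range(1, len(element_cents) + 1):
        for subset in combinations(element_cents, size):
            yield subset, sum(subset)

# Invented element sizes in cents and an invented capture zone, for illustration.
elements = [27.264, 5.758, 3.378]
lower, upper = 36.326, 36.588

for subset, total in subset_sums(elements):
    status = "in bounds" if lower <= total <= upper else "OUT OF BOUNDS"
    print(subset, round(total, 3), status)
```

In this toy example only the full sum of elements lands inside the zone; every proper subset falls outside it, which is the kind of exception-hunting that surfaced the 12 problem spots.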