Truncated Integer-Limit Triangle (TILT)


Post by Dave Keenan »

[This is a draft of a section from a textbook chapter that Douglas Blumeyer and I are working on. I have placed it here temporarily because I want to refer to it from this feudal primes post.]

DRAFT

Truncated integer limit triangle (TILT)

As we (Dave and Douglas) developed this tuning article series, we found the existing theory around targeted interval sets somewhat lacking, and ended up conducting some original research ourselves. Enter the truncated integer limit triangle, or TILT for short. It took us months to come up with a target set scheme that we were both satisfied with. Perhaps the old OLDs (odd limit diamonds) are so entrenched that we're tilting at windmills with our TILTs, but we hope you will find our result appealing. In any case, you are very free to steal any or all of our ideas, and adapt them to whatever scheme you'd prefer.

First of all, our triangle gives no special treatment to prime 2, so it includes even numbers as well as odd, or in other words, integers, and accordingly is specified by an integer limit rather than an odd limit (as per the name). Also, we truncate our integer limit triangle, by which we mean that we eliminate intervals that are too large, too small, or too complex to worry about tuning; so it is truncated both in the geometric sense as it appears on the triangle as well as in the sense of interval size.

We predict that the integer limit part of our scheme will strike most readers as a perfectly logical solution to a cut-and-dried problem. Truncation, on the other hand, is a messier problem, so readers are more likely to take issue with our final choices there, which were: size between \(\frac{15}{13}\) and \(\frac{13}{4}\) (inclusive on both ends), and product complexity less than or equal to 13 times the integer limit. So to be clear, we recognize that these are somewhat arbitrary rules, and won't spend too much time justifying them; again, please define your own exact bounds if you find ours don't fit your needs. In any case, we'll look at all of our choices in more detail below.

Integer limit

Suppose we're using a 7-(prime-)limit temperament. The next prime after 7 is 11. With the odd limit diamond our default choice of odd limit was the odd just before the next prime, which in this case would be 9. But for the truncated integer limit triangle, we seek the integer just before this next prime, which in this case would be 10.[1] In other words, we're looking for the largest integer limit within the given prime limit, so for prime limit 7, that's integer limit 10.
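This default-integer-limit rule is easy to automate. Here's a minimal sketch in Python (rather than the Wolfram Language of our library; the helper names are just ours for illustration):

```python
def is_prime(n):
    """Trial-division primality test; fine for the small numbers used here."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def default_integer_limit(prime_limit):
    """The integer just before the next prime after the given prime limit."""
    n = prime_limit + 1
    while not is_prime(n):
        n += 1
    return n - 1

# e.g. default_integer_limit(7) gives 10, since the next prime after 7 is 11
```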

Here's an integer triangle for integer limit 10, and we've seen no particular need to rotate the thing 45° as is done with traditional tonality diamonds. Also, we haven't even bothered to fill in anything besides superunison ratios; this is what makes it a triangle rather than a diamond:

TODO: diagram, points to top-right with faint gridded lines, but rather than filled-in color regions just colored lines and show the diff between inside and outside of the triangle.

  1    2    3    4    5    6    7    8    9    10
1     2/1  3/1  4/1  5/1  6/1  7/1  8/1  9/1  10/1
2          3/2  4/2  5/2  6/2  7/2  8/2  9/2  10/2
3               4/3  5/3  6/3  7/3  8/3  9/3  10/3
4                    5/4  6/4  7/4  8/4  9/4  10/4
5                         6/5  7/5  8/5  9/5  10/5
6                              7/6  8/6  9/6  10/6   
7                                   8/7  9/7  10/7
8                                        9/8  10/8
9                                             10/9
10

Min size truncation

The first part of our truncation process we'll apply is the minimum size truncation.

When constructing a truncated integer limit triangle, not only do we exclude all non-superunison intervals — i.e. anything less than or equal to unison — we further exclude many of the smaller superunison intervals as well. Specifically, we exclude all intervals that are smaller than \(\frac{15}{13}\), which is approximately 247.741 ¢.[2] This cutoff is designed to fall at right about the boundary between two conventional diatonic general interval size categories: seconds and thirds. The effect is that thirds are allowed into the targeted interval set, while seconds are excluded.

Regarding the choice of excluding seconds and smaller:
• While seconds are very important in melody, they are not as important in harmony, meaning that they don't often appear in chords; a glance at Wikipedia's "list of chords" supports this.
• And you have to ask, when they do appear, are they functioning as consonances, i.e. approximations of simple ratios, or as dissonances that need to be resolved? Typically it is the latter.
• Furthermore, a melodic JND is far larger than a harmonic JND: a melodic JND is about 10 ¢, while a harmonic JND is only about 3 ¢. Of course these numbers depend on context and who you ask, but most will agree melody is far more tolerant of errors.[3]
• And even when people have strong feelings about the ideal tuning of melodic intervals, it tends to have little to do with ratios, simple or otherwise.

We can also point to some psychoacoustic theory on why (simultaneous) seconds are relatively dissonant no matter how you tune them. The magic words are "critical band" roughness. Put very crudely: Our ears have a series of overlapping "frequency buckets" and if two tones fall in the same frequency bucket they are dissonant (because we can't quite distinguish them), unless they are so close together in frequency that they are perceived as a single tone varying periodically in loudness. If they fall in different buckets they are not dissonant, except they are partially dissonant when some of their harmonics fall in the same bucket, unless those harmonics are so close together in frequency that they are perceived as a single harmonic varying periodically in loudness. Of course there are not really such sharp cutoffs. It's all very fuzzy.

The approximate spacing of the buckets in Hz, in the vicinity of a frequency \(f\) (in Hz) is given by

\(\text{ERB}(f) = 24.7 × (4.37 f / 1000 + 1)\)

where ERB stands for Equivalent Rectangular Bandwidth.

If we put in 262 Hz (middle C) for \(f\), we get 53 Hz as the width of the bucket centered on middle C. So the upper edge of that bucket is at 262 + 53/2 = 288.5 Hz and the lower edge is at 262 - 53/2 = 235.5 Hz. If we look at the ratio between those, we get 288.5/235.5 = 1.225. That's a 351 ¢ neutral third. If we do the same an octave above middle-C we get 269 ¢. At two octaves up we get 228 ¢. These are fuzzy cutoffs, but it's nice that the numbers agree so well with common practice and experience.
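For anyone who'd like to check these figures themselves, here's a minimal Python sketch of the ERB formula and the bucket-edge ratio in cents (the function names are just ours for illustration):

```python
import math

def erb(f):
    """Equivalent Rectangular Bandwidth (Hz) in the vicinity of frequency f (Hz)."""
    return 24.7 * (4.37 * f / 1000 + 1)

def bucket_width_cents(f):
    """Ratio between the upper and lower edges of the bucket centered on f, in cents."""
    half = erb(f) / 2
    return 1200 * math.log2((f + half) / (f - half))

# bucket_width_cents(262) is about 351 ¢ (middle C),
# bucket_width_cents(524) about 269 ¢, and bucket_width_cents(1048) about 228 ¢
```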

And now, regarding the exact choice of \(\frac{15}{13}\) as the cutoff interval:

If we have a 4:13:15 chord rooted on middle C, the 13:15 will be centered around 14/4 × 262 Hz = 917 Hz. If we plug that into the ERB formula and convert the ratio of the bucket edges to cents, we get 234 ¢. And 15/13 is about 248 ¢, so that lets us have the chord rooted potentially even a bit lower than middle C and still not be too dissonant.

If we make a 4:15:17 chord, however, the 15:17 will be centered around 16/4 × 262 Hz. We've already noted that this gives a bucket edge ratio equivalent to 228 ¢. And 17/15 is 217 ¢. So there's not much point worrying about getting their 17th and 15th harmonics coinciding, when their fundamentals are already creating dissonance.
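The two chord checks above can be sketched in Python as follows (reusing the ERB formula quoted earlier; the names are again merely illustrative):

```python
import math

def erb(f):
    """Equivalent Rectangular Bandwidth (Hz) near frequency f (Hz)."""
    return 24.7 * (4.37 * f / 1000 + 1)

def bucket_edge_cents(f):
    """Ratio between the edges of the bucket centered on f, expressed in cents."""
    half = erb(f) / 2
    return 1200 * math.log2((f + half) / (f - half))

def cents(ratio):
    """Convert a frequency ratio to cents."""
    return 1200 * math.log2(ratio)

middle_c = 262  # Hz

# 4:13:15 rooted on middle C: the 13:15 dyad is centered near 14/4 of the root.
# Its ~248 ¢ span exceeds the ~234 ¢ bucket width there, so it escapes roughness.
assert cents(15 / 13) > bucket_edge_cents(14 / 4 * middle_c)

# 4:15:17 rooted on middle C: the 15:17 dyad is centered near 16/4 of the root.
# Its ~217 ¢ span falls inside the ~228 ¢ bucket width, so it is already rough.
assert cents(17 / 15) < bucket_edge_cents(16 / 4 * middle_c)
```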

So here, then, is what our integer triangle looks like, now with the min size truncation applied (and the axes hidden moving forward):

2/1  3/1  4/1  5/1  6/1  7/1  8/1  9/1  10/1
     3/2  4/2  5/2  6/2  7/2  8/2  9/2  10/2 
          4/3  5/3  6/3  7/3  8/3  9/3  10/3
               5/4  6/4  7/4  8/4  9/4  10/4
                    6/5  7/5  8/5  9/5  10/5
                         7/6  8/6  9/6  10/6      
                                   9/7  10/7
                                        10/8

If we had a much larger truncated integer limit triangle example, we could observe that the min size truncation's boundary line cuts across the diagram as a perfectly straight diagonal out from the origin at the top left.

So now we've lost \(\frac87\), \(\frac98\), and \(\frac{10}{9}\), all three of which are smaller than \(\frac{15}{13}\).

Max size truncation

In the absence of octave reduction, the intervals in our target set can get quite large. Just take a gander across our top row, where everything is "over 1"; at the rightmost extreme we have 10/1, which is over three octaves above the root! The odd limit diamond had solved this problem with octave-reduction. But the truncated integer limit triangle gives no special treatment to the octave in this way, so we need to take special care to eliminate intervals from our integer triangle which are too large to have enough harmonic importance to be worth using up our precious limited damage-reduction resources on, as discussed in the earlier "importance of exclusivity" section.

In setting a maximum interval size, we are essentially answering the question, "How large does an interval have to get before people stop caring about errors in it?" According to Wikipedia's list of chords page (linked earlier), the widest commonly used chords are thirteenth chords. In this case, "thirteenth" is referring to a diatonic general interval size category, but it happens to be the case that the harmonic series representative for this size category is the 13th harmonic.[4] So we'd like to include the widest interval in that chord, namely \(\frac{13}{4}\), and exclude anything larger than it.

And that's about the extent of our reasoning here, aside from the consideration that 13's presence in both the min and max size bounds, \(\frac{15}{13}\) and \(\frac{13}{4}\), helpfully reduces the memory load.[5]

So here, then, is what our integer triangle looks like, now with the max size truncation applied:

2/1  3/1  
     3/2  4/2  5/2  6/2  
          4/3  5/3  6/3  7/3  8/3  9/3  
               5/4  6/4  7/4  8/4  9/4  10/4
                    6/5  7/5  8/5  9/5  10/5
                         7/6  8/6  9/6  10/6      
                                   9/7  10/7
                                        10/8

Again, if we had a much larger truncated integer limit triangle example, we could observe that the max size truncation's boundary line cuts across the diagram as a perfectly straight diagonal out from the origin at the top left, just the same as the min size truncation's boundary line. It's just that here, while the min's line cuts across the bottom, the max's line cuts across the top.

We've now dropped every "over one" interval past \(\frac31\), every "over two" interval past \(\frac62\), and every "over three" interval past \(\frac93\); these are all greater than \(\frac{13}{4}\).

Max complexity truncation

The final piece of our truncation process to address is the max complexity truncation. Again, to reduce the cognitive load, we feature the number 13 in this step: simply take the product complexity (that's the numerator times the denominator, e.g. complexity(\(\frac54\)) = 20), and if that's greater than 13 times the integer limit[6], then exclude the interval.

Note that we use plain old product complexity, not the log product complexity that we use by default for damage weighting. There's no reason to complicate complexity with a logarithm here (the reason we do use one for damage weighting is still deferred to a later article), and so this leaves us with the convenience of computing these complexities with mental math, if need be.

For our 10-TILT example, where 10 is the integer limit, that's a max (product) complexity of 130. The interval remaining in our set with the largest product complexity, however, is \(\frac{10}{7}\), which has a complexity of 70, only about halfway to our limit. So in this case, the max complexity truncation has no effect.

Eliminate redundancies

For convenience, let's reproduce the current state of our truncated integer triangle:

2/1  3/1  
     3/2  4/2  5/2  6/2  
          4/3  5/3  6/3  7/3  8/3  9/3  
               5/4  6/4  7/4  8/4  9/4  10/4
                    6/5  7/5  8/5  9/5  10/5
                         7/6  8/6  9/6  10/6      
                                   9/7  10/7
                                        10/8

Our last step is to reduce intervals to lowest terms and de-dupe.

2/1  3/1  
     3/2       5/2        
          4/3  5/3       7/3  8/3       
               5/4       7/4       9/4   
                    6/5  7/5  8/5  9/5   
                         7/6                        
                                   9/7  10/7

So this is our final 10-TILT: {\(\frac21, \frac31, \frac32, \frac43, \frac52, \frac53, \frac54, \frac65, \frac73, \frac74, \frac75, \frac76, \frac83, \frac85, \frac94, \frac95, \frac97, \frac{10}{7}\)}. That's a total of 18 targeted intervals. So we've gone back up to almost the same count of intervals as we had with our odd limit diamond scheme (as it would be used with a 7-(prime-)limit temperament, i.e. we're comparing a 9-OLD to a 10-TILT), but this truncated integer limit triangle scheme is designed to cover a much wider range of tunings so it can be understood to have compacted more power into the same count of intervals.[7]
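Putting all the rules together, here is a sketch in Python (rather than the Wolfram Language of our RTT library) of how one might generate an n-TILT; the function name and parameter defaults are just our illustration of the scheme described above:

```python
from fractions import Fraction

def tilt(n, min_size=Fraction(15, 13), max_size=Fraction(13, 4), complexity_factor=13):
    """Truncated integer limit triangle for integer limit n.

    Enumerates all superunison ratios up to the integer limit, reduces them to
    lowest terms (deduplicating via a set), and applies the min size, max size,
    and max complexity truncations.
    """
    result = set()
    for denom in range(1, n):
        for numer in range(denom + 1, n + 1):
            r = Fraction(numer, denom)  # reduces to lowest terms automatically
            if not (min_size <= r <= max_size):
                continue  # min/max size truncation
            if r.numerator * r.denominator > complexity_factor * n:
                continue  # max complexity truncation
            result.add(r)
    return result
```

Calling `tilt(10)` reproduces the 18-interval set above; as noted, the max complexity filter only begins to bite at larger integer limits.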

When choosing our max complexity, we used the inclusion of \(\frac65\) at the 5-limit as a reality check. Nor did we think our scheme should exclude \(\frac97\) from the 7-prime-limit or \(\frac{15}{13}\) from the 13-limit, although that one was borderline. New kinds of consonant thirds (and sometimes fourths) are what most musicians are looking for when they go to higher prime limits, because the most consonant chords are made from stacks of thirds (and sometimes fourths). It's debatable whether \(\frac{15}{13}\) counts as a consonant third, because of both its small size and its complexity, but given the right timbre and the right other notes in the chord, it may be relatively consonant, and therefore its precise tuning may matter. We think that George Secor cared about the error in \(\frac{15}{13}\) in designing his 29-HTT, but not anything more complex than that. So that is why this interval serves as both our minimum interval size as well as a final guidepost for the definition of our max complexity.[8]

Our scheme's max complexity truncation actually has no effect, then, until past the 13-prime-limit, at the 17-TILT. This is the first TILT whose integer limit permits an interval above the minimum interval size with a denominator greater than 13 and a numerator at the integer limit — namely \(\frac{17}{14}\) — and thus the first to contain a candidate interval whose complexity is greater than the integer limit times 13.
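To make this concrete, here are a couple of quick checks of the \(\frac{17}{14}\) claim in Python, using the bounds defined above:

```python
from fractions import Fraction

r = Fraction(17, 14)

# 17/14 passes both size bounds of the 17-TILT...
assert Fraction(15, 13) <= r <= Fraction(13, 4)

# ...but its product complexity, 17 × 14 = 238, exceeds 13 × 17 = 221,
# so the max complexity truncation excludes it.
assert 17 * 14 > 13 * 17
```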

If we had used a much larger truncated integer limit triangle where the max complexity truncation had an effect, we would find that the shape its boundary makes is not a straight line like the integer limit's boundary or the size truncation's boundaries, but actually a smooth curve — specifically a hyperbola — that comes from the extreme top right of the diagram, shooting for the origin but instead of going straight into it, curving away from it and instead shooting toward the bottom left.

Eventually we hit a point, which is the 42-TILT — the default TILT for the 41-prime limit — beyond which the complexity limit of \(13n\) (in combination with the size limits) overpowers the integer limit \(n\), in the sense that we don't get 'any' ratio of \(43\) in the 46-TILT (the default for the 43-prime-limit). We have to go to the 47-TILT before we get \(\frac{43}{14}\). And the only ratio of \(41\) we get in the 42-TILT is \(\frac{41}{13}\). We don't think this is unreasonable. George Secor agreed with Erv Wilson that the 41-prime-limit adds nothing new, because ratios of 41 sound too much like ratios of 5. This can be explained by the existence of the 41-limit schismina, \(\frac{6561}{6560}\), 0.26 ¢, [-5 8 -1 0 0 0 0 0 0 0 0 0 -1⟩. (In Sagittal notation's systematic comma naming scheme, its name would be the "5.41-schismina".) But these are only our default targeted interval sets. You are free to specify a larger-numbered TILT or design your own set.

This all said, between the max and min size truncations and the max complexity truncation, the size truncations are much more important. The complexity truncation plays a role much more similar to the integer limit. When you have max and min size truncations, you might only need either an integer limit or a max product complexity; you might not need both. Because in combination with a min size truncation, an integer limit implies a max product complexity. And in combination with a max size truncation, a max product complexity implies an integer limit. Even when the min size is zero cents, an integer limit implies a max product complexity; no interval more complex than \(\frac{n}{n-1}\) is permitted. So you may well decide you prefer a targeted interval set scheme which only uses one or the other of a max complexity or an integer limit.

Comparing the odd limit diamond and truncated integer limit triangle

We (Dave and Douglas) began our investigations which led to the development of the truncated integer limit triangle upon noticing shortcomings in the way the odd limit diamond represented harmonic reality.

These shortcomings weren't apparent to us at first, because they only began to affect tuning when we attempted to use the odd limit diamond outside of the context that the Target tunings page of the wiki presents it in. To be specific, on that page, the odd limit diamond is only used with pure-octave tunings that do not weight damage. This pure-octave, unweighted context renders any factors of 2 in the targeted interval set irrelevant to damage:
• With pure octaves, none of these factors of 2 contribute any error.
• Without any damage weighting, the difference these factors of 2 would make in complexity functions is also moot.

So what is the consequence? Well, for example, this means that by including \(\frac32\) in our target set, we are also in a sense targeting the damage to all other members of the same octave-equivalent interval class, which includes \(\frac31\), \(\frac43\), \(\frac83\), etc.

So we can see, for starters, that in this pure-octave, unweighted case, it is redundant for the odd limit diamond to include both \(\frac32\) and \(\frac43\). But this isn't actually that big of a deal. It's not the real problem we're concerned about. In fact, the truncated integer limit triangle also has this (relatively trivial) problem.[9]

The real problem here arises when the odd limit diamond is used outside of its original pure-octave, unweighted context. To be clear, whenever either of these tuning traits is not true, the factors of 2 in targeted intervals affect their damage values; it is only when both of these traits are true that factors of 2 become irrelevant. And so outside that context — whether it's by complexity- or simplicity- weighting our damage, or by tempering our octave — minimizing damage to \(\frac32\) and minimizing the damage to \(\frac43\) diverge into separate issues (though still interconnected issues, to be sure).

So now it's actually a good thing — outside the pure-octave unweighted-damage context, that is — that the odd limit diamond already at least contains two key intervals from this octave-reduced interval class (as it does for every such interval class that it had representation for).[10] But what about the interval \(\frac31\)? This interval is not included in the odd limit diamond, but it will have different damage yet from either of those two, and it's also a reasonably sized and low-complexity interval that we should care about damage to, since it is likely to come up in our music. And by similar logic we may also want to target \(\frac83\), and that interval will have different damage still from the other three intervals. And so on. Our definition of damage is capturing how the differing counts of 2's in these intervals are meaningful to us.

The point here is that the odd limit diamond seems to have been designed not so much as a targeted interval set scheme as a targeted octave-reduced interval class set scheme. But this is no good in general, because the fundamental mechanics of tuning optimization — as we've been learning them here — cannot understand interval classes; they treat the members of targeted interval sets as just that: intervals. And so as soon as we depart from the context of pure octaves and unweighted damage, conflating these two data types (as the odd limit diamond does) starts to be harmful. A targeted interval set scheme which accurately represents harmonic reality in general — i.e. including tempered-octave tunings and/or weighted-damage tunings — should ensure that it directly includes any and all of the intervals that are most likely to occur in the harmonies of the music being tuned for. We (Dave and Douglas) designed our truncated integer limit triangle to fulfill this need.

As a particularly illuminating example, consider how the 9-odd limit diamond includes \(\frac98\) and \(\frac{16}{9}\), while the 10-truncated integer limit triangle includes neither of these, instead including \(\frac94\). When octaves are tempered, or we're using complexity functions to help us determine how best to fairly distribute our limited optimization resources among our targets, it's wiser to care about \(\frac94\), which is more likely to come up in chords than either \(\frac98\) or \(\frac{16}{9}\); the former is too small of an interval (the truncated integer limit triangle excludes any interval smaller than diatonic thirds), and the latter is too complex. So while the intervals \(\frac98\) and \(\frac{16}{9}\) may both be results of octave-reduction, it's good to remember that many important intervals are larger than an octave. The odd limit diamond fails to realize this, as well as that many intervals smaller than an octave are not as important to tune; its octave-reduction approach is good at finding reasonably-sized intervals given how simple it is, but the truncated integer limit triangle's more direct approach does it better.

Concluding notes

We'd like to say again in conclusion that our truncated integer triangle is not a thing we (Dave and Douglas) will advocate for communal consistency over. Our main motivation was to find a reasonable targeted interval set scheme to implement in our RTT library in Wolfram Language. If we learned anything while settling on it, it's that there are many more questions remaining to be answered about this topic, and many more concerns we can imagine people having that we don't satisfy. Such as: what if various subsets of the limits were unioned rather than intersected? Or what if the default max integer was the next power of 2, or perhaps the greater of the two between that and the integer just before the next prime (since otherwise that leads to 7-limit temperaments' truncated integer limit triangles cutting off at 8, even lower than 10)? Or what if the max complexity was simply fixed at some number, rather than linear[11], or if it was quadratic[12], or if its boundary on the integer grid bulged outward from the origin rather than inward[13]? It never ends. So have fun exploring if you like, or just go with our recommendation.

Footnotes

    1. ^ It may be worth noting that while the default odd limit diamond for the 5-limit, the 5-OLD, includes the interval \(\frac85\), the default truncated integer limit triangle for the 5-limit does not, because this is the 6-TILT which excludes ratios with integers greater than 6, so the 8 in \(\frac85\) rules it out. At first we thought this might be a bad thing. But several sources we found cast doubt on whether the minor sixth is a diatonic consonance. And we wonder if we and possibly others may only be used to getting \(\frac85\) for free along with \(\frac54\) when we assume pure octave tuning.

    2. ^ Remember: these cutoffs apply to untempered intervals. It is irrelevant whether an interval, once tempered, might fall within or without this range.

    3. ^ https://en.wikipedia.org/wiki/Just-noticeable_difference#Music_production_applications

    4. ^ It's a fascinating quirk that from "seventh" to "fourteenth", the corresponding harmonic falls into the diatonic size category (assuming the 4th harmonic is the root), with the 7th harmonic being 969 ¢, on the smaller side of a diatonic seventh, the 8th harmonic being 1200 ¢, of course a diatonic eighth (compound unison), the 9th harmonic being 1404 ¢ which is a diatonic ninth (compound second), etc. all the way up to the 14th harmonic being 2169 ¢, on the smaller side of a diatonic fourteenth (compound seventh). So this is all the compound diatonic steps, plus the seventh just before them.

    5. ^ We considered using noble numbers as boundaries, as is done for some size class boundaries in Sagittal notation, because noble numbers by definition will never land directly on a justly intonated ratio someone might care about. That definitely makes sense for Sagittal notation, but we decided these target sets don't require the exclusion of just ratios, don't require this much precision, and that memorability of the ratios was more important anyway.

    6. ^ Our maximum complexity is a function of the integer limit, not the prime limit. This is because prime limit is an implementation detail of RTT — in particular how it manifests the fundamental theorem of arithmetic through linear algebra, expressing prime factorizations as vectors. Whether harmonic complexity increases even more with the addition of a prime than it increases with the addition of a composite odd or even integer is not important in this context; all that is important is that harmonic complexity increases at all with the addition of each higher integer, and so this is why we base our max complexity on the integer limit. We note, however, that the max complexity will be indirectly a function of the prime limit via the integer limit in the case that one relies upon the default integer limit for their truncated integer limit triangle, which as discussed earlier, is a function of the temperament's prime limit.

    7. ^ The intersection with the temperament's interval subspace (usually a prime limit) must also be taken, which should almost go without saying, since any other intervals are outside the temperament. This should be true of any interval set scheme — TILT, OLD, or otherwise.

    This issue will never matter if the default max integer is chosen, i.e. the integer one less than the next prime limit, which prevents any integers with primes beyond the temperament's prime limit appearing in the targeted interval set. However, it is not unreasonable at the 5-prime-limit to wish to tune with respect to intervals with, say, 9's in them, or perhaps even 15's. So some people may like to use a 10-, 12-, or 16- TILT to tune a 5-limit temperament with, and in this case, they will need to cut out intervals with 7's, 11's, or 13's in their factorizations. This is done simply by enforcing the prime limit of the temperament as part of the target set scheme.

    Unlike the other limits we've looked at — integer, size, and complexity — the prime limit does not make a neat line through the array of ratios — straight, curved, or otherwise. Instead, it chaotically slices rows and columns out of the results. Were we to use our 10-TILT at the 5-limit, we'd end up with:

    2/1  3/1  
         3/2       5/2           
              4/3  5/3            8/3         
                   5/4                 9/4
                        6/5       8/5  9/5
    

    The entire row for 7 and column for 7 have been deleted.
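Enforcing the prime limit can be sketched like so (Python, with trial-division factorization; the helper names are ours, not part of any library):

```python
from fractions import Fraction

def prime_factors(n):
    """Set of distinct prime factors of a positive integer, by trial division."""
    factors, d = set(), 2
    while d * d <= n:
        while n % d == 0:
            factors.add(d)
            n //= d
        d += 1
    if n > 1:
        factors.add(n)
    return factors

def within_prime_limit(r, p):
    """True if every prime factor of the ratio's numerator and denominator is <= p."""
    return max(prime_factors(r.numerator * r.denominator), default=1) <= p
```

Applying this with p = 5 to the 10-TILT removes the six ratios of 7, leaving the 12 intervals shown above.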

    8. ^ Dave did have a moment of doubt about the factor of 13 in our max complexity, because some people consider the true just minor triad to be 16:19:24 rather than the inverse of the major triad 4:5:6, or in other words 1/(4:5:6) = 10:12:15. The 16:19:24 chord is very close to the minor triad of 12-EDO. To include this chord within our maximum complexity would require increasing the factor of 13 to a 14, which would result in a maximum complexity of 22×14=308 at the 19-prime-limit of that chord, thereby allowing the 19/16 dyad from that chord (complexity 304) but not the 24/19 dyad (complexity 456) — though that one exceeds the 22-integer-limit anyway, so no matter. We decided that the memorability of all the 13's in our various bounds was worth preserving, though, and wondered whether the one time Dave thought he heard specialness in 16:19:24 while using a sawtooth wave timbre may only have been the result of a combination-tone effect, or, if it was actually coinciding harmonics, what timbre other than an artificial one like a sawtooth would have sufficient 19th harmonic to make it audible.

    9. ^ Including both of these members of the same octave-reduced interval class — \(\frac32\) and \(\frac43\) — in the same targeted interval set for a pure-octave, unweighted damage tuning scheme means that it will take us longer to do the calculations for that set, but these calculations will ultimately lead to the same tunings.

    For small interval sets, though — like we have at the 5-prime-limit — this isn't a big deal to a computer doing the job; the difference between three targets and six is a matter of milliseconds worth of compute time.

    So rather than bother complicating the situation by describing a different targeted interval set scheme for the special case of pure-octave + unweighted damage tuning schemes, we suggest that we all just eat the (very low) costs of these extra calculations.

    10. ^ This may not always be a good thing, though. Consider the case of \(\frac54\) and \(\frac85\). The default truncated integer limit triangle for the 5-prime-limit actually excludes \(\frac85\) per its integer limit of 6, which is its way of deeming that interval insufficiently worthy of targeting; better, that is, to allocate precious resources to optimizing the tuning of other intervals with lower integers.

    11. ^ We decided against this because people have different intuitions about harmony: someone who is considering a really high prime limit temperament probably has a more complex sense of consonance, so their max complexity should scale up accordingly.

    12. ^ Which would allow the max complexity to rise proportionally with the integer limit, but since the correct max complexity for prime limits below 13 renders it redundant to the integer limit, this would preserve its redundancy up through any prime limit.

    13. ^ We explored this too, but, you know what… we'll spare you the details on that one.
Last edited by cmloegcmluin on Sun Sep 18, 2022 3:24 am, edited 2 times in total.
