developing a notational comma popularity metric
 Dave Keenan
 Site Admin
 Posts: 1599
 Joined: Tue Sep 01, 2015 2:59 pm
 Location: Brisbane, Queensland, Australia
 Contact:
 cmloegcmluin
 Site Admin
 Posts: 1274
 Joined: Tue Feb 11, 2020 3:10 pm
 Location: San Francisco, California, USA
 Real Name: Douglas Blumeyer
 Contact:
Re: developing a notational comma popularity metric
I was looking over this thread https://yahootuninggroupsultimatebackup ... 55471.html recently for some reason and I noticed this bit:

Technically, you're right! Yikes! But it was certainly my intention to so restrict it.
Monz, if you could change those two occurrences of "pitch ratios" to "rational pitches" that should solve the problem.

I think we technically have the same problem with our N2D3P9 page; nor is any restriction to rational values mentioned. How do you feel about replacing every instance of "pitch ratio" with "rational pitch", to be absolutely unambiguous that said ratios are between whole numbers?
I assume that N2D3P9 is only defined for rational numbers, i.e. that things like N2D3P9(`\frac{\pi}{2}`) would be undefined.
 Dave Keenan
Re: developing a notational comma popularity metric
Good point. It's an annoying aspect of the language, that not all ratios are rational. But I don't like your proposed solution even though it echoes one I made in a similar situation many years ago.
It's totally clear in our formula for N2D3P9 that it only applies to rational numbers, since we say that n and d are integers. Of course N2D3P9(π/2) is undefined, since neither copfr(π) nor primelimit(2π) is defined.
And in the intro paragraph we already had:
"Given a pitch ratio n/d, N2D3P9 estimates its rank in popularity among all rational pitches in musical use."
I've now changed that to:
"Given a rational number n/d representing a pitch (relative to some tonic note), N2D3P9 estimates its rank in popularity among all rational pitches in musical use."
I think that change, along with the formula, is sufficient so that it will be clear that the "pitch ratios" we refer to in the rest of the article are rational.
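For concreteness, the restriction to rational pitches can be made explicit in code. This is a sketch of N2D3P9 as I read the page discussed above: 2,3-reduce the ratio, put the larger side on top, then take each numerator prime over 2, each denominator prime over 3, and the greatest prime over 9. The function and helper names are mine.

```python
def prime_factors(n):
    """Prime factors of n, with multiplicity (empty list for n = 1)."""
    factors = []
    p = 2
    while p * p <= n:
        while n % p == 0:
            factors.append(p)
            n //= p
        p += 1
    if n > 1:
        factors.append(n)
    return factors

def n2d3p9(num, den):
    """Sketch of N2D3P9 (my reading of the published formula):
    2,3-reduce the ratio, put the larger side on top, then multiply
    (each numerator prime / 2), (each denominator prime / 3), and
    (the greatest prime / 9). Only defined for integer inputs, so the
    restriction to rational pitches is built in."""
    def strip_2s_and_3s(n):
        for p in (2, 3):
            while n % p == 0:
                n //= p
        return n
    num, den = strip_2s_and_3s(num), strip_2s_and_3s(den)
    if num < den:
        num, den = den, num  # convention: numerator is the larger side
    if num == 1:
        return 1.0           # the 2,3-equivalence class of 1/1
    result = 1.0
    for p in prime_factors(num):
        result *= p / 2
    for p in prime_factors(den):
        result *= p / 3
    result *= max(prime_factors(num * den)) / 9
    return result
```

For example, 80:81 (5C) reduces to the class 5/1 and gets (5/2)×(5/9) = 25/18 ≈ 1.39, while the class of 3C (1/1) gets exactly 1, consistent with the value quoted later in the thread.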
[At this point, armed with this new metric, we resumed work in the topic: Magrathean diacritics.]
 cmloegcmluin
Re: developing a notational comma popularity metric
Here is the table but up to N2D3P9 of 307.
 Dave Keenan
Re: developing a notational comma popularity metric
Dear Douglas, Thank you so much for this. Congratulations on finding and fixing that bug yesterday morning, to make this possible. And thanks for the incredible amount of work you put into finding the metric we call N2D3P9. This table is really the culmination of all that work — an awesome resource that I know I'll be referring to again and again in the future.
It just hit me that this table is nothing less than a new foundation for the entire Sagittal notation system. And as such, it feels so much more solid than a bunch of noisy statistics.
 Dave Keenan
Re: developing a notational comma popularity metric
OK. We're back here to complete the project described in this thread's title, although renaming it slightly as a "notational comma usefulness metric". N2D3P9 is a 2,3-equivalence-class popularity metric. Now we want to combine it with considerations of absolute three exponent (ATE), which relates to usefulness in notating JI, and absolute apotome slope (AAS), which relates to usefulness in notating EDOs and other temperaments.
The primary reason that we want such a metric now is to decide the primary commas for the Magrathean accent marks (diacritics), which are intended to represent integer multiples from 1 to 9, and 0.5, of a tina, which is 1/8539.00834 octave or 0.140531541 cents.
We don't have any statistical data to base this usefulness metric on, in the way we did in deriving N2D3P9. And the issue here is not psychoacoustic but notational. Here are some relevant recent discussions from the Magrathean diacritics thread, beginning with a discussion of George Secor's "weighted complexity" which we are now calling "Secor complexity".
Dave Keenan wrote: ↑Sat Sep 05, 2020 10:50 am
Notice how George included a term that attempted to correct for the balance-blindness of SoPF>3. Perhaps we can replace the first two terms with some function of N2D3P9 and simplify the last two terms into one term, given that abs3exp and absApotomeSlope are the same thing here.

cmloegcmluin wrote: ↑Tue Sep 08, 2020 4:29 am
I believe the term you are talking about which attempts to correct for the balance-blindness of SoPF>3 is "J". So, I agree that the first two terms, "G" and "J", can be replaced with (a function of) N2D3P9. And then consolidate the "K" and "L" terms too.

Dave Keenan wrote: ↑Tue Sep 08, 2020 11:52 pm
Yes. J is what I was referring to. I think we have to replace G+J with some constant times the log of N2D3P9. Or maybe k*sqrt(N2D3P9) or k*N2D3P9. I'm not sure which.
volleo6144 wrote: ↑Sat May 30, 2020 2:43 am
The [Secor] complexity is G+J+K+L, where:
 G (">3") is the SoPF>3
 J ("d+n") is the absolute value of (H-I) times G/5, with H = the number of primes >3 (with multiplicity) in the denominator (smaller value) and I in the numerator (larger value). 7:25k (224:225) has G=17, H=1 (actually -1 in the spreadsheet), I=2, J=(2-1) times 17/5 or 3.4. This is zero for any comma that's one prime against another, so 5:7k, 5:7C, 11:23S, etc. are all zero here.
 K ("3exp.") is 2^(abs(3exp) - 8.5) times ln(G+2), so 5C (80:81) is 2^(4-8.5)×ln(5+2) = ln(7)/sqrt(512) = 1.95/22.6 = 0.086. This ranks commas with high 3-exponents as really complex (3C is at 7.84 here, and 3s is at 17.2 trillion).
 L ("slope") is like K, but with apotome slope instead of 3-exponent. L = 2^(abs(slope) - 8.5)×ln(G+2). 3C (slope = 10.6) is at 2.88 here, and 3s (slope = 52.8) is at 14.8 trillion.

Yeah. I think it is something like my munging. I think the 8.5 is just a kind of soft threshold beyond which 3 exponents are strongly penalised.

Okay, so the question becomes: what function of N2D3P9 balances well with some version of the "K" term in George's complexity function. There's a bunch of magic numbers in George's function. Any idea what the 8.5 is (I see that that's the ATE of half a Pythagorean large diesis, which is the size category boundary between C and S)? Or why it's two to this power? Or why we add 2 to the SoPF>3 and then take the natural log? It feels incommensurate to use two different bases between N2D3P9 and the "K" part. I feel like George must have had a target he was aiming for, like he had put things in terms of the relationship between two key ratios he wanted to push to the correct side of each other. Or something along the lines of what Dave was going for when he brought "munging" into the lingo.

It's messy how the SoPF>3 enters into both G and K.
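To make George's formula concrete, here is a sketch of the G, J, and K terms as volleo6144 describes them. The L term is omitted because it needs apotome slope; the function names are mine.

```python
import math

def _strip_2s_and_3s(n):
    for q in (2, 3):
        while n % q == 0:
            n //= q
    return n

def sopf_gt3(n):
    """Sum of prime factors greater than 3, with multiplicity (SoPF>3)."""
    n = _strip_2s_and_3s(n)
    total, p = 0, 5
    while p * p <= n:
        while n % p == 0:
            total += p
            n //= p
        p += 2
    return total + (n if n > 1 else 0)

def copf_gt3(n):
    """Count of prime factors greater than 3, with multiplicity."""
    n = _strip_2s_and_3s(n)
    count, p = 0, 5
    while p * p <= n:
        while n % p == 0:
            count += 1
            n //= p
        p += 2
    return count + (1 if n > 1 else 0)

def three_exponent(num, den):
    """Signed exponent of 3 in num/den."""
    e = 0
    while num % 3 == 0:
        num //= 3
        e += 1
    while den % 3 == 0:
        den //= 3
        e -= 1
    return e

def secor_GJK(num, den):
    """G, J, K terms of Secor complexity as described above
    (L, which needs apotome slope, is left out of this sketch)."""
    small, large = min(num, den), max(num, den)
    G = sopf_gt3(num * den)
    H, I = copf_gt3(small), copf_gt3(large)
    J = abs(H - I) * G / 5
    K = 2 ** (abs(three_exponent(num, den)) - 8.5) * math.log(G + 2)
    return G, J, K
```

Running it on 224:225 reproduces volleo6144's worked values G = 17 and J = 3.4, and on 80:81 it reproduces K ≈ 0.086.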
cmloegcmluin wrote: ↑Wed Sep 09, 2020 5:32 am
I suggest again that we should bring this back to the developing a notational comma popularity metric thread.

Dave Keenan wrote: ↑Wed Sep 09, 2020 10:45 am
If we took it there, I'd feel obliged to make it general for all notational commas, whereas here we can take shortcuts that rely on their small size and close spacing as tinas.
I think the 8.5 is just a kind of soft threshold beyond which 3 exponents are strongly penalised.

Okay. But you're not specifically aware, then, that it's in terms of, like, something to do with the circle of fifths, or most popular nominal as 1/1 being D vs. G, etc., that kind of stuff. In other words: is it flexible, and/or might there be a psychoacoustically justifiable value for this parameter?

I think it's slightly flexible. Psychoacoustics are not relevant. Yes, it's to do with the chain (not circle) of fifths and the fact that we can only go to 2 sharps or 2 flats max, and we'd really like to avoid having more than one. And since George takes the absolute value of the 3-exponent, he's assuming 1/1 is D (the point of symmetry), or not far from it, on the chain of fifths.
It only takes a glance at the precision levels diagram to see that 's secondary comma zone is an enclave of the secondary comma zone for , and since the 5s has much lower N2D3P9 than the 19s, by the logic of the test as I've written it now, the 5s should be the comma for . We know there are a lot more considerations involved here, and maybe it comes down to the complex relationship between the High and Ultra levels (or maybe more accurately the complex relationship between the Promethean and Herculean symbol subsets). Anywho...

I don't think we have to get into stuff like that. I think it can be done with only N2D3P9 and ATE if they are combined in the right way.
What would be relevant to resolve, at least, would be the boundary between error and usefulness. Was I right to say that the point where we start considering EDO-ability is when we move on to badness (and thus the "developing a notational comma popularity metric" topic being a subtopic of "Just Intonation notations" is the correct home for it)?

I think we only need to go back to DANCPM if we are developing a comma usefulness metric with general application, and yes, that would require that we consider EDO-ability. But that wouldn't need to involve error or badness. Here's how I see our current terminology: "usefulness" would be a combination of (popularity, or N2D3P9) and (3-exponent and/or slope); "badness" would be a combination of "usefulness" and error.
Dave Keenan wrote: ↑Wed Sep 09, 2020 12:09 pm Here's N2D3P9 versus sopfr for N2D3P9 < 903.
And here it is again with N2D3P9 on a log axis.
Clearly we need to take the log of N2D3P9 to make it work in a modified "Secor complexity" metric.
Just reading off the graph, 4.5 × lb(N2D3P9) ≈ 15 × log_{10}(N2D3P9) ought to just plug straight in.
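That change of base is easy to check: 4.5/log10(2) ≈ 14.95, so the two readings of the graph agree to within about a third of a percent.

```python
import math

# 4.5 × lb(x) = (4.5 / log10(2)) × log10(x), and 4.5 / log10(2) ≈ 14.95,
# so 4.5 × lb(N2D3P9) and 15 × log10(N2D3P9) are nearly the same line.
factor = 4.5 / math.log10(2)
print(round(factor, 2))  # → 14.95
```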
Dave Keenan wrote: ↑Wed Sep 09, 2020 12:46 pm So I'm getting that something like
4.5×lb(N2D3P9) + 8×2^(ATE-8.5)
should be similar to Secor complexity.
Call it 4.5×lb(N2D3P9) + 9×2^(ATE-8.5), then eliminate the common factors since they don't affect ranking:
lb(N2D3P9) + 2×2^(ATE-8.5)
= lb(N2D3P9) + 2^(ATE-7.5)
Exponentiating that whole thing also doesn't affect ranking. That would be
N2D3P9 × 2^(2^(ATE-7.5))
ATE   N2D3P9 multiplier = 2^(2^(ATE-7.5))
 0    1.003836474
 1    1.007687666
 2    1.015434433
 3    1.031107087
 4    1.063181825
 5    1.130355594
 6    1.277703768
 7    1.632526919
 8    2.665144143
 9    7.102993301
10    50.45251384
11    2545.456153
12    6479347.025
13    4.19819E+13
14    1.76248E+27
That looks like it's doing the right kind of thing. We only have to adjust that 7.5 value, which is the ATE at which N2D3P9 gets doubled. Call it the DATE. My mapping from George's formula was all very rough and I ignored the J (balance) term. Maybe bump DATE up to 9.
ATE   N2D3P9 multiplier = 2^(2^(ATE-DATE)), where DATE = 9
 0    1.00135472
 1    1.002711275
 2    1.005429901
 3    1.010889286
 4    1.021897149
 5    1.044273782
 6    1.090507733
 7    1.189207115
 8    1.414213562
 9    2
10    4
11    16
12    256
13    65536
14    4294967296
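The multiplier column can be reproduced directly (the function name is mine):

```python
def ate_multiplier(ate, date=9.0):
    """N2D3P9 multiplier 2^(2^(ATE - DATE)); DATE is the ATE at which
    N2D3P9 gets doubled."""
    return 2.0 ** (2.0 ** (ate - date))

# Reproduce the DATE = 9 table above.
for ate in range(15):
    print(ate, ate_multiplier(ate))
```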
cmloegcmluin wrote: ↑Wed Sep 09, 2020 1:51 pm
I hope we don't regret it if we do want to find a general comma no pop rank and wish we'd used it on the tinas. But honestly at this point I think my hypothetical future regretful self would accept the fact that my present impatient self felt he'd spent enough time on this problem already.

Dave Keenan wrote: ↑Wed Sep 09, 2020 3:09 pm
I hate to say it, but now I'm thinking the only way to validate this comma usefulness metric, and to tune its parameters (like DATE), is to apply it to all the existing extreme-precision JI notation commas under the half-apotome, and minimise the sum-of-squares or something. That means including another multiplier: 2^(2^(AAS-DAAS)).
A much harder problem would be to optimise it to maximally justify our existing choices of one comma over another in each extreme bucket.
An interesting case (in addition to the 5s vs 19s one you mentioned) is 3C. It has N2D3P9 of 1 and ATE of 12, which results in a multiplier of 256. But I don't know what else it's competing with for that slot. No extreme comma under the half-apotome has ATE > 12.
If we settle on this as our usefulness metric, I can build it into the code and/or get some updated results per half-tina.

Actually, I can see some serious floating-point overflow and underflow issues with the form:
N2D3P9 × 2^(2^(ATE-DATE)) × 2^(2^(AAS-DAAS))
Can you build it into the code as lb(N2D3P9) + 2^(ATE-DATE) + 2^(AAS-DAAS), with switches to change DATE and DAAS (separately) from their default values of 9 and 9?
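A sketch of the additive form requested here, with DATE and DAAS as switches defaulting to 9 as stated (the function name is mine). Taking lb of the whole product turns the doubly-exponentiated multipliers into single exponentials, which is what sidesteps the overflow problem.

```python
import math

def usefulness(n2d3p9, ate, aas, date=9.0, daas=9.0):
    """Additive (log) form of the comma usefulness metric:
    lb(N2D3P9) + 2^(ATE - DATE) + 2^(AAS - DAAS).
    Ranks identically to N2D3P9 × 2^(2^(ATE-DATE)) × 2^(2^(AAS-DAAS)),
    since lb is monotonic, but avoids floating-point overflow and
    underflow for large ATE or AAS."""
    return (math.log2(n2d3p9)
            + 2.0 ** (ate - date)
            + 2.0 ** (aas - daas))
```

Exponentiating it recovers the multiplicative form exactly (for moderate values, where the latter doesn't overflow).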
 cmloegcmluin
Re: developing a notational comma popularity metric
Thanks for seeding the discussion back over here.
I can implement this function as lb(N2D3P9) + 2^(ATE-DATE) + 2^(AAS-DAAS), yes. Are you sure, however, that you want to add the term for AAS without adjusting the strength with which it and ATE apply? I might have expected you would want to keep a similar overall impact for the 3-exponent, distributing it between ATE and AAS. I do realize that Secor complexity included both of them at 2^ strength, but I didn't pay close enough attention when you were balancing N2D3P9 with ATE alone to see whether its simple lb() accounted for the elimination of AAS or not.
(ATE = Absolute Three Exponent, formerly referred to as abs3exp; useful for knowing how many sharps and flats are required as accompaniment to the comma in question to notate its 2,3-equivalent pitch ratio class. AAS = Absolute Apotome Slope; more important for EDO-ability because it measures how much the comma's fraction of an apotome changes as the fifth is tempered.)
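For reference, both quantities can be computed from a comma's ratio. ATE is straightforward; the AAS formula below is my reading of the definition — slope = 3-exponent - 7 × cents(comma)/cents(apotome), where 7 is the apotome's own 3-exponent — and it reproduces the slope magnitudes of 10.6 for 3C and 52.8 for 3s quoted earlier in the thread.

```python
import math

# The apotome is 2187:2048, about 113.685 cents.
APOTOME_CENTS = 1200 * math.log2(2187 / 2048)

def cents(num, den):
    return 1200 * math.log2(num / den)

def three_exponent(num, den):
    """Signed exponent of 3 in num/den."""
    e = 0
    while num % 3 == 0:
        num //= 3
        e += 1
    while den % 3 == 0:
        den //= 3
        e -= 1
    return e

def ate(num, den):
    """Absolute Three Exponent."""
    return abs(three_exponent(num, den))

def aas(num, den):
    """Absolute Apotome Slope, taking
    slope = 3exp - 7 * cents(comma) / cents(apotome)
    (my reading of the definition; the 7 is the apotome's 3-exponent)."""
    slope = three_exponent(num, den) - 7 * cents(num, den) / APOTOME_CENTS
    return abs(slope)
```

As checks: 80:81 gives ATE 4, the Pythagorean comma 531441:524288 (3C) gives AAS ≈ 10.6, and 3^53:2^84 (3s) gives AAS ≈ 52.8.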
From the above, it looks like you do not have a good simple answer to the question I posed:
Well, I guess it really wasn't posed as a question. But I am still unsure what exactly to do with this metric, once we have it, w/r/t existing commas in Extreme:

cmloegcmluin wrote: ↑Wed Sep 09, 2020 5:32 am
I thought it would be a quick experiment to use the test I have already set up for verifying primary commas, but I'm not sure the approach I am testing is appropriate.

Here's the previous effort I described, from earlier in this thread:

cmloegcmluin wrote: ↑Mon Jun 29, 2020 9:39 am
I know we haven't settled on a metric yet, but I elected to get the infrastructure in place to check the primary commas for symbols in the JI notations, i.e. to verify that they are the most popular commas in their secondary comma zones according to this metric. Only about half of the time was this true. But fret not: I realized that we shouldn't hold ourselves to this condition.

I'm not sure how this compares with "apply it to all the existing extreme-precision JI notation commas under the half-apotome, and minimise the sum-of-squares or something" or "a much harder problem would be to optimise it to maximally justify our existing choices of one comma over another in each extreme bucket." I don't really know what either of those mean. You had said "I don't think we have to get into stuff like that" when I mentioned how the 5s enclave of the 19s works, but I still don't see how to avoid getting into stuff like that.
It does seem like an interesting problem... but I am unfortunately allocated to another project today, so I won't be able to attack it until tomorrow. Clearly I'm not thinking incisively yet... just gathering my thoughts and concerns is all.
 Dave Keenan
Re: developing a notational comma popularity metric
Here's a log-log plot of N2D3P9 versus sopfr. It shows that the rightmost line of points corresponds to
sopfr = sqrt(N2D3P9×18) ≈ 4.2 × sqrt(N2D3P9). I believe this line corresponds to the lone primes.
So sqrt(N2D3P9) would be an alternative to lb(N2D3P9), as a substitute for sopfr in our replacement for Secor complexity.
But it would really be replacing George's G+J = sopfr(nd) + |copfr(n) - copfr(d)|×sopfr(nd)/5, so that's really what I should be plotting N2D3P9 against. I just haven't had time to extract the copfr's to do this.
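The lone-prime line falls out of the formula, assuming the (p/2)×(p/9) form of N2D3P9 for a lone prime p: then N2D3P9 = p²/18 while sopfr = p, hence sopfr = sqrt(18 × N2D3P9) ≈ 4.243 × sqrt(N2D3P9), which plots as a straight line of slope 1/2 on log-log axes.

```python
import math

# For a lone prime p (2,3-reduced ratio p/1), N2D3P9 = (p/2)*(p/9) = p^2/18
# while sopfr(p) = p, hence sopfr = sqrt(18 * N2D3P9).
for p in (5, 7, 11, 13, 127):
    n2d3p9_of_lone_prime = p * p / 18
    assert abs(math.sqrt(18 * n2d3p9_of_lone_prime) - p) < 1e-9

print(round(math.sqrt(18), 3))  # → 4.243
```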
[Attachment: LogN2d3p9VsLogSopfr.png — log-log plot of N2D3P9 versus sopfr]
 Dave Keenan
Re: developing a notational comma popularity metric
cmloegcmluin wrote: ↑Thu Sep 10, 2020 1:18 am
I can implement this function as lb(N2D3P9) + 2^(ATE-DATE) + 2^(AAS-DAAS), yes. Are you sure, however, that you want to add the term for AAS without adjusting the strength with which it and ATE apply? I might have expected you would want to keep a similar overall impact for the 3-exponent, distributing it between ATE and AAS. I do realize that Secor complexity included both of them at 2^ strength, but I didn't pay close enough attention when you were balancing N2D3P9 with ATE alone to see whether its simple lb() accounted for the elimination of AAS or not.

That's a good point. I did double the contribution from 2^(ATE-DATE) to make up for omitting 2^(AAS-DAAS), back when I thought we were only applying it to tinas. So we should halve them both now. But that is equivalent to subtracting one from their exponents, which means the default values for DATE and DAAS should be 10 instead of 9.
Yes, you could also make the 2's into parameters. I think that's like how sharp the knees are.
From the above, it looks like you do not have a good simple answer to the question I posed:

cmloegcmluin wrote: ↑Wed Sep 09, 2020 5:32 am
I thought it would be a quick experiment to use the test I have already set up for verifying primary commas, but I'm not sure the approach I am testing is appropriate.

Well, I guess it really wasn't posed as a question. But I am still unsure what exactly to do with this metric, once we have it, w/r/t existing commas in Extreme. Here's the previous effort I described, from earlier in this thread:

cmloegcmluin wrote: ↑Mon Jun 29, 2020 9:39 am
I know we haven't settled on a metric yet, but I elected to get the infrastructure in place to check the primary commas for symbols in the JI notations, i.e. to verify that they are the most popular commas in their secondary comma zones according to this metric.

I'm not sure how this compares with "apply it to all the existing extreme-precision JI notation commas under the half-apotome, and minimise the sum-of-squares or something" or "a much harder problem would be to optimise it to maximally justify our existing choices of one comma over another in each extreme bucket." I don't really know what either of those mean. You had said "I don't think we have to get into stuff like that" when I mentioned how the 5s enclave of the 19s works, but I still don't see how to avoid getting into stuff like that.

We're definitely "getting into stuff like that" now that we're back in this thread.
I had forgotten that you were already set up to do this. That's brilliant. That's the thing I thought would be "a much harder problem". Forget what I wrote above about sum of squares. We just need to fiddle with the functions and parameters of this usefulness metric to maximise the number of primary commas for symbols in the JI notations that are the most "useful" commas in their secondary comma zones according to this metric.
Since we'd be making the function fit our existing choices, we can't use it to justify them, except in so far as they would fit some simple consistent metric and are not merely random. But the main thing it would let us do is justify the choice of tina commas on the basis that they are consistent with the other comma assignments.
 cmloegcmluin
Re: developing a notational comma popularity metric
Dave Keenan wrote: ↑Thu Sep 10, 2020 10:33 pm
Here's a log-log plot of N2D3P9 versus sopfr. It shows that the rightmost line of points corresponds to
sopfr = sqrt(N2D3P9×18) ≈ 4.2 × sqrt(N2D3P9). I believe this line corresponds to the lone primes.

I was thinking something like that. And probably the other straight lines are lone primes over 5, lone primes over 7, etc.

So sqrt(N2D3P9) would be an alternative to lb(N2D3P9), as a substitute for sopfr in our replacement for Secor complexity.

Is that another puzzle? I don't quite see the relationship between sqrt and the log-log plot.

But it would really be replacing George's G+J = sopfr(nd) + |copfr(n) - copfr(d)|×sopfr(nd)/5, so that's really what I should be plotting N2D3P9 against. I just haven't had time to extract the copfr's to do this.

Yes, that makes sense.

Dave Keenan wrote: ↑Thu Sep 10, 2020 11:24 pm
I did double the contribution from 2^(ATE-DATE) to make up for omitting 2^(AAS-DAAS), back when I thought we were only applying it to tinas. So we should halve them both now. But that is equivalent to subtracting one from their exponents, which means the default values for DATE and DAAS should be 10 instead of 9.

Ah, easy peasy. Thanks for explaining.
Yes, you could also make the 2's into parameters. I think that's like how sharp the knees are.

Is that what they call them down under? I usually call them elbows.
Looking into it, though, it seems some people consider elbows to be pointing down while knees point up. But "knee" also seems to be the more popular term overall, perhaps when people use it generically (regardless of curve direction).
Yes I think you're right that the 2's control the knee sharpness, and yes I can make those configurable.
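With the 2's exposed as configurable bases, the metric might read as below. The parameter names k_ate and k_aas are mine, for illustration; DATE and DAAS default to 10 per the revision quoted above.

```python
import math

def usefulness(n2d3p9, ate, aas, date=10.0, daas=10.0, k_ate=2.0, k_aas=2.0):
    """Usefulness metric with the exponent bases ("knee sharpness")
    exposed as parameters: lb(N2D3P9) + k_ate^(ATE - DATE) + k_aas^(AAS - DAAS).
    A larger base makes the penalty ramp up more steeply past the threshold."""
    return (math.log2(n2d3p9)
            + k_ate ** (ate - date)
            + k_aas ** (aas - daas))
```

For example, at the thresholds (ATE = DATE, AAS = DAAS) the two penalty terms contribute 1 each regardless of base, while two steps past the ATE threshold the default base contributes 4 and a base of 4 contributes 16: a sharper knee.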
I had forgotten that you were already set up to [optimise it to maximally justify our existing choices of one comma over another in each extreme bucket]. That's brilliant. That's the thing I thought would be "a much harder problem". Forget what I wrote above about sum of squares. We just need to fiddle with the functions and parameters of this usefulness metric to maximise the number of primary commas for symbols in the JI notations that are the most "useful" commas in their secondary comma zones according to this metric.

Aaaaaaand now I get what you mean by "justify". I was thinking you meant something akin to textual justification, like you wanted the commas centered within their zones (why is this called "justification" in the context of aligning text in a column, anyway???). Now I get that you meant justify in the sense of defending our decisions. Which I blame only myself for not getting in the first place, because that is certainly the type of justification we've been dealing with throughout this process.
Since we'd be making the function fit our existing choices, we can't use it to justify them, except in so far as they would fit some simple consistent metric and are not merely random. But the main thing it would let us do is justify the choice of tina commas on the basis that they are consistent with the other comma assignments.

Yes, that makes sense. We're no longer justifying our tina commas in the scope of How Music Is; only in the context of How Sagittal Is.
But I think this is fine. We're not being "wicked" as we spoke about before. As long as we don't advertise this metric as anything otherwise. Just getting our job done.