## Magrathean diacritics

cmloegcmluin
Posts: 1643
Joined: Tue Feb 11, 2020 3:10 pm
Location: San Francisco, California, USA
Real Name: Douglas Blumeyer (he/him/his)
Contact:

### Re: Magrathean diacritics

All of the highlighted winners in this list from 5 months ago agree with this list. The 1-tina in the older list lacks a highlighted winner, but includes the winner from the list I've just posted in its shortlist of candidates. The same is true of the 7-tina. I don't see that the 77/185n appeared until you proposed it a couple of months ago.

With the exception of the 1- and 8-tinas, all of these are LATE, for what that's worth. Most of them are LAAS too. Maybe I should add that to the list of factors in our decision that I recently compiled. Well, I guess it was just a specific breed of uselessness check, which we eventually upgraded to full-fledged balancing of compressed N2D3P9 against expanded ATE & AAS, inspired by George's original complexity function.

Dave Keenan
Posts: 1962
Joined: Tue Sep 01, 2015 2:59 pm
Location: Brisbane, Queensland, Australia
Contact:

### Re: Magrathean diacritics

Awesome work! Thanks. I can't respond in depth at present as I have Warrick here. I have a backup plan I was holding in reserve in case there was no clear winner for the 0.5 tina dot:

Count occurrences of 1 semitina metacommas between the 809 semitina commas.
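That count might be sketched as follows. This is a toy sketch, not the real run: the helper names are mine, the sample data is hypothetical, and the 809 best commas are assumed to be on hand as ratios (BigInt numerator/denominator pairs). The semitina is taken as 1/809 of the half-apotome.

```javascript
// Toy sketch of counting 1-semitina metacommas between commas in a list.
// Assumes commas are given as [numerator, denominator] BigInt pairs.
const APOTOME_CENTS = 1200 * Math.log2(2187 / 2048); // ≈ 113.685 ¢
const SEMITINA_CENTS = APOTOME_CENTS / 2 / 809;      // ≈ 0.0703 ¢

const cents = (n, d) => 1200 * Math.log2(Number(n) / Number(d));
const gcd = (a, b) => (b === 0n ? a : gcd(b, a % b));

// The metacomma between two commas is their quotient, reduced and
// oriented upward (numerator >= denominator).
function metacomma([an, ad], [bn, bd]) {
  let n = an * bd;
  let d = ad * bn;
  if (n < d) [n, d] = [d, n];
  const g = gcd(n, d);
  return [n / g, d / g];
}

// Tally, over all pairs in `commas`, the metacommas whose size rounds
// to 1 semitina; returns a Map from "n/d" to occurrence count.
function countOneSemitinaMetacommas(commas) {
  const tally = new Map();
  for (let i = 0; i < commas.length; i++) {
    for (let j = i + 1; j < commas.length; j++) {
      const [n, d] = metacomma(commas[i], commas[j]);
      if (Math.round(cents(n, d) / SEMITINA_CENTS) === 1) {
        const key = `${n}/${d}`;
        tally.set(key, (tally.get(key) ?? 0) + 1);
      }
    }
  }
  return tally;
}
```

The real run would feed in the 809 best commas and read off the most frequent entries.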

cmloegcmluin
Posts: 1643
Joined: Tue Feb 11, 2020 3:10 pm
Location: San Francisco, California, USA
Real Name: Douglas Blumeyer (he/him/his)
Contact:

### Re: Magrathean diacritics

No worries. Have a good time with Warrick today.

Great idea. Here are those top 20% results:

AND NOW, METAMETACOMMAS

```
[
  [ "77/185n", 26 ],
  [ "35/299n", 24 ],
  [ "21505/7n", 23 ],
  ...
]
```


The 35/299n came up a couple months ago but you pointed out that it's only 0.16 tinas big. 299 = 13×23.

The 21505/7n doesn't appear to have ever come up before. 21505 = 5×11×17×23.

4675/13n, which has seen some talk lately, has only 20 occams.

1/20735n was another we talked about a bit, but it only had 16 occams.

Of these 5, only 77/185n and 35/299n zeta-map to 0.

I think we gotta go with 77/185n.

Dave Keenan
Posts: 1962
Joined: Tue Sep 01, 2015 2:59 pm
Location: Brisbane, Queensland, Australia
Contact:

### Re: Magrathean diacritics

How about pulling the same trick you just did with 1 semitina, for 2 semitinas? I.e., ignore the existing symbol commas and just look at all the 2 semitina metacommas in the list of 809 "best" (LPE and LPEI) commas.

Dave Keenan
Posts: 1962
Joined: Tue Sep 01, 2015 2:59 pm
Location: Brisbane, Queensland, Australia
Contact:

### Re: Magrathean diacritics

Forget the 2 semitinas thing I suggested in the previous message. It could only confuse things.

Let's just go with what came out of your LPEI run. Let's not force the 8 tina comma to be the schisma-complement of the 6 tina. That way there is potential to notate more ratios exactly. I suspect we really only latched onto the schisma-complement in desperation, as a way to decide an otherwise difficult n-tina comma. But now we have a better way.

That way there is a simple consistent description of how we decided all of these commas.

So this is the same as the list you approved most recently, except for the highlighted value.

0.5 tina: 77/185n
1 tina: 10241/5n
2 tinas: 1/5831n
3 tinas: 1/455n
4 tinas: 3025/7n
5 tinas: 2401/25n
6 tinas: 65/77n
7 tinas: 7/425n
8 tinas: 187/175n
9 tinas: 1/539n

Excluding the 37-limit 0.5 tina, they are still 19-limit, are now down to a max ATE of 7 (77/13n had an ATE of 11), and are all within ±0.25 tinas. And 7 of the 10 are still superparticular.

Are we done?

cmloegcmluin
Posts: 1643
Joined: Tue Feb 11, 2020 3:10 pm
Location: San Francisco, California, USA
Real Name: Douglas Blumeyer (he/him/his)
Contact:

### Re: Magrathean diacritics

> Dave Keenan wrote: Tue Nov 03, 2020 3:22 pm
> Forget the 2 semitinas thing I suggested in the previous message. It could only confuse things.
K. Yeah, for a moment I thought it was a great idea. But I thought about it while watching a really slow movie and decided it wouldn't give us the right information. Looking at it relative to Ultra notation was, I think, the right way.
> Let's just go with what came out of your LPEI run. Let's not force the 8 tina comma to be the schisma-complement of the 6 tina. That way there is potential to notate more ratios exactly. I suspect we really only latched onto the schisma-complement in desperation, as a way to decide an otherwise difficult n-tina comma. But now we have a better way.
Agreed.
> That way there is a simple consistent description of how we decided all of these commas.
Which is: we found the best comma per semitina zone up to the half apotome using LPEI =

lb(N2D3P9) + (AAS / 10)^1.5 + 2^(ATE - 10) + abs(2 * tinavalue - 2 * round(tinavalue))

Then:

1. We snapped Ultra notation to the nearest semitina and lined it up with this list of 809 best commas.
2. For each of the 47 Ultra commas, we grabbed the ±19 semitinas around it, calculated the metacommas between each of those (up to 38) best commas and the given Ultra comma, and grouped them into buckets by the rounded integer semitina zone jump size.
3. For the even-numbered of these buckets, corresponding to the whole tinas, we took the most commonly occurring metacomma.

Thus we captured, for each tina, the comma which allows us to notate the best commas relative to the existing Sagittal cores with the least error.
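The bucket-and-mode step described above might be sketched like this (a sketch only: the data shape — each candidate metacomma as a name plus its signed size in semitinas — is my assumption, and the sizes in the example are illustrative, not real results):

```javascript
// For one Ultra comma: given the candidate metacommas to the nearby
// best commas, bucket them by rounded semitina jump and take the most
// common metacomma in each even (whole-tina) bucket.
function mostCommonMetacommaPerTina(candidates) {
  const buckets = new Map(); // semitina jump -> Map(name -> count)
  for (const { name, semitinas } of candidates) {
    const jump = Math.round(Math.abs(semitinas));
    if (jump % 2 !== 0) continue; // even buckets correspond to whole tinas
    const tally = buckets.get(jump) ?? new Map();
    tally.set(name, (tally.get(name) ?? 0) + 1);
    buckets.set(jump, tally);
  }
  const winners = new Map(); // tina -> most commonly occurring metacomma
  for (const [jump, tally] of buckets) {
    const best = [...tally.entries()].sort((a, b) => b[1] - a[1])[0];
    winners.set(jump / 2, best[0]);
  }
  return winners;
}
```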

For the final step we took the most common metacomma between each of the 809 best commas. (I realized these aren't really metametacommas anymore).
> Are we done?
Feel free to put that list in bold, big size, underline, whatever strikes your fancy. I think we're done!

When you said let's do it in 2 days, I thought you were mad. But we did it in like 3.5.

Dave Keenan
Posts: 1962
Joined: Tue Sep 01, 2015 2:59 pm
Location: Brisbane, Queensland, Australia
Contact:

### Re: Magrathean diacritics

Darn! I can't open the champagne yet. There are problems with this formula:
> cmloegcmluin wrote: Tue Nov 03, 2020 5:01 pm
> Which is: we found the best comma per semitina zone up to the half apotome using LPEI =
>
> lb(N2D3P9) + (AAS / 10)^1.5 + 2^(ATE - 10) + abs(2 * tinavalue - 2 * round(tinavalue))

Dave Keenan
Posts: 1962
Joined: Tue Sep 01, 2015 2:59 pm
Location: Brisbane, Queensland, Australia
Contact:

### Re: Magrathean diacritics

Part of the problem with

LPEI = lb(N2D3P9) + (AAS / 10)^1.5 + 2^(ATE - 10) + abs(2 * tinavalue - 2 * round(tinavalue))

is probably just a typo/oversight on your part. I expect what you actually computed has the last term as

+ 1.5 * abs(2 * tinavalue - round(2 * tinavalue))

i.e. there's a weight of 1.5 and the second "2 *" goes inside the round(). It might be better written as

+ 1.5 * abs(semitinavalue - round(semitinavalue))

or better still

LPEI = lb(N2D3P9) + (AAS / 10)^1.5 + 2^(ATE - 10) + 1.5 * AERR

where N2D3P9 = ...
AAS = ...
ATE = ...
AERR = abs(sizeInSemitinas - round(sizeInSemitinas))
sizeInSemitinas = ...
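Filled in as a runnable sketch (my rendering; N2D3P9, AAS, ATE, and sizeInSemitinas are taken as precomputed inputs here, since their own definitions live elsewhere in the thread):

```javascript
// LPEI with the 1.5-weighted error term, as written above.
const lb = Math.log2;

function lpei({ n2d3p9, aas, ate, sizeInSemitinas }) {
  // AERR: distance from the nearest whole semitina.
  const aerr = Math.abs(sizeInSemitinas - Math.round(sizeInSemitinas));
  return lb(n2d3p9) + (aas / 10) ** 1.5 + 2 ** (ate - 10) + 1.5 * aerr;
}
```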

The other part of the problem is entirely my fault.

What is the justification for using this particular version of LPEI (I ask myself)?

With my approval, you got it from my spreadsheet, where I was using it to choose commas (not metacommas) for the tina accents against a bare shaft. I used it because it gave me what I wanted, as you can read starting here: viewtopic.php?p=2604#p2604

That justification doesn't cut it any more. We need to justify it from the extreme level commas alone (with mina error in place of semitina error). It should be a parameterisation of LPEI that matches the maximum number of existing extreme commas (102 out of 123).

I gave one such parameterisation here: viewtopic.php?p=2604#p2604
Namely:

LPEI_badness = lb(N2D3P9) + (AAS/8.5)^1.5 + 2^(ATE-10) + 0.5×AERR

But I still think George and I should have given more weight to error when assigning commas to the extreme symbols. So here's the version with the maximum weight on error, that still matches 102 of the existing extreme commas.

LPEI_badness = lb(N2D3P9) + (AAS/9.65)^1.7 + 2^(ATE-9.65) + 0.8×AERR

Forcing the maximum weight on AERR has caused the other 3 parameters to be tightly constrained in order to still achieve 102 matches. It's a nice coincidence that we have 9.65 in two places.

So, when you get a chance, would you please:
(a) check that the above set of LPEI parameters, i.e. b = 1.7, s = 1/9.65^1.7, t = 2^-9.65, u = 0.8, do give 102 extreme comma matches (with AERR based on size in minas).
(b) re-run the metacomma search using this new badness to choose the best comma for each semitina zone (with AERR based on size in semitinas).
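As a quick sanity check on the derived parameters (the input values below are made up; this only confirms the algebra, not the 102-match count): with s = 1/9.65^1.7 and t = 2^-9.65, the parameterised form s·AAS^b + t·2^ATE should agree numerically with the written-out (AAS/9.65)^1.7 + 2^(ATE-9.65).

```javascript
// Two renderings of the same LPEI badness; they should agree to
// floating-point precision.
const b = 1.7;
const s = 1 / 9.65 ** 1.7;
const t = 2 ** -9.65;
const u = 0.8;

const viaParams = ({ n2d3p9, aas, ate, aerr }) =>
  Math.log2(n2d3p9) + s * aas ** b + t * 2 ** ate + u * aerr;

const viaWrittenForm = ({ n2d3p9, aas, ate, aerr }) =>
  Math.log2(n2d3p9) + (aas / 9.65) ** 1.7 + 2 ** (ate - 9.65) + 0.8 * aerr;
```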

cmloegcmluin
Posts: 1643
Joined: Tue Feb 11, 2020 3:10 pm
Location: San Francisco, California, USA
Real Name: Douglas Blumeyer (he/him/his)
Contact:

### Re: Magrathean diacritics

> Dave Keenan wrote: Tue Nov 03, 2020 5:49 pm
> Darn! I can't open the champagne yet. There are problems...
NBD. Let's get it right.
Dave Keenan wrote: Tue Nov 03, 2020 7:01 pm Part of the problem with

LPEI = lb(N2D3P9) + (AAS / 10)^1.5 + 2^(ATE- 10) + abs(2 * tinavalue - 2 * round(tinavalue))

is probably just a typo/oversight on your part. I expect what you actually computed has the last term as

+ 1.5 * abs(2 * tinavalue - round(2 * tinavalue))

i.e. there's a weight of 1.5 and the second "2 *" goes inside the round(). It might be better written as

+ 1.5 * abs(semitinavalue - round(semitinavalue))
Ugh. This is exactly the kind of fiddly thing that my brain ties itself in knots over. It's so friggin' simple, and then I get super frustrated that it's not instantaneous, which makes it worse.

Alright, I whipped up a quick spreadsheet to demonstrate why it matters whether the 2* is inside or outside the parentheses. Inside: good. That way you still get a series of consecutive integer results. Outside: bad. Then you only get the even integers.

So, my code definitely has the 2* outside the round, which is bad. My code agrees with what I wrote here, which I copied and pasted from my previous post here (guess neither of us caught it then, either).

And yet my error calculations agree exactly with those in your spreadsheet, and have agreed since I first laid the code down. I even wrote some unit tests this morning to double-check things to be certain. What could explain this?

Perhaps the difference is that I don't actually use a round function. Your spreadsheet needed to round some value in order to get the float tina (n.0 or n.5) or integer semitina value it needed, whereas my code just has that tina or semitina value on hand. Back when it was a tina value, I multiplied by 2 in order to get the semitina value, e.g. 3.5 tinas is 7 semitinas. Once we started thinking and working and coding directly in semitinas, I just started using the straight semitina value. I hope that explains the confusion away? I guess the only bad scenario would have been if I for some reason had a comma that was 3.501156378 tinas and found its semitina bucket by rounding it to 4 tinas before multiplying by 2 to get 8 semitinas, when it should have been 7 semitinas... yes, that would have been bad. But I could never have been doing that, because I would already have had the 3.5 tina or 7 semitina value handy.

Currently, working in semitinas, my formula does look like this: 1.5 * abs(semitinavalue - round(semitinavalue)). There's zero trickery to that. It's plain as day. Yes, let's stop thinking about this 2* trickery which unfortunately causes an undue amount of cognitive overhead for me to trust what I'm doing while it's around. Apologies.
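The spreadsheet demonstration boils down to one half-integer example (a toy illustration; `tinaValue = 3.5` is just a sample input):

```javascript
const tinaValue = 3.5; // i.e. exactly 7 semitinas

// "2 *" inside the round(): 3.5 tinas is a legal semitina value, so the
// error term is 0, as it should be.
const inside = Math.abs(2 * tinaValue - Math.round(2 * tinaValue)); // 0

// "2 *" outside the round(): the tina value gets rounded to a whole
// tina first, so every half-integer tina is charged a full semitina of
// spurious error.
const outside = Math.abs(2 * tinaValue - 2 * Math.round(tinaValue)); // 1
```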
> or better still
>
> LPEI = lb(N2D3P9) + (AAS / 10)^1.5 + 2^(ATE - 10) + 1.5 * AERR
>
> where N2D3P9 = ...
> AAS = ...
> ATE = ...
> AERR = abs(sizeInSemitinas - round(sizeInSemitinas))
> sizeInSemitinas = ...
Yes, at some point we should definitely LaTeX up a pretty looking master formula. This is clearly more your forte than mine.
> We need to justify [a version of LPEI] from the extreme level commas alone (with mina error in place of semitina error). It should be a parameterisation of LPEI that matches the maximum number of existing extreme commas (102 out of 123).
> So, when you get a chance, would you please:
> (a) check that the above set of LPEI parameters, i.e. b = 1.7, s = 1/9.65^1.7, t = 2^-9.65, u = 0.8, do give 102 extreme comma matches (with AERR based on size in minas).
Ok, that makes sense. I wasn't set up to include badness in the usefulness/complexity area of the code, but I hacked something together. And for whatever reason these s and t values don't want to work as you've derived them (it's either an error in your derivation, which is unlikely, or some obnoxious thing wrong in my order of operations and/or JavaScript that I can't identify, which is more likely), but when I just literally transcribe your formula it works (which is how I've been doing it in the occam semitinas script too). And you'll be happy to know that adding that mina error actually improves the situation from a metric score of 21 down to only 18.
> (b) re-run the metacomma search using this new badness to choose the best comma for each semitina zone (with AERR based on size in semitinas).
In other words, the only changes to the formula are using 0.8 instead of 1.5 for u, 1.7 in place of the other 1.5, and 9.65 in place of both 10's.

I've run it with that configuration, and there's only one difference in the results. For the 7-tina, we now get a tie between the 7/425n and the 143/1715n, each with 12 occams.

The 143/1715n hasn't seen a whole lot of love or attention from us since @Ash9903b4 included it in his list of suggestions back in March in the post which kicked this quest off. However, it was George's original suggestion.

The 7/425n has gotten special attention a couple of times. Here you prefer it and yet also express an interest in disqualifying it on the basis of tina error (which, by the way, is indeed worse than the 143/1715n's). And here I gave a list of reasons to prefer it:
Re: the 7-tina, I vote for 7:425n:
- made out of the 9, which is the 6 and the 3 which are already in Sagittal
- lowest SoPF>3
- most occurrences in Extreme Precision
- low abs 3 exp
- superparticular
However the first reason should be stricken because you just expressed that we're actually trying to avoid composing the tinas out of each other at this point, in an effort to reduce redundancy i.e. notate more new 2,3-free classes. Lowest SoPF>3 shouldn't matter (neither should N2D3P9, its successor); we now understand well that the popularity of the pitches directly notated by a tina comma is not as important as those indirectly notated by it via the ultracores the accents will be applied to. I re-read posts before that post, and racked my memory, but I can't figure out what I would have meant by "most occurrences in Extreme Precision". Low abs 3 exp is true but it's not as good as 143/1715n there (see the table here). And both it and 143/1715n are superparticular, so that's not a differentiating factor.

The 143/1715n is 0-zeta-max consistent, and it is 13-limit where the 7/425n is 17-limit.

I suspect the only reason we didn't choose the 143/1715n back in May was that it didn't have the best SoPF>3.

Perhaps I should invoke the George Was Probably Right Tiebreaker: we should go with the 143/1715n.

There was one little thing that popped into my head last night, though, that I just remembered. For our list of 809 best commas per semitina zone... we know that there are 21 (well, now 18) of those which are not matches for the commas that are actually in Sagittal. When finding the metacommas, I've been doing it against these 18 theoretically best commas, even though those aren't the actual ones the minas would be applying to. Or does it not matter, because almost certainly all 18 of those are in the Extreme level? Or even if they weren't, does it make more sense to calculate against the theoretically best ones anyway? It probably barely matters, but I just wondered what you thought.

Dave Keenan
Posts: 1962
Joined: Tue Sep 01, 2015 2:59 pm
Location: Brisbane, Queensland, Australia
Contact:

### Re: Magrathean diacritics

> cmloegcmluin wrote: Wed Nov 04, 2020 4:55 am
> Currently, working in semitinas, my formula does look like this: 1.5 * abs(semitinavalue - round(semitinavalue)). There's zero trickery to that. It's plain as day. Yes, let's stop thinking about this 2* trickery which unfortunately causes an undue amount of cognitive overhead for me to trust what I'm doing while it's around. Apologies.
No worries.
> ... when I just literally transcribe your formula it works (which is how I've been doing it in the occam semitinas script too). And you'll be happy to know that adding that mina error actually improves the situation from a metric score of 21 down to only 18.
123 total - 18 non-matches = 105 matches. I only got 102 matches, so what's going on here?
> In other words, the only changes to the formula are using 0.8 instead of 1.5 for u, 1.7 in place of the other 1.5, and 9.65 in place of both 10's.
That's correct.
> I've run it with that configuration, and there's only one difference in the results.
That's good.
> For the 7-tina, we now get a tie between the 7/425n and the 143/1715n, each with 12 occams.
Damn!
> Perhaps I should invoke the George Was Probably Right Tiebreaker: we should go with the 143/1715n.
As nice as that would be, I know it's not what George would want, although he'd thank you for the sentiment. And I don't think there's any evidence that George knew about 7/425n, and so he might have had a different opinion if he had known about it.

> There was one little thing that popped into my head last night, though, that I just remembered. For our list of 809 best commas per semitina zone... we know that there are 21 (well, now 18) of those which are not matches for the commas that are actually in Sagittal. When finding the metacommas, I've been doing it against these 18 theoretically best commas, even though those aren't the actual ones the minas would be applying to. Or does it not matter, because almost certainly all 18 of those are in the Extreme level? Or even if they weren't, does it make more sense to calculate against the theoretically best ones anyway? It probably barely matters, but I just wondered what you thought.
Are you saying you ignored my definition of "list-B" here: viewtopic.php?f=10&t=430&p=2619&hilit=list+B#p2619
which involves "the existing commas", and instead substituted the lowest-badness comma in the same semitina zone as each existing Ultra comma?

I don't see any justification for doing that?

I have had similar thoughts. But these involve, in some sense, using the Ultra commas we should have had. But that would mean substituting the lowest-badness comma in the same mina zone as each existing Ultra comma, not the same semitina zone.

So I'd appreciate it if you'd re-run the metacomma search twice (using the latest (b = 1.7, u = 0.8) LPEI badness) with list-B being:
1. the actual existing Ultra commas.
2. the lowest badness comma in the same mina zone as each existing Ultra comma.

It would also be good to know how these two lists differ, if at all.
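The two requested list-B variants might be sketched like this (everything here is a hypothetical shape: each comma as { name, minaZone, badness }, with `minaZone` standing in for however mina zones are identified in the real code):

```javascript
// Variant 1: list-B is just the actual existing Ultra commas.
const listB1 = (ultraCommas) => ultraCommas;

// Variant 2: for each existing Ultra comma, substitute the
// lowest-badness comma found in the same mina zone (falling back to
// the Ultra comma itself if nothing in that zone beats it).
function listB2(ultraCommas, bestCommas) {
  return ultraCommas.map((ultra) => {
    const sameZone = bestCommas.filter((c) => c.minaZone === ultra.minaZone);
    return sameZone.reduce(
      (best, c) => (c.badness < best.badness ? c : best),
      ultra
    );
  });
}
```

Diffing listB1 against listB2 would then answer the "how do these two lists differ" question directly.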