developing a notational comma popularity metric


cmloegcmluin
Posts: 770
Joined: Tue Feb 11, 2020 3:10 pm
Location: San Francisco, California, USA
Real Name: Douglas Blumeyer
Contact:

Re: developing a notational comma popularity metric

I was looking over this thread https://yahootuninggroupsultimatebackup ... 55471.html recently for some reason and I noticed this bit:
> nor is any restriction to rational values mentioned.

Technically, you're right! Yikes! But it was certainly my intention to so restrict it.

Monz, if you could change those two occurrences of "pitch ratios" to "rational pitches" that should solve the problem.
I think we technically have the same problem with our N2D3P9 page.

How do you feel about replacing every instance of "pitch ratio" with "rational pitch", to be absolutely unambiguous that said ratios are between whole numbers?

I assume that N2D3P9 is only defined for rational numbers, i.e. that things like N2D3P9(π/2) would be undefined.

Dave Keenan
Posts: 1071
Joined: Tue Sep 01, 2015 2:59 pm
Location: Brisbane, Queensland, Australia
Contact:

Re: developing a notational comma popularity metric

Good point. It's an annoying aspect of the language, that not all ratios are ratio-nal. But I don't like your proposed solution even though it echoes one I made in a similar situation many years ago.

It's totally clear in our formula for N2D3P9 that it only applies to rational numbers, since we say that n and d are integers. Of course N2D3P9(π/2) is undefined, since neither copfr(π) nor prime-limit(2π) is defined.

"Given a pitch ratio n/d, N2D3P9 estimates its rank in popularity among all rational pitches in musical use."

I've now changed that to:
"Given a rational number n/d representing a pitch (relative to some tonic note), N2D3P9 estimates its rank in popularity among all rational pitches in musical use."

I think that change, along with the formula, is sufficient so that it will be clear that the "pitch ratios" we refer to in the rest of the article are rational.
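For concreteness, here is a Python sketch of the computation (the helper names are mine; the definition assumed is the one implied by the metric's name: each numerator prime contributes p/2, each denominator prime contributes q/3, and the greatest prime contributes P/9. It reproduces the values in the table below).

```python
def prime_factors(n):
    """Prime factorization with multiplicity (trial division; fine for small n)."""
    factors = []
    p = 2
    while p * p <= n:
        while n % p == 0:
            factors.append(p)
            n //= p
        p += 1
    if n > 1:
        factors.append(n)
    return factors

def n2d3p9(n, d):
    """N2D3P9 of a 2,3-reduced ratio n/d (n >= d, both coprime to 2 and 3).

    Each numerator prime contributes p/2, each denominator prime
    contributes q/3, and the greatest prime overall contributes P/9
    (taken as 1 for the trivial ratio 1/1).
    """
    num_primes = prime_factors(n)
    den_primes = prime_factors(d)
    result = 1.0
    for p in num_primes:
        result *= p / 2
    for q in den_primes:
        result *= q / 3
    if num_primes or den_primes:
        result *= max(num_primes + den_primes) / 9
    return result
```

For example, n2d3p9(7, 5) ≈ 4.537 and n2d3p9(13, 1) ≈ 9.389, matching the table below.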

[At this point, armed with this new metric, we resumed work in the topic: Magrathean diacritics.]

cmloegcmluin
Posts: 770
Joined: Tue Feb 11, 2020 3:10 pm
Location: San Francisco, California, USA
Real Name: Douglas Blumeyer
Contact:

Re: developing a notational comma popularity metric

Here is the table but up to N2D3P9 of 307.
2,3-equivalent pitch ratio class | N2D3P9 | exactly notating JI symbols | introducing symbol subset indices | N2D3P9 rank | Scala archive rank | Scala archive occurrences
1/1 1.0000117624
5/1 1.389 3, 0225371
7/1 2.722 0, 3, 3333016
25/1 3.472 3, 0441610
7/5 4.537 0, 3551318
11/1 6.722 3, 0, 0661002
35/1 6.806 3, 4, 0, 077875
125/1 8.681 3, 4, 488492
13/1 9.389 4, 4, 4910447
49/1 9.528 2, 2, 2109463
11/5 11.204 1, 3, 31111339
25/7 11.343 3, 41214312
13/5 15.648 4, 2, 21316205
11/7 15.685 1, 11412324
49/5 15.880 2, 41515246
17/1 16.056 2, 11613318
55/1 16.806 1, 3, 31724119
175/1 17.014 4, 4, 41817168
19/1 20.056 2, 21918166
625/1 21.701 4, 4, 42021143
13/7 21.90742120145
65/1 23.472 4, 4, 4225040
77/1 23.52842325111
245/1 23.819 4, 32419165
49/25 26.46632523134
17/5 26.759 3, 42626108
25/11 28.009274742
125/7 28.3564283362
23/1 29.389 2, 22922136
91/1 32.861 4, 4305730
343/1 33.3474313170
19/5 33.426 3, 2322797
13/11 34.426 4, 4332989
121/1 36.9723442.546
17/7 37.463 4, 4354050
25/13 39.120 4, 4, 43652.534
35/11 39.2133382892
55/7 39.2133383461
77/5 39.21343835.555
85/1 40.1393407820
275/1 42.0144411477
875/1 42.5354427621
29/1 46.722433267
19/7 46.79624437.552
23/5 48.981 2, 4, 4454445
95/1 50.139467223
143/1 51.6392476626
31/1 53.389 4, 4, 4483080
3125/1 54.25344952.534
35/13 54.769516825
65/7 54.769 4, 451102.511
91/5 54.769 4, 451102.511
49/11 54.8982535433
343/5 55.57945455.531
119/1 56.19455252.53
325/1 58.681456604.51
385/1 58.81945737.552
17/11 58.870 4, 4, 45835.555
1225/1 59.549 4, 4594147
169/1 61.0284608614
121/5 61.620611477
77/25 65.355 4, 4626327
125/49 66.165636327
25/17 66.898464134.58
23/7 68.574654742
17/13 69.5744664742
125/11 70.023671477
133/1 70.194683292
625/7 70.8916911310
115/1 73.47270604.51
19/11 73.537 2, 27155.531
37/1 76.056 4, 47242.546
49/13 76.6764731477
29/5 77.8707459.528
455/1 82.153475186.55
539/1 82.34776186.55
1715/1 83.368777621
25/19 83.56537879.519
55/13 86.06580217.54
65/11 86.0654801646
143/5 86.065809013
121/7 86.269821646
19/13 86.9073834544
187/1 88.30684217.54
31/5 88.981856825
91/25 91.2814863292
55/49 91.4974873951
605/1 92.4318894.512
343/25 92.631896825
41/1 93.389907223
35/17 93.65792123.59
85/7 93.65792-0
119/5 93.65792217.54
125/13 97.801494604.51
175/11 98.03295.5102.511
275/7 98.032 4, 495.51477
425/1100.347973292
169/5101.713983292
121/25102.70199217.54
43/1102.7221005829
161/1102.861101604.51
221/1104.361102252.53
1375/1105.035103-0
4375/1106.33741041477
23/11107.759 4, 41056327
29/7109.0191068315
209/1110.306107604.51
19/17113.6481087223
637/1115.014109604.51
2401/1116.71511079.519
145/1116.806111-0
35/19116.9911131646
95/7116.991113604.51
133/5116.991113604.51
77/13120.491116134.58
91/11120.491 4, 4, 4116217.54
143/7120.4914116123.59
25/23122.4541181646
47/1122.7221195137
31/7124.57412011310
475/1125.34712194.512
37/5126.7591227024
23/13127.3521236327
65/49127.7931243292
715/1129.097125186.55
847/1129.40312611310
29/25129.78412711310
247/1130.361128604.51
49/17131.12041293292
155/1133.472130604.51
15625/1135.6341316327
289/1136.472132252.53
175/13136.921133.5604.51
325/7136.921133.5-0
245/11137.245135.511310
539/5137.245135.5604.51
595/1140.486 4, 4137-0
169/7142.398138-0
143/25143.4411393292
121/35143.781140102.511
1625/1146.701141-0
1925/1147.049142604.51
55/17147.1761443292
85/11147.176 4, 4, 4144186.55
187/5147.176144217.54
31/25148.302146604.51
6125/1148.8721471646
845/1152.569148604.51
343/125154.385149102.511
41/5155.6481501646
53/1156.0561517422
119/25156.096152-0
253/1161.6394153604.51
125/77163.38715411310
203/1163.528155-0
49/19163.787156102.511
625/49165.41315794.512
23/17166.5371587621
125/17167.245159-0
169/25169.522160-0
323/1170.472161604.51
43/5171.2041621646
29/11171.3151639013
35/23171.4351651646
115/7171.435165604.51
161/5171.435165252.53
65/17173.935168217.54
85/13173.9351683292
221/5173.9351683292
625/11175.058170-0
665/1175.486171-0
3125/7177.228172252.53
37/7177.463173123.59
1001/1180.73641741477
575/1183.681175-0
55/19183.843177123.59
95/11183.843177604.51
209/5183.8431773292
23/19186.130179102.511
217/1186.861180604.51
121/13189.343181604.51
185/1190.139182-0
361/1190.528183-0
299/1191.028184-0
245/13191.690185.5-0
637/5191.690185.5604.51
343/11192.144187-0
59/1193.3891888315
2401/5194.525189134.58
133/25194.985190-0
31/11195.759 4, 41914941
833/1196.681192-0
77/65200.8184194217.54
91/55200.818194604.51
143/35200.818194186.55
121/49201.293196134.58
29/13202.463197102.511
1331/1203.347198186.55
47/5204.537199134.58
2275/1205.382200-0
2695/1205.8682013292
77/17206.046203604.51
119/11206.046203-0
187/7206.0462033292
61/1206.72220511310
8575/1208.4204206123.59
125/19208.912207-0
37/25211.265208217.54
1183/1213.597209604.51
275/13215.162210.5-0
325/11215.162 4, 4210.5-0
605/7215.671212.5-0
847/5215.671212.5604.51
65/19217.2692153292
95/13217.269215604.51
247/5217.2692153292
41/7217.907217123.59
85/49218.5342183292
935/1220.764219604.51
169/11223.769220-0
289/5227.454221-0
125/91228.202222-0
275/49228.742223.5102.511
539/25228.742223.5-0
3025/1231.076225-0
31/13231.352226123.59
205/1233.472227-0
175/17234.144228.5-0
425/7234.144228.5-0
169/35237.3302303292
143/125239.069231-0
43/7239.6852321477
49/23240.009233186.55
91/17243.509235-0
119/13243.509235-0
221/7243.5092353292
625/13244.502237604.51
875/11245.081238.5-0
1375/7245.081238.5604.51
187/25245.293240-0
931/1245.681241-0
67/1249.389242252.53
391/1249.806243-0
2125/1250.868244-0
125/121256.752245604.51
215/1256.806246-0
319/1256.972247-0
805/1257.153248-0
77/19257.380250186.55
133/11257.380250604.51
209/7257.3802503292
41/25259.414252604.51
53/5260.0932531646
1105/1260.903254-0
6875/1262.587255-0
29/17264.759256186.55
21875/1265.842257604.51
259/1266.194258-0
343/13268.366259604.51
55/23269.398261217.54
115/11269.398261-0
253/5269.398261604.51
35/29272.5462643292
145/7272.546264-0
203/5272.546264-0
95/49272.978266-0
1045/1275.764267-0
37/11278.87026859.528
437/1279.194269-0
71/1280.056270-0
143/49281.145271-0
169/125282.536272-0
1573/1284.014273604.51
85/19284.120275604.51
95/17284.120275604.51
323/5284.120275186.55
43/25285.340277217.54
161/25285.7252783292
47/7286.352279252.53
3185/1287.535280-0
3773/1288.215281-0
221/25289.892282-0
12005/1291.788283186.55
725/1292.014284-0
175/19292.477285.5-0
475/7292.477285.5-0
341/1293.639287604.51
37/35295.7722883292
29/19295.9072891477
73/1296.056290217.54
385/13301.227292.53292
455/11301.227292.53292
715/7301.227292.5252.53
1001/5301.227292.51646
31/17302.5372958116
377/1303.694296-0
91/19304.176298-0
133/13304.176298-0
247/7304.1764298252.53
125/23306.1344300604.51
209/25306.404301-0
235/1306.806302-0

Dave Keenan
Posts: 1071
Joined: Tue Sep 01, 2015 2:59 pm
Location: Brisbane, Queensland, Australia
Contact:

Re: developing a notational comma popularity metric

Dear Douglas, Thank you so much for this. Congratulations on finding and fixing that bug yesterday morning, to make this possible. And thanks for the incredible amount of work you put into finding the metric we call N2D3P9. This table is really the culmination of all that work — an awesome resource that I know I'll be referring to again and again in the future.

It just hit me that this table is nothing less than a new foundation for the entire Sagittal notation system. And as such, it feels so much more solid than a bunch of noisy statistics.

Dave Keenan
Posts: 1071
Joined: Tue Sep 01, 2015 2:59 pm
Location: Brisbane, Queensland, Australia
Contact:

Re: developing a notational comma popularity metric

OK. We're back here to complete the project described in this thread's title, although renaming it slightly as a "notational comma usefulness metric". N2D3P9 is a 2,3-equivalence-class popularity metric. Now we want to combine it with considerations of absolute three exponent (ATE), which relates to usefulness in notating JI, and absolute apotome slope (AAS), which relates to usefulness in notating EDOs and other temperaments.

The primary reason we want such a metric now is to decide the primary commas for the Magrathean accent marks (diacritics), which are intended to represent 0.5, and integer multiples from 1 to 9, of a tina, which is 1/8539.00834 octave or 0.140531541 cents.

We don't have any statistical data to base this usefulness metric on, in the way we did in deriving N2D3P9. And the issue here is not psychoacoustic but notational. Here are some relevant recent discussions from the Magrathean diacritics thread, beginning with a discussion of George Secor's "weighted complexity" which we are now calling "Secor complexity".

Dave Keenan wrote:
Tue Sep 08, 2020 11:52 pm
cmloegcmluin wrote:
Tue Sep 08, 2020 4:29 am
Dave Keenan wrote:
Sat Sep 05, 2020 10:50 am
Notice how George included a term that attempted to correct for the balance-blindness of SoPF>3. Perhaps we can replace the first two terms with some function of N2D3P9 and simplify the last two terms into one term, given that abs3exp and absApotomeSlope are the same thing here.
I believe the term you are talking about which attempts to correct for the balance-blindness of SoPF>3 is "J". So, I agree that the first two terms, "G" and "J", can be replaced with (a function of) N2D3P9. And then consolidate the "K" and "L" terms too.
Yes. J is what I was referring to. I think we have to replace G+J with some constant times the log of N2D3P9. Or maybe k*sqrt(N2D3P9) or k*N2D3P9. I'm not sure which.
volleo6144 wrote:
Sat May 30, 2020 2:43 am
The [Secor] complexity is G+J+K+L, where:
- G (">3") is the SoPF>3
- J ("d+n") is the absolute value of (H-I) times G/5, with H = the number of primes >3 (with multiplicity) in the denominator (smaller value) and I in the numerator (larger value)—7:25k (224:225) has G=17, H=1 (actually -1 in the spreadsheet), I=2, J=(2-1) times 17/5 or 3.4. This is zero for any comma that's one prime against another, so 5:7k, 5:7C, 11:23S, etc. are all zero here.
- K ("3-exp.") is 2^(abs(3exp) - 8.5) times ln(G+2), so 5C (80:81) is 2^(4-8.5)×ln(5+2) = ln(7)/sqrt(512) = 1.95/22.6 = 0.086. This ranks commas with high 3-exponents as really complex (3C is at 7.84 here, and 3s is at 17.2 trillion).
- L ("slope") is like K, but with apotome slope instead of 3-exponent. L = 2^(abs(slope) - 8.5)×ln(G+2). 3C (slope = 10.6) is at 2.88 here, and 3s (slope = 52.8) is at 14.8 trillion.
Okay, so the question becomes: what function of N2D3P9 balances well with some version of the "K" term in George's complexity function. There's a bunch of magic numbers in George's function. Any idea what the 8.5 is (I see that that's the ATE of half a Pythagorean large diesis, which is the size category boundary between C and S)? Or why it's two to this power? Or why we add 2 to the SoPF>3 and then take the natural log? It feels incommensurate to use two different bases between N2D3P9 and the "K" part. I feel like George must have had a target he was aiming for, like he had put things in terms of the relationship between two key ratios he wanted to push to the correct side of each other. Or something along the lines of what Dave was going for when he brought "munging" into the lingo:
Yeah. I think it is something like my munging. I think the 8.5 is just a kind of soft threshold beyond which 3 exponents are strongly penalised.

It's messy how the SoPF>3 enters into both G and K.
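For reference, volleo6144's G + J + K + L breakdown transcribes directly into code. This is just a sketch of those four terms as quoted above (the function and parameter names are mine); computing a comma's apotome slope is out of scope here, so AAS is passed in.

```python
import math

def secor_complexity(g, h, i, ate, aas):
    """George's weighted ("Secor") complexity, G + J + K + L.

    g:    SoPF>3 of the comma
    h, i: counts of primes > 3 (with multiplicity) in the smaller
          and larger values respectively
    ate:  absolute 3-exponent
    aas:  absolute apotome slope
    """
    j = abs(i - h) * g / 5                   # balance correction
    k = 2 ** (ate - 8.5) * math.log(g + 2)   # 3-exponent penalty
    l = 2 ** (aas - 8.5) * math.log(g + 2)   # slope penalty
    return g + j + k + l

# Spot checks against the figures quoted above:
# K for 5C (80:81): 2^(4-8.5) * ln(7) ≈ 0.086
# K for 3C:         2^(12-8.5) * ln(2) ≈ 7.84
```

For 7:25k the J term comes out at abs(2 - 1) × 17/5 = 3.4, matching the quote.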

Dave Keenan wrote:
Wed Sep 09, 2020 10:45 am
cmloegcmluin wrote:
Wed Sep 09, 2020 5:32 am
I suggest again that we should bring this back to the developing a notational comma popularity metric thread.
If we took it there, I'd feel obliged to make it general for all notational commas, whereas here we can take shortcuts that rely on their small size and close spacing as tinas.
I think the 8.5 is just a kind of soft threshold beyond which 3 exponents are strongly penalised.
Okay. But you're not specifically aware, then, that it's in terms of like something to do with the circle of fifths, or most popular nominal as 1/1 being D vs. G, etc. that kind of stuff. In other words: is it flexible, and/or might there be a psychoacoustically justifiable value for this parameter.
I think it's slightly flexible. Psychoacoustics are not relevant. Yes, it's to do with the chain (not circle) of fifths and the fact that we can only go to 2 sharps or 2 flats max, and we'd really like to avoid having more than one. And since George takes the absolute value of the 3-exponent, he's assuming 1/1 is D (the point of symmetry) or not far from it, on the chain of fifths.
It only takes a glance at the precision levels diagram to see that 's secondary comma zone is an enclave of the secondary comma zone for , and since the 5s has much lower N2D3P9 than the 19s, by the logic of the test as I've written it now, the 5s should be the comma for  . Which we know there's a lot more considerations involved here, and maybe it comes down to the complex relationship between the High and Ultra levels (or maybe more accurately the complex relationship between the Promethean and Herculean symbol subsets). Anywho...
I don't think we have to get into stuff like that. I think it can be done with only N2D3P9 and ATE if they are combined in the right way.
What would be relevant to resolve, at least, would be the boundary between error and usefulness. Was I right to say that the point where we start considering EDO-ability is when we move on to badness (and thus that the "developing a notational comma popularity metric" topic, being a subtopic of "Just Intonation notations", is the correct home for it)?
I think we only need to go back to DANCPM if we are developing a comma usefulness metric with general application, and yes, that would require that we consider EDO-ability. But that wouldn't need to involve error or badness. Here's how I see our current terminology: "Usefulness" would be a combination of (popularity or N2D3P9) and (3-exponent and/or slope). "Badness" would be a combination of "usefulness" and error.

Dave Keenan wrote:
Wed Sep 09, 2020 12:09 pm
Here's N2D3P9 versus sopfr for N2D3P9 < 903.

And here it is again with N2D3P9 on a log axis.

Clearly we need to take the log of N2D3P9 to make it work in a modified "Secor complexity" metric.

Just reading off the graph, 4.5 × lb(N2D3P9) ≈ 15 × log10(N2D3P9) ought to just plug straight in.
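The two coefficients agree because of the change-of-base rule: 4.5 / log10(2) ≈ 14.95, hence the "≈ 15" read off the graph. A quick numeric check:

```python
import math

# 4.5 * lb(x) = (4.5 / log10(2)) * log10(x) ≈ 14.95 * log10(x),
# which is the "≈ 15 × log10(N2D3P9)" read off the graph.
for x in (2.0, 50.0, 903.0):
    assert math.isclose(4.5 * math.log2(x), (4.5 / math.log10(2)) * math.log10(x))
```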

Dave Keenan wrote:
Wed Sep 09, 2020 12:46 pm
So I'm getting that something like

4.5×lb(N2D3P9) + 8×2^(ATE-8.5)

should be similar to Secor complexity.

Call it 4.5×lb(N2D3P9) + 9×2^(ATE-8.5), then eliminate the common factor of 4.5, since it doesn't affect ranking.

lb(N2D3P9) + 2×2^(ATE-8.5)

= lb(N2D3P9) + 2^(ATE-7.5)

Exponentiating that whole thing also doesn't affect ranking. That would be

N2D3P9 × 2^(2^(ATE-7.5))

ATE	N2D3P9 multiplier = 2^(2^(ATE-7.5))
0	1.003836474
1	1.007687666
2	1.015434433
3	1.031107087
4	1.063181825
5	1.130355594
6	1.277703768
7	1.632526919
8	2.665144143
9	7.102993301
10	50.45251384
11	2545.456153
12	6479347.025
13	4.19819E+13
14	1.76248E+27

That looks like it's doing the right kind of thing. We only have to adjust that 7.5 value, which is the ATE at which N2D3P9 gets doubled. Call it the DATE. My mapping from George's formula was all very rough and I ignored the J (balance) term. Maybe bump DATE up to 9.

ATE	N2D3P9 multiplier = 2^(2^(ATE-DATE)) where DATE = 9
0	1.00135472
1	1.002711275
2	1.005429901
3	1.010889286
4	1.021897149
5	1.044273782
6	1.090507733
7	1.189207115
8	1.414213562
9	2
10	4
11	16
12	256
13	65536
14	4294967296
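Both multiplier tables above can be regenerated from a one-line function, with the doubling threshold DATE as a parameter (the function name is mine):

```python
def ate_multiplier(ate, date=9):
    """N2D3P9 multiplier 2^(2^(ATE - DATE)); at ATE = DATE it is exactly 2."""
    return 2 ** (2 ** (ate - date))
```

ate_multiplier(10, date=7.5) recovers the 50.45 of the first table, and ate_multiplier(12) the 256 of the second.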

Dave Keenan wrote:
Wed Sep 09, 2020 3:09 pm
cmloegcmluin wrote:
Wed Sep 09, 2020 1:51 pm
I hope we don't regret if we do want to find a general comma no pop rank and wish we'd used it on the tinas. But honestly at this point I think my hypothetical future regretful self would accept the fact that my present impatient self felt he'd spent enough time on this problem already.
I hate to say it, but now I'm thinking the only way to validate this comma usefulness metric, and to tune its parameters (like DATE), is to apply it to all the existing extreme-precision JI notation commas under the half-apotome, and minimise the sum-of-squares or something. That means including another multiplier: 2^(2^(AAS-DAAS)).

A much harder problem would be to optimise it to maximally-justify our existing choices of one comma over another in each extreme bucket.

An interesting case (in addition to the 5s vs 19s one you mentioned) is 3C. It has N2D3P9 of 1 and ATE of 12, which results in a multiplier of 256. But I don't know what else it's competing with for that slot. No extreme comma under the half apotome has ATE > 12.
If we settle on this as our usefulness metric, I can build it into the code and/or get some updated results per half-tina.
Actually, I can see some serious floating-point overflow and underflow issues with the form:
N2D3P9 × 2^(2^(ATE-DATE)) × 2^(2^(AAS-DAAS))

Can you build it into the code as lb(N2D3P9) + 2^(ATE-DATE) + 2^(AAS-DAAS), with switches to change DATE and DAAS (separately) from their default values of 9 and 9?
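A minimal sketch of that overflow-safe form (function and parameter names here are mine; lower scores mean more useful):

```python
import math

def usefulness(n2d3p9, ate, aas, date=9, daas=9):
    """lb(N2D3P9) + 2^(ATE - DATE) + 2^(AAS - DAAS).

    Equivalent in ranking to N2D3P9 × 2^(2^(ATE-DATE)) × 2^(2^(AAS-DAAS)),
    but taking the lb avoids the float overflow/underflow noted above.
    """
    return math.log2(n2d3p9) + 2 ** (ate - date) + 2 ** (aas - daas)
```

Since lb is monotonic, sorting by this sum gives the same order as sorting by the product form.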

cmloegcmluin
Posts: 770
Joined: Tue Feb 11, 2020 3:10 pm
Location: San Francisco, California, USA
Real Name: Douglas Blumeyer
Contact:

Re: developing a notational comma popularity metric

Thanks for seeding the discussion back over here.

I can implement this function as lb(N2D3P9) + 2^(ATE-DATE) + 2^(AAS-DAAS), yes. Are you sure, however, that you want to add the term for AAS without adjusting the strength with which it and ATE apply? I might have expected you would want to keep a similar overall impact of the 3-exponent, distributing it between ATE and AAS. I do realize that Secor complexity included both of them at 2^ strength, but I didn't pay close enough attention when you were balancing N2D3P9 w/ ATE alone to see whether its simple lb() accounted for the elimination of AAS or not.

(ATE = Absolute Three Exponent, formerly referred to as abs3exp, useful for knowing how many sharps and flats are required to accompany the comma in question to notate its 2,3-equivalent pitch ratio class; AAS = Absolute Apotome Slope, more important for EDO-ability because it measures how much the comma's fraction of an apotome changes as the fifth is tempered)

From the above, it looks like you do not have a good simple answer to the question I posed:
cmloegcmluin wrote:
Wed Sep 09, 2020 5:32 am
I thought it would be a quick experiment to use the test I have already set up for verifying primary commas, but I'm not sure the approach I am testing is appropriate.
Well, I guess it really wasn't posed as a question. But I am still unsure what exactly to do with this metric, once we have it, w/r/t existing commas in Extreme. Here's the previous effort I described, from earlier in this thread:
cmloegcmluin wrote:
Mon Jun 29, 2020 9:39 am
I know we haven't settled on a metric yet, but I elected to get the infrastructure in place to check the primary commas for symbols in the JI notations, i.e. to verify that they are the most popular commas in their secondary comma zones according to this metric.

Only about half of the time was this true. But fret not: I realized that we shouldn't hold ourselves to this condition.
I'm not sure how this compares with "apply it to all the existing extreme-precision JI notation commas under the half-apotome, and minimise the sum-of-squares or something" or "A much harder problem would be to optimise it to maximally-justify our existing choices of one comma over another in each extreme bucket." I don't really know what either of those mean. You had said "I don't think we have to get into stuff like that" when I mentioned how the 5s enclave of the 19s works, but I still don't see how to avoid getting into stuff like that.

It does seem like an interesting problem... but I am unfortunately allocated to another project today, so I won't be able to attack it until tomorrow. Clearly I'm not thinking incisively yet... just gathering my thoughts and concerns is all.

Dave Keenan
Posts: 1071
Joined: Tue Sep 01, 2015 2:59 pm
Location: Brisbane, Queensland, Australia
Contact:

Re: developing a notational comma popularity metric

Here's a log-log plot of N2D3P9 versus sopfr. It shows that the rightmost line of points corresponds to
sopfr = sqrt(N2D3P9×18) = 4.2 × sqrt(N2D3P9). I believe this line corresponds to the lone primes.

So sqrt(N2D3P9) would be an alternative to lb(N2D3P9), as a substitute for sopfr in our replacement for Secor complexity.

But it would really be replacing George's G+J = sopfr(nd) + [copfr(n)-copfr(d)]×sopfr(nd)/5, so that's really what I should be plotting N2D3P9 against. I just haven't had time to extract the copfr's to do this.
Attachments
LogN2d3p9VsLogSopfr.png
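The lone-primes reading checks out algebraically: for a bare prime p/1, the p/2 numerator contribution and the p/9 greatest-prime contribution (assuming the per-prime contributions implied by the metric's name) give N2D3P9 = p²/18, while sopfr(p/1) = p, so that line is sopfr = sqrt(18 × N2D3P9) ≈ 4.24 × sqrt(N2D3P9). A quick check:

```python
import math

# For a lone prime p (2,3-reduced ratio p/1), assuming the numerator
# prime contributes p/2 and the greatest prime contributes p/9:
# N2D3P9(p/1) = p*p/18, and sopfr(p/1) = p.
for p in (5, 7, 11, 13, 17, 19, 23):
    n2d3p9_value = (p / 2) * (p / 9)
    assert math.isclose(math.sqrt(18 * n2d3p9_value), p)  # sopfr = sqrt(18 * N2D3P9)
```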

Dave Keenan
Posts: 1071
Joined: Tue Sep 01, 2015 2:59 pm
Location: Brisbane, Queensland, Australia
Contact:

Re: developing a notational comma popularity metric

cmloegcmluin wrote:
Thu Sep 10, 2020 1:18 am
I can implement this function as lb(N2D3P9) + 2^(ATE-DATE) + 2^(AAS-DAAS), yes. Are you sure, however, that you want to add the term for AAS without adjusting the strength with which it and ATE apply? I might have expected you would want to keep a similar overall impact of the 3-exponent, distributing it between ATE and AAS. I do realize that Secor complexity included both of them at 2^ strength, but I didn't pay close enough attention when you were balancing N2D3P9 w/ ATE alone to see whether its simple lb() accounted for the elimination of AAS or not.
That's a good point. I did double the contribution from 2^(ATE-DATE) to make up for omitting 2^(AAS-DAAS), back when I thought we were only applying it to tinas. So we should halve them both now. But that is equivalent to subtracting one from their exponents, which means the default values for DATE and DAAS should be 10 instead of 9.

Yes, you could also make the 2's into parameters. I think that's like how sharp the knees are.
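The halving step is just the exponent identity 0.5 × 2^(x-9) = 2^(x-10), which is why the defaults move from 9 to 10:

```python
# Halving the 2^(ATE - DATE) term is the same as raising DATE by one,
# which is why the defaults for DATE and DAAS become 10 instead of 9.
for ate in range(15):
    assert 0.5 * 2 ** (ate - 9) == 2 ** (ate - 10)
```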
From the above, it looks like you do not have a good simple answer to the question I posed:
cmloegcmluin wrote:
Wed Sep 09, 2020 5:32 am
I thought it would be a quick experiment to use the test I have already set up for verifying primary commas, but I'm not sure the approach I am testing is appropriate.
Well, I guess it really wasn't posed as a question. But I am still unsure what exactly to do with this metric, once we have it, w/r/t existing commas in Extreme. Here's the previous effort I described, from earlier in this thread:
cmloegcmluin wrote:
Mon Jun 29, 2020 9:39 am
I know we haven't settled on a metric yet, but I elected to get the infrastructure in place to check the primary commas for symbols in the JI notations, i.e. to verify that they are the most popular commas in their secondary comma zones according to this metric.
I'm not sure how this compares with "apply it to all the existing extreme-precision JI notation commas under the half-apotome, and minimise the sum-of-squares or something" or "A much harder problem would be to optimise it to maximally-justify our existing choices of one comma over another in each extreme bucket."

I don't really know what either of those mean. You had said "I don't think we have to get into stuff like that" when I mentioned how the 5s enclave of the 19s works, but I still don't see how to avoid getting into stuff like that.
We're definitely "getting into stuff like that" now that we're back in this thread.

I had forgotten that you were already set up to do this. That's brilliant. That's the thing I thought would be "a much harder problem". Forget what I wrote above about sum of squares. We just need to fiddle with the functions and parameters of this usefulness metric to maximise the number of primary commas for symbols in the JI notations that are the most "useful" commas in their secondary comma zones according to this metric.

Since we'd be making the function fit our existing choices, we can't use it to justify them, except insofar as they fit some simple consistent metric and are not merely random. But the main thing it would let us do is justify the choice of tina commas on the basis that they are consistent with the other comma assignments.

cmloegcmluin
Posts: 770
Joined: Tue Feb 11, 2020 3:10 pm
Location: San Francisco, California, USA
Real Name: Douglas Blumeyer
Contact:

Re: developing a notational comma popularity metric

Dave Keenan wrote:
Thu Sep 10, 2020 10:33 pm
Here's a log-log plot of N2D3P9 versus sopfr. It shows that the rightmost line of points corresponds to
sopfr = sqrt(N2D3P9×18) = 4.2 × sqrt(N2D3P9). I believe this line corresponds to the lone primes.
I was thinking something like that. And probably the other straight lines are lone primes over 5, lone primes over 7, etc.
So sqrt(N2D3P9) would be an alternative to lb(N2D3P9), as a substitute for sopfr in our replacement for Secor complexity.
Is that another puzzle? I don't quite see the relationship between sqrt and the log-log plot.
But it would really be replacing George's G+J = sopfr(nd) + [copfr(n)-copfr(d)]×sopfr(nd)/5, so that's really what I should be plotting N2D3P9 against. I just haven't had time to extract the copfr's to do this.
Yes, that makes sense.
Dave Keenan wrote:
Thu Sep 10, 2020 11:24 pm
I did double the contribution from 2^(ATE-DATE) to make up for omitting 2^(AAS-DAAS), back when I thought we were only applying it to tinas. So we should halve them both now. But that is equivalent to subtracting one from their exponents, which means the default values for DATE and DAAS should be 10 instead of 9.
Ah, easy peasy. Thanks for explaining.
Yes, you could also make the 2's into parameters. I think that's like how sharp the knees are.
Is that what they call them down under? I usually call them elbows.

Looking into it, though, it seems some people consider elbows to be pointing down while knees point up. But "knee" also seems to be the more popular term overall, perhaps when people use it generically (regardless of curve direction).

Yes I think you're right that the 2's control the knee sharpness, and yes I can make those configurable.
I had forgotten that you were already set up to [optimise it to maximally-justify our existing choices of one comma over another in each extreme bucket]. That's brilliant. That's the thing I thought would be "a much harder problem". Forget what I wrote above about sum of squares. We just need to fiddle with the functions and parameters of this usefulness metric to maximise the number of primary commas for symbols in the JI notations that are the most "useful" commas in their secondary comma zones according to this metric.
Aaaaaaand now I get what you mean by "justify". I was thinking you meant something akin to textual justification, like you wanted the commas centered within their zones (why is this called "justification" in the context of aligning text in a column, anyway???). Now I get that you meant justify in the sense of defending our decisions. Which I blame only myself for not getting in the first place, because that is certainly the type of justification we've been dealing with throughout this process.
Since we'd be making the function fit our existing choices, we can't use it to justify them, except insofar as they fit some simple consistent metric and are not merely random. But the main thing it would let us do is justify the choice of tina commas on the basis that they are consistent with the other comma assignments.
Yes that makes sense. We're no longer justifying our tina commas in the scope of How Music Is; only in the context of How Sagittal Is.

But I think this is fine. We're not being "wicked" as we spoke about before. As long as we don't advertise this metric as anything otherwise. Just getting our job done.