developing a notational comma popularity metric

Post by cmloegcmluin »

Dave Keenan wrote: Fri Aug 21, 2020 8:47 am I see lots of "[math]...[/math]" when I view it using Chrome on my Android phone, but it looks great when I view it with Firefox or Chrome on my Windows machine.
You have to scroll all the way to the bottom of the page and click the link to request the desktop version of the site. For whatever reason the MathJax stuff doesn't work in the mobile version, which the site serves to mobile browsers by default. I learned that a few months ago from Paul, I think, when I was working on my metallic MOS page.
We still need to add a list of ratios with their N2D3P9 (and perhaps their rank according to N2D3P9 and their archive rank and votes). And perhaps a graph or two.
Oh right! Yes.
  • I can get the table of top 5-rough ratios up there. I agree that including ranks will be good, partly as an illustration of the fractional ranking scheme. What do you think – should I include long ASCII forms of symbols in Sagittal JI notation?
  • I think your graph of N2D3P9 against the actual Scala stats would be nice. You have an N2D3P1 vs the stats graph here and then N2D3P9 but w/ the axes switched and on a logarithmic scale here. It would be nice to see the 4%-fewer-on-average effect visualized, more like the full version of the graph here.
  • Maybe a histogram of the Scala stats, showing votes per ratio (maybe not all 820 of them... just the first 100 or something), so you can visualize the Zipf's law effect.
  • The charts you produced for the individual weightings of primes were really interesting but they may be a bit tangential to the final results for the Wiki page... but let me know if you think otherwise.
I think I will need to re-register to be able to edit on the Wiki. My existing account, if it still exists, will only have my old email address which I no longer have access to. I will investigate when I have time.
If you want access to your old account, you might want to ping Mike. I'm pretty sure he's an admin and might be able to help.
We need to change "The 5-rough-ratio notational popularity ranking function that had been used by the creators of Sagittal was sopfr" to "An earlier 5-rough-ratio notational popularity ranking function that had been used by the creators of Sagittal was sopfr".

And I suggest changing "a single figure obtained for each 5-rough ratio (representing the class)" to "a single figure obtained for each 5-rough superunison ratio (representing the class)".
Agreed and updated.
I think it would be good to announce it on the Microtonal Theory facebook group, when we both agree it is ready. I think the comma-no-pop-rank will be of even less general interest (even more specific to Sagittal notation design).
Good point. Yes, I was just thinking about this the other night. Were Sagittal nothing but a JI notation, we'd be done. But Sagittal aspires to notate pitch systems with fifths other than exactly 3/2 which is why we care about apotome slope. What I was wondering to myself was whether we should factor limma slope into the comma-no-pop-rank too, or if we need to factor them in differently depending on the symbol. But of course I'm teetering dangerously on the precipice of working on the comma-no-pop-rank metric here... which I'm not... supposed... to do yet...

FWIW, a couple of my coworkers did in fact think it was somewhat cool that I had helped develop a way to estimate the popularity of a musical pitch ratio. It's certainly at least a concept whose value can be easily explained to someone with little interest in composing or helping to compose microtonal music.
Post by Dave Keenan »

cmloegcmluin wrote: Fri Aug 21, 2020 10:38 am
We still need to add a list of ratios with their N2D3P9 (and perhaps their rank according to N2D3P9 and their archive rank and votes). And perhaps a graph or two.
Oh right! Yes.

• I can get the table of top 5-rough ratios up there. I agree that including ranks will be good, partly as an illustration of the fractional ranking scheme. What do you think – should I include long ASCII forms of symbols in Sagittal JI notation?
No. I've found that people get the wrong idea and think the ASCII is the staff notation. Can we put small gif, png, or jpg files (identical to the Sagittal smilies) inline with the text? If we did, I'd want to only show one symbol per 5-rough ratio — the one from the lowest-numbered subset. And I'd want to show Spartan symbols as level 0. And I'd want to briefly explain what might seem to the reader to be embarrassing gaps, e.g. that 29/1 clashes with 7/1.
• I think your graph of N2D3P9 against the actual Scala stats would be nice. You have an N2D3P1 vs the stats graph here and then N2D3P9 but w/ the axes switched and on a logarithmic scale here. It would be nice to see the 4%-fewer-on-average effect visualized, more like the full version of the graph here.
Some version of this is probably the only one I think worthwhile (in addition to the Zipf graph mentioned below). It would be part of the Justification section.



The log axes were an approximation of what they really should be, namely reciprocal (1/x and 1/y) axes, so that the weighted error is the same size everywhere on the graph.

I don't think the 4% thing is that important.
• Maybe a histogram of the Scala stats, showing votes per ratio (maybe not all 820 of them... just the first 100 or something), so you can visualize the Zipf's law effect.
Like this one, but with slash notation for the ratios? Yeah I think that's worthwhile. I'll make both graphs. Would you please do the table.


• The charts you produced for the individual weightings of primes were really interesting but they may be a bit tangential to the final results for the Wiki page... but let me know if you think otherwise.
Yeah. Probably too much detail. But there probably should be a link to the Sagittal forum thread somewhere in the Wiki article.

I have registered with the Wiki and made a tiny change to the article.
... Sagittal aspires to notate pitch systems with fifths other than exactly 3/2 which is why we care about apotome slope. What I was wondering to myself was whether we should factor limma slope into the comma-no-pop-rank too, or if we need to factor them in differently depending on the symbol.
No. All the limma-fraction notations that will ever be needed have already been designed (the 12 rose EDOs on the periodic table). Whereas apotome-fraction notations are open-ended.
But of course I'm teetering dangerously on the precipice of working on the comma-no-pop-rank metric here... which I'm not... supposed... to do yet...
Noooo. :)
FWIW, a couple of my coworkers did in fact think it was somewhat cool that I had helped develop a way to estimate the popularity of a musical pitch ratio. It's certainly at least a concept whose value can be easily explained to someone with little interest in composing or helping to compose microtonal music.
That's great to hear. You should indeed be enjoying a warm inner glow over what you achieved here. You discovered it. Not me. It was you who built the Large Function Collider. :-)
cmloegcmluin wrote: Tue Aug 11, 2020 10:40 am
Dave Keenan wrote: I feel like I'm [trying to be] Peter Higgs, but you've built the Large Hadron Collider. :) The Large Function Collider?
What I've built is complex, but honestly, pretty dumb. Not that I'm saying I'm dumb for building it. I'm just saying that my approach is fundamentally one of brute force. The artificial life type stuff the Excel solver is doing is a more intellectually interesting approach. There are clearly immense depths of thought-provoking possibilities for automating searches intelligently in such problem spaces which I only have the foggiest notion of at this time. One day I'd like to master this stuff. Perhaps one day I can look back fondly on this project and chuckle at myself a bit.
Brute force is exactly why I made the analogy with the LHC. How much more brute force could you get than smashing things together at 0.999999991 of the speed of light 600 million times a second and examining the shrapnel? :o
Post by cmloegcmluin »

Dave Keenan wrote: Sat Aug 22, 2020 5:59 pm
cmloegcmluin wrote: Fri Aug 21, 2020 10:38 am What do you think – should I include long ASCII forms of symbols in Sagittal JI notation?
No. I've found that people get the wrong idea and think the ASCII is the staff notation. Can we put small gif, png, or jpg files (identical to the Sagittal smilies) inline with the text?
You might recall that I did a quick experiment with such small image files to see if they could fill the role of smilies for the Sagittal documentation (that thing I started building in GitBook). It seemed successful enough: they can indeed be inserted inline, the resolution looks nice, and it doesn't badly disrupt the line spacing (as we see MathJax can do). The only problem I can see so far is that you can't achieve tight horizontal spacing per GitBook's implementation, unless I can hack that somehow. I also haven't tested whether pages stop working well once you've added dozens or hundreds of such symbol images. So I was planning to continue creating such small images for each of the symbols and upload them to the docs, and as long as I plan to do that, it probably wouldn't take too long to then upload all of them to the Xen Wiki too (and I think others could then technically also use them on their Xen Wiki pages, or at least I could in the pages I've created for my scales).
If we did, I'd want to only show one symbol per 5-rough ratio — the one from the lowest-numbered subset. And I'd want to show Spartan symbols as level 0.
Can do.

I assume you'd only want to show the lowest introducing level, too, then — the one corresponding to the single displayed symbol?

Re: the zero- or one- indexing of the levels: why not just spell out the level name (Medium, High, etc.)?
And I'd want to briefly explain what might seem to the reader to be embarrassing gaps, e.g. that 29/1 clashes with 7/1.
I'll leave that up to you. I think I know what you mean but I'm sure you'll be more sensitive to exactly how this should be phrased.
• Maybe a histogram of the Scala stats, showing votes per ratio (maybe not all 820 of them... just the first 100 or something), so you can visualize the Zipf's law effect.
Like this one, but with slash notation for the ratios?
Yes. Huh. I thought I had updated my table here to use slashes. Oops.

I fixed those, and zero-indexed the precision levels too. I didn't consolidate to only the earliest-introduced symbol though. We here on the forum can stomach them all. :)
Yeah I think that's worthwhile. I'll make both graphs. Would you please do the table.
Yes, on it. I had just realized there was an issue in the code.

I realized that my implementation of the maximum denominator exponent method had a flaw. When finding the maximum exponent for a given denominator prime and maximum N2D3P9 combination, we need a maximum value the numerator can reach (otherwise those two lists of numerators with greater and lesser gpf, sorted by N2P and N2 respectively, would be infinitely long). It's no big deal to find it, but for whatever reason at some point (I reviewed posts here, here, here, here, and here but found no evidence of us discussing this) I had apparently decided I only needed to look at numerators that were powers of single primes. That is incorrect. The first counterexample is when one asks for the maximum numerator when the maximum N2D3P9 is 7. That would be 35, because N2D3P9(35) = 6.81; we don't hit the next power of a single prime until N2D3P9 = 8.68 (for n = 125) and then N2D3P9 = 9.53 (for n = 49).

For maximum N2D3P9 = 136 it was okay, I think, because you intentionally chose a value that was just over the N2D3P9 of a power of 5. But for other max N2D3P9 values it is broken.
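
To make that counterexample concrete, here's a little self-contained TypeScript sketch. It assumes the product-over-primes form of N2D3P9 we've been working with (each numerator prime over 2, each denominator prime over 3, times the greatest prime over 9); the names and the truncated prime list are just for illustration.

[code]
// A concrete check of the counterexample above. n2d3p9() is the
// product-over-primes form: each numerator prime over 2, each denominator
// prime over 3, times the greatest prime factor over 9.

const FIVE_ROUGH_PRIMES = [5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47];

// Factor n over the listed primes (with repetition); returns [] for n = 1.
const factorize = (n: number): number[] => {
    const factors: number[] = [];
    let remaining = n;
    for (const prime of FIVE_ROUGH_PRIMES) {
        while (remaining % prime === 0) {
            factors.push(prime);
            remaining /= prime;
        }
    }
    return factors;
};

const n2d3p9 = (numerator: number, denominator = 1): number => {
    const numeratorFactors = factorize(numerator);
    const denominatorFactors = factorize(denominator);
    const gpf = Math.max(1, ...numeratorFactors, ...denominatorFactors);
    return (
        numeratorFactors.reduce((product, prime) => product * (prime / 2), 1) *
        denominatorFactors.reduce((product, prime) => product * (prime / 3), 1) *
        (gpf / 9)
    );
};

// Reproduces the values quoted above:
console.log(n2d3p9(35), n2d3p9(125), n2d3p9(49)); // ≈ 6.81, 8.68, 9.53

// For a max N2D3P9 of 7, the largest qualifying numerator is 35, but the
// largest qualifying power of a single prime is only 25, hence the flaw.
const isFiveRough = (n: number): boolean =>
    factorize(n).reduce((product, prime) => product * prime, 1) === n;
const candidates = Array.from({length: 200}, (_, index) => index + 2).filter(isFiveRough);
const underCap = candidates.filter((n) => n2d3p9(n) < 7);
const primePowersUnderCap = underCap.filter((n) => new Set(factorize(n)).size === 1);
console.log(Math.max(...underCap), Math.max(...primePowersUnderCap)); // 35, 25
[/code]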
But there probably should be a link to the Sagittal forum thread somewhere in the Wiki article.
Added.
I have registered with the Wiki and made a tiny change to the article.
I see it! Great.
... Sagittal aspires to notate pitch systems with fifths other than exactly 3/2 which is why we care about apotome slope. What I was wondering to myself was whether we should factor limma slope into the comma-no-pop-rank too, or if we need to factor them in differently depending on the symbol.
No. All the limma-fraction notations that will ever be needed have already been designed (the 12 rose EDOs on the periodic table). Whereas apotome-fraction notations are open-ended.
Mmm, indeed, good point. Man I really am having a hard time stopping myself from thinking about the comma-no-pop-rank metric...
You discovered it. Not me. It was you who built the Large Function Collider. :-)
Brute force is exactly why I made the analogy with the LHC. How much more brute force could you get than smashing things together at 0.999999991 of the speed of light 600 million times a second and examining the shrapnel? :o
Whether or not it's fair to say neither of us could have arrived at N2D3P9 without the other, surely it's fair to say neither of us would have.
Post by Dave Keenan »

cmloegcmluin wrote: Sun Aug 23, 2020 6:00 am You might recall that I did a quick experiment with such small image files to see if they could fill the role of smilies for the Sagittal documentation (that thing I started building in GitBook). It seemed successful enough: they can indeed be inserted inline, the resolution looks nice, and it doesn't badly disrupt the line spacing (as we see MathJax can do). The only problem I can see so far is that you can't achieve tight horizontal spacing per GitBook's implementation, unless I can hack that somehow. I also haven't tested whether pages stop working well once you've added dozens or hundreds of such symbol images.
I had forgotten about that. Good work.
So I was planning to continue creating such small images for each of the symbols and upload them to the docs, and as long as I plan to do that, it probably wouldn't take too long to then upload all of them to the Xen Wiki too (and I think others could then technically also use them on their Xen Wiki pages, or at least I could in the pages I've created for my scales).
Why do you need to create them? Why can't you use the same files as are used here in the Sagittal forum?
I assume you'd only want to show the lowest introducing level, too, then — the one corresponding to the single displayed symbol?
Yes. I would have. But in fact I've chickened out even further, and now don't want to show any symbols outside the Spartan set, to avoid the freakout factor. It's not meant to be an article about Sagittal.
Re: the zero- or one- indexing of the levels: why not just spell out the level name (Medium, High, etc.)?
Yes, you could do that, if we still needed that column. But I failed to convey my point, which was that you were not distinguishing Spartan (low) from Athenian (high) [Edit: (medium)]. You had them both as level 1, which is correct for Athenian (according to the numbering of the Scala SA_JI.par files). I wanted 0 for Spartan, and 1 for Athenian. All the others were correct. And yes, it could be Low for Spartan, Medium for Athenian, etc.

It's not really JI precision introducing-level that we care about in this article, but rather visual-complexity level, which correlates with JI precision introducing level.
And I'd want to briefly explain what might seem to the reader to be embarrassing gaps, e.g. that 29/1 clashes with 7/1.
I'll leave that up to you. I think I know what you mean but I'm sure you'll be more sensitive to exactly how this should be phrased.
I had decided the best way would be to simply fill them all in, with secondary roles, and flag that in some way. But that's irrelevant since we're only doing Spartans, i.e. only showing the symbols for the first 7 ratios, with the 1/1 symbol being blank.

Maybe we should leave out the Spartans too. Or maybe you want to talk me into including more.
Yes. Huh. I thought I had updated my table here to use slashes. Oops.

I fixed those, and zero-indexed the precision levels too. I didn't consolidate to only the earliest-introduced symbol though. We here on the forum can stomach them all. :)
Thanks. I think N2D3P9 < 100 is the right amount to include in the article.
Yes, on it. I had just realized there was an issue in the code.

I realized that my implementation of the maximum denominator exponent method had a flaw. When finding the maximum exponent for a given denominator prime and maximum N2D3P9 combination, we need a maximum value the numerator can reach ...
I was a bit surprised at how quickly you got that working, as the generating of those two lists is a major undertaking in itself (which is why it saves so much time later).

I see I didn't really spell it out, but my intention was that you would first generate a list of all the possible numerators for 5-rough superunison ratios having N2D3P9 < 3501. To do that, you apply the formula ⌊log_(p/2)(N×9/2)⌋ − 1 to obtain the maximum exponents for all primes (up to 251). Then you generate all possible monzos for integers with prime exponents in those ranges (not merely prime powers), and throw away most of them because they could not be numerators for N2D3P9 < 3501, i.e. because their N2P is greater than 3501×9. Maybe 3501 is too high. Feel free to choose something lower.
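
In case it helps, here's a rough TypeScript sketch of that enumeration as I picture it (illustrative names only; the 3501 cap and the 251 prime limit are the ones above). Rather than building the full Cartesian product of exponents and culling only at the end, it culls by N2P as it goes, which is safe because N2P can only grow as further factors are multiplied in.

[code]
// Sketch of the numerator enumeration described above. A candidate's N2P is
// the product of (prime / 2) over its factors, times its greatest prime
// factor; anything with N2P > maxN2D3P9 × 9 cannot be the numerator of a
// ratio with N2D3P9 < maxN2D3P9, so it is dropped as soon as it appears.

const MAX_N2D3P9 = 3501;

const primesUpTo = (limit: number): number[] => {
    const isPrime = (n: number): boolean => {
        for (let divisor = 2; divisor * divisor <= n; divisor++) {
            if (n % divisor === 0) return false;
        }
        return n > 1;
    };
    const primes: number[] = [];
    for (let n = 5; n <= limit; n++) {
        if (isPrime(n)) primes.push(n); // 5-rough primes only
    }
    return primes;
};

// ⌊log_(p/2)(N × 9/2)⌋ − 1, the per-prime exponent cap
const maxExponent = (prime: number, maxN2d3p9: number): number =>
    Math.floor(Math.log((maxN2d3p9 * 9) / 2) / Math.log(prime / 2)) - 1;

interface Candidate {
    value: number; // the numerator itself
    n2: number;    // product of (prime / 2) over its factors, with repetition
    gpf: number;   // greatest prime factor (1 for the empty product)
}

const possibleNumerators = (maxN2d3p9: number): number[] => {
    const n2pCap = maxN2d3p9 * 9;
    let candidates: Candidate[] = [{value: 1, n2: 1, gpf: 1}];
    for (const prime of primesUpTo(251)) {
        const exponentCap = maxExponent(prime, maxN2d3p9);
        const extended: Candidate[] = [];
        for (const candidate of candidates) {
            extended.push(candidate); // exponent 0: leave it unchanged
            let next = candidate;
            for (let exponent = 1; exponent <= exponentCap; exponent++) {
                next = {value: next.value * prime, n2: next.n2 * (prime / 2), gpf: prime};
                if (next.n2 * next.gpf > n2pCap) break; // N2P already too big
                extended.push(next);
            }
        }
        candidates = extended;
    }
    return candidates.map(({value}) => value).sort((a, b) => a - b);
};

console.log(possibleNumerators(MAX_N2D3P9).length);
[/code]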
Whether or not it's fair to say neither of us could have arrived at N2D3P9 without the other, surely it's fair to say neither of us would have.
Yes.
Post by cmloegcmluin »

Dave Keenan wrote: Sun Aug 23, 2020 8:51 am Why can't you use the same files as are used here in the Sagittal forum?
Higher resolution is all.
...[I] don't want to show any symbols outside the Spartan set, to avoid the freakout factor. It's not meant to be an article about Sagittal.
Honestly I wondered if we should include them at all. If folks want details w/r/t Sagittal, they can come here.
I failed to convey my point, which was that you were not distinguishing Spartan (low) from Athenian (high)
Ah! Indeed you're right. I hadn't noticed that. I forgot we'd eliminated the Low/Spartan JI precision level. Well, hadn't forgotten so much as it wasn't at the forefront of my mind when I did a lookup of each of these symbols in my code's representation of the JI notation data.
Maybe we should leave out the Spartans too. Or maybe you want to talk me into including more.
Nah, I think we should focus the table on the estimated popularity rank of the ratios.

You tend to think more strategically/politically than I do about this stuff. But I think the way the article is positioned now is pretty great. We're clear and up front about having developed this as part of work on Sagittal. But then we leave Sagittal out of it otherwise.

Perhaps we should add a tidbit in there somewhere simply stating that after developing N2D3P9 and then comparing the results with how Sagittal had been designed with SoPF>3, we found that only a couple changes would be good to make. We don't want to act like it's "almost no changes" because we don't want to come across like we tailored it to fit our own data. But we also don't want to spoil the "we've been more or less solid for 20 years" vibe that Sagittal has going on either. Maybe it would help to explain that we were motivated to develop the improved metric in order to make better decisions about new symbols we were adding for a new level of JI precision Sagittal hadn't bitten off yet.
I was a bit surprised at how quickly you got that working, as the generating of those two lists is a major undertaking in itself (which is why it saves so much time later).
Pretty much here's how it worked: I read your instructions over and over, sometimes slowly, sometimes quickly, probably about a dozen times, until nothing merely made vague sense anymore and I felt I had the entire process crystal clear in my head. That part probably took an hour.

Then I just flowed everything out, in a single giant file, ugly as sin, with no tests or anything, and to my surprise, it pretty much worked straight away. That may've only taken a half hour.

Then I've been spending the past, what, three days just cleaning the darn thing up so it meets my quality standards, i.e. so that I'll have the best chance of understanding what in tarnation is happening if I ever need to fiddle with it again one day. (I've started adding comments in the code linking to posts on the forum, which has already been proving handy).
I see I didn't really spell it out, but my intention was that you would first generate a list of all the possible numerators for 5-rough superunison ratios having N2D3P9 < 3501. To do that, you apply the formula ⌊log_(p/2)(N×9/2)⌋ − 1 to obtain the maximum exponents for all primes (up to 251). Then you generate all possible monzos for integers with prime exponents in those ranges (not merely prime powers), and throw away most of them because they could not be numerators for N2D3P9 < 3501, i.e. because their N2P is greater than 3501×9. Maybe 3501 is too high. Feel free to choose something lower.
I haven't added the layer to draw from an almost-certainly-enough pre-calculated pair of lists yet. It's a trade-off: big lists will increase the bundle size, slowing down the loading of the app. But maybe none of this code will make it into the bundle for the calculator web app, so maybe that's a moot point. And I could certainly use a speed up on this new "popular ratios" command; its test is so slow that it put me over the edge such that I felt compelled to finally write in a test reporter to summarize the tests which take the longest to run (and indeed it had beaten out the previous slowest test, the one for best metrics from a search scope, from the part of the codebase that was built to find N2D3P9 in the first place).

But the process you describe here is almost exactly what I did for enumerating the possible numerators. Only difference is that at the end I throw away most of them by checking their N2D3P9, not their N2P — not because the D3 or the 9 have any effect (we both know they don't), but just because it's slightly less awkward to do it that way given the way I organized the code.
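
Just to spell out why the two filters coincide for a bare numerator n (i.e. the ratio n/1): the denominator term is empty and the division by 9 merely rescales the cutoff, so

[math]\text{N2D3P9}(n/1) = \tfrac{1}{9}\,\text{N2P}(n), \qquad \text{N2D3P9}(n/1) < N \iff \text{N2P}(n) < 9N.[/math]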
Post by Dave Keenan »

cmloegcmluin wrote: Sun Aug 23, 2020 10:24 am Honestly I wondered if we should include them at all. If folks want details w/r/t Sagittal, they can come here.
...
Nah, I think we should focus the table on the estimated popularity rank of the ratios.
...
You tend to think more strategically/politically than I do about this stuff. But I think the way the article is positioned now is pretty great. We're clear and up front about having developed this as part of work on Sagittal. But then we leave Sagittal out of it otherwise.
That's agreed then. No Sagittal symbols in the article.
Perhaps we should add a tidbit in there somewhere simply stating that after developing N2D3P9 and then comparing the results with how Sagittal had been designed with SoPF>3, we found that only a couple changes would be good to make. We don't want to act like it's "almost no changes" because we don't want to come across like we tailored it to fit our own data. But we also don't want to spoil the "we've been more or less solid for 20 years" vibe that Sagittal has going on either. Maybe it would help to explain that we were motivated to develop the improved metric in order to make better decisions about new symbols we were adding for a new level of JI precision Sagittal hadn't bitten off yet.
That's a brilliant idea. Yes. Please add it.
I haven't added the layer to draw from an almost-certainly-enough pre-calculated pair of lists yet. It's a trade-off: big lists will increase the bundle size, slowing down the loading of the app. But maybe none of this code will make it into the bundle for the calculator web app, so maybe that's a moot point. And I could certainly use a speed up on this new "popular ratios" command; its test is so slow that it put me over the edge such that I felt compelled to finally write in a test reporter to summarize the tests which take the longest to run (and indeed it had beaten out the previous slowest test, the one for best metrics from a search scope, from the part of the codebase that was built to find N2D3P9 in the first place).
I was assuming there would only be a few hundred different numerators for all ratios with N2D3P9 < 3501.
But the process you describe here is almost exactly what I did for enumerating the possible numerators. Only difference is that at the end I throw away most of them by checking their N2D3P9, not their N2P — not because the D3 or the 9 have any effect (we both know they don't), but just because it's slightly less awkward to do it that way given the way I organized the code.
Then I don't understand why you had the issue/flaw in the code that you describe here: viewtopic.php?p=2303#p2303

And I don't understand why you say "we need a maximum value the numerator can reach (otherwise those two lists of numerators with greater and lesser gpf, sorted by N2P and N2 respectively, would be infinitely long)."

If you're going to always compute those 2 lists on the fly, surely you just limit them to numerators of ratios with the same maximum value of N2D3P9 as the denominator exponent ranges you're looking for.
Post by cmloegcmluin »

Dave Keenan wrote: Sun Aug 23, 2020 11:34 am That's agreed then. No Sagittal symbols in the article.
The table is up.

I also updated the earlier table on the forum to include the estimated rank and the actual rank.

I realized what you actually wanted was the lowest symbol set which supports each symbol, not the introducing level for each symbol. Which only affects the Spartan symbols, pulling a bunch of them down from 1 to 0, but :|\ \: gets pulled down from 2 to 0 since it is in Spartan but doesn't appear until the Ultra precision level.
Perhaps we should add a tidbit in there somewhere simply stating that after developing N2D3P9 and then comparing the results with how Sagittal had been designed with SoPF>3, we found that only a couple changes would be good to make. We don't want to act like it's "almost no changes" because we don't want to come across like we tailored it to fit our own data. But we also don't want to spoil the "we've been more or less solid for 20 years" vibe that Sagittal has going on either. Maybe it would help to explain that we were motivated to develop the improved metric in order to make better decisions about new symbols we were adding for a new level of JI precision Sagittal hadn't bitten off yet.
That's a brilliant idea. Yes. Please add it.
Also added. Last paragraph of the development / discovery section if you want to review it.

Note: there seems to be a bug on the Xen Wiki. I got this occasionally back when I was working on the metallic MOS page. There's the two different edit modes: Edit, and Edit Source. Edit gives you the slicker UI and is more WYSIWYG. But sometimes Edit Source can just be easier so you know what the deal is under the hood and you get it just right or assemble elsewhere and paste it in. It seems that sometimes when I use Edit mode, it botches existing stuff I'd added with Edit Source. In this case, it yanked out a ton of the MathJax I'd added, ripping it from the place it belonged, and smooshing it all together at the bottom of the page. So then I had to go and break down this blob, find all the places these pieces had been ripped from, and put them back. Annoying! Watch out for it.
I haven't added the layer to draw from an almost-certainly-enough pre-calculated pair of lists yet. It's a trade-off: big lists will increase the bundle size, slowing down the loading of the app. But maybe none of this code will make it into the bundle for the calculator web app, so maybe that's a moot point. And I could certainly use a speed up on this new "popular ratios" command; its test is so slow that it put me over the edge such that I felt compelled to finally write in a test reporter to summarize the tests which take the longest to run (and indeed it had beaten out the previous slowest test, the one for best metrics from a search scope, from the part of the codebase that was built to find N2D3P9 in the first place).
I was assuming there would only be a few hundred different numerators for all ratios with N2D3P9 < 3501.
But the process you describe here is almost exactly what I did for enumerating the possible numerators. Only difference is that at the end I throw away most of them by checking their N2D3P9, not their N2P — not because the D3 or the 9 have any effect (we both know they don't), but just because it's slightly less awkward to do it that way given the way I organized the code.
Then I don't understand why you had the issue/flaw in the code that you describe here: viewtopic.php?p=2303#p2303

And I don't understand why you say "we need a maximum value the numerator can reach (otherwise those two lists of numerators with greater and lesser gpf, sorted by N2P and N2 respectively, would be infinitely long)."
Bah, that's my fault. Sorry for the confusion:
  1. I told you that I had written code with a flaw.
  2. I fixed the flaw.
  3. You described a strategy for fixing the flaw.
  4. I responded saying that's what my code does. Re-reading what I wrote, I didn't make it clear at all that I had only just today made that so. This is further compounded by the other part of my story in which I claim that I got things pretty much right within the first half-hour a few days ago; the disclaimer on that should have been "except for this flaw, which only affected situations where I didn't have a manually prepared table on the forum to check my work against".
If you're going to always compute those 2 lists on the fly, surely you just limit them to numerators of ratios with the same maximum value of N2D3P9 as the denominator exponent ranges you're looking for.
That is correct. But don't call me Shirley.

That joke doesn't work as well in text.

I tried to get you to do it to me in the previous post. Perhaps you considered it, but thought better of it.
Post by cmloegcmluin »

Hm, should we add columns for sopfr and estimated rank by sopfr?

Would you like to be the one to share the page on the Facebook group? I think your name has a bit more sway there. Of course please feel free to tag me in it.
Post by cmloegcmluin »

I noticed that 65/1 is the first 5-rough ratio with an estimated rank which is quite different from its real rank (22 vs 50). This struck me as interesting because 65/1 is really close to 2/1, so perhaps in practice it's not a particularly popular ratio despite being relatively simple by the numeric calculations within N2D3P9's formula. In other words, N2D3P9 does not account for harmonic concepts like this. We only briefly discussed dealing with such things. Just wondering if your feelings have changed. I assume not. But perhaps we should mention this somewhere in the justification section of the Xen Wiki article.
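
For reference, using the product-over-primes form of the formula (no denominator term for 65/1):

[math]\text{N2D3P9}(65/1) = \frac{5}{2} \cdot \frac{13}{2} \cdot \frac{13}{9} = \frac{845}{36} \approx 23.5[/math]

which is the modest value that puts it up at estimated rank 22 in the first place.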
Post by Dave Keenan »

cmloegcmluin wrote: Mon Aug 24, 2020 4:11 am The table is up.

I also updated the earlier table on the forum to include the estimated rank and the actual rank.
Thanks for those:
https://en.xen.wiki/w/N2D3P9#Table_of_T ... .29_Ratios
viewtopic.php?p=2246#p2246

I'd prefer the heading "Scala archive rank" rather than "actual rank". Who's to say what's "actual" or "real" here? And I'd prefer "N2D3P9 rank" rather than "estimated rank", since there are other ways to estimate the Scala archive rank (like sopfr), and who knows if estimating the Scala archive rank is even the right thing to do, in the case of less popular ratios.

Now that I see the table, with its 604.5 and 329 Scala archive ranks, I think it would be good to include another column to the right which is "Scala archive occurrences". So people can see they shouldn't take too much notice of such ranks, given that they correspond to 1 or 2 occurrences.

I'm having thoughts about a graph where I plot Scala archive occurrences against N2D3P9 notional occurrences or implied occurrences or something, but as well as not being sure what to call it, I don't know whether it should be proportional to 1/N2D3P9^1.37 or 1/N2D3P9_rank^1.37. Any thortz?
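
Something like this sketch is what I have in mind for comparing the two options (illustrative names only; the 1.37 exponent is the one above, and each candidate series is scaled so its total matches the total archive votes, so both can be overlaid on the actual occurrences):

[code]
// Illustrative sketch: two candidate "implied occurrences" series, one
// proportional to 1/N2D3P9^1.37 and one to 1/N2D3P9_rank^1.37, each scaled
// so that its total equals the total of the actual Scala archive votes.

interface RatioStats {
    ratio: string;      // e.g. "5/1", "7/5"
    n2d3p9: number;     // its N2D3P9 value
    n2d3p9Rank: number; // its (fractional) rank by N2D3P9
    votes: number;      // actual Scala archive occurrences
}

const ZIPF_EXPONENT = 1.37;

const impliedOccurrences = (stats: RatioStats[], byRank: boolean): number[] => {
    const raw = stats.map(({n2d3p9, n2d3p9Rank}) =>
        1 / (byRank ? n2d3p9Rank : n2d3p9) ** ZIPF_EXPONENT,
    );
    const totalVotes = stats.reduce((sum, {votes}) => sum + votes, 0);
    const totalRaw = raw.reduce((sum, value) => sum + value, 0);
    return raw.map((value) => (value * totalVotes) / totalRaw);
};
[/code]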
I realized what you actually wanted was the lowest symbol set which supports each symbol, not the introducing level for each symbol. Which only affects the Spartan symbols, pulling a bunch of them down from 1 to 0, but :|\ \: gets pulled down from 2 to 0 since it is in Spartan but doesn't appear until the Ultra precision level.
Right. I suppose we could also say "introducing subset" versus "introducing precision level". Thanks for consistentising that terminology. Sorry I'm still learning it. I think you mean :/ /|: , not :|\ \: .
Also added. Last paragraph of the development / discovery section if you want to review it.
The paragraph in question is:
"After deciding upon N2D3P9, the Sagittal forum members checked Sagittal against it, to see how well they'd been served by sopfr. Each symbol in Sagittal's JI notations has a default value, or primary comma, which allows it to exactly notate ratios in a 5-rough ratio equivalence class, and based on N2D3P9, it was found that only a couple of these commas should be changed (these were among the rarest-used symbols in Sagittal). This was as expected; N2D3P9 was developed primarily in order to add new symbols to Sagittal, to enable it to exactly notate even rarer JI pitches than it already does."

The phrase "the Sagittal forum members" strikes me as very odd here, and I'd prefer it was replaced with "we" or "the authors" or "Blumeyer and Keenan".. This also makes me realise we have a mixture of active and passive voice in this article. I think we should choose one and be consistent about it. I am conflicted between the fact that it is a wiki post so in theory anyone can edit, in which case "we" or "the authors" may not be the people who did the research described, and the fact that it is intended to educate, in which case the use of "we" results in a much more engaging story.
Note: there seems to be a bug on the Xen Wiki. ... sometimes when I use Edit mode, it botches existing stuff I'd added with Edit Source. In this case, it yanked out a ton of the MathJax I'd added...
Thanks for confirming what I suspected. I took one look at what came up the first time I clicked "Edit", and thought "That's going to screw up all the math expressions" and backed out of it and used "Edit source". I plan to never use "Edit".
That is correct. But don't call me Shirley.

That joke doesn't work as well in text.
It worked. :) I've seen Airplane! (1980) and I've used the gag myself. :)