tinas   name      ratio              prime exponent vector (monzo)                cents   slope    gpf   sopfr   N2D3P9
0.5     (none)
1.0     59/7n     14337/14336        [-11 5 0 -1 0 0 0 0 0 0 0 0 0 0 0 0 1⟩      0.121    4.993    59    66     451.241
1.5     2023/5n   163863/163840      [-15 4 -1 1 0 0 2⟩                          0.243    3.985    17    46     796.088
2.0     1/5831n   5832/5831          [3 6 0 -3 0 0 -1⟩                           0.297    5.982    17    38     688.382
2.5     85/121n   61965/61952        [-9 6 1 0 -2 0 1⟩                           0.363    5.978    17    44     539.645
3.0     1/455n    4096/4095          [12 -2 -1 -1 0 -1⟩                          0.423   -2.026    13    25      82.153
3.5     (none)
4.0     3025/7n   3025/3024          [-4 -3 2 -1 2⟩                              0.572   -3.035    11    39     539.178
4.5     (none)
5.0     2401/25n  2401/2400          [-5 -1 -2 4⟩                                0.721   -1.044     7    38     324.209
5.5     (none)
6.0     65/77n    2080/2079          [5 -3 1 -1 -1 1⟩                            0.833   -3.051    13    36     200.818
6.5     13/37n    9477/9472          [-8 6 0 0 0 1 0 0 0 0 0 -1⟩                 0.914    5.944    37    50     329.574
7.0     7/425n    1701/1700          [-2 5 -2 1 0 0 -1⟩                          1.018    4.937    17    34     234.144
7.5     1/2875n   2097152/2095875    [21 -6 -3 0 0 0 0 0 -1⟩                     1.055   -6.065    23    38     459.201
8.0     253/5n    20493/20480        [-12 4 -1 0 1 0 0 0 1⟩                      1.099    3.932    23    39     269.398
8.5     (none)
9.0     1/539n    131072/130977      [17 -5 0 -2 -1⟩                             1.255   -5.077    11    25      82.347
9.5     1/1295n   1296/1295          [4 4 -1 -1 0 0 0 0 0 0 0 -1⟩                1.336    3.918    37    49     665.486
Magrathean diacritics
- Dave Keenan
- Site Admin
- Posts: 2180
- Joined: Tue Sep 01, 2015 2:59 pm
- Location: Brisbane, Queensland, Australia
Re: Magrathean diacritics
Here's the full set of LATE half tinas with N2D3P9 below 896 that I found using your script. They were constrained to be within ±0.25 tinas.
- cmloegcmluin
- Site Admin
- Posts: 1700
- Joined: Tue Feb 11, 2020 3:10 pm
- Location: San Francisco, California, USA
- Real Name: Douglas Blumeyer (he/him/his)
As one of the find-commas script's most trusted advisors, I accept your fealty on its behalf. It pledges to be a good king, in the way of Cyrus the Great, respecting your existing customs and making sure being part of our empire is to your benefit.
So I ran my script overnight with max N2D3P9 of Infinity and the default max ATE and AAS of 15 and 14, respectively, and my results agree with yours. Not that I'm super surprised, since we're running very similar scripts. All I did was build a meta-script to automatically check for the least N2D3P9 LATEs among the commas for each half-tina swath.
By the way, next time, you can use the "--for-forum" flag to spit the table out in [td]'s and such.
We do need to find a primary comma for the 0.5 tina symbol, though, right? For that, I found this to be the lowest N2D3P9 LATE comma: 4675/13n. It was first suggested by @Ash9903b4 back on page 2, and is still hanging out in our big cool table on page 6. I think the reason it didn't make the cut the last time around is that you'd asked me to search only ±0.125 tinas for the half tina. The error is not so hot for the 4675/13n, being 0.73 tinas.
Which probably leads us back to the inevitability of a comma notational popularity rank weighing ATE/AAS against N2D3P9. There must be some trade-off.
You had asked us to go back and look at George's formula:
Dave Keenan wrote: ↑Sat Sep 05, 2020 10:50 am
> Notice how George included a term that attempted to correct for the balance-blindness of SoPF>3. Perhaps we can replace the first two terms with some function of N2D3P9 and simplify the last two terms into one term, given that abs3exp and absApotomeSlope are the same thing here.

I believe the term you are talking about which attempts to correct for the balance-blindness of SoPF>3 is "J". So I agree that the first two terms, "G" and "J", can be replaced with (a function of) N2D3P9, and then the "K" and "L" terms can be consolidated too.
The three paragraphs just above the previous one I quoted of yours are:
It still doesn't appear that you've actually answered my question: why do we care about LAAS or LATE for these tina commas? We've already said that LAAS doesn't matter much because they'll probably never be used for an EDO (unless we find people wanting to do 1200-EDO stuff, I suppose). And we know that for commas this small, LAAS and LATE are almost identical, because the bit that slope subtracts from the exponent is proportional to the apotome fraction (aka size) of the comma. And you recently pointed out that — ah, here's the answer:

> Ah. But I think I sidetracked you from your actual point, which was, "Why do we care so much about tina commas being either LAAS or LATE (which are after all the same thing for tina-sized commas)? Why can't we use non-LAAS non-LATE commas?"
Which is a good question, and leads us back to having to come up with a comma-usefulness metric after-all, and then combining that with error to get a badness metric.
I'd like to see the "Secor complexity" of these commas, as kindly reverse-engineered by @volleo6144 here:
viewtopic.php?p=1659#p1659
Dave Keenan wrote: ↑Fri Sep 04, 2020 11:40 pm
> LATE (lowest absolute three exponent) has nothing to do with EDO-ability. It's LAAS (lowest absolute apotome slope) that relates to EDO-ability. LATE relates to JI-ability because it tells you how likely it is that you will need to combine the comma symbol with a sharp or flat, which is undesirable.

Okay, so the question becomes: what function of N2D3P9 balances well with some version of the "K" term in George's complexity function? There are a bunch of magic numbers in George's function. Any idea what the 8.5 is (I see that it's the ATE of half a Pythagorean large diesis, which is the size category boundary between C and S)? Or why it's two to this power? Or why we add 2 to the SoPF>3 and then take the natural log? It feels incommensurate to use two different bases between N2D3P9 and the "K" part. I feel like George must have had a target he was aiming for, like he had put things in terms of the relationship between two key ratios he wanted to push to the correct side of each other. Or something along the lines of what Dave was going for when he brought "munging" into the lingo:
Dave Keenan wrote: ↑Fri Jun 05, 2020 12:54 pm
> We would then have to decide what decrease in SoPF>3 we would be willing to trade for an increase in error from 0 to 0.25 tinas. Maybe we'd be willing to trade that for the removal of a prime 11 or two prime 5's, in which case we'd multiply the munged_error by 11/2 = 5.5 or 10/2 = 5.0 before adding it to the SoPF>3.

For example, if we look at the candidates for the 2 tina, the lowest N2D3P9 comma within ±0.25 is the 1/205n, with N2D3P9 of 233.47. The problem is that it has ATE of 8. Which is not unheard of in the existing set of commas in Sagittal, by the way. But it is not LATE, since the 205C (a 1C off from it) has ATE of 4. But we must ask ourselves, would we rather have this than the 1/5831n, which is LATE, but only barely — tied with the 5831C w/ ATE of 6 — and with an N2D3P9 three times bigger at 688.382? If there's a chance we should prefer the 1/205n to the 1/5831n, perhaps we could base the balancing of N2D3P9 and ATE on making that so.
I should also resurface something Dave brought up a few pages ago:
Dave Keenan wrote: ↑Tue Sep 01, 2020 11:07 am
> [permitting only LAAS commas] would disallow as 5s (although we could make an exception for accents because they would be defined as "meta-commas")

which gets me thinking whether the 5s could be one of these key examples. Like the 1/205n, it is not LATE, and has an ATE of 8 (ha) vs. a LATE comma with ATE of 4 (in this case, the 1/5C). So if we allow the 5s, perhaps that could be part of a justification for the 1/205n.
If we need another example, we would probably want to make sure that the 6-tina stays the 65/77n, since we've already established that in the Extreme level, but there is a comma within ±0.25 which has half the N2D3P9: the 125/13n; it has ATE of 9, and is not LATE, so that's why we moved on to the 65/77n which is LATE with ATE of 3.
So I'm not sure whether it'll be linear, exponential, or what... but a difference of 3× (i.e. 6 steps) in ATE should not counteract a factor of 2× (i.e. about 100) in N2D3P9. In Secor complexity, a change of 1 in the ATE moves the "K" term by a factor of 2, that is to say exponentially, so perhaps we preserve that aspect of it.
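To make that trade-off concrete, here's a purely hypothetical sketch. The shape of the function and the weight k are my own inventions for illustration, not George's formula or anyone's proposal; it just shows how an exponential ATE term interacts with N2D3P9, using the 1/205n vs. 1/5831n numbers from above:

```javascript
// Hypothetical usefulness (lower is better): N2D3P9 scaled by an
// exponential ATE term, preserving the Secor-complexity property that
// each additional ATE step doubles its cost. The weight k is a free
// parameter, chosen here only to show it can flip the preference.
function usefulness(n2d3p9, ate, k) {
  return n2d3p9 * (1 + k * 2 ** ate);
}

// 1/205n: N2D3P9 233.47, ATE 8; 1/5831n: N2D3P9 688.382, ATE 6.
usefulness(233.47, 8, 0.01) < usefulness(688.382, 6, 0.01); // true: prefers 1/205n
usefulness(233.47, 8, 0.1) < usefulness(688.382, 6, 0.1);   // false: prefers 1/5831n
```

Pinning down that weight is exactly the kind of decision the "key example" approach would settle.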
So could we say that for the comma-usefulness metric, we're still not dealing with anything that involves EDO-ability? And therefore that if we make any considerations for the consistency of the comma under 8539.00834-EDO, that should be saved to be part of the badness metric?
Re: error, here's a thought: mightn't we say that the smaller a comma, the more the error matters? Or in other words, error matters proportionally, and therefore should be taken as a proportion of the comma size to the tina size. In that case, the 4675/13n would almost certainly be out of the running, as it'd be off by a factor of 50%, as bad as considering a comma of around 9 tinas in size for the 6-tina.
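A sketch of that proportional-error idea. The tina is taken here as one step of 8539.00834-EDO (per the consistency discussion above); the helper names are mine, and none of this is a settled definition:

```javascript
// One tina as a step of 8539.00834-EDO, in cents.
const CENTS_PER_TINA = 1200 / 8539.00834; // ≈ 0.1405 c

const centsToTinas = (cents) => cents / CENTS_PER_TINA;

// Error as a proportion of the notated target, rather than an absolute
// number of tinas.
function relativeError(commaSizeInTinas, targetTinas) {
  return Math.abs(commaSizeInTinas - targetTinas) / targetTinas;
}

// The 4675/13n (~0.73 tinas) as a candidate for the 0.5 tina symbol:
relativeError(0.73, 0.5); // ≈ 0.46, i.e. off by nearly 50%
```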
Re: consistency, I'm tempted to say it's a hard requirement. Not because I'm masochistic here, but because I'm haunted by pink elephants. That is to say, I want to keep the metric as simple as possible, and if we do anything more complex w/r/t consistency than a binary yes/no, then I'm afraid we'd be spending our parametric currency less frugally than we should.
- Dave Keenan
Dave Keenan wrote: ↑Mon Sep 07, 2020 11:16 pm
> Correct me if I'm wrong, but I think the max-absolute-three-exponent of 6 ensures LATE commas.

I'm correcting me because I'm wrong. Here are the first two counterexamples, in N2D3P9 order.
symbol   name     ratio       monzo            cents    slope    gpf   sopfr   N2D3P9
.(|(     1/11S    8192/8019   [13 -6 0 0 -1⟩   36.952   -8.275   11    11      6.722
/|\      11M      33/32       [-5 1 0 0 1⟩     53.273   -2.28    11    11      6.722
.//|     1/125S   128/125     [7 0 -3⟩         41.059   -2.528    5    15      8.681
`/|)     125M     250/243     [1 -5 3⟩         49.166   -8.027    5    15      8.681
You'll notice that in both cases the 3 exponents sum to 5, meaning they are limma complements, and indeed their cents sum to 90.22499567. If a comma is larger than a limma minus a half-apotome, 90.22499567 - 56.84250303 = 33.38249264 cents, i.e. the comma/small-diesis (C/S) boundary, then it must have an absolute 3-exponent of 4 or less in order to be the LATE (Least Absolute Three Exponent) comma for that 2,3-free class.
I'm thinking maybe you already explained that to me somewhere and I forgot.
But fortunately, a 3-exponent of 6 or less will ensure LATEness in the case of the tina commas we're considering here.
For commas smaller than 33.38 c that have an absolute 3-exponent of 6, there will be another such comma that is either a Pythagorean comma (3C = 23.46001038 c) larger or smaller, or is the 3C-complement of the first. In the case of the tina commas we're considering here, those we find will always be the smaller of the two.
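The arithmetic in the two paragraphs above can be spot-checked in a few lines (a sketch; standard Pythagorean ratios assumed):

```javascript
// Cents of a frequency ratio n/d.
const cents = (n, d) => 1200 * Math.log2(n / d);

const apotome = cents(2187, 2048);              // ≈ 113.685 c
const limma = cents(256, 243);                  // ≈ 90.225 c
const pythagoreanComma = cents(531441, 524288); // 3C ≈ 23.460 c

// The C/S size-category boundary: a limma minus a half-apotome.
const boundary = limma - apotome / 2;           // ≈ 33.382 c

// Limma complements: each counterexample pair's cents sum to a limma,
// e.g. 36.952 + 53.273 ≈ 90.225 for 1/11S and 11M.
```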
Any idea why we haven't seen 77/185n as a candidate for 0.5 tinas previously in this thread? It doesn't seem like it could be due to earlier limits on 3 exponent, gpf or sopfr.
tinas   name      ratio             prime exponent vector (monzo)      cents   slope   gpf   sopfr   N2D3P9
0.5     77/185n   1515591/1515520   [-13 9 -1 1 1 0 0 0 0 0 0 -1⟩     0.081   8.995   37    60      1626.744 [was mistakenly 43.966]
- Dave Keenan
cmloegcmluin wrote: ↑Tue Sep 08, 2020 4:29 am
> By the way, next time, you can use the "--for-forum" flag to spit the table out in [td]'s and such.

Would it be easy for you to add a flag to make it put out tab-separated value (TSV) format, including tabs between the terms in the monzo, and between the terms and their brackets? This would make it much easier for me to get these lists into a spreadsheet for further manipulation.
> We do need to find a primary comma for the 0.5 tina symbol, though, right? For that, I found this to be the lowest N2D3P9 LATE comma: 4675/13n. It was first suggested by @Ash9903b4 back on page 2, and is still hanging out in our big cool table on page 6. I think the reason it didn't make the cut the last time around is that you'd asked me to search only ±0.125 tinas for the half tina. The error is not so hot for the 4675/13n, being 0.73 tinas.

Given that 4675/13n's N2D3P9 is literally off the charts (that we can presently produce) at 2391.61, what makes you believe there is no LATE comma for 0.5 tinas with lower N2D3P9?
> Which probably leads us back to the inevitability of a comma notational popularity rank weighing ATE/AAS against N2D3P9. There must be some trade-off.

Indeed. What's wrong with 77/185n as the primary comma for the 0.5 tina symbol? It has N2D3P9 of 1626.74 [was mistakenly given as 43.97] compared to 4675/13n's 2391.61. And 77/185n is more accurate, at 0.58 tinas compared to 0.73 tinas. Sure, it's not LATE, but it only has an ATE of 9 compared to 6 for 4675/13n.
> It still doesn't appear that you've actually answered my question: why do we care about LAAS or LATE for these tina commas?

I thought I implied that I now thought there was no good reason. The only reason I ever wanted to stick to LATE/LAAS commas for these tina defs was to avoid having to come up with a comma usefulness metric other than straight N2D3P9. But clearly that strategy has failed.
- cmloegcmluin
Dave Keenan wrote: ↑Tue Sep 08, 2020 10:22 am
> I'm thinking maybe you already explained that to me somewhere and I forgot.

I don't think so! Thanks for the analysis. That might save a bit of time, then, to avoid searching ATE > 6.
Dave Keenan wrote: ↑Tue Sep 08, 2020 11:58 am
> Would it be easy for you to add a flag to make it put out tab-separated value (TSV) format, including tabs between the terms in the monzo, and between the terms and their brackets? This would make it much easier for me to get these lists into a spreadsheet for further manipulation.

After running the find-commas script w/o the --for-forum flag, you could import "dist/pitch/all.txt" into Excel or Sheets, and that should work: it uses the tab character as a delimiter, which should be recognized automatically.
Anything those scripts log to the console also gets saved in a file that sticks around until you run the script again. You can find them in the dist/ folder. It's not the cleanest implementation or anything yet, but can be quite handy.
> Given that 4675/13n's N2D3P9 is literally off the charts (that we can presently produce) at 2391.61, what makes you believe there is no LATE comma for 0.5 tinas with lower N2D3P9?

The script has only found all commas with SoPF>3 no greater than 127. It would definitely be possible for there to be one with less N2D3P9 than 2391.61 but SoPF>3 greater than 127.
> Indeed. What's wrong with 77/185n as the primary comma for the 0.5 tina symbol?

I've been getting it, but haven't mentioned it because it wasn't LATE. That's all.
> The only reason I ever wanted to stick to LATE/LAAS commas for these tina defs was to avoid having to come up with a comma usefulness metric other than straight N2D3P9. But clearly that strategy has failed.

Got it. Okay. LATE/LAAS is over.
I kind of like the abbreviations ATE and AAS, though, as opposed to abs3exp and absolute apotome slope which we never had a shorthand for.
- Dave Keenan
Oops! I screwed up. The 0.5 tina candidate 77/185n has an N2D3P9 of 1626.74, not 43.97 as I was claiming. I left out the prime limit, duh. I should have used your analyse-comma script. Still significantly lower than 4675/13n's N2D3P9 of 2391.61, but not quite as starry.
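For anyone checking these figures, here's a sketch of the N2D3P9 calculation as I understand its definition (each numerator prime over 2, each denominator prime over 3, and the greatest prime over 9, for the 2,3-free class with the larger number on top). The helper names and canonicalization are mine; this is not the actual analyse-comma script:

```javascript
// Trial-division prime factorization, with multiplicity.
function primeFactors(n) {
  const factors = [];
  for (let p = 2; p * p <= n; p++) {
    while (n % p === 0) { factors.push(p); n /= p; }
  }
  if (n > 1) factors.push(n);
  return factors;
}

// N2D3P9 of a 2,3-free class num/den: numerator primes over 2,
// denominator primes over 3, greatest prime factor over 9.
function n2d3p9(num, den) {
  if (den > num) [num, den] = [den, num]; // canonical form: larger number on top
  const numPrimes = primeFactors(num);
  const denPrimes = primeFactors(den);
  const gpf = Math.max(1, ...numPrimes, ...denPrimes);
  let result = gpf / 9;
  for (const p of numPrimes) result *= p / 2;
  for (const p of denPrimes) result *= p / 3;
  return result;
}

n2d3p9(185, 77); // ≈ 1626.74, the corrected figure for 77/185n
```

This reproduces the earlier tables, e.g. 6.722 for 11/1 and 8.681 for 125/1.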
- cmloegcmluin
No worries! Obviously my attention is divided too, or I would have spotted that. I decided today that it would be a good idea to be more explicit about the difference between plain old Ratios and the more specialized TwoThreeFreeClasses, and, for example, have the N2D3P9 function ask for them as its arguments. Opened a whole can of worms, as it turns out... the code has been converting monzos to ratios with silent data loss whenever the terms of the fraction are huge, namely when larger than 9007199254740991, or 2^53 − 1. It can happen even with max N2D3P9 of 136, when it checks out monzos like [0 0 6 4 2 2 0 1 1 1 ⟩ which approach the maximum values of every possible prime exponent. Apparently JavaScript still keeps a good approximation of the numbers when they get that high, or I would have noticed it before. The problem comes when you try to prime factorize them, which apparently I hadn't been doing before. So that's been fun! Not. (Edit: I am looking into the BigInt type now.)
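A minimal demonstration of the cliff being described (standard JavaScript, nothing project-specific):

```javascript
// Above Number.MAX_SAFE_INTEGER (2^53 - 1), not every integer has its own
// double, so adjacent values silently collapse to the same representation.
console.log(Number.MAX_SAFE_INTEGER);     // 9007199254740991
console.log(2 ** 53 === 2 ** 53 + 1);     // true: the + 1 is silently lost
console.log(2 ** 53 - 1 === 2 ** 53 - 2); // false: still exact below the cliff
```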
- Dave Keenan
cmloegcmluin wrote: ↑Tue Sep 08, 2020 3:02 pm
> the code has been converting monzos to ratios with silent data loss whenever the terms of the fraction are huge, namely when larger than 9 007 199 254 740 991, or 2^53 − 1. It can happen even with max N2D3P9 of 136, when it checks out monzos like [0 0 6 4 2 2 0 1 1 1 ⟩ which approach the maximum values of every possible prime exponent.

It's the same in Excel, because it too uses double-precision floating-point format.
> Apparently JavaScript still keeps a good approximation of the numbers when they get that high, or I would have noticed it before.

Sure. That's the point of floating-point formats: a kind of gradual overflow.
> The problem comes when you try to prime factorize them, which apparently I hadn't been doing before. So that's been fun! Not. (Edit: I am looking into the BigInt type now.)

I note that there is no log function for BigInts. But it will still have more than enough precision when the BigInt is converted to a number (i.e. double float) before taking the log. You will also need to convert BigInts to numbers before dividing them, unless you don't care that the result will be truncated to an integer.
The main problem is that BigInt operations will be much slower. This article mentions a test where BigInt multiplications took nearly twice as long. https://medium.com/better-programming/t ... 6350fd807d
I think it would be better to live within the limitations of double floats. Ratios with numbers bigger than 9×10^15 are of no interest to us. Such ratios could not have N2D3P9 less than 315 796 771, which is the N2D3P9 of 5^22/1.
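That 5^22 figure checks out under the pattern N2D3P9 takes for pure powers of 5 (each factor of 5 contributing 5/2, plus the greatest-prime-over-9 term; this closed form is my own restatement, not something stated in the thread):

```javascript
// N2D3P9(5^k / 1) = (5/2)^k * (5/9): k numerator fives over 2, gpf 5 over 9.
const n2d3p9OfPowerOfFive = (k) => (5 / 2) ** k * (5 / 9);

n2d3p9OfPowerOfFive(3);             // ≈ 8.681, matching 1/125S and 125M above
Math.round(n2d3p9OfPowerOfFive(22)); // 315796771
```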
I don't understand why you would need to prime-factorise such large numbers. Surely they would only arise as the result of converting a monzo to a ratio as you describe, in which case you already have the prime factorisation.
- cmloegcmluin
Dave Keenan wrote: ↑Tue Sep 08, 2020 4:00 pm
> Surely they would only arise as the result of converting a monzo to a ratio as you describe, in which case you already have the prime factorisation.

Exactly. It's a completely superficial problem. Where I used to just pass the monzo around, I decided it was better for the human user of the code to convert to using the ratio-like representation of the 2,3-free pitch class, and then for certain operations past that point it needs to get the monzo back again. I do still think it would have been a win for representing the "truth" of how the function fits in the world, but it is so bad from the perspective of the limitations of the language I've chosen that I don't think I'll be keeping it this way.
Thanks for all the extra info about this issue.
> The main problem is that BigInt operations will be much slower. This article mentions a test where BigInt multiplications took nearly twice as long.

Yeah, I saw that. I won't be going with BigInts.
I've stepped away from the code this evening but I've been reflecting on how this problem fits into another larger scale refactor. When I started out building the code, I created data structures that contained many different representations of values, for maximum flexibility, so I'd always have the monzo ready if that's the form I needed it in, or ratio if that's what I needed, or cents, etc. I'm realizing now that things might be a lot cleaner in general if I treated monzos as the singular first-class representations of JI pitches, while other forms are just derived when inputting and outputting.
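As a sketch of that refactor direction (function names are hypothetical, and the prime list is truncated at 37 for brevity):

```javascript
// Monzos as the first-class representation of JI pitches; ratio and cents
// are derived only at the input/output edges.
const PRIMES = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37];

function monzoToRatio(monzo) {
  let num = 1, den = 1;
  monzo.forEach((exp, i) => {
    if (exp > 0) num *= PRIMES[i] ** exp;
    if (exp < 0) den *= PRIMES[i] ** -exp;
  });
  return [num, den];
}

function monzoToCents(monzo) {
  return monzo.reduce((sum, exp, i) => sum + 1200 * exp * Math.log2(PRIMES[i]), 0);
}

monzoToRatio([-5, 1, 0, 0, 1]); // [33, 32], the 11M comma
monzoToCents([-5, 1, 0, 0, 1]); // ≈ 53.273
```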
Anyway, enough implementation details. I look forward to your thoughts on my revised Secor complexity metric thoughts and badness stuff.
- Dave Keenan
I agree with treating the monzo as the standard format.
I know you already have code to convert an integer to a monzo, but I thought you might like the puzzle of figuring out how the following works. It's a JavaScript translation of what I use in Excel to obtain the exponent (i.e. monzo term) for each prime p, for an integer n. It takes advantage of the fact that the largest consecutive integer in double float format is 2^53, and assumes you won't pass it anything larger.
exponent(n, p) = Math.round(math.log(math.gcd(n, p ** Math.floor(53 / Math.log2(p))), p))
But it's only convenient if you have an efficient (Euclidean algorithm) GCD function (as Excel does), e.g. this:
https://mathjs.org/docs/reference/functions/gcd.html
And since it was in the same library, I assumed this too:
https://mathjs.org/docs/reference/functions/log.html
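For what it's worth, here's a self-contained version of the puzzle one-liner, with a plain Euclidean gcd standing in for the mathjs functions it assumed:

```javascript
// Euclidean algorithm GCD.
function gcd(a, b) {
  while (b !== 0) [a, b] = [b, a % b];
  return a;
}

// Exponent of prime p in integer n (n below 2^53). p^k is the largest
// power of p exactly representable in a double; gcd(n, p^k) isolates
// p^e where e is p's exponent in n, and the base-p log recovers e.
function exponent(n, p) {
  const pk = p ** Math.floor(53 / Math.log2(p));
  return Math.round(Math.log(gcd(n, pk)) / Math.log(p));
}

exponent(14337, 3); // 5, since 14337 = 3^5 * 59
```

The Math.round guards against floating-point fuzz in the logs, which is the trick that makes the one-liner safe.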