Dave Keenan wrote: ↑Sat Jan 02, 2021 11:15 pm
    cmloegcmluin wrote: ↑Sat Jan 02, 2021 4:14 am
    Then what is the reason [for considering log-product complexity rather than product complexity]? Sorry to make you spell it out.
Only the perhaps-too-obvious fact that, in enumerating the commas with the same prime content above 3, between [0⟩ and the comma being named, no matter which order you do it, you are presumably incrementing or decrementing exponents of 2 and 3, and computing cents by taking the dot product of the monzo with 1200×⟨lb(2) lb(3) lb(5) ... ], not multiplying or dividing by factors of 2 and 3. i.e. you will be working entirely in the log domain.

I just burrowed down into the part of the code which compares a candidate comma's size against the size category bounds to make sure it's inside, and it actually doesn't take the log. In each case it compares a rational monzo (the comma candidate) against the square root of a rational monzo (the size category bound): it just computes what those two numbers are, then checks <=>. Taking the log seems like an unnecessary step to me; maybe it's more performant. All I can remember is that I grappled for the longest time with getting default precision levels just right everywhere across everything, so that things worked out reasonably across all these parts of the code: things which certainly should be considered equal, but are actually not insanely close, do compare as equal, while other things which certainly should not be considered equal in other contexts, but are actually much more insanely close together than the first kind, compare as unequal. And where it landed, it landed.
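For illustration, here's roughly what that comparison looks like. This is a minimal sketch with hypothetical names, assuming plain float arithmetic; it is not the actual code being described:

```typescript
// Minimal sketch (hypothetical names, not the actual code) of comparing a
// comma candidate directly against a size category bound that is the square
// root of a rational monzo, without taking logs.

type Monzo = number[]; // exponents of primes 2, 3, 5, 7, ...

const PRIMES = [2, 3, 5, 7, 11, 13, 17, 19];

// The (positive) rational value a monzo represents, as a float. This is
// where the precision concerns mentioned above come in.
const monzoValue = (monzo: Monzo): number =>
  monzo.reduce((value, exponent, i) => value * PRIMES[i] ** exponent, 1);

// candidate <= sqrt(bound) iff candidate^2 <= bound, since both are
// positive; squaring avoids the sqrt as well as the log.
const isWithinSqrtBound = (candidate: Monzo, boundMonzo: Monzo): boolean =>
  monzoValue(candidate) ** 2 <= monzoValue(boundMonzo);
```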
Dave Keenan wrote:
It's really not very important, since you don't need to actually calculate any complexity measure.

Aye yaye yaye... I'm probably just being obtuse here, but it feels like you're being playfully cryptic sometimes, and I'm not always in the mood to play. Maybe I need some more coffee in my system. Why don't I need to actually calculate any complexity measure? Do you mean because of your proposal below to tie-break via a binary exponent assignment scheme? I mean, we're still naming commas by complexity levels, right? And as we discussed earlier in this thread, the main thing we're doing here (in/decrementing across powers of 3 and 2 within a 2,3-free class "x", and counting commas in the size category zone between a comma and [0 0 {x}⟩ to assign a complexity tier) qualifies as a complexity metric in terms of the other types of metrics we've discussed recently (phew, I almost just said "this year"... first 2021 catch of the year for me!).
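To make that counting procedure concrete, here's a minimal sketch working in the log (cents) domain, as in Dave's description. All the names and the zone-walking strategy are my own assumptions, not the actual implementation:

```typescript
// Minimal sketch of the counting idea (hypothetical names, not the actual
// implementation). For a given 2,3-free class, walk the 3-exponent outward;
// for each 3-exponent there is at most one 2-exponent that lands the comma
// inside a size category zone narrower than an octave.

const CENTS_PER_OCTAVE = 1200;
const PRIMES = [2, 3, 5, 7, 11, 13, 17, 19];

// Cents as the dot product of the monzo with 1200×⟨lb(2) lb(3) lb(5) ...]
const monzoToCents = (monzo: number[]): number =>
  monzo.reduce(
    (cents, exponent, i) =>
      cents + exponent * CENTS_PER_OCTAVE * Math.log2(PRIMES[i]),
    0,
  );

const countCommasInZone = (
  two3FreeMonzo: number[], // exponents for primes 5, 7, 11, ... only
  lowerCents: number,
  upperCents: number, // assumed: upperCents - lowerCents < 1200
  maxAbs3Exponent: number,
): number => {
  let count = 0;
  for (let e3 = -maxAbs3Exponent; e3 <= maxAbs3Exponent; e3++) {
    const partialCents = monzoToCents([0, e3, ...two3FreeMonzo]);
    // The unique 2-exponent (if any) placing this comma in the zone:
    const e2 = Math.ceil((lowerCents - partialCents) / CENTS_PER_OCTAVE);
    const cents = partialCents + e2 * CENTS_PER_OCTAVE;
    if (cents >= lowerCents && cents < upperCents) count++;
  }
  return count;
};
```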
Dave Keenan wrote:
I wanted 5^n ≈ 7^m. Take the base-2 log of both sides:
lb(5)×n ≈ lb(7)×m, therefore
n/m ≈ lb(7)/lb(5)
It's also log5(7).

Nice trick! Man, it feels less like a trick, and more like something I should just be able to do naturally.
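(Aside: one way to mechanize that trick is to take the continued-fraction convergents of lb(7)/lb(5); each convergent n/m gives a pair with 5^n ≈ 7^m. A self-contained sketch, entirely my own illustration rather than anything from the codebase:)

```typescript
// Find rational approximations n/m ≈ log5(7) via continued fractions,
// so that 5^n ≈ 7^m. A standalone sketch, not project code.

const target = Math.log2(7) / Math.log2(5); // log5(7) ≈ 1.2091

// Compute the first few continued-fraction convergents of x.
const convergents = (x: number, count: number): Array<[number, number]> => {
  const result: Array<[number, number]> = [];
  let [hPrev, h] = [1, Math.floor(x)]; // numerator recurrence
  let [kPrev, k] = [0, 1];             // denominator recurrence
  let frac = x - Math.floor(x);
  result.push([h, k]);
  for (let i = 1; i < count && frac > 1e-12; i++) {
    x = 1 / frac;
    const a = Math.floor(x);
    frac = x - a;
    [hPrev, h] = [h, a * h + hPrev];
    [kPrev, k] = [k, a * k + kPrev];
    result.push([h, k]);
  }
  return result;
};

// e.g. [1, 1], [5, 4], [6, 5], [23, 19], ... and indeed 5^6 = 15625 is
// close to 7^5 = 16807 (their ratio being a small 5,7-comma).
console.log(convergents(target, 5).map(([n, m]) => `${n}/${m}`));
```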

Dave Keenan wrote:
    cmloegcmluin wrote:
    I love this idea. Yes! Instead of agonizing over some arbitrary tie-breaker regarding their complexity, we flip back to the other aspect they exhibit — size — and implicitly recognize that while the size category bounds established work quite well for commas of musical significance, they fail us at higher levels of complexity, and it's therefore occasionally necessary within a size category to have a greater and a lesser version of the otherwise same-named comma. Brilliant brilliant brilliant.
Not so fast.

<cartoon-villain>Blast!</cartoon-villain>
(yes the caffeine has kicked in now... in a better mood)
Dave Keenan wrote:
I just realised it's possible for there to be more than 2. Repeat what I did with powers of 5 and 7 using powers of 11 and 13. So we have a small 2,3-comma, a small 5,7-comma (no 2's or 3's), and a small 11,13-comma (no 2's, 3's, 5's, or 7's). Then we can add and subtract them in various ways to make 4 commas with the same complexity in the same size category. Then we can do the same again with powers of 17 and 19 and get 8 commas, and so on.
Sorry for the bad news. We can either fall back on: "not musically relevant." Or we come up with some way of sorting them based on the signs of all the primes, e.g. treat the signs as a binary number where a minus is a 1 and a plus is a 0, and the sign of the 2-exponent is the least significant bit.

I think this is a good solution. So you take all the tied commas and get them into their super form. Then work your way down from the highest prime, and the first one you hit a negative sign for is the most complex, the second one you hit a negative sign for is the next most complex, and so on. (See the sketch just below.)
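To make the tie-breaker concrete, here's a minimal sketch of the sign-as-binary-number ordering, assuming monzos are plain exponent arrays already in super form; the names are hypothetical, not the project's actual code:

```typescript
// Hypothetical sketch of the sign-based tie-break, not actual project code.
// Commas tied on complexity and size category are ordered by reading their
// exponent signs as a binary number: minus = 1, plus (or zero) = 0, with
// the 2-exponent as the least significant bit.

type Monzo = number[]; // exponents of primes 2, 3, 5, 7, ...

const signTieBreaker = (monzo: Monzo): number =>
  monzo.reduce(
    (bits, exponent, i) => bits + (exponent < 0 ? 2 ** i : 0),
    0,
  );

// Sort tied commas from least to most complex under this scheme; a negative
// sign on a higher prime sets a more significant bit, matching the
// "work your way down from the highest prime" description above.
const sortTiedCommas = (tied: Monzo[]): Monzo[] =>
  [...tied].sort((a, b) => signTieBreaker(a) - signTieBreaker(b));
```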
Dave Keenan wrote:
We had to be clever, to do a lot with so little.

I've seen videos about the corners video game developers and composers would cut to squeeze their work into the extremely limited space available. To be battling through such constraints while still innovating design on such a young artform absolutely blows my mind.
Dave Keenan wrote:
Now, the world needs so many programmers we can't require them to all have IQs of 125 or more, because that's only 5% of the population (although you probably do).

I do, yeah (and little doubt that you do too). But it's way more important to keep a growth mindset, I think: focus on the things you can learn and accomplish rather than resting on intelligence stats. I'm grateful that my upbringing and education imparted that really helpful bit of insight. I've seen bad stuff happen to really bright folks who didn't pick up on that one simple guiding thought.
Dave Keenan wrote:
And we have computing power to squander. About a million times the computing power on our desks now, compared to what I had back then. So who cares if the searching of a list of 3000 words is so inefficient it is doing a thousand times more operations than it really needs to. An efficiency of 0.1%. It causes me pain to think of it. But it's far more important that the programmer can write it quickly and test it easily and still understand how it works a few weeks later.

So the onus nowadays is on programming language creators: to design languages which humans can read naturally but which still compile down to something performant.