avoid unnecessary divisions and calls to gcd #38924
base: develop
Conversation
Documentation preview for this PR (built with commit 46eddc8; changes) is ready! 🎉
This reduces the time for the example from #38108 and some others significantly. The old timing is
The new timing is about 5 seconds. With a more aggressive patch we could reduce it to 3.6 seconds.
The new timing is about 15 seconds. The drawback is that some computations give less beautiful (but of course, equivalent) results. For example:
instead of
The failures are a bit puzzling. In
which gives with this patch
These are the same elements, just represented differently. In
we now get
I don't know whether this is really the same thing. I guess they come from the fact that
Yes, that is the same morphism as it is projective space (and thus you can scale the coordinates freely).
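A minimal sketch of that point, using a toy map of my own (not the doctest from the failing test):

```
# Two endomorphisms of P^1 whose defining polynomials differ by the unit 2;
# since the coordinates of a projective morphism can be scaled freely, Sage
# should consider them equal.
P.<x, y> = ProjectiveSpace(QQ, 1)
H = End(P)
f = H([2*x^2 + 2*y^2, 4*x*y])
g = H([x^2 + y^2, 2*x*y])
f == g   # expected: True
```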
I am pretty sure that b divides d in the last part, because d is the gcd of the coefficients of the initial A and B; we then reduce B by some common factor with A and take the gcd of the resulting coefficients. So it should be safe to do return (d // b) * B. Although perhaps we should cc someone who is more of an expert to triple-check this.
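A hedged sketch of the comparison being made, with my own helper names rather than the PR's actual code; the two forms agree exactly when b divides d in the base ring:

```
# d: gcd of the contents of the two input polynomials
# B: the final pseudo-remainder, b: its content
def combine_current(d, B, b):
    return d * B // b        # multiply the polynomial first, then divide by the constant b

def combine_proposed(d, B, b):
    return (d // b) * B      # one exact division in the base ring, then a scalar multiple

R.<x> = ZZ[]
d, b = ZZ(6), ZZ(3)
B = x^2 + 2*x + 1
combine_current(d, B, b) == combine_proposed(d, B, b)   # True whenever b divides d
```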
It needs work, because I want to incorporate the much faster version I now have, and I think that the current version has some weak spots. For example, while my implementation is very much faster on some examples, there are others where it leads to problems, and I have to investigate this.
Indeed, the gcd is only defined up to a unit.
With your push, it would be good to add some tests for these cases (at least, given the bot results, it seems like they are not already included).
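To spell out the "up to a unit" point with a toy example of my own (not one of the failing doctests):

```
# Over QQ[x] the gcd is only determined up to a nonzero rational factor;
# Sage picks the monic representative, but any unit multiple divides both inputs.
R.<x> = QQ[]
f = 2*x^2 - 2
g = 4*x + 4
gcd(f, g)                                    # Sage returns the monic x + 1
f % (7*x + 7) == 0 and g % (7*x + 7) == 0    # True: 7*(x + 1) is an equally valid gcd
```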
It is maybe unrelated, but it is possible to compute the
Sorry for the noise, lcm is slow.
I think the patch as is is quite good, so I am setting this to needs-review; I will also advertise it on sage-devel. The results are not as beautiful as with the current version, but it is significantly faster. I have an additional improvement, which I am not so sure about. It avoids constructing a new polynomial ring for each variable. Instead, it constructs a univariate ring over the polynomial ring once, and reuses this until all coefficients are constants. I am sure this could be further improved and streamlined, but it is unclear whether it is worth the effort.
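If I read the idea correctly, it amounts to something like the following sketch (my own wording and example, not the code from the branch):

```
# View a multivariate polynomial as a univariate polynomial in one variable,
# with coefficients in the polynomial ring generated by the remaining
# variables; that univariate ring is built once and can then be reused.
R.<x, y, z> = QQ[]
p = x^2*y + x*z^2 + y*z
q = p.polynomial(z)      # univariate in z over QQ[x, y]
q.parent()               # Univariate Polynomial Ring in z over Multivariate Polynomial Ring in x, y over Rational Field
q.coefficients()         # the coefficients are elements of QQ[x, y]
```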
This turns out not to be the case:
At the end of the algorithm. I have not yet checked whether this is to be expected.
I actually do not quite understand what happens concerning the beauty of the results. For example, even in develop we have
That's a good example. Can you add it as a test and a note in the code? I don't know why that isn't reducing itself properly. That would be a bug IMO.
Co-authored-by: mathzeta <[email protected]>
I'm not sure where, but yes.
I don't think it's a bug: the gcd is one:
We could require fractions of polynomials to have monic leading terms, but I don't think we always want that:
In fact:
Okay, but only because 7 is a unit. (Although if you do the gcd computation in QQbar, you get 7.) It is kind of a bug since it isn't detecting that the unit in the rational function is the leading coefficients of both...
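A toy pair of my own illustrating the unit issue (not the elided example above):

```
# Over the field QQ the constant 7 is a unit, so the gcd of these two
# polynomials is a constant and gets normalized away; over ZZ the factor 7
# is not a unit and survives.
R.<x, y> = QQ[]
gcd(7*x, 7*y)       # expected: 1
S.<u, v> = ZZ[]
gcd(7*u, 7*v)       # expected: 7
```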
Yes, but then
is also a bug. Put differently, it is a decision to take in multivariate polynomial normalization, I think.
This is an interesting inconsistency (tested on 10.5.beta3):
Personally, I feel that setting at least the leading coefficient of the denominator to 1 when working over a field is desirable, to normalize the elements. Well, I guess this is off topic for this PR at the end of the day.
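For reference, my guess at the kind of inconsistency meant (the original example is not shown here):

```
# Fraction fields of univariate polynomial rings over a field normalize the
# denominator to be monic; the generic (e.g. multivariate) fraction field
# element only cancels a gcd and keeps the leading coefficients as given.
R.<x> = QQ[]
(2*x + 2) / (4*x + 8)     # likely (1/2*x + 1/2)/(x + 2): denominator made monic
S.<u, v> = QQ[]
(2*u + 2) / (4*u + 8)     # likely left as (2*u + 2)/(4*u + 8)
```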
OK, I'll adapt the doctests then. I really dislike
but I got no reaction on sage-devel.
So, I just checked how univariate polynomial rings are treated specially. There is an implementation of the method
@bhutz Do you have any comments on the change to the dynamics doctest? |
I'm fine with those doc test changes for dynamics |
@bhutz Thanks for checking. It is uglier now since it isn't canceling the leading coefficients. Of course mathematically this is the same, but computationally this might make a difference...
LGTM.
My only guess for why the coefficients are no longer so nice is that you immediately stop the content computation with a unit instead of 1. Perhaps we do want to add that into the gcd, since the coefficient rings want to do so for better normalization properties (if indeed that is the cause)?
I'm not sure. Indeed, one way the proposal differs from the old one is that the old version computes the content starting with the zero element and stops when it reaches one. But making the differences precise seems non-obvious to me. For example, in the new version we have
In the old version, both give 7.
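To make the difference concrete, here is my own reconstruction of the two content computations (not the actual code of either version):

```
def content_old(p):
    # old version: start from zero and stop once the running gcd is one
    b = p.base_ring().zero()
    for c in p.coefficients():
        b = b.gcd(c)
        if b.is_one():
            break
    return b

def content_new(p):
    # new version: stop as soon as the running gcd is a unit (not necessarily one)
    coeffs = p.coefficients()
    b = coeffs[0]
    for c in coeffs[1:]:
        if b.is_unit():
            break
        b = b.gcd(c)
    return b

R.<x> = ZZ[]
p = 7*x^2 + 14*x + 21
content_old(p), content_new(p)   # (7, 7) here; in general the two agree only up to a unit
```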
I don't really know what you mean here. Could you be more explicit?

In preparing this comment, I discovered another potential optimisation: we currently have

```
b = content(B)
if b is not None:
    B //= b
...
_, R = A.pseudo_quo_rem(B)
while R.degree() > 0:
    ...
if R.is_zero():
    b = content(B)
```

If we have

```diff
diff --cc src/sage/categories/unique_factorization_domains.py
index d12ff868eb3,d12ff868eb3..f544bf5d098
--- a/src/sage/categories/unique_factorization_domains.py
+++ b/src/sage/categories/unique_factorization_domains.py
@@@ -210,9 -210,9 +210,14 @@@ class UniqueFactorizationDomains(Catego
                  d = a.gcd(b)
              else:
                  d = one
--            g = h = one
--            delta = A.degree() - B.degree()
++
              _, R = A.pseudo_quo_rem(B)
++            if R.is_zero():
++                return d * B
++            if not R.degree():
++                return f.parent()(d)
++            delta = A.degree() - B.degree()
++            g = h = one
              while R.degree() > 0:
                  A = B
                  h_delta = h**delta
```
I tried to change
I tried to find a ring where
Perhaps working over
Hmm... interesting, I am somewhat surprised by this. I thought that would have been the difference. Well, in that case, we might want to leave it as
I don't think that I can create a UFD from that. |
Ah, right, we need a UFD. Then perhaps |
I don't know how to do that:
See also #38937 |
Hmmm... I don't know what a good example would be, much less one that is also easy to construct in Sage. Perhaps we should ask for an example on sage-devel.
Since nobody was able to come up with an example, could we please deal with that problem once it arises? |
In an effort to make #38108 more usable, we optimize the computation of gcd's in generic polynomial rings.
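For reference, a small example of the kind of gcd that goes through this generic code path (my own instance, not the benchmark from #38108):

```
# QQbar[x, y] has no specialized gcd backend, so the computation goes through
# generic code like that touched by this PR.
R.<x, y> = QQbar[]
f = (x + y)^3 * (x - 2*y)
g = (x + y)^2 * (3*x + y)
f.gcd(g)     # expected: a unit multiple of (x + y)^2
```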