> The authors are not using "constant time" in the complexity-theoretic sense.
Yes, they are. I've already made a top-level comment citing the paper's explicit definition from Section 2 and comparing it to canonical definitions from the usual literature of algorithm analysis.
> That would mean an algorithm whose running time doesn't depend on n,c.
It does not. Note the exponent, 2 + o(1). It is true that the execution time varies with the input size, but this does not preclude constant-time asymptotics.
> The GCD algorithm here has the property that (I'm quoting section 1.4 here) "the number of operations is ... asymptotically n (log n)^(2+o(1))". That is not constant as n varies.
Yes it is. Constant time does not mean that execution time does not vary, in either complexity theory or cryptography.
Your "top-level comment citing the paper's explicit definition" is wrong. It says "The asymptotic runtime of the presented algorithm does not depend on the inputs, n and c", which is flatly untrue because (1) n and c are not the inputs to the algorithm and (2) the runtime does depend on n.
Having a bounded exponent is simply not the same thing as "constant time"; running time that increases unlimitedly with the size of the input precisely does preclude constant-time asymptotics.
In complexity theory, "constant time" means that the execution time is bounded as the size of the input increases. The execution time of this algorithm is not bounded as the size of the input increases.
In cryptography, I don't know for sure whether "constant time" always means the same thing, but in the context of algorithms immune to timing attacks (which is the relevant context here) it means that execution time doesn't depend on the details of the input.
In this case, the algorithm does not have complexity-theory "constant time" because the runtime grows roughly like n (log n)^2. It does have cryptography "constant time" because if you fix the size of the input, the runtime does not depend on the specific numbers you put in.
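To make the cryptographic sense of "constant time" concrete, here's a standard toy example (my own illustration, nothing to do with the paper's algorithm): comparing two equal-length byte strings. The early-exit version's running time depends on where the first mismatch is, so it leaks information through timing; the other version performs the same work regardless of the values, so at a fixed input size its time is constant in a naive timing model.

```python
def leaky_equal(a: bytes, b: bytes) -> bool:
    # Returns at the first mismatch: the running time depends on the
    # *contents* of the inputs, not just their length, which is what
    # a timing attack exploits.
    for x, y in zip(a, b):
        if x != y:
            return False
    return True

def constant_time_equal(a: bytes, b: bytes) -> bool:
    # Accumulates differences with XOR/OR and always runs the loop to
    # the end: in a naive timing model the time depends only on the
    # length of the inputs, never on the specific bytes.
    diff = 0
    for x, y in zip(a, b):
        diff |= x ^ y
    return diff == 0
```

Note that `constant_time_equal` is still not O(1) in the complexity-theoretic sense: its running time grows linearly with the input length. (In practice you'd use `hmac.compare_digest` from the standard library rather than rolling your own.)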
I'm sorry to be so harsh here, but you have made confident wrong statements and doubled down on them when challenged. Given that your profile says "My academic background is in complexity theory", this is pretty surprising, but there's really no question that what you're saying is wrong. I'm guessing that maybe you misread something and now don't want to lose face, but you need to look again: this algorithm is not a constant-time algorithm in the complexity-theoretic sense, and it is a constant-time algorithm in the sense that the execution time doesn't depend at all on the input if you fix its size.
I think there's a misunderstanding here. Can you please be specific about which operation you're saying grows with the inputs n and c? The GCD is accomplished using O(1) polynomial multiplications, which is achieved because the coefficients for the polynomial multiplication are given by n, c.
EDIT: To make my position clear, what I am saying is this:
1. The presented algorithm will have variable computation time, but not variable asymptotic time,
2. The algorithm uses O(1) (i.e. constant time) polynomial multiplications in the worst case, and
3. The worst-case bound does not change with n or c, though they are used in the algorithm to calibrate the divsteps such that no more than O(1) operations are required.
I will concede it's possible I'm misunderstanding the paper itself, but I don't see that here.
The title of the paper is "Fast constant-time gcd computation and modular inversion". The algorithm presented in the paper does not compute GCDs or modular inverses in time O(1). The time taken for either of those tasks is O(n (log n)^(2+o(1))).
The algorithm presented in the paper also doesn't take a bounded number of polynomial multiplications, though I'm not sure why those should be the relevant thing to count. In section 5.4 they say they split a size-n problem into two problems whose sizes add up to n using O(1) polynomial multiplications of size about n, and that their algorithm takes them both to be of size ~ n/2. That means O(n) polynomial multiplications, but most of them are small, so the total cost for given n is, as they say in section 5.5, Theta(log n) times the cost of polynomial multiplication, provided a fast polynomial multiplication algorithm is used.
The worst-case bound, both for the number of polynomial multiplications and for the actual running time, does depend on n. (The former is of order n but, again, most of the polynomial multiplications are small; the latter is of order about n (log n)^2.)
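To spell out that counting argument, here's a quick sketch (my own toy model of the recursion described in sections 5.4–5.5, not the paper's code). Each call splits a size-n problem into two size-n/2 problems at the cost of a constant number of polynomial multiplications of size ~n; I assume a quasilinear multiplication cost M(s) = s·log2(s) for illustration. The multiplication count comes out linear in n, while the total cost tracks n·(log n)²:

```python
import math

def count_mults(n: int, c: int = 1) -> int:
    # Total number of polynomial multiplications: c per split,
    # recursing into two half-size subproblems. Solves to c*(n-1)
    # for n a power of two, i.e. Theta(n).
    if n <= 1:
        return 0
    return c + 2 * count_mults(n // 2, c)

def total_cost(n: int, c: int = 1) -> float:
    # Total cost when each of the c multiplications at a size-n split
    # costs M(n) = n*log2(n). Solves to c * n * k*(k+1)/2 for n = 2^k,
    # i.e. Theta(n * (log n)^2) -- most multiplications are small.
    if n <= 1:
        return 0.0
    return c * n * math.log2(n) + 2 * total_cost(n // 2, c)
```

So the number of multiplications grows like n, but the cost is dominated by the few large ones near the top of the recursion, which is why the running time comes out around n (log n)^2 rather than n^2.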
(As for the cryptographic "constant time" notion, my understanding from the paper is that with a naive timing model -- e.g., assuming that all memory accesses take equal time -- their algorithm is exactly constant-time for fixed input size. They say in a footnote near the end of section 7: "There is considerable variation in the cycle counts, more than 3% between quartiles, presumably depending on the mapping from virtual addresses to physical addresses. There is no dependence on the secret input being inverted.")