My previous post raised some interest (i.e., one person asked me about it), so I decided to follow up on it a little. In a nutshell, I suggested using logarithms and exponentials to raise large matrices to some non-integer power, because exp(log(A)*x) is faster than A.^x, subject to the constraint that A>0 everywhere. On that note, I can confirm that it works in Octave as well as in Matlab. I wouldn't know about other programming languages, though; if you feel like trying it and letting me know in the comments, I'll write a new post about it.
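To give an idea of how I compare the two, here is a minimal timing sketch; the matrix size, the exponent and the variable names are just arbitrary choices of mine, not the result of any serious benchmark:

A = rand(2000) + 0.1;    % strictly positive matrix, as required by log
x = 2.5;                 % non-integer exponent

tic; B1 = A.^x;          t1 = toc;
tic; B2 = exp(log(A)*x); t2 = toc;

fprintf('A.^x: %.3fs  exp(log(A)*x): %.3fs  max diff: %g\n', ...
        t1, t2, max(abs(B1(:)-B2(:))));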
To that, I wanted to add something else. If we want to raise a large matrix to some integer power, it's worth using the exponentiation by squaring. The following Matlab code is significantly faster than the .^
operator already when the exponent is 4.
function y = pow(x,p)
% Element-wise integer power of x by exponentiation by squaring.
y = ones(size(x));
while (p > 0)
    if (mod(p,2) == 1)   % if the current bit of the exponent is set,
        y = y.*x;        % multiply it into the accumulated result
    end
    p = floor(p/2);      % move to the next bit of the exponent
    x = x.*x;            % square the base
end
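As a quick (and admittedly unscientific) check, something along these lines should already show pow beating .^ in Matlab; again, the size and the exponent below are arbitrary picks of mine:

A = rand(2000);
tic; B1 = A.^4;      t1 = toc;
tic; B2 = pow(A,4);  t2 = toc;
fprintf('.^: %.3fs  pow: %.3fs  max diff: %g\n', t1, t2, max(abs(B1(:)-B2(:))));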
Interestingly, this gives no speed-up in Octave, which probably already implements a similar algorithm when it realises that the exponent is an integer. So, if you use Octave, keep going with your .^ operator.
In Matlab the code doesn't look great, probably because most of its operations are designed around double-precision floating-point arithmetic, while here we keep halving an exponent that is conceptually an integer. The same function written in C looks a bit neater.
float powN(float x, unsigned int p)
{
    float y = 1.0f;
    while (p > 0) {
        if (p & 1)      /* the current bit of the exponent is set */
            y *= x;
        x *= x;         /* square the base */
        p >>= 1;        /* move to the next bit */
    }
    return y;
}
Here we can use bitwise operators directly, both in the condition and in place of the division by 2. That said, this is a very low-level operation which is likely to be implemented more efficiently in whatever library you're using (except for Matlab, it seems), so I'd recommend double-checking first: it may very well be that a vectorised version outperforms this one.