Modular exponentiation

From Wikipedia, the free encyclopedia

Modular exponentiation is a type of exponentiation performed over a modulus. It is particularly useful in computer science, especially in the field of cryptography.

Doing a "modular exponentiation" means calculating the remainder when dividing by a positive integer m (called the modulus) a positive integer b (called the base) raised to the e-th power (e is called the exponent). In other words, problems take the form where given base b, exponent e, and modulus m, one wishes to calculate c such that:

c \equiv b^e \pmod{m}

For example, given b = 5, e = 3, and m = 13, the solution c is the remainder of dividing 5^3 = 125 by 13, which works out to be 8.

If b, e, and m are non-negative and b < m, then a unique solution c exists and has the property 0 ≤ c < m.

Modular exponentiation can be performed with a negative exponent e by finding the multiplicative inverse d of b modulo m using the extended Euclidean algorithm. That is:

c \equiv b^{e} \equiv d^{\left|e\right|} \pmod{m} where e < 0 and b \cdot d \equiv 1 \pmod{m}
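When working in a language with arbitrary-precision integers, this can be sketched as follows (an illustrative Python sketch; the helper names modinv and modpow_signed are chosen here and are not standard library functions):

def modinv(b, m):
   # Extended Euclidean algorithm: returns d such that (b * d) % m == 1, when it exists.
   old_r, r = b, m
   old_s, s = 1, 0
   while r != 0:
      q = old_r // r
      old_r, r = r, old_r - q * r
      old_s, s = s, old_s - q * s
   if old_r != 1:
      raise ValueError("b has no inverse modulo m")
   return old_s % m

def modpow_signed(b, e, m):
   # b**e mod m, allowing a negative exponent when gcd(b, m) == 1.
   if e < 0:
      b, e = modinv(b, m), -e
   return pow(b, e, m)  # built-in three-argument pow performs the modular exponentiation

print(modpow_signed(4, -1, 497))  # 373, since 4 * 373 = 1492 = 3 * 497 + 1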

Modular exponentiation problems similar to the one described above are considered easy to do, even if the numbers involved are enormous. On the other hand, computing the discrete logarithm (finding e given b, c, and m) is believed to be difficult. This one-way function behavior makes modular exponentiation a good candidate for use in cryptographic algorithms.


Straightforward method

The most straightforward method of calculating a modular exponent is to calculate b^e directly, then to take this number modulo m. Consider trying to compute c, given b = 4, e = 13, and m = 497:

c \equiv 4^{13} \pmod{497}

One could use a calculator to compute 4^13; this comes out to 67,108,864. Taking this value modulo 497, the answer c is determined to be 445.

Note that b is only one digit in length and that e is only two digits in length, but the value b^e is 8 digits in length.
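As a quick check of this example in Python (whose integers have arbitrary precision, so the full power can be formed directly for inputs this small):

b, e, m = 4, 13, 497
c = (b ** e) % m   # form the full power first, then reduce
print(b ** e, c)   # 67108864 445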

In strong cryptography, b is often at least 256 binary digits (77 decimal digits). Consider b = 5 × 10^76 and e = 17, both of which are perfectly reasonable values. In this example, b is 77 digits in length and e is 2 digits in length, but the value b^e is 1,304 decimal digits in length. Such calculations are possible on modern computers, but the sheer magnitude of such numbers causes the speed of calculations to slow considerably. As b and e increase even further to provide better security, the value b^e becomes unwieldy.

The time required to perform the exponentiation depends on the operating environment and the processor. The method described above requires O(e) multiplications to complete.

Memory-efficient method

A second method to compute modular exponentiation requires more operations than the first method. Because the intermediate results are repeatedly reduced modulo m, however, the required memory footprint is substantially less and each operation takes less time than before. The end result is that the algorithm is faster.

This algorithm makes use of the fact that, given two integers a and b, the following two equations are equivalent:

c \equiv (a \cdot b) \pmod{m}
c \equiv ((a\ (\mbox{mod}\ m)) \cdot (b\ (\mbox{mod}\ m))) \pmod{m}
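For instance, with a = 7, b = 9, and m = 5: (7 * 9) mod 5 = 63 mod 5 = 3, while ((7 mod 5) * (9 mod 5)) mod 5 = (2 * 4) mod 5 = 3.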

The algorithm is as follows:

  1. Set c = 1, e' = 0.
  2. Increase e' by 1.
  3. Set c \equiv (b \cdot c) \pmod{m}.
  4. If e' < e, goto step 2. Else, c contains the correct solution to c \equiv b^e \pmod{m}.

Note that in every pass through step 3, the equation c \equiv b^{e'} \pmod{m} holds true. When step 3 has been executed e times, then, c contains the answer that was sought. In summary, this algorithm counts e' up by ones until e' reaches e, multiplying by b and reducing modulo m each time it adds one (to ensure the results stay small).
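A direct Python transcription of these steps might look like the following sketch (the function name modpow_iterative is illustrative only):

def modpow_iterative(b, e, m):
   # Steps 2-4: multiply by b and reduce modulo m, e times in total.
   c = 1
   for _ in range(e):
      c = (b * c) % m
   return c

print(modpow_iterative(4, 13, 497))  # 445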

The example b = 4, e = 13, and m = 497 is presented again. The algorithm passes through step 3 thirteen times:

  • e' = 1. c = (1 * 4) mod 497 = 4 mod 497 = 4.
  • e' = 2. c = (4 * 4) mod 497 = 16 mod 497 = 16.
  • e' = 3. c = (16 * 4) mod 497 = 64 mod 497 = 64.
  • e' = 4. c = (64 * 4) mod 497 = 256 mod 497 = 256.
  • e' = 5. c = (256 * 4) mod 497 = 1024 mod 497 = 30.
  • e' = 6. c = (30 * 4) mod 497 = 120 mod 497 = 120.
  • e' = 7. c = (120 * 4) mod 497 = 480 mod 497 = 480.
  • e' = 8. c = (480 * 4) mod 497 = 1920 mod 497 = 429.
  • e' = 9. c = (429 * 4) mod 497 = 1716 mod 497 = 225.
  • e' = 10. c = (225 * 4) mod 497 = 900 mod 497 = 403.
  • e' = 11. c = (403 * 4) mod 497 = 1612 mod 497 = 121.
  • e' = 12. c = (121 * 4) mod 497 = 484 mod 497 = 484.
  • e' = 13. c = (484 * 4) mod 497 = 1936 mod 497 = 445.

The final answer for c is therefore 445, as in the first method.

Like the first method, this requires O(e) multiplications to complete. However, since the numbers used in these calculations are much smaller than the numbers used in the first algorithm's calculations, the computation time decreases by a factor of at least O(e) in this method.

An efficient method: the right-to-left binary algorithm

A third method drastically reduces both the number of operations and the memory footprint required to perform modular exponentiation. It is a combination of the previous method and a more general principle called exponentiation by squaring (also known as binary exponentiation).

First, it is required that the exponent e be converted to binary notation. That is, e can be written as:

e = \sum_{i=0}^{n-1} a_i 2^i

In such notation, the length of e is n bits. a_i can take the value 0 or 1 for any i such that 0 ≤ i < n - 1. By definition, a_{n-1} = 1.

The value b^e can then be written as:

b^e = b^{\left( \sum_{i=0}^{n-1} a_i 2^i \right)} = \prod_{i=0}^{n-1} \left( b^{2^i} \right) ^ {a_i}

The solution c is therefore:

c \equiv \prod_{i=0}^{n-1} \left( b^{2^i} \right) ^ {a_i}\ (\mbox{mod}\ m)
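For example, with e = 13 the binary representation is 1101, so only the terms whose bit a_i equals 1 appear in the product:

b^{13} = b^{2^3} \cdot b^{2^2} \cdot b^{2^0} = b^8 \cdot b^4 \cdot b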

The following example is written in C# / C++ style. Let the class Bignum represent an arbitrarily large positive integer. Input b is the base, e is the exponent, and m is the modulus.

Bignum modpow(Bignum b, Bignum e, Bignum m) {
   Bignum result = 1;
   while (e > 0) {
      if ((e & 1) == 1) result = (result * b) % m; // multiply in this bit's contribution while using modulus to keep result small
      e >>= 1;         // move to the next bit of the exponent
      b = (b * b) % m; // square the base (reduced mod m) for the next bit
   }
   return result;
}

This code, based on that on page 244 of Bruce Schneier's Applied Cryptography, 2e, ISBN 0-471-11709-9, uses a single while loop to perform all work necessary to compute the modular exponentiation.

Note that upon entering the loop for the first time, the code variable b is equivalent to b. However, the repeated squaring in the third line of code ensures that at the completion of every loop, the variable b is equivalent to b^{2^i}\ (\mbox{mod}\ m), where i is the number of times the loop has been iterated. (This makes i the index of the next working bit of the binary exponent e, whose least-significant bit is bit 0.)

The first line of code inside the loop simply carries out the multiplication in \prod_{i=0}^{n-1} \left( b^{2^i} \right) ^ {a_i}\ (\mbox{mod}\ m). If a_i is zero, no code executes, since this would effectively multiply the running total by one. If a_i instead is one, the variable b (which contains the value b^{2^i}\ (\mbox{mod}\ m) of the original base) is simply multiplied in.

Finally, the example b = 4, e = 13, and m = 497 is put through this test. Note that e is 1101 in binary notation. Because e is four binary digits in length, the loop executes only four times:

  • Upon entering the loop for the first time, variables b = 4, e = 1101 (binary), and result = 1. Because the right-most bit of e is 1, result is changed to be (1 * 4) % 497, or 4. e is right-shifted to become 110 (binary), and b is squared to be (4 * 4) % 497, or 16.
  • The second time through the loop, the right-most bit of e is 0, causing result to retain its present value of 4. e is right-shifted to become 11 (binary), and b is squared to be (16 * 16) % 497, or 256.
  • The third time through the loop, the right-most bit of e is 1. result is changed to be (4 * 256) % 497, or 30. e is right-shifted to become 1, and b is squared to be (256 * 256) % 497, or 429.
  • The fourth time through the loop, the right-most bit of e is 1. result is changed to be (30 * 429) % 497, or 445. e is right-shifted to become 0, and b is squared to be (429 * 429) % 497, or 151.

The loop then terminates since e is zero, and the result 445 is returned. This agrees with the previous two algorithms.

The running time of this algorithm is O(log e). When working with large values of e, this offers a substantial speed benefit over both of the previous two algorithms.
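Many environments expose this technique directly; in Python, for example, the built-in three-argument pow performs modular exponentiation of this kind:

print(pow(4, 13, 497))  # 445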

Further optimizations: a left-to-right algorithm

The binary algorithm can also be implemented by scanning the exponent bits from left to right, accumulating the result in a single variable. The following example is written in Python.

def bits(integer): # Gets the number of bits in integer
   result = 0
   while integer:
      integer >>= 1
      result += 1
   return result

def power(base, exponent, modulo=None): # Allows fast exponentiation with and without a modulus
   result = 1
   if modulo is None:
      iteration = bits(exponent) - 1          # start at the most significant bit
      while iteration >= 0:
         result *= result                     # square for every bit
         if (exponent >> iteration) & 1:      # multiply in the base when the bit is set
            result *= base
         iteration -= 1
   else:
      firstModulus = base % modulo            # reduce the base once up front
      iteration = bits(exponent) - 1
      while iteration >= 0:
         result = (result * result) % modulo
         if (exponent >> iteration) & 1:
            result = (result * firstModulus) % modulo
         iteration -= 1
   return result
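Run against the running example, this routine agrees with the earlier methods:

print(power(4, 13, 497))  # 445
print(power(4, 13))       # 67108864, the unreduced power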

This method has the advantage that, after initialization, only the value in the register result changes (along with the loop counter iteration); the base and the modulus stay fixed. This generally makes it cheaper to implement when working with hardware cryptographic accelerators.

Note the line firstModulus = base % modulo. This is a small optimization: the base is reduced modulo m once, so every later multiplication by it involves a smaller operand. This optimization could be used in the previous code as well.

Both of these binary methods require one modular squaring for each bit in the binary representation of the exponent, and one modular multiplication for each set bit in the exponent.

With left-to-right scanning, the bit windowing optimization or sliding window optimization can be used to further reduce the number of multiplications.

Modular reductions such as the Montgomery reduction can further be used to speed up the calculation of the residue, assuming that the required precomputations have taken place.
