Binary logarithm

Plot of log2 x

In mathematics, the binary logarithm (log2 n) is the logarithm to base 2. It is the inverse function of n ↦ 2^n.
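
For example, lg 32 = 5, because 2^5 = 32, and lg 1 = 0.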

The binary logarithm is often used in computer science and information theory (where it is frequently written lg n, or ld n, from Latin logarithmus dualis), because it is closely connected to the binary numeral system. The number of digits (bits) in the binary representation of a positive integer n is the integral part of 1 + lg n, i.e.

⌊lg n⌋ + 1.
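
This relationship can be checked directly in Python (a small illustrative snippet; int.bit_length is the standard method that returns the number of binary digits of an integer):

import math

# The number of binary digits of a positive integer n equals floor(lg n) + 1.
for n in [1, 2, 3, 255, 256, 1000]:
    assert n.bit_length() == math.floor(math.log2(n)) + 1
    print(n, bin(n), n.bit_length())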

In information theory, the definition of the amount of self-information and information entropy involves the binary logarithm; this is needed because the unit of information, the bit, refers to information resulting from an occurrence of one of two equally probable alternatives.
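
For example, the self-information of an event that occurs with probability 1/2 is −lg(1/2) = 1 bit, and the entropy of a fair coin toss is −(1/2) lg(1/2) − (1/2) lg(1/2) = 1 bit.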

The binary logarithm also frequently appears in the analysis of algorithms. If a number n greater than 1 is divided by 2 repeatedly, the number of iterations needed to reduce it to a value of at most 1 is ⌈lg n⌉, the binary logarithm rounded up to the nearest integer. This idea is used in the analysis of several algorithms and data structures. For example, in binary search, the size of the problem to be solved is halved with each iteration, and therefore roughly lg n iterations are needed to obtain a problem of size 1, which is solved easily in constant time. Similarly, a perfectly balanced binary search tree containing n elements has lg(n + 1) levels.
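
The halving count can be illustrated with a short Python sketch (the helper count_halvings is named here only for this example):

import math

def count_halvings(n):
    # Repeatedly halve n (assumed greater than 1) until the value
    # is at most 1, counting the number of divisions performed.
    count = 0
    while n > 1:
        n /= 2
        count += 1
    return count

for n in [2, 5, 8, 1000]:
    # The count equals the binary logarithm rounded up to an integer.
    print(n, count_halvings(n), math.ceil(math.log2(n)))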

However, the running time of an algorithm is usually expressed in big O notation, ignoring constant factors. Since log2 n = (1/logk 2) logk n, where k can be any number greater than 1, algorithms that run in O(log2 n) time can also be said to run in, say, O(log13 n) time. The base of the logarithm in expressions such as O(log n) or O(n log n) is therefore not important. In other contexts, though, the base of the logarithm needs to be specified. For example, O(2^(lg n)) is not the same as O(2^(ln n)), because the former is equal to O(n) and the latter to O(n^0.6931...).
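
To see the last claim, note that 2^(ln n) = (e^(ln 2))^(ln n) = (e^(ln n))^(ln 2) = n^(ln 2) = n^0.6931..., whereas 2^(lg n) is simply n.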

Algorithms with running time O(n lg n) are sometimes called linearithmic. Typical examples of algorithms with running time O(lg n) or O(n lg n) are binary search and comparison-based sorting algorithms such as heapsort and merge sort.

Using calculators

An easy way to calculate log2(n) on calculators that do not have a log2 function is to use the natural logarithm (ln) or the common logarithm (log) function, both of which are found on most scientific calculators. The change-of-base formulae are:

log2(n) = ln(n)/ln(2) = log(n)/log(2)

so

log2(n) = ln(n) × 1.442695... = log10(n) × 3.321928...

This gives rise to the curiosity that log2(n) is within 0.6% of ln(n) + log10(n).
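
The same change-of-base identity carries over to programming languages; for example, in Python (a small illustrative snippet, with the dedicated math.log2 function of Python 3.3 and later shown for comparison):

import math

x = 4.5
# Three equivalent ways of computing the binary logarithm of x.
print(math.log(x) / math.log(2))      # via the natural logarithm
print(math.log10(x) / math.log10(2))  # via the common logarithm
print(math.log2(x))                   # dedicated function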

Numerical value

The numerical value of the binary logarithm of a positive real number can be calculated with the following algorithm, which first scales its argument into the interval [1, 2) to find the integer part and then extracts the fractional bits by repeated squaring.[1]

#!/usr/bin/env python3

def log2(x):
    """Return the binary logarithm of a positive real number x."""
    epsilon = 1.0 / (10 ** 12)  # precision of the fractional part

    # Scale x into the interval [1, 2) by repeated doubling or halving;
    # the number of steps gives the integer part of the logarithm.
    integer_value = 0
    while x < 1:
        integer_value -= 1
        x *= 2
    while x >= 2:
        integer_value += 1
        x /= 2

    # Extract the fractional bits: squaring a number in [1, 2) doubles
    # its logarithm, so comparing the square with 2 yields the next
    # binary digit of the fractional part.
    decfrac = 0.0
    partial = 0.5
    x *= x
    while partial > epsilon:
        if x >= 2:
            decfrac += partial
            x /= 2
        partial /= 2
        x *= x

    return integer_value + decfrac

if __name__ == '__main__':
    value = 4.5
    print("     X  =", value)
    # Format to 11 decimal places, matching the precision shown below.
    print("LOG2(X) = {:.11f}".format(log2(value)))

# Sample output
#
#    $ python3 log2.py
#         X  = 4.5
#    LOG2(X) = 2.16992500144
#
References
