Multiplication algorithm
A multiplication algorithm is an algorithm to multiply two numbers. Depending on the size of the numbers, different algorithms are used. Efficient multiplication algorithms have existed since the advent of the decimal system.
Grid method
The grid method is an introductory method for multiple-digit multiplication that is often taught to pupils at primary school or elementary school. It has been a standard part of the national primary school mathematics curriculum in England and Wales since the late 1990s.

Both factors are broken up into their hundreds, tens and units parts, and the products of the parts are then calculated explicitly in a relatively simple multiplication-only stage, before these contributions are then totalled to give the final answer in a separate addition stage.
The calculation 34 × 13, for example, could be computed using the grid:

   ×    30     4
  10   300    40
   3    90    12

followed by addition to obtain 442, either in a single sum:

  300
   40
   90
 + 12
 ————
  442

or through forming the row-by-row totals (300 + 40) + (90 + 12) = 340 + 102 = 442.
This calculation approach is also known as the partial products algorithm. Its essence is the calculation of the simple multiplications separately, with all addition being left to the final gathering-up stage.
The grid method can in principle be applied to factors of any size, although the number of sub-products becomes cumbersome as the number of digits increases. Nevertheless, it is seen as a usefully explicit method to introduce the idea of multiple-digit multiplications; and, in an age when most multiplication calculations are done using a calculator or a spreadsheet, it may in practice be the only multiplication algorithm that some students will ever need.
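To make the two stages concrete, here is a minimal Python sketch of the partial products idea (the function name and digit-splitting helper are our own, not part of any curriculum material):

def partial_products(x: int, y: int) -> int:
    """Grid-method multiplication: split both factors into place-value
    parts, multiply every pair of parts, then add everything at the end."""
    def parts(n):
        # 347 -> [7, 40, 300]
        return [int(d) * 10**i for i, d in enumerate(reversed(str(n))) if d != '0']
    products = [a * b for a in parts(x) for b in parts(y)]  # multiplication-only stage
    return sum(products)                                    # separate addition stage

assert partial_products(34, 13) == 300 + 40 + 90 + 12 == 442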
Long multiplication
If a positional numeral system is used, a natural way of multiplying numbers is taught in schools as long multiplication, sometimes called grade-school multiplication, sometimes called the standard algorithm:
multiply the multiplicand by each digit of the multiplier and then add up all the properly shifted results. It requires memorization of the multiplication table for single digits.
This is the usual algorithm for multiplying larger numbers by hand in base 10. Computers initially used a very similar shift and add algorithm in base 2, but modern processors have optimized circuitry for fast multiplications using more efficient algorithms, at the price of a more complex hardware realization. A person doing long multiplication on paper will write down all the products and then add them together; an abacus-user will sum the products as soon as each one is computed.
Example
This example uses long multiplication to multiply 23,958,233 by 5,830 and arrives at 139,676,498,390 for the result.

      23958233
  ×       5830
  ————————————
      00000000   ( = 23,958,233 ×     0)
     71874699    ( = 23,958,233 ×    30)
   191665864     ( = 23,958,233 ×   800)
+ 119791165      ( = 23,958,233 × 5,000)
  ————————————
  139676498390
The pseudocode below describes the process of the above multiplication. It keeps only one row to maintain the sum, which finally becomes the result. Note that the '+=' operator is used to denote sum to existing value and store operation for compactness.
multiply(a[1..p], b[1..q], base)                            // Operands containing rightmost digits at index 1
  product = [1..p+q]                                        // Allocate space for result
  for b_i = 1 to q                                          // for all digits in b
    carry = 0
    for a_i = 1 to p                                        // for all digits in a
      product[a_i + b_i - 1] += carry + a[a_i] * b[b_i]
      carry = product[a_i + b_i - 1] / base
      product[a_i + b_i - 1] = product[a_i + b_i - 1] mod base
    product[b_i + p] = carry                                // last digit comes from final carry
  return product
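For concreteness, here is a direct Python rendering of this pseudocode (naming is ours; Python lists are 0-indexed, so the rightmost digit sits at index 0 instead of 1):

def long_multiply(a, b, base=10):
    """Schoolbook multiplication of two digit lists (least significant digit first)."""
    product = [0] * (len(a) + len(b))          # allocate space for the result
    for bi, bd in enumerate(b):                # for all digits in b
        carry = 0
        for ai, ad in enumerate(a):            # for all digits in a
            product[ai + bi] += carry + ad * bd
            carry = product[ai + bi] // base
            product[ai + bi] %= base
        product[bi + len(a)] = carry           # last digit comes from the final carry
    return product

# 23958233 × 5830, digits written least significant first:
digits = long_multiply([3, 3, 2, 8, 5, 9, 3, 2], [0, 3, 8, 5])
assert int(''.join(map(str, reversed(digits)))) == 139676498390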
Optimizing space complexity
Let n be the total number of digits in the two input numbers in base D. If the result must be kept in memory then the space complexity is trivially Θ(n). However, in certain applications, the entire result need not be kept in memory and instead the digits of the result can be streamed out as they are computed. In these scenarios, long multiplication has the advantage that it can easily be formulated as a log space algorithm; that is, an algorithm that only needs working space proportional to the logarithm of the number of digits in the input. This is the double logarithm of the numbers being multiplied themselves. Note that the operands themselves still need to be kept in memory and their Θ(n) space is not considered in this analysis.

The method is based on the observation that each digit of the result can be computed from right to left with only knowing the carry from the previous step. Let a_i and b_i be the i-th digits of the operands, with a and b padded on the left by zeros to be length n, let r_i be the i-th digit of the result, and let c_i be the carry generated while computing r_i (with c_0 = 0). Then

r_i = (c_{i−1} + Σ_{j+k=i+1} a_j · b_k) mod D

or, for the carry,

c_i = ⌊(c_{i−1} + Σ_{j+k=i+1} a_j · b_k) / D⌋
A simple inductive argument shows that the carry can never exceed n · D and the total sum for r_i can never exceed D² · n: the carry into the first column is zero, and for all other columns there are at most n digit products in the column, each less than D², plus a carry of at most n · D from the previous column. The sum is then at most D² · n, and the carry to the next column is at most D² · n / D, or n · D. Thus both these values can be stored in O(log n) digits (for a fixed base D).
In pseudocode, the log-space algorithm is:
multiply(a[1..p], b[1..q], base)                   // Operands containing rightmost digits at index 1
  tot = 0
  for ri = 1 to p + q - 1                          // For each digit of result
    for bi = MAX(1, ri - p + 1) to MIN(ri, q)      // Digits from b that need to be considered
      ai = ri - bi + 1                             // Digits from a follow "symmetry"
      tot = tot + (a[ai] * b[bi])
    product[ri] = tot mod base
    tot = floor(tot / base)                        // Carry for the next digit
  product[p + q] = tot mod base                    // Last digit of the result comes from the last carry
  return product
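The same idea in Python, written as a generator so the result digits really are streamed out as they are computed (names are ours; digit lists are least significant first and 0-indexed):

def stream_multiply(a, b, base=10):
    """Yield the digits of a*b from least to most significant, keeping only
    the running total and carry as working state besides the operands."""
    p, q = len(a), len(b)
    tot = 0
    for ri in range(p + q - 1):                        # for each digit of the result
        for bi in range(max(0, ri - p + 1), min(ri, q - 1) + 1):
            tot += a[ri - bi] * b[bi]                  # all pairs with ai + bi == ri
        yield tot % base
        tot //= base                                   # carry for the next digit
    yield tot % base                                   # final carry

digits = list(stream_multiply([3, 3, 2, 8, 5, 9, 3, 2], [0, 3, 8, 5]))
assert int(''.join(map(str, reversed(digits)))) == 139676498390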
Usage in computers
Some chips implement long multiplication, in hardware or in microcode, for various integer and floating-point word sizes. In arbitrary-precision arithmetic, it is common to use long multiplication with the base set to 2^w, where w is the number of bits in a word, for multiplying relatively small numbers.

To multiply two numbers with n digits using this method, one needs about n² operations. More formally: using a natural size metric of number of digits, the time complexity of multiplying two n-digit numbers using long multiplication is Θ(n²).
When implemented in software, long multiplication algorithms must deal with overflow during additions, which can be expensive. A typical solution is to represent the number in a small base, b, such that, for example, 8b is a representable machine integer. Several additions can then be performed before an overflow occurs. When the number becomes too large, we add part of it to the result, or we carry and map the remaining part back to a number that is less than b. This process is called normalization. Richard Brent used this approach in his Fortran package, MP.
Lattice multiplication
Lattice, or sieve, multiplication is algorithmically equivalent to long multiplication. It requires the preparation of a lattice which guides the calculation and separates all the multiplications from the additions. It was introduced to Europe in 1202 in Fibonacci's Liber Abaci. Fibonacci described the operation as mental, using his right and left hands to carry the intermediate calculations. Matrakçı Nasuh presented 6 different variants of this method in his 16th-century book, Umdet-ul Hisab. It was widely used in Enderun schools across the Ottoman Empire. Napier's bones, or Napier's rods, also used this method, as published by Napier in 1617, the year of his death.

As shown in the example, the multiplicand and multiplier are written above and to the right of a lattice, or a sieve. It is found in Muhammad ibn Musa al-Khwarizmi's "Arithmetic", one of Leonardo's sources mentioned by Sigler, author of "Fibonacci's Liber Abaci", 2002.
- During the multiplication phase, the lattice is filled in with two-digit products of the corresponding digits labeling each row and column: the tens digit goes in the top-left corner.
- During the addition phase, the lattice is summed on the diagonals.
- Finally, if a carry phase is necessary, the answer as shown along the left and bottom sides of the lattice is converted to normal form by carrying ten's digits as in long addition or multiplication.
Example
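The example in the original is an illustration; as a textual stand-in, here is a small Python sketch (our own construction) that fills a lattice for 58 × 213 and sums its diagonals, yielding 12354:

def lattice_multiply(x: int, y: int) -> int:
    """Multiply via the lattice: fill each cell with a two-digit product,
    then sum the diagonals from the bottom-right corner, carrying as needed."""
    xs = [int(d) for d in str(x)]                  # digits across the top
    ys = [int(d) for d in str(y)]                  # digits down the right side
    cells = [[xd * yd for xd in xs] for yd in ys]  # multiplication phase
    digits, carry = [], 0
    # addition phase: sum anti-diagonals, starting from the bottom-right
    for s in range(len(xs) + len(ys) - 2, -1, -1):
        total = carry + sum(cells[r][c] for r in range(len(ys))
                            for c in range(len(xs)) if r + c == s)
        digits.append(total % 10)                  # digit along the left/bottom edge
        carry = total // 10                        # carry into the next diagonal
    while carry:                                   # final carry phase
        digits.append(carry % 10)
        carry //= 10
    return int(''.join(str(d) for d in reversed(digits)))

assert lattice_multiply(58, 213) == 12354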
Binary or Peasant multiplication
The binary method is also known as peasant multiplication, because it has been widely used by people who are classified as peasants and thus have not memorized the multiplication tables required for long multiplication. The algorithm was in use in ancient Egypt. Its main advantages are that it can be taught quickly, requires no memorization, and can be performed using tokens, such as poker chips, if paper and pencil aren't available. The disadvantage is that it takes more steps than long multiplication, so it can be unwieldy for large numbers.
Description
On paper, write down in one column the numbers you get when you repeatedly halve the multiplier, ignoring the remainder; in a column beside it repeatedly double the multiplicand. Cross out each row in which the last digit of the first number is even, and add the remaining numbers in the second column to obtain the product.
Examples
This example uses peasant multiplication to multiply 11 by 3 to arrive at a result of 33.

Decimal:        Binary:
11    3         1011       11
 5    6          101      110
 2   12           10     1100    (even row: struck out)
 1   24            1    11000
     ——                 ——————
     33                 100001
Describing the steps explicitly:
- 11 and 3 are written at the top
- 11 is halved (5.5) and 3 is doubled (6). The fractional portion is discarded (5.5 becomes 5).
- 5 is halved (2.5) and 6 is doubled (12). The fractional portion is discarded (2.5 becomes 2). The figure in the left column (2) is even, so the figure in the right column (12) is discarded.
- 2 is halved (1) and 12 is doubled (24).
- All not-scratched-out values are summed: 3 + 6 + 24 = 33.
A more complicated example, using the figures from the earlier examples (23,958,233 and 5,830):
Decimal:               Binary:
5830   23958233        1011011000110   1011011011001001011011001              (even row: struck out)
2915   47916466        101101100011    10110110110010010110110010
1457   95832932        10110110001     101101101100100101101100100
728    191665864       1011011000      1011011011001001011011001000           (even row: struck out)
364    383331728       101101100       10110110110010010110110010000          (even row: struck out)
182    766663456       10110110        101101101100100101101100100000         (even row: struck out)
91     1533326912      1011011         1011011011001001011011001000000
45     3066653824      101101          10110110110010010110110010000000
22     6133307648      10110           101101101100100101101100100000000      (even row: struck out)
11     12266615296     1011            1011011011001001011011001000000000
5      24533230592     101             10110110110010010110110010000000000
2      49066461184     10              101101101100100101101100100000000000   (even row: struck out)
1      98132922368     1               1011011011001001011011001000000000000
       ————————————                    1022143253354344244353353243222210110
       139676498390                    10000010000101010111100011100111010110
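The halving-and-doubling rule is only a few lines of Python (a minimal sketch, with our own function name); it reproduces both results above:

def peasant_multiply(x: int, y: int) -> int:
    """Halve x (discarding remainders) while doubling y; sum the y values
    on the rows where x is odd (the rows that are not crossed out)."""
    total = 0
    while x >= 1:
        if x % 2 == 1:          # last digit of x is odd: keep this row
            total += y
        x //= 2                 # halve, ignoring the remainder
        y *= 2                  # double
    return total

assert peasant_multiply(11, 3) == 33
assert peasant_multiply(5830, 23958233) == 139676498390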
Binary multiplication in computers
This is a variation of peasant multiplication.

In base 2, long multiplication reduces to a nearly trivial operation. For each '1' bit in the multiplier, shift the multiplicand by an appropriate amount, and then sum the shifted values. In some processors, it is faster to use bit shifts and additions rather than multiplication instructions, especially if the multiplier is small or always the same.
Shift and add
Historically, computers used a "shift and add" algorithm to multiply small integers. Both base 2 long multiplication and base 2 peasant multiplication reduce to this same algorithm.

In base 2, multiplying by the single digit of the multiplier reduces to a simple series of logical AND operations. Each partial product is added to a running sum as soon as each partial product is computed. Most currently available microprocessors implement this or other similar algorithms for various integer and floating-point sizes in hardware multipliers or in microcode.
On currently available processors, a bit-wise shift instruction is faster than a multiply instruction and can be used to multiply and divide by powers of two. Multiplication by a constant and division by a constant can be implemented using a sequence of shifts and adds or subtracts. For example, there are several ways to multiply by 10 using only bit-shift and addition.
((x << 2) + x) << 1   # Here 10*x is computed as (x*2^2 + x)*2
(x << 3) + (x << 1)   # Here 10*x is computed as x*2^3 + x*2
In some cases such sequences of shifts and adds or subtracts will outperform hardware multipliers and especially dividers. A division by a number of the form 2^n or 2^n ± 1 often can be converted to such a short sequence.
These types of sequences must always be used on computers that do not have a "multiply" instruction, and can also be used by extension to floating-point numbers if one replaces the shifts with computation of 2*x as x + x, as these are logically equivalent.
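As an illustration of that last remark, here is the first shift sequence above rewritten for floating point with every doubling expressed as an addition (a toy sketch of the technique, names ours):

def times10(x: float) -> float:
    """10*x via additions only: ((x*2^2) + x)*2, each doubling written as x + x."""
    d = x + x                # 2x
    d = d + d                # 4x  (the << 2 above)
    s = d + x                # 5x
    return s + s             # 10x (the final << 1)

assert times10(3.25) == 32.5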
Quarter square multiplication
Two quantities can be multiplied using quarter squares by employing the following identity involving the floor function that some sources attribute to Babylonian mathematics:

⌊(x + y)²/4⌋ − ⌊(x − y)²/4⌋ = ((x + y)² − (x − y)²)/4 = x·y

If one of x + y and x − y is odd, the other is odd too; this means that the fractions, if any, will cancel out, and discarding the remainders does not introduce any error. Below is a lookup table of quarter squares with the remainder discarded for the digits 0 through 18; this allows for the multiplication of numbers up to 9 × 9.

n        0  1  2  3  4  5  6   7   8   9  10  11  12  13  14  15  16  17  18
⌊n²/4⌋   0  0  1  2  4  6  9  12  16  20  25  30  36  42  49  56  64  72  81
If, for example, you wanted to multiply 9 by 3, you observe that the sum and difference are 12 and 6 respectively. Looking both those values up on the table yields 36 and 9, the difference of which is 27, which is the product of 9 and 3.
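In code, the table-driven scheme looks like the following Python sketch (our own; the table covers single-digit operands as in the text):

QUARTER_SQUARES = [n * n // 4 for n in range(19)]    # floor(n^2/4) for n = 0..18

def quarter_square_multiply(x: int, y: int) -> int:
    """Multiply two digits 0..9 using only the lookup table, one addition,
    one subtraction, and one difference of table entries."""
    return QUARTER_SQUARES[x + y] - QUARTER_SQUARES[abs(x - y)]

assert quarter_square_multiply(9, 3) == 36 - 9 == 27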
Antoine Voisin published a table of quarter squares from 1 to 1000 in 1817 as an aid in multiplication. A larger table of quarter squares from 1 to 100000 was published by Samuel Laundy in 1856, and a table from 1 to 200000 by Joseph Blater in 1888.
Quarter square multipliers were used in analog computers to form an analog signal that was the product of two analog input signals. In this application, the sum and difference of two input voltages are formed using operational amplifiers. The square of each of these is approximated using piecewise linear circuits. Finally the difference of the two squares is formed and scaled by a factor of one fourth using yet another operational amplifier.
In 1980, Everett L. Johnson proposed using the quarter square method in a digital multiplier. To form the product of two 8-bit integers, for example, the digital device forms the sum and difference, looks both quantities up in a table of squares, takes the difference of the results, and divides by four by shifting two bits to the right. For 8-bit integers the table of quarter squares will have 2⁹ − 1 = 511 entries (covering the full range 0..510 of possible sums, the differences using only the first 256 entries), each entry being 16 bits wide (the entry values run from 0 up to ⌊510²/4⌋ = 65025).
The quarter square multiplier technique has also benefitted 8-bit systems that do not have any support for a hardware multiplier. Charles Putney implemented this for the 6502.
Fast multiplication algorithms for large inputs
Complex multiplication algorithm
Complex multiplication normally involves four multiplications and two additions:

(a + bi)(c + di) = (ac − bd) + (bc + ad)i

But there is a way of reducing the number of multiplications to three. The product (a + bi) · (c + di) can be calculated in the following way:

k1 = c · (a + b)
k2 = a · (d − c)
k3 = b · (c + d)

Real part = k1 − k3
Imaginary part = k1 + k2
This algorithm uses only three multiplications, rather than four, and five additions or subtractions rather than two. If a multiply is more expensive than three adds or subtracts, as when calculating by hand, then there is a gain in speed. On modern computers a multiply and an add can take about the same time so there may be no speed gain. There is a trade-off in that there may be some loss of precision when using floating point.
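The three-multiplication scheme in Python, checked against the built-in complex product (the names k1, k2, k3 follow the formulas above):

def complex_multiply_3(a: float, b: float, c: float, d: float) -> complex:
    """(a + bi)(c + di) with three real multiplications and five additions/subtractions."""
    k1 = c * (a + b)
    k2 = a * (d - c)
    k3 = b * (c + d)
    return complex(k1 - k3, k1 + k2)   # (ac - bd) + (bc + ad)i

assert complex_multiply_3(1, 2, 3, 4) == complex(1, 2) * complex(3, 4)   # -5 + 10i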
For fast Fourier transforms the complex multiplies are by constant coefficients c + di, in which case two of the additions can be precomputed. Hence, only three multiplies and three adds are required. However, trading off a multiplication for an addition in this way may no longer be beneficial with modern floating-point units.
Karatsuba multiplication
For systems that need to multiply numbers in the range of several thousand digits, such as computer algebra systems and bignum libraries, long multiplication is too slow. These systems may employ Karatsuba multiplication, which was discovered in 1960. The heart of Karatsuba's method lies in the observation that two-digit multiplication can be done with only three rather than the four multiplications classically required. This is an example of what is now called a divide and conquer algorithm. Suppose we want to multiply two 2-digit base-m numbers, x1 · m + x2 and y1 · m + y2:
- compute x1 · y1, call the result F
- compute x2 · y2, call the result G
- compute (x1 + x2) · (y1 + y2), call the result H
- compute H − F − G, call the result K; this number is equal to x1 · y2 + x2 · y1
- compute F · m² + K · m + G.
Karatsuba multiplication has a time complexity of O(n^(log₂ 3)) ≈ O(n^1.585), making this method significantly faster than long multiplication. Because of the overhead of recursion, Karatsuba's multiplication is slower than long multiplication for small values of n; typical implementations therefore switch to long multiplication for small values of n.
Karatsuba's algorithm was the first known algorithm for multiplication that is asymptotically faster than long multiplication, and can thus be viewed as the starting point for the theory of fast multiplications.
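A compact recursive Python sketch of the five steps (the cutoff and naming are ours; the base m is a power of 10 here):

def karatsuba(x: int, y: int) -> int:
    """Multiply nonnegative integers with three recursive half-size multiplications."""
    if x < 10 or y < 10:                      # small values: multiply directly
        return x * y
    half = max(len(str(x)), len(str(y))) // 2
    m = 10 ** half                            # split point: x = x1*m + x2
    x1, x2 = divmod(x, m)
    y1, y2 = divmod(y, m)
    F = karatsuba(x1, y1)
    G = karatsuba(x2, y2)
    H = karatsuba(x1 + x2, y1 + y2)
    K = H - F - G                             # equals x1*y2 + x2*y1
    return F * m * m + K * m + G

assert karatsuba(23958233, 5830) == 139676498390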
In 1963, Peter Ungar suggested setting m to i to obtain a similar reduction in the complex multiplication algorithm. To multiply (a + b·i) · (c + d·i), follow these steps:
- compute b · d, call the result F
- compute a · c, call the result G
- compute (a + b) · (c + d), call the result H
- the imaginary part of the result is K = H − F − G = a · d + b · c
- the real part of the result is G − F = a · c − b · d
Toom–Cook
Another method of multiplication is called Toom–Cook or Toom-3. The Toom–Cook method splits each number to be multiplied into multiple parts. The Toom–Cook method is one of the generalizations of the Karatsuba method. A three-way Toom–Cook can do a size-3N multiplication for the cost of five size-N multiplications. This accelerates the operation by a factor of 9/5, while the Karatsuba method accelerates it by 4/3.

Although using more and more parts can reduce the time spent on recursive multiplications further, the overhead from additions and digit management also grows. For this reason, the method of Fourier transforms is typically faster for numbers with several thousand digits, and asymptotically faster for even larger numbers.
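Toom-3 in executable form is more intricate than Karatsuba; the following Python sketch (our own, using the well-known Bodrato interpolation sequence and an arbitrary 64-bit cutoff) shows the five evaluations and the exact-division interpolation:

def toom3(x: int, y: int) -> int:
    """Toom-3: split each operand into three parts, do five smaller
    multiplications at the points 0, 1, -1, -2, infinity, then interpolate."""
    sign = -1 if (x < 0) != (y < 0) else 1
    x, y = abs(x), abs(y)
    if x < 2**64 or y < 2**64:                 # small inputs: multiply directly
        return sign * x * y
    k = 1 + max(x.bit_length(), y.bit_length()) // 3
    mask = (1 << k) - 1
    x0, x1, x2 = x & mask, (x >> k) & mask, x >> (2 * k)   # x = x2*B² + x1*B + x0, B = 2^k
    y0, y1, y2 = y & mask, (y >> k) & mask, y >> (2 * k)
    # evaluation: five products of roughly one-third size
    w0 = toom3(x0, y0)                                     # value at 0
    w1 = toom3(x0 + x1 + x2, y0 + y1 + y2)                 # value at 1
    w2 = toom3(x0 - x1 + x2, y0 - y1 + y2)                 # value at -1
    w3 = toom3(x0 - 2*x1 + 4*x2, y0 - 2*y1 + 4*y2)         # value at -2
    w4 = toom3(x2, y2)                                     # value at infinity
    # interpolation (Bodrato's sequence); all divisions are exact
    r3 = (w3 - w1) // 3
    r1 = (w1 - w2) // 2
    r2 = w2 - w0
    r3 = (r2 - r3) // 2 + 2 * w4
    r2 = r2 + r1 - w4
    r1 = r1 - r3
    return sign * (w0 + (r1 << k) + (r2 << 2*k) + (r3 << 3*k) + (w4 << 4*k))

assert toom3(2**100 + 12345, 2**100 + 67890) == (2**100 + 12345) * (2**100 + 67890)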
Fourier transform methods
The basic idea due to Strassen is to use fast polynomial multiplication to perform fast integer multiplication. The algorithm was made practical and theoretical guarantees were provided in 1971 by Schönhage and Strassen, resulting in the Schönhage–Strassen algorithm. The details are the following: we choose the largest integer w that will not cause overflow during the process outlined below. Then we split the two numbers into m groups of w bits as follows

a = Σ_{i=0}^{m−1} a_i 2^{wi}   and   b = Σ_{i=0}^{m−1} b_i 2^{wi}.

We look at these numbers as polynomials in x, where x = 2^w, to get

a = a(x) = Σ_{i=0}^{m−1} a_i x^i   and   b = b(x) = Σ_{i=0}^{m−1} b_i x^i.

Then we can say that

ab = a(x) b(x) = c(x) = Σ_{k=0}^{2m−2} c_k x^k,   where c_k = Σ_{i+j=k} a_i b_j.

Clearly the above setting is realized by polynomial multiplication of two polynomials a and b. The crucial step now is to use fast Fourier multiplication of polynomials to realize the multiplications above faster than in naive O(m²) time.
To remain in the modular setting of Fourier transforms, we look for a ring with a (2m)-th root of unity. Hence we do multiplication modulo N (and thus in the ring Z/NZ). Further, N must be chosen so that there is no 'wrap around', essentially, no reductions modulo N occur. Thus, the choice of N is crucial. For example, it could be done as

N = 2^(3m) + 1.

The ring Z/NZ would thus have a (2m)-th root of unity, namely 8: indeed, 8^m = 2^(3m) ≡ −1 (mod N), so 8^(2m) ≡ 1 (mod N). Also, it can be checked that c_k < N, and thus no wrap around will occur.
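As a toy model of the transform approach, here is a floating-point FFT multiplication in Python (our own illustrative code, not the modular transform just described; rounding error limits it to modest sizes):

import cmath

def fft(a, invert=False):
    """Recursive radix-2 FFT; invert=True computes the un-normalized inverse."""
    n = len(a)
    if n == 1:
        return a[:]
    even, odd = fft(a[0::2], invert), fft(a[1::2], invert)
    ang = (2j if invert else -2j) * cmath.pi / n
    out = [0j] * n
    for i in range(n // 2):
        w = cmath.exp(ang * i)
        out[i] = even[i] + w * odd[i]
        out[i + n // 2] = even[i] - w * odd[i]
    return out

def fft_multiply(x: int, y: int, base: int = 10) -> int:
    """Multiply by convolving digit sequences with an FFT, then carrying."""
    a = [int(d) for d in str(x)[::-1]]            # least significant digit first
    b = [int(d) for d in str(y)[::-1]]
    n = 1
    while n < len(a) + len(b):                    # transform length: next power of two
        n *= 2
    fa = fft([complex(d) for d in a] + [0j] * (n - len(a)))
    fb = fft([complex(d) for d in b] + [0j] * (n - len(b)))
    coeffs = [round((v / n).real) for v in fft([u * w for u, w in zip(fa, fb)], invert=True)]
    result, carry = 0, 0
    for i, c in enumerate(coeffs):                # carrying restores proper digits
        carry += c
        result += (carry % base) * base ** i
        carry //= base
    return result

assert fft_multiply(23958233, 5830) == 139676498390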
The algorithm has a time complexity of Θ(n log n log log n) and is used in practice for numbers with more than 10,000 to 40,000 decimal digits. In 2007 this was improved by Martin Fürer to give a time complexity of n log n · 2^(Θ(log* n)) using Fourier transforms over complex numbers. Anindya De, Chandan Saha, Piyush Kurur and Ramprasad Saptharishi gave a similar algorithm using modular arithmetic in 2008 achieving the same running time. In context of the above material, what these latter authors have achieved is to find N much less than 2^(3m) + 1, so that Z/NZ has a (2m)-th root of unity. This speeds up computation and reduces the time complexity. However, these latter algorithms are only faster than Schönhage–Strassen for impractically large inputs.
In March 2019, David Harvey and Joris van der Hoeven released a paper describing an O(n log n) multiplication algorithm.
Using number-theoretic transforms instead of discrete Fourier transforms avoids rounding error problems by using modular arithmetic instead of floating-point arithmetic. In order to apply the factoring which enables the FFT to work, the length of the transform must be factorable to small primes and must be a factor of N − 1, where N is the field size. In particular, calculation using a Galois field GF(k²), where k is a Mersenne prime, allows the use of a transform sized to a power of 2; e.g. k = 2³¹ − 1 supports transform sizes up to 2³².
Lower bounds
There is a trivial lower bound of Ω(n) for multiplying two n-bit numbers on a single processor; no matching algorithm nor any sharper lower bound is known. Multiplication lies outside of AC0[p] for any prime p, meaning there is no family of constant-depth, polynomial-size circuits using AND, OR, NOT, and MOD_p gates that can compute a product. This follows from a constant-depth reduction of MOD_q to multiplication. Lower bounds for multiplication are also known for some classes of branching programs.
Polynomial multiplication
All the above multiplication algorithms can also be expanded to multiply polynomials. For instance the Strassen algorithm may be used for polynomial multiplication. Alternatively the Kronecker substitution technique may be used to convert the problem of multiplying polynomials into a single binary multiplication.
Long multiplication methods can be generalised to allow the multiplication of algebraic formulae:
14ac - 3ab + 2 multiplied by ac - ab + 1:

     14ac    -3ab     2
       ac     -ab     1
  ———————————————————————
   14a²c²  -3a²bc   2ac
  -14a²bc   3a²b²  -2ab
     14ac    -3ab     2
  ———————————————————————————————————————
   14a²c²  -17a²bc  16ac  3a²b²  -5ab  +2
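The same tableau can be computed mechanically; here is a small Python sketch (our own representation: a polynomial is a dict from monomials, stored as frozensets of (variable, power) pairs, to coefficients):

from collections import defaultdict

def mono_mul(m1, m2):
    """Multiply two monomials, e.g. ac · ab = a²bc (powers add)."""
    powers = defaultdict(int)
    for var, p in list(m1) + list(m2):
        powers[var] += p
    return frozenset(powers.items())

def poly_mul(p, q):
    """Each coefficient product is one cell of the tableau above."""
    out = defaultdict(int)
    for m1, c1 in p.items():
        for m2, c2 in q.items():
            out[mono_mul(m1, m2)] += c1 * c2
    return {m: c for m, c in out.items() if c}

mono = lambda *vs: frozenset((v, 1) for v in vs)
p = {mono('a', 'c'): 14, mono('a', 'b'): -3, frozenset(): 2}   # 14ac - 3ab + 2
q = {mono('a', 'c'): 1, mono('a', 'b'): -1, frozenset(): 1}    # ac - ab + 1
print(poly_mul(p, q))   # 14a²c² - 17a²bc + 3a²b² + 16ac - 5ab + 2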
As a further example of column-based multiplication, consider multiplying 23 long tons (t), 12 hundredweight (cwt) and 2 quarters (qtr) by 47. This example uses avoirdupois measures: 1 t = 20 cwt, 1 cwt = 4 qtr.
     t   cwt   qtr
    23    12     2
              × 47
  ————————————————
   161    84    94
   920   480
    29    23
  ————————————————
  1110   587    94
  ————————————————
  1110     7     2
First multiply the quarters by 47; the result, 94, is written into the first workspace. Next, multiply 12 × 47, but don't add up the partial results (84 and 480) yet. Likewise multiply 23 by 47. The quarters column is totaled and the result placed in the second workspace. 94 quarters is 23 cwt and 2 qtr, so place the 2 in the answer and put the 23 in the next column left. Now add up the three entries in the cwt column giving 587. This is 29 t 7 cwt, so write the 7 into the answer and the 29 in the column to the left. Now add up the tons column. There is no adjustment to make, so the result is just copied down.
The same layout and methods can be used for any traditional measurements and non-decimal currencies such as the old British £sd system.
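The carrying scheme is ordinary long multiplication in a mixed radix; a brief Python sketch (our own) of the same computation, parameterized by the unit ratios so it extends to other traditional measures:

def multiply_mixed(amounts, radices, n):
    """amounts: most significant unit first, e.g. [23, 12, 2] for 23 t 12 cwt 2 qtr;
    radices: units per next larger unit, e.g. [20, 4] (cwt per t, qtr per cwt)."""
    totals = [a * n for a in amounts]           # per-column products, carries deferred
    for i in range(len(totals) - 1, 0, -1):     # total each column from the right
        carry, totals[i] = divmod(totals[i], radices[i - 1])
        totals[i - 1] += carry                  # e.g. 94 qtr = 23 cwt 2 qtr
    return totals

assert multiply_mixed([23, 12, 2], [20, 4], 47) == [1110, 7, 2]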