Arithmetic Operations

Design Issues for Arithmetic Operations
Types of Operators
Arithmetic Operators by Language
Result of Arithmetic Operations
Division
Integer Truncation
Exponentiation
The Type of the Result
Negation, or the Lack of It
Questions
COBOL Problems with Minus Symbol
Modulus or Remainder Operation
PL/I mod and rem Functions
Questions
Increment and Decrement
Expression Evaluation Order
Associativity of Operations
Operand Evaluation vs. Precedence
Questions
Answers
Exponentiation
Operator Precedence
Modulus Operation Precedence
Operator Precedence for Several Languages
Coarseness of Operator Precedence
Questions
Mixed-Mode Arithmetic
Casts
Questions
Fixed Decimal Arithmetic (not near done)
Adjacent Operators
Spaces in Operators
Questions
NULL and Arithmetic (not near done)
Conclusion
Questions
Answers



Creative Commons License
This work is licensed under a Creative Commons Attribution-No Derivative Works 3.0 United States License.

Copyright: Dennie Van Tassel 2008.

Please send suggestions and comments to



Arithmetic Operations


At first glance most people would assume that all languages agree on the arithmetic operations. Subtraction, addition, and multiplication are universally agreed on. Beyond those three operators there is less agreement: the languages differ on what they do, the operators used, and the results obtained. In this chapter I will discuss some of these interesting differences. In early languages some operations were not clearly defined. Modern languages indicate much more clearly what they do, even if the languages do not agree on what is done.


But before we go into this interesting topic, we need a few definitions so we agree, at least in this chapter, what some terms or words mean. Here is a typical arithmetic expression:


   cost + profit


In this expression we have the arithmetic operator plus (+). Operators are used to indicate what type of arithmetic operation is needed, such as subtraction, multiplication, etc. The four most common operators are: +, -, *, and /. The operands in this expression are the variables cost and profit. Arithmetic operands are often variables, but operands can also be constants, expressions, parenthesized expressions, and many other things.


Design Issues for Arithmetic Operations

One might assume that arithmetic is done the same in most languages, which is partially true. But there are a lot of differences and here are some design issues:


  • What operators are provided? What operations are done by functions?
  • What precedence is used for operators?
  • What association is used for operations?
  • What results are provided for different types and signs?


As you will see as you read through this chapter, there are quite a few differences between language families.


Types of Operators

There are three categories of arithmetic operators, and they are classified according to the number of operands. The number of operands that are required is called the arity or adicity.


The first category of operators is the unary operators and they have an arity of one. Here are some unary operations:


   -B

   abs(B)

   count++
Unary or monadic operators have one operand, that is, only a variable (or operand) on one side of the operator. Thus their arity is one. For example, in the first line, which indicates negation, there is an operand (B) after the minus sign, but no operand before the minus sign. In the last line above, there is only an operand (count) before the increment (++) operator. These examples also illustrate that some operators are a single character, but other operators require more than one symbol to express the operation. Functions such as abs are a special type of “operator.”


The second category of operators is binary operators and they have an arity of two. Here are some binary operations:


     a + b

     d - e

     z * b

     1 / 2


Binary or dyadic operators have two operands, that is, variables (or expressions) on both sides of the operator. Thus in the above set of examples, on the first line the plus (+) symbol has a variable before and after it. Likewise, in the last example, there is a 1 before the division (/) symbol and a 2 after it. Another term for this type of operator is an infix operator, since the operator sits between its two operands.


The third category of operator is the ternary (3 parts) operator, which is rare outside the C family. The conditional expression (or ternary operator) has an arity of three. Here are examples of ternary statements from the C family:


   average = (count != 0) ? sum/count : 0;

   (color) ? color = false : color = true;


The conditional expression requires two operators (question mark and colon) and three (ternary) operands, loosely defined as the condition, true part, and false part. All three operands are required. If you compare this statement to the standard if-then-else statement, the question mark indicates the start of the then part, and the colon indicates the start of the else part. In the first example above the condition is (count != 0). The ternary operation is discussed in detail in the Conditional chapter.
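Python spells the same ternary idea with the condition in the middle. As a sketch, here is the first C example above translated into Python's conditional expression (the variable sum is renamed total, since sum is a Python built-in):

```python
count = 4
total = 10

# Equivalent of the C family's: average = (count != 0) ? total/count : 0;
# Python order: value_if_true if condition else value_if_false
average = total / count if count != 0 else 0
print(average)  # 2.5

count = 0
average = total / count if count != 0 else 0
print(average)  # 0 (the else branch; the division is never evaluated)
```

As in C, only one of the two value operands is evaluated, which is why the division by zero above never happens.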


We might say that named constants, such as PI and MAX, are zeroadic (zero operands) operations. At least it was a sneaky way to use that interesting word. Also, enumerations may be zeroadic.


There is one more item to point out and that is the placement or the fixity of the operator. We have already seen the infix operator, where the operator is between two operands. Prefix operators are placed before the operand. Here are some examples:


   -B        ++x


In contrast, postfix operators are placed after the operand, as these examples show:


   z++     count--


There are a few prefix operators in our regular life. Two are the plus (+7) and minus (–45) symbols. Mathematics has a few postfix operators, such as 5! for 5 factorial, 4’ for four feet, and 40° for 40 degrees. Can you think of any other prefix and postfix operators outside of programming languages? Finally, I think we have finished defining the terms needed for this chapter. You may find out later in this chapter you need to refer back to this section.


Arithmetic Operators by Language


Table x.1 lists arithmetic operators in several languages. As you can see, the languages agree on the operators used for addition, subtraction, multiplication, and, partially, division. For division, modulus, and exponentiation there is much less agreement. For example, both Pascal and VBScript have different operators for floating-point and integer division.


[Table content lost: rows for real division, integer division, and modulus (remainder division); several entries read “function used” where a language provides a function rather than an operator.]


Arithmetic Operators by Language

Table x.1


Result of Arithmetic Operations

Since all languages agree on the symbol and result for addition, subtraction, and multiplication, we will start our discussion where things begin to disagree. When we look at division, modulus, and exponentiation we will see they have different rules by language and even provide different results.



Division

Division is the first operation where we will look at how different languages process it. Division is more complicated than the other binary operators since we can have integer division (13/3 – no decimal points), floating-point division (4.32/7.25 – both have decimal points), and mixed-mode division (2/3.5 – only one has a decimal point). Each language has its own rules for these divisions.


FORTRAN and the C family will use truncation to calculate:


   5/2 → 2


since both operands are integers, integer arithmetic is done. In contrast, BASIC, JavaScript, and Perl will do the following:


   5/2 → 2.5


These languages do not see integers or reals, but just numbers. Still other languages (Pascal, VBScript) have one operator for real (floating-point) division and a different operator for integer division.


Pascal has two division operators using the slash for real division and div for integer division. Thus 4/3 will result in 1.333333 but 4 div 3 will result in 1. Likewise, VBScript uses different operators for real (/ slash) and integer (\ backslash) division.


Python has added floor division, which “rounds down.” It works with both integer and floating-point values. The operator for floor division is //. So we can do


   4.0 // 2.3  


produces the value 1.0, since 2.3 divides into 4.0 only once. Floor division solves the problem that integer division and float division produce different results for the same values. For example, in languages that have integer division, 1/2 will produce 0, while 1.0/2.0 will produce 0.5. With floor division, both 1//2 and 1.0//2.0 produce the value zero (0 and 0.0 respectively). Floor division rounds to the next smallest whole number, that is, toward the left on the number line. With signed numbers, we get the following:


   1 // 2   # result is 0

  -1 // 2   # result is -1

In the last line above, -1 is the whole number to the left of -0.5. Floor division has the advantage of being numeric-type-independent.
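To make the floor-division behavior concrete, here is a small sketch (assuming Python 3) covering both integer and floating-point operands:

```python
# // floors toward minus infinity, for ints and floats alike.
print(1 // 2)       # 0
print(-1 // 2)      # -1 (the whole number to the left of -0.5)
print(1.0 // 2.0)   # 0.0 (same value as 1 // 2, but a float)
print(4.0 // 2.3)   # 1.0 (2.3 divides into 4.0 only once)
```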

Integer Truncation

Integer division is not as simple as it looks. What are the rules for integer truncation? For example, does 43/10 give us 4 with integer division? Put the quotient 4.3 on a ruler as follows:


   ---4 ----------5 ----


So what we did is choose the integer to the left of 4.3. So now we can come up with this rule for integer division: divide and then pick the integer to the left (towards minus infinity), throwing away any fractional part. So we divide 43 by 10 and obtain 4.3. We throw away .3 and end up with 4.


But what happens when one or both of the operands are negative? Will we handle –43/10 the same way? Put this on a ruler too.

   ---(-5) ----------(-4) ----


If we choose the integer to the left, that will be –5, not –4! If we always go towards zero, the result is –4. So we could come up with another rule: Always pick the integer towards zero. With this rule, we obtain –4 instead of –5.


Which rule is correct? How about 43/-10 or –43/-10? The answer is not clear and a good argument can be made for either direction. Even Kernighan and Ritchie do not have a clear answer for the question. Here is what they say in the ANSI C edition of their book: “The direction of truncation for / and ... are machine-dependent for negative operands .....”[1] Thus we have two choices for integer truncation and both choices look reasonable.


 The choices for integer truncation are as follows:

1.         Always pick the integer closest to zero. Thus 4/3 will truncate to 1 and -4/3 will truncate to -1.

2.         Always pick the integer towards minus infinity. With this rule 4/3 will truncate to 1 and -4/3 will truncate to -2.


Early FORTRAN used the first rule and most languages followed along to be compatible. Their choice is not necessarily the correct choice. The C family and BASIC have methods that give us both choices. Both language groups have integer division that truncates towards zero, and thus matches Rule 1. The C floor function and the BASIC INT function return the whole number that is less than or equal to the argument, and thus match Rule 2.
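Python can demonstrate both truncation rules side by side; a quick sketch, assuming Python 3, where int() truncates toward zero and // floors toward minus infinity:

```python
# Rule 1: truncate toward zero.
print(int(-43 / 10))   # -4
# Rule 2: floor toward minus infinity.
print(-43 // 10)       # -5
# For positive operands the two rules agree.
print(int(43 / 10), 43 // 10)   # 4 4
```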


The desired sign for the result with integer division is also a question. For example, what sign do you want in these four situations: 4/3, -4/3, 4/-3, and -4/-3? At least the rules of mathematics provide us with a consistent sign, but I am not sure that is the sign we want in all cases.



Exponentiation

Exponentiation is the next arithmetic operation that often varies. First, some languages, such as C++ and Java, do not have it as an operator, but require a function. Languages that do have exponentiation use either ** (two asterisks) or ^ (caret) for the operator. When it is available, it usually looks like:


   x**y       or      x^y 


FORTRAN and Perl use two asterisks, while the BASIC family uses the caret. Hordes of Pascal, C++, and Java programmers would gladly have accepted either, but these languages use a function: C++ has pow(2, 3) and Java has Math.pow(2, 3). The rumor is that this was done to warn programmers of the expense of doing exponentiation! Two operands are needed whether a function or an operator is used.


Exponentiation is a very complicated arithmetic calculation. Here are some of the possible variations:


   2**3     repeated multiplication 2*2*2

   2**0.5   square root of an integer.

   2**-0.5  uses the inverse, then the square root: 1 / 2**0.5,

            but it cannot be done with an integer result.

   (-2.0)**0.5  cannot be done in real numbers.


Besides the different methods of providing exponentiation (function vs. operator), there are major differences on what values are allowed for the operands. If we use this for our discussion:


  base^exponent  or  base**exponent  or  pow(base,exponent)


then what type of values can be used for the base value and for the exponent value? Some of the choices are integers, negative values, and real values. Early BASIC restricted the base to positive values and the exponent to integers. Some examples of possible combinations are:


   3^2     result 9

  -3^2   either 9 or –9, depending on operator precedence.

   3^-2    result 1/9

   3^0.5   square root of 3.

   3^-0.5   1 over square root of 3.

   2.3^2    float base, integer exponent

   2.3^0.5  float base, float exponent


If I have not made any mistakes, all of these are at least well defined mathematically. The first one squares 3; the second one squares negative 3 (maybe – it depends on operator precedence). Because of the negative exponent, the third one (3^-2) squares the inverse of 3, so we obtain 1/9.
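These combinations can be tried directly in Python, where ** binds tighter than unary minus; a sketch:

```python
print(3 ** 2)      # 9
print(-3 ** 2)     # -9, parsed as -(3 ** 2): ** binds tighter than negation
print((-3) ** 2)   # 9
print(3 ** -2)     # 0.111..., i.e. 1/9 returned as a float
print(3 ** 0.5)    # 1.732..., the square root of 3
```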


The Type of the Result

One question for exponentiation is what type the result should be: integer or floating point? For positive exponents, the operation corresponds to repeated multiplication. For example:


   3**4 = 3 * 3 * 3 * 3 = 81

   3.0**4 = 3.0 * 3.0 * 3.0 * 3.0 = 81.0


Thus for positive integer exponents the result could be the same type as the first operand (in this example 81 or 81.0).


But when the exponent is a negative integer we can have some problems with the above rule. Exactly what answer we get varies by language. For example, if we step through the following:


   2**-3 => 1/8


What answer would you get if the arithmetic is done with floating-point values, and what answer will we get with integer arithmetic? Careful! If we do the arithmetic using floating point we will get the obvious value 0.125, but if we use integers we will obtain zero, because of integer truncation in some languages. That is:


   2**-3 => 1/8 => 0.125  // floating point arithmetic

   2**-3 => 1/8 => 0      // integer truncation


Try this on some compilers in different languages and see what happens. Thus two reasonable solutions are possible: doing the arithmetic in floating point or doing it with integers. Ada forbids a negative exponent for an integer base, since the result will not be a whole number (i.e., 1/8). Thus in Ada 2**(-3) is not allowed, but 2.0**(-3) is allowed.


For the mathematically challenged (I looked it up before writing this) some operations are not possible. We cannot mathematically do operations like these in the real number system:

   (-3)^0.5   // can’t take the square root of a negative value.

   0^-2       // division by zero: 1/(0^2).

and probably other combinations. I will leave it as an exercise for you to find out what is mathematically possible.


A common rule is that a negative quantity cannot be raised to a real power. This brings us to the odd situation that (-4.0)**2.0 (real exponent) is undefined, but (-4.0)**2 (integer exponent) is defined. Exactly what happens varies by language. Some languages produce a rude error and stop executing, while other languages produce NaN and keep going.


Negation, or the Lack of It

The use of the minus symbol is the next operation causing problems. The first problem is for the language to recognize that the unary operation negation, “-d”, is a different operation than the binary operation subtraction, “a - b”. If you browse through older (and present) computer language books, you will often find no mention of the unary negation operation. Early FORTRAN, COBOL, and BASIC did not really recognize this distinction. As proof that we can learn from our mistakes, many (but not all) modern language books discuss the unary negation operation.


Since early programming books and languages did not even discuss or recognize negation, the operation –3^2 was not well defined. See the Precedence of Operators section later in this chapter. The “discovery” of negation as a different operation in contrast to subtraction helped solve this problem. This is similar to the “discovery” of how useful a symbol for zero might be by ancient mathematicians.


The classic example of where this failure to recognize negation causes problems is the following pseudo code (^ is the symbol for exponentiation):

   a = -3^2

The question is: are we squaring minus 3 (the answer is then 9) or are we negating the square of 3 (the answer is then –9)? Once negation is recognized as a separate operation, we can locate it in our precedence table and solve this problem.



Questions

1. Look through some programming textbooks and see if they discuss negation as a separate operation and if the books indicate the precedence of the operation, especially with regard to exponentiation. Look at some new books on versions of BASIC, FORTRAN, and COBOL.

2. One interesting problem is raising an integer to a negative power. For example, 3^-2. Both are integers so this becomes 1/9, which returns an integer result of zero, due to truncation! Try this in a couple of different languages and see what happens. Ada forbids this operation. What do your languages do with it?

3. Look at the exponentiation operation in several different languages. See if you can determine from documentation, exactly what can be used for the base and exponent (integer, float, negative). Then test your observations with some programs.



COBOL Problems with Minus Symbol

COBOL has another problem with the dash or minus symbol. Early versions of COBOL used long sentences to do arithmetic:




   ADD 30 TO PAY.


Besides the wordiness of the above we cannot do two operations, such as multiplication and addition in the same statement. We need one COBOL sentence to multiply and another sentence to add.


This soon became very tiring, so they invented the COMPUTE verb. Now we can code neat things like:


   COMPUTE PAY = PAY + 30.

   COMPUTE TOTAL-COST = PRICE * QUANTITY + TAX.
Notice the last COMPUTE statement does multiplication and addition. But we have been using the dash for separating words in COBOL fields and now use the minus symbol for subtraction. So the field PRICE-DISCOUNT looks a lot like the subtraction PRICE - DISCOUNT. That is, you may have noticed that the dash looks very similar to the minus sign, since they are the same symbol. COBOL got around this problem with the rule that the subtraction symbol, the minus sign, must always have a space before and after it. And field names are not allowed to have spaces, so the dash in field names does not have spaces around it.


While you readers may wonder why the COBOL committee did not just pick the underscore for the field separator, a close examination of a 1965 card keypunch machine will locate no underscore character. Another problem special character in COBOL is the period, which is used to terminate statements and also used for decimal points. This topic is covered in the Statements and Terminator section of an earlier chapter.


One good rule to remember when designing a language is to try to avoid using the same symbol for two purposes. This rule is easier said than followed. For example, look how the exclamation (!) symbol is used in UNIX, where it can mean history, not, or escape to the shell. Also look at the many different uses of $ (end of line, last line, shell variables) and ^ (beginning of line, not in ranges) in UNIX. The = symbol is often used for both assignment and comparison. A second rule is: beware of similar but distinct symbols, for example = (assignment) and == (comparison). Many a C++ programmer has meant to type if (a == 0), typed if (a = 0), and spent minutes or hours looking for that error!


Modulus or Remainder Operation

The sixth most common arithmetic operator in programming languages is the modulus operator that was popularized by C. How the modulus operation is provided varies by language. The C family has the % operator available, while FORTRAN uses a function. In contrast FORTRAN (and other languages) has the exponentiation operator (either ** or ^), while the C family uses a function. The method used for providing (operator vs. a function) changes the precedence of the operation, since functions are done before other operations. Then there are two other areas where there are differences with the remainder operator: signed values and non-integer values.


Next, there is a lot of difference and even confusion over this operation. Part of the confusion is over exactly what operation is being provided. We can remove this confusion by deciding that the modulus operation is only defined for positive integers. If negative operands or non-integer numbers are used, then we have the remainder operation.


Even after accepting this definition of the two operations, languages vary drastically in what they do and what answer they provide. For example, some languages (Pascal, BASIC) will take 3.8%2.3, round both operands, and then provide an integer result. Thus we would have something similar to this:


   3.8%2.3 → 4%2 → 0


Another language (Perl) will truncate the real values and obtain:


   3.8%2.3 → 3%2 → 1


Finally, Visual Basic, JavaScript, and Python will do the floating-point arithmetic and do it as follows:


   3.8%2.3 → 1.5


Still other languages will just reject the whole mess, only willing to deal with integers. Nowadays, most compilers for the same language will do this operation in a consistent way, but previously different compilers for the same language might do the operation differently.
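The three behaviors described above (round then mod, truncate then mod, true floating-point remainder) can be mimicked in one language. A sketch in Python, where round() and int() stand in for what Pascal/BASIC and Perl do internally:

```python
a, b = 3.8, 2.3

# Pascal/BASIC style: round both operands, then integer modulus.
print(round(a) % round(b))   # 4 % 2 -> 0
# Perl style: truncate both operands, then integer modulus.
print(int(a) % int(b))       # 3 % 2 -> 1
# Python/JavaScript style: genuine floating-point remainder.
print(a % b)                 # 1.5
```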


Reading through popular textbooks will often give you no hint on what actually happens with signed values. So often experimentation is the best way to find out what happens. But before we blame the textbook authors, we need to observe the language definition may be unclear on what to do, leaving it up to the compiler writer. This is a very dangerous situation since then different compilers have the freedom to interpret it in different ways.


The rules for signed numbers are ambiguous for the modulus operation, like they are for integer division. For example, if we use 43%10 then we have (4 x 10) + 3 and the remainder is 3. But if we follow the same logic, then –43%10 is (-5 x 10) + 7, so some could argue the remainder is 7. Others might argue that the remainder is –3 or 3. Similar problems arise with –43%-10 and 43%-10. The answer is not clear and a good argument can be made for either direction. Kernighan and Ritchie do not have a clear answer for this question either. Here is what they say in the ANSI C edition of their book: “...and the sign of the result for % are machine-dependent for negative operands .....”[2] If I remember right, in my youth I tried to solve this problem, and I found out that the problem is mathematicians do not agree on the correct answer, so how can mere programmers?


Since the sign of the answer is not clearly defined in documentation, experimentation in a language is often more revealing than reading language documentation. Ada has solved this important problem by having two operators. The rem operator takes the sign of the first operand and mod takes the sign of the second operand. In addition, rem uses truncation towards zero, but mod uses truncation towards minus infinity. Table x.x illustrates the differences between x rem z and x mod z.


     x     z    x rem z   x mod z

    11     5       1         1

   -11     5      -1         4

    11    -5       1        -4

   -11    -5      -1        -1


Ada rem and mod Operations

Table x.x


The mod operator is similar to integer division with truncation towards minus infinity instead of towards zero. While it is nice that Ada has made this clear and given us a choice, it still leaves us with the problem of deciding which operator we want to use!


If we use positive integers, almost all languages agree on the result. But if we try to use negative arguments or floating-point values, the best way to find out what happens is to write a short program and check the results. Textbooks are often not clear and often wrong. This is one operation where Java and C++ differ.
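Python happens to offer both flavors, so the sign differences are easy to probe: the % operator floors (the result takes the sign of the second operand, like Ada's mod), while math.fmod truncates (sign of the first operand, like Ada's rem). A sketch:

```python
import math

for x, z in [(10, 8), (10, -8), (-10, 8), (-10, -8)]:
    # fmod: sign follows x (truncation); %: sign follows z (flooring).
    print(x, z, math.fmod(x, z), x % z)
# 10 8 2.0 2
# 10 -8 2.0 -6
# -10 8 -2.0 6
# -10 -8 -2.0 -2
```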


PL/I mod and rem Functions

The following example contrasts the MOD and REM built-in functions.


       rem( +10, +8 ) = 2

       mod( +10, +8 ) = 2


       rem( +10, -8 ) = 2

       mod( +10, -8 ) = 2


       rem( -10, +8 ) = -2

       mod( -10, +8 ) = 6


       rem( -10, -8 ) = -2

       mod( -10, -8 ) = 6


To make things even more complicated, PL/I will accept floating point values for these operations.



Visual Basic .NET

VB .NET uses the mod operator, but how it works differs from the above ways. It returns the remainder after the divisor is divided into the dividend an integral number of times. This is done for both integers and floating-point values. Here are a couple of examples:


   19 mod 5   remainder is 4

   10.5 mod 4.1   remainder is 2.3


For those of you that have not had your second cup of coffee today, I will explain the last one. The divisor (4.1) is divided into the dividend (10.5) an integral number of times (i.e., 2). So 10.5 - (2 x 4.1) = 10.5 – 8.2 = 2.3. Thus the remainder is 2.3. VB .NET clearly indicates the sign of the result. The sign of the xxxx?? LOOK up.


In contrast regular Visual Basic 6.0 rounds floating-point values to integers and then does the division. So now we obtain:


   10.5 mod 4.1 becomes 11 mod 4


and then the remainder is 3. So here we have two dialects of Basic doing mod calculations differently. The exercises ask you to try some signed values to see what happens in various languages. I think VB 1.0 to VB 6.0 stayed with its BASIC heritage, but VB .NET broke off that relationship.



Questions

1. Review the discussion above about division. Should OPL have separate integer and real division? Do we want to use the same operator for both integer and real division like C++ does, or use separate operators like VBScript? Shall we use an operator or a function for integer division?


2. Review the discussion above about modulus or remainder operation. Should OPL use a function or operator? How shall we handle signed operands? How shall we handle floating point values?


3. Review the discussion about integer division with signed numbers. Then what should the result be with the following: 43/10, -43/10, -43/-10, 43/-10? Justify your answers from a mathematical point of view. Then write programs in a couple different languages (Java and C++ are not different enough, but C++ and BASIC are) and see what happens.


4. Review the discussion about modulus arithmetic with signed numbers. Then what should the result be with the following: 43%10, -43%10, -43%-10, 43%-10? Justify your answers from a mathematical point of view. Compare your answers to what a couple different languages do. Finally, check out a couple degenerate examples, such as 5%0 or –5%0. What should happen here?


5. Computer applications have similar problems and differences with modulus operations. For example, try using MS Excel with signed values and floating point values. Try something similar to 17 mod –3.


6. Review the discussion about modulus arithmetic with floating point numbers. Find out which languages allow floating point values for the modulus operation. Then write some programs to see how the results in different languages vary. For example, what result do you get from 3.11%0.7? You might want to look at how Python does this operation.


7. Review the discussion about exponentiation. Exactly what do we want to allow in OPL for the base and exponent? We are concerned about negative and real values. You may want to find a math major and butter them up a little to find out what is mathematically possible.


8. Look in some programming language textbooks and see if you can determine from the books exactly how the modulus or remainder operation works. Do books explain exactly what happens with negative operands or floating point operands? Then write some programs to see if you can figure out what happens. Were the books correct or clear? Set up a chart by language showing different results.


9. Do the previous problem with Java and C or C++. In many areas Java is similar to C++. Are they the same in how they do the % operation? You will want to test signed operands and floating point values. There are two questions about the result: 1) the value of the result, and 2) the sign of the result.


10. What exponential operations are allowed mathematically? Some examples to test are 0^2, 0^0.5, 4.0^0.5, -4.0^0.5, and 4^-0.5. You need to look at combinations of negative and floating-point numbers. Then check to see what the programming languages can do.


Increment and Decrement

C introduced the unary increment and decrement operators. There are pre-increment and pre-decrement operators (++x or --x) and post-increment and post-decrement operators (x++ or x--). The pre- versions do the operation before the use of the variable; these are called prefix operators. The post- versions do the operation after the use of the variable; these are called postfix operators. Thus


   x = 1;  y = 3;

   cout << ++x << "  " << --y << endl;  // prints 2  2


prints the numbers 2 for both x and y, since the operators are done before printing. While


   x = 1;  y = 3;

   cout << x++ << "  " << y-- << endl;  // prints 1  3


prints the value 1 for x and 3 for y, since the variables are changed after the print.


These decrement and increment operations are available in most languages related to C such as C++, Perl, JavaScript, and Java, but not in older languages such as Ada, BASIC, or FORTRAN.


In PHP, increment works on character strings. Here is an example from Jeremy Allen’s book on PHP[3]:



       $a = "G89";

       print("\$a \"" . ++$a . "\"<br />");


In the above code, $a is incremented to G90. Not many languages will do this.


Expression Evaluation Order

There are three factors controlling the order of evaluation in expressions. They are:


1.         Operator precedence: the order in which operators of different precedence levels are evaluated. For example, multiplication is of a higher level than addition, so multiplication is done before addition. Thus if we have the following:

   a + b * c

this will be evaluated as:

   a + (b * c)

because multiplication is a higher level than addition.


2.         Associativity: the order in which operators with the same precedence are evaluated. For example, division and multiplication are of the same level and evaluated left to right. Thus if we have the following:

   100/5 * 10

then these will be evaluated as:

   (100/5) * 10 --> 200

because these operators are evaluated left to right. We could change the order by using parentheses 100/(5 * 10) and then get 2 as the result. By contrast exponentiation is often evaluated right to left so a**b**c gets evaluated as a**(b**c).


3.         Operand evaluation: the order that the operands are processed. The process of operand evaluation can be one of these:


·        Variables: just fetch the value.

·        Constants: sometimes fetch from memory or it may be machine language instruction.

·        Parenthesized expressions: evaluate the expression.

·        Functions: may cause side effects and order of evaluation may be crucial.


In some languages (C, Perl, Pascal, Ada) the operands can be evaluated in any order. This lets the compiler do it in the most efficient manner, but you do not know what order will be used. Java avoids this situation by guaranteeing that the operands of a binary operator are evaluated left to right.


All this is a bit subtle, so I will go over some examples. Early C manuals warned programmers that for binary operators of the same precedence either order could be used for evaluating operands. For example:


   x = maxx + minx;


In this code the compiler must fetch the value for the variable maxx and also fetch the value for the variable minx. This could be done in either order. In this example, the order of operand evaluation does not matter. The problem appears when the operands have side effects. Here are some examples:


   z = x + fun(&x);   // function fun modifies variable x.

   a = b + b++;     // the value of the variable b changes.

   a = prt(1) + prt(2) + prt(3);  // prints the constants.


In all three examples, an optimizing compiler could obtain different results depending on the operand evaluation order (left-to-right or right-to-left). You might try these examples on a few compilers and see what happens.
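One way to observe operand evaluation order is to record it with a function that has a visible side effect. This sketch uses Python, which (like Java) defines operand evaluation to be left to right; in C the order would be unspecified:

```python
order = []

def f(n):
    order.append(n)   # side effect: record when this operand is evaluated
    return n

result = f(1) * f(2) + f(3)
print(order, result)   # -> [1, 2, 3] 5
```

A language without this guarantee could legally print the values in some other order while still computing 5.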


Now I will attempt to show you how operator precedence, associativity, and operand evaluation differ by language. Even I get confused here since the languages differ so much.


Associativity of Operations

The direction or order of operations on the same level is their associativity. Left-associative operators are processed from left to right. Left-associative operations are the most common. Addition and subtraction operations are at the same precedence level and processed left to right. Thus (a + b + c +d) is evaluated as ((a + b) + c) + d. Likewise, multiplication and division are at the same precedence level and processed left to right. Thus (x*y/8*z) is evaluated as (((x*y)/8)*z).
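These groupings can be checked directly in a language such as Python, which follows the same left-to-right rule for * and / (a small sketch, not tied to any one compiler):

```python
# Left-associative grouping of * and / in Python.
assert 100 / 5 * 10 == 200.0      # evaluated as (100 / 5) * 10
assert 100 / (5 * 10) == 2.0      # parentheses change the grouping

x, y, z = 3.0, 4.0, 2.0
assert x * y / 8 * z == ((x * y) / 8) * z   # same grouping as described above
```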


While left-to-right evaluation may seem the normal direction, assignment goes from right to left. Here is an example:


   x = a + 3

which is equivalent to:


   x = (a + 3)


Another example that is clearer is multiple assignment. Thus


   x = y = z = 0


is equivalent to


   x = (y = (z = 0))


In this example the expression on the right is evaluated first, and then assigned to the variable on the left.


To learn more about multiple assignment, read that section in the Assignment chapter. Multiple assignment is not as simple as it looks, and it has several variations and related problems.


When two unary operators are used on the same operand, the association tends to be right to left. Here is an example in C++:

   -x++   which is evaluated as   -(x++)


But if negation had higher precedence than the increment operator, the above would be evaluated as (-x)++, with a different result.


On a computer the associativity is important for almost all operations because computers have only finite precision. While mathematically

   a + b + c   is the same as   a + (b + c)

it is not the same on a computer with a fixed number of decimal positions. To illustrate why the order of evaluation matters on computers, let’s assume we have a very small computer that can only store numbers with a maximum of 4 digits. Then we can have:


   a + b + c


where a = 1234, b= -1233, and c = .003. If we use left association:


    1234 + (-1233) + .003

  = 1234 - 1233 + .003

  = 1 + .003 = 1.003


And we have never gone over 4 digits. Now if we go from right to left, it works as follows:


    1234 + ((-1233) + .003)

  = 1234 + (-1232.997)     ! too many places, over 4 digits.

  = 1234 + (-1233)         ! precision lost, .003 dropped.

  = 1


So this example illustrates that the order matters even for addition and subtraction (the operands can be positive or negative). Java guarantees that the operands will be evaluated in a specific order; in this example, left to right. Earlier languages were often free to go in either order, since it seemed not to matter.
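The same loss happens with real floating-point numbers, not just the imaginary 4-digit machine. In this Python sketch, doubles carry about 16 significant digits, so 1.0 plays the role of .003:

```python
a, b, c = 1e16, -1e16, 1.0

left = (a + b) + c    # left to right: 0.0 + 1.0
right = a + (b + c)   # right to left: 1.0 is absorbed into -1e16 and lost

assert left == 1.0
assert right == 0.0   # the two groupings give different answers
```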


Operand Evaluation vs. Precedence

While operand evaluation is often confused with precedence, they are different when side effects are considered. Precedence [find definition] is the order in which operators are applied. Optimizing compilers may alter the order of expression evaluation for operators with the same precedence. For example, suppose we have a function similar to this in the language of your choice:


   int f(int n)

   {

      cout << n << endl;

      return n;

   }



Then we have an expression similar to the following:


   f(1) * f(2) + f(3)


The result from the expression will be 1 * 2 + 3, or 5, but it is unclear in what order the values will print. The + operator can evaluate its operands in either order, and so can *. For example, when we have:


   f(4) + f(5)


there is only one operator, so precedence settles nothing. The compiler is free to evaluate f(4) and f(5) in either order, and that choice is operand evaluation. The only restriction is that the addition itself is done after both operands are available. Many books confuse the two concepts.


Optimizing compilers reserve the right to rearrange operations that "should" not change the result. For example, look at the following code:


   a = a + b + c + d  + e;

   d  = 3.5 * (b + c + d) / e;

An optimizing compiler will notice that “b + c + d” is in both expressions and change the code to:


   z = b + c + d;

   a = a + z  + e;

   d  = 3.5 * (z) / e;


In this situation we no longer know how the expression will be evaluated, but it will run faster, even if incorrect!


Try these examples (and some of your own) in a couple different languages, or different compilers of the same language and see what happens.


The Java language developers have recognized these problems, and Java guarantees to fully evaluate the left operand of a binary operator before any part of the right-hand side. For example, in "zap + kat", the variable zap is evaluated first, then kat. Here is a more complicated example:


   int x = 5;

   int z = (x = 6) * x;


What do you think the correct answer should be? Java produces the correct result 36, while some languages will produce 30.
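Python makes the same guarantee as Java, which can be checked with its assignment expression operator (a sketch of the same trick using :=):

```python
x = 5
z = (x := 6) * x   # the left operand runs first, so x is already 6
                   # when the right operand is fetched

assert z == 36     # a right-to-left evaluator would give 30
```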



Questions


1. On the following examples, figure out what the answer might be, then write a program in a couple different languages to test your guess.

a) k = x[k++]
b) x[k] = k++

c) z = 10, z++, z++  // uses comma operator.

d) x = f(1) * f(2) + f(3)   // where the function prints the argument.


All the above variables need a value before evaluating the expressions.


2. Not all languages have increment and decrement operations. Languages (like BASIC) that do not have these operations may still process the statement, but not as expected. Try out the following or other variations of the following:

   x = 5

   x = --x

Will these other languages process the above or flag it as an error? If the language accepts the above, what is the result? In these other languages, the two minus symbols may be treated as two negations.


Perl (and many older languages) does not specify the order of evaluation of the operands of a binary operation. Here is an obfuscated example from the nice Perl book[4] by Randal Schwartz:


   $a = 3;

   $b = ($a += 2) * ($a -= 2);


What answer do you get by evaluating the last line left to right vs. right to left? Try this code in some other languages.



Answers


1a The value for k is ambiguous in many languages.

1b The value for k is ambiguous in many languages.

1c Since the comma operator proceeds left to right, z is probably 12.

1d The order of the printed results is not clearly defined in most languages. Java processes the operands left to right.


2. Test this in C# or Visual Basic. Ada forbids adjacent operators.



Exponentiation

The next operation that causes trouble in regard to associativity is exponentiation. FORTRAN, Perl, and some other languages associate right to left. So


   a ** b ** c   evaluates to   a ** (b ** c)


But VBScript and other BASIC type languages evaluate left to right. So


   a ^ b ^ c   evaluates to   (a ^ b) ^ c


With the expression 2^3^2, evaluate left to right, and then right to left, and see what answers you get! Ada solved this problem by making exponentiation non-associative. Thus in Ada a**b**c is not allowed. Parentheses must be used to indicate which order, (a**b)**c or a**(b**c), is desired.
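Python's ** operator follows the FORTRAN/Perl convention, which makes the difference easy to check (a one-language sketch):

```python
assert 2 ** 3 ** 2 == 512     # right to left: 2 ** (3 ** 2) = 2 ** 9
assert (2 ** 3) ** 2 == 64    # left-to-right grouping would give 8 ** 2
```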


Operator Precedence

The next area of difference among languages is the order of evaluation, or precedence, of arithmetic operators. Operator precedence means some operations are evaluated before others. The main differences are negation and exponentiation, and where the modulus operator is placed. Since these three operators have had problems in all the other areas, it is not surprising that the problems continue here.


If we do interviews on the street and ask people what the answer to –3^2 is, we will get –9 or 9, depending on their programming background or lack of it. VBScript programmers will tell you the answer is +9, Perl and Visual Basic programmers will tell you the answer is –9, and FORTRAN programmers will tell you they would rather write a short program before they answer.
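Python sides with Perl here, so the street interview can be settled in one line (a sketch):

```python
assert -3 ** 2 == -9     # ** binds tighter than unary minus: -(3 ** 2)
assert (-3) ** 2 == 9    # the VBScript-style reading needs parentheses
```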


Modulus Operation Precedence

Some languages provide a modulus operator, while other languages use a function. The percent symbol (%) is often used to indicate a modulus or remainder operation. The question is where the modulus operator fits into the arithmetic precedence chart. The C family provides a modulus operator and lumps it with the multiplication and division operations, and this group of operators is evaluated left to right. So


   a % b * c    evaluates to   (a % b) * c


But the BASIC family, such as VBScript, places the modulus operation in a separate category below multiplication and division in precedence. So in these languages


   a mod b * c    evaluates to   a mod (b * c)


Other languages such as FORTRAN use a function for the modulus operation, so we must indicate exactly what we want:


   mod(a, b) * c    or   mod(a, (b * c))


Then the Fortran programmers will again ask what all the fuss is about; just use a few parentheses.
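Python places % at the same level as * and /, like the C family, so the grouping can be checked directly. Languages also disagree about signed operands, which this sketch shows with the standard library's C-style remainder:

```python
import math

assert 34 % 5 * 2 == 8      # C-style grouping: (34 % 5) * 2
assert 34 % (5 * 2) == 4    # the BASIC-style grouping gives a different answer

# Signed operands differ by language too: Python's % follows the
# sign of the divisor, while C's remainder follows the dividend.
assert -13 % 4 == 3                # Python: true modulus
assert math.fmod(-13, 4) == -1.0   # C-style remainder
```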


Operator Precedence for Several Languages

Here are operator precedence tables for several languages: VBScript, VB 6.0, JavaScript, Perl, Ada, and FORTRAN. These languages were picked to demonstrate how much operator precedence can differ by language. In each table the precedence is ordered from highest at the top to lowest at the bottom. Operations in the same row have the same precedence, while operations in different rows have different precedence. The tables also list the associativity of operations, left-to-right or right-to-left.


Table x.2 lists the precedence for VBScript arithmetic operations. There are several interesting things to notice about VBScript operator precedence. First, unary negation is done before exponentiation, so –3^2 evaluates to (-3)^2. Next, modulus arithmetic is done after multiplication and division. Finally, all operations are done left to right, so A^B^C evaluates to ((A^B)^C). In VBScript –4.0 * A^2 evaluates to (-4.0) * (A^2). Look at Ada to see it done differently.





Unary negation                              -

Exponentiation                              ^

Multiplication/Division/Integer division    *  /  \

Modulus                                     Mod

Addition/Subtraction                        +  -


VBScript Operator Precedence

Table x.2


A similar language is Visual Basic 6.0, but operations are done in a slightly different order. Table x.3 lists the Visual Basic operators. VB 6.0 does exponentiation before negation. So VBScript would evaluate –3^2 to (-3)^2, while VB 6.0 would evaluate –3^2 to –(3^2). This keeps Visual Basic compatible with earlier versions of Basic, while VBScript does it the way most other modern languages do. Also, notice that in VB integer division and modulus have separate levels; they are not included with multiplication and division as in many other languages.








Exponentiation                ^

Unary negation                -

Multiplication/Division       *  /

Integer division              \

Modulus                       Mod

Addition/Subtraction          +  -


Visual Basic Operator Precedence

Table x.3



Table x.4 lists the same operations for JavaScript. Again there are several things to notice. First, exponentiation is done by a function, as in C and Java. Next, the remainder (not modulus) operation is at the same level as multiplication and division. And negation is done before multiplication. Beyond the major differences between these two scripting languages, it is interesting that both are used for web page scripting.





Unary negation                      -

Increment/Decrement                 ++  --

Multiplication/Division/Remainder   *  /  %

Addition/Subtraction                +  -


JavaScript Operator Precedence

Table x.4


Next, Table x.5 lists the precedence for arithmetic in Perl. Notice there is an exponentiation operator and it is evaluated right to left, so A**B**C evaluates to (A**(B**C)). Negation is below exponentiation, so –3**2 is the same as –(3**2). And the modulus operation is at the same level as multiplication and division.






Increment/Decrement                 ++  --

Exponentiation                      **

Unary negation                      -

Multiplication/Division/Modulus     *  /  %

Addition/Subtraction                +  -


Perl Operator Precedence

Table x.5


Table x.6 lists arithmetic precedence for Ada. The Ada precedence for arithmetic operators is a bit different from other languages. Exponentiation is highest, with unary negation below the multiplication operators. The modulus operator is at the same level as the multiplication and division operators. The mod and rem operators work differently with signed numbers. In Ada –4.0 * A**2 evaluates to –(4.0 * (A**2)).






Exponentiation, absolute value              **  abs

Multiplication/Division/Modulus/Remainder   *  /  mod  rem

Unary negation, unary plus                  -  +

Binary addition/subtraction                 +  -


Ada Operator Precedence

Table x.6


Table x.7 lists a short table for FORTRAN. Exponentiation associates right to left, so 2**3**2 evaluates to 2**(3**2). And exponentiation is of a higher level than negation, so –3**2 evaluates to –(3**2). The modulus operation is provided by a function.








Exponentiation                **

Unary negation                -

Multiplication/Division       *  /

Addition/Subtraction          +  -


FORTRAN Operator Precedence

Table x.7


Finally, PL/I is similar to FORTRAN except that negation is at the same level as exponentiation, and both are evaluated right to left. So in PL/I, X**-Z gets evaluated as X**(-Z).



Java has had a lot of time to learn from all of the above mess and has very carefully defined everything. Look up the precedence for Java operators. Likewise, C# and Visual Basic .NET have done a good job of carefully defining how operations work.


By now you are probably totally confused about arithmetic precedence in your favorite languages, so you might get a textbook and see how they compare to the above situations.


Coarseness of Operator Precedence

The coarseness or granularity of operator precedence in different languages is interesting to look at. For example, if we look just at the arithmetic operations in Java/C++, we have the following:






Unary negation                      -

Increment/Decrement                 ++  --

Multiplication/Division/Remainder   *  /  %

Addition/Subtraction                +  -


Java Operator Precedence

Table x.8


There are only a few levels if we exclude functions and assignment. APL had one level for all its operations, so you needed to use parentheses to force any precedence. Here is a short chart showing the number of arithmetic precedence levels for several languages, counting the levels in the tables above:








1 level       APL

4 levels      Ada, Fortran

5 levels      Perl, VBScript

6 levels      Visual Basic


Coarseness of Operator Precedence

Table x.x


Even though several languages have the same number of precedence levels, that does not mean they have the same operator precedence. They are often quite different. I wonder what the effect of less or more coarseness is.


Questions


1. For the languages that have an exponentiation operator, evaluate the following:

a) 2**3**4

b) -5**2

c) -5**-2

Do the same operation for languages that use a function for exponentiation.

 [find more examples ====Find textbooks]


2. For languages that have a modulus operator, evaluate the following expressions:

a) -13 % 4

b) 13 % -4

c) 34 % 5 * 2

d) –13 % 4 * 2

e) 2.7 % 1.3


3. What is the result in the variable x after evaluating the following statements:

a)  x = 5;  -x++;

b)  x = 0; x *= x++;


4. What is the result for the following:

a)      x = 2 **-3

b)      x = 2.0 ** -3


5. Look up operator precedence for two languages not covered in the above tables. For example, you could look up Pascal and Java.


6. Look at my chart of precedence coarseness. Check my chart to see if it is correct, and then add some more languages.


Mixed-Mode Arithmetic

Mixed-mode arithmetic is when the program does arithmetic on two operands of different types. For example, 4 + 6.7 has an integer and a double value in most languages. The same situation occurs with variables:


   someInt + someFloat


The computer cannot do arithmetic operations on two values of different types in strongly typed languages. The compiler needs to do an implicit conversion on one of the operands so the arithmetic can be done. In the above example, the integer value is promoted to a float before the addition is done. Implicit type conversions are also called coercions. Since the conversion is to the larger type, it is a type promotion.
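Python performs the same promotion, and the result type shows which operand was converted (a sketch; the tolerance comparison avoids relying on exact float representation):

```python
result = 4 + 6.7                 # the int operand is promoted to float

assert isinstance(result, float) # the promotion widened the int to a float
assert abs(result - 10.7) < 1e-9
```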


The number of different numeric types varies drastically by language. The older languages PL/I and COBOL have many different types. Newer languages such as C#, Visual Basic .NET, and Java also have many different numeric types, so much type conversion is necessary for numeric calculations. By contrast, the newer scripting languages Perl, PHP, and JavaScript, and the early form of BASIC, had only numbers, that is, no separation of integers and floats. Both camps seem to do fine, and both camps feel their way is the correct way.


Not all values can handle implicit type conversions. PL/I has as many types as any modern language, and it attempted to convert almost any type to another reasonable type, sometimes unsuccessfully. For example, PL/I will attempt to convert a character string to a number if the character string is used in a calculation. PL/I has Fixed Decimal types that are useful for currency, and PL/I will convert a Fixed Decimal value to a number that can be used with non Fixed Decimal values.


C# has a decimal type, which is intended for monetary calculations and can have up to 28 decimal places. But in contrast to PL/I, C# decimal values cannot automatically be converted to double values. If you want to look at what types of conversions can be done (and many would say should not be done), you might look at PL/I with its many different data types and allowed conversions.



Some languages have another way to indicate what type of conversion is to be done. For example, Java and C++ use casts. A type cast is an explicit conversion from one type to another. So we could write:


             (float) k/n;  // forces float division


      (int) (count/n);   // fractional part lost!


In C++ we are not required to use casts. But C++ may do a conversion that is quite inappropriate or error-causing. In this way it is like FORTRAN, but C++ has more types to choose from for its errors. The parentheses are needed because some type names are two words in C (short int).


In Java we can do implicit conversions when the conversion is widening, such as converting from integer to float. But narrowing is restricted. An example of narrowing would be to convert a double to an integer. When the language allows narrowing, then there is a strong potential for losing information. The Assignment chapter covers type conversions in more detail.
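Python takes the strictest position of all: it has no implicit narrowing, so any narrowing must be spelled out, much like a cast (a sketch mirroring the C++ examples above):

```python
count, n = 7, 2

assert count / n == 3.5          # "/" always produces a float; no cast needed
assert int(count / n) == 3       # explicit narrowing: the fraction is lost
assert count // n == 3           # floor division stays in integers
assert float(count) / n == 3.5   # the Python spelling of (float) count / n
```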



Questions


1.      In OPL what sort of implicit type conversions should we allow? Will we allow widening? How about narrowing? And exactly what implicit conversions will be allowed between what types?

2.      How do modern languages handle type conversions in assignment statements? Languages you might look at are Java, C#, and Visual Basic .NET.

3.      Scripting languages handle assignment conversions totally differently. Look at some of the scripting languages such as Perl, PHP, Python, or others and see what you can conclude.




Fixed Decimal Arithmetic  (not near done)

Languages that are developed to process reports for accounting or to produce checks need fixed decimal arithmetic and have types for that purpose. Examples of such languages are COBOL, RPG, Ada, PL/I, spreadsheets, and database languages. Fixed decimal arithmetic is useful for printing amounts on checks, such as 45.33 instead of 45.3333, which would confuse most ATM machines and bank tellers. This need is indicated when the fields are declared. For example, in COBOL we could have something similar to this:


05  PAY-HOURS     PICTURE 999.9.

05  PAY-RATE      PICTURE 99.99.

05  TOTAL-PAY     PICTURE 9999.99.


While the above lines take a few liberties with COBOL input and output, the general idea is that PAY-HOURS has one decimal place and PAY-RATE and TOTAL-PAY have two decimal places. All arithmetic is done correctly for these decimal places, and necessary rounding or truncation can easily be indicated.


Adjacent Operators

Most languages have some restrictions on arithmetic operators being next to or adjacent to each other. At one level are operations that make sense, like these:


   x * -6      a + -b


By contrast there are sequences that do not make mathematical sense. Examples are:


   x * / 6     a +* b

Here are a few others that may make sense depending on how you feel today:


   a - -b      d- +5


Some languages (Ada) forbid all adjacent operators and require parentheses to separate them. Thus we need to change “x*-6” to “x*(-6)” in Ada. You might try a few of these in some language you work in and see if you can determine the rules for adjacent operators. This topic is often not clearly defined in programming textbooks.


Spaces in Operators

A similar question is the effect of spaces between adjacent operators. If the compiler had trouble with any of the operations discussed above, would a space between two adjacent operators help? For example, would (d-  +5) be OK?  A more interesting question is the effect of spaces within increment or decrement operators. For example, what happens with:


   +  +x


Do they increment the variable x or not? Try this in a couple different languages that have increment or decrement. Most C/C++ compilers will ignore the space between the two plus signs and increment x. But C# processes the two symbols as two unary plus signs and does not increment x!


The next interesting test is to experiment with spaces in some commands similar to these:


   w = x+++z;

   w = x+ ++z;

   w = x++ +z;

Do these work? What about four plus signs, with or without spaces, between variables? Any operator that requires two symbols could be used to check this out. Other examples are the two asterisks in exponentiation (a**b) and compound assignment operators (z += p).



Questions


1. Go back and read the section "Adjacent Operators." Determine the rules for adjacent operators in one or two languages. You can try finding the answer in language documentation, but will probably need to write programs and experiment. After you figure out the rules for at least one language, decide what you think the rules should be.


2. Go back and read the section "Spaces in Operators." Determine the rules for spaces in operators in one or two languages. You can try finding the answer in language documentation, but will probably need to write programs and experiment. After you figure out the rules for at least one language, decide what you think the rules should be.


NULL and Arithmetic  (not near done)

Some database languages allow use of nulls in variables. So an interesting question is what happens when one of the variables is set to NULL. SQL handles this problem by stating that if one of the column specifications in a numeric expression has the value NULL, the value of the whole numeric expression is by definition NULL. For example:


   Numeric Expression        Value

   6 + 5                     11

   8 + NULL                  NULL


At least the result of arithmetic with NULL is very well defined. And it is an actual value, though that may be stretching the definition of a value. This is not necessarily an error, but using a NULL in numeric expressions is at least unusual.
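The SQL rule is easy to model. This sketch uses Python's None to stand in for NULL (the function name is invented for illustration):

```python
def sql_add(a, b):
    # SQL rule: if either operand is NULL, the whole expression is NULL
    if a is None or b is None:
        return None
    return a + b

assert sql_add(6, 5) == 11
assert sql_add(8, None) is None
```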


Perl and some other scripting languages do things a little differently. Perl has the value undef for variables which have not been initialized. Perl converts undef to zero in arithmetic expressions. So


   $x = 10 + $undefinedvariable;   # result is 10


So this is the opposite of what most database languages will do.


Scripting languages like Perl will convert character strings to numbers when possible for arithmetic operations. For example:


   $z = 10 + 5;         # result is 15

   $z = 10 + "5";       # result is 15

   $z = 10 + "5times";  # result is 15

   $z = 10 + "times5";  # result is 10



On the second line above, since we are doing arithmetic, the character string "5" is converted to the number 5. On the third line, again since we are doing arithmetic, the character string "5times" is converted to 5 and the alphabetic "times" is tossed away. The conversion stops at the first non-numeric character.
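Python, another scripting language, sits at the opposite extreme from Perl: it never converts a string operand implicitly, and stops the program instead (a sketch):

```python
assert 10 + int("5") == 15   # explicit conversion is required

try:
    10 + "5"                 # Perl would quietly produce 15 here
except TypeError:
    pass                     # Python raises an error instead
else:
    raise AssertionError("expected a TypeError")
```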



Conclusion

In this chapter I have shown how arithmetic operations vary across language families. While operations were not carefully defined in early languages, modern languages are much more careful in defining exactly how arithmetic is done. But there are still quite a few differences among modern languages.



Questions

  1. Does the associativity matter for addition or subtraction? Can you come up with an example to show that it does matter?
  2. Does the associativity matter for multiplication or division? Can you come up with an example to show that it does matter?
  3. What associativity should we use for exponentiation in OPL? Look at what some other languages do, and see if you can get a hint about what mathematicians do.
  4. Where will your modulus operator fit into operator precedence? The first question is whether to include it on the same level as multiplication and division like the C family, or on a level of its own like the BASIC family.
  5. Read the section "Adjacent Operators" in this chapter. Ada forbids most adjacent operators, while other languages allow most. See if you can determine the rules for your favorite language, then try some of the examples from the book. Then try some examples of your own.
  6. PL/I groups negation and exponentiation at the same level and then processes these operations right to left. So –A**-D gets interpreted as –(A**(-D)). Since you probably do not have PL/I available, find another compiler and see how it does this expression.
  7. Now that you are close to finishing this chapter, develop a precedence table for all the arithmetic operators. Assume a fairly wide selection of operators, including negation, exponentiation, and modulus, plus all the rest.



Answers

1. First, since the values can be positive or negative, the answer is the same for subtraction and addition. If you assume you are using a computer with 4-digit accuracy, then two large similar numbers with different signs and one small value demonstrate the problem. For example, 2000 - 1999 + .003 will result in different answers depending on the order of evaluation. Remember that only 4 places are allowed. If we start on the left we obtain 1.003. But if we start on the right, the value .003 is lost since we need 7 places to keep it, and only have 4 places.

2. Associativity matters for division. If we take 2.0/3.0/4.0, we get different answers depending on which side we start. I do not have a simple example for multiplication.
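The division case is easy to confirm, and floating-point overflow supplies one possible multiplication example the answer above is missing (a sketch; the overflow trick is my addition, not from the text):

```python
import math

# Division: the two groupings of 2.0/3.0/4.0 disagree.
assert (2.0 / 3.0) / 4.0 != 2.0 / (3.0 / 4.0)   # about 0.167 vs about 2.667

# Multiplication: overflow makes the grouping matter too.
left = (1e308 * 1e308) * 1e-308    # overflows to infinity first
right = 1e308 * (1e308 * 1e-308)   # stays finite: about 1e308

assert math.isinf(left)
assert math.isfinite(right)
```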


This file is from:

Date last revised March 21, 2008.

Copyright Dennie Van Tassel, 2008.

Send comments or suggestions to

I am especially interested in errors or omissions and I have other chapters on History of Programming Languages.


Creative Commons License
This work is licensed under a Creative Commons Attribution-No Derivative Works 3.0 United States License.












Use in an exercise:


Do the following two statements obtain the same result:


x = x + zap(z)

x = zap(x) + x


where zap is a function that changes the value of its argument?




[1] Brian W. Kernighan and Dennis M. Ritchie, The C Programming Language, Second Edition, ANSI C, Englewood Cliffs, NJ: Prentice Hall, 1988, p. 41.

[2] Ibid.

[3] Jeremy Allen and Charles Hornberger, Mastering PHP 4.1, San Francisco: Sybex, Inc., 2002, p. 53.

[4] Randal L. Schwartz, Learning Perl, Sebastopol, CA: O’Reilly & Associates, Inc., 1993, p. 46.