The model of computation that underlies Haskell is rewriting. In a sense, rewriting is at the heart of all formulations of computability--any computational device's fundamental behaviour is to read symbols and generate other symbols, which amounts to rewriting the input as the output.
Function definitions in Haskell provide the rules for rewriting. Normally, rewriting reduces the complexity of an expression, hence reduction is a synonym for rewriting. Here, complexity is a semantic notion associated with the expression, not a syntactic one: the expression that results after ``reduction'' is often syntactically longer and more complex than the original expression! In the literature, the term rewriting system usually signifies a set of rules that may be applied in either direction, while reduction refers to a rewriting process that uses the rules in one direction only.
For instance, given the function definition
square :: Int -> Int
square n = n*n
we have square 7 → 7*7 → 49, where the symbol → denotes ``rewrites to''.
The first reduction is user specified, the second is built in.
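This two-step reduction can be checked directly by running the definition; a minimal sketch:

```haskell
-- The definition from the text, packaged as a runnable program.
square :: Int -> Int
square n = n*n

main :: IO ()
main = print (square 7)   -- square 7 → 7*7 → 49
```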
Reduction continues till a result is obtained. A result is just an expression that cannot be rewritten further. In Haskell, results are normally constants that look sensible--values of type Int, Float, .... However, as we shall see in the lambda calculus, this may not always be the case--we may have complicated expressions that cannot be further reduced.
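Even in Haskell itself, a result need not be a numeric constant: a partial application is a value to which no further rewrite rule applies until it receives an argument. A small sketch (the name addThree is our own, purely illustrative):

```haskell
-- A result need not be a numeric constant.  The partial application
-- (+ 3) is already fully reduced: no rewrite rule applies to it alone.
addThree :: Int -> Int
addThree = (+ 3)

main :: IO ()
main = print (addThree 4)   -- supplying an argument enables reduction: 4+3 → 7
```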
What is the reduction sequence for square (4+3)? We have more than one possibility:

  square (4+3) → (4+3)*(4+3) → 7*(4+3) → 7*7 → 49
  square (4+3) → (4+3)*(4+3) → (4+3)*7 → 7*7 → 49
  square (4+3) → square 7 → 7*7 → 49
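The convergence of different reduction orders can be made concrete with a toy rewriter for such expressions. This is our own illustrative construction, not part of the text: innermost reduces the argument to a value before applying the squaring rule, while outermost duplicates the unreduced argument first.

```haskell
-- A toy rewriting system for arithmetic expressions, illustrating that
-- two different reduction strategies converge on the same result.
data Expr = Lit Int | Add Expr Expr | Mul Expr Expr | Sq Expr

-- Innermost strategy: rewrite the argument to a literal first,
-- then apply the rule  Sq e → e*e.
innermost :: Expr -> Int
innermost (Lit n)   = n
innermost (Add a b) = innermost a + innermost b
innermost (Mul a b) = innermost a * innermost b
innermost (Sq e)    = let n = innermost e in n * n   -- reduce e first

-- Outermost strategy: apply  Sq e → Mul e e  immediately,
-- duplicating the unreduced argument.
outermost :: Expr -> Int
outermost (Lit n)   = n
outermost (Add a b) = outermost a + outermost b
outermost (Mul a b) = outermost a * outermost b
outermost (Sq e)    = outermost (Mul e e)            -- duplicate e first

main :: IO ()
main = do
  let e = Sq (Add (Lit 4) (Lit 3))   -- square (4+3)
  print (innermost e)
  print (outermost e)
```

Both strategies print 49 for square (4+3), as the sequences above suggest.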
Is it a coincidence that all three sequences yield the same result? No. In fact, we have the following theorem.