compiler/coreSyn/CoreArity.hs

Note [exprArity invariant]

exprArity has the following invariant:

  1. If typeArity (exprType e) = n, then manifestArity (etaExpand e n) = n.
     That is, etaExpand can always expand as much as typeArity says, so the
     case analysis in etaExpand and in typeArity must match.
  2. exprArity e <= typeArity (exprType e)
  3. Hence if (exprArity e) = n, then manifestArity (etaExpand e n) = n.
     That is, if exprArity says "the arity is n" then etaExpand really can
     get "n" manifest lambdas to the top.
Why is this important? Because
  • in TidyPgm we use exprArity to fix the final arity of each top-level Id, and
  • in CorePrep we use etaExpand on each rhs, so that the visible lambdas actually match that arity, which in turn means that the StgRhs has the right number of lambdas.

An alternative would be to do the eta-expansion in TidyPgm, at least for top-level bindings, in which case we would not need the trim_arity in exprArity. That is a less local change, so I’m going to leave it for today!
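As a concrete (and heavily simplified) illustration of the invariant, here is a toy model; `Ty`, `Expr`, `typeArity`, and `exprArity` below are hypothetical stand-ins for GHC's real definitions, not its actual API:

```haskell
-- A toy model of the exprArity invariant: typeArity counts the value
-- arrows of a type, and exprArity (manifest lambdas) never exceeds it.
data Ty = FunTy Ty Ty | ConTy String
  deriving (Eq, Show)

-- The most etaExpand could ever expand to: one per manifest arrow.
typeArity :: Ty -> Int
typeArity (FunTy _ res) = 1 + typeArity res
typeArity (ConTy _)     = 0

-- A mini expression language, just enough to state the invariant.
data Expr = Lam Ty Expr | Var String Ty

exprType :: Expr -> Ty
exprType (Lam a b) = FunTy a (exprType b)
exprType (Var _ t) = t

-- Count manifest lambdas; by construction <= typeArity (exprType e).
exprArity :: Expr -> Int
exprArity (Lam _ b) = 1 + exprArity b
exprArity _         = 0
```

So for `e = \(x::Int) -> f`, with `f :: Int -> Int`, we get exprArity 1 but typeArity 2, and etaExpand is entitled to expand up to 2.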

Note [Newtype classes and eta expansion]

NB: this nasty special case is no longer required, because
for newtype classes we don't use the class-op rule mechanism
at all.  See Note [Single-method classes] in TcInstDcls. SLPJ May 2013

-------- Old out-of-date comments, just for interest --------

We have to be careful when eta-expanding through newtypes. In general it's a good idea, but annoyingly it interacts badly with the class-op rule mechanism. Consider

class C a where { op :: a -> a }
instance C b => C [b] where
  op x = ...

These translate to

co :: forall a. (a->a) ~ C a
$copList :: C b -> [b] -> [b]
$copList d x = ...
$dfList :: C b -> C [b]
{-# DFunUnfolding = [$copList] #-}
$dfList d = $copList d |> co@[b]

Now suppose we have:

dCInt :: C Int
blah :: [Int] -> [Int]
blah = op ($dfList dCInt)

Now we want the built-in op/$dfList rule to fire to give

blah = $copList dCInt

But with eta-expansion 'blah' might (and in #3772, which is slightly more complicated, does) turn into

blah = op (\eta. ($dfList dCInt |> sym co) eta)

and now it is much harder for the op/$dfList rule to fire, because exprIsConApp_maybe won’t hold of the argument to op. I considered trying to make it hold, but it’s tricky and I gave up.

The test simplCore/should_compile/T3722 is an excellent example.

-------- End of old out-of-date comments, just for interest --------

Note [exprArity for applications]

When we come to an application we check that the arg is trivial.
eg f (fac x) does not have arity 2,
even if f has arity 3!
  • We require that the arg is trivial rather than merely cheap. Suppose f has arity 2. Then f (Just y) has arity 0, because if we gave it arity 1 and then inlined f we'd get

    let v = Just y in \w. <f-body>

    which has arity 0. And we try to maintain the invariant that we don't have arity decreases.

  • The "max 0" is important! (\x y -> f x) has arity 2, even if f is unknown, hence arity 0.
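Both bullets can be sketched with a toy analysis; the AST and the `arity` function below are illustrative only, not GHC's real exprArity:

```haskell
-- A toy exprArity-style analysis showing the application and "max 0"
-- cases from this Note.  Illustrative only.
data E = V String        -- variable (a trivial argument)
       | L String E      -- lambda
       | A E E           -- application
       | Op E E          -- some non-trivial computation, e.g. (fac x)

trivial :: E -> Bool
trivial (V _) = True
trivial _     = False

-- env gives the assumed arity of free variables.
arity :: (String -> Int) -> E -> Int
arity env = go
  where
    go (V f)   = env f
    go (L _ b) = 1 + go b
    go (A f a)
      | trivial a = max 0 (go f - 1)  -- the "max 0" of this Note
      | otherwise = 0                 -- non-trivial argument: play safe
    go (Op _ _)  = 0
```

With f at arity 3, `f (fac x)` gets arity 0 (non-trivial argument), while `f x` gets 2; and `\x y -> g x` with unknown g still gets arity 2 thanks to the `max 0`.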

Note [Definition of arity]

The “arity” of an expression ‘e’ is n if
applying ‘e’ to fewer than n value arguments converges rapidly

Or, to put it another way

there is no work lost in duplicating the partial
application (e x1 .. x(n-1))

In the divergent case, no work is lost by duplicating because if the thing is evaluated once, that's the end of the program.

Or, to put it another way, in any context C

C[ (\x1 .. xn. e x1 .. xn) ]
is as efficient as

C[ e ]

It’s all a bit more subtle than it looks:

Note [One-shot lambdas]

Consider one-shot lambdas

let x = expensive in \y z -> E

We want this to have arity 1 if the y-abstraction is a 1-shot lambda.

Note [Dealing with bottom]

A Big Deal with computing arities is expressions like

f = \x -> case x of
            True  -> \s -> e1
            False -> \s -> e2

This happens all the time when f :: Bool -> IO (). In this case we do eta-expand, in order to get that \s to the top, and give f arity 2.

This isn’t really right in the presence of seq. Consider
(f bot) `seq` 1

This should diverge! But if we eta-expand, it won’t. We ignore this “problem” (unless -fpedantic-bottoms is on), because being scrupulous would lose an important transformation for many programs. (See #5587 for an example.)

Consider also

f = \x -> error "foo"

Here, arity 1 is fine. But if it is

f = \x -> case x of
            True  -> error "foo"
            False -> \y -> x+y

then we want to get arity 2. Technically, this isn't quite right, because

(f True) `seq` 1

should diverge, but it’ll converge if we eta-expand f. Nevertheless, we do so; it improves some programs significantly, and increasing convergence isn’t a bad thing. Hence the ABot/ATop in ArityType.

So these two transformations aren’t always the Right Thing, and we have several tickets reporting unexpected behaviour resulting from this transformation. So we try to limit it as much as possible:

  1. Do NOT move a lambda outside a known-bottom case expression

    case undefined of { (a,b) -> \y -> e }

    This showed up in #5557

  2. Do NOT move a lambda outside a case if all the branches of the case are known to return bottom.

    case x of { (a,b) -> \y -> error "urk" }

    This case is less important, but the idea is that if the fn is going to diverge eventually anyway then getting the best arity isn’t an issue, so we might as well play safe

  3. Do NOT move a lambda outside a case unless
     (a) the scrutinee is ok-for-speculation, or
     (b) more liberally: the scrutinee is cheap (e.g. a variable), and -fpedantic-bottoms is not enforced (see #2915 for an example).

Of course both (1) and (2) are readily defeated by disguising the bottoms.

4. Note [Newtype arity]

Non-recursive newtypes are transparent, and should not get in the way. We do (currently) eta-expand recursive newtypes too. So if we have, say

newtype T = MkT ([T] -> Int)

Suppose we have

e = coerce T f

where f has arity 1. Then: etaExpandArity e = 1; that is, etaExpandArity looks through the coerce.

When we eta-expand e to arity 1: eta_expand 1 e T
we want to get: coerce T (\x::[T] -> (coerce ([T]->Int) e) x)

HOWEVER, note that if you use coerce bogusly you can get

coerce Int negate

And since negate has arity 2, you might try to eta expand. But you can't decompose Int to a function type. Hence the final case in eta_expand.
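The "look through the coerce, but stop when you hit Int" behaviour can be sketched on a toy type representation; all the names here are hypothetical, not GHC's:

```haskell
-- Toy types: arrows, newtypes (carrying their representation type),
-- and plain type constructors like Int.
data Ty = FunTy Ty Ty       -- t1 -> t2
        | NewTy String Ty   -- newtype N = MkN <rep>
        | ConTy String      -- e.g. Int

-- Arity of a type: count arrows, looking through newtype coercions.
typeArity :: Ty -> Int
typeArity (FunTy _ res) = 1 + typeArity res
typeArity (NewTy _ rep) = typeArity rep  -- look through the coerce
typeArity (ConTy _)     = 0              -- Int: cannot decompose further
```

So for `newtype T = MkT ([T] -> Int)`, the type T has arity 1, while Int has arity 0 no matter what bogus coercion wraps it.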

Note [The state-transformer hack]

Suppose we have
f = e

where e has arity n. Then, if we know from the context that f has a usage type like

t1 -> ... -> tn -1-> t(n+1) -1-> ... -1-> tm -> ...

then we can expand the arity to m. This usage type says that any application (x e1 .. en) will be applied uniquely to (m-n) more args. Consider

f = \x. let y = <expensive>
        in case x of
             True  -> foo
             False -> \(s:RealWorld) -> e

where foo has arity 1. Then we want the state hack to apply to foo too, so we can eta expand the case.

Then we expect that if f is applied to one arg, it'll be applied to two (that's the hack; we don't really know, and sometimes it's false). See also Id.isOneShotBndr.

Note [State hack and bottoming functions]

It’s a terrible idea to use the state hack on a bottoming function. Here’s what happens (#2861):

f :: String -> IO T
f = \p. error "..."

Eta-expand, using the state hack:

f = \p. (\s. ((error "...") |> g1) s) |> g2
g1 :: IO T ~ (S -> (S,T))
g2 :: (S -> (S,T)) ~ IO T

Extrude the g2

f' = \p. \s. ((error "...") |> g1) s
f = f' |> (String -> g2)

Discard args for bottoming function

f' = \p. \s. ((error "...") |> g1) |> g3
g3 :: (S -> (S,T)) ~ (S,T)

Extrude g1.g3

f'' = \p. \s. (error "...")
f' = f'' |> (String -> S -> g1.g3)

And now we can repeat the whole loop. Aargh! The bug is in applying the state hack to a function which then swallows the argument.

This arose in another guise in #3959. Here we had

catch# (throw exn >> return ())

Note that (throw :: forall a e. Exn e => e -> a) is called with [a = IO ()]. After inlining (>>) we get

catch# (\_. throw {IO ()} exn)

We must not eta-expand to

catch# (\_ _. throw {...} exn)

because ‘catch#’ expects to get a (# _,_ #) after applying its argument to a State#, not another function!

In short, we use the state hack to allow us to push let inside a lambda, but not to introduce a new lambda.

Note [ArityType]

ArityType is the result of a compositional analysis on expressions, from which we can decide the real arity of the expression (extracted with function exprEtaExpandArity).

Here is what the fields mean. If an arbitrary expression ‘f’ has ArityType ‘at’, then

  • If at = ABot n, then (f x1..xn) definitely diverges. Partial applications to fewer than n args may or may not diverge.

    We allow ourselves to eta-expand bottoming functions, even if doing so may lose some `seq` sharing,

        let x = <expensive> in \y. error (g x y)
        ==> \y. let x = <expensive> in error (g x y)

  • If at = ATop as, and n = length as, then expanding 'f' to (\x1..xn. f x1 .. xn) loses no sharing, assuming the calls of f respect the one-shot-ness of its definition.

    NB 'f' is an arbitrary expression, eg (f = g e1 e2). This 'f' can have ArityType as ATop, with length as > 0, only if e1 e2 are themselves cheap.

  • In both cases, f, (f x1), ... (f x1 ... x(n-1)) are definitely really functions, or bottom, but not casts from a data type, in at least one case branch. (If it's a function in one case branch but an unsafe cast from a data type in another, the program is bogus.) So eta expansion is dynamically ok; see Note [State hack and bottoming functions], the part about catch#

Example:

f = \x\y. let v = <expensive> in
          \s(one-shot) \t(one-shot). blah

'f' has ArityType [ManyShot,ManyShot,OneShot,OneShot]. The one-shot-ness means we can, in effect, push that 'let' inside the \s\t.

Suppose f = \x\y. x+y. Then

f             :: AT [False,False] ATop
f v           :: AT [False]       ATop
f <expensive> :: AT []            ATop
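The arithmetic in the last lines above can be sketched as follows; `ArityType`, `applyAT`, and `applyExpensive` are illustrative names for this sketch, not GHC's actual definitions:

```haskell
-- Sketch of arity-type arithmetic: each application peels information
-- off the ArityType.  The Bool flags record one-shot-ness per argument.
data ArityType = ATop [Bool]   -- per-argument one-shot flags
               | ABot Int      -- diverges when applied to this many args
  deriving (Eq, Show)

-- Applying one cheap argument peels one entry off an ATop.
applyAT :: ArityType -> ArityType
applyAT (ATop (_:as)) = ATop as
applyAT (ATop [])     = ATop []
applyAT (ABot n)      = ABot (max 0 (n - 1))

-- Applying an expensive argument loses all remaining arity: AT [].
applyExpensive :: ArityType -> ArityType
applyExpensive _ = ATop []
```

So from `f :: AT [False,False]` we get `f v :: AT [False]` via `applyAT`, and `f <expensive> :: AT []` via `applyExpensive`.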

---------- Main arity code ----------

See Note [ArityType]

Note [Arity analysis]

The motivating example for arity analysis is this:

f = \x. let g = f (x+1)
        in \y. ...g...

What arity does f have? Really it should have arity 2, but a naive look at the RHS won’t see that. You need a fixpoint analysis which says it has arity “infinity” the first time round.

This example happens a lot; it first showed up in Andy Gill’s thesis, fifteen years ago! It also shows up in the code for ‘rnf’ on lists in #4138.

The analysis is easy to achieve because exprEtaExpandArity takes an argument

type CheapFun = CoreExpr -> Maybe Type -> Bool

used to decide if an expression is cheap enough to push inside a lambda. And exprIsCheapX in turn takes an argument

type CheapAppFun = Id -> Int -> Bool

which tells when an application is cheap. This makes it easy to write the analysis loop.

The analysis is cheap-and-cheerful because it doesn’t deal with mutual recursion. But the self-recursive case is the important one.
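A minimal sketch of that fixpoint loop, on a toy AST; the `cheap`/`arity`/`fixArity` names and the hard-wired recursive binder "f" are assumptions for illustration, not GHC's CheapFun/CheapAppFun machinery:

```haskell
-- Toy AST: variables, lambdas, applications, and non-recursive lets.
data E = V String | L String E | A E E | Let String E E

-- Is this expression cheap, assuming the recursive binder "f" has
-- arity fA?  A partial application of "f" counts as a cheap PAP.
cheap :: Int -> E -> Bool
cheap fA e0 = go e0 0
  where
    go (V "f") n = n < fA
    go (V _)   n = n == 0
    go (A g _) n = go g (n + 1)
    go _       _ = False

-- One pass of the analysis, under an assumed arity for "f".
arity :: Int -> E -> Int
arity fA = go
  where
    go (L _ b)     = 1 + go b
    go (Let _ r b) = if cheap fA r then go b else 0  -- look past cheap lets
    go (A g _)     = max 0 (go g - 1)
    go (V "f")     = fA
    go (V _)       = 0

-- Fixpoint: assume arity "infinity" the first time round, then
-- iterate downwards until stable.
fixArity :: E -> Int
fixArity rhs = loop 1000000
  where
    loop a = let a' = min a (arity a rhs)
             in if a' == a then a else loop a'
```

For the motivating example `f = \x. let g = f x in \y. g y` (the `x+1` elided), a naive single pass assuming f has arity 0 stops at arity 1, while the fixpoint finds arity 2.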

Note [Eta expanding through dictionaries]

If the experimental -fdicts-cheap flag is on, we eta-expand through dictionary bindings. This improves arities. Thereby, it also means that full laziness is less prone to floating out the application of a function to its dictionary arguments, which can thereby lose opportunities for fusion. Example:

foo :: Ord a => a -> ...
foo = /\a. \(d:Ord a). let d' = ...d... in \(x:a). ...
-- So foo has arity 1

f = \x. foo dInt $ bar x

The (foo dInt) is floated out, and makes ineffective a RULE

foo (bar x) = ...

One could go further and make exprIsCheap reply True to any dictionary-typed expression, but that’s more work.

See Note [Dictionary-like types] in TcType.hs for why we use isDictLikeTy here rather than isDictTy

Note [Eta expanding thunks]

We don't eta-expand
  • Trivial RHSs     x = y
  • PAPs             x = map g
  • Thunks           f = case y of p -> \x -> blah

When we see

f = case y of p -> \x -> blah

should we eta-expand it? Well, if 'x' is a one-shot state token then 'yes' because 'f' will only be applied once. But otherwise we (conservatively) say no. My main reason is to avoid expanding PAPs

f = g d  ==>  f = \x. g d x

because that might in turn make g inline (if it has an inline pragma), which we might not want. After all, INLINE pragmas say “inline only when saturated” so we don’t want to be too gung-ho about saturating!

Note [ABot branches: use max]

Consider

case x of
  True  -> \x.  error "urk"
  False -> \xy. error "urk2"

Remember: ABot n means “if you apply to n args, it’ll definitely diverge”. So we need (ABot 2) for the whole thing, the /max/ of the ABot arities.
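A one-line sketch of this rule; `BotArity` and `combineBot` are hypothetical names for illustration:

```haskell
-- ABot-style info: "applied to this many args, definitely diverges".
newtype BotArity = BotArity Int
  deriving (Eq, Show)

-- Combining two bottoming case branches: take the max of the arities,
-- as this Note prescribes.
combineBot :: BotArity -> BotArity -> BotArity
combineBot (BotArity n) (BotArity m) = BotArity (max n m)
```

For the example above, the branches contribute BotArity 1 and BotArity 2, and the whole case gets BotArity 2.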

Note [Combining case branches]

Consider

go = \x. let z = go e0
             go2 = \x. case x of
                         True  -> z
                         False -> \s(one-shot). e1
         in go2 x

We really want to eta-expand go and go2. When combining the branches of the case we have

ATop [] andAT ATop [OneShotLam]

and we want to get ATop [OneShotLam]. But if the inner lambda wasn’t one-shot we don’t want to do this. (We need a proper arity analysis to justify that.)

So we combine the best of the two branches, on the (slightly dodgy) basis that if we know one branch is one-shot, then they all must be.
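That "best of the two branches" combination can be sketched as follows; `OneShot` and `andAT` here are illustrative, not GHC's real types:

```haskell
-- One-shot info per lambda, as in the ATop lists of this Note.
data OneShot = OneShotLam | ManyShot
  deriving (Eq, Show)

-- Combine two branches' ATop lists by keeping the longer one, on the
-- (slightly dodgy) basis that if one branch is known one-shot,
-- they all must be.
andAT :: [OneShot] -> [OneShot] -> [OneShot]
andAT as bs = if length as >= length bs then as else bs
```

So `ATop [] `andAT` ATop [OneShotLam]` yields `ATop [OneShotLam]`, as the Note wants.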

Note [No crap in eta-expanded code]

The eta expander is careful not to introduce “crap”. In particular, given a CoreExpr satisfying the ‘CpeRhs’ invariant (in CorePrep), it returns a CoreExpr satisfying the same invariant. See Note [Eta expansion and the CorePrep invariants] in CorePrep.

This means the eta-expander has to do a bit of on-the-fly simplification but it's not too hard. The alternative, of relying on a subsequent clean-up phase of the Simplifier to de-crapify the result, means you can't really use it in CorePrep, which is painful.

Note [Eta expansion for join points]

The no-crap rule is very tiresome to guarantee when we have join points. Consider eta-expanding

let j :: Int -> Int -> Bool
j x = e

in b

The simple way is

\(y::Int). (let j x = e in b) y

The no-crap way is

\(y::Int). let j' :: Int -> Bool
               j' x = e y
           in b[j'/j] y

where I have written j' to stress that j's type has changed. Note that (of course!) we have to push the application inside the RHS of the join as well as into the body. AND if j has an unfolding we have to push it into there too. AND j might be recursive...

So for now I'm abandoning the no-crap rule in this case. I think that for the use in CorePrep it really doesn't matter; and if it does, then CoreToStg.myCollectArgs will fall over.

(Moreover, I think that casts can make the no-crap rule fail too.)

Note [Eta expansion and SCCs]

Note that SCCs are not treated specially by etaExpand. If we have

etaExpand 2 (\x -> scc "foo" e)
  = \x y -> (scc "foo" e) y

So the costs of evaluating ‘e’ (not ‘e y’) are attributed to “foo”

Note [Eta expansion and source notes]

CorePrep puts floatable ticks outside of value applications, but not type applications. As a result we might be trying to eta-expand an expression like

(src<...> v) @a

which we want to lead to code like

\x -> src<...> v @a x

This means that we need to look through type applications and be ready to re-add floats on the top.