compiler/typecheck/TcRules.hs

Note [Typechecking rules]

We infer the type of the LHS, and use that type to check the type of the RHS. That means that higher-rank rules work reasonably well. Here’s an example (test simplCore/should_compile/rule2.hs) produced by Roman:

    foo :: (forall m. m a -> m b) -> m a -> m b
    foo f = ...

    bar :: (forall m. m a -> m a) -> m a -> m a
    bar f = ...

    {-# RULES "foo/bar" foo = bar #-}

He wanted the rule to typecheck.
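As a rough standalone rendering of that example (the module name and the placeholder bodies are mine, not part of the note or of rule2.hs), the following should be accepted, because the RHS bar is checked against the type inferred for the LHS foo, so the higher-rank argument types only need to be related by subsumption:

    {-# LANGUAGE RankNTypes #-}
    module RuleHigherRank where

    foo :: (forall m. m a -> m b) -> m a -> m b
    foo _ = undefined   -- placeholder body

    bar :: (forall m. m a -> m a) -> m a -> m a
    bar _ = undefined   -- placeholder body

    -- Typechecks: the LHS type is inferred, and bar is checked against it
    -- (unifying a with b in the process).
    {-# RULES "foo/bar" foo = bar #-}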

Note [TcLevel in type checking rules]

Bringing type variables into scope naturally bumps the TcLevel. Thus, we type check the term-level binders in a bumped level, and we must accordingly bump the level whenever these binders are in scope.

Note [The SimplifyRule Plan]

Example. Consider the following left-hand side of a rule:
    f (x == y) (y > z) = …
If we typecheck this expression we get constraints
    d1 :: Ord a, d2 :: Eq a
We do NOT want to “simplify” to the LHS
    forall x::a, y::a, z::a, d1::Ord a.
    f ((==) (eqFromOrd d1) x y) ((>) d1 y z) = …
Instead we want
    forall x::a, y::a, z::a, d1::Ord a, d2::Eq a.
    f ((==) d2 x y) ((>) d1 y z) = …
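A compilable version of this first example looks like the following (the right-hand side, which the note elides, is filled with a placeholder, and the module and rule names are invented). The LHS gives rise to two separate wanted constraints, Eq a from (==) and Ord a from (>), and the desired rule quantifies over both dictionaries rather than projecting Eq a out of Ord a:

    module SeparateDicts where

    f :: Bool -> Bool -> Bool
    f p q = p && q

    -- The rule quantifies over x, y, z (and, behind the scenes, over the
    -- type a and over both the Eq a and Ord a dictionaries).  The RHS here
    -- is only a placeholder standing in for the note's elided "…".
    {-# RULES "f/cmp" forall x y z. f (x == y) (y > z) = (x == y) && (y > z) #-}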
Here is another example:
    fromIntegral :: (Integral a, Num b) => a -> b
    {-# RULES "foo" fromIntegral = id :: Int -> Int #-}

In the rule, a = b = Int, and Num Int is a superclass of Integral Int. But we don’t want to get

    forall dIntegralInt.
        fromIntegral Int Int dIntegralInt (scsel dIntegralInt) = id Int
because the scsel will mess up RULE matching. Instead we want
    forall dIntegralInt, dNumInt.
        fromIntegral Int Int dIntegralInt dNumInt = id Int
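At the source level the example is just this orphan rule (GHC's base library ships essentially the same one); the point above concerns the elaborated form GHC should produce for it, with Integral Int and Num Int as two separate dictionary arguments:

    module FromIntegralRule where

    -- Desired elaboration: bind dIntegralInt and dNumInt separately,
    -- rather than deriving the Num Int argument via a superclass selector.
    {-# RULES "fromIntegral/Int->Int" fromIntegral = id :: Int -> Int #-}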
Even if we have
    g (x == y) (y == z) = ..
where the two dictionaries are identical, we do NOT WANT
    forall x::a, y::a, z::a, d1::Eq a.
    g ((==) d1 x y) ((==) d1 y z) = …

because that will only match if the dict args are (visibly) equal. Instead we want to quantify over the dictionaries separately.

In short, simplifyRuleLhs must only squash equalities, leaving all dicts unchanged, with absolutely no sharing.

Also note that we can’t solve the LHS constraints in isolation. Example:

    foo :: Ord a => a -> a
    foo_spec :: Int -> Int
    {-# RULES "foo" foo = foo_spec #-}

Here, it’s the RHS that fixes the type variable.
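A standalone version (with placeholder bodies of my own) makes the point concrete: on its own the LHS only yields a wanted (Ord alpha) with alpha undetermined, and it is checking the RHS foo_spec against the LHS type that fixes alpha := Int:

    module RhsFixesTyVar where

    foo :: Ord a => a -> a
    foo = id          -- placeholder body

    foo_spec :: Int -> Int
    foo_spec = id     -- placeholder body

    {-# RULES "foo" foo = foo_spec #-}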

HOWEVER, under a nested implication things are different. Consider

    f :: (forall a. Eq a => a -> a) -> Bool -> …
    {-# RULES "foo" forall (v :: forall b. Eq b => b -> b).
            f v True = …
      #-}

Here we must solve the wanted (Eq a) from the given (Eq a) resulting from skolemising the argument type of f. So we revert to SimplCheck when going under an implication.
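Filling in the elided pieces with simple placeholders gives a standalone version of this example; typechecking the application (f v True) skolemises f's argument type, producing the given (Eq a) under which the wanted arising from instantiating v is solved:

    {-# LANGUAGE RankNTypes #-}
    module NestedImplication where

    f :: (forall a. Eq a => a -> a) -> Bool -> Bool
    f _ b = b         -- placeholder body; the Bool result type is a guess

    -- The wanted Eq constraint from using v at type (forall a. Eq a => a -> a)
    -- is solved from the given Eq a of that skolemised argument type, not
    -- quantified over.
    {-# RULES "foo" forall (v :: forall b. Eq b => b -> b). f v True = True #-}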

So the SimplifyRule Plan is this (a worked example follows the steps):

  • Step 0: typecheck the LHS and RHS to get constraints from each
  • Step 1: Simplify the LHS and RHS constraints all together in one bag
    We do this to discover all unification equalities
  • Step 2: Zonk the ORIGINAL (unsimplified) LHS constraints, to take
    advantage of those unifications
  • Step 3: Partition the LHS constraints into the ones we will
    quantify over, and the others. See Note [RULE quantification over equalities]
  • Step 4: Decide on the type variables to quantify over
  • Step 5: Simplify the LHS and RHS constraints separately, using the
    quantified constraints as givens
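As a rough illustration of how the steps play out (the module, type-family, and function names here are invented, and the comments only sketch the intent of each step on this example):

    {-# LANGUAGE TypeFamilies #-}
    module PlanSketch where

    type family F a
    type instance F Int = Bool

    k :: Ord a => a -> F a
    k _ = undefined   -- placeholder body

    k_int :: Int -> Bool
    k_int _ = True

    -- Step 0: the LHS 'k' emits a wanted (Ord alpha) and has type
    --         alpha -> F alpha; checking the RHS 'k_int' against that
    --         type gives rise to (alpha ~ Int) and (F alpha ~ Bool).
    -- Step 1: simplifying everything in one bag unifies alpha := Int
    --         and reduces F Int to Bool.
    -- Step 2: zonking the original LHS constraint then yields (Ord Int).
    -- Steps 3-4: we quantify over the (Ord Int) dictionary; there are no
    --         equalities or type variables left to quantify over here.
    -- Step 5: the LHS and RHS constraints are simplified separately,
    --         with that dictionary available as a given.
    {-# RULES "k/int" k = k_int #-}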

Note [Solve order for RULES]

In step 1 above, we need to be a bit careful about solve order. Consider

    f :: Int -> T Int
    type instance T Int = Bool
    RULE   f 3 = True
From the RULE we get
    lhs-constraints:  T Int ~ alpha
    rhs-constraints:  Bool  ~ alpha

where ‘alpha’ is the type that connects the two. If we glom them all together, and solve the RHS constraint first, we might solve with alpha := Bool. But then we’d end up with a RULE like

RULE: f 3 |> (co :: T Int ~ Bool) = True

which is terrible. We want

RULE: f 3 = True |> (sym co :: Bool ~ T Int)

So we are careful to solve the LHS constraints first, and then the RHS constraints. Actually much of this is done by the on-the-fly constraint solving, so the same order must be observed in tcRule.
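For concreteness, here is the program in question as a standalone module (module and rule names invented). Solving the LHS constraint first leaves any cast on the rule's RHS rather than wrapped around its LHS:

    {-# LANGUAGE TypeFamilies #-}
    module SolveOrder where

    type family T a
    type instance T Int = Bool

    f :: Int -> T Int
    f _ = True

    -- Desired elaboration (per the note):
    --   RULE: f 3 = True |> (sym co :: Bool ~ T Int)
    -- i.e. the cast ends up on the right-hand side.
    {-# RULES "f3" f 3 = True #-}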

Note [RULE quantification over equalities]

Deciding which equalities to quantify over is tricky:
  • We do not want to quantify over insoluble equalities (Int ~ Bool)
    1. because we prefer to report a LHS type error
    2. because if such things end up in ‘givens’ we get a bogus “inaccessible code” error
  • But we do want to quantify over things like (a ~ F b), where F is a type function.

The difficulty is that it’s hard to tell what is insoluble! So we see whether the simplification step yielded any type errors, and if so refrain from quantifying over any equalities.

Note [Quantifying over coercion holes]

Equality constraints from the LHS will emit coercion hole Wanteds. These don’t have a name, so we can’t quantify over them directly. Instead, because we really do want to quantify here, invent a new EvVar for the coercion, fill the hole with the invented EvVar, and then quantify over the EvVar. Not too tricky – just some impedance matching, really.

Note [Simplify cloned constraints]

At this stage, we’re simplifying constraints only for insolubility and for unification. Note that all the evidence is quickly discarded. We use a clone of the real constraint. If we don’t do this, then RHS coercion-hole constraints get filled in, only to get filled in again when solving the implications emitted from tcRule. That’s terrible, so we avoid the problem by cloning the constraints.