compiler/typecheck/TcCanonical.hs

Note [Canonicalization]

Canonicalization converts a simple constraint to a canonical form. It is unary (i.e. treats individual constraints one at a time).

Constraints originating from user-written code come into being as CNonCanonicals (except for CHoleCans, arising from holes). We know nothing about these constraints. So, first:

Classify CNonCanonical constraints, depending on whether they
are equalities, class predicates, or other.

Then proceed depending on the shape of the constraint. Generally speaking, each constraint gets flattened and then decomposed into one of several forms (see type Ct in TcRnTypes).

When an already-canonicalized constraint gets kicked out of the inert set, it must be recanonicalized. But we know a bit about its shape from the last time through, so we can skip the classification step.

Top-level canonicalization

Note [The superclass story]

We need to add superclass constraints for two reasons:

  • For givens [G], they give us a route to proof. E.g.

    f :: Ord a => a -> Bool
    f x = x == x

    We get a Wanted (Eq a), which can only be solved from the superclass of the Given (Ord a).

  • For wanteds [W], and deriveds [WD], [D], they may give useful functional dependencies. E.g.

    class C a b | a -> b where …
    class C a b => D a b where …

    Now a [W] constraint (D Int beta) has (C Int beta) as a superclass and that might tell us about beta, via C’s fundeps. We can get this by generating a [D] (C Int beta) constraint. It’s derived because we don’t actually have to cough up any evidence for it; it’s only there to generate fundep equalities.
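To make this concrete, here is a small compilable sketch (the instances and the binding ‘result’ are made up purely for illustration):

{-# LANGUAGE FunctionalDependencies, MultiParamTypeClasses #-}

class C a b | a -> b where
  cOp :: a -> b

class C a b => D a b where
  dOp :: a -> b

instance C Int Bool where cOp = (> 0)
instance D Int Bool where dOp = cOp

-- [W] D Int beta does not match the D instance outright (GHC will not
-- unify beta just to pick an instance). But its [D] superclass
-- (C Int beta) meets the (C Int Bool) instance, and C's fundep (a -> b)
-- improves beta := Bool; then both instances fire, and result :: Bool.
result = dOp (3 :: Int)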

See Note [Why adding superclasses can help].

For these reasons we want to generate superclass constraints for both Givens and Wanteds. But:

  • (Minor) they are often not needed, so generating them aggressively is a waste of time.
  • (Major) if we want recursive superclasses, there would be an infinite number of them. Here is a real-life example (#10318):
class (Frac (Frac a) ~ Frac a,
       Fractional (Frac a),
       IntegralDomain (Frac a))
    => IntegralDomain a where
 type Frac a :: *
Notice that IntegralDomain has an associated type Frac, and one
of IntegralDomain's superclasses is another IntegralDomain constraint.

So here’s the plan:

  1. Eagerly generate superclasses for given (but not wanted) constraints; see Note [Eagerly expand given superclasses]. This is done using mkStrictSuperClasses in canClassNC, when we take a non-canonical Given constraint and canonicalise it. However, stop if you encounter the same class twice. That is, mkStrictSuperClasses expands eagerly, but has a conservative termination condition: see Note [Expanding superclasses] in TcType.
  2. Solve the wanteds as usual, but do no further expansion of superclasses for canonical CDictCans in solveSimpleGivens or solveSimpleWanteds; see Note [Danger of adding superclasses during solving]. However, /do/ continue to eagerly expand superclasses for new /given/ /non-canonical/ constraints (canClassNC does this). As #12175 showed, a type-family application can expand to a class constraint, and we want to see its superclasses for just the same reason as Note [Eagerly expand given superclasses].
  3. If we have any remaining unsolved wanteds

    (see Note [When superclasses help] in TcRnTypes)

    try harder: take both the Givens and Wanteds, and expand superclasses again. See the calls to expandSuperClasses in TcSimplify.simpl_loop and solveWanteds. This may succeed in generating (a finite number of) extra Givens, and extra Deriveds. Both may help the proof.

  3a. An important wrinkle: only expand Givens from the current level. Two reasons:

    • We only want to expand it once, and that is best done at the level it is bound, rather than repeatedly at the leaves of the implication tree
    • We may be inside a type where we can’t create term-level evidence anyway, so we can’t superclass-expand, say, (a ~ b) to get (a ~# b). This happened in #15290.

  4. Go round to (2) again. This loop (2,3,4) is implemented in TcSimplify.simpl_loop.

The cc_pend_sc flag in a CDictCan records whether the superclasses of this constraint have been expanded. Specifically, in Step 3 we only expand superclasses for constraints with cc_pend_sc set to true (i.e. isPendingScDict holds).

Why do we do this? Two reasons:

  • To avoid repeated work, by repeatedly expanding the superclasses of the same constraint,
  • To terminate the above loop, at least in the -XNoRecursiveSuperClasses case. If there are recursive superclasses we could, in principle, expand forever, always encountering new constraints.

When we take a CNonCanonical or CIrredCan, but end up classifying it as a CDictCan, we set the cc_pend_sc flag to False.

Note [Superclass loops]

Suppose we have
class C a => D a
class D a => C a

Then, when we expand superclasses, we’ll get back to the self-same predicate, so we have reached a fixpoint in expansion and there is no point in fruitlessly expanding further. This case just falls out from our strategy. Consider

f :: C a => a -> Bool
f x = x==x

Then canClassNC gets the [G] d1: C a constraint, and eagerly emits superclasses [G] d2: D a, [G] d3: C a (psc). (The “psc” means it has its sc_pend flag set.) When processing d3 we find a match with d1 in the inert set, and we always keep the inert item (d1) if possible: see Note [Replacement vs keeping] in TcInteract. So d3 dies a quick, happy death.

Note [Eagerly expand given superclasses]

In step (1) of Note [The superclass story], why do we eagerly expand Given superclasses by one layer? (By “one layer” we mean expand transitively until you meet the same class again – the conservative criterion embodied in expandSuperClasses. So a “layer” might be a whole stack of superclasses.) We do this eagerly for Givens mainly because of some very obscure cases like this:

instance Bad a => Eq (T a)
f :: (Ord (T a)) => blah
f x = ....needs Eq (T a), Ord (T a)....

Here if we can’t satisfy (Eq (T a)) from the givens we’ll use the instance declaration; but then we are stuck with (Bad a). Sigh. This is really a case of non-confluent proofs, but to stop our users complaining we expand one layer in advance.

See Note [Instance and Given overlap] in TcInteract.

We also want to do this if we have

f :: F (T a) => blah
where
type instance F (T a) = Ord (T a)

So we may need to do a little work on the givens to expose the class that has the superclasses. That’s why the superclass expansion for Givens happens in canClassNC.
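Here is a compilable sketch of that situation (T, F, and f are hypothetical names, made up for this example):

{-# LANGUAGE ConstraintKinds, TypeFamilies, FlexibleContexts #-}

import Data.Kind (Constraint)

data T a = MkT a

type family F x :: Constraint
type instance F (T a) = Ord (T a)

-- The Given (F (T a)) reduces, via the type instance, to Ord (T a);
-- eagerly expanding that Given's superclasses exposes Eq (T a),
-- which is exactly what (==) needs.
f :: F (T a) => T a -> Bool
f x = x == x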

Note [Why adding superclasses can help]

Examples of how adding superclasses can help:

-- Example 1
    class C a b | a -> b
Suppose we want to solve
    [G] C a b
    [W] C a beta

Then adding [D] beta~b will let us solve it.

-- Example 2 (similar but using a type-equality superclass)
    class (F a ~ b) => C a b
And try to solve:
    [G] C a b
    [W] C a beta
Follow the superclass rules to add
    [G] F a ~ b
    [D] F a ~ beta

Now we get [D] beta ~ b, and can solve that.

-- Example (tcfail138)
    class L a b | a -> b
    class (G a, L a b) => C a b

    instance C a b' => G (Maybe a)
    instance C a b  => C (Maybe a) a
    instance L (Maybe a) a

When solving the superclasses of the (C (Maybe a) a) instance, we get
    [G] C a b, and hence by superclasses, [G] G a, [G] L a b
    [W] G (Maybe a)
Use the instance decl to get
    [W] C a beta
Generate its derived superclass
    [D] L a beta

Now using fundeps, combine with [G] L a b to get
    [D] beta ~ b

which is what we want.

Note [Danger of adding superclasses during solving]

Here’s a serious, but now out-dated example, from #4497:

class Num (RealOf t) => Normed t
type family RealOf x

Assume the generated wanted constraint is:
    [W] RealOf e ~ e
    [W] Normed e

If we were to be adding the superclasses during simplification we’d get:
    [W] RealOf e ~ e
    [W] Normed e
    [D] RealOf e ~ fuv
    [D] Num fuv
==>
    e := fuv, Num fuv, Normed fuv, RealOf fuv ~ fuv

This looks exactly like our original constraint. If we add the superclass of (Normed fuv) again we’d loop. By adding superclasses definitely only once, during canonicalisation, this situation can’t happen.

Mind you, now that Wanteds cannot rewrite Derived, I think this particular situation can’t happen.

Note [Equality superclasses in quantified constraints]

Consider (#15359, #15593, #15625)
f :: (forall a. theta => a ~ b) => stuff

It’s a bit odd to have a local, quantified constraint for (a~b), but some people want such a thing (see the tickets). And for Coercible it is definitely useful

f :: forall m. (forall p q. Coercible p q => Coercible (m p) (m q))
   => stuff

Moreover it’s not hard to arrange; we just need to look up /equality/ constraints in the quantified-constraint environment, which we do in TcInteract.doTopReactOther.
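For example, here is a compilable sketch of the Coercible case in action (liftCo is a hypothetical name):

{-# LANGUAGE QuantifiedConstraints, RankNTypes #-}

import Data.Coerce (Coercible, coerce)

-- The Wanted (Coercible (m a) (m b)) is discharged by the quantified
-- Given, whose premise (Coercible a b) is also in scope.
liftCo :: (forall p q. Coercible p q => Coercible (m p) (m q), Coercible a b)
       => m a -> m b
liftCo = coerce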

There is a wrinkle though, in the case where ‘theta’ is empty, so we have

f :: (forall a. a~b) => stuff

Now, potentially, the superclass machinery kicks in, in makeSuperClasses, giving us a second quantified constraint

(forall a. a ~# b)

BUT this is an unboxed value! And nothing has prepared us for dictionary “functions” that are unboxed. Actually it does just about work, but the simplifier ends up with stuff like

case (/\a. eq_sel d) of df -> …(df @Int)…

and fails to simplify that any further. And it doesn’t satisfy isPredTy any more.

So for now we simply decline to take superclasses in the quantified case. Instead we have a special case in TcInteract.doTopReactOther, which looks for primitive equalities specially in the quantified constraints.

See also Note [Evidence for quantified constraints] in Type.

Note [Quantified constraints]

The -XQuantifiedConstraints extension allows type-class contexts like this:

data Rose f x = Rose x (f (Rose f x))
instance (Eq a, forall b. Eq b => Eq (f b))
      => Eq (Rose f a)  where
  (Rose x1 rs1) == (Rose x2 rs2) = x1==x2 && rs1 == rs2

Note the (forall b. Eq b => Eq (f b)) in the instance contexts. This quantified constraint is needed to solve the

[W] (Eq (f (Rose f x)))

constraint which arises from the (==) definition.

The wiki page is
https://gitlab.haskell.org/ghc/ghc/wikis/quantified-constraints

which in turn contains a link to the GHC Proposal where the change is specified, and a Haskell Symposium paper about it.

We implement two main extensions to the design in the paper:

  1. We allow a variable in the instance head, e.g.

    f :: forall m a. (forall b. m b) => D (m a)

    Notice the ‘m’ in the head of the quantified constraint, not a class.

2. We support superclasses to quantified constraints.
   For example (contrived):
     f :: (Ord b, forall b. Ord b => Ord (m b)) => m a -> m a -> Bool
     f x y = x==y
   Here we need (Eq (m a)); but the quantified constraint deals only
   with Ord.  But we can make it work by using its superclass.

Here are the moving parts:
  • Language extension {-# LANGUAGE QuantifiedConstraints #-} and add it to ghc-boot-th:GHC.LanguageExtensions.Type.Extension

  • A new form of evidence, EvDFun, that is used to discharge such wanted constraints

  • checkValidType gets some changes to accept forall-constraints only in the right places.

  • Type.PredTree gets a new constructor ForAllPred, and classifyPredType analyses a PredType to decompose the new forall-constraints

  • TcSMonad.InertCans gets an extra field, inert_insts, which holds all the Given forall-constraints. In effect, such Given constraints are like local instance decls.

  • When trying to solve a class constraint, via TcInteract.matchInstEnv, use the InstEnv from inert_insts so that we include the local Given forall-constraints in the lookup. (See TcSMonad.getInstEnvs.)

  • TcCanonical.canForAll deals with solving a forall-constraint. See

    Note [Solving a Wanted forall-constraint]

  • We augment the kick-out code to kick out an inert forall constraint if it can be rewritten by a new type equality; see TcSMonad.kick_out_rewritable

Note that a quantified constraint is never /inferred/ (by TcSimplify.simplifyInfer). A function can only have a quantified constraint in its type if it is given an explicit type signature.

Note [Solving a Wanted forall-constraint]

Solving a wanted forall (quantified) constraint
[W] df :: forall ab. (Eq a, Ord b) => C x a b
is delightfully easy. Just build an implication constraint
forall ab. (g1::Eq a, g2::Ord b) => [W] d :: C x a b
and discharge df thus:
df = /\ab. \g1 g2. let <binds> in d

where <binds> is filled in by solving the implication constraint. All the machinery is to hand; there is little to do.
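A user-level sketch that exercises this machinery (g and h are hypothetical names):

{-# LANGUAGE QuantifiedConstraints, RankNTypes #-}

g :: (forall a. Eq a => Eq (f a)) => f Int -> f Int -> Bool
g = (==)

-- Instantiating g at f := [] generates
--   [W] forall a. Eq a => Eq [a]
-- which the solver discharges by building the implication
--   forall a. (g1 :: Eq a) => [W] Eq [a]
-- whose body is solved from the ordinary list instance.
h :: [Int] -> [Int] -> Bool
h = g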

Note [Solving a Given forall-constraint]

For a Given constraint
[G] df :: forall ab. (Eq a, Ord b) => C x a b

we just add it to TcS’s local InstEnv of known instances, via addInertForall. Then, if we look up (C x Int Bool), say, we’ll find a match in the InstEnv.
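A small sketch of a Given forall-constraint acting as a local instance (the class C and function g are made up for illustration):

{-# LANGUAGE QuantifiedConstraints, RankNTypes, MultiParamTypeClasses #-}

class C x a b where
  op :: x -> a -> b -> Bool

-- Inside g, [G] forall a b. (Eq a, Ord b) => C x a b sits in the local
-- InstEnv; the lookup for (C x Int Bool) finds it, and its premises
-- (Eq Int, Ord Bool) are solved from top-level instances.
g :: (forall a b. (Eq a, Ord b) => C x a b) => x -> Bool
g x = op x (1 :: Int) True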

Note [Canonicalising equalities]

In order to canonicalise an equality, we look at the structure of the two types at hand, looking for similarities. A difficulty is that the types may look dissimilar before flattening but similar after flattening. However, we don’t just want to jump in and flatten right away, because this might be wasted effort. So, after looking for similarities and failing, we flatten and then try again. Of course, we don’t want to loop, so we track whether or not we’ve already flattened.

It is conceivable to do a better job at tracking whether or not a type is flattened, but this is left as future work. (Mar ‘15)

Note [FunTy and decomposing tycon applications]

When can_eq_nc’ attempts to decompose a tycon application, we haven’t yet zonked. This means that we may very well have a FunTy containing a type of some unknown kind. For instance, we may have,

FunTy (a :: k) Int

where k is a unification variable. tcRepSplitTyConApp_maybe panics in the event that it sees such a type, as it cannot determine the RuntimeReps which the (->) is applied to. Consequently, it is vital that we instead use tcRepSplitTyConApp_maybe’, which simply returns Nothing in such a case.

When this happens can_eq_nc’ will fail to decompose, zonk, and try again. Zonking should fill the variable k, meaning that decomposition will succeed the second time around.

Note [Unsolved equalities]

If we have an unsolved equality like
(a b ~R# Int)

that is not necessarily insoluble! Maybe ‘a’ will turn out to be a newtype. So we want to make it a potentially-soluble Irred, not an insoluble one. Missing this point is what caused #15431.

Note [Newtypes can blow the stack]

Suppose we have

newtype X = MkX (Int -> X)
newtype Y = MkY (Int -> Y)

and now wish to prove

[W] X ~R Y

This Wanted will loop, expanding out the newtypes ever deeper looking for a solid match or a solid discrepancy. Indeed, there is something appropriate to this looping, because X and Y do have the same representation, in the limit – they’re both (Fix ((->) Int)). However, no finitely-sized coercion will ever witness it. This loop won’t actually cause GHC to hang, though, because we check our depth when unwrapping newtypes.
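A minimal reproduction; the offending binding is commented out so the sketch still compiles:

import Data.Coerce (coerce)

newtype X = MkX (Int -> X)
newtype Y = MkY (Int -> Y)

-- Uncommenting this asks the solver for [W] X ~R Y; it unwraps the
-- newtypes ever deeper until the depth check cuts it off with an error
-- rather than hanging:
--
-- f :: X -> Y
-- f = coerce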

Note [Eager reflexivity check]

Suppose we have

newtype X = MkX (Int -> X)

and

[W] X ~R X

Naively, we would start unwrapping X and end up in a loop. Instead, we do this eager reflexivity check. This is necessary only for representational equality because the flattener technology deals with the similar case (recursive type families) for nominal equality.

Note that this check does not catch all cases, but it will catch the cases we’re most worried about, types like X above that are actually inhabited.

Here’s another place where this reflexivity check is key: Consider trying to prove (f a) ~R (f a). The AppTys in there can’t be decomposed, because representational equality isn’t congruent with respect to AppTy. So, when canonicalising the equality above, we get stuck and would normally produce a CIrredCan. However, we really do want to be able to solve (f a) ~R (f a). So, in the representational case only, we do a reflexivity check.

(This would be sound in the nominal case, but unnecessary, and I [Richard E.] am worried that it would slow down the common case.)
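A tiny compilable example of the (f a) ~R (f a) case (refl is a hypothetical name):

import Data.Coerce (coerce)

-- Coercible (f a) (f a): the AppTys cannot be decomposed at
-- representational role, so without the eager reflexivity check this
-- would get stuck as a CIrredCan; with it, the constraint is solved
-- outright.
refl :: f a -> f a
refl = coerce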

Note [Use canEqFailure in canDecomposableTyConApp]

We must use canEqFailure, not canEqHardFailure here, because there is the possibility of success if working with a representational equality. Here is one case:

type family TF a where TF Char = Bool
data family DF a
newtype instance DF Bool = MkDF Int

Suppose we are canonicalising (Int ~R DF (TF a)), where we don’t yet know a. This is not a hard failure, because we might soon learn that a is, in fact, Char, and then the equality succeeds.

Here is another case:

[G] Age ~R Int

where Age’s constructor is not in scope. We don’t want to report an “inaccessible code” error in the context of this Given!

For example, see typecheck/should_compile/T10493, repeated here:

import Data.Ord (Down)  -- no constructor
foo :: Coercible (Down Int) Int => Down Int -> Int
foo = coerce

That should compile, but only because we use canEqFailure and not canEqHardFailure.

Note [Decomposing equality]

If we have a constraint (of any flavour and role) that looks like T tys1 ~ T tys2, what can we conclude about tys1 and tys2? The answer, of course, is “it depends”. This Note spells it all out.

In this Note, “decomposition” refers to taking the constraint
[fl] (T tys1 ~X T tys2)
(for some flavour fl and some role X) and replacing it with
[fls’] (tys1 ~Xs’ tys2)

where that notation indicates a list of new constraints, where the new constraints may have different flavours and different roles.

The key property to consider is injectivity. When decomposing a Given the decomposition is sound if and only if T is injective in all of its type arguments. When decomposing a Wanted, the decomposition is sound (assuming the correct roles in the produced equality constraints), but it may be a guess – that is, an unforced decision by the constraint solver. Decomposing Wanteds over injective TyCons does not entail guessing. But sometimes we want to decompose a Wanted even when the TyCon involved is not injective! (See below.)

So, in broad strokes, we want this rule:

(*) Decompose a constraint (T tys1 ~X T tys2) if and only if T is injective at role X.

Pursuing the details requires exploring three axes:

  • Flavour: Given vs. Derived vs. Wanted
  • Role: Nominal vs. Representational
  • TyCon species: datatype vs. newtype vs. data family vs. type family vs. type variable

(So a type variable isn’t a TyCon, but it’s convenient to put the AppTy case in the same table.)

Right away, we can say that Derived behaves just as Wanted for the purposes of decomposition. The difference between Derived and Wanted is the handling of evidence. Since decomposition in these cases isn’t a matter of soundness but of guessing, we want the same behavior regardless of evidence.

Here is a table (discussion following) detailing where decomposition of
(T s1 … sn) ~r (T t1 .. tn)

is allowed. The first four lines (Data types … type family) refer to TyConApps with various TyCons T; the last line is for AppTy, where there is presumably a type variable at the head, so it’s actually

(s s1 … sn) ~r (t t1 .. tn)

NOMINAL           GIVEN                       WANTED

Datatype          YES                         YES
Newtype           YES                         YES
Data family       YES                         YES
Type family       YES, in injective args{1}   YES, in injective args{1}
Type variable     YES                         YES

REPRESENTATIONAL  GIVEN                       WANTED

Datatype          YES                         YES
Newtype           NO{2}                       MAYBE{2}
Data family       NO{3}                       MAYBE{3}
Type family       NO                          NO
Type variable     NO{4}                       NO{4}

{1}: Type families can be injective in some, but not all, of their arguments, so we want to do partial decomposition. This is quite different than the way other decomposition is done, where the decomposed equalities replace the original one. We thus proceed much like we do with superclasses: emitting new Givens when “decomposing” a partially-injective type family Given and new Deriveds when “decomposing” a partially-injective type family Wanted. (As of the time of writing, 13 June 2015, the implementation of injective type families has not been merged, but it should be soon. Please delete this parenthetical if the implementation is indeed merged.)

{2}: See Note [Decomposing newtypes at representational role]

{3}: Because of the possibility of newtype instances, we must treat data families like newtypes. See also Note [Decomposing newtypes at representational role]. See #10534 and test case typecheck/should_fail/T10534.

{4}: Because type variables can stand in for newtypes, we conservatively do not decompose AppTys over representational equality.
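Returning to {1}, here is a minimal example of an injective type family, assuming -XTypeFamilyDependencies (F is a made-up name):

{-# LANGUAGE TypeFamilyDependencies #-}

-- F is declared injective: the result r determines the argument a.
type family F a = r | r -> a

-- So when "decomposing" F a ~ F b, a Given emits the new Given (a ~ b),
-- and a Wanted emits the new Derived [D] a ~ b.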

In the implementation of can_eq_nc and friends, we don’t directly pattern match using lines like in the tables above, as those tables don’t cover all cases (what about PrimTyCon? tuples?). Instead we just ask about injectivity, boiling the tables above down to rule (*). The exceptions to rule (*) are for injective type families, which are handled separately from other decompositions, and the MAYBE entries above.

Note [Decomposing newtypes at representational role]

This note discusses the ‘newtype’ line in the REPRESENTATIONAL table in Note [Decomposing equality]. (At nominal role, newtypes are fully decomposable.)

Here is a representative example of why representational equality over newtypes is tricky:

newtype Nt a = Mk Bool         -- NB: a is not used in the RHS,
type role Nt representational  -- but the user gives it an R role anyway

If we have [W] Nt alpha ~R Nt beta, we don’t want to decompose to [W] alpha ~R beta, because it’s possible that alpha and beta aren’t representationally equal. Here’s another example.

newtype Nt a = MkNt (Id a)
type family Id a where Id a = a
[W] Nt Int ~R Nt Age

Because of its use of a type family, Nt’s parameter will get inferred to have a nominal role. Thus, decomposing the wanted will yield [W] Int ~N Age, which is unsatisfiable. Unwrapping, though, leads to a solution.
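The first example above, made concrete as a compilable sketch (ok is a hypothetical name):

{-# LANGUAGE RoleAnnotations #-}

import Data.Coerce (coerce)

newtype Nt a = Mk Bool
type role Nt representational

-- Nt Int and Nt Char have the same representation (both are Bool), so
-- unwrapping solves [W] Nt Int ~R Nt Char ...
ok :: Nt Int -> Nt Char
ok = coerce
-- ... whereas decomposing it to [W] Int ~R Char would be hopeless.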

Conclusion:
  • Unwrap newtypes before attempting to decompose them. This is done in can_eq_nc’.

It all comes from the fact that newtypes aren’t necessarily injective w.r.t. representational equality.

Furthermore, as explained in Note [NthCo and newtypes] in TyCoRep, we can’t use NthCo on representational coercions over newtypes. NthCo comes into play only when decomposing givens.

Conclusion:
  • Do not decompose [G] N s ~R N t

Is it sensible to decompose Wanted constraints over newtypes? Yes! It’s the only way we could ever prove (IO Int ~R IO Age), recalling that IO is a newtype.

However we must be careful. Consider

type role Nt representational
[G] Nt a ~R Nt b       (1)
[W] Nt alpha ~R Nt b   (2)
[W] alpha ~ a          (3)

If we focus on (3) first, we’ll substitute in (2), and now it’s identical to the given (1), so we succeed. But if we focus on (2) first, and decompose it, we’ll get (alpha ~R b), which is not soluble. This is exactly like the question of overlapping Givens for class constraints: see Note [Instance and Given overlap] in TcInteract.

Conclusion:
  • Decompose [W] N s ~R N t iff there is no given constraint that could later solve it.

Note [Decomposing TyConApps]

If we see (T s1 t1 ~ T s2 t2), then we can just decompose to
(s1 ~ s2, t1 ~ t2)
and push those back into the work list. But if
s1 = K k1
s2 = K k2

then we will just decompose s1~s2, and it might be better to do so on the spot. An important special case is where s1=s2, and we get just Refl.

So canDecomposableTyCon is a fast-path decomposition that uses unifyWanted etc to short-cut that work.

Note [Canonicalising type applications]

Given (s1 t1) ~ ty2, how should we proceed? The simple thing is to see if ty2 is of form (s2 t2), and decompose. By this time s1 and s2 can’t be saturated type function applications, because those have been dealt with by an earlier equation in can_eq_nc, so it is always sound to decompose.

However, over-eager decomposition gives bad error messages for things like

a b ~ Maybe c
e f ~ p -> q

Suppose (in the first example) we already know a~Array. Then if we decompose the application eagerly, yielding

a ~ Maybe
b ~ c

we get an error “Can’t match Array ~ Maybe”, but we’d prefer to get “Can’t match Array b ~ Maybe c”.

So instead can_eq_wanted_app flattens the LHS and RHS, in the hope of replacing (a b) by (Array b), before using try_decompose_app to decompose it.

Note [Make sure that insolubles are fully rewritten]

When an equality fails, we still want to rewrite the equality all the way down, so that it accurately reflects

  1. the mutable reference substitution in force at start of solving
  2. any ty-binds in force at this point in solving

See Note [Rewrite insolubles] in TcSMonad. And if we don’t do this there is a bad danger that TcSimplify.applyTyVarDefaulting will find a variable that has in fact been substituted.

Note [Do not decompose Given polytype equalities]

Consider [G] (forall a. t1 ~ forall a. t2). Can we decompose this? No – what would the evidence look like? So instead we simply discard this given evidence.

Note [Combining insoluble constraints]

At this point we have an insoluble constraint, like Int~Bool.

  • If it is Wanted, delete it from the cache, so that subsequent Int~Bool constraints give rise to separate error messages
  • But if it is Derived, DO NOT delete from cache. A class constraint may get kicked out of the inert set, and then have its functional dependency Derived constraints generated a second time. In that case we don’t want to get two (or more) error messages by generating two (or more) insoluble fundep constraints from the same class constraint.

Note [No top-level newtypes on RHS of representational equalities]

Suppose we’re in this situation:

work item:  [W] c1 : a ~R b
    inert:  [G] c2 : b ~R Id a
where
newtype Id a = Id a

We want to make sure canEqTyVar sees [W] a ~R a, after b is flattened and the Id newtype is unwrapped. This is assured by requiring only flat types in canEqTyVar and having the newtype-unwrapping check above the tyvar check in can_eq_nc.

Note [Occurs check error]

If we have an occurs check error, are we necessarily hosed? Say our tyvar is tv1 and the type it appears in is xi2. Because xi2 is function-free, if we’re computing w.r.t. nominal equality, then, yes, we’re hosed. Nothing good can come from (a ~ [a]). If we’re computing w.r.t. representational equality, this is a little subtler. Once again, (a ~R [a]) is a bad thing, but (a ~R N a) for a newtype N might be just fine. This means also that (a ~ b a) might be fine, because b might become a newtype.

So, we must check: does tv1 appear in xi2 under any type constructor that is generative w.r.t. representational equality? That’s what isInsolubleOccursCheck does.
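For example, here is a compilable sketch of the benign case (N and ok are made-up names):

import Data.Coerce (coerce)

newtype N a = MkN a

-- The occurs check (a ~R N a) is not insoluble: N is a newtype, and
-- unwrapping it reduces the constraint to a ~R a.
ok :: a -> N a
ok = coerce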

See also #10715, which induced this addition.

Note [canCFunEqCan]

Flattening the arguments to a type family can change the kind of the type family application. As an easy example, consider (Any k) where (k ~ Type) is in the inert set. The original (Any k :: k) becomes (Any Type :: Type). The problem here is that the fsk in the CFunEqCan will have the old kind.

The solution is to come up with a new fsk/fmv of the right kind. For givens, this is easy: just introduce a new fsk and update the flat-cache with the new one. For wanteds, we want to solve the old one in favor of the new one, so we use dischargeFmv. This also kicks out constraints from the inert set; this behavior is correct, as the kind-change may allow more constraints to be solved.

We use isTcReflexiveCo, to ensure that we only use the hetero-kinded case if we really need to. Of course flattenArgsNom should return Refl whenever possible, but #15577 was an infinite loop because even though the coercion was homo-kinded, kind_co was not Refl, so we made a new (identical) CFunEqCan, and then the entire process repeated.

Note [Canonical orientation for tyvar/tyvar equality constraints]

When we have a ~ b where both ‘a’ and ‘b’ are TcTyVars, which way round should it be oriented in the CTyEqCan? The rules, implemented by canEqTyVarTyVar, are these:

  • If either is a flatten-meta-variable, it goes on the left.

  • Put a meta-tyvar on the left if possible

    alpha[3] ~ r

  • If both are meta-tyvars, put the more touchable one (deepest level number) on the left, so there is the best chance of unifying it

    alpha[3] ~ beta[2]

  • If both are meta-tyvars and both at the same level, put a TyVarTv on the right if possible

    alpha[2] ~ beta[2](sig-tv)

    That way, when we unify alpha := beta, we don’t lose the TyVarTv flag.

  • Put a meta-tv with a System Name on the left if possible so it gets eliminated (improves error messages)

  • If one is a flatten-skolem, put it on the left so that it is substituted out; see Note [Eliminate flat-skols] in TcUnify

    fsk ~ a

Note [Equalities with incompatible kinds]

What do we do when we have an equality

(tv :: k1) ~ (rhs :: k2)

where k1 and k2 differ? This Note explores this treacherous area.

First off, the question above is slightly the wrong question. Flattening a tyvar will flatten its kind (Note [Flattening] in TcFlatten); flattening the kind might introduce a cast. So we might have a casted tyvar on the left. We thus revise our test case to

(tv |> co :: k1) ~ (rhs :: k2)

We must proceed differently here depending on whether we have a Wanted or a Given. Consider this:

[W] w :: (alpha :: k) ~ (Int :: Type)

where k is a skolem. One possible way forward is this:

[W] co :: k ~ Type
[W] w :: (alpha :: k) ~ (Int |> sym co :: k)

The next step will be to unify

alpha := Int |> sym co

Now, consider what error we’ll report if we can’t solve the “co” wanted. Its CtOrigin is the w wanted… which now reads (after zonking) Int ~ Int. The user thus sees that GHC can’t solve Int ~ Int, which is embarrassing. See #11198 for more tales of destruction.

The reason for this odd behavior is much the same as Note [Wanteds do not rewrite Wanteds] in TcRnTypes: note that the new co is a Wanted.

The solution is then not to use co to “rewrite” – that is, cast – w, but instead to keep w heterogeneous and irreducible. Given that we’re not using co, there is no reason to collect evidence for it, so co is born a Derived, with a CtOrigin of KindEqOrigin.

When the Derived is solved (by unification), the original wanted (w) will get kicked out.

Note that, if we had [G] co1 :: k ~ Type available, then none of this code would trigger, because flattening would have rewritten k to Type. That is, w would look like [W] (alpha |> co1 :: Type) ~ (Int :: Type), and the tyvar case will trigger, correctly rewriting alpha to (Int |> sym co1).

Successive canonicalizations of the same Wanted may produce duplicate Deriveds. Similar duplications can happen with fundeps, and there seems to be no easy way to avoid them. I expect this case to be rare.

For Givens, this problem doesn’t bite, so a heterogeneous Given gives rise to a Given kind equality. No Deriveds here. We thus homogenise the Given (see the “homo_co” in the Given case in canEqTyVar) and carry on with a homogeneous equality constraint.

Separately, I (Richard E) spent some time pondering what to do in the case that we have [W] (tv |> co1 :: k1) ~ (tv |> co2 :: k2) where k1 and k2 differ. Note that the tv is the same. (This case is handled as the first case in canEqTyVarHomo.) At one point, I thought we could solve this limited form of heterogeneous Wanted, but I then reconsidered and now treat this case just like any other heterogeneous Wanted.

Note [Type synonyms and canonicalization]

We treat type synonym applications as xi types, that is, they do not count as type function applications. However, we do need to be a bit careful with type synonyms: like type functions they may not be generative or injective. However, unlike type functions, they are parametric, so there is no problem in expanding them whenever we see them, since we do not need to know anything about their arguments in order to expand them; this is what justifies not having to treat them as specially as type function applications. The thing that causes some subtleties is that we prefer to leave type synonym applications unexpanded whenever possible, in order to generate better error messages.

If we encounter an equality constraint with type synonym applications on both sides, or a type synonym application on one side and some sort of type application on the other, we simply must expand out the type synonyms in order to continue decomposing the equality constraint into primitive equality constraints. For example, suppose we have

type F a = [Int]

and we encounter the equality

F a ~ [b]

In order to continue we must expand F a into [Int], giving us the equality

[Int] ~ [b]

which we can then decompose into the more primitive equality constraint

Int ~ b.

However, if we encounter an equality constraint with a type synonym application on one side and a variable on the other side, we should NOT (necessarily) expand the type synonym, since for the purpose of good error messages we want to leave type synonyms unexpanded as much as possible. Hence the ps_ty1, ps_ty2 argument passed to canEqTyVar.

Note [Rewriting with Refl]

If the coercion is just reflexivity then you may re-use the same variable. But be careful! Although the coercion is Refl, new_pred may reflect the result of unification alpha := ty, so new_pred might not _look_ the same as old_pred, and it’s vital to proceed from now on using new_pred.

The flattener preserves type synonyms, so they should appear in new_pred as well as in old_pred; that is important for good error messages.

Note [unifyWanted and unifyDerived]

When decomposing equalities we often create new wanted constraints for (s ~ t). But what if s=t? Then it’d be faster to return Refl right away. Similar remarks apply for Derived.

Rather than making an equality test (which traverses the structure of the type, perhaps fruitlessly), unifyWanted traverses the common structure, and bales out when it finds a difference by creating a new Wanted constraint. But where it succeeds in finding common structure, it just builds a coercion to reflect it.
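Here is a toy model of that idea (not GHC’s actual code; Ty and unifyToy are made up): traverse the common structure, and only on finding a difference bale out by emitting a residual equality.

-- Toy type language: applications and atoms.
data Ty = TyApp Ty Ty | TyCon String | TyVar String
  deriving Eq

-- Return the residual equalities (the "new Wanteds"); an empty list
-- means the types agree and a plain Refl coercion suffices.
unifyToy :: Ty -> Ty -> [(Ty, Ty)]
unifyToy (TyApp f1 a1) (TyApp f2 a2)
  = unifyToy f1 f2 ++ unifyToy a1 a2     -- common structure: recurse
unifyToy t1 t2
  | t1 == t2  = []                       -- identical: nothing to emit
  | otherwise = [(t1, t2)]               -- difference: one new equality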