Polymorphism
These notes mostly discuss polymorphism in statically typed, as opposed to dynamically typed, languages.
Ad hoc polymorphism
The same operation being applied to otherwise different types.
Example: an add operation working on both integers and strings.
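A minimal sketch of this in Haskell, using a hypothetical `Addable` typeclass (the class and its name are an assumption, not a standard library class): one `add` operation, with a different implementation chosen per type.

```haskell
{-# LANGUAGE FlexibleInstances #-}

-- Hypothetical class: one operation name, per-type behaviour
-- (ad hoc polymorphism).
class Addable a where
  add :: a -> a -> a

instance Addable Int where
  add = (+)    -- numeric addition

instance Addable String where
  add = (++)   -- string concatenation

main :: IO ()
main = do
  print (add (2 :: Int) 3)     -- 5
  putStrLn (add "foo" "bar")   -- foobar
```

The compiler resolves which `add` to use from the argument types, which is what distinguishes ad hoc polymorphism from the parametric kind below.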
Parametric Polymorphism
Parameterised data types such as Lists
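A sketch of a parametrically polymorphic function over such a parameterised type (`len` is a hypothetical name for an own-rolled list length): a single definition that works uniformly for lists of any element type, never inspecting the elements themselves.

```haskell
-- One definition, usable at every element type 'a'.
len :: [a] -> Int
len []       = 0
len (_ : xs) = 1 + len xs

main :: IO ()
main = do
  print (len [1, 2, 3 :: Int])  -- 3
  print (len "abc")             -- 3
```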
Subtyping
Subtyping is an issue in type theory because an element of a type belongs only to that type; therefore polymorphism based on subtyping may also be an issue.
A way to get some of this capability without the theoretical issues around subtyping might be to use the 'interface'/'typeclass' concept.
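A sketch of that idea in Haskell, with made-up `Shape`, `Circle` and `Square` types: instead of declaring `Circle` a subtype of `Shape`, make `Shape` a typeclass (an interface) and write functions against the constraint. No subtype relation is involved.

```haskell
-- Interface-style alternative to subtyping.
class Shape s where
  area :: s -> Double

data Circle = Circle Double  -- radius
data Square = Square Double  -- side length

instance Shape Circle where
  area (Circle r) = pi * r * r

instance Shape Square where
  area (Square s) = s * s

-- Works for any instance of Shape.
describe :: Shape s => s -> String
describe x = "area = " ++ show (area x)

main :: IO ()
main = do
  putStrLn (describe (Square 3))
  putStrLn (describe (Circle 1))
```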
Row polymorphism
Structural types
Polytypism
- polytypism
- data type genericity: the use of type variables.
Parametricity
Given a function signature, what can we know about that function's implementation?
In many cases there will be so many ways of translating the input types to the output type that we can't say much at all. Typically such implementations deconstruct the parameters, do some processing on them, and construct the output type.
If type variables have been used in the parameters then we know less about them. Strangely, this often allows us to infer more about the implementation, because there are fewer possibilities (a sort of contravariance of information?).
Given a function signature like a -> b -> c, we usually infer the quantifiers: for all a and b there exists a c. However, if a given type variable occurs more than once then there is a sort of dependent sum type.
A lower case identifier in a signature is a type variable; this is a form of polymorphism sometimes called genericity.
Sometimes, given a function signature, there is really no choice about the implementation of the function. The following examples use lower case identifiers, so we know nothing about the types and can't call any functions on them; the implementations are forced:
Function Signature | Implementation
---|---
id: a -> a | id x = x
impossible: a -> b | none: since we know nothing about a and b, we can't know of any function between them
first: a -> b -> a | first x y = x
apply: a -> (a -> b) -> b | apply x f = f x
forall a. a -> a, where a is a higher order type | f = \x -> x
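The forced implementations from the table, written out and compiled (the names `myId`, `first` and `apply` are as in the table, with `id` renamed to avoid shadowing the Prelude's). Knowing nothing about the type variables, these are, up to non-termination, the only total definitions.

```haskell
myId :: a -> a
myId x = x

-- Must return the first argument: nothing can be done with 'b'.
first :: a -> b -> a
first x _ = x

-- The only way to produce a 'b' is to apply the function.
apply :: a -> (a -> b) -> b
apply x f = f x

main :: IO ()
main = do
  print (myId (42 :: Int))        -- 42
  print (first 'x' True)          -- 'x'
  print (apply (10 :: Int) (+ 1)) -- 11
```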
Some function signatures will have more freedom, for example:
List a -> List a
We can rearrange, drop, or duplicate elements of the list, but we can't change the elements themselves. There are two structures here that are somehow orthogonal.
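One way to see the orthogonality: the free theorem for any g :: [a] -> [a] says that g commutes with map, i.e. map f . g == g . map f. A sketch, with `g` standing for one arbitrary choice of rearrangement:

```haskell
-- Any function at this type can only shuffle list structure,
-- never touch the elements.
g :: [a] -> [a]
g = reverse . take 2

main :: IO ()
main = do
  print (map (+ 1) (g [1, 2, 3 :: Int]))  -- [3,2]
  print (g (map (+ 1) [1, 2, 3 :: Int]))  -- [3,2]
```

The two results agree for every f and every list, whichever g we pick, which is exactly the sense in which the element-level and list-level structures are independent.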
What about parameters of a signature that have a certain 'interface'/'typeclass'? Can we still constrain the signature enough to say something about the implementation?
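A sketch of how a typeclass constraint widens the space of implementations (the name `dedupe` is made up; it reimplements the Prelude's `nub`): with `Eq a` in the signature, the implementation may call `(==)` on values of type a, so Eq a => [a] -> [a] admits duplicate removal, which an unconstrained [a] -> [a] cannot express.

```haskell
-- Possible only because the Eq constraint lets us compare elements.
dedupe :: Eq a => [a] -> [a]
dedupe []       = []
dedupe (x : xs) = x : dedupe (filter (/= x) xs)

main :: IO ()
main = print (dedupe [1, 1, 2, 3, 2 :: Int])  -- [1,2,3]
```

So constraints trade away some of parametricity's predictive power: each added class method is another operation the implementation might be calling.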
What if we require our implementation to be unique/canonical, that is, even if there are options for the implementation we are not allowed to make arbitrary choices. The choice must be special in some way.
When sets of sets are infinite, is it always possible to have an automatic choice? (See the axiom of choice.)
- Parametricity Wiki page
- Connection of parametricity to Leibniz equality on this page.
- Blog by Bartosz Milewski