C# is no exception to the ever-increasing popularity of inference-enabled languages: since C# 3.0, developers may declare implicitly typed local variables with the var keyword (in C++11, a similar feature was introduced by re-targeting the then-existing auto keyword). This functionality, however, comes with a proviso: a var declaration must be accompanied by an initializer.

Although one would not expect to see ML and Haskell's "whole-program type inference" in C#, could this initializer requirement be relaxed? And why does it exist in the first place?
What do you answer when someone asks: why is it necessary, in C#, that a local var-declared variable is accompanied by an initializer? The accurate reply is: because the language specification requires it; it is not because the type would not be inferable otherwise. Consider function f.
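A minimal sketch of f, assuming a string parameter named name and a void return (which is what the constraint derived later suggests):

    void f(string name)
    {
        var num;    // not valid in today's C#: no initializer
        num = 42;
    }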
The two statements above (a declaration statement and an expression statement) are not fundamentally different from the single statement var num = 42;. In fact, if permitted, the Intermediate Language (IL) code produced for the former would probably be identical to that produced for the latter. Also, one may easily observe that int is a valid replacement for var in f.

To set the stage for our discussion, let us quote — with minor adjustments for pedagogical reasons — the relevant fragment of the language specification to which var is subject:

"When the local variable type is specified as var… the declaration is an implicitly typed local variable declaration, whose type is inferred from the type of the associated initializer expression."

Another characteristic of C# is that a variable must be definitely assigned before we attempt to obtain its value. Let us also quote the fragment of the language specification that states such mandatory assignment.
“For an initially unassigned variable to be considered definitely assigned… , an assignment to the variable must occur in every possible execution path…”
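To make the rule concrete, a small illustration (the variable and values are arbitrary):

    int n;
    // Console.WriteLine(n);   // error CS0165: use of unassigned local variable 'n'
    n = 42;
    Console.WriteLine(n);      // fine: n is definitely assigned on every path reaching this line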
It is not my intent to enter the pedantic realm of language specifications. Yet, based on our (informal) observation that var num; num = 42; is equivalent to var num = 42;, and on the fact that a variable must be assigned before being used, could we slightly stretch the functionality provided by var?
In our small and trivial function f, it is clear that int is a valid type for the program variable num. But, in a general setting, reasoning about the typing relations among a set of expressions demands a systematic approach. The de facto type inference discipline adopted in typical functional languages is that of Hindley-Milner.

Here, I will describe (at a superficial level) the core ideas behind a variant of Hindley-Milner, alongside a simplified version of a popular constraint-based technique used to implement such a type system. The setup I present is more elaborate than what would be strictly essential to our discussion about var — the intent is to make it comprehensible in case one would like to further extend it.

A primary component of a constraint-based inference technique is a constraint language. A constraint language is composed of its own syntactic elements and of the types of the (programming) language being modelled; in our case, C# is the modelled language. We refer to a C# type (e.g., int or System.DateTime) by 𝝉, and to a type variable, which is a type too, by 𝜶.

The grammar of our constraint language is as follows, where C is a constraint:

C ::= def f : 𝜶 in C | ∃𝜶. C | C ^ C | 𝝉 = 𝝉
The first constraint form, def, introduces a function identified by f, whose type is 𝜶. (A function type is denoted with the classical arrow notation, e.g., 𝝉 → 𝝉.) The second form, ∃, is an existential constraint which indicates that 𝜶 "holds" some type that may be employed in a constraint. The ^ operator, appearing in the third constraint form, is conjunction — think of it as the && operator in C#. The last form is a constraint that expresses type equality; it is on this one that we will focus. Moreover, our constraint language is equipped with two builtin operators: decl_type(𝒗), which designates the declared type of a 𝒗ariable, and type_of(𝓵), which designates the type of a 𝓵iteral.
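To make the constraint language more tangible, here is a minimal sketch of how it could be modelled in C#; the record names are illustrative and not part of the formal presentation:

    // 𝝉 and 𝜶 are both "type terms": concrete types or type variables.
    abstract record TypeTerm;
    record ConcreteType(string Name) : TypeTerm;       // 𝝉, e.g. "int"
    record TypeVar(string Name) : TypeTerm;            // 𝜶

    // The builtin operators are also type terms, resolved during solving.
    record DeclType(string Variable) : TypeTerm;       // decl_type(𝒗)
    record TypeOfLiteral(string Literal) : TypeTerm;   // type_of(𝓵)

    // One case per constraint form of the grammar above.
    abstract record Constraint;
    record Def(string Function, TypeTerm Type, Constraint Body) : Constraint;  // def f : 𝜶 in C
    record Exists(TypeVar Var, Constraint Body) : Constraint;                  // ∃𝜶. C
    record And(Constraint Left, Constraint Right) : Constraint;                // C ^ C
    record Equal(TypeTerm Left, TypeTerm Right) : Constraint;                  // 𝝉 = 𝝉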
A constraint language must also have its semantics defined. But, except for the equality of types, I will not do so in this article — that would entail a longer discussion… and one that is dispensable for this general presentation. Having said that, the semantic definition we have for 𝝉 = 𝝉 is straightforward: both sides must denote the same type.
Supplied with a constraint language, we may generate the constraint associated with a program. This task is usually accomplished in a syntax-directed manner, i.e., we traverse the program's abstract syntax tree (AST) and, while doing so, generate a constraint that is sound with respect to the modelled language. Below is how a constraint for f could look:

def f : string → void in ∃𝜶1. decl_type(num) = 𝜶1 ^ 𝜶1 = type_of(42)
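Using the hypothetical record model sketched above, the same constraint could be assembled as follows (the function type is flattened into a single name purely for brevity):

    var a1 = new TypeVar("α1");
    Constraint forF =
        new Def("f", new ConcreteType("string → void"),
            new Exists(a1,
                new And(
                    new Equal(new DeclType("num"), a1),
                    new Equal(a1, new TypeOfLiteral("42")))));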
A constraint implies the typing of a program. But for it to be really useful, type variables must be instantiated. This is achieved by solving the constraint. Let us do that for the one generated for f.
f is not called recursively, none of the expressions in its body mention the parameter name, its void return is uninteresting, and it contains no nested functions inside it. These allow us to reduce the previous constraint as follows.

∃𝜶1. decl_type(num) = 𝜶1 ^ 𝜶1 = type_of(42)
decl_type(num) = 𝜶1 ^ 𝜶1 = type_of(42)

type_of(42) evaluates to int; for decl_type(num), we introduce a type variable for each program variable, and store the corresponding mappings in a set — this set is referred to as 𝜓. The constraint thus becomes:

𝜶2 = 𝜶1 ^ 𝜶1 = int where 𝜓 = { (num, 𝜶2) }

At this point, our constraint is in a normal form, eligible for a final solving stage by an off-the-shelf framework.
What remains from the partly-solved constraint of function f is two equalities. A solution to those is a substitution, from type variables to types, that will ideally render the constraint in question satisfied. This is exactly what unification provides us with. Specifically, as the result of a successful unification, we have a “special” substitution that is referred to as the most general unifier (mgu), which, in turn, suggests a most general typing.

Provided with an implementation of unification, this is how to proceed. Unify the first equality, 𝜶2 = 𝜶1, which yields the substitution 𝝈 = [𝜶2 ↦ 𝜶1]. Apply 𝝈 on the remaining constraint, 𝜶1 = int, and on 𝜓. The application of a substitution is denoted with juxtaposition, therefore: 𝝈C = [𝜶2 ↦ 𝜶1]𝜶1 = int and 𝝈𝜓 = [𝜶2 ↦ 𝜶1]{ (num, 𝜶2) }. The result is the constraint 𝜶1 = int and 𝜓 = { (num, 𝜶1) }. Then take the next equality, 𝜶1 = int, and repeat the process from the beginning.

This iteration goes on until the entire constraint is processed or an error happens, e.g., due to an attempt to unify incompatible types, like an int against a string. Our example terminates without any error, after the second iteration, when [𝜶1 ↦ int] is computed and applied on 𝜓, leaving it as:

𝜓 = { (num, int) }
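A compact sketch of this solving loop in C#, reusing the hypothetical term model from before; it only handles equalities between type variables and concrete types, which is all our example needs:

    using System;
    using System.Collections.Generic;

    static class Solver
    {
        // Apply the single-binding substitution [from ↦ to] to a type term.
        static TypeTerm Apply(TypeTerm t, TypeVar from, TypeTerm to) =>
            t is TypeVar v && v.Name == from.Name ? to : t;

        public static void Solve(
            List<(TypeTerm Left, TypeTerm Right)> equalities,
            Dictionary<string, TypeTerm> psi)          // 𝜓: program variable ↦ type (variable)
        {
            while (equalities.Count > 0)
            {
                var (l, r) = equalities[0];
                equalities.RemoveAt(0);

                // Unify one equality: bind a type variable, or check two concrete types match.
                (TypeVar From, TypeTerm To)? binding;
                if (l is TypeVar lv) binding = (lv, r);
                else if (r is TypeVar rv) binding = (rv, l);
                else if (l is ConcreteType a && r is ConcreteType b && a.Name == b.Name) binding = null;
                else throw new InvalidOperationException($"cannot unify {l} with {r}");
                if (binding is null) continue;

                // Apply the substitution on the remaining equalities and on 𝜓.
                var (from, to) = binding.Value;
                for (int i = 0; i < equalities.Count; i++)
                    equalities[i] = (Apply(equalities[i].Left, from, to), Apply(equalities[i].Right, from, to));
                foreach (var key in new List<string>(psi.Keys))
                    psi[key] = Apply(psi[key], from, to);
            }
        }
    }

    static class Demo
    {
        public static void Main()
        {
            // For f: 𝜶2 = 𝜶1 ^ 𝜶1 = int, with 𝜓 = { (num, 𝜶2) }.
            var a1 = new TypeVar("α1");
            var a2 = new TypeVar("α2");
            var psi = new Dictionary<string, TypeTerm> { ["num"] = a2 };
            var eqs = new List<(TypeTerm Left, TypeTerm Right)> { (a2, a1), (a1, new ConcreteType("int")) };
            Solver.Solve(eqs, psi);
            Console.WriteLine(psi["num"]);   // ConcreteType { Name = int }
        }
    }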
This final mapping tells us that int is a solution for the type of program variable num. If the constraint that we generated for f is indeed sound, then replacing var with int in the original source code must yield a well-typed C# program, according to the language's typing rules.

In a declarative presentation of a type system, typing judgements are defined by means of inference rules, such as one for a basic (symmetric) assignment. Such a rule says that, under the judgement of the typing environment 𝚪, whenever the type of an assignment is 𝝉, then the types of its left-hand-side expression and of its right-hand-side expression are 𝝉 as well. However, in a constraint-based setting, the constraint — or 𝜓 in our formulation — must be accounted as part of the environment too, enriching the rule accordingly.
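A minimal sketch of both rules (the plain one and its constraint-enriched version), assuming the usual judgement form 𝚪 ⊢ e : 𝝉, could be written as:

    % Plain assignment rule (sketch): the assignment and both of its sides share the type \tau.
    \frac{\Gamma \vdash e_1 : \tau \qquad \Gamma \vdash e_2 : \tau}
         {\Gamma \vdash e_1 = e_2 : \tau}

    % Constraint-enriched version (sketch): the mapping \psi joins the environment.
    \frac{\Gamma; \psi \vdash e_1 : \tau \qquad \Gamma; \psi \vdash e_2 : \tau}
         {\Gamma; \psi \vdash e_1 = e_2 : \tau}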
We have seen that, from a technical standpoint, it is possible to stretch the functionality of C#'s var a bit further. But at what "cost"? There are trade-offs to be aware of. To have a sense of those, let us pretend that we are extending C# into a fictitious language named C##. (We shall not "resolve" the double hash, since a D already exists — pun intended.)

Of course, I will not go into every possible language-design aspect here; I would not know all of them myself, as a matter of fact. So the upcoming sections are just a glimpse of the highlights.

The Dynamic Typing Confusion
Suppose that, in C##, we decided to accept the syntax of our original f function, as-is. We know that the compiler would have no problem inferring the type of num as int, but would developers find that an intelligible approach? Consider this function g, a slightly incremented f.
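As a sketch (the exact body of g is an assumption; what matters is that two assignments of different types follow the bare declaration):

    void g(string name)
    {
        var num;      // accepted as-is in our hypothetical C##
        num = 42;     // num is inferred as int…
        num = name;   // …so this assignment is a compile-time error: string is not int
    }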
The error above may be obvious to some but, apparently, not to all. This affirmation is in view of the fact that the introduction of the dynamic keyword in C# 4.0 prompted a fair amount of confusion with var. The developer's intention was perhaps to write g like this:
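Again as a sketch, the dynamically typed variant the developer may have had in mind:

    void g(string name)
    {
        dynamic num;   // dynamically typed: assignment and member checks happen at run time
        num = 42;
        num = name;    // fine at compile time
    }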
In spite of the contrast between var and dynamic, as a language designer one wants to avoid potential confusion. Therefore, it might indeed be sensible to forbid non-initializing var declarations in C##. However… what if we demand, for a non-initializing implicitly typed variable, an extra level of consciousness from the developer? For example, that the keywords var and static are combined, as in this C## version of g:
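A sketch of that C## version; the body mirrors the earlier g, only the declaration changes:

    void g(string name)
    {
        var static num;   // C##: implicitly typed, statically checked, no initializer required
        num = 42;
        num = name;       // still a compile-time error: num is inferred as int
    }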
A collateral coolness of the above approach is that it reuses an existing keyword. The change is subtle, though; whether it would effectively eliminate doubts for the general audience, I cannot tell. Toward "consistency", we could also enable var dynamic as a synonym of the plain dynamic.

Type Conversions, Subtyping, and Generics
The foremost advantage of requiring that a var-declared variable is accompanied by an initializer is that there is no ambiguity in deciding what the variable's type is. By dropping this obligation, yet retaining the requirement that variables must be assigned prior to their use, our inference needs to account for further possibilities when typing a valid program.

This situation is exemplified by function conv.
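A sketch of conv, assuming the body implied by the discussion below:

    void conv()
    {
        var static v;   // C##: implicitly typed, no initializer
        v = 42;         // an int literal…
        v = 1.6180F;    // …followed by a float literal
    }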
The type of the literal 42 is int, but we may not type v as such; otherwise, the subsequent assignment would trigger a compilation error, given that the type of the literal 1.6180F is float, and no implicit conversion from float to int is available. One correct alternative is to type v as float — keep in mind that C# allows developers to define their own implicit conversions. In the presence of subtyping, the situation is similar.
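The following sketch, assuming a small class hierarchy (A as the base, with C and B derived from it, and b() defined only on B), illustrates the point; the exact classes are an assumption:

    class A { }
    class B : A { public void b() { } }
    class C : A { }

    void sub()
    {
        var static w;   // C##: implicitly typed, no initializer
        w = new C();    // typing w as C, based on this first assignment…
        w = new B();    // …would make this second assignment a compile-time error
    }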
Now, a compilation error would be triggered if we inferred, based on the first assignment, the type of w as C. To avoid this error, one may naively think that always inferring a more general type is a solution; leading to A as the type of w and, likewise, double as the type of v in function conv. But that strategy does not work in general, because there might be further uses of the implicitly typed variable that impose a specific type: a call w.b() in sub, or a call whoops(v) in conv, where whoops is void whoops(float p) {}.

There are other language constructs, not covered by these basic examples, that would also demand special care in the non-initializing behavior of var static-declared variables in C##. A notorious challenge is generics; put into that bucket as well other features whose typing depends on the inferred types around them.

It is certainly possible to deal with many (maybe all) of the difficulties just presented in a nice way. However, advances in our type inference technique and constraint language would be necessary — alongside eventual syntax hints. A few adjustments to consider include a revised wording of the specification fragment quoted earlier:

“When the local variable type is specified as var… the declaration is an implicitly typed local variable declaration, whose type is inferred from the type of the associated initializer expression; if said variable is specified as var static, its type is inferred from assignment expressions involving it.”
That being said — and as a disclaimer — I do not feel in a position to make a thorough evaluation of this "proposal". Apart from the technicalities, a primary aspect to consider is the actual, practical benefit (if any) implied by such a feature. To my understanding (I am not affiliated with Microsoft), var was originally conceived for anonymous types. Yet, I believe that extending its functionality to non-anonymous types was likely a choice of convenience. Then, under the subjectiveness of what is convenient…, one may just as well wish to write the code as in function z below.
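Purely as a sketch (the actual shape of z is an assumption), this is the kind of code one might find convenient to write in C##:

    void z(bool verbose)
    {
        var static message;              // no initializer; the assignments below fix the type
        if (verbose)
            message = "the full story";
        else
            message = "tl;dr";
        Console.WriteLine(message);      // message is inferred as string
    }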
Thank you for reading! I hope you have enjoyed the text — if you did, I would be curious to hear your feedback; and I would be even more grateful if you shared this article further.