Ignoring Type Differences  email@example.com (Ken and Kristi LaPoint) (2001-07-17)
Re: Ignoring Type Differences  firstname.lastname@example.org (Joachim Durchholz) (2001-07-18)
Re: Ignoring Type Differences  email@example.com (2001-07-18)
Date: 18 Jul 2001 20:08:52 -0400
Organization: AOL Bertelsmann Online GmbH & Co. KG http://www.germany.aol.com
Posted-Date: 18 Jul 2001 20:08:52 EDT
"Ken and Kristi LaPoint" <firstname.lastname@example.org> writes:
>Can anyone tell me when you might want the compiler to ignore type
>differences in an expression?
IMO a compiler cannot "ignore" type differences; it can only silently
"convert" the items into a common data type before further
processing. Such conversions require rules for how to make different
types compatible.
The usual implicit conversions handle combinations of signed and
unsigned types, types of different precision, types of different
"nature" (integral, floating point...).
Depending on the language, some types are designed to be
incompatible with other types, such as boolean, char, or enum. I
personally appreciate such strict typing, and want the compiler to
convert only types of computational usage, or to up/down-cast class
types as appropriate. This principle reduces to type casts within
class hierarchies, once numeric types are also treated as classes.