Question
I understand the differences in capacity and the values they can represent, but it seems as though people always use Int32 regardless of whether it is appropriate. No one ever seems to use the unsigned version (uint), even though a lot of the time it fits better, since it describes a value that cannot be negative (for example, the ID of a database record). Also, no one ever seems to use short/Int16, regardless of the required capacity of the value.
Objectively, are there cases where it's better to use uint or short/Int16 and if so, which are they?
Explanation / Answer
I suspect that perspective is colored by your own experience: you may simply not have worked around people who use the integral types properly. That may well be a common situation, but in my experience plenty of people do use them correctly.
The benefits are memory space and CPU time, and possibly I/O space as well, depending on whether the values are ever sent over the wire or written to disk. Smaller types keep that footprint down where performance matters, and unsigned types extend the positive range at the same size while letting the compiler reject operations that are impossible for the data, such as assigning a negative constant.
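Here is a minimal C# sketch of those trade-offs (the class and variable names are made up for illustration): the built-in sizes show the per-value memory cost, the MaxValue constants show the range extension at the same size, and the commented-out line is the kind of impossible assignment the compiler rejects.

    using System;

    class IntegralTradeoffs
    {
        static void Main()
        {
            // Memory cost per value, in bytes.
            Console.WriteLine(sizeof(short)); // 2
            Console.WriteLine(sizeof(int));   // 4
            Console.WriteLine(sizeof(long));  // 8

            // Same 4-byte size, but the unsigned version doubles the positive range.
            Console.WriteLine(int.MaxValue);  // 2147483647
            Console.WriteLine(uint.MaxValue); // 4294967295

            // The compiler rejects values that are impossible for the type:
            uint numberOfPeople = 0;
            // numberOfPeople = -1; // error CS0031: constant value '-1' cannot be converted to a 'uint'
            Console.WriteLine(numberOfPeople);
        }
    }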
The correct use is what you would expect: any time you know for certain the constraint will hold permanently (do not constrain without that certainty, or you will regret it later). The cases below, gathered into a single sketch after the list, walk through the choices:
If you're trying to represent something that could never reasonably be negative (public uint NumberOfPeople), use an unsigned type.
If you're trying to represent something that could never reasonably be greater than 255 (public byte DamagedToothCount), use a byte.
If you're trying to represent something that could reasonably be greater than 255 but will never get anywhere near a short's limit of 32,767, use a short (public short JimmyHoffasBankBalance).
If you're trying to represent something that could be many hundreds of thousands, millions even, but will stay well under an int's limit of roughly 2.1 billion, use an int (public int HoursSinceUnixEpoch).
If the number may run into multiple billions, or you simply aren't certain how many billions, long is your best bet; it covers values up to roughly 9.2 quintillion. If long isn't big enough, you have an interesting problem and need to start looking at arbitrary-precision numerics (public long MyReallyGreatAppsUserCountThisIsNotWishfulThinkingAtAll).
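Collecting those examples into one place, a sketch might look like this (the class name and the range comments are mine; the member names come straight from the list above):

    public class SizedValues
    {
        public uint NumberOfPeople;           // never negative; 0 to 4,294,967,295
        public byte DamagedToothCount;        // 0 to 255
        public short JimmyHoffasBankBalance;  // -32,768 to 32,767
        public int HoursSinceUnixEpoch;       // comfortably under int's ~2.1 billion limit
        public long MyReallyGreatAppsUserCountThisIsNotWishfulThinkingAtAll; // up to ~9.2 quintillion
    }

None of these declarations change any logic; they simply encode in the type what the data can never be.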
This reasoning applies whenever you're choosing between signed, unsigned, and differently sized types: think about the logical truths of the data you're representing in reality.