

Question

I just noticed that every modern OO programming language that I am at least somewhat familiar with (which is basically just Java, C# and D) allows covariant arrays. That is, a string array is an object array:

Object[] arr = new String[2];   // Java, C# and D allow this

Covariant arrays are a hole in the static type system. They make type errors possible that cannot be detected at compile time, so every write to an array must be checked at runtime:

arr[0] = "hello";        // ok
arr[1] = new Object();   // ArrayStoreException

This seems like a terrible performance hit if I do lots of array stores.
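The failing store above can be observed directly. Here is a minimal, self-contained version of the snippet (the class name CovariantArrayDemo is arbitrary); the second store compiles fine but is rejected by the runtime check:

```java
// Demonstrates the runtime check behind covariant arrays:
// the static type allows both stores, but the JVM rejects the second.
public class CovariantArrayDemo {
    public static void main(String[] args) {
        Object[] arr = new String[2]; // legal: arrays are covariant

        arr[0] = "hello";             // ok: a String fits in a String[]
        try {
            arr[1] = new Object();    // compiles, but fails at runtime
        } catch (ArrayStoreException e) {
            System.out.println("caught ArrayStoreException");
        }
    }
}
```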

C++ does not have covariant arrays, so there is no need to do such a runtime check, which means there is no performance penalty.

Is there any analysis done to reduce the number of runtime checks necessary? For example, if I say:

arr[1] = arr[0];

one could argue that the store cannot possibly fail. I'm sure there are lots of other possible optimizations I haven't thought of.
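To illustrate the class of stores in question: when the value being stored was itself read from the same array, it must already have passed the element-type check, so the store can never throw. A sketch in Java (whether a particular JIT actually elides the check here is implementation-dependent):

```java
// Every store in swap() writes a value that originated in arr itself,
// so an ArrayStoreException is impossible regardless of arr's runtime type.
// A JIT that proves this could elide the element-type check.
public class SameArrayStore {
    static void swap(Object[] arr, int i, int j) {
        Object tmp = arr[j];  // value read from arr itself
        arr[j] = arr[i];      // stored value came from arr: provably safe
        arr[i] = tmp;         // likewise
    }

    public static void main(String[] args) {
        Object[] arr = new String[] { "a", "b" }; // runtime type: String[]
        swap(arr, 0, 1);
        System.out.println("" + arr[0] + arr[1]); // prints "ba"
    }
}
```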

Do modern compilers actually do these kinds of optimizations, or do I have to live with the fact that, for example, a Quicksort always does O(n log n) unnecessary runtime checks?

Can modern OO languages avoid the overhead created by supporting covariant arrays?

Explanation / Answer

D doesn't have covariant arrays. It allowed them prior to the most recent release (dmd 2.057), but that bug has been fixed.

An array in D is effectively just a struct with a pointer and a length:

struct A(T)
{
    T* ptr;
    size_t length;
}

Bounds checking is done normally when indexing an array, but it's removed when you compile with -release. So, in release mode, there's no real performance difference between arrays in C/C++ and those in D.