To summarize, the author has two problems:<p>1. Bit shift is not part of the IntegerType protocol, when it should be (although the author could avoid the issue by accumulating the bytes in a UIntMax instead of the generic type).<p>2. Construction from (and conversion to) a UIntMax bit pattern is not part of the IntegerType protocol, when it should be (done correctly, this addresses the author's sign <i>and</i> construction complaints).<p>The author incorrectly claims/implies that these are problems with generics, protocols, or the diversity of integer types in Swift. They're really a problem of omissions in the standard library protocols that force some very cumbersome workarounds. The necessary functionality exists; it just isn't part of the relevant protocols. Submit these as bugs to Apple.<p>Edit:<p>As a follow-up, here's a version that gets around the standard library limitations using an unsafe pointer...<p><pre><code> func integerWithBytes<T: IntegerType>(bytes: [UInt8]) -> T? {
     if (bytes.count < sizeof(T)) {
         return nil
     }
     var i: UIntMax = 0
     for (var j = 0; j < sizeof(T); j++) {
         i = i | (UIntMax(bytes[j]) << UIntMax(j * 8))
     }
     return withUnsafePointer(&i) { ip -> T in
         return UnsafePointer<T>(ip).memory
     }
 }
</code></pre>
Of course, at that point, why not simply reinterpret the array buffer directly...<p><pre><code> func integerWithBytes<T: IntegerType>(bytes: [UInt8]) -> T? {
     if (bytes.count < sizeof(T)) {
         return nil
     }
     // Read the first sizeof(T) bytes of the array's storage as a T (host byte order).
     return bytes.withUnsafeBufferPointer { bp -> T in
         return UnsafePointer<T>(bp.baseAddress).memory
     }
 }</code></pre>
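For completeness, a minimal usage sketch of these (Swift 1.x era syntax; the concrete byte values and the UInt32/UInt64 result types below are just assumptions for illustration, and the generic parameter is inferred from the annotated result type):<p><pre><code> let raw: [UInt8] = [0x78, 0x56, 0x34, 0x12]

 // Four bytes are enough for a UInt32; on a little-endian CPU this yields 0x12345678.
 let value: UInt32? = integerWithBytes(raw)

 // Not enough bytes for an eight-byte type, so this returns nil.
 let tooFew: UInt64? = integerWithBytes(raw)</code></pre>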
Wow, that takes me back. Back to a conference room where we were talking about Integers in Java. If you made it a class, an Integer could carry along all this other stuff about how big it was, what the operators were, etc. But generating code for it was painful because your code had to do all of these checks when 99% of the time you probably just wanted the native integer implementation of the CPU. And Booleans: were they their own type or just a 1-bit Integer? And did that make an enum {foo, bar, baz, bletch, blech, barf, bingo} just a 3-bit integer?<p>Integers as types can compile quickly, but then you need multiple types to handle the multiple cases. Essentially you have pre-decoded the size by making it into a type.<p>At one point you had class Number, subclasses Real, Cardinal, and Complex, and within those a constructor which defined their precision. But I think everyone agreed it wasn't going to replace Fortran.<p>The scripting languages get pretty close to making this a non-visible thing, at the cost of some execution speed. Swift took it to an extreme, which I understand, but I probably wouldn't have gone there myself. The old char, short, long types seem so quaint now.
I once, many years ago, wrote something titled "Type Integer Considered Harmful". (This was way back during the 16-32 bit transition.) My position was that the user should declare integer types with ranges (as in Pascal and Ada), and it was the compiler's job to ensure that intermediate values did not overflow unless the user-defined ranges would also be violated. Overflowing a user range would be an error. The goal was to get the same answer on all platforms regardless of the underlying architecture.<p>The main implication is that expressions with more than one operator tend to need larger intermediate temporary variables. (For the following examples, assume all the variables are the same integer type.) For "a = b * c", the expression "b * c" is limited by the size of "a", so you don't need a larger intermediate. But "a = (b * c)/d" requires a temporary big enough to handle "b * c", which may be bigger than "a". Compilers could impose some limit on how big an intermediate they supported.<p>This hides the underlying machine architecture and makes arithmetic behave consistently. Either you get the right answer numerically, or you get an overflow exception.<p>Because integer ranges weren't in C/C++, this approach didn't get much traction. Dealing with word size differences became less of an issue when the 24-bit, 36-bit, 48-bit and 60-bit machines died off in favor of the 32/64 bit standard. So this never became necessary. It's still a good way to think about integer arithmetic.
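To make the intermediate-width point concrete, here is a small Swift sketch (the concrete values and the 16/32-bit widths are just assumptions, chosen so the intermediate overflows while the final result fits):<p><pre><code> let b: Int16 = 300, c: Int16 = 300, d: Int16 = 10

 // b * c is 90,000, which does not fit in Int16, so evaluating the whole
 // expression at the declared width traps at runtime (or silently wraps with &*):
 // let bad = (b * c) / d

 // Widening only the intermediate gives the numerically correct result,
 // which does fit in the destination width:
 let good = Int16((Int32(b) * Int32(c)) / Int32(d))   // 9000</code></pre>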
For amusement value, the Haskell equivalent is:<p><pre><code> {-# LANGUAGE LambdaCase #-}
 import Data.Bits
 import Data.List (unfoldr)

 -- Peel off the low byte at each step; shift right so the seed reaches 0 and the unfold terminates.
 f :: (Num a, Bits a) => a -> [a]
 f = unfoldr $ \case
     0 -> Nothing
     n -> Just (n .&. 0xff, n `shiftR` 8)</code></pre>
Generic code is like nerd sniping.<p>I look at this and think "why would you want to write generic code for all those ints?"<p>The integer types may look similar but they're different in more ways than they're similar. They have different bit sizes, different signedness. The CPU literally has to do different things depending on whether it's `uint8` or `int64`. So why do you want or expect one piece of code that does it all?<p>It's just so much easier and faster to do it like Go: have non-generic functions that do exactly what you want and, as a result, get meaningful work done. It's faster to write (because you don't need to figure out how to do it in a generic way), faster and easier to read, and possible to make changes to one func but not others.
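For contrast, here is what a non-generic version for a single concrete width looks like in Swift (a sketch; `uint32WithBytes` is a hypothetical name, and little-endian byte order is assumed):<p><pre><code> // One function, one width, no protocol gymnastics (little-endian).
 func uint32WithBytes(bytes: [UInt8]) -> UInt32? {
     if bytes.count < 4 {
         return nil
     }
     let value = UInt32(bytes[0]) |
                 (UInt32(bytes[1]) << 8) |
                 (UInt32(bytes[2]) << 16) |
                 (UInt32(bytes[3]) << 24)
     return value
 }</code></pre>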
I guess this sort of gets at the crux of the issue: Do you want it to be more like a scripting language (which would basically give you the mathematical equivalent of "integer", including unlimited size) at the cost of speed, or do you want it to be closer to the implementation in the CPU, which entails dealing with 8/16/32/64-bit limits and sign bits?<p>Why not have a way to do both? You can get an easy-to-use Int when speed is less of a concern, and can deal with Int16s, Int32s, UInt32s and whatnot when the job demands it.
I think you all are being too nice to Apple.<p>I had a similar experience to the blog post author. I spent many hours battling generics and the huge forest of (undocumented) protocols to do something seemingly trivial. I just gave up rather than try to pin down exactly what was wrong in a long and detailed blog post.<p>The prevailing answer to everything seems to be: write a bug report to Apple and use Objective-C (or Swift's UnsafePointer and related).<p>This ignores what I think really is the issue here: Swift has an overly complex type system. This picture:<p><a href="http://swiftdoc.org/type/Int/hierarchy/" rel="nofollow">http://swiftdoc.org/type/Int/hierarchy/</a><p>tells a lot. And this is from unofficial documentation that has been generated from the Swift libraries. When you read the documentation Apple provides, there is little explanation of this huge protocol hierarchy or the rationale behind it.<p>It seems to me that Swift was released in a rush, with bugs even in the core language and compiler, lacking documentation, and of course with an even larger number of bugs in the IDE support, debugging, etc.<p>Secondly: Swift battles the problem of easy-to-understand typesafe generics like so many other languages, only it has it much worse: it carries a lot of stuff from Objective-C and has to support easy interoperability. Plus it has ideas like not allowing implicit conversion of number types (requiring an integer added to a double to be explicitly converted to a double), causing the big type system to rear its messy head again and again.<p>I really want to love Swift, but it will take years for Swift to be as clean and productive as Objective-C.<p>In my opinion, what Apple should have done was create the "CoffeeScript" of Objective-C: a language that essentially was Objective-C in terms of language features but with a concise syntax.
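The implicit-conversion point in practice (a trivial sketch; the variable names are just for illustration):<p><pre><code> let count: Int = 3
 let scale: Double = 1.5

 // let total = count * scale        // error: no implicit conversion from Int to Double
 let total = Double(count) * scale   // explicit conversion required; total == 4.5</code></pre>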
The Swift equivalent of his first `NSData` example is essentially this:<p><pre><code> func integerWithBytes<T: IntegerType>(bytes: [UInt8]) -> T
 {
     let valueFromArrayPointer = { (arrayPointer: UnsafePointer<UInt8>) in
         return unsafeBitCast(arrayPointer, UnsafePointer<T>.self).memory
     }
     return valueFromArrayPointer(bytes)
 }

 let bytes: [UInt8] = [0x00, 0x01, 0x00, 0x00]
 let result: UInt32 = integerWithBytes(bytes)
 assert(result == 256)</code></pre>
"All problems in computer science can be solved by another level of indirection, except of course for the problem of too many indirections." ~David John Wheeler
I wouldn't expect a much better design from a language developed behind closed doors with no community input.<p>Now all we can do is file bugs against Apple, and hope they improve it; they who chose to release a new language with but three months of public beta. They obviously didn't care much to have their designs tested or incorporate feedback then.
> More or less, what I want to achieve can be done with old world NSData: data.getBytes(&i, length: sizeofValue(i))<p>That doesn't work in C/C++ if you are using a modern optimizer; that style of type punning runs afoul of strict aliasing rules.<p>C does not have the other Swift issues the author mentions, so shifting into the largest int and casting from there does work.