TechEcho

The byte order fallacy

12 points by mike_esspe over 12 years ago

1 comment

csense over 12 years ago

I do programming in Java. I don't like serialization because I'm old-fashioned and I want control over the exact format of my binary files (not least to ensure interoperability with other languages).

Coming from a C background, I noticed the first time I did binary file I/O in Java that the lack of support for casting every pointer to char* forces you to use the portable solution.

As for why a lot of software has to worry about endianness, I assume that people start out with:

```c
typedef struct whatever {
    int a;
    int b;
    char c[MAX_C_LENGTH];   /* non-const, so it can actually be filled in */
} S;
```

And then they can get quick-and-dirty file-saving code like this:

```c
S inst;
inst.a = 1;
inst.b = 2;
strcpy(inst.c, "Hacker News");  /* arrays can't be assigned with =, so copy */
fwrite(&inst, sizeof(S), 1, fp);
```

It's robust in the sense that the same version of the code, running on the same machine, will be able to read back the files it writes, even when more fields are added to the structure or MAX_C_LENGTH is changed.

Of course, when it comes to interop between different versions of the code, or to running the code on radically different machines (32- vs. 64-bit, or CPUs of different endianness), it breaks. But for many applications this doesn't surface until after release. And by then nobody wants to rewrite the saving and loading code from scratch, so a translation layer is patched in to let the non-portable saving/loading code give the correct result on problematic machines.
I do programming in Java. I don't like serialization because I'm old-fashioned and I want to have control over the exact format of my binary files (not least to insure interoperability with other languages).<p>Coming from a C background, from the first time I did binary file I/O with Java, I've noticed that the lack of support for casting every pointer to char* forces you to use the portable solution.<p>As for why a lot of software has to worry about endianness, I assume that people start out with:<p><pre><code> typedef struct whatever { int a; int b; const char c[MAX_C_LENGTH]; } S; </code></pre> And then they can get quick-and-dirty file-saving code like this:<p><pre><code> S inst; inst.a = 1; inst.b = 2; inst.c = "Hacker News"; fwrite(&#38;inst, sizeof(S), 1, fp); </code></pre> It's robust in that the same version of the code, running on the same machine, will be able to read and write files, even when more fields are added to the structure or MAX_C_LENGTH is changed.<p>Of course, when it comes to interop between different versions of the code, or running the code on radically different machines (32- vs 64-bit, or different endianness CPU), it will break. But for many applications this doesn't occur until after release. And then nobody wants to rewrite the saving and loading code from scratch, so a translation layer is patched in to allow the non-portable saving/loading code to give the correct result on problematic machines.