That's at least the third programming language named Q:<p><a href="https://en.wikipedia.org/wiki/Q_(programming_language_from_Kx_Systems)" rel="nofollow">https://en.wikipedia.org/wiki/Q_(programming_language_from_K...</a><p><a href="https://en.wikipedia.org/wiki/Q_(equational_programming_language)" rel="nofollow">https://en.wikipedia.org/wiki/Q_(equational_programming_lang...</a>
I am surprised that EDN is not making more progress as a data interchange format, especially now that it has fantastic validator and coercion libraries, such as Prismatic's Schema:<p><a href="https://github.com/prismatic/schema" rel="nofollow">https://github.com/prismatic/schema</a><p>This is, by far, the most intelligent data exchange system I have ever seen. It offers a richer language of types than JSON, is as readable as YAML, and is far more concise than XML. And yet it seems unable to break out of the world of Clojure, even though it deserves to be taken seriously as a data interchange language.
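To make the "richer types" point concrete, JSON's own type vocabulary is tiny. Even in plain JS (a sketch only; real EDN examples would need a Clojure reader), dates and sets don't survive a JSON round trip:

```javascript
// JSON has no native date or set types, so both degrade on a round trip.
const original = { when: new Date(0), tags: new Set(['a', 'b']) };
const roundTripped = JSON.parse(JSON.stringify(original));
// roundTripped.when is now the string "1970-01-01T00:00:00.000Z"
// roundTripped.tags is now {} -- the Set contents are gone entirely
```

EDN, by contrast, has tagged literals for instants and native syntax for sets and keywords, plus user-extensible tags.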
I really like this. In general, a well-supported non-crappy schema language for modern data exchange formats is long overdue (e.g. JSON Schema is horribly un-human-friendly). The fact that they aim higher and allow for a much more detailed set of validations is great, IMO.<p>I hope that this becomes a success and gets implementations in many programming languages.
A "data language", in this context, is a language for representing and validating data. I'm not clear on why it's called "Q", or why it calls marshalling/unmarshalling undressing/dressing. The language itself looks alright; it's dependently typed. The site doesn't say anything about the speed of the existing JS/Ruby implementations.
It's strange to me how dependent Q becomes on its host language. Instead of defining its own types in order to sit between various serializations and host systems, Q seems to simply augment the Host with a few extra types.
In Clojure you can use schema (<a href="https://github.com/Prismatic/schema" rel="nofollow">https://github.com/Prismatic/schema</a>) which does the same thing without needing a separate language.
This is the second project named 'Q' that's been on HN in the last week or so. All cool ideas, but some Googling first by all parties would probably ease confusion.[0]<p>[0]<a href="https://news.ycombinator.com/item?id=7290655" rel="nofollow">https://news.ycombinator.com/item?id=7290655</a>
It's interesting, but is it powerful enough? There's a great talk by Zed Shaw about implementing authorization code, and I think the same issue holds here.<p>A language might work for simple and moderately complex type constraints, but at some point the constraints become so weird that you really need a full-blown Turing-complete language to define them.<p>And that point might come sooner than you'd like. Why not use something like Haskell instead of expressions in this constrained language?<p>It has a super-powerful type system, and you can be certain it will be able to express <i>any</i> constraint you throw at it.<p>That said, this language does look nice, and I haven't really tried to find out whether there are any obviously important constraints you wouldn't be able to build in it.
If you use a powerful RDBMS such as Postgres, you don't need anything more than SQL constraints. For example, the temperature check would be something like CHECK(temp BETWEEN 33.0 AND 45.0), and to ensure that no two user accounts share the same alias, a simple UNIQUE(user_alias) will do. More complex constraints can be enforced using PL/pgSQL.<p>Then name your constraints. When the SQL layer raises them, it's trivial to remap them to user-readable error messages. This works especially well in newer Postgres versions, where error reporting is much more detailed and the messages are easily machine-parsable.
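The remapping step can be as simple as a lookup table keyed by constraint name. A JS sketch (the constraint names are illustrative; I'm assuming the driver surfaces the violated constraint's name, as node-postgres does via the error's `constraint` field):

```javascript
// Hypothetical lookup from named constraint to user-readable message.
var messages = {
  temp_in_range: 'Temperature must be between 33.0 and 45.0.',
  user_alias_unique: 'That alias is already taken.'
};

// err is whatever the driver raises; we assume it carries the name of
// the violated constraint (node-postgres exposes this as err.constraint).
function friendlyError(err) {
  // Fall back to a generic message for constraints we haven't named.
  return messages[err.constraint] || 'Invalid data submitted.';
}

// friendlyError({constraint: 'user_alias_unique'}) → 'That alias is already taken.'
```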
I like it because you could practically use this to define a grammar for parsing a programming language, because:<p>1. Each data type can inline express its own value constraints (not just type constraints, but value constraints), which is key to source code parsing.<p>2. With subclassing you can combine data types into higher-level objects that inherit all of the lower-level value constraints. This is similar to how a recursive parser drills down from a high-level statement into the different parts of each statement.<p>3. The data type value constraints support sequences, alternatives, and unions: most of what a parser needs to handle a language based on a grammar spec.<p>With these ingredients you can actually parse code, and thereby write a code interpreter (or parser at least). You begin with a grammar rule (a high-level data type) that defines a statement; a statement is defined as a list of alternative forms, each form is defined as a sequence of keywords and supported value types, and so on down to the language primitives.<p>... but I'm not sure if it supports recursion, and thereby recursive descent parsing. Can you say something like "Sum = [Sum, Operand, Sum]"? If not, then you're limited to immediate values only, with no nested expressions.
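For what it's worth, recursion is exactly where this gets tricky: a literal Sum = [Sum, Operand, Sum] is left-recursive, and a naive recursive-descent evaluator of that rule would never terminate. A hand-rolled JS sketch of constraints-as-grammar (all names here are mine; this says nothing about Q's actual semantics):

```javascript
// Each "constraint" is a function (input, pos) -> new pos, or -1 on failure.
function lit(s) {
  return function (input, pos) {
    return input.startsWith(s, pos) ? pos + s.length : -1;
  };
}
// Sequence: every rule must match in order (Q's sequences).
function seq(...rules) {
  return function (input, pos) {
    for (const r of rules) {
      pos = r(input, pos);
      if (pos < 0) return -1;
    }
    return pos;
  };
}
// Alternatives: first rule that matches wins (Q's alternatives).
function alt(...rules) {
  return function (input, pos) {
    for (const r of rules) {
      const next = r(input, pos);
      if (next >= 0) return next;
    }
    return -1;
  };
}

const digit = alt(...'0123456789'.split('').map(lit));

// Right recursion terminates: Sum = Digit ('+' Sum)?
function sum(input, pos) {
  const after = digit(input, pos);
  if (after < 0) return -1;
  const more = seq(lit('+'), sum)(input, after);
  return more >= 0 ? more : after;
}

// sum('1+2+3', 0) === 5  -- the whole string parses
```

The right-recursive rewrite is the standard way recursive descent sidesteps the infinite loop, so even if Q allows self-reference, the rule author would have to phrase Sum that way rather than left-recursively.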
Am I missing something fundamental here? I think the home page shouldn't compare Q to just JSON, as they are in principle two different things. If I were to write a 'data language', or to extend an existing language to cater for data, wouldn't I write my validators in that language? If so, I'd expect Q to be compared to the closest other 'data language', not to a document/serialization format.<p>For example, if I were doing it in JS, I'd say:
var validDoc = true;
var Temp = function (t) { return t >= 33.5 && t <= 45.0; };<p>then when I get the doc, I loop through my conditions, like so:<p>validDoc = Temp(doc.temperature);<p>// reject if !validDoc<p>From here, the benefit of Q becomes that it's much easier to do validations than to write them from scratch, considering that my example above returns only true or false; it has no error handling to tell me what went wrong and where.<p>One could also achieve some of what Q is doing by using an ORM, if said data is going to a database/document store.<p>So, HN, am I missing something here?
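Extending that DIY approach to report what failed is exactly where the from-scratch version starts to grow. A sketch of the kind of thing a tool like Q automates (the names here are mine, not Q's API):

```javascript
// Each rule returns null when valid, or a human-readable reason when not.
var rules = {
  temperature: function (t) {
    if (typeof t !== 'number') return 'not a number';
    if (t < 33.5 || t > 45.0) return 'out of range [33.5, 45.0]';
    return null;
  }
};

// Collects every failure with its field name instead of a bare boolean.
function checkDoc(doc) {
  var errors = [];
  for (var field in rules) {
    var err = rules[field](doc[field]);
    if (err) errors.push(field + ': ' + err);
  }
  return errors; // empty array means the doc is valid
}

// checkDoc({temperature: 50}) → ['temperature: out of range [33.5, 45.0]']
```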
Interesting... here is a tool that is also taking a data-oriented approach:<p>"Drake - a kind of make for data"<p><a href="http://blog.factual.com/introducing-drake-a-kind-of-make-for-data" rel="nofollow">http://blog.factual.com/introducing-drake-a-kind-of-make-for...</a>
I can't count the number of times I have wanted to set up reusable model definition and validation but haven't been able to find a good portable library that isn't part of something much larger I don't need. This definitely addresses a space in need of more options.<p>My answer was modlr (<a href="https://github.com/jdc0589/modlr" rel="nofollow">https://github.com/jdc0589/modlr</a>), which I have been ignoring, but Q looks like it could do well. I'm not super pumped about the name, though; it conflicts with a super-popular promise library lots of people already use.
Oh dear, this might conflict with a SQL variant we cooked up at my old company called Q-Lang, where you could write a SQL statement, leave bits of the WHERE clause "blank", and it would generate a GUI with drop-downs and calendars and everything to let people fill in those bits without too much effort. Nothing terribly fancy, but it let you go from a working example SQL statement to an integrated query interface in the app in about 10 minutes.<p>It had lots of limitations, but most of the time it absolutely annihilated some of our competitors' multi-month integration engagements.
Q is trying to do both data and validation, which are very different problem spaces. I think designing for both involves serious tradeoffs for the individual parts.