Harlan: New GPU programming language

It’s generating OpenCL, not GLSL, so it’s doing something slightly different than JME shader nodes.
Still, @nehon (or anybody else) might want to study what’s there and steal useful ideas from it.

Links:
http://blog.theincredibleholk.org/blog/2013/06/28/announcing-the-release-of-harlan/


Every few years a programmer comes out of some forgotten basement and tries to apply Lisp to the topic of the year (actually, mostly to the topic from 3 years ago). And, without exception, they all fail. Not necessarily because the solution is bad in itself, but because people have outgrown having to program in a compiler-centric representation of code. Saying something like

[java]
define operator+(point3 a, point3 b) = point3(a.x+b.x,a.y+b.y,a.z+b.z)
[/java]

will always beat the readability of

[java]
(define (point-add x y)
  (match x
    ((point3 a b c)
     (match y
       ((point3 x y z)
        (point3 (+ a x) (+ b y) (+ c z)))))))
[/java]

The world has changed since the '70s. Purity of a language doesn't mean it is good.

Syntax is irrelevant. JME isn’t going to implement or use Harlan anyway, so Harlan’s syntax won’t affect JME users.
Concepts are what’s interesting. Harlan might be using closures in an interesting (and Java-compatible) way. Or it might have useful abstractions that could be reused to good effect in shader nodes. Or whatever. It’s more the whitepapers than the actual code that are interesting here.

Thanks, I’ll look into it.

Talking about the “idea” they use in this new Harlan language, a kind of functional language for the GPU: what kind of “reverse evolution” is this? :stuck_out_tongue:

[quote]
Harlan is a new programming language for GPU computing. Harlan aims to push the expressiveness of languages available for the GPU further than has been done before. Harlan already has native support for rich data structures, including trees and ragged arrays. Very soon the language will support higher order procedures.
[/quote]

I meant the idea is cool, that they can target various outputs, but they just go about it the wrong way!
Instead of a whole compiler, I think he should just write a parser that turns “whatever functional language” into GLSL/OpenCL (something like the sketch below). That would be much easier…
The syntax is also verbose: how many times do we have to type “int”, “float”, or some other type? Is that meant to remind the coder of the real format of the GPU instructions? I can’t see the benefit of such a lengthy language.
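To make that concrete, here is a minimal sketch in Haskell of what such a “functional language to GLSL” translator could start from: walk a tiny expression tree and pretty-print GLSL-style source text. The names (`Expr`, `toGLSL`) and the AST are made up for illustration; this is not Harlan code.

[java]
-- Hypothetical mini-AST for shader expressions (illustration only).
data Expr
  = Lit Float      -- literal constant
  | Var String     -- variable reference
  | Add Expr Expr  -- a + b
  | Mul Expr Expr  -- a * b

-- Pretty-print the tree as GLSL-style source text.
toGLSL :: Expr -> String
toGLSL (Lit x)   = show x
toGLSL (Var v)   = v
toGLSL (Add a b) = "(" ++ toGLSL a ++ " + " ++ toGLSL b ++ ")"
toGLSL (Mul a b) = "(" ++ toGLSL a ++ " * " ++ toGLSL b ++ ")"

main :: IO ()
main = putStrLn (toGLSL (Add (Var "a") (Mul (Lit 2.0) (Var "b"))))
-- prints: (a + (2.0 * b))
[/java]

A real translator would also need types, control flow, and kernel boundaries, of course, but the core “emit text, don’t build a compiler backend” idea stays this simple.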

Ideas:
In fact, the old-style node-based graphs (like the current shader nodes, or the UDK material editor, which the shader nodes are “kind of” modeled after) are really only good at expressing certain things: operation order and inputs/outputs… not logic and conditions.

They also hide the underlying implementation of the real data and functions.

Idea 1:
We could try Haskell-style syntax: not type-strict at all, very expressive, and able to capture operations, ordering, inputs/outputs… and logic, all at once.
Clearer and much shorter syntax.
It could also be translated back to C style with a parser (try ANTLR) and filled in with the specific types.
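For instance, a point-add in that terse style could look like this (a hypothetical sketch; `pointAdd` is my own name, not from any existing tool), with a translator later filling in the concrete C-style types:

[java]
-- No type annotations anywhere: the compiler infers them.
-- A translator could then emit the C-style equivalent, e.g.
--   vec3 pointAdd(vec3 p, vec3 q) { return p + q; }
pointAdd (x1, y1, z1) (x2, y2, z2) = (x1 + x2, y1 + y2, z1 + z2)

main = print (pointAdd (1, 2, 3) (4, 5, 6))
-- prints: (5,7,9)
[/java]

Compare that one line of definition with the Scheme version quoted earlier in the thread.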
Idea 2:
NodeBox offers an even better way to compose logic, beyond just graphics/compute operations: http://nodebox.net/ . We shouldn’t just take “after” the old-style shader-node applications :stuck_out_tongue: if that’s what we’re currently doing.