About

Michael Zucchi

 B.E. (Comp. Sys. Eng.)

  also known as zed
  & handle of notzed

Monday, 17 March 2014, 09:10

sumatra, graal, etc.

I didn't really wanna get stuck building stuff all day but that seems to have happened.

Sumatra uses auto* so it built easily, no problem. Bummer there's no javafx, but hopefully that isn't far off.

graal is a bit of a pain because it uses a completely custom build/update/everything mega-tool written in a single 4KLOC piece of python. Ok when it works but a meaningless backtrace if it doesn't. Well it is early alpha software I guess. Still ... why?

Anyway ... I tried building against 'make install' from sumatra but that doesn't work, you need to point your JAVA_HOME at the Sumatra tree as the docs tell you to. My mistake there.

So it turns out getting the hsail tools to build had some point after all. The hsailasm downloaded by the "build" tool (in lib/okra-1.8-with-sim.jar) won't work against the libelf 0.x included in slackware (it might be easier just building libelf 1.x). So I added the path to the hsailasm I built myself ... and ...

 export PATH=/home/notzed/hsa/HSAIL-Instruction-Set-Simulator/build/HSAIL-Tools:$PATH
 ./mx.sh  --vm server unittest -XX:+TraceGPUInteraction \
   -XX:+GPUOffload -G:Log=CodeGen hsail.test.IntAddTest
[HSAIL] library is libokra_x86_64.so
[HSAIL] using _OKRA_SIM_LIB_PATH_=/tmp/okraresource.dir_7081062365578722856/libokra_x86_64.so
[GPU] registered initialization of Okra (total initialized: 1)
JUnit version 4.8
.[thread:1] scope: 
  [thread:1] scope: GraalCompiler
    [thread:1] scope: GraalCompiler.CodeGen
    Nothing to do here
    Nothing to do here
    Nothing to do here
    version 0:95: $full : $large;
// static method HotSpotMethod
kernel &run (
        align 8 kernarg_u64 %_arg0,
        align 8 kernarg_u64 %_arg1,
        align 8 kernarg_u64 %_arg2
        ) {
        ld_kernarg_u64  $d0, [%_arg0];
        ld_kernarg_u64  $d1, [%_arg1];
        ld_kernarg_u64  $d2, [%_arg2];
        workitemabsid_u32 $s0, 0;
                                           
@L0:
        cmp_eq_b1_u64 $c0, $d0, 0; // null test 
        cbr $c0, @L1;
@L2:
        ld_global_s32 $s1, [$d0 + 12];
        cmp_ge_b1_u32 $c0, $s0, $s1;
        cbr $c0, @L12;
@L3:
        cmp_eq_b1_u64 $c0, $d2, 0; // null test 
        cbr $c0, @L4;
@L5:
        ld_global_s32 $s1, [$d2 + 12];
        cmp_ge_b1_u32 $c0, $s0, $s1;
        cbr $c0, @L11;
@L6:
        cmp_eq_b1_u64 $c0, $d1, 0; // null test 
        cbr $c0, @L7;
@L8:
        ld_global_s32 $s1, [$d1 + 12];
        cmp_ge_b1_u32 $c0, $s0, $s1;
        cbr $c0, @L10;
@L9:
        cvt_s64_s32 $d3, $s0;
        mul_s64 $d3, $d3, 4;
        add_u64 $d1, $d1, $d3;
        ld_global_s32 $s1, [$d1 + 16];
        cvt_s64_s32 $d1, $s0;
        mul_s64 $d1, $d1, 4;
        add_u64 $d2, $d2, $d1;
        ld_global_s32 $s2, [$d2 + 16];
        add_s32 $s2, $s2, $s1;
        cvt_s64_s32 $d1, $s0;
        mul_s64 $d1, $d1, 4;
        add_u64 $d0, $d0, $d1;
        st_global_s32 $s2, [$d0 + 16];
        ret;
@L1:
        mov_b32 $s0, -7691;
@L13:
        ret;
@L4:
        mov_b32 $s0, -6411;
        brn @L13;
@L10:
        mov_b32 $s0, -5403;
        brn @L13;
@L7:
        mov_b32 $s0, -4875;
        brn @L13;
@L12:
        mov_b32 $s0, -8219;
        brn @L13;
@L11:
        mov_b32 $s0, -6939;
        brn @L13;
};

[HSAIL] heap=0x00007f47a8017a40
[HSAIL] base=0x95400000, capacity=108527616
External method:com.oracle.graal.compiler.hsail.test.IntAddTest.run([I[I[II)V
installCode0: ExternalCompilationResult
[HSAIL] sig:([I[I[II)V  args length=3, _parameter_count=4
[HSAIL] static method
[HSAIL] HSAILKernelArguments::do_array, _index=0, 0xdd563828, is a [I
[HSAIL] HSAILKernelArguments::do_array, _index=1, 0xdd581718, is a [I
[HSAIL] HSAILKernelArguments::do_array, _index=2, 0xdd581778, is a [I
[HSAIL] HSAILKernelArguments::not pushing trailing int

Time: 0.213

OK (1 test)

Yay? I think?

Maybe not ... it seems that it's only using the simulator. I tried using LD_LIBRARY_PATH and -Djava.library.path to redirect to the libokra from the Okra-Interface-to-HSA-Device library but that just hangs after the "base=0x95..." line after dumping the hsail. strace isn't showing anything obvious so i'm not sure what's going on. Might've hit some ubuntu compatibility issue at last or just a mismatch in versions of libokra.

On the other hand ... it was noticeable that something was happening with the gpu as the mouse started to judder, yet a simple ctrl-c killed it cleanly. Just that alone, once it makes it into OpenCL, will be worth its weight in cocky shit rather than just hard locking X as it does with catalyst.

Having just typed that ... one test too many and it decides to crash into an unkillable process and do weird stuff (and not long after I had to reboot the system). But at least that is to be expected for alpha software and i've been pretty surprised by the overall system stability all things considered.

I think next time I'll just have a closer look at aparapi because at least I have that working with the APU, and I'm a bit sick of compiling other people's code and their strange build systems. Sumatra and graal are very large and complex projects and a bit more involved than I'm really interested in right now. I haven't used aparapi before anyway so I should have a look.

If the slackware vs ubuntu thing becomes too much of a hassle I might just go and buy another hdd and dual-boot; I already have to multi-boot to switch between opencl+accelerated javafx vs apu.

Update: Actually there may be something more to it. I just tried creating my own aparapi thing and it crashed in the same way so maybe i was missing some env variable or it was due to a suspend/resume cycle.

So I just had another go at getting the graal test running on hsail and I think it worked:

 export JAVA_HOME=/home/notzed/hsa/sumatra-dev/build/linux-x86_64-normal-server-release/images/j2sdk-image
 export PATH=${JAVA_HOME}/bin:${PATH}
 export LD_LIBRARY_PATH=/home/notzed/hsa/Okra-Interface-to-HSA-Device/okra/dist/bin
 ./mx.sh  --vm server unittest  -XX:+TraceGPUInteraction -XX:+GPUOffload -G:Log=CodeGen hsail.test.IntAddTest

[HSAIL] library is libokra_x86_64.so
[GPU] registered initialization of Okra (total initialized: 1)
JUnit version 4.8

...

Time: 0.198

OK (1 test)

Still, i'm not sure what to do with it yet ...

I was going to have a play today but I just got the word on work starting again (I can probably push it out to Monday) so I might just go to the pub or just for a ride - way too nice to be inside getting monitor burn. I foolishly decided to walk into the city yesterday for lunch with a mate I haven't seen for years (and did a few pubs on the way home - i was in no rush!) but just ended up with a nice big blister and sore feet for my troubles. It's about a 45 minute walk into the city but I don't do much walking.

I want to see if anything interesting comes out of Sony's GDC talk first though (the vr one? - yes the VR one, it's just started).

And ... done. Interesting, but still early days. Even if they release a model for the public at a mass-market price it's still going to have to be a long term project. Likely the first 5-10 years will just be experimentation and getting the technology to the point where it is good and cheap enough.

Update: Yep, i'm pretty sure it's just a problem with suspend/resume. I just tried running it after a resume and it panicked the kernel.

Tagged hacking, hsa, java, opencl.
Monday, 17 March 2014, 05:00

Building the hsail tools.

I had a lot more trouble getting this stuff to compile than this post suggests, which is probably why I sound pissed off (hint: i am, actually pro tip: i often am), but this summarises the results.

I started with the instructions in README.hsa from the hsa branch of gcc. See the patches below though.

I installed libelf from slackware 14.1 but had to build libdwarf manually. I got libdwarf-20140208 from the libdwarf home page. I actually created a SlackBuild for it but i'm uncertain about providing it to slackbuilds.org right now (mostly because it seems a bit too niche).

Then this should probably work:

  mkdir hsa
  cd hsa
  git clone --depth 1 https://github.com/HSAFoundation/HSAIL-Instruction-Set-Simulator.git
  git clone --depth 1 https://github.com/HSAFoundation/HSAIL-Tools
  cd HSAIL-Instruction-Set-Simulator/src/
  ln -s ../../HSAIL-Tools
  cd ..
  mkdir build
  cd build
  cmake -DCMAKE_BUILD_TYPE=Debug ..
  make -j3 _DBG=1 VERBOSE=1

Actually I originally built it from the Okra-Interface-to-HSAIL-Simulator thing, but that checks out both of these tools (together with the llvm compiler) as part of its android build process. Ugh. And then it proceeds to compile whilst hiding the details of what it's actually doing and simultaneously ignoring-all-fatal-errors-along-the-way through the wonders of an ant script. Actually I could add more but I might offend for no purpose - after all I did get it compiled in the end. I'm just a bit more impatient than I once was :)

Patches

It doesn't look for libelf.h anywhere other than /usr/include so I had to add this to HSAIL-Instruction-Set-Simulator. Of course if you have lib* in /usr/local you have to change it to that instead. Sigh. I thought this sort of auto-discovery was a solved problem.


diff --git a/CMakeLists.txt b/CMakeLists.txt
index 083e9c6..c86192c 100644
--- a/CMakeLists.txt
+++ b/CMakeLists.txt
@@ -25,8 +25,9 @@ set(CMAKE_CXX_FLAGS "-DTEST_PATH=${PROJECT_SOURCE_DIR}/test ${CMAKE_CXX_FLAGS}")
 set(CMAKE_CXX_FLAGS "-DBIN_PATH=${PROJECT_BINARY_DIR}/HSAIL-Tools ${CMAKE_CXX_FLAGS}")
 set(CMAKE_CXX_FLAGS "-DOBJ_PATH=${PROJECT_BINARY_DIR}/test ${CMAKE_CXX_FLAGS}")
 set(CMAKE_CXX_FLAGS "-I/usr/include/libdwarf ${CMAKE_CXX_FLAGS}")
+set(CMAKE_CXX_FLAGS "-I/usr/include/libelf ${CMAKE_CXX_FLAGS}")
 
-set(CMAKE_CXX_FLAGS_RELEASE "-O3 -UNDEBUG")
+set(CMAKE_CXX_FLAGS_RELEASE "-O2 -UNDEBUG")
 
 # add a target to generate API documentation with Doxygen
 find_package(Doxygen)

And because I used a newer version of libdwarf than ubuntu uses, it has a world-shattering, fatal-error-causing difference in one of the apis.

Apply this to HSAIL-Tools.


diff --git a/libHSAIL/libBRIGdwarf/BrigDwarfGenerator.cpp b/libHSAIL/libBRIGdwarf/BrigDwarfGenerator.cpp
index 7e2d28d..fe2bc51 100644
--- a/libHSAIL/libBRIGdwarf/BrigDwarfGenerator.cpp
+++ b/libHSAIL/libBRIGdwarf/BrigDwarfGenerator.cpp
@@ -261,7 +261,7 @@ public:
     //
     bool storeInBrig( HSAIL_ASM::BrigContainer & c ) const;
 
-    int DwarfProducerCallback2( char * name,
+    int DwarfProducerCallback2( const char * name,
                                 int    size,
                                 Dwarf_Unsigned type,
                                 Dwarf_Unsigned flags,
@@ -459,7 +459,7 @@ bool BrigDwarfGenerator_impl::generate( HSAIL_ASM::BrigContainer & c )
 // *sec_name_index is an OUT parameter, a pointer to the shared string table
 // (.shrstrtab) offset for thte name of the section
 //
-static int DwarfProducerCallbackFunc( char * name,
+static int DwarfProducerCallbackFunc( const char * name,
                                       int    size,
                                       Dwarf_Unsigned type,
                                       Dwarf_Unsigned flags,
@@ -476,7 +476,7 @@ static int DwarfProducerCallbackFunc( char * name,
 }
 
 int
-BrigDwarfGenerator_impl::DwarfProducerCallback2( char * name,
+BrigDwarfGenerator_impl::DwarfProducerCallback2( const char * name,
                                                  int    size,
                                                  Dwarf_Unsigned type,
                                                  Dwarf_Unsigned flags,

I had to use the 'git registry' to disable coloured diffs so I could just see what I was even doing here.

  git config --global color.ui false

Whoever thought that sort of bloat was a good idea in such a tool ... well I would only offend if I called him or her a fuckwit, wouldn't I?

Tagged hsa, rants.
Monday, 17 March 2014, 03:05

hsa, gcc hsail + brig

I didn't spend much time on it yesterday but most of it was just reading up on hsa. There's only a few slides plus a so-shittily-typeset-it-has-to-be-microsoft-word reference manual on hsail and brig. As a bonus it includes some rather poor typeface choices (too stout - the trifecta of short, thick, and fat), colours, and tables to complement the word-wrap, all of which say to me 'not for printing' (not that I was going to). The link to the amd IOMMUv2 spec is broken - not that I'm likely to need that. Bit of a bummer as amd docs are usually formatted pretty well, and they seem to be the primary driver at this point. In general the documentation 'needs work' (seems to be my catchphrase of 2014 so far) although new docs and tools seem to be appearing in drips and drops over time.

I'd already skimmed some of it and had a general understanding but I picked up a few more details.

The queuing mechanism looks pretty nice - very simple yet able to do everything one needs in a multi-core system. I had been under the impression that the queue system had / needed some hardware support on the CPU too, but looking at it that doesn't seem necessary, so it was just a misunderstanding. Or ... maybe it's not, since any work signalling mechanism would ideally avoid kernel interactions and/or busy waiting - either or both of which would be required for a purely software implementation. But maybe it's simpler than that.

To be honest I would have preferred that hsail was a proper assembly language rather than wrapping the meta-data in a pseudo-C++ syntax. And brig seems a bit unnecessarily on the bulky side for what is essentially a machine code encoding. At the end of the day neither is a deal breaker.

The language itself is kind of interesting. Again I thought it was a slightly higher level virtual processor than it is, something like llvm's intermediate representation or PTX. But it has a fixed maximum number of registers, and the register assignment and optimisation occur at the compiler stage and not in the finaliser - it looks a lot more like, say, DEX than IR or Java bytecode. This makes a lot of sense unless you have a wildly different programming model. Seems a pretty reasonable and pragmatic approach to a universal machine code for modern processors.

The programming and queuing model looks like something that should fit into Epiphany reasonably well. And something that can be used to implement OpenCL with little work (beyond the compiler, but there are a few of those already).

GCC

I managed to get gcc checked out to build. The hsa tools page just points to the subversion branch with no context at all ... but after literally 8 hours trying to check it out and only being part-way through the fucking C++ standard library test suite, I gave up (I detest git but I'm no fan of subversion by any stretch of the imagination). I had to resort to the git mirror. Unfortunately gcc takes a lot longer to build than the last time I had to, despite having faster hardware, but that's 'progress' for you (no it's not).

I'm not sure how useful it is to me as it just generates brig directly (actually a mash up of elf with amd64 + brig) and there's no binutils to play with hsail as far as I can tell. But I'll document the steps I used here.

  git clone --depth 1 -b hsa git://gcc.gnu.org/git/gcc.git

  mkdir build
  cd build
  ../gcc/configure --disable-bootstrap --enable-languages=c,c++ --disable-multilib
  make

Slackware 64 is only 64-bit so I had to disable multilib support.

The example from gcc/README.hsa can then be compiled using:

  cd ..
  mkdir demo
  cd demo
  cat > hsakernel.c
extern void square (int *ip, int *rp) __attribute__((hsa, noinline));
void __attribute__((hsa, noinline)) square (int *in, int *out)
{
  int i = *in;
  *out = i * i;
}
CTRL-D
  ../build/gcc/xgcc -m32 -B../build/gcc -c hsakernel.c  -save-temps -fdump-tree-hsagen

Using -fdump-tree-hsagen outputs a dump of the raw HSAIL instructions generated.

[...]

------- After register allocation: -------

HSAIL IL for square
BB 0:
  ld_kernarg_u32 $s0, [%ip]
  ld_kernarg_u32 $s1, [%rp]
  Fall-through to BB 1

BB 1:
  ld_s32 $s2, [$s0]
  mul_s32 $s3, $s2, $s2
  st_s32 $s3, [$s1]
  ret_none 

[...]

Went through the gcc source and found a couple of useful bits. To get the global work-id I found you can use: __builtin_omp_get_thread_num() which compiles into workitemabsid_u32 ret,0. And __builtin_omp_get_num_threads() which compiles into gridsize_u32 ret,0. Both only work on dimension 0. And that seems to be about it for work-group functions.
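Putting those builtins together with the attribute syntax from README.hsa, a per-work-item vector add might look something like the following. This is only a sketch I haven't actually run - the function name is made up and there's no bounds check - but it shows the shape of a kernel that indexes by the global work-id:

  /* sketch: per-work-item vector add using the builtins above */
  extern void vecadd (int *a, int *b, int *c) __attribute__((hsa, noinline));

  void __attribute__((hsa, noinline)) vecadd (int *a, int *b, int *c)
  {
    /* compiles to workitemabsid_u32 ret,0 - dimension 0 only */
    int i = __builtin_omp_get_thread_num ();

    c[i] = a[i] + b[i];
  }

It would be compiled the same way as hsakernel.c above.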

I'm not really sure how useful it is, and unless the git mirror is out of sync there hasn't been a commit for a few months, so it's hard to know its future - but it's there anyway.

My understanding is that a reference implementation of a finaliser will be released at some point which will make BRIG a bit more interesting (writing one myself, e.g. for epiphany, is a bigger task than i'm interested in right now). I'm probably going to have more of a look at aparapi and the other java stuff for the time being but eventually get the llvm based tools built as well. But ugh ... CMake.

Tagged hacking, hsa, opencl, parallella.
Sunday, 16 March 2014, 04:00

Aparapi on HSA on Slackware on Kavaeri on ASROCK

Although I've been waiting with bated breath for HSA to arrive ... the last I heard, about a month ago via the aparapi mailing list, was that the drivers weren't quite ready yet. So I was content to wait patiently a bit longer. Then somehow the first I heard of the alpha becoming available was from one of the few comments on this blog, and apparently it's been out for a few weeks. I couldn't find any announcement about it?

So yesterday before I went out, and this morning, I followed Linux-HSA-Drivers-And-Images-AMD and SettingUpLinuxHSAMachineForAparapi trying to get something working. As I'm using a different motherboard and OS it was a little more involved - although I made it more involved than it should've been by making a complete pig's breakfast out of every step along the way due to being a bit out of practice.

But after getting a working kernel built and X sorted I just ran the test example a few seconds ago:

$ ./runSquares.sh 
using source from Squares.hsail
0->0, 1->1, 2->4, 3->9, 4->16, 5->25, 6->36,
     ;7->49, 8->64, 9->81, 10->100, 11->121,
     ;12->144, 13->169, 14->196, 15->225, 16->256,
     ;17->289, 18->324, 19->361, 20->400, 21->441,
     ;22->484, 23->529, 24->576, 25->625, 26->676,
     ;27->729, 28->784, 29->841, 30->900, 31->961,
     ;32->1024, 33->1089, 34->1156, 35->1225, 36->1296,
     ;37->1369, 38->1444, 39->1521,
PASSED
$

I'm presuming 'PASSED' means it worked.

I'm not sure how much I'll do today but I'll next look at the hsa branch of aparapi, sumatra(?), and then I want to look a bit closer. I haven't been able to find much detailed technical documentation yet, but there is the kernel driver at least now and hopefully it's coming soon.

On Slackware

I'm using the ASROCK FM2A88X-ITX+ motherboard with Slackware64 14.1 and using the DVI and HDMI outputs in a dual-head configuration. Just getting Slackware 14.1 working on it reliably required a BIOS upgrade but i'm not sure what version it is right now.

To compile a fresh checkout of the correct kernel I tried the supplied kernel config file 3.13.0-config at first, but that didn't work - it just hung on the loading kernel line from elilo. After a couple of aborted attempts I managed to get a working kernel by starting with /boot/config-generic-3.10.17 as the .config file, running make oldconfig and holding down return until it finished to accept all the defaults, then using make xconfig to make sure my filesystem driver wasn't a module (which I of course forgot the first time).

Getting dual-screen X was a bit confusing - searches for xorg.conf configuration are pretty much a waste of time I think, mostly because every config file is filled with non-important junk. But I finally managed to get it going, even if for whatever reason it comes up in cloned mode; I can fix that manually by running xrandr after I login. Because I'm not ready to make this permanent that's good enough for me. As I was previously using the fglrx driver I had initially forgotten to de-blacklist the radeon kernel module, but that was an easy fix.

This is how I set up the screen config.

$ xrandr --output HDMI-0 --right-of DVI-0

I'm not ready to make this my system yet because afaik OpenCL isn't available for this driver interface yet. Although the Okra stuff includes libamdhsacl64.so so presumably it isn't too far away.

Aparapi

I got aparapi going quite easily.

But beware, don't run '. ./env.sh' directly to start with - any error and it just closes your shell window! So test with 'sh ./env.sh' until it passes its checks.

I used the ant that comes with netbeans and I already had AMD APP SDK 2.9 and Java 8 installed.

Not sure if it's needed but I noticed a couple of variables were blank so I set them in env.sh.

export APARAPI_JNI_HOME=${APARAPI_HOME}/com.amd.aparapi.jni
export APARAPI_JAR_HOME=${APARAPI_HOME}/com.amd.aparapi

Once env.sh was sorted it built in a few seconds and the mandelbrot demo ran in suitably impressive fashion.

Well this should all keep me busy for a while ...

Tagged hacking, hsa, java, opencl.
Wednesday, 12 March 2014, 13:15

JNI, memory, etc.

So a never-ending hobby has been to investigate micro-optimisations for dealing with JNI memory transfer. I think this is at least the 4th post dedicated solely to the topic.

I spent most of the day just experimenting and kinda realised there wasn't much point, but I do have some nice plots to look at.

This is testing 10M calls to a JNI function which takes an array - either a byte[] or a ByteBuffer. For this first set of tests they are pre-allocated outside of the loop.

The following tests are performed:

Elements

Uses Get/SetArrayElements, which on hotspot always copies the memory to a newly allocated block.

Range alloc

Uses Get/SetArrayRegion, and inside the JNI code always allocates a new block to store the transferred data and frees it on exit.

Critical

Uses Get/ReleasePrimitiveArrayCritical to access the JVM memory directly.

ByteBuffer

Uses the JNIEnv entry points to retrieve the memory base location and size.

Range

Uses Get/SetArrayRegion but uses a pre-allocated (bss) buffer.

ByteBuffer field

Uses GetLongField and GetIntField to retrieve the package/private address and size values directly from the Buffer object. This makes it non-portable.

I'm running it on a Kaveri APU with JDK 1.8.0-b129 with default options. All plots are generated using gnuplot.
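For reference, the C side of each approach looks roughly like the following. This is a cut-down sketch with made-up class and function names rather than the benchmark code itself, and it only shows the read direction:

  #include <jni.h>
  #include <stdint.h>

  /* Elements: hotspot always copies to a new block (and copies back on release) */
  JNIEXPORT void JNICALL Java_Bench_readElements(JNIEnv *env, jclass jc, jbyteArray jarr) {
          jbyte *p = (*env)->GetByteArrayElements(env, jarr, NULL);
          /* ... read p[0 .. len) ... */
          (*env)->ReleaseByteArrayElements(env, jarr, p, JNI_ABORT);
  }

  /* Range: copy into a pre-allocated (bss) buffer; 'Range alloc' malloc()s/free()s one instead */
  static jbyte tmp[65536];
  JNIEXPORT void JNICALL Java_Bench_readRange(JNIEnv *env, jclass jc, jbyteArray jarr) {
          jsize len = (*env)->GetArrayLength(env, jarr);
          (*env)->GetByteArrayRegion(env, jarr, 0, len, tmp);
          /* ... read tmp[0 .. len) ... */
  }

  /* Critical: access the JVM memory directly */
  JNIEXPORT void JNICALL Java_Bench_readCritical(JNIEnv *env, jclass jc, jbyteArray jarr) {
          jbyte *p = (*env)->GetPrimitiveArrayCritical(env, jarr, NULL);
          /* ... read p, no other JNI calls in here ... */
          (*env)->ReleasePrimitiveArrayCritical(env, jarr, p, JNI_ABORT);
  }

  /* ByteBuffer: JNIEnv entry points for a direct buffer */
  JNIEXPORT void JNICALL Java_Bench_readBuffer(JNIEnv *env, jclass jc, jobject jbuf) {
          jbyte *p = (*env)->GetDirectBufferAddress(env, jbuf);
          jlong size = (*env)->GetDirectBufferCapacity(env, jbuf);
          /* ... read p[0 .. size) ... */
  }

  /* ByteBuffer field: peek Buffer.address/capacity via cached field ids (non-portable) */
  static jfieldID addressFID, capacityFID;    /* looked up once, e.g. at class init */
  JNIEXPORT void JNICALL Java_Bench_readBufferField(JNIEnv *env, jclass jc, jobject jbuf) {
          jbyte *p = (jbyte *)(intptr_t)(*env)->GetLongField(env, jbuf, addressFID);
          jint size = (*env)->GetIntField(env, jbuf, capacityFID);
          /* ... read p[0 .. size) ... */
  }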

Update: I came across this more descriptive summary of the problem at the time, and think it's worth a read if you've ended up here somehow.

Small arrays

The first plot shows a 'no operation' JNI call - the pointer to the memory and the size is retrieved but it is not accessed. For the Range cases only the length is retrieved.

What can be seen is that the "ByteBuffer field" implementation has the least overhead - by quite a bit compared to using the JNIEnv entry points. From the hotspot source it looks like they perform type checks which are adding to the cost.

Also of interest is the "Range alloc" plot, which only differs from the "Range" operation by a malloc()/free() pair - i.e. the JNI call invocation overhead is pretty much insignificant compared to how willy-nilly C programmers throw those around. This is also timing the Java loop of course. The "Range" call only retrieves the array size in this case, although interestingly that is slower than retrieving the two fields.


The next series of plots are for implementing a dummy 'load'. The read load is to add up every byte in the array, and the write load is to write the array index to the array. It's not particularly important just that it accesses the memory.

Well, they're all pretty close and follow the overhead plot as you would expect them to. The only real difference is between the implementations that need to allocate the memory first - but small arrays can be stored on the stack 'for free'.

The only real conclusion is: don't use GetArrayElements() or malloc space for short arrays!


Larger arrays

This is the upper area of the same plots above.

Here we see that by 8K the overhead of the malloc() is insignificant even compared to the small amount of work being performed, so it vanishes from the time - although GetArrayElements() is still a bit slower. The Critical and field-peeking ByteBuffer edge out the rest.

And now some strange things start to happen which don't seem to have an obvious reason. Writing the data to bss and then copying it using SetArrayRegion() has become the slowest ... yet if the memory is allocated first it is nearly the fastest?

And even though the only difference between the ByteBuffer variants is how it resolves Buffer.address and Buffer.capacity ... there is a wildly different performance profile.

And now even more weirdness. Performing a read and then a write ... results in by far the worst performance from accessing a ByteBuffer using direct field access, yet just about the best when going through the JNIEnv methods. BTW the implementation rules out most cache effects - this is exactly the same memory block at exactly the same location in each case, and the linearity of the plot shows it isn't size related either.

And now GetArrayElements() beats GetArrayRegion() ...

I have no idea on this one. I re-ran it a couple of times and checked the code but perhaps I missed something.


Dynamic memory

Perhaps it's just not a very good benchmark. I also tried allocating the Java memory inside the loop, which is another extreme case. At least these should give some bracket.

Here we see Critical running away with it, except for the very small sizes which will be due to cache effects. The ByteBuffer results bear out the "common knowledge" that these things are expensive to allocate (much more so than malloc) so are only suitable for long-lived buffers.

Again with the SetArrayRegion + malloc stealing the show. Who knows.

It only gets worse for the ByteBuffer the more work that gets done.


The zoomed plots look a bit noisy so i'm not sure they're particularly valid. They are similar to the pre-allocated version except the ByteBuffer versions are well off the scale at that size.

After all this I'm not sure what conclusions to draw. Well, for one, OpenCL has so many other overheads that I don't think any of these will amount to more than a rounding error ...

Invocation

I also did some playing around with native method invocation. The goal is just to get a 'pointer' to a native resource in the JNI and just to compare the relative overheads. The calls just return it so it isn't optimised out. Each case is executed for 100M times and this is the result of a fourth run.

call

This is what I used in zcl. An object method is invoked and the instance method retrieves the pointer from 'this.p'.

calle

The same but the call is wrapped in a try { } catch { } within the loop, and the method declares it throws an exception.

callp

An instance method where an anonymous pointer is passed to the JNI.

calls

A static method which takes the object as a parameter. The JNI retrieves 'this.p'.

callsp

This is the commonly used approach whereby an anonymous pointer is passed as a parameter to a static method.

The three types are the type of the pointer. I was going to test this on a 32-bit platform but ran out of steam, so the integers don't make much difference here. int and long are just simple types and buffer stores a 'struct' as a ByteBuffer. The latter is how I originally implemented jjmpeg but clearly that was a mistake.
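The C side of these variants (for the long case) is roughly as follows - a sketch with a made-up class name, but it shows where the pointer comes from in each case:

  #include <jni.h>

  static jfieldID p_fid;      /* field id of 'this.p', cached at class init */

  /* call / calle: instance method, pointer retrieved from this.p
     (calle is identical here, the java method just declares an exception) */
  JNIEXPORT jlong JNICALL Java_Obj_call(JNIEnv *env, jobject self) {
          return (*env)->GetLongField(env, self, p_fid);
  }

  /* callp: instance method, anonymous pointer passed in directly */
  JNIEXPORT jlong JNICALL Java_Obj_callp(JNIEnv *env, jobject self, jlong p) {
          return p;
  }

  /* calls: static method which takes the object as a parameter */
  JNIEXPORT jlong JNICALL Java_Obj_calls(JNIEnv *env, jclass jc, jobject o) {
          return (*env)->GetLongField(env, o, p_fid);
  }

  /* callsp: static method taking the anonymous pointer - the common approach */
  JNIEXPORT jlong JNICALL Java_Obj_callsp(JNIEnv *env, jclass jc, jlong p) {
          return p;
  }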

Results

    type    call    calle   callp   calls   callsp

    int     1.062   1.124   0.883   1.100   0.935
    long    1.105   1.124   0.883   1.101   0.936
    buffer  5.410   5.401   2.639   5.365   2.631

The results seemed pretty sensitive to compilation - each function is so small that there may be some margin of error.

Anyway the upshot is that there's no practical performance difference across all implementations and so the decision on which to use can be based on other factors, e.g. just pass objects to the JNI rather than the mess that passing opaque pointers creates.

And ... I think that it might be time for me to leave this stuff behind for good.

Tagged hacking, java.
Tuesday, 11 March 2014, 14:35

javafx on parallella via remote X, with added opencl

After the signoff on the last post I thought maybe I had spoken too soon about being able to write front-end code for the parallella in Java. I couldn't get JavaFX to work via remote X on ARM - it writes directly to the framebuffer. While that's a nice thing to have it doesn't help me. Always one step forward two steps back eh (I mean I could always just use swing but where's the fun in that).

The gl screensavers and other X11 stuff worked ok. I couldn't find any useful mention of it on the internets until I had a look in jira (openjdk bug system). Came across Monocle which I thought was worth a look. It didn't look all that promising until I saw it was already built-in to the ARM builds.

And ... it worked. Well I guess that's something. I had to add the following to the command line to get javafx working over remote X from an arm box:

  -Djavafx.platform=monocle -Djavafx.order=sw

I 'fixed' some coprthr peculiarities in the opencl code and got it running (but only on the arm cpu so far):

(Not sure what blogger is doing, it doesn't look that bad even with my shitty palette).

At first the UI was very unresponsive, then I discovered that clEnqueueNDRangeKernel or clEnqueueReadBuffer runs synchronously - which is obviously not something to be doing from an animation handler. I chucked that through an Executor and whilst it won't win any land-speed records it is now basically usable.

Tagged hacking, java, javafx, opencl, parallella.
Tuesday, 11 March 2014, 09:59

zcl 0.3 released

I just dumped another snapshot to the zcl home page.

Notes are there but a little more on the array stuff.

Basically it just lets you pass a java array instead of a Buffer to the read/write buffer/image commands. It supports non-blocking operations.

I was looking through the aparapi source and noticed it was using GetPrimitiveArrayCritical and ReleasePrimitiveArrayCritical in places where I didn't think you should, so I tried using that for pinning the array rather than GetArrayElements which I tried last time (and decided not to include). This should be more efficient because the *Elements calls all seem to allocate memory and create a copy.
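On the C side the blocking case is roughly the following - a sketch with made-up names rather than the actual zcl code - the important part being that the pin only lasts for the duration of the single JNI call:

  #include <jni.h>
  #include <stdint.h>
  #include <CL/cl.h>

  /* blocking writeBuffer from a java array: pin, enqueue, unpin */
  JNIEXPORT jint JNICALL Java_CLQueue_writeBuffer(JNIEnv *env, jclass jc,
                  jlong queue, jlong mem, jlong size, jbyteArray jarr) {
          void *p = (*env)->GetPrimitiveArrayCritical(env, jarr, NULL);
          cl_int res = clEnqueueWriteBuffer((cl_command_queue)(intptr_t)queue,
                  (cl_mem)(intptr_t)mem, CL_TRUE, 0, (size_t)size, p, 0, NULL, NULL);
          /* JNI_ABORT: the array was only read from, nothing to copy back */
          (*env)->ReleasePrimitiveArrayCritical(env, jarr, p, JNI_ABORT);
          return res;
  }

Strictly the JNI rules frown on blocking or calling back into the JVM while a critical region is held, which is part of what the update further down is about.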

A really dumb test case bears this out and I'm getting a 3-4x performance improvement. Yay! I think that's enough to include in the api now, so I did - even though it bloats out the api considerably with 7 overloaded entry points for each such method (Buffer, byte, short, int, long, float, double).

But there's something of a problem in that, because of the asynchronous nature of the non-blocking commands, the pinning required must be for an indeterminate time. This is explained as undesirable in the JNI docs, but without deeper knowledge it's hard to know if it matters on real jvms.

I'm using an event listener to find out when a non-blocking job is complete and then releasing the array as soon as possible but this adds measurable overhead and other potential complications. I may try to implement a more explicit management mechanism and see if that makes a lot of difference but it would have to be significant to be worth the extra hassles involved in using such an api.

But until I have a use for zcl, ... it might all just be on hold for the moment, because ...

elf-loader

Something I mentioned earlier was revisiting the elf-loader code with a different set of goals. And so i'm thinking of pausing this OpenCL stuff and moving in that direction. Right now apart from OpenCL there's no way to access the epiphany chips from Java and I for one have no interest in doing any frontend stuff in C (there is some work on the sumatra thing but that could be a while and serves a different purpose anyway).

The more I think about it the more it seems like the elf-loader code should provide a pretty good basis for a decent epiphany runtime that still lets you write the low-level code on the device but creates an easier way to manage their code compared to a fugly linker script and an object format that was designed for a system with processes running in virtual memory.

Quite a bit of work though so i've gotta be peachy keen to get it started.

Update: I found out that I was completely wrong about the way aparapi uses Get/ReleasePrimitiveArrayCritical(), and as I originally thought, holding a critical array across jni calls and/or threads is probably not a very good idea at all, even if it works. It basically just locks GC completely (and not much else). So I will probably have to resort to using a temporary malloc() buffer to honour the api for non-blocking transfers. That's if I don't just revert it all out of existence. I may not even be able to use critical access for the blocking transfers due to supporting java native kernels and various notification callbacks.
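The temporary-buffer version might end up looking something like this - again just a sketch with made-up names, and it uses a native OpenCL event callback to free the staging copy rather than the java-side event listener (error handling elided):

  #include <jni.h>
  #include <stdint.h>
  #include <stdlib.h>
  #include <CL/cl.h>

  /* free the staging copy once the transfer has completed */
  static void CL_CALLBACK free_staging(cl_event ev, cl_int status, void *data) {
          free(data);
          clReleaseEvent(ev);
  }

  /* non-blocking writeBuffer from a java array via a temporary copy */
  JNIEXPORT jint JNICALL Java_CLQueue_writeBufferAsync(JNIEnv *env, jclass jc,
                  jlong queue, jlong mem, jlong size, jbyteArray jarr) {
          jbyte *tmp = malloc((size_t)size);
          cl_event ev;
          cl_int res;

          (*env)->GetByteArrayRegion(env, jarr, 0, (jsize)size, tmp);
          res = clEnqueueWriteBuffer((cl_command_queue)(intptr_t)queue,
                  (cl_mem)(intptr_t)mem, CL_FALSE, 0, (size_t)size, tmp, 0, NULL, &ev);
          if (res == CL_SUCCESS)
                  clSetEventCallback(ev, CL_COMPLETE, free_staging, tmp);
          else
                  free(tmp);
          return res;
  }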

Tagged hacking, java, opencl, parallella.
Monday, 10 March 2014, 13:59

Return of the king

Spent the afternoon and evening drinking in town.

One interesting part was when Jay Weatherall (the current Premier of the state) and a small entourage sat next to me at the bar @ The Exeter. As he was on the clock he and his wife stuck to a single soda water each which was a bit weak but understandable (state election is next Saturday). He was talking to some Scottish bloke ... that turned out to be Billy Bragg according to overheard gossip from the barman after they left. It's been probably 20 years since I saw him in a video clip so I didn't recognise him at all and just thought he was another politician (the mention of appearing on Q and A and Rockwiz in the next week or so had me confused mind you). I didn't really listen in too much but what I heard sounded interesting enough if a little dire (underclass oppression and that kinda stuff). Jay is Labor which is not pinko enough for me these days but he seems like a nice enough bloke from what little i've seen (being in a minor state with mostly centralised networked news services across the country we barely get any local coverage these days - even if i did watch it. Which I don't.). I kept to myself but an after-the-fact good luck to him anyway.

Bloody nice weather for autumn mind you. 35 and overcast which takes the sting out of the full sun - although one has to be careful not to get burnt severely, barely a breeze, a bit humid which isn't an every-day occurrence in the driest state in the world. Mowed the lawn after I woke up from my birds-are-singing all nighter hacking the parallella the `night' before and then had some lunch before heading out on my bike. By 10pm it just started to cool down a little but was still very pleasant and hasn't really changed since - if i were 10 years younger and my headlight was fully charged I might have headed out for an evening ride instead of just slinking home to some junk food and writing a pointless blog post. I was surprised how feral The Austral got by the time I left though, I dropped by to have a traveller before I rode home after spending most of the evening at The Exeter and it felt like it was 2 in the morning or something. A little bit too 'rough' for my sensibilities, but maybe The Adelaide Cup had something to do with that because it is pretty much a carnival of bogan and some of the revellers end up in town afterwards (horse race == public-holiday around here, how fucked is that). Tis ok during the day though.

womadelaide was on this weekend so there were plenty of people about (like politicians and singers). Given I was a hermit most of the summer for various personal reasons and the fact that i'm starting work soon, I thought I should make the effort and head out whilst the weather's holding. Twas good overall and hopefully i'm not hungover for a fifth day straight tomorrow.

Update: So a few days later ... something a bit strange. I had to go to the city to the dentist before 9am and as I passed one of the busiest intersections in the whole city I saw some middle aged smurfs wildly waving placards from the median strip. Between them was the local Liberal candidate waving madly at passing cars. Bit stupid and quite dangerous. But then again all I know of her is from the ABC election coverage a few years ago: apparently she used to run a hair salon (but the way they said it made it sound like she ran a brothel!). Obviously that qualifies you around here.

But today it's polling day and a thunderstorm woke me up a bit earlier than i'd hoped so I might get voting out the way straight up.

Tagged biographical, rants.
Copyright (C) 2019 Michael Zucchi, All Rights Reserved. Powered by gcc & me!