Single Header Bullet


I reduced Bullet 2.83 to a single C++ header file.


Largely to see if I could … hey, SQLite saw a 5%-10% speed boost when they did this, right?

… oh - and to ease adoption I guess … maybe …


I used a slightly complicated Scala script that I’m not interested in examining again … for now …

Basically, I took the 2.83 stripped variant of Bullet and used a Scala program to digest it. It walked all of the files following the preprocessor #include commands while sprinkling #line directives into place to keep the line numbers correct. I had to force a few bits to happen in a specific order, and there were some trivial functions (two, I think) that were declared twice, but I think that it worked. It certainly passed a “hello world” Bullet test … so it worked for my woefully inadequate testing.
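In the same spirit, here's a minimal Python sketch of that walk - recursively inlining quoted #includes and sprinkling #line directives as it goes. The function names and the in-memory `sources` map are my illustration, not the original Scala script (which also handled ordering quirks and duplicate definitions):

```python
import re

INCLUDE = re.compile(r'\s*#\s*include\s*"([^"]+)"')

def amalgamate(name, sources, seen, out):
    """Recursively inline local #include "..." files, emitting #line
    directives so compiler errors still point at the original files.
    `sources` maps file names to their text (a stand-in for the real tree)."""
    if name in seen:                     # inline each header only once
        return
    seen.add(name)
    out.append(f'#line 1 "{name}"')
    for number, line in enumerate(sources[name].splitlines(), start=1):
        match = INCLUDE.match(line)
        if match and match.group(1) in sources:
            amalgamate(match.group(1), sources, seen, out)
            # resume correct numbering in the including file
            out.append(f'#line {number + 1} "{name}"')
        else:
            out.append(line)

def single_header(entry, sources):
    out = []
    amalgamate(entry, sources, set(), out)
    return "\n".join(out)
```

Headers the sketch has already seen are skipped rather than re-emitted, which is roughly why the twice-declared functions only bit in a couple of places.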


Anyone who adheres to Sean Barrett‘s philosophies may be amused by this.


I’d like to have a smaller version of Bullet to work with. Smaller footprint, no serialisation, drop the aligned allocators, remove as much virtual as possible.

… but definitely keep the option to use double … ’cause … y’know: VR!

Peter's Awesome Lua with the Core functionality we care about

TL;DR: results

Duktape is a C engine for executing ECMAScript 5.1-ish (… or JavaScript, or whatever we’re calling it this week). Notably, it’s distributed as a trio of source files to ease integration.

Lua is a C script engine (for the Lua language) and is not distributed in such a nice way. So I spent a few hours preparing a (Python) script to repack Lua 5.3.3 into a single header. (By the time I fix my blog posting stuff, 5.3.4 will probably be out.)

Merry (belated) Christmas!

Why would you do this?

Mostly to conform to Sean “nothings” Barrett’s constraints.

Who would care?

Anyone who wants to slap (somewhat) minimal Lua into a project and see what happens.

How did you do this?

I used a Python 3.5.1 script to crack open the .tar.gz file, scan the Makefile, then concatenate sources as needed.

  • I scan for some test cases that you (obviously) won’t have - sorry
  • You’ll need a .c (not .cpp … sorrynotsorry) file for Lua to actually compile … so there’s that.
  • … my tests are written in .cpp though … as is the Catch unit testing framework
  • I used a regex and POSIX line endings to get the whole thing under 1MB
    • so that KDiff likes it
    • … hopefully future updates won’t break this
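The gist of the repack, sketched in Python. The variable names `CORE_O` and `LIB_O` come from Lua's real src/Makefile, but this script is an illustrative sketch under those assumptions, not the actual one (which also handled the archive, the test cases, line endings, and the regex shrinking):

```python
import re

def lua_sources(makefile_text):
    """Pull the object lists out of Lua's src/Makefile and map them back
    to .c files, preserving the order the Makefile uses."""
    text = makefile_text.replace("\\\n", " ")     # join continuation lines
    objects = []
    for var in ("CORE_O", "LIB_O"):
        match = re.search(rf'{var}=\s*(.*)', text)
        if match:
            objects += match.group(1).split()
    # skip make variables like $(MYOBJS); turn foo.o into foo.c
    return [o[:-2] + ".c" for o in objects if o.endswith(".o")]

def repack(makefile_text, read_source):
    """Concatenate the sources into one blob, banner-commented per file."""
    parts = ["/* single-file Lua - generated, do not edit */"]
    for name in lua_sources(makefile_text):
        parts.append(f"/* ==== {name} ==== */")
        parts.append(read_source(name))
    return "\n".join(parts)
```

Scanning the Makefile (rather than globbing `*.c`) is what keeps the concatenation order matching the build the Lua authors intended.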


Lindenmayer Systems allow one to specify a series of replacement rules for transforming strings.

If the text is used as a series of drawing commands, including saving and restoring the cursor’s position, the technique can generate fairly interesting foliage.
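The rewriting step itself is tiny; here's a minimal Python sketch (the bracketed rule below is a textbook plant example, not one of mine):

```python
def expand(axiom, rules, depth):
    """Apply the Lindenmayer replacement rules to every symbol, `depth`
    times. Symbols without a rule are copied through unchanged."""
    for _ in range(depth):
        axiom = "".join(rules.get(symbol, symbol) for symbol in axiom)
    return axiom

# A classic bracketed example: F draws a segment, + and - turn the
# cursor, and [ ] save/restore its position - the same save/restore
# that chained scene-graph segments provide.
plant = expand("F", {"F": "F[+F]F[-F]"}, 2)
```

Each pass replaces every symbol in parallel, which is why two passes of a four-F rule already give sixteen branches.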

A scene graph offers the ability to save/restore by chaining segments together. I used Unity3D’s ScriptableObject to create “Languages” with replacement rules. I also created “Dictionaries” mapping symbols to segments made from GameObject prefabs. Finally, I explicitly marked a node in the segment as being the Leaf to which any successive stuff should be attached.

By allowing multiple overlapping rules, I let the system show a lot of variance. I added “Soar” mutilators to tweak the spawned things and show some more variance. By tweaking the “seed” value with the world position, I ended up with something that could use the same prefab to produce a whole forest of trees.

Overall I’m happy to move forward with this as a tool for filling in my own virtual forests. I think that it needs some work on the “usability” and could stand to take some lessons from Unity3D’s built-in tree system. Seems pretty good for a Saturday afternoon bit of messing about.

Vive Cane

I’m still alive just … busy, not bloggy … maybe someday I’ll be more bloggy. Here’s something that kept me busy …

Teleporting everywhere feels wrong, so my suggestion is to use the Vive wand like a cane. Two minutes feels a bit long, but here’s a video showing it off.

More or less: when you grip the/a wand, your avatar is planting a cane in the world from which you can push yourself. While the grip is held, I constantly offset the “foot” of the avatar so that the wand’s in-game position (in VR) doesn’t change. With two wands, you can crawl around in VR (which Crytek already worked out), which opens some interesting possibilities.
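The offset trick boils down to a couple of vector subtractions per frame. Here's a toy sketch in Python - the names, the tuples-for-vectors, and the anchor handling are all my illustration, not the actual Unity code:

```python
def cane_step(rig_offset, hand_local, anchor_world):
    """One frame of 'cane' locomotion: while the grip is held, shift the
    rig so the wand's world position stays pinned at anchor_world, the
    spot where the cane was planted. Positions are (x, y, z) tuples."""
    wand_world = tuple(r + h for r, h in zip(rig_offset, hand_local))
    correction = tuple(a - w for a, w in zip(anchor_world, wand_world))
    # pushing your hand one way shoves the whole avatar the other way
    return tuple(r + c for r, c in zip(rig_offset, correction))
```

Because the correction exactly cancels the hand's motion, the wand appears nailed to the world and the body swings around it - which is the whole "pushing yourself on a cane" feel.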

At this point … I don’t really have much to say about this, it is what it is - an amusing way to avoid joystick motion sickness. There’s a handful of honey-dos I’d like to chase down with it but … 9-5, home-made pizzas, social life, StarCraft’s Co-Op, and my own build tool all compete for my attention.

If someone is reading this in the future - reach me on Twitter if you have a question or want a follow up.

Adding Notepad++ Macros to Atom

I haven’t posted anything in a while … so here’s how to get macros that work kind-of-like Notepad++

  1. Install atom-keyboard-macros into Atom.

    The default keybindings did nothing for me … sorry

  2. open your keybindings.cson

    • Hit CTRL+, > click on Keybindings > click on the blue text that says your keymap file
  3. paste this wodge into the bottom of your keybindings.cson PRESERVE THE INDENTATION!

    # almost Notepad++ macros for Atom!
    # based on
    # (the selector below is my assumption - bind under whichever scope suits you)
    'atom-text-editor':
      'ctrl-shift-r': 'atom-keyboard-macros:start_kbd_macro'
      'ctrl-alt-r': 'atom-keyboard-macros:end_kbd_macro'
      'ctrl-shift-p': 'atom-keyboard-macros:call_last_kbd_macro'
  4. there is no step 4

Mirroring Git/GitHub to Hg/BitBucket

This seemed a lot longer when I planned it in my notebook at lunch.

GitHub user | project (both sides) | BitBucket user | SCM  | Schedule
ocornut     | imgui                | g-pechorin     | None | Periodic
  1. Install hg-git
    • You’ll have to do this on the Jenkins server
    • You’ll have to do it either for the Jenkins user or all
    • I’m using an OS X machine as my host, so I was able to use easy_install to install hg-git and dulwich
  2. setup a project on GitHub
  3. create a Jenkins Freestyle project which runs periodically
    • Polling the SCM was NOT an option since there’s no default branch on GitHub
      • … this is a quirk of hg-git … I think
      • … IIRC/YRMV - so sling me a tweet or whatever if I’m wrong
  4. program the job to pull from git, push to hg, and treat an exit status of 1 as success (hg push exits with 1 when there was nothing to push)
    • this was only elaborate because I needed it to not-fail when there were no changes
      if [ -d "imgui" ]; then
        echo "Re-Using ImGUI"
        cd imgui
        hg pull git+ssh://
      else
        echo "Cloning ImGUI"
        hg clone git+ssh://
        cd imgui
      fi
      hg push -f ssh://
      retcode=$?
      # hg push exits with 1 when there is nothing to push - that's fine
      if [ $retcode -eq 0 ] || [ $retcode -eq 1 ]; then
        exit 0
      else
        exit $retcode
      fi

Mercurial / Hg SubRepos

I’m still trying to catch up on stuff following Develop. I’ve decided to write a post about my experience(s) switching my work over to SubRepos.

I am unaware of the “reason” why they’re considered “bad.” Perhaps it’s a Unix thing? Maybe they don’t work as well as people feel that they should?


I have a (secret) project called “nite-nite/“ in which I use and develop some public-domain headers. I want this public-domain stuff to be … well … public-domain and visible to all. Putting these into a sub-repository seemed appropriate, so I started by setting up a separate repository on BitBucket. Following the basic usage, I cloned this into my existing working copy and set it up as directed;

C:\Users\peter\Desktop\nite-nite>hg clone ssh://
C:\Users\peter\Desktop\nite-nite>echo pal-public-domain = pal-public-domain > .hgsub
C:\Users\peter\Desktop\nite-nite>hg add .hgsub

So far so good right? Well … not so much. The push command won’t work right with the setup we/I just used.

The fix is simple, the .hgsub file looks like this …

pal-public-domain = pal-public-domain

… and it needs to look like this …

pal-public-domain = ssh://

So commit/amend the previous commit and push

C:\Users\peter\Desktop\nite-nite>hg commit -m "added public domain stuff"
C:\Users\peter\Desktop\nite-nite>hg push

I’m reasonably happy with this. As a bonus, I applied it to my blog, so the embedded Unity project can be included as source rather than as a binary. Great - now I’ll get on with the actual work of moving those headers into the public-domain project.


I fixed a bug. I don’t have a video of this bug - just the fix


So - two more bugs to go before I can add features

Stop Motion

Mother told me to try something different.

I made a stop-motion video (mostly to see if I could) with bits I had lying around or within a 5-minute walk. … also, I wanted to see if I could record “blocked out” storyboards, since I’m crap at drawing poses.

I spent £2 on some pipe cleaners and a dopey phone stand. I took the pictures on my phone. I used some Blu-Tack to help posing, OpenCV 3 (via Python) to encode the JPEGs at 3 FPS, and ffmpeg to reduce the file size so that I could upload it in < 40 minutes.

The writeup took about 40 minutes … so maybe I didn’t save much time

I think the fella could use some firmer limbs (maybe pasta tubes?) to make animation easier. I went looking for used G.I. Joe toys to produce storyboards from; I wanted the articulated hips and joints to show things like dudes slouching. With firmer limbs, I’d probably get smoother results … maybe …

It also might be good to have a steadier hand when taking the pictures.

Dual Hetero Quadro on Heaven

Literally the punchiest title I could come up with. I’ve been told that heterogeneous GPU setups are ridiculously slower than a single GPU. This is largely an anecdotal shrug of “hey - a second GPU doesn’t really slow my computer down in any meaningful manner at all!” I’m sure that I did this all wrong and that the GPU could be tweaked into a configuration where this all becomes conclusive - the legwork for that isn’t interesting to me, so I haven’t done it.

TL;DR: a second GPU which doesn’t match up or SLI won’t give you cooties.


My workstation came with an NVIDIA PNY Quadro K600 GPU, which was replaced with a K620. I decided to put the K600 back into my computer as a secondary card and run the Unigine Heaven benchmark to see just how slow a third wheel makes it. All tests were carried out at full-screen-exclusive resolution, but otherwise used the Heaven benchmark’s default “Basic” and “Extreme” presets. Stereoscopy was disabled for the non-stereo setting, and I used 3D Vision as Unigine’s stereo method. For stereo settings, the 3D OpenGL Stereo profile was used with Stereo = Enabled and Display Mode = Generic Active Stereo.


I really can’t see any reason that a consumer would use this setup - it just amused me.

Configuration | Detail Setting | Stereo | Score | FPS  | Min FPS | Max FPS
K620          | Extreme        | Yes    | 131   |  5.2 |  3.5    | 24.4
K620          | Basic          | Yes    | 269   | 10.7 |  5.2    | 21.6
K620          | Extreme        | No     | 281   | 11.2 |  7.3    | 22.5
K620          | Basic          | No     | 592   | 23.5 | 15.0    | 42.2
K620 + K600   | Extreme        | Yes    | 131   |  5.2 |  3.7    | 10.0
K620 + K600   | Basic          | Yes    | 270   | 10.7 |  8.2    | 18.4
K620 + K600   | Extreme        | No     | 280   | 11.1 |  7.2    | 22.0
K620 + K600   | Basic          | No     | 591   | 23.5 | 14.8    | 40.5
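As a sanity check on the scores above, the swing from adding the K600 is well under half a percent in every configuration - a quick Python calculation over the table's numbers:

```python
# Scores from the table above: (detail, stereo) -> (K620 alone, K620 + K600)
scores = {
    ("Extreme", True):  (131, 131),
    ("Basic",   True):  (269, 270),
    ("Extreme", False): (281, 280),
    ("Basic",   False): (592, 591),
}

def change_percent(single, dual):
    """Signed % change in score after adding the second GPU."""
    return (dual - single) / single * 100.0

deltas = {k: round(change_percent(*v), 2) for k, v in scores.items()}
```

The biggest drop (Basic, no stereo) works out to about -0.17%, and one stereo case actually nudges upward.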


The relation between the numbers suggests that:

  • the second GPU does kind of slow it down measurably
  • the second GPU doesn’t slow it down noticeably
  • the Dual-GPU is a teensy bit helpful with Stereo rendering, but not quite worth the cost
  • in a real-game, I could tell the second GPU to do PhysX work and maybe see some improvement?

I may have left some junk running on the desktop during the single-GPU tests, so the difference could be skewed a bit. Regardless, I’m confident that adding the second GPU to this desktop hasn’t sucked up all my PCIe lanes or something silly.


  • how would this work in an AMD/NVidia mix? (Red/Green)
  • would an x86 OS do any good/evil with an x86 benchmark?
  • would something with PhysX really be that different good/evil?