Lindenmayer Systems allow one to specify a series of replacement rules for transforming strings.
If the text is used as a series of drawing commands, including saving and restoring the cursor’s position, the technique can generate fairly interesting foliage.
A scene graph offers the ability to save/restore by chaining segments together. I used Unity3D's ScriptableObject to create “Languages” with replacement rules. I also created “Dictionaries” mapping symbols to segments built from GameObject prefabs. Finally, I explicitly marked a node in each segment as the Leaf to which any subsequent segments are attached.
By allowing multiple overlapping rules, I let the system show a lot of variance. I added “Soar” mutilators to tweak the spawned segments and show some more variance. By deriving the “seed” value from each tree's world position, I ended up with something that could use the same prefab to produce a whole forest of trees.
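The string-rewriting core is tiny; here's a sketch of it in Python (the rule below is a textbook branching example, not one of my actual “Languages”):

```python
# minimal L-system rewriter; '[' and ']' are the save/restore-cursor commands
# that the drawing pass interprets, everything else passes through untouched
RULES = {"F": "F[+F]F[-F]F"}  # a classic branching rule, purely illustrative

def rewrite(axiom, rules, generations):
    for _ in range(generations):
        # replace every symbol that has a rule; keep the rest as-is
        axiom = "".join(rules.get(symbol, symbol) for symbol in axiom)
    return axiom

print(rewrite("F", RULES, 2))  # two generations of growth
```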
Overall I'm happy to move forward with this as a tool for filling in my own virtual forests. It needs some work on “usability” and could stand to take some lessons from Unity3D's built-in tree system, but it seems pretty good for a Saturday afternoon of messing about.
I’m still alive just … busy, not bloggy … maybe someday I’ll be more bloggy. Here’s something that kept me busy …
Teleporting everywhere feels wrong, so my suggestion is to use the Vive wand like a cane. Two minutes feels a bit long, but here’s a video showing it off.
More or less: when you grip a wand, your avatar plants a cane in the world that you can push yourself around with. While the grip is held, I constantly offset the “foot” of the avatar so that the wand's in-game position doesn't change. With two wands, you can crawl around in VR (which Crytek already worked out), which opens some interesting possibilities.
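In sketch form, the trick is just a per-frame offset; the tuples and function names below are mine, not anything from the SteamVR API:

```python
# toy model of the "cane": while the grip is held, shift the whole rig so the
# wand's world position stays pinned where the grip was first squeezed
# (plain 3-tuples stand in for whatever vector type your engine uses)

def add(a, b):
    return tuple(x + y for x, y in zip(a, b))

def sub(a, b):
    return tuple(x - y for x, y in zip(a, b))

def on_grip_pressed(rig_pos, wand_local):
    # plant the cane: remember the wand's world position at the moment of gripping
    return add(rig_pos, wand_local)

def while_grip_held(rig_pos, wand_local, anchor_world):
    # each frame, move the rig by however far the wand has drifted from the anchor,
    # so rig + wand_local lands back on the planted point
    wand_world = add(rig_pos, wand_local)
    return add(rig_pos, sub(anchor_world, wand_world))
```

Two wands just means two anchors.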
At this point … I don't really have much to say about this; it is what it is - an amusing way to avoid joystick motion sickness. There's a handful of honey-dos I'd like to chase down with it, but … 9-5, home-made pizzas, social life, StarCraft's Co-Op, and my own build tool all compete for my attention.
If someone is reading this in the future - reach me on Twitter if you have a question or want a follow up.
I haven't posted anything in a while … so here's how to get Atom.io macros that work kind-of-like NotePad++.
Install atom-keyboard-macros into Atom.
The default keybindings did nothing for me … sorry
1. hit `CTRL+,`
2. click on `Keybindings`, then click on the blue text that says `your keymap file`
3. paste this wodge into the bottom of your `keybindings.cson` - PRESERVE THE INDENTATION!

```cson
# almost NotePad++ macros for Atom.io!
# based on https://github.com/JunSuzukiJapan/atom-keyboard-macros
'atom-text-editor':
  'ctrl-shift-r': 'atom-keyboard-macros:start_kbd_macro'
  'ctrl-alt-r': 'atom-keyboard-macros:end_kbd_macro'
  'ctrl-shift-p': 'atom-keyboard-macros:call_last_kbd_macro'
```
there is no step 4
This seemed a lot longer when I planned it in my notebook at lunch.
| GitHub user | project (both sides) | BitBucket user | SCM | Schedule |
| --- | --- | --- | --- | --- |
- Install hg-git (there's a config sketch just after the script below)
  - You'll have to do this on the Jenkins server
  - You'll have to do it either for the Jenkins user or for all users
  - I'm using an OS X machine as my host, so I was able to use …
- set up a project on GitHub
- create a Jenkins Freestyle project which runs periodically
  - Polling the SCM was NOT an option since there's no `default` branch on GitHub
    - … this is a quirk of hg-git … I think
    - … IIRC/YRMV - so sling me a tweet or whatever if I'm wrong
- program the job to pull from git, push to hg, and ignore the exit code of `hg push` (see the script below)
  - this was only elaborate because I needed it to not-fail when there were no changes (`hg push` exits with 1 when there's nothing to push)

```bash
#!/bin/bash

# re-use the existing clone if we have one, otherwise clone afresh
if [ -d "imgui" ]; then
    echo "Re-Using ImGUI"
    cd imgui
    hg pull git+ssh://email@example.com:ocornut/imgui.git
else
    echo "Cloning ImGUI"
    hg clone git+ssh://firstname.lastname@example.org:ocornut/imgui.git
    cd imgui
fi

# push to BitBucket; hg exits with 1 when there was nothing to push,
# so accept 0 and 1 and let any other code fail the build
hg push -f ssh://email@example.com/g-pechorin/imgui
retcode=$?
if [ $retcode -eq 0 ] || [ $retcode -eq 1 ]; then
    exit 0
else
    exit $retcode
fi
```
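About that hg-git install: it boils down to a package plus one config entry. A rough sketch, assuming a pip-based install (your package manager may differ):

```
# on the Jenkins server: pip install hg-git
# then enable the extension for the Jenkins user (or in a system-wide hgrc):
[extensions]
hggit =
```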
I’m still trying to catch up on stuff following Develop. I’ve decided to write a post about my experience(s) switching my work over to SubRepos.
I am unaware of the “reason” why they’re considered “bad.” Perhaps it’s a Unix thing? Maybe they don’t work as well as people feel that they should?
I have a (secret) project called `nite-nite/` in which I use and develop some public-domain headers.
I want this public-domain stuff to be … well … public-domain and visible to all.
Putting these into a sub-repository seemed appropriate, so I started by setting up a separate repository on BitBucket.
Following the basic usage, I cloned this into my existing working copy and set it up as directed:
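From memory, it went something like this (the `pal` names are illustrative, not my actual paths):

```
$ cd nite-nite
$ hg clone https://bitbucket.org/me/pal-headers pal
$ echo "pal = pal" >> .hgsub
$ hg add .hgsub
$ hg commit -m "track the headers as a subrepo"
```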
So far so good, right? Well … not so much. The `push` command won't work right with the setup I just used. The fix is a simple change to the `.hgsub` file.
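Roughly speaking (same illustrative names as above), the broken version maps the subrepo to a bare local path:

```
pal = pal
```

… while the fixed version spells out the full remote URL, so `hg push` knows where the subrepo actually lives:

```
pal = https://bitbucket.org/me/pal-headers
```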
So commit/amend the previous commit and push again.
I'm reasonably happy with this. As a bonus, I applied it to my blog, so the embedded Unity project can now live there as source rather than a binary. Great - now I'll get on with the actual work of moving those headers into the public-domain project.
Mother told me to try something different
I made a stop-motion video (mostly to see if I could) … also, I wanted to see if I could record “blocked out” storyboards, since I'm crap at drawing poses.
I spent £2 on some pipe cleaners and a dopey phone stand. I took the pictures on my phone. I used some Blu-Tack to help with posing, pyOpenCV3 to encode the JPEGs at 3 FPS, and ffmpeg to reduce the file size so that I could upload it in under 40 minutes.
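The pyOpenCV3 step is only a few lines; a sketch of it (file names and codec here are illustrative, not necessarily what I used):

```python
# stitch numbered .jpg stills into a 3 FPS video with OpenCV 3
import glob
import cv2

frames = sorted(glob.glob("shots/*.jpg"))   # assumes all stills share one size
height, width = cv2.imread(frames[0]).shape[:2]

out = cv2.VideoWriter(
    "stopmotion.avi",
    cv2.VideoWriter_fourcc(*"MJPG"),  # fat but simple; ffmpeg shrinks it afterwards
    3.0,                              # the 3 FPS mentioned above
    (width, height),
)
for path in frames:
    out.write(cv2.imread(path))
out.release()
```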
The writeup took about 40 minutes … so maybe I didn't save much time.
I think the fella could use some firmer limbs (maybe pasta tubes?) to make animation easier. I went looking for used G.I. Joe toys to produce storyboards from; I wanted the articulated hips and joints to show things like dudes slouching. With firmer limbs I'd probably get smoother results … maybe …
It also might be good to have a steadier hand when taking the pictures.
Literally the punchiest title I could come up with. I've been told that heterogeneous GPU setups are ridiculously slower than a single-GPU setup. This is largely an anecdotal shrug of “hey - a second GPU doesn't really slow my computer down in any meaningful manner at all!” I'm sure that I did this all wrong and that the GPU could be tweaked into a setting where this all becomes conclusive - but the legwork for that isn't interesting to me, so I haven't done it.
TL;DR: a second GPU which doesn't match up or SLI won't give you cooties.
My workstation came with an NVidia PNY Quadro K600 GPU, which was replaced with a K620.
I decided to put the K600 back into my computer as a secondary card and run the UniGine Heaven benchmark, to see just how slow a third wheel makes it.
All tests were carried out at full-screen-exclusive resolution, but otherwise used the Heaven Benchmark's built-in presets of the same names (Basic and Extreme). Stereoscopy was disabled for the non-stereo runs, and `3D Vision` was used as UniGine's stereo method. For the stereo runs, the `3D OpenGL Stereo` driver profile was used with `Stereo` = `Enabled` and `Display Mode` = `Generic Active Stereo`.
I really can’t see any reason that a consumer would use this setup - it just amused me.
| Configuration | Detail Setting | Stereo | Score | Frames Per Second | Minimum FPS | Maximum FPS |
| --- | --- | --- | --- | --- | --- | --- |
| K620 + K600 | Extreme | Yes | 131 | 5.2 | 3.7 | 10.0 |
| K620 + K600 | Basic | Yes | 270 | 10.7 | 8.2 | 18.4 |
| K620 + K600 | Extreme | No | 280 | 11.1 | 7.2 | 22.0 |
| K620 + K600 | Basic | No | 591 | 23.5 | 14.8 | 40.5 |
The relation between the numbers suggests that:
- the second GPU does kind of slow it down measurably
- the second GPU doesn’t slow it down noticeably
- the Dual-GPU is a teensy bit helpful with Stereo rendering, but not quite worth the cost
- in a real-game, I could tell the second GPU to do PhysX work and maybe see some improvement?
I may have left some junk running on the desktop during the single-GPU tests, so the difference could be skewed a bit. Regardless, I'm confident that adding the second GPU to this desktop hasn't sucked up all my PCIe lanes or something silly.
- how would this work in an AMD/NVidia mix? (Red/Green)
- would an x86 OS work any good/evil with an x86 benchmark?
- would something with PhysX really be that different good/evil?
I have had several geese charge me - this is a poor approximation.
I was quite young and the geese were acclimated to humans; more importantly, they knew how tasty the french fries and clam strips we carried were. I never had a chance, nor will I ever forget. Honking with a bestial hunger, the savage geese charged! As they pursued me across the fried clam shop's parking lot, my mother could only cackle as she scrambled for the camera.
To this day - I have not returned to that shop.
I was playing with Python's binary extension system and was impressed by the simplicity. I think the use of setup.py encourages a consistent ecosystem … as opposed to the more open conventions used by Java and the CLR.
(I followed the generic instructions and they worked fine on Windows 8.1 - disregard the hype/hate!)
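To give a sense of the simplicity, a minimal sketch of the kind of setup.py I mean, assuming a trivial C source file called `hello.c` (names are illustrative):

```python
# setup.py - minimal sketch of building a binary (C) extension module
from distutils.core import setup, Extension

setup(
    name="hello",
    version="0.1",
    ext_modules=[Extension("hello", sources=["hello.c"])],
)
```

A plain `python setup.py build` then does the compiling.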