Videogame music video

Ok, another idea: create a video game that doubles as a music video. It doesn’t need to be super flashy; in fact, a retro feel may be even better. Open it up to people on mobile platforms and there’s a chance it might do something a little more viral than the usual kind of video.

Seems like a couple of people have already explored this hybrid genre:

Qvalias game/video


Looks like it might be possible to create something in a program called GameMaker. It’s a pretty interesting way of creating stuff and may be a good way of developing something retro and simple at some speed.



Some more thoughts

Here’s Linkin Park’s video where they use similar technology. In the Khaidian video for Martyrdom, it looks slightly blocky because I’ve been forced to do only full-body shots of the band (due to the 360-degree view) on a low-resolution camera. The Kinect v1 is low-res compared to the newer Kinect for Xbox One, which would have been great and greatly improved the clarity of the image, but the limitations of RGBDToolkit mean I’m limited to the first iteration.

Interestingly, I may end up using the beta of ‘Depthkit’, which supports the Kinect v2 and would be really interesting to try.

Battling technology.

My first render went a little wrong. I wanted to export as a PNG image sequence, due to its lossless nature, but it started as a .MOV PNG sequence, which is the same thing but in a .MOV wrapper. Unfortunately, toward the end of the render, 60 hours or so in, I noticed a problem with some of the animation not triggering. I clicked further along in the timeline and the whole thing crashed. This meant I lost a good portion of work, as I’ve been unable to recover the 9.5 gigs of footage. At this point I remembered the other reason why it’s so useful to render as an image sequence: if a render crashes partway through, the frames already written to disk survive.

Exporting at 4K in H.264 is also a pain in Media Encoder. You have to stick everything on high or the frame size is choked to around 2200 pixels. Something to watch out for when using 4K in the future.


When I had my presentation, the tutors told me to watch out for it feeling too floaty. I agree, and think that the plexus layers will liven things up, but there are fairly good reasons why you don’t do dramatic or jerky camera movements in VR: guidelines for good VR according to Oculus


And on it goes

For the last few days I’ve been going crazy attempting to sort out issue after issue. It’s all a bit of a blur, so this may not be in any order.

Knowing that I was going to use a render farm, and wanting to keep render times down somewhat, I knew I needed to bake some of my animations. A few of the plugins I use create MoGraph-style animations, which means that, to guarantee I get back the same animation I created once my files return from the render farm, I need to bake them down to individual polygons.

I’ve been using Greyscale Gorilla‘s excellent plug-in ‘Transform’. It breaks apart your models into polygons or chunks in nice, inventive ways. I was having issues with baking the ‘poly mode’ animations. GSG has already produced a tutorial for baking ‘chunk mode’, but nothing for poly. After a lot of research I decided to contact Greyscale Gorilla themselves and ask how to tackle this seemingly simple but (as far as I was concerned) impossible task. Brilliantly, Chris Schmidt sent back a solution for me:

1. Unhide GSG layers.
2. Make “PolyFXInstance” editable.
3. Select ALL polygons of this model.
4. Disconnect (uncheck Preserve Groups).
5. Add a PointCache tag (Character tags menu).
6. Store State.
7. At this point you can turn off the PolyFX and the Effector!

Great stuff!

Part, the second-

Render farms. Ugh. So, I know I am simply not rendering this myself. After some quick calculations I worked out that it would take 3 months of 24/7 rendering to get this thing finished. At least.

Having to produce images at 4X the size of 1080p, so that the visuals are high enough resolution while stretching 360 degrees around you, takes some time to process. It entails creating an ‘equirectangular’ projection, as explained in this tutorial: Octane 360 in C4D
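As a rough illustration of why the frames need to be so big, here’s a minimal Python sketch of the equirectangular mapping. The 3840×1920 figure is my own reading of “4X the size of 1080p” for a 2:1 frame, not something the tutorial specifies:

```python
def equirect_to_spherical(x, y, width, height):
    """Map an equirectangular pixel (x, y) to viewing angles.

    Longitude runs -180..180 degrees across the width and latitude
    runs 90..-90 degrees down the height, which is why the frame
    has to be exactly twice as wide as it is tall.
    """
    lon = (x / width) * 360.0 - 180.0
    lat = 90.0 - (y / height) * 180.0
    return lon, lat

# Hypothetical 2:1 frame size for "4x 1080p" 360 footage.
width, height = 3840, 1920

# The exact centre of the frame faces straight ahead (0, 0):
print(equirect_to_spherical(width / 2, height / 2, width, height))
```

Every pixel in the frame maps to a direction on the sphere, so there is no “wasted” resolution to crop away; the whole 3840×1920 image has to be rendered every frame.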

One of the great things about Octane is its use of onboard GPU power. It’s one of the main reasons I use it now; having purchased a GTX 970 video card, I found it sped up rendering a lot. The added viewport window is also brilliant, especially as C4D’s perspective wireframe is slooooooow at times. The biggest issue appears to be support from render farms. OTOY usually don’t give out licenses to render farms, as they have aimed their engine at people who want to use their own PC and video cards. It’s totally scalable, which means multiple cards offer an increase of exactly what each card would be capable of on its own, i.e. two GTX 970s are twice as effective as one, etc.

This has meant I’ve had to go shopping for my render farm requirements. After a LOT of searching I’ve settled on a Polish company called ULTRARENDER. To be fair, they’ve been very helpful and have already gone beyond what they needed to do. The reason Ultrarender is different is that you rent a server filled with high-spec video cards rather than very fast processors and RAM. The machine I’m using currently has 6 GTX 980s and a Tesla, which makes it pretty nippy. It’s not cheap though: I’m spending around £700 for a week’s rental. Hopefully I’ll have enough time to get at least one sequence finished for my presentation. That would be around 1 min 30 secs, which fits nicely into my 3-minute slot.
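The budgeting behind this is simple enough to sketch in a few lines of Python. The per-frame render time below is a made-up placeholder, not a benchmark of my scene, and the near-linear GPU scaling is just Octane’s scalability claim taken at face value:

```python
def render_hours(total_frames, secs_per_frame_one_gpu, gpu_count):
    """Estimated wall-clock hours, assuming Octane scales
    near-linearly with GPU count (so 7 cards divide the
    single-card time by roughly 7)."""
    return total_frames * secs_per_frame_one_gpu / gpu_count / 3600.0

# 1 min 30 secs of footage at 30fps:
frames = 90 * 30  # 2700 frames

# Suppose one card needs 10 minutes per 4K frame (pure assumption):
print(round(render_hours(frames, 600, 1)))  # hours on one GTX 970
print(round(render_hours(frames, 600, 7)))  # hours on 6x GTX 980 + Tesla
```

Under those assumed numbers the single-card figure lands in “months of 24/7 rendering” territory, while the rented 7-card server brings it inside the one-week rental window, which is the whole point of the exercise.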

Mostly so far I’ve been installing programs on the server using TeamViewer, which allows remote operation of the server so I can install everything I need. I’ve then moved everything from Dropbox, where it was stored earlier, to the server to render. The problem is that the licenses for various bits of software have been a total pain.

OTOY allow you to deactivate the program on one computer to use it on another, so no problem (other than the hour wait per deactivation/activation).

Greyscale Gorilla present no issue at all. Just install!

Maxon, on the other hand, don’t allow me to use my student copy of C4D on any machine other than the one it was originally set up on. I can use it, but the resolution is choked at 800×600 and I can’t save PNG sequences (which I need to). So I’m now uploading an older copy I have to try and make that work… continual installing. This sucks.


Some of the odder / better music videos I’ve happened across. A lot of these were released within the last 3 years and are a fairly good indication of videos within metal (and industrial, and Primus…).

Mastodon – This video sparked a lot of talk because it uses imagery not usually associated with rock and metal, namely twerking.

Devil Wears Prada – Nice use of puppets, also a bit different.

Every Time I Die – Just a big dumb video of a band having fun. Kinda cool, not very original.

Behemoth – Lots of dark imagery. More conceptual than full on performance.

Rivers of Nihil – Tech Death metal, a bit of a change from the usual performance video. Although it does still sit in the ‘standard band in wasteland/decrepit house/warehouse’ arena.

Animation this time. Looks like lots of After Effects.

Very typical performance + epic narrative going on here

Slayer – Going for a very filmic narrative here, plus the usual performance stuff.

Not a known band at all, but I thought the data mashing was kind of cool.

Marilyn Manson’s comeback. Slightly odd, almost a pop video, with no musicians really, only writhing women and CG. And Manson.

Sikth – Mikee (the singer) did this video. Seems mainly to be After Effects, but it’s good to see something a little different.

Clutch – using humour and narrative (as the song does).

Apparently quite a respected video. A lot of the animation is pretty good, and obviously a lot of work has gone into it.

Primus. just….

Parkway Drive – Actually a very typical video.

Pantera – (1994) Considered a bit of a classic performance video. Almost used as a jumping-off point for the Martyrdom video.

Slipknot – another classic performance video that sparked millions of ‘play in a house being ripped apart by fans’ style videos.

Really dark and violent imagery suits Dillinger’s style here.

Skinny Puppy – a seminal industrial band. I only like this album though. And this video, although old, is a great example of giving the audience something they didn’t realise they wanted. Goths breakdancing.


Timing is the secret of comedy…

My major issue right now is getting the timing of the Kinect performance correct. What seems to be happening is a slip in the timing of the files. I was importing the entire performance and then attempting to sync it up to the audio, basing this on the film footage I’ve already taken. One issue appears to be that, although the C4D project is 30fps and the Kinect apparently records at 30fps, something is going weird along the way, resulting in this drift. And looking at the output OBJ files from RGBD Toolkit reveals a LOT of mixed-up or missing frames.
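One quick sanity check for this kind of drift is to divide the number of exported frames by the audio duration. This is my own diagnostic, not anything built into RGBD Toolkit, and the numbers below are purely illustrative:

```python
def effective_fps(frame_count, audio_seconds):
    """Effective frame rate of a capture: if this comes out below
    the project's 30fps, duplicated or missing frames are stretching
    the footage relative to the audio, producing drift."""
    return frame_count / audio_seconds

# Illustrative numbers only: 5310 exported frames over a 3-minute take
# would mean the capture is running at 29.5fps, not a clean 30.
print(round(effective_fps(5310, 180), 2))
```

Even a half-frame-per-second discrepancy like that adds up to a visible sync slip of a second or more over a full song.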

I’ve looked at the individual frames of the output and there seem to be duplicated frames here and there, though not so many that you’d notice during playback. If I delete the frames it leaves an empty gap that I have to edit manually. When fully imported into C4D, there also seems to be an overall slowdown of the footage, almost as if it was filmed at a higher frame rate.
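To at least find the duplicates without eyeballing every frame, a short Python sketch can hash each exported file and flag byte-identical neighbours. This assumes one OBJ file per frame, named so the files sort in frame order; it’s my own helper, not part of RGBD Toolkit:

```python
import hashlib
from pathlib import Path

def find_duplicate_frames(frame_dir, pattern="*.obj"):
    """Return indices of frames that are byte-identical to the
    previous frame in the sequence, for manual review."""
    frames = sorted(Path(frame_dir).glob(pattern))
    dupes = []
    prev_digest = None
    for i, frame in enumerate(frames):
        digest = hashlib.md5(frame.read_bytes()).hexdigest()
        if digest == prev_digest:
            dupes.append(i)  # same bytes as frame i - 1
        prev_digest = digest
    return dupes
```

Hashing only catches exact duplicates; near-identical frames from a stalled capture would need a geometric comparison instead, but exact repeats are the easy wins.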

So now I’ve tried manually deleting and moving frames, and it’s just taking too long. So I’m essentially taking the footage, splitting it into two versions, slow and fast, and then editing it so that it jumps in and out at key times.

The OBJ(ect) of my affection.

Kinect recorded; now to just import into Cinema 4D. Apparently this is a LOT easier said than done. I’ve spent the last two days researching and attempting to find a solution to all my issues.

Firstly, RGBD Toolkit exports its files as OBJ, which all 3D programs can apparently read. The problem is OBJ sequences, which cannot be natively imported into C4D or most other programs without additional scripts. I managed to track down something called Riptide, a plug-in that can import sequences.

The main issue with Riptide is that it’s a slightly older plug-in, which means I had to locate an older version of C4D (R13) to use it with.

In the meantime I mucked about with the OBJ sequences in After Effects using both Trapcode Form and Plexus. They will certainly give me some additional looks that will be great, but I do want to attempt to create some figures in C4D as well.

My exports from RGBD Toolkit were creating massive issues, with C4D collapsing under the sheer number of points within the OBJ files. So back to RGBD Toolkit to simplify the exports…
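Before re-exporting it’s worth gauging just how heavy the frames are. A Wavefront OBJ stores one vertex per `v ` line, so a quick count tells you which frames will hurt. Again this is my own helper script, not an RGBD Toolkit feature:

```python
from pathlib import Path

def vertex_count(obj_path):
    """Count 'v ' vertex-position lines in a Wavefront OBJ file.

    Texture ('vt') and normal ('vn') lines don't match the 'v '
    prefix, so only actual point positions are counted.
    """
    with open(obj_path) as f:
        return sum(1 for line in f if line.startswith("v "))

def heaviest_frames(frame_dir, top=5):
    """List the frames with the most vertices, i.e. the ones
    most likely to choke C4D on import."""
    frames = sorted(Path(frame_dir).glob("*.obj"))
    return sorted(((vertex_count(f), f.name) for f in frames),
                  reverse=True)[:top]
```

Running this over the export folder before importing gives a rough per-frame point budget to aim for when simplifying in RGBD Toolkit.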

Spiky monolithic doooom

A very short test render to see how the spiky movements work within the environment. Not a bad first attempt.

This uses various effectors, including animated noise to affect the size of the spikes. It seems to work, although it will be interesting to see how introducing music, and possibly some sound effectors instead of the random ones, changes how it all reacts.

