v.1

Some more thoughts

Here’s Linkin Park’s video where they use similar technology. In the Khaidian video for Martyrdom, it looks slightly blocky because I’ve been forced to shoot only full-body shots of the band (due to the 360-degree view) on a low-resolution camera. The Kinect v.1 is low-res compared to the newer Kinect for Xbox One, which would have greatly improved the clarity of the image, but the limitations of RGBDToolkit mean I’m stuck with the first iteration.

https://www.fxguide.com/featured/beautiful-glitches-the-making-of-linkin-parks-new-music-vid/

Interestingly, I may end up using the beta of ‘Depthkit’, which works with the Kinect v.2. That would be really interesting to try.

Battling technology.

My first render went a little wrong. I wanted to export it as a PNG image sequence, due to its lossless nature, but it started out as a .MOV PNG sequence, which is the same thing wrapped in a .MOV container. Unfortunately, towards the end of the render (60 hours or so), I noticed a problem with some of the animation not triggering. I clicked further along in the timeline and the whole thing crashed. This meant I lost a good portion of work, as I’ve been unable to recover the 9.5 gigs of footage. At this point I remembered the other reason why it’s so useful to render as an image sequence: if the render dies, you only lose the frame being written, not the whole file.
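One nice side effect of rendering to individual frames is that a crashed render can be resumed by re-rendering only the missing ones. As a purely illustrative Python sketch (nothing in this project is actually scripted; the folder and `render_0042.png` naming pattern are my assumptions), here’s how you might find the gaps in a numbered PNG sequence:

```python
import re
from pathlib import Path

def missing_frames(folder, first, last):
    """Return frame numbers in [first, last] with no matching PNG in folder.

    Assumes filenames ending in a frame number, e.g. render_0042.png."""
    present = set()
    for p in Path(folder).glob("*.png"):
        m = re.search(r"(\d+)\.png$", p.name)
        if m:
            present.add(int(m.group(1)))
    return [f for f in range(first, last + 1) if f not in present]
```

You could then feed the returned list back to the renderer as a frame range, instead of repeating the whole 60-hour job.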

Exporting at 4K in H.264 is also a pain in Media Encoder. You have to set everything to High (profile and level) or your frame size is capped at around 2,200 pixels wide. Something to watch out for when using 4K in the future.

OCTANE REFERENCE SETTINGS: http://helloluxx.com/news/octane-render-kernel-settings-reference-library/

When I had my presentation, the tutors told me to watch out for it feeling too floaty. I agree, and I think the plexus layers will liven things up, but there are fairly good reasons why you don’t do dramatic or jerky camera movements in VR. Guidelines for good VR according to Oculus:

http://www.cineversity.com/vidplaytut/render_virtual_reality_videos_with_cinema_4d_create_youtube_using_octane

https://www.freeflyvr.com/freefly-vr-how-it-works/

http://resolume.com/forum/viewtopic.php?t=4050

 

And on it goes

For the last few days I’ve been going crazy attempting to sort out issue after issue. It’s all a bit of a blur, so this may not be in any order.

Knowing that I was going to use a render farm, and wanting to keep render times down somewhat, I knew I needed to bake some of my animations. A few of the plugins I use produce MoGraph-type animations, so to ensure the frames I get back from the farm match the animation I created, I need to bake them down to individual polygons before sending the files off.

I’ve been using Greyscale Gorilla’s excellent plug-in called ‘Transform’. It breaks your models apart into polygons or chunks, in nice, inventive new ways. I was having issues with baking the ‘poly mode’ animations. GSG had already produced a tutorial for baking ‘chunk mode’, but nothing for poly. After a lot of research I decided to contact Greyscale Gorilla themselves and ask how to tackle this seemingly simple but, as far as I could tell, impossible task. Brilliantly, Chris Schmidt sent back a solution:

1. Unhide the GSG layers.
2. Make “PolyFXInstance” editable.
3. Select ALL polygons of this model.
4. Disconnect (uncheck Preserve Groups).
5. Add a Point Cache tag (Character tags menu).
6. Store State.
7. Calculate.
8. At this point you can turn off the PolyFX and the Effector!

Great stuff!

Part, the second-

Render farms. Ugh. So, I know that I am simply not rendering this myself. After some quick calculations I worked out that it would take three months of 24/7 rendering on my own machine to get this thing finished. At least.

Having to produce images at 4x the size of 1080p, so that the visuals are high enough resolution when stretched 360 degrees around you, takes some time to process. It entails creating an ‘equirectangular’ projection, as explained in this tutorial: Octane 360 in C4D
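To put numbers on the “4x the size of 1080p” point: an equirectangular frame has a 2:1 aspect ratio (360 degrees across, 180 degrees down), so a plausible target like 4096×2048 (my assumption for illustration, not the project’s actual output size) works out to roughly four times the pixels of a 1080p frame:

```python
# Hypothetical 4K equirectangular target: 2:1 aspect ratio because the frame
# spans 360 degrees horizontally but only 180 degrees vertically.
w, h = 4096, 2048
hd_pixels = 1920 * 1080            # a standard 1080p frame
ratio = (w * h) / hd_pixels
print(ratio)                        # roughly 4x the pixels of 1080p
```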

One of the great things about Octane is its use of onboard GPU power. It’s one of the main reasons I use it now; having purchased a GTX 970 video card, I found that it sped up rendering a lot. The added viewport window is also brilliant, especially as C4D’s perspective wireframe is slooooooow at times. The biggest issue appears to be support from render farms. OTOY usually doesn’t give out licenses to render farms, as they’ve aimed their engine at people who want to use their own PC and video cards. It’s totally scalable, which means each additional card adds exactly what that card would be capable of on its own, i.e. two GTX 970s are twice as effective as one, etc.

This has meant I’ve had to go shopping for my render farm requirements. After a LOT of searching I’ve settled on a Polish company called ULTRARENDER. To be fair, they’ve been very helpful and have already gone beyond what they needed to do. The reason Ultrarender is different is that you rent a server filled with high-spec graphics cards rather than very fast processors and RAM. The machine I’m using currently has six GTX 980s and a Tesla, which makes it pretty nippy. It’s not cheap though: I’m spending around £700 for a week’s rental. Hopefully I’ll have enough time to get at least one sequence finished for my presentation. That would be around 1:30, which fits nicely into my 3-minute slot.
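A rough back-of-envelope check on whether a week’s rental covers one sequence. The per-frame render time here is an assumption for the sake of the arithmetic, not a measured figure from the Ultrarender box:

```python
# 1:30 of footage at 30 fps
frames = 90 * 30                      # 2700 frames
per_frame_minutes = 3                 # ASSUMED average time per 4K frame
total_hours = frames * per_frame_minutes / 60
week_hours = 7 * 24                   # one week of 24/7 rental
print(total_hours, week_hours)        # hours needed vs hours available
```

Under that assumption the sequence needs about 135 render hours against 168 available, so one finished sequence in a week is tight but plausible.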

Mostly so far I’ve been installing programs on the server using a program called TeamViewer. It allows remote operation of the server, so I can install everything I need. I’ve then moved everything from Dropbox, where it was stored earlier, onto the server to render. The problem is that the licenses for various bits of software have been a total pain.

OTOY lets you disconnect the program from one computer to use it on another, so no problem there (other than the hour’s wait per deactivation/activation).

With Greyscale Gorilla there’s no issue at all. Just install!

Maxon, on the other hand, doesn’t allow me to use my student copy of C4D on any machine other than the one it was originally set up with. I can use it, but the resolution is choked to 800×600 and I can’t save PNG sequences (which I need to). So I’m now uploading an older copy I have to try and make that work… continual installing. This sucks.


Some of the odder / better music videos I’ve happened across. A lot of these were released within the last three years and are a fairly good indication of videos within metal (and industrial, and Primus…).

Mastodon – This video sparked a lot of talk because it uses imagery not usually associated with rock and metal, namely twerking.

Devil Wears Prada – Nice use of puppets, also a bit different.

Every Time I Die – just a big dumb video of a band having fun. Kinda cool, not very original.

Behemoth – Lots of dark imagery. More conceptual than full on performance.

Rivers of Nihil – Tech Death metal, a bit of a change from the usual performance video. Although it does still sit in the ‘standard band in wasteland/decrepit house/warehouse’ arena.

Animation this time. Looks like lots of After Effects.

Very typical performance + epic narrative going on here

Slayer – going for a very filmic narrative here, plus the usual performance stuff.

Not a known band at all, but I thought the data mashing was kind of cool.

Marilyn Manson’s comeback. Slightly odd, almost a pop video, with no musicians really, only writhing women and CG. And Manson.

Sikth – Mikee (the singer) did this video. Seems mainly to be After Effects, but it’s good to see something a little different.

Clutch – using humour and narrative (as the song does).

Apparently quite a respected video. A lot of the animation is pretty good, and obviously a lot of work has gone into it.

Primus. just….

Parkway Drive – Actually a very typical video.

Pantera – (1994) considered a bit of a classic performance video. Almost used as a jumping off point for the Martyrdom video.

Slipknot – another classic performance video that sparked millions of ‘play in a house being ripped apart by fans’ style videos.

Really dark and violent imagery suits Dillinger’s style here.

Skinny Puppy – a seminal industrial band. I only like this album though. And this video, although old, is a great example of giving the audience something they didn’t realise they wanted. Goths breakdancing.

 

Timing is the secret of comedy…

My major issue right now is getting the timing of the Kinect performance correct. What seems to be happening is a slip in the timing of the files. I was importing the entire performance and then attempting to sync it up to the audio, basing this on the film footage I’ve already taken. One issue appears to be that although the C4D project is 30fps and the Kinect apparently records at 30fps, something goes wrong along the way, resulting in this drift. And looking at the output OBJ files from RGBD Toolkit reveals a LOT of mixed-up or missing frames.

I’ve looked at the individual frames of the output and there seem to be duplicated frames here and there, though not so many that you’d notice during playback. If I delete the frames it leaves an empty gap that I have to edit manually. When fully imported into C4D, there also seems to be an overall slowdown of the footage, almost as if it were filmed at a higher frame rate.
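Byte-identical duplicates in the OBJ sequence could be spotted automatically rather than by eye. A quick Python sketch (purely illustrative, not part of my actual workflow; it assumes a folder of sequentially named OBJ files) that flags frames identical to the one before them:

```python
import hashlib
from pathlib import Path

def duplicate_frames(folder, pattern="*.obj"):
    """Return names of files that are byte-identical to the previous
    frame in the (alphabetically sorted) sequence."""
    dupes = []
    prev_digest = None
    for p in sorted(Path(folder).glob(pattern)):
        digest = hashlib.md5(p.read_bytes()).hexdigest()
        if digest == prev_digest:
            dupes.append(p.name)
        prev_digest = digest
    return dupes
```

This only catches exact duplicates; frames that RGBD Toolkit re-exported with tiny differences would still need checking by hand.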

I’ve now tried manually deleting and moving frames, and it’s just taking too long. So I’m essentially taking the footage, splitting it into two versions, slow and fast, and then editing it so that it jumps in and out at key times.

The trials and tribulations of a 360 wrangler.

It seems I’m starting (hah!) to ask too much of Cinema 4D and After Effects, at least for my PC. It’s certainly feeling the strain of having four separate figures with multiple OBJ files playing, not to mention a fair number of polygons flying around from the monoliths and lights.

Anyway, I’ve been working on the chorus sections of the track and I really like the idea of a tonal change: open it up and have everything white. Here’s a quick version of the first chorus, using Plexus within After Effects to add the figure.

——

Just some important stuff about Skybox and how to use it within 360 design.

Workflow

Riptide imports each individual OBJ, but as one big mess, meaning you have to muck about to get the whole thing in sequence, and I don’t have a lot of time for that. I’ve now finally managed to import the OBJ files as a sequence via a different plug-in, OBJ Importer 2 from C4D Zone. It works with newer versions of C4D but doesn’t import textures. At this point, though, I’m really not that worried about textures; I just wanted to get something into C4D.

I decided to do a test render and found that the OBJ sequence seems to be at 60fps, making it slow and out of time. I’ve since fixed this by deleting every even frame. I now just need to see if this works in Plexus and Trapcode Form, now that I’ve also dropped the polygon count within RGBD Toolkit.
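Dropping every other frame could also be scripted rather than done by hand. A hedged Python sketch (assuming a folder of sequentially named OBJs; here I keep frame 0 and delete frames 1, 3, 5, …, which halves 60fps to 30fps):

```python
from pathlib import Path

def halve_frame_rate(folder, pattern="*.obj"):
    """Delete every second file of a sorted OBJ sequence and return the
    names of the removed files (60 fps -> 30 fps)."""
    removed = []
    for i, p in enumerate(sorted(Path(folder).glob(pattern))):
        if i % 2 == 1:          # keep frame 0, drop frame 1, keep 2, ...
            p.unlink()
            removed.append(p.name)
    return removed
```

Whether you drop the odd or the even frames doesn’t matter for timing, but test on a copy of the sequence first, since the deletion is irreversible.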