Videogame music video

OK, another idea: create a video game that doubles as a music video. It doesn’t need to be super flashy; in fact, a retro feel might be even better. Open it up to people on mobile platforms and there’s a chance it might do something a little more viral than the usual kind of video.

Seems like a couple of people have already explored this hybrid genre:

Qvalia’s game/video


Looks like it might be possible to create something in a program called GameMaker. It’s a pretty interesting way of creating stuff and could be a good way of developing something retro and simple fairly quickly.

Prisma

So over the last few months I’ve noticed people using a new app that turns their photos into the closest approximation of a painting I’ve seen from an app yet.

Cool, so how do I turn it into video?

Looks like someone already figured it out:


This is basically the video split into frames, with each frame processed by Prisma. Once complete, the frames are reassembled into a sequence.
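I haven’t tried automating this myself yet, but the split-and-reassemble part is easy to script. Here’s a minimal sketch assuming ffmpeg is installed and on the PATH; the filenames, folders and frame rate are placeholders, and the Prisma step itself would still have to be done by hand in the app (as far as I know there’s no public Prisma API).

```python
# Minimal sketch of the split/reassemble steps, assuming ffmpeg is on the PATH.
# Filenames, folders and the frame rate are placeholders; the actual Prisma
# styling (step 2) still has to be done by hand in the app.
import os
import subprocess

FPS = 25  # assumed frame rate of the source clip

os.makedirs("frames", exist_ok=True)
os.makedirs("styled", exist_ok=True)

# 1. Split the source video into numbered stills.
subprocess.run(["ffmpeg", "-i", "source.mp4", "-vf", f"fps={FPS}",
                "frames/frame_%05d.png"], check=True)

# 2. Run each frame through Prisma manually, saving the results into
#    styled/ with the same frame_%05d.png numbering.

# 3. Reassemble the styled frames and mux the original audio back in.
subprocess.run(["ffmpeg", "-framerate", str(FPS), "-i", "styled/frame_%05d.png",
                "-i", "source.mp4", "-map", "0:v", "-map", "1:a",
                "-c:v", "libx264", "-pix_fmt", "yuv420p",
                "-shortest", "output.mp4"], check=True)
```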

I kinda like the first-person point of view here. It wasn’t what I was thinking of at first (possibly just a camera following someone), but I’ve got some ideas along the lines of a protagonist travelling somewhere, possibly being followed – the first person adds to the paranoia.

Brain squeezings: suddenly holding soil; face paint – looks freaky in Prisma.

v.1

Some more thoughts

Here’s Linkin Park’s video where they use similar technology. In the Khaidian video for ‘Martyrdom’, it looks slightly blocky because I was forced to shoot only full-body shots of the band (due to the 360-degree view) on a low-resolution camera. The Kinect v.1 is low-res compared to the newer Kinect for Xbox One, which would have greatly improved the clarity of the image, but the limitations of RGBDToolkit mean I’m stuck with the first iteration.

https://www.fxguide.com/featured/beautiful-glitches-the-making-of-linkin-parks-new-music-vid/

I may end up using the beta of ‘Depthkit’, which works with the Kinect v.2 and would be really interesting to try.

Leap of Faith

This is me demonstrating the setup for the installation, using Ableton Live to control the music and trigger the video at the right times. The Leap is simultaneously controlling effects in Resolume to reflect the changes in the video.

The player can then control what happens with the remix, essentially creating music and visuals without having any real knowledge of how they did it!

Resolving Resolume’s reactiveness

My initial plan was to use Ableton Live and Resolume Arena together, as I wanted to have the viewer actually remix on the fly. Initially I thought it would be great to trigger video and sections of songs. Attempting to put this together, I’ve found that the Leap Motion only sends through CC messages (a continuous stream of data with values between 0 and 127) rather than note data (a single button press, if you will, usually used to trigger samples or video clips). There may be ways to get around this, but for the sake of time and ease I’ve decided to stick to a pre-arranged sequence and manipulate the video and audio with effects and plugins.
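For the record, there does look to be a workaround if I come back to this: something sitting between Geco and Ableton could watch the CC stream and fire a note whenever the value crosses a threshold. Here’s a rough sketch using Python and the mido library – not what I ended up using. The port names, CC number and note number are all placeholders, it needs the python-rtmidi backend, and the virtual output port used here works on macOS/Linux but not Windows.

```python
# Rough sketch: turn a continuous CC stream into note-on/off triggers by
# firing a note when the CC value crosses a threshold. Port names and the
# CC/note numbers are placeholders.
import mido

THRESHOLD = 64      # fire a trigger when CC 1 crosses this value
TRIGGER_NOTE = 60   # the note Resolume/Ableton would be mapped to

with mido.open_input("Geco Out") as inport, \
     mido.open_output("Trigger Out", virtual=True) as outport:
    above = False
    for msg in inport:
        if msg.type == "control_change" and msg.control == 1:
            if msg.value >= THRESHOLD and not above:
                outport.send(mido.Message("note_on", note=TRIGGER_NOTE, velocity=100))
                above = True
            elif msg.value < THRESHOLD and above:
                outport.send(mido.Message("note_off", note=TRIGGER_NOTE))
                above = False
```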

Attempting to put this all together I’ve encountered some issues. I originally exported my video at 1080p, as I wanted it to be as sharp as possible. I had to convert my video into DXV format, Resolume’s proprietary codec, which apparently allows it to play back better. Previously I’ve found that a JPG sequence works well playing back in Ableton Live’s video window. I’m avoiding H.264 because, although it’s great for streaming, it’s terrible for programs like Resolume or Ableton. I’ve found, though, that the 1080p video is a little jerky, so I may try it at 720p.

Setting up the controls for both Resolume and Ableton has been tricky, as I’ve had to go via Geco, MIDI controller software for the Leap Motion. It does work well, but there are so many options open to you that you don’t know how people are going to use it, especially as they have no proper training and it’s just “wave your hands over this thing”. I’m considering using a small picture that shows what to do, without stopping people from experimenting. Presently I have various controls mapped to movements – a distortion on the video matching a distortion on the audio. This is proving very tricky though, as I’m manipulating several streams of video and audio.

All this said, I think I’ve figured out the problem: the controller would jump channel numbers if I mapped to another effect. I can solve this by mapping to Resolume’s control panel and then matching that to a global effect rather than a specific effect.

Battling technology

My first render went a little wrong. I wanted it to export as a PNG image sequence, due to its lossless nature, but it started as a .MOV PNG sequence, which is the same thing but in a .MOV wrapper. Unfortunately toward the end of the render, 60 hours or so in, I noticed a problem with some of the animation not triggering. I clicked further along in the timeline and the whole thing crashed. This meant I lost a good portion of work, as I’ve been unable to recover the 9.5 GB of footage. At this point I remembered the other reason why it’s so useful to render as an image sequence: if the render dies, you only risk the frame being written at that moment and can pick up again from the last good frame, rather than losing the whole file.
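As a side note on picking a crashed render back up, the nice thing about a plain image sequence is that you can work out exactly which frame to resume from. A tiny sketch of that idea – the frame_00001.png naming and folder are assumptions:

```python
# Find the last safely-written frame in a numbered PNG sequence so a crashed
# render can be resumed. Assumes files named frame_00001.png, frame_00002.png, ...
import os
import re

def last_safe_frame(folder: str) -> int:
    """Return the highest contiguous frame number minus one, or 0 if none."""
    pattern = re.compile(r"frame_(\d+)\.png$")
    numbers = sorted(int(m.group(1)) for f in os.listdir(folder)
                     if (m := pattern.match(f)))
    if not numbers:
        return 0
    # Walk until the numbering breaks, then back off one frame in case the
    # newest file was only partially written when the crash happened.
    last = numbers[0]
    for n in numbers[1:]:
        if n != last + 1:
            break
        last = n
    return max(last - 1, 0)

print(last_safe_frame("render_frames"))
```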

Exporting at 4K in H.264 is also a pain in Media Encoder. You have to stick everything on the highest settings or your frame size is choked to around 2,200 pixels. Something to watch out for when using 4K in the future.
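For reference, I believe the underlying issue is the H.264 profile/level system: levels below 5.1 don’t officially allow a 3840×2160 frame, which is presumably why Media Encoder clamps the size until everything is set to its highest options. A hedged example of setting this explicitly with ffmpeg instead (filenames and the CRF value are placeholders):

```python
# Sketch of an explicit 4K H.264 export via ffmpeg, forcing the High profile
# and level 5.1 so 3840x2160 sits within the level's limits (up to ~30fps).
import subprocess

subprocess.run([
    "ffmpeg", "-i", "render_4k.mov",
    "-c:v", "libx264", "-profile:v", "high", "-level:v", "5.1",
    "-pix_fmt", "yuv420p", "-crf", "18",
    "delivery_4k.mp4",
], check=True)
```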

OCTANE REFERENCE SETTINGS: http://helloluxx.com/news/octane-render-kernel-settings-reference-library/

When I had my presentation, the tutors told me to watch out for it feeling too floaty. I agree, and think that the plexus layers will liven things up, but there are fairly good reasons why you don’t do dramatic or jerky camera movements – see Oculus’s guidelines for good VR.

http://www.cineversity.com/vidplaytut/render_virtual_reality_videos_with_cinema_4d_create_youtube_using_octane

https://www.freeflyvr.com/freefly-vr-how-it-works/

http://resolume.com/forum/viewtopic.php?t=4050