Job Research – Modelling

So to wrap up the year I’m required to do one more post, to discuss my chosen discipline a little more, what the role is responsible for, other job roles you work alongside and how to get there.

A lot of this is off the top of my own head (until I get to the actual jobs bit). I guess wrapping up the first point is pretty easy: producing industry standard 3D assets for use in games/TV/film/simulations/architectural renderings etc. These could range from a barrel in a video game, to a character for a movie or commercial, to a mockup of a building for architectural plans. It is a wide-ranging skill that sees more and more use as time goes on; however, let’s be honest here, it’s the entertainment side I’m primarily focusing on. Texturing can occasionally be a separate job, but it is usually the responsibility of the modeller to unwrap and apply appropriate textures.

The common jobs for modellers break down into environment artist and character artist. While not a hard rule, this does tend to be the division between hard and soft surface modelling; a table is a much simpler structure than a human body, for example. In bigger companies this can be subdivided even further: for this exercise I was looking at jobs at Ubisoft Reflections and they were recruiting specifically for a 3D Vehicle Artist.

Most artists will end up taking orders from the senior artists, who take orders from the lead artist. However, you’ll still brush shoulders with other job roles, usually technical artists and animators, who will be responsible for rigging, simulations (hair/fur/water etc.) and obviously animating the models you create (if necessary). You could also be responsible for discussions with clients to check the work is matching up to the brief, schedules are being met and so on.

Onto some current job listings:

Junior Vehicle Artist – Ubisoft Reflections


Skills and Knowledge

  • Good interpersonal and communication skills;
  • Knowledge of modeling techniques;
  • Knowledge of texture mapping and materials;
  • Knowledge of relevant 2D and 3D software packages;
  • A technical problem solving mindset is desirable;
  • Knowledge of vehicle design principles is desirable;
  • Knowledge of streaming and LOD systems linked to in-game engines;
  • Familiarity with data management software (such as Perforce) is desirable;
  • Exposure to industry game engines and production pipelines is desirable;
  • Knowledge of the video game industry and awareness of typical video game development processes is desirable.

Relevant Experience

  • Up to 1 years’ experience in an internship or placement year in a professional game studio environment or other relevant experience;
  • Bachelor’s degree in Graphics Design, Art or any other relevant training.

Okay, right off the bat I can ignore the Bachelor’s degree part; that funding pool dried up many years ago and I couldn’t keep up the student lifestyle while balancing the mortgage and general monthly outgoings. I either make it into this industry on talent and portfolio or fail trying!

Communication skills & problem solving: no problem, years in IT and multiple years on a service desk. Sadly this particular listing doesn’t specify what kinds of modelling or mapping techniques they’re looking for in a candidate; you’d just have to be as good as you can be and hope for the best. Now, this is a specific artist’s job and I’m not entirely sure what vehicle design principles are. I doubt I’ll get any experience of asset management systems in college, or of accurate production pipelines. So for this job I’d be gambling on artistic talent to carry most of the weight.

Although to be honest, I asked multiple professionals during Animex how to get my foot in the door as a mature student, and the response was almost always “a good showreel highlighting the best you can do”, with occasional tailoring depending on the style/history of the studio you’re applying to. I’ll be taking that advice to heart and focusing on a high quality showreel for the end of this course for job applications.

Ubisoft also had posted a Junior Environment Artist job:

Junior Environment Artist


Skills and Knowledge

• Good interpersonal and communication skills;
• Ability to create interesting, detailed and visually appealing environments;
• Ability to adapt to new processes and pipelines;
• Working knowledge of industry leading 3D modelling packages and techniques;
• Understanding of composition visual story telling;
• Knowledge of level editors;
• Understanding of Physically Based Rendering systems;
• Familiarity with data management software (such as Perforce) is desirable;
• Basic gameplay and level design knowledge is desirable;
• Knowledge of optimisation techniques (e.g. 3D Studio Max);
• Knowledge of the video game industry and awareness of typical video game development processes is desirable;
• Exposure to industry game engines and production pipelines is desirable.

Relevant Experience

• Up to 1 years’ experience in an internship or placement year in a professional game studio environment or other relevant experience;
• Bachelor’s degree in Graphic Design, Art or other relevant training;
• Experience in both hard surface and organic modelling;
• Experience working in Adobe Photoshop.

While modelling is still obviously a big factor in this job role, it seems to take a back seat in favour of a working knowledge of composition, storytelling, level editors and generally good aesthetics. Which makes sense, as your level is your connection to the world in which your story is unravelling.

This also makes having the required experience for a junior role a little tricky. I know the course isn’t going to give any hands-on time with a level editor, or with specific rendering systems. So again this comes down to personal tuition, or hoping the quality of my art opens up opportunities for on the job training. I didn’t bring this up on the previous job but fingers crossed that this course ticks the box for ‘other relevant training’.

Last but not least (as I don’t want this to go on forever) I found an internship position at Studio Gobo. I don’t know if I’d be eligible, as it states “Ideally you will have completed the second year of your degree” and I’m not on a degree programme. However, job hunting is a hard beast: you apply for EVERYTHING and hope for the best. So a year from now I’d try something like this.

Studio Gobo – Artist Internship

You know what, this level of qualifications I could tick. Maybe even now, but definitely after another year of education! There is hope after all! It’s only a 12 month internship and therefore isn’t permanent, but that’s a lot of quality experience you could take elsewhere once the contract ends.

To wrap this up, off to spend a summer learning! To quote some famous Warcraft Orcs “What? More work?…..Okey doke!”.



Final Portfolio/Showreel Feedback & Chosen Discipline

This afternoon I delivered my final portfolio to all members of staff present and was given my grade for it. Here are my thoughts on the whole submission.


A common mistake during the Christmas 2016 prep run for this was talking too long about your work and treading over ground the lecturers are well aware of. So I made the decision to edit everything into a reel; it’s a little long for a standard industry showreel, but it has to cover all aspects of the course and a year’s worth of best work from each module, so eight minutes isn’t bad. I themed it 1980s style and did all the title card work in After Effects for some extra spit and polish.

Feedback was immensely positive and the few bits of constructive feedback I received were as follows:

  1. Where I collaborated on a project, clearly state which aspects I was responsible for.
  2. For future reels, cross fade music track to music track without any pauses between audio. Gary compared this to an animation where it stops suddenly and is jarring to the viewer.
  3. When rendering animation cycles or anything with a ground plane in future make sure objects are actually touching the ground (my cycles were hovering and casting shadows making it really obvious).
  4. My font choice for the lower banner is a little bit unreadable (I was worried about this and seems I was correct). Thankfully an easy fix to make.

Given those few points were the only things raised, I’m absolutely bloody thrilled! I was graded a Distinction for the portfolio. It has been a fantastic first year and has really woken up a side of me I’d considered long gone. As sentimental as this might be, a big thank you to all of the staff over the last year who made this possible. I’d buy you all a beer but rules don’t allow it, so catch me in another year!

Now after seeing some of the truly wondrous talent at Animex, I don’t personally feel like I’m anywhere near the level of talent I need to be despite the grades. So I have a long year ahead but more on that in a moment.

Chosen Discipline

Being a mature student with some prior experience, I already knew when I joined that I wanted to focus on modelling and texturing. That isn’t to say I’ve not picked up new interests over the year; I’ve really enjoyed my After Effects work, from the more motion-graphics side of things to compositing and matte painting. I don’t want to lose or ignore these new skills, but I’ll likely continue them as a sideline/hobby and keep my focus on modelling.

I’ve gotten to grips with Maya, had great success with Substance Painter and am looking to purchase ZBrush over the summer to include it in my workflow. I’ve also had some great texturing success using curvature maps and generators to weather and age models. Over the summer I plan to get much better use out of my paid-for training materials (Pluralsight) and brush up further on all of these tools. Hopefully I should return for the synoptic project ready to produce something the industry would be proud of! My time left to succeed is limited, so if I don’t kick it up a notch now, I’m only hurting myself.

Year One – Contents Summary


This is a post bringing together all of my submissions to allow teaching staff easy access for marking. I’ll categorise to each module.

3D Modelling

3D Room – Feedback Re-submission

Low Poly Project – Submission

High Poly Pirates – Wk 4&5 (unfinished)

Concept Art

Robot Concept Brief – Developed Design

Dystopian Vehicle – Final


Animation

Animation Ident – Final

Break the Cycle – Animation Project Submission

(Optional Animation Post/VFX)

After Effects Rigging – Animation with Expressions (Pt.3)


VFX

VFX – Final Idea Pt.4

(Optional VFX Posts)

VFX – Skin Replace/Glow

VFX – Matte Painting

After Effects – All Star Credits (Pt.2)

Game Design

Analysis of Game Design Pt.1

Analysis of Game Design Pt.2

Analysis of Game Design Pt.3

The Maze Game

Xmas Game – Submission

Another Xmas Game – WIP

Programming (Yeah I know the grade isn’t based on posts but it keeps my mind organised)

Exam Prep – Individual Game (Pt.3)

Walking Sim – Unity Asset Creation

Walking Sim – Adding Scene Interactivity

Walking Sim – Polish & Peer Review


Global Game Jam 2017






Animex AVFX – Day One

While it is towards the end of the year and a week out of my work schedule adds a tiny bit of stress onto my mind, I couldn’t pass up the opportunity to spend a week at the Animex Festival learning from industry. The college had offered to take us for the Games side of the event only, however Kelly and I signed up for the VFX side too and have been given official leave.


Today’s schedule

  1. The FX of Lego Batman – Matt Estela, VR supervisor, Animal Logic
  2. Animating Ethel and Ernest – Peter Dodd, Animation director, Lupus Films
  3. Creating the Characters of Fantastic Beasts and Where to Find Them – William Gabriele, Rigging TD, Framestore
  4. The Animation of Fantastic Beasts and Where to Find Them – Colin McEvoy, Animation supervisor, Double Negative
  5. The VFX of Rogue One: a Star Wars Story – Bradley Floyd, 2D Sequence supervisor, and John Seru, Generalist lead, Industrial Light and Magic

After a short break there would be a social meet and greet on the night, allowing us to talk to all of these talented people and hopefully learn a thing or two in the process.

The FX of Lego Batman


Now recapping an entire lecture would be insane, so between Kelly and I we took some shorthand notes. I’ll try to highlight some of the major points and leave the rest out.

The Lego Movie was a total success; most people didn’t expect it to be, but by god we were all proven wrong. For The Lego Movie and Lego Batman, Lego provided the brick database used in Lego Digital Designer, giving the teams access to 3D models of the entire catalogue from the beginning of development. This allowed the assets to go straight into the pipeline. It also goes some way toward explaining how the entire development only took two years on a movie this size (more on this in a moment).

Originally, a Lego Ninjago movie and a Lego Movie sequel were planned to happen before a Batman movie. However, after the initial pitch and first draft concepts were green-lit, it was moved ahead of both other productions, giving the team a mere two years to complete the project rather than four. To achieve this the script writers, story writers and art department ran in parallel to speed up pre-production.

The Lego Movie was mostly kept to smaller desktop sets; however, the team didn’t feel they could accurately portray Gotham City in such a way and wanted to aim for a larger scope, with an atmosphere where (to quote Matt) “the movie should be as colourful as the Joker, but as dark as the Dark Knight”.

The next tidbit was what completely made the Lego Batman movie for me: they tried to reference as much of Batman’s 78 years of history as possible, drawing design references from the comics, the animated shows, the 60s show etc. For anyone in the know, this made the movie a laugh a minute.

Now for the technical part. The in-house renderer for the movie was called Glimpse, which originally started as a plug-in for Renderman and slowly grew into its own renderer. Glimpse would trace the rays and feed them back to Renderman. Towards the end of the movie some scenes were rendered entirely with the now independent Glimpse, but otherwise the bulk of the movie was still Renderman with plug-ins. The software is still being developed in house for future Lego movies. To wrap this up, a single Lego stud had 1000 vertices, which…blows my mind to be honest. Gotham City was in the trillions of vertices, and it is absolutely staggering that they brought rendering down to a mere three minutes a frame. Hats off to the technical team at Animal Logic!

Also a big thank you to Matt Estela on the night during the social event for some cracking industry advice, a confidence pep talk and brilliantly funny career stories! If I manage to learn any Houdini over the summer, it’s thanks to this guy.

Ethel and Ernest


Going into this lecture I wasn’t aware of the movie being discussed, but as soon as the first slide came up I recognised the art style and knew it had to be related to The Snowman in some form or another. Turns out yes, it’s an adaptation of another Raymond Briggs book, about his parents. While the movie only took a year to make, it saw its share of development hell and took eight years to gain momentum before production began. My hat’s off to all the dedicated people that believed in the project and kept pushing to make it a reality. The movie was created at Lupus Films (responsible for the shorts The Snowman and the Snowdog and We’re Going on a Bear Hunt), but this was their first feature film.

Raymond’s art style on paper is a rough one, known for very quick pencil sketches which he would photocopy before reinforcing the line work and over painting with gouache. It makes for a very hand crafted and personal feel, something the development team wanted to recreate for the movie. Initial tests were done to animate old school using pencil and layout sheets, but this was too time consuming and costly. So for this talk I’d like to focus on some of the methods and cheats that were used to aid development.

Lupus made great use of the tool TVPaint with the plug-in LazyBrush, which Peter Dodd had made use of while working on the recent TSB ad campaigns.

This tool allowed large areas to be quickly coloured using strokes across an area. The magical plug-in reduced the colouring per frame from 30 minutes to 2-3 minutes. To achieve the hand painted colour feeling, photographs and scans of textures, paint swatches and other physical media were fed into LazyBrush.

Going back up a level to TVPaint itself, many attempts were made at making pencil strokes look as real and natural as possible. Line work was duplicated, blurred and inverted for a white outline, colour/texture added to this layer, and then the darker hard lines added back on top. To aid the animators with the drawing process, Animbrushes were created containing 30-40 turnarounds of character heads, which allowed quick stamping of varying angles. Oh, and finally, many backgrounds were modelled in 3D to use as perspective reference; such a cheat is great to see being used in industry as it’s something I’ve done myself in the past.
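That duplicate–blur–stack trick is simple enough to sketch outside TVPaint. Here’s a toy version in plain JavaScript (my own illustration, not anything shown in the talk), treating images as 2D grids of 0..1 ink density:

```javascript
// Toy sketch of the white-outline trick: duplicate the line work,
// blur it, and use the soft copy to lighten the colour layer before
// stamping the dark hard lines back on top.

function boxBlur(img, radius = 1) {
  const h = img.length, w = img[0].length;
  const out = Array.from({ length: h }, () => new Array(w).fill(0));
  for (let y = 0; y < h; y++) {
    for (let x = 0; x < w; x++) {
      let total = 0, count = 0;
      for (let dy = -radius; dy <= radius; dy++) {
        for (let dx = -radius; dx <= radius; dx++) {
          const ny = y + dy, nx = x + dx;
          if (ny >= 0 && ny < h && nx >= 0 && nx < w) {
            total += img[ny][nx];
            count++;
          }
        }
      }
      out[y][x] = total / count;
    }
  }
  return out;
}

function whiteOutlineComposite(lines, colour) {
  const halo = boxBlur(lines); // blurred copy acts as the white outline mask
  return lines.map((row, y) => row.map((ink, x) => {
    let v = colour[y][x];
    v = v + halo[y][x] * (1 - v); // lighten towards white near the lines
    return v * (1 - ink);         // dark line work back on top
  }));
}
```

Running it on a single ink pixel over a mid-grey colour layer, the pixel itself stays dark while its neighbours brighten, which is exactly the halo effect described above.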

Creating the Characters of Fantastic Beasts and Where to Find Them


This talk by the very talented William Gabriele from Framestore was easily one of the more complex of the day. He walked us through the rigs used by some of the goblins used in the movie, specifically Gnarlak and the goblin singer in the same club scene.

Framestore worked closely with JK Rowling to create these new characters, and the film overall had 500 artists working on 36 full characters across 400 shots. The amount of work involved is staggering. Fifteen million CPU hours were required to render these shots, which we were told would have taken 1700 years on a single CPU, and it boiled down to 274TB of rendered data. You see stats like these and realise movies are on a whole other level to games.
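Those numbers hold up to a quick back-of-the-envelope check:

```javascript
// Sanity check on the quoted render stats: 15 million CPU hours
// expressed as years of wall-clock time on a single CPU.
const cpuHours = 15_000_000;
const hoursPerYear = 24 * 365.25; // allowing for leap years
console.log(Math.round(cpuHours / hoursPerYear)); // 1711 — right in line with the quoted 1700 years
```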

As for the rigs used, I never even considered that movie level rigs would have musculature, tendons and accurate skeletons rigged too. I give my utmost respect to technical artists able to work at this extreme level of anatomy. I wish I had examples to show but like all of the talks, photographs and filming were not allowed due to sensitive behind the scenes material. For Gnarlak they took a standard male human body rig and adapted the proportions to give a starting point.

As for the animation, most of it was motion capture from the actors giving the performance, which was then put through a validation process, locked to the proportions of the character and finished off with some manual clean up by hand.

I wish I had more to say on this one, or at least say I’ve picked up some great hint of technical knowledge but honestly it was WAY above my head in terms of complexity. Amazing talk though and very eye opening to the kind of quality required for movies these days.

The Animation of Fantastic Beasts and Where to Find Them


Another great talk revolving around Fantastic Beasts, but this time from Animation Supervisor Colin McEvoy from Double Negative, discussing bringing Frank the Thunderbird to the screen.

All kinds of different animals were looked at and integrated into Frank’s final design, such as various birds of prey, horses and, oddly, jellyfish. Due to his size the team were worried his wing span was too narrow and wouldn’t look believable; however, once animation tests were done, his two extra smaller sets of wings gave enough visual language to suggest he would still be flight capable.

Being graceful was brought up a lot in their briefs and this fed into Frank’s animation a lot, the curve and arc of his tail were a particular area where the team needed to focus on this idea and implemented many slow fluid arcs to achieve this.

I’m going to derail from the bulk of my notes here and focus on two points. Having recently animated a few cycles and had some major headaches with my own animations, Colin mentioned something that changed everything: animation layers. By using layers you can create a cycle, move to a new layer and key onto a new timeline without affecting the layer below (similar to how one would use layers in Photoshop). He would begin with his blocking phase, move to a new layer and key on 4s using straight ahead techniques, then a new layer to work on 2s and, crazily, another layer to work on 1s. That is an amazing amount of work but also a crazy level of control, down to the tiniest detail. This is something I’ll have to test in animations next year; it could really help. Many thanks to Colin for spending some time on the night explaining this to us in more detail, we really appreciated it!
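To make the idea concrete, here’s a toy model of my own (not Maya’s actual animation layer API): each layer keeps its own keys, and evaluation falls back through the stack, so keying on a new layer never disturbs the blocking underneath.

```javascript
// Toy model of animation layers: the topmost layer holding a key at a
// frame wins (override-style), and frames without a key on the detail
// layer fall through to the blocking pass below.
class AnimLayerStack {
  constructor() { this.layers = []; } // bottom first; each is Map(frame -> value)

  addLayer() {
    this.layers.push(new Map());
    return this.layers.length - 1;
  }

  setKey(layer, frame, value) { this.layers[layer].set(frame, value); }

  evaluate(frame) {
    for (let i = this.layers.length - 1; i >= 0; i--) {
      if (this.layers[i].has(frame)) return this.layers[i].get(frame);
    }
    return null; // no layer has a key at this frame
  }
}

const stack = new AnimLayerStack();
const blocking = stack.addLayer();
stack.setKey(blocking, 0, 0.0);
stack.setKey(blocking, 12, 10.0);  // rough blocking poses

const detail = stack.addLayer();
stack.setKey(detail, 4, 3.5);      // refine on 4s on a fresh layer

console.log(stack.evaluate(4));    // 3.5 from the detail layer
console.log(stack.evaluate(12));   // 10 falls through to blocking, untouched
```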

Colin also mentioned he occasionally animates just the silhouette to focus on the shapes and the negative space, refining not only the animation but the composition. The field chart tool comes in handy for this: independent of the camera, it lets you track your character’s movement without any background, great for tracking arcs and making sure your character is framed perfectly.

The VFX of Rogue One: a Star Wars Story


Being a huge Star Wars fan, was I excited for this? Does the Pope wear a funny hat?! The only thing stopping me from jumping up and down in my seat was the tiny lecture theatre seats! Bradley Floyd (2D Sequence supervisor) and John Seru (Generalist lead) from ILM London were here to discuss the process of concepting and creating the city of Jedha for the movie.

Harking back to the visuals of the old movies, it was lovely to hear they went straight back to Ralph McQuarrie’s concepts for the originals for inspiration, along with location scouting and visual references from the Middle East, Egypt and Africa.

Here is the single greatest thing I learnt from this talk, and it is an absolute source of motivation from now on. The model for Jedha as seen above was put together through a mix of Maya, Nuke and ZBrush, and you might think it took a team of 20-30 artists, maybe? Two artists. Just two artists modelled the entirety of Jedha, and I am blown away by it. How was this possible, you ask? By using simple building models which were unique on each side, cleverly rotating each building and arranging them into small districts surrounded by the occasional bigger hero building. In total, Jedha only had around 30 unique buildings. Layered on top of this were smoke/steam effects; the team built up a small library of these and placed them around pipes/grates for added effect.
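The reuse trick is easy to sketch. This is a toy illustration of my own (the real work was done in Maya/Nuke/ZBrush): a pool of ~30 unique models, each randomly rotated per placement so repeated buildings read as distinct from every angle.

```javascript
// Toy sketch of the Jedha trick: thousands of placements from a tiny
// pool of unique buildings, varied only by rotation and position.
const UNIQUE_BUILDINGS = Array.from({ length: 30 }, (_, i) => `building_${i}`);
const ROTATIONS = [0, 90, 180, 270]; // each side of the model is unique

function buildDistrict(rows, cols, rng) {
  const district = [];
  for (let y = 0; y < rows; y++) {
    for (let x = 0; x < cols; x++) {
      district.push({
        model: UNIQUE_BUILDINGS[Math.floor(rng() * UNIQUE_BUILDINGS.length)],
        rotation: ROTATIONS[Math.floor(rng() * ROTATIONS.length)],
        position: [x, y],
      });
    }
  }
  return district;
}

// 12 small districts of 8x8 placements from only 30 unique models
const city = Array.from({ length: 12 }, () => buildDistrict(8, 8, Math.random));
const placements = city.reduce((n, d) => n + d.length, 0);
console.log(`${placements} placements from ${UNIQUE_BUILDINGS.length} unique models`);
```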

For the explosion of Jedha (sorry to anyone that hasn’t seen the movie yet), the team researched nuclear weapon test videos, specifically looking at clouds, shockwaves, how the ground was affected, heat and exposure. A slightly grim task, but the results speak for themselves. To achieve a lot of this, the terrain that had been created in Maya was procedurally shattered and had various FX simulations added to it (tectonic cracking, debris, dust etc.). Amazingly, most of what you see in the final movie was rendered in Arnold; I’ve obviously underestimated Arnold as a renderer and should spend more time figuring out how it works.

I had a few quick words with both John and Bradley on the night, just about general paths into the industry and the kind of things to aim for in a showreel. Both assured me that degree education absolutely isn’t necessary as long as you have talent; keep your showreel short and focus only on your absolute best work. Don’t build up to the best; open with it and wow early. As for movies, their suggestion was simple enough: model and texture well enough that it would blend into reality and not be out of place in a video composite. Thanks for the tips and your time guys, along with generally nerding out over Star Wars!

VFX – Final Idea Pt.4

To catch up from last time, I now have colour corrected, sky replaced footage and an animated ball of light travelling down a road. Time for further alterations! It has been a while since I posted and a lot has been updated, so apologies, but I’m going to run through a lot quite quickly.


Edits Made

  1. Made use of chroma keyed wind turbine footage found online, these were scaled, colour corrected and set in place.
  2. Added a farmhouse shack in the background from pre-keyed footage found online, scaled colour corrected and placed.
  3. Simulated the light passing the storage crate and the actor using blue masks and animating the outline.
  4. Added a stock lens flare from footage crate and boosted the intensity of its colour, also changed the hue to be more blue.
  5. Added a tail to the energy ball using a CC Particle World effect; getting it to follow the light was a massive pain which required some expression code (found online).
  6. A CC Drizzle and CC Rainfall effect were added; honestly this wasn’t part of my original planning but it felt right to add anyway. I still want a lecturer to check this out for feedback if I have time.
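For anyone curious about item 5, an expression along these lines does the job (I can’t recall the exact one I found, so this is a rough reconstruction with a made-up light name). It converts a light’s comp-space position into CC Particle World’s producer space, which is normalised around the comp centre in units of comp width:

```javascript
// Hypothetical After Effects expression, applied to the CC Particle World
// Producer > Position X property; the light layer name "Energy Light" is
// made up for illustration.
(thisComp.layer("Energy Light").position[0] - thisComp.width / 2) / thisComp.width
```

The same pattern with `position[1]` and the comp height offset handles Position Y.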

Again, apologies that all of that is a blur with little explanation; time is of the essence at the moment and I have to continue on with my second scene (which is actually the first). While I had tutorials to run from before, I was now flying solo and hadn’t yet put much thought into how I would achieve the meteor/energy ball descent from the sky. Time to get creative!

To kick off, I had a dig around footagecrate and YouTube to see if there was anything pre-made and free that might be of help. While YouTube did have a couple of pre-keyed meteors, I thought this might be more appropriate:

After gathering a selection of shockwaves and electrical static from footagecrate, I began some quick experiments to see what looked good. Given how short a time the above energy beam was on screen for, the long tail made it look ridiculous, so I ended up masking most of it out, parenting a new light to the front and adding another hidden solid into the clouds to show some of this light.

I added some electrical static effects to the clouds before the appearance of the energy ball, and a few layered shockwaves upon it entering. I did most of this in an hour during down time after finishing up the game jam assessment, I’ll give myself a pat on the back for that! After receiving some feedback, Gary suggested a couple of things:

  1. There needs to be some kind of reflection or highlight on the water’s surface as this moves across the sky.
  2. The shockwaves need to have more impact.
  3. While the electrical static is a nice touch, the scene could do with either some actual lightning or lighting effects within the clouds to suggest the storm is above the cloud layer. I was directed to some recently posted Video Co-Pilot tutorials to help me with this.

So, problem one: masked a new light blue solid, feathered it and animated it across the water. Check one! Increased the size, opacity and exposure of the layered shockwaves. Check two! The next one was not so easy; I followed portions of this:

Here is the end result:



As you can see, I copied over the rain effects from the other scene, changed the rain direction and, although it’s minor, added a floor plane on the road to catch the CC Drizzle.

Here is the ‘finished’ piece, at least in its current form before getting feedback.


VFX – Final Idea Pt.3

I’ve had a bit of a break since my last post, so a quick update. I shot both scenes over the Easter break, sat on the footage for two weeks and then tried to key it. This didn’t end well; the day was so overcast and grey that far too much of the scene was being keyed out when removing the sky (I’ve opted to attempt a full weather replacement for extra detail). Thankfully, having started the project early, I had plenty of time, so I recently ran out to re-shoot and the new footage is working a treat!

I’d originally planned to do a blow-by-blow of the process, but after getting stuck into the project I realised I’d be here forever doing that. This wasn’t like the quick half-hour class tutorials; this was a far larger endeavour. So I’ll summarise the major parts.

Here are two stills from each of the camera changes in my scenes; as you can see, they mirror the storyboard almost spot on (even if it took a bit of location scouting).

As I’ve already mentioned the new footage has keyed much easier thanks to a lovely bright sky that day, so I’ll skip over that process and show the sky replacement with colour adjustments.

The skies are all stock footage from the internet, with a few speed alterations to keep them believable. I lost some minor parts of the original footage doing the key but layered sections back in to fill these holes. All of the footage was then hit with the same colour LUT, found online, which was pretty good at stripping out the warmth from whatever it touched; along with this there was some manual tweaking using levels, curves and exposure on certain layers. Overall I’m pretty confident it now looks like a dark, miserable day, perfect for layering some lights onto! As for flipping the second scene’s footage, that was a choice made to more closely reflect the Video Co-Pilot tutorial, to make following along later a little easier.

For now I decided to take a break from staring at my footage and started following along with the VCP tutorial on making my energy ball. Gary had said we could use any assets we find online, and while there are energy balls on footagecrate that I could easily attach a light to, I wanted to make my own just to say I could. After an hour following instructions, I had this rendered out.

I continued following the instructions afterwards, given my footage was now ready to use. This involved setting up a new solid as a 3D layer and trying to layer it over the road so the perspectives matched. This was achieved by using a camera rather than moving the solid, as it was easier to manipulate; the solid was then set to accept lights and its blend mode changed to classic colour dodge. The layer now blends into the road and gives the illusion that the light is being cast on the road itself.
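For reference, the colour dodge blend this relies on is commonly given per channel (values 0..1) as base / (1 - blend), which is why the bright core of the solid pushes the road towards white while its black areas leave it untouched:

```javascript
// Colour dodge per channel: bright blend values push the base towards
// white; black in the blend layer leaves the base as it was.
function colorDodge(base, blend) {
  return blend >= 1 ? 1 : Math.min(1, base / (1 - blend));
}
console.log(colorDodge(0.4, 0.0)); // 0.4 — black solid leaves the road as-is
console.log(colorDodge(0.5, 0.5)); // 1 — bright light blows out towards white
```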

The energy ball was imported into the composition with a hue edit, a new point light was created and the energy ball comp parented to it. Finally, this was animated along the path of the road.


I’d show the video but I’ll leave that until the end when I’ll upload the completed project. Why spoil the surprise early, right? I shall return soon with further updates!

Analysis of Game Design Pt.3

Sadly it is time to ditch my beloved Sega for something a little more recent and on the cutting edge. Six months ago I bought an HTC Vive and struggled to find good content that I would actually call a game rather than an ‘experience’. Then along came the gem that is Vanishing Realms. While it isn’t hugely famous like the other titles I’ve covered, and therefore information may be a little sparse, I’ll do my best to cover this little VR gem. I’ll be covering the same criteria as before:

  • How available hardware impacted design.
  • Intended audiences for the game.
  • Critique the game. Talk about game design, visuals, mechanics & performance.

Vanishing Realms


Hardware Available at the Time


This point seems a bit obvious given it’s plastered right above: the HTC Vive. I won’t do a write-up of what the Vive is, as I already have a quick review of the hardware from last year on this blog (which actually made me go and buy one).

VR Experience

I always wondered how such a fleshed out Vive game could be released alongside the launch of the hardware itself; we’re still waiting for major Vive titles as people get to grips with developing for it. After some investigation for this post, it all makes sense. The sole developer, Kelly Bailey, worked for Valve up until February 2016, contributing to every Half-Life game and Portal. It seems obvious now that he would have had early access to dev kit versions of the Vive long before its actual release. The more astounding fact is that he made Vanishing Realms by himself in eight months before its launch on Steam.

Target Audience

This is kind of hard to say: there is no age rating information anywhere for it, or even any developer comments on intended audiences. The intended market is already slim to begin with, as uptake of VR systems is slow due to the high price tag, but I feel the real audience is anyone that grew up with a love of classic role-playing games such as tabletop D&D and digital titles like The Legend of Zelda. It has a very pen-and-paper, swords-and-sorcery kind of vibe. That said, given the lack of good titles for the Vive I would not be surprised if a lot of people bought the game just for something to try on their VR system; it’s how I came across it.


Game Design & Mechanics

Vanishing Realms plays like you would expect an old dungeon crawler to play: explore, collect items and gold, fight monsters, get more weapons and eventually fight a boss. It’s a time-old system taken straight from the pen and paper of old. However, Vanishing Realms changes this forever by putting you inside the dungeon using a combination of teleporting and the Vive’s room-scale tracking. You can kneel, lie down, jump and even throw items by simply doing it yourself. I recall a rather tricky wall of traps I had to carefully navigate around on my hands and knees; from an outsider’s perspective I must have looked crazy.


There are several logic puzzles to solve in various areas of the game that rely on some of the mechanics already mentioned, so there is a fantastic element of personal interaction, and that’s before I even get to the combat. It takes a little while before the game presents you with an enemy, but once it does you’re confronted with hulking 6-7 ft skeletons swinging swords and axes at your head in VR. It takes you off your feet a little, and your very first fight will inevitably contain a degree of flailing. Once you adjust, all movements are tracked: you can block by clashing swords, strike as you please when the opportunity arises, or parry with a shield and then strike with your off hand. It is entirely down to you as a player how you choose to fight. The game later introduces a two-handed weapon mechanic where the primary hand guides the weapon and the second hand simulates a grip further down the shaft, with the distance between the hands determining strike reach. It’s fun and involved, and once battles become more intense it almost becomes a workout. I’ve smashed several light bulbs in my own living room after losing my sense of presence and getting invested in swinging a weapon or throwing a rock.
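To make the two-handed grip idea concrete, here is a minimal sketch of how such a mechanic could be implemented. This is purely my own illustration, not the game's actual code: the primary hand anchors the hilt, the vector toward the second hand sets the weapon's direction, and a wider grip extends the effective strike reach. The `blade_length` value is an invented example.

```python
import math

def two_hand_grip(primary, secondary, blade_length=1.2, min_grip=0.1):
    """Illustrative two-handed weapon pose from two tracked controller
    positions (x, y, z). Not the game's actual implementation."""
    # Vector from the primary (hilt) hand to the secondary (guiding) hand.
    dx, dy, dz = (s - p for p, s in zip(primary, secondary))
    grip = math.sqrt(dx * dx + dy * dy + dz * dz)
    grip = max(grip, min_grip)                     # avoid a zero-length direction
    direction = (dx / grip, dy / grip, dz / grip)  # unit vector along the shaft
    reach = blade_length + grip                    # wider grip -> longer reach
    tip = tuple(p + d * reach for p, d in zip(primary, direction))
    return direction, reach, tip
```

The key behaviour is that sliding the second hand further down the shaft increases `reach`, which matches how the game makes grip spacing determine strike distance.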


The overall visual style is cartoony and simple. Given the short development time it doesn’t surprise me the developer went for a simpler style, but VR also demands a lot from your graphics card even when a game looks relatively low poly, so the decision was likely twofold: save time and keep the framerate high enough to avoid motion sickness. The same goes for the texture quality in game, which doesn’t hold up under scrutiny when you consider it’s a modern title. However, none of these points count against the gameplay; once invested in playing, such things become irrelevant and you stop paying attention.


Otherwise the colour palette is what you’d expect from a medieval fantasy: stone greys, browns, blues and the occasional warm palette for fire-lit areas. I’ve heard tales that some of the assets used in game are paid-for pre-built assets, and while I can’t confirm this one way or the other, it wouldn’t surprise me given the short development time.


VR performance is a mixed bag and usually comes down to hardware. I personally ran the game on a modern Skylake i7, 16GB of RAM and a GTX 1070 and it ran flawlessly. On anything less the game does run into some framerate issues, but this is just the nature of VR in its current state.

I also frequently had issues with dropped items resting ever so slightly below the floor plane, meaning I couldn’t pick them up and more often than not ended up accidentally ramming my Vive controller into the floor while trying. This, along with a few clipping issues with enemies, was the worst of my problems, so nothing serious enough to cripple gameplay. I’m hoping that in the future the price of VR and the hardware required to run it will start to drop and allow more widespread use of the technology. I would love to see more games like this, as it’s currently a rare gem among a sea of short ‘experiences’ with little gameplay. I spent so long inside the Vive playing this that I came out with motion sickness for a whole day. Do I regret it? Absolutely not, it was an amazing time!


Analysis of Game Design Pt.2

● How available hardware impacted design.
● Intended audiences for the game.
● Critique the game. Talk about game design, visuals, mechanics & performance.



Available Hardware at the Time


Talking about the Dreamcast and its development could honestly take days. To shorten this I’m going to skip most of its earlier (failed) development cycle and all the in-fighting between Sega branches, and stick to the final product. In my opinion, Sega attempting to design the console four times, and the huge amount of wasted R&D money that entailed, contributed to their financial trouble and eventual downfall in the hardware market. However, don’t misinterpret me: I ADORE my Dreamcast.


Sega released the Dreamcast’s predecessor, the Sega Saturn, on November 22nd 1994 in Japan and almost immediately began research into their next machine, admitting right out of the gate that Sony had bested them at their own game with the launch of the PlayStation. While the PlayStation technically had less raw power than the Saturn, it was easier to program for, had more available development tools, and Sony had already struck deals with some major developers before launch. The new kid on the block had burst onto the market with an unpredictable force.

The Dreamcast was eventually launched in November 1998 in Japan and was the first of the 128-bit console generation. I won’t get into hardware specifics, but it was the first console to use off-the-shelf parts more similar in nature to those of a personal computer. Games could be written directly for the hardware or against the DirectX API libraries, as the system could also run an embedded version of Windows CE. This made it easy to port PC games to the platform. Compared to anything before it the DC was a console powerhouse, and would only be rivalled by the soon-to-arrive PS2.

However, this article is tied to Shenmue specifically, and its development started on the Saturn, pushing that system to its theoretical limits before the team was told to stop Saturn development but continue the project. This left them in a multi-year window where they didn’t know their target hardware. Yu Suzuki, Shenmue’s creator, was quoted as saying he wanted the game to be a console title without limits, and he personally put down a prediction for the upcoming hardware specifications. While there is no evidence to support this, I wouldn’t be surprised if this flagship title had some influence over the final hardware, considering the $70 million poured into the game’s development. A monumental sum back in the 90s.

Intended Audience

The age rating for Shenmue at the time was Teen, or in the UK 12+. This was the first time anyone, and I mean anyone, had attempted such an in-depth, open-world storytelling experience in a game. In an era of beat ’em ups and arena shooters, it stood alone as a marvel of technology and game design. I take that 12+ literally: regardless of age or gender there was something in Shenmue for everyone. At its core it was a human drama based around discovery and revenge.

There were, however, concerns that an attempt to re-create a perfect mid-80s Japan wouldn’t resonate with Western audiences, so the game needed a full English localisation to ease players into the unfamiliar setting. Due to the size of the game this was an enormous challenge at the time, and famously some of the voice casting decisions were… questionable, dragging down the quality of the localisation.

Game Design & Mechanics

Trying to sum up Shenmue is going to be a challenge. Originally Suzuki wanted the game to be an RPG based on the Virtua Fighter series, with voice acting, elaborate combat sequences and a cinematic approach. The tie-in to Virtua Fighter was dropped once development left the Sega Saturn behind, but plenty of evidence of it still exists online.


The final core game has you travelling across a large open environment full of minigames, subquests and character interaction: things we take for granted today but monumental in the 90s. The player character could interact with a lot in the environment (cupboards, drawers, fridges, vending machines etc.) and could hold objects up to the camera to rotate them a full 360 degrees. This blew my mind back then. On top of this the game had a martial arts fighting mechanic which would be required frequently, likely a holdover from when the game was going to be based on Virtua Fighter. The in-game weather was even based on meteorological records from the 80s for the exact dates the game is set. Again, the attention to detail used to craft this game world was insane.

All of these mechanics were used to discover where your father’s murderer had escaped to, in the style of traditional Chinese cinema. I was always fond of the time-of-day system: all NPCs had a scripted daily routine based on the clock, stores opened and closed at their own set hours, sleep was required before a certain time, and some dialogue interactions with NPCs depended on meeting them at the right time of day. I particularly enjoyed the section where you take up a part-time job to look for information and have to drive a forklift truck during your contracted hours. All of this was such a monumental effort that the development team had to invent a new type of data compression to fit the game on the eventual 4 discs. Without it they estimated it would have taken up to 60 optical discs.
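To show roughly how a scripted daily routine like that can work, here is a tiny sketch. This is my own illustration, not Shenmue’s actual system; the shopkeeper and the hours are invented. Each NPC carries a table mapping a start hour to an activity, and at any given time they’re doing whichever entry most recently began.

```python
def npc_activity(schedule, hour):
    """Return what an NPC is doing at the given hour (0-23).
    schedule maps a start hour to an activity; the NPC follows
    whichever entry most recently began. Illustrative only."""
    current = None
    for start_hour, activity in sorted(schedule.items()):
        if hour >= start_hour:
            current = activity
    if current is None:
        # Before the day's first entry, the NPC is still on
        # yesterday's last activity (e.g. asleep overnight).
        current = sorted(schedule.items())[-1][1]
    return current

# Hypothetical routine for a shopkeeper NPC; names and hours are invented.
shopkeeper = {6: "walk to shop", 9: "open shop", 19: "close shop", 22: "sleep"}
```

The same lookup can gate dialogue too: a conversation is only available if `npc_activity` returns the right state when the player approaches.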

I’ve left it till last, but this topic couldn’t end without bringing up the QTE (quick time event). Shenmue gave birth to the idea of quickly reacting to an on-screen prompt for a button press during a pre-scripted cutscene or event. While I accept the mechanic in this one game, the idea became a plague that spread to many games over several years; we’ve only recently seen the end of it. The intent was to give you some input during cutscenes, to feel like you’re still part of the action. More often than not, though, they just became a frustrating exercise in reaction speed against increasingly complex inputs, requiring frequent reloads until you memorised the pattern.
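The mechanic itself is trivially simple, which is partly why it spread so far. Here is a minimal sketch of a chained QTE check, my own illustration rather than anything from Shenmue’s code; the 0.8-second window is an invented example value.

```python
def resolve_qte(prompts, presses, window=0.8):
    """Chained quick time event check: every prompted button must be
    matched in order, each within its reaction window (seconds since
    that prompt appeared), or the whole event fails and the player
    reloads. Illustrative sketch only."""
    if len(presses) != len(prompts):
        return False  # missed or extra inputs fail the sequence
    return all(pressed == wanted and 0.0 <= elapsed <= window
               for wanted, (pressed, elapsed) in zip(prompts, presses))
```

The “increasingly complex inputs” complaint above amounts to longer `prompts` lists with shorter windows, which is exactly why memorising the pattern became the only way through.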

All told, it isn’t surprising it held the Guinness World Record for most expensive game ever made for quite a number of years. Sadly Sega never made the money back on it, and it became one of the many reasons for their eventual exit from the console market.


All the visual influence for Shenmue was based on a mid-1980s image of Japan, specifically the port city of Yokosuka, where the US set up its naval base post-WWII. Nobody had to come up with an art style; it was all based on the real-world architecture, fashion, pop culture and overall history of the period. There isn’t much to be found online about its visual influence, probably because it was all real-world reference.


The game ran well on the specifications the Dreamcast finalised with, transitioning from game to cutscene in the seamless manner we’ve come to expect today. However, performance isn’t always about framerates. Likely due to the compression and the sheer size of the game, load times could be quite lengthy between areas, never helped by the Dreamcast’s famously noisy disc drive.

I should mention the sequel here; some of the development costs for the first game carried over into the second as they were back-to-back projects. However, the environment detail for Hong Kong in the second game was scaled up significantly and the Dreamcast lost a lot of frames. The sequel was released at the time of the Dreamcast’s demise, and in an effort to help sales the game was also ported to the original Xbox, where it ran considerably better. The Xbox version is also the only version with an English voice localisation, as the DC release was rushed to shelves before the console died. It’s amazing to think the development spanned the life of two consoles, took both of them to their limits and still required more power. As projects go it was certainly ambitious.



High Poly Pirates – Wk 4&5

Yes, I admit that somewhere within half term, week four of this series didn’t surface. I shall attempt to get you all up to speed this week.

Last week, aside from some minor cleanup, I mostly focused on getting the old hand holds into the door that is now the top of the desk. I knew this would end up becoming a boolean operation and was worried about what damage it would do to the mesh. As it turned out, not much when you’re getting pretty good at cleanup. Sadly I don’t have screenshots from the moment, just what remains in old save files.

Most of the issues were about restoring edge flow through the new boolean geometry, which mostly boiled down to cutting away a section of tabletop and stitching it back together from scratch using the new edges from the boolean. Once this was in place, a simple extrude and bevel later, our new hole had a handle.


Fast forward a week and I needed to crease all my hard edges to prepare the mesh for Mudbox; this would prevent edges that needed to stay hard from being warped. I vastly underestimated how much time this would take, thinking oh, ten, fifteen minutes? It took most of a three-hour session to achieve something ideal. New ngons were discovered, which consumed some of this time: jumping up and down the quick smoothing views highlighted shading errors, which usually meant an ngon somewhere. So this isn’t the most interesting or detailed blog post, as I spent most of the day creasing and hunting shading errors. Here is how it looks.
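The ngon hunt itself is a simple rule that tools can automate. Here is a toy stand-in for what I was doing by eye in Maya, purely illustrative: given faces as tuples of vertex indices, flag anything with more than four sides, since ngons subdivide unpredictably and cause exactly the shading errors described above.

```python
def find_ngons(faces):
    """Return the indices of faces with more than four sides.
    Each face is a tuple of vertex indices. Toy illustration;
    real meshes would come from the modelling package."""
    return [i for i, face in enumerate(faces) if len(face) > 4]
```

In Maya the equivalent workflow is selecting by face-side count with selection constraints, which would have saved me a lot of jumping between smoothing views.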


As a visual test to prove the creasing works, here is the finished low poly with a mesh smooth applied. Seems to be working just fine!


Sitting back, I mulled over the UVs after doing an automatic unwrap and, with some input from Matt, decided that space could be saved on the map by stacking shells for the legs. This would mean both leg textures would be identical and I’d rely on variation in the tabletop to distract the eye. Now… this technique is an old nemesis of mine that I’ve never managed to crack since the early days of the spaceships/room project. I know I need to take a half, lay out the UVs, and then mirroring the geometry should stack the shells automatically; it’s just a case of testing it. If I’m not entirely keen on the results I may just break the legs onto a separate texture altogether.
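The idea behind shell stacking can be shown with a little UV arithmetic. This is an illustrative sketch of my own, not what Maya does internally: mirrored geometry carries UVs flipped in U, so flipping the duplicate shell about its own centre drops it exactly on top of the original, letting both legs share the same texel space.

```python
def stack_mirrored_shell(shell_uvs):
    """Flip a UV shell in U about its own centre so it lands on top
    of the original shell. shell_uvs is a list of (u, v) pairs.
    Illustrative only."""
    centre_u = sum(u for u, _ in shell_uvs) / len(shell_uvs)
    # Reflecting u about centre_u: u -> 2*centre_u - u keeps the shell
    # in the same footprint, just mirrored.
    return [(2 * centre_u - u, v) for u, v in shell_uvs]
```

The trade-off is the one mentioned above: both legs sample identical texels, so any painted wear or grime repeats, and the tabletop has to carry the visual variation.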

To kick off this process, I’ve detached the geometry of the legs from the tabletop and will work on unwrapping and mirroring one of them.


Till next week!


Analysis of Game Design Pt.1

The class has been tasked with picking some famous games, or something we’re familiar with, and talking about them from a game design perspective. There is so much scope here, so I’m going to keep it to things that have meant a lot to me over the years; hopefully this will keep things interesting for both of us. I may even learn something new! Here are the topics that need to be covered:

● How available hardware impacted design.
● Intended audiences for the game.
● Critique the game. Talk about game design, visuals, mechanics & performance.

So onto the first game!

Sonic the Hedgehog


Available Hardware at the Time


While Sega had some limited success in the 80s with their Sega Master System, it was never able to compete with Nintendo’s NES/Famicom and its almost indomitable market share. To compete they pushed the development of their next console and the first entry into the 16-bit console war: the Sega Mega Drive. The MD was built around a Motorola 68000 CPU, which had seen success throughout the 80s in mass-market computers such as early Macintosh machines, Amigas and the Atari ST. The console market was about to meet the power of a PC for the first time. Sega even added a secondary CPU to the system to deal exclusively with audio, leaving less workload for the main Motorola chip.

Even with this new hardware in hand it wasn’t enough to topple Nintendo, and Sega realised they were missing one major ingredient in the plan: a mascot to rival Mario himself. A team was assembled to develop a Mario-killer game using all of the newfound hardware at their fingertips. This was how Sonic the Hedgehog came to be. The game was defined by what functionality they could pull out of the new hardware. Programmer and project manager Yuji Naka had always been obsessed with fast cars and speed and wanted to bring this love into the game. So much so that the game speed was eventually dialled back during development, as the prototype moved so fast it was giving testers motion sickness. It seems the new hardware was delivering exactly what they wanted, in spades.

Intended Audiences

There was much heated debate between Sega of Japan and Sega of America over this very matter, the two clashing heads over how to reach two very different audiences for maximum appeal and profit. Japan originally styled the character in the typical Japanese style of cute features and large expressive eyes, but already knew the product had to succeed in Western territories to help sell Mega Drive units. In light of this, Sonic was also styled after such classic animation icons as Felix the Cat and Mickey Mouse.


However, the initial submission to Sega of America caused quite the stir with then-CEO Michael Katz, who immediately wrote a list of ten reasons why the IP would never succeed in the West and had it forwarded to Sega of Japan. One of these reasons was that nobody in the US even knew what a hedgehog was; it turns out they aren’t native to any of the Americas. I learnt something new today.

Sega of America went to work redesigning the character to suit a Western audience, which in turn infuriated the original Sonic Team in Japan, and this continued for some time. Eventually SOA’s design won out, on the grounds that the Master System and Mega Drive had up to that point sold better in the West than in Japan, where Nintendo still had the major foothold. Nearly all of the initial Sonic lore was created by Sega of America.

The other aspect of target audience was a split between Japan’s casual gamer market and the West’s more competitive gamer market. Yuji Naka managed to nail the appeal to both markets by framing the speed aspect of the game as a skill-based test, hoping gamers would return again and again to beat the stages as fast as possible with the world’s speediest hedgehog. Meanwhile, casual gamers could take a slower approach, as the game had no time limit, merely a time counter.

Meanwhile, I find it funny that no thought seems to have gone into the age or gender demographic, or at least I can’t find any information about it. I could say Sega were lucky that the end product was popular with all ages, but I’m sure it was considered somewhere during the development process and simply isn’t documented online.

Game Design & Mechanics

All of the staff working on the original game admit that its core gameplay influence still originated from Mario, but with that added desire for speed I’ve already mentioned. You could boil Sonic’s mechanics down to those of Mario in their simplest form: get from point A to point B in a side-scrolling platformer. As for the rest of the mechanics, I’ll have to quickly jump back to the character design process.


The original character submission, long before the blue hedgehog, was a rabbit who would pick things up with his ears and throw them at enemies. This rabbit, created by Naoto Oshima, didn’t fit Yuji Naka’s vision; Naka said the item pickup process broke the rhythm of the action and conflicted with his goal of making the game playable with only one button. So the team went back to the drawing board to look for animals that could roll into a ball, their idea for an attack. What would Sonic be if they had never nailed down that core rolling attack at this stage? It wouldn’t be the game we know now.

To elaborate, the core mechanic is to get from point A to point B while avoiding or destroying enemies using a balled-up jump or roll attack. But wait, there’s more. I previously mentioned the desire to appeal to the Western audience’s competitive streak by encouraging what we’d now call speedrunning levels. Naka had always wondered, when playing Mario, why the levels could not be cleared more quickly, and this was his foundation for that aspect of Sonic. So, for the last time: get from point A to B as quickly as possible, while avoiding or destroying enemies using the ball-up mechanic. For the most part I think that sums up Sonic pretty well.

“I like fast things and I thought that it would be nice to create a game where the more skilled you become, the faster you can complete a stage. Games back then had no backup or saving system, which meant that you had to play right from the beginning every time… As a result, the very first stage would be played time and time again, making the player very skilled at it. So we thought it would be nice if this would enable the player to complete those stages faster and that’s the basis of Sonic’s speed. We also thought this feature would help differentiate Sonic from Mario.”

—Yuji Naka, Programmer and Project Manager of Sonic the Hedgehog.

From a personal perspective all I can say is it worked. I lovingly played Sonic until the days of the PS1 when the Mega Drive took a back shelf, constantly beating my own times and finding new paths to the end of the level. I still have my MD and occasionally run through all of Sonic for the nostalgia, emulation just isn’t the same.


I’ve already talked at some length about the creation of Sonic himself and where the style came from, so I’ll take this opportunity to briefly discuss the levels themselves.

Level designer Hirokazu Yasuhara worked on all the levels single-handedly and was responsible for the theme of each. The game’s colour scheme was influenced by pop artist Eizin Suzuki, and the first level, Green Hill Zone, was designed to bear some resemblance to California in hopes of appealing to the Western market.

An example of pop artist Eizin Suzuki’s style.


There isn’t much official information outside of this, so from here on in this is purely observation and opinion. Each level has its own style and theme, often worlds apart from the others, but each is lavishly detailed to the point where it’s insane to comprehend it all being the work of one man.


The high-contrast, vivid colours wash out as the game progresses and the zones get more industrial, until your final conflict with the main antagonist Dr. Robotnik, responsible for all the robots in the other zones. Visually it’s a nice touch and gives a sense of progression as your surroundings get less natural and more grim. Though, ask any player: we all hate zone 4, ‘Labyrinth’, more than any later stage. Sonic constantly drowning was the frustration of a generation of gamers.



Sonic had some performance issues during development, but the final game ran absolutely flawlessly. The speed Naka wanted out of the game took some clever programming: early tests resulted in flickering, slow frame rates and glitchy sprite animations. Naka fixed this early in development by writing a new algorithm, which was also responsible for allowing the game’s loop-de-loops and momentum-based physics to work smoothly. I take my hat off to one programmer making all of this possible back then.
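To give a feel for what that momentum physics involves, here is a sketch of the per-frame ground speed rule widely reconstructed from fan disassemblies of the original game; this is my paraphrase of that community reconstruction, not Naka’s actual code. Positive angles mean uphill here, which is an assumption of my sketch, and the constants are the values commonly quoted for the Mega Drive original, in pixels per frame.

```python
import math

def step_ground_speed(speed, angle_deg, accel=0.046875, slope=0.125):
    """One frame of slope-aware ground speed. Gravity along the slope
    (slope * sin(angle)) bleeds speed when running uphill and adds it
    downhill; accel models the player holding forward. Reconstruction
    from fan documentation, not official source code."""
    speed -= slope * math.sin(math.radians(angle_deg))  # slope factor
    speed += accel                                       # holding forward
    return speed
```

This one rule is what makes loops work: a loop is just ground whose angle sweeps through 360 degrees, so entering with enough speed carries Sonic over the top while entering too slowly lets the slope term drain his momentum.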

That about wraps up talking about Sonic, however I’m not done with Sega by a long shot. More on that shortly!