My 5 Year Plan

Now that I have finished college, the next 5 years will be interesting.

Five years from now I plan to have obtained a BA/Master's degree in the Art of Visual Effects.
My main reasons for going to university are the connections (since it is situated in London, a hub for film and movies), the experience and a diploma.

I will be attending Escape Studios, a private university that specialises in the art of visual effects. Since the uni is situated in London, it is surrounded by many visual effects companies like DoubleNegative, Framestore, Analog, Lumiere, JellyfishPictures etc. My plan is to apply to these studios for an internship, or even a full-fledged job as a Junior Artist, which I would do on the side to pay for the course and get myself into the industry.
From there I'd hopefully move further up the hierarchy, with the dream of being a Senior VFX Artist and Compositor, with some motion design on the side.
My all-time dream is to be on the big screen, not as an actor but in the credits, under them. Trust me, it is way more gratifying than it sounds.

In the future I would like the comfort of knowing that, no matter what happens, I will have the money to take care of it. I do not exactly need a luxurious lifestyle, but I'd like a life where I have money in the bank, and as long as I can express myself and be creative, I will be happy.
I want to be able to have a family without financial stress and worries. I want to make sure that my children will have great and understanding parents, just like mine, and that they will be able to grow up in a stable household. Not so stable that they become spoilt, but it will be my job not to let that happen.

With the increasing popularity of task automation at work, jobs in construction, engineering and retail will become decreasingly available. Creativity is the one skill I think you should invest in, because it is the only skill a robot cannot be programmed to have.

I have worked, and am still working, hard in school because I think it is vital for my future. I need a proper education to accomplish all of the goals I have set out for myself.

Regarding how I am going to pay for all of this: hopefully, if I manage to land the job I am looking for, a percentage of my salary will go towards paying off the student loan over the 3-4 years. If the workplace sees potential in me and decides I meet their standards, they might even help me pay off my loan.

Cutting it short, all of these things are part of my future. University is not my future, it's only the beginning of it.

Babel – Development and Final Product

Development

This is the Asset list for my game

Here you can find the Production Log for my game

This is the final look of my map, and a Development board that we printed professionally.

Evaluation

My original map design was a supercity set in 2065. Babel is the only habitable place on planet Earth after the rest of the world was wiped out by famine and depleted resources, caused by extreme pollution. Since it's the only "active" place on Earth, it gets its energy from the PowerGem, a small stone found in the middle of the map. This stone was stolen by the government, which is using it mostly to its own advantage for nuclear weapons, and not providing electricity to the rest of the inhabitants of the city. Because of this, people are furious, fighting over a stone for just a pinch of energy. It is your job as a military soldier to take this stone and put it in the right hands, providing energy for everyone.

The feedback I got was mainly positive. Everyone liked the feel of the game and how accurately futuristic it looked, despite it not fitting the Overwatch style. The size of the map was good, not too big to lose yourself in, and not too small; perfect for the free-for-all matches the map was intended for.
The main negative feedback was that the graphics did not match the Overwatch style. This was totally my fault, but I could easily fix it by hand-painting cartoon-like textures in Photoshop instead of using Substance Painter, which uses physically accurate shading.

I was able to complete my asset list by the given deadline. This is mostly because I planned ahead and set specific deadlines for completing individual tasks. Being organised with my work helped a lot.

During the modelling process, I had a problem with the stairs collision, where the player would get stuck on a step because it was too high. I fixed this by making a custom collision mesh, shaped like a smooth ramp with no steps, that I used instead of the stairs collision.

I also ran into a problem in UE4 with the light map for my outer walls. Since the walls had two sides, the maps would collide and create weird shadows. Since you couldn't see the walls from the outside, I fixed this problem by deleting one side of the walls and making sure that "two sided material" was ticked in the wall material. That last step retains the shadows; without it, the sky would act as if one side of the wall were missing and wouldn't cast a shadow.
This is the difference between single-sided and two-sided materials.

The final problem I found was that the shootable targets, which were part of the interactivity of the level, would spawn on top of each other. I fixed this by first adding a boolean variable named "isAvailable" to the spawner BP (ticked by default), which tells a target whether that spawner is free to show a target mesh. In the Target BP I then set "isAvailable" back to ticked when the target is destroyed, making the spawner available again.

The part of the code that was missing, which fixes the problem, is setting the "isAvailable" variable to un-ticked when a target spawns in the level, making that specific spawner unavailable.
To fix this I added two functions in the level blueprint: one that finds the available spawners, and another that sets a given spawner's availability.
After this I added the following code, which finds an available spawner every two seconds, spawns a target and sets that spawner to not available.
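For readers who don't use Blueprints, here is a rough C++ equivalent of that logic. The names (TargetSpawner, SpawnTarget and so on) are hypothetical stand-ins for my Blueprint nodes, not real engine API:

#include <vector>

// Hypothetical stand-in for the spawner Blueprint.
struct TargetSpawner {
    bool isAvailable = true;   // ticked by default

    void SpawnTarget() {
        // ... show the target mesh here ...
        isAvailable = false;   // the missing step: mark this spawner as taken
    }

    void OnTargetDestroyed() {
        isAvailable = true;    // free the spawner again
    }
};

// Runs every two seconds (like the timer in my level blueprint):
// find an available spawner and use it.
void SpawnTick(std::vector<TargetSpawner>& spawners) {
    for (TargetSpawner& s : spawners) {
        if (s.isAvailable) {
            s.SpawnTarget();
            break;  // only one target per tick
        }
    }
}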

This fix was a bit of a headache, as coding is not my strong point, but with help I managed to correct it in the end.

Thinking about how I could improve for next time, I would definitely make lower resolution textures, since some of mine were unnecessarily detailed, which is great for a photo-realistic 3D still or product shot, but not for a video game map where the player will most likely not stand still to admire the detail of a small gas canister. This would also benefit running speed, as lower resolution textures save processing power.

The main reason I think I was able to create my whole map exactly how I wanted, with little to no problems, is modularity. Before even attempting to model a single asset, I planned and made an asset list where all the assets were broken down into modular parts. For example, for the main circular building I modelled the ground floor piece and the middle piece in halves, so I could easily hand-paint textures on the inside of the sections too, because, of course, you can walk into the building.
This made the final process a lot easier, since I just duplicated a single piece a few times and re-positioned them to create a high tower.

Taking a brief overview and test of the level, I can safely say that all of my assets show up properly with their corresponding textures, and the 3D normals all face the right way (there are no parts of a model where the normals are inverted, making a hole appear in it).
The animations work well and are smooth. There is only a small bug where the player's "camera bob" when walking (which simulates a slight camera shake) sometimes doesn't stop when the key is released, despite that being expressed in the PlayerBP's code.

The sound also works perfectly fine and is in sync with the animations (the gun shooting sound matches the weapon animation, and the footsteps match the character’s camera bobbing).
Regarding the HUD, everything is displayed correctly, the scoring system works well, and the instructions that appear on screen are displayed for long enough. Part of the feedback for my game was that the HUD was a bit intrusive when staying still, but when actually playing it felt like the player was wearing a mask/goggles (the result I was aiming for) and it was not intrusive whatsoever.
People said that, because it was so subtle, it seemed to disappear in a way when you were moving around and concentrating on shooting targets.
The feel of the game is right and it is overall fun to play, despite it not matching the style of Overwatch and matching Call of Duty's graphic style more closely.
Overall, the audience liked my level, finding it particularly realistic, with decently fun interactivity to play with. If we had had more time to develop this level, along with other maps, to work with multiple players at once, my map would have been pretty good for free-for-all matches.

For the future, I would definitely develop a real, working loading screen instead of an image in a HUD that shows before the level for aesthetic purposes but does not actually load anything.
I added mine to the level because it made the whole project look more professional, even though it did prevent the player from playing the game for 8 seconds, which is like a waiter handing a steak to a customer but waiting 8 seconds before allowing them to eat it. It was also used to give a brief introduction/backstory to the map, so you knew what you had to do as soon as you spawned, instead of waiting and figuring it out on your own.

Game Engines

A game engine is software designed to build video games. Developers use engines to create games for consoles, mobile devices and computers. The core functionality typically provided by a game engine includes a rendering engine ("renderer") for 2D or 3D graphics, a physics engine or collision detection (and collision response), sound, scripting, animation and AI, and may include video support for cinematics.

Two terms in the games industry that are closely related to game engines are “API” (application programming interface) and “SDK” (software development kit).
-APIs are the software interfaces that operating systems, libraries, and services provide so that you can take advantage of their particular features.
-An SDK is a collection of libraries, APIs, and tools that are made available for programming those same operating systems and services. Most game engines provide APIs in their SDKs.

Examples of Game Engines are:

  • Buildbox (2D engine)

Buildbox is a 2D game development engine that allows users to build simple games without any code. It offers a clean user interface where you can simply drag and drop design elements to create your very own game in no time. It can be used for both the Android and iOS platforms, so it works perfectly as a cross-platform game development engine. It is ideal for designing simple games like ColorSwitch, The Line Zen, SKY etc.; in fact, all of these games were designed using Buildbox. On the downside, the engine lacks 3D capabilities and you will be constrained to the features available in the development console. Overall it is a good solution for non-programmers to create games.

  • Unreal Engine (3D engine)

Unreal Engine is, graphically, the most powerful game engine on the market. Developed by Epic Games primarily for first-person shooters, it has been successfully used in a variety of other genres, including stealth, fighting games, MMORPGs and other RPGs. With its code written in C++, Unreal Engine is highly portable and is a tool used by many game developers today. It has won several awards, including the Guinness World Records award for "most successful video game engine".

Unreal Engine is developed in-house by Epic. The first version was released in 1998 and powered Unreal and, later, Unreal Tournament. Games like the Batman: Arkham series on last-gen consoles and Epic's own "Gears of War" series were all made on Unreal Engine 3 (UDK), as well as indie hits like Toxic Games' "Qube". More recently, games such as "Daylight" for PC and PS4 by Zombie Studios, and another Epic Games title called "Fortnite", were developed in Unreal Engine 4.

  • Lumberyard (CryEngine)

Amazon's Lumberyard is a free AAA game engine which can be used for Android, iOS, PC, Xbox One and PlayStation 4. It is based on CryEngine, a game development kit developed by Crytek. With cross-platform functionality, Lumberyard provides a lot of tools to create AAA-quality games. Some of its best features include full C++ source code, networking, Audiokinetic's feature-rich sound engine, and seamless integration with AWS Cloud and the Twitch API. Its graphics are accelerated by a range of terrain, character, rendering and authoring tools which help to create photo-quality 3D environments at scale. Pricing is a major competitive advantage of Lumberyard: there are no royalties or licensing fees attached to game usage. The only cost associated with the tool is the required use of AWS Cloud for online multiplayer games, but that comes with the advantage of faster development and deployment, proving worth the cost.

  • Frostbite

Frostbite is an in-house game engine developed by EA DICE (now managed by Frostbite Labs), designed for cross-platform use on Microsoft Windows, the seventh-generation consoles PlayStation 3 and Xbox 360, and now the eighth-generation consoles PlayStation 4 and Xbox One.
The engine was originally employed in the Battlefield video game series, but was later expanded to other first-person shooters and a variety of other genres. To date, Frostbite has been exclusive to video games published by Electronic Arts.


As stated above, the core components of a game engine include a rendering engine, a physics engine or collision detection (and collision response), sound, scripting, animation and AI, and may include video support for cinematics.


Graphics Rendering

Rendering is the process where an object in 3D space is converted into a two-dimensional image-plane representation of that object.
There are a few rendering methods used to achieve this:

Rasterisation – rasterising is used to render real-time 3D graphics, like games, because it is fast. It works by looking at the thousands of triangles that make up the scene and determining which are visible from the current perspective; with that information it then analyses the light sources.
Rasterising does a good job, but is not able to create the same level of detail as ray tracing.

Ray tracing – a rendering technique that is capable of creating photorealistic images from 3D scenes. It works by calculating the path of every ray of light, following it through the scene until it reaches the camera.

It can create very accurate reflections and refractions, but because it calculates a ray for every pixel of the screen, a lot of data needs to be computed; for this reason ray tracing is not best suited to real-time rendering.
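To make the idea concrete, here is a minimal C++ sketch of the core test a ray tracer performs over and over: checking whether one ray hits a sphere (illustrative only, not how any production renderer is written):

#include <cmath>

struct Vec3 { double x, y, z; };

double dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
Vec3 sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }

// Returns true if a ray (origin o, unit direction d) hits a sphere of
// radius r centred at c. A ray tracer runs tests like this for every
// ray it follows through the scene.
bool hitsSphere(Vec3 o, Vec3 d, Vec3 c, double r) {
    Vec3 oc = sub(o, c);
    double b = 2.0 * dot(oc, d);
    double cc = dot(oc, oc) - r * r;
    double discriminant = b * b - 4.0 * cc;  // a = dot(d, d) = 1 for unit d
    return discriminant >= 0.0;              // real solutions = intersection
}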

Radiosity – a rendering technique that focuses on global illumination and how light spreads and diffuses around the scene. It is best used to recreate natural shading.
This example shows a scene rendered without and with radiosity.

These rendering techniques can be used together to create amazingly realistic scenes that run in real time. In games this is achieved with light maps, often used on the static elements of a game, such as terrain or architecture. This works by baking the lighting data straight into the texture.


Rendering Pipeline – Shaders

A shader is a small program that runs on the graphics card and manipulates a 3D scene during the rendering pipeline, before the image is shown on screen. Shaders allow for different rendering effects and are great for real-time graphics.
Shaders can be vertex or pixel shaders:

Vertex
Vertex shaders are used to modify vertex positions, colours and texture coordinates during the rendering process.
Example of an image before and after vertex shading. Both are the same model, but the second has a shader applied that modifies the positions of the existing vertices.
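Conceptually, a vertex shader is just a small function run once for every vertex. A CPU-side C++ sketch of a wave effect like the one above might look like this (real shaders are written in languages like HLSL or GLSL and run on the GPU):

#include <cmath>

struct Vertex { float x, y, z; };

// Run per vertex: displace it vertically with a sine wave based on its
// x position, producing a rippled version of the original model.
Vertex waveShader(Vertex v, float time) {
    v.y += 0.5f * std::sin(v.x * 4.0f + time);
    return v;
}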

Pixel
Vertex shaders can make changes to existing vertices, but cannot create new ones. A pixel shader is used to calculate effects on individual pixels (generally colour, translucency, fogging and lighting). The most popular use of pixel shaders is normal or bump mapping, which affects how geometry looks when it reacts to light.
This makes models look more detailed without the need to have a high polygon count.

Lighting – this is used to illuminate the object and the scene. It helps to create realism through light positioning, brightness and direction.
In 3D programs we have numerous lights to choose from to illuminate our scene.

Ambient
An ambient light creates a soft, subtle feel in terms of lighting. It is a combination of both direct and indirect light; for example, it can be used to simulate the glow of a lamp (direct) or the lamplight being reflected off a wall (indirect).

Point
A Point light shines evenly in all directions from a small point in space. Point lights can be used to simulate objects such as a star or an incandescent light bulb.

Directional  
A directional light shines evenly but in only one direction. The light rays are parallel to one another and can be used to simulate a long distance light source such as the sun, which is viewed from the earth’s surface.

Textures
A texture is a single file, a 2D static image. It is usually a diffuse, specular or normal map that you would create in Photoshop or GIMP as a tga, tiff, bmp or png file. Textures can be manipulated photographs, hand-painted images or maps baked in an application such as xNormal. You then import these textures into a 3D program and use them as part of a material (you cannot apply textures directly to objects; textures have to be part of a material).

Fogging
This method is used to hide the details of far-away objects in the world and save rendering cost. An example of fogging is in GTA IV: when you look at a distant building your view is limited by fog, but the closer you get, the higher the object's LOD becomes.
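A simple linear distance fog can be computed per pixel with a blend factor like the one below (a generic sketch of the idea, not GTA IV's actual implementation):

#include <algorithm>

// Blend factor between the object's own colour (0) and the fog colour (1).
// It rises linearly from the distance where fog starts to the distance
// where the object is completely hidden.
float fogFactor(float distance, float fogStart, float fogEnd) {
    float f = (distance - fogStart) / (fogEnd - fogStart);
    return std::clamp(f, 0.0f, 1.0f);
}
// finalColour = lerp(objectColour, fogColour, fogFactor(d, start, end))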

Shadowing
Shadowing, also known as shadow mapping, is the course of action an engine takes when rendering shadows onto objects. These shadows are computed in real time, depending on whether there is enough light at a pixel for it to be projected. Compared to 'shadow volumes' (another shadowing technique), shadow mapping is less accurate. However, shadow volumes use a 'stencil buffer' (temporary storage for data as it is moved from one place to another), which is very time-consuming, whereas shadow mapping doesn't, making it faster.
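The per-pixel test at the heart of shadow mapping is tiny. A conceptual C++ sketch (the bias parameter is an illustrative detail, used in practice to avoid "shadow acne"):

// A point is in shadow if something nearer to the light already covered
// its pixel when the scene was rendered from the light's point of view.
bool inShadow(float depthFromLight,  // this point's distance from the light
              float storedDepth,     // nearest depth stored in the shadow map
              float bias) {          // small offset to avoid self-shadowing
    return depthFromLight - bias > storedDepth;
}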

Level of Detail

Level of detail (LOD) refers to the method of reducing the number of polygons used in 3D models based on their distance from the player/camera. This technique increases rendering efficiency by reducing the workload on the PC hardware.
Each model represents the same gun. LOD works by choosing which level of detail to show based on how close the model is to the player/camera. In a game, if we are up close to the gun we would see the version "LOD 0"; if the gun were on the other side of the map, we would see "LOD 3".
This method is great as it reduces the amount of power spent rendering high-quality models that cannot even be seen properly, helping retain an acceptable frame rate.
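The selection logic itself is straightforward; a C++ sketch with made-up distance thresholds:

// Pick which version of the model to draw based on its distance from
// the camera. The thresholds are example values a game would tune.
int chooseLOD(float distanceToCamera) {
    if (distanceToCamera < 10.0f) return 0;  // LOD 0: full detail, up close
    if (distanceToCamera < 30.0f) return 1;
    if (distanceToCamera < 60.0f) return 2;
    return 3;                                // LOD 3: far side of the map
}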

Culling

The term culling refers to not rendering anything that is unnecessary, for optimisation. The overall aim of culling is to find out what can be skipped during the rendering pipeline, as it will not be seen in the end result. There are different methods that can be used:

Occlusion Culling

Occlusion culling is a feature that disables the rendering of objects that cannot be seen by the camera. The process goes through the scene using a virtual camera to build a hierarchy of potentially visible objects; this data is then used at runtime by each camera to identify what will and won't be visible.

Backface Culling

Backface culling is a method used to reduce the number of polygons drawn in a scene by working out which ones are facing away from the viewer. Detecting and eliminating back-facing polygons from the rendering pipeline reduces the amount of computation and memory traffic.
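The test boils down to a single dot product per polygon; a minimal C++ sketch:

struct Vec3 { float x, y, z; };

float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// A polygon faces away from the viewer when its surface normal points
// roughly in the same direction as the view ray towards it, so it can
// be skipped before any further processing.
bool isBackFacing(Vec3 faceNormal, Vec3 viewDirection) {
    return dot(faceNormal, viewDirection) > 0.0f;
}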

Contribution Culling

Contribution culling is the process of removing objects that would not contribute significantly to the image, due to their great distance or small size.

Portal Based Culling

Portal culling is a method by which the 3D scene is separated into areas called cells, joined together by portals: a window in 3D space that allows objects in one cell to be seen from another. Portal culling works best in scenes with limited visibility from one area to another, i.e. a building or cave. When rendering, the camera sits in one of the rooms and that room is rendered normally, but for each portal visible in that room a frustum is set to the size of the portal and the room behind it is then rendered.

BSP Trees

A binary space partitioning (BSP) tree is a standard binary tree that is used to search and sort polygons in N-dimensional space.
A BSP tree works by dividing and organising parts of a game scene, then sorting the parts into a practical formation: the parts of the space are arranged into a structure from which you can obtain information about how they relate to each other.
When playing a game in first-person perspective, such as Doom or Quake, you move around numerous rooms, corridors and halls (see figure below), and as you move around during gameplay the computer has to continuously redraw everything you see as your viewpoint changes. Given that the computer has to redraw the scene constantly, everything must be drawn at a rapid pace, otherwise there would be pauses in the motion and the game would seem jumpy.

There may be a high number of polygons per scene, and in order to redraw the scene the computer has to analyse and decide which polygons should be drawn, as there is no need to draw the ones that are not in view. In this instance a BSP tree is used to sort and store the polygons of the game world in a way that the computer can access swiftly, which results in faster rendering during gameplay.
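A heavily simplified C++ sketch of the structure (real BSP trees also store the splitting planes and classify polygons against them):

#include <memory>
#include <vector>

struct Polygon { /* vertices, texture, ... */ };

// Each node splits space in two; polygons in front of its plane go into
// one subtree and polygons behind it into the other.
struct BSPNode {
    std::vector<Polygon> polygonsOnPlane;
    std::unique_ptr<BSPNode> front;
    std::unique_ptr<BSPNode> back;
};

// Back-to-front traversal: drawing the far side of each split first means
// nearer polygons are drawn last, on top. (In a real tree, which side is
// "far" is recomputed at every node by testing the camera against its plane.)
void drawBackToFront(const BSPNode* node, bool cameraInFront) {
    if (!node) return;
    drawBackToFront(cameraInFront ? node->back.get() : node->front.get(), cameraInFront);
    // ... draw node->polygonsOnPlane here ...
    drawBackToFront(cameraInFront ? node->front.get() : node->back.get(), cameraInFront);
}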

Anti-Aliasing

Anti-aliasing detects jagged polygon edges on models and smooths them out by sampling each edge multiple times and blending the results. Because this adds extra work to every frame, it demands processing power, so more powerful hardware can afford higher levels of it. PC gamers may choose to turn this rendering method off because it often slows down the overall performance of a game.

Depth testing

A depth channel (also known as Z-depth or Z-buffer channel) provides 3D information about an image: it represents the distance of each object from the camera. Depth channels are used by compositing software, i.e. you can use the depth channel to composite numerous layers while respecting the appropriate occlusions. By default, Maya generates an image file with three colour channels and a mask channel.

Depth testing is where occluded pixels are discarded, and this is where the concept of overdraw comes into play. Overdraw is the number of times one pixel location is drawn in a frame; it is based on the number of elements existing in the 3D scene in the Z (depth) dimension and is also called depth complexity. Depth testing is a technique used to determine which objects are in front of other objects at the same pixel location, so we can avoid drawing objects that are occluded.
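In code, a depth test is little more than a per-pixel comparison; a minimal C++ sketch:

#include <vector>

// Keep one depth value per pixel and only accept a new fragment if it is
// nearer than whatever has already been drawn at that pixel.
struct DepthBuffer {
    int width;
    std::vector<float> depth;

    DepthBuffer(int w, int h) : width(w), depth(w * h, 1e30f) {}  // start "infinitely far"

    bool testAndSet(int x, int y, float z) {
        float& stored = depth[y * width + x];
        if (z >= stored) return false;  // occluded: discard the fragment
        stored = z;                     // nearer: keep it and remember its depth
        return true;
    }
};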

Physics Engine

Physics is used to give the game some form of realism for the player. Depending on the game, some need more accurate physics than others; for example, a fighter jet simulation would use more accurate physics than an arcade flyer like 'Tom Clancy's H.A.W.X.'. Since the late 2000s games have been made to look more cinematic, and the use of a good physics engine is instrumental to a game's realism.

Collision detection
Collision detection determines when two or more objects collide with each other; the collision response is the action taken when they do. Every game uses some form of collision detection, though the level of importance it has varies from game to game. One solution for collision detection is the 'bounding box', a rectangular box surrounding your character or object. The bounding box has three values (width, height and vector location), and anything that intercepts this invisible boundary is a sign of collision. It is often a favourite with developers, as it works well for small objects.
Game engines can only use convex shapes as collision boxes. If a model "tapers in", the engine cannot use a concave bounding box; it will instead use multiple convex boxes together.
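The bounding box test itself is cheap, which is part of why it is such a favourite; a C++ sketch:

// Axis-aligned bounding box: minimum corner plus extents on each axis.
struct Box {
    float x, y, z;  // minimum corner
    float w, h, d;  // width, height, depth
};

// Two boxes collide only if their ranges overlap on ALL three axes.
bool boxesCollide(const Box& a, const Box& b) {
    return a.x < b.x + b.w && b.x < a.x + a.w &&
           a.y < b.y + b.h && b.y < a.y + a.h &&
           a.z < b.z + b.d && b.z < a.z + a.d;
}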

Animation System

Path Based Animation

A spline path is a path that is represented by a cubic spline.
We use spline paths in video games to make an NPC's movement look lifelike (it helps if there aren't any sudden direction changes in ambient conditions), or, when making a 3D platformer, to make the character follow a certain path backwards and forwards.

The process of animating one or more objects moving along a defined three-dimensional path through the scene is known as path animation. The path is called a motion path, and is quite different from a motion trail, which is used to edit animations. Path animations can be created in two ways.

The term Animation system refers to various methods that are used at different stages during the process of animation such as Particle Systems, Keyframing and Kinematics.

There are two main techniques that are used when it comes to positioning a character for animation, the first is Inverse Kinematics and the second is Forward Kinematics.

-Inverse Kinematics (IK)

When working with inverse kinematics you can create an extra control structure, known as an IK handle, for certain joint chains such as legs and arms. The purpose of the IK handle is to allow you to pose and animate an entire joint chain by moving a single manipulator.
You initially pose the skeleton by moving the IK handle located at the end of each joint chain (if you move a hand to a door knob, the other joints in the arm rotate to accommodate the hand's new position).
[Diagram: inverse kinematics setup]

On the left, the character is not using IK setups. In the middle, IK is used to keep the feet planted on the small colliding objects. On the right, IK is used to make the character’s punch animation stop when it hits the moving block.

-Forward Kinematics

Unlike posing with inverse kinematics, forward kinematics requires you to rotate each joint individually until you reach the desired position. Moving a joint affects that particular joint and any joints below it in the hierarchy; therefore, if you want to move a hand to a specific place, you must rotate several arm joints to reach that location.
[Diagram: forward kinematics]

Using forward kinematics to animate a complex skeleton is considered strenuous, and it is not ideal for specifying goal-directed motion; it is, however, ideal for creating non-directed motions, such as the rotation of a shoulder joint, and simple arc motions.
Inverse kinematics is more intuitive for goal-directed motion than forward kinematics because you can focus on the goal you want a joint chain to reach without worrying about how each joint in the chain should rotate.
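For a simple two-joint chain (say, an upper and lower arm moving in a 2D plane), the IK pose can even be solved directly with the law of cosines. A C++ sketch, assuming the target is within reach:

#include <cmath>

// Given two bone lengths and a target point (relative to the shoulder),
// find the shoulder and elbow angles that put the hand on the target.
// Assumes the target is reachable: l1 - l2 <= distance <= l1 + l2.
void solveTwoBoneIK(float l1, float l2, float targetX, float targetY,
                    float& shoulderAngle, float& elbowAngle) {
    float dist = std::sqrt(targetX * targetX + targetY * targetY);

    // Interior elbow angle, from the triangle with sides l1, l2 and dist.
    float cosElbow = (l1 * l1 + l2 * l2 - dist * dist) / (2.0f * l1 * l2);
    elbowAngle = std::acos(cosElbow);

    // Shoulder angle: aim at the target, then correct by the triangle angle.
    float cosShoulder = (l1 * l1 + dist * dist - l2 * l2) / (2.0f * l1 * dist);
    shoulderAngle = std::atan2(targetY, targetX) - std::acos(cosShoulder);
}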

Particle Systems

A particle system is a modelling technique: a collection of minute particles that lets you create dynamic, fluid objects such as fire, water, clouds and smoke, unlike objects that have a well-defined structure and a smooth nature.

For each frame in an animation sequence there is a series of steps that need to be followed:

1. A new particle is generated
2. Each new particle is assigned its own set of attributes
3. Any particles that have existed for a specific lifetime are destroyed
4. The remaining particles are transformed and moved according to their dynamics
5. An image of the remaining particles is rendered

Typically a particle system’s position and motion in 3D space are controlled by what is referred to as an emitter. The emitter acts as the source of the particles, and its location in 3D space determines where they are generated and where they move to. A regular 3D mesh object, such as a cube or a plane, can be used as an emitter. The emitter has attached to it a set of particle behavior parameters. These parameters can include the spawn rate (how many particles are generated per unit of time), the particles’ initial velocity vector (the direction they are emitted upon creation), particle lifetime (the length of time each individual particle exists before disappearing), particle color, and many more.

A particle can be given a lifetime in frames once it has been created. Once the lifetime reaches zero the particle is then destroyed. This can also be carried out when the opacity/colour is below a certain threshold or when a particle has moved from the region of interest i.e. at a certain distance.
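Put together, one frame of a bare-bones particle system might look like this C++ sketch of steps 1-5 above (rendering would happen afterwards; the spawn values are arbitrary examples):

#include <cstdlib>
#include <vector>

struct Particle {
    float x, y, z;     // position
    float vx, vy, vz;  // velocity
    int   life;        // remaining lifetime in frames
};

void updateParticles(std::vector<Particle>& particles,
                     int spawnPerFrame, float dt, float gravity) {
    // Steps 1-2: generate new particles, each with its own attributes.
    for (int i = 0; i < spawnPerFrame; ++i) {
        float drift = (std::rand() % 100 - 50) / 100.0f;  // random sideways drift
        particles.push_back({0, 0, 0, drift, 2.0f, 0, 120});
    }
    // Step 3: destroy particles whose lifetime has run out.
    std::erase_if(particles, [](const Particle& p) { return p.life <= 0; });
    // Step 4: move the survivors according to their dynamics.
    for (Particle& p : particles) {
        p.vy -= gravity * dt;
        p.x += p.vx * dt;  p.y += p.vy * dt;  p.z += p.vz * dt;
        p.life -= 1;
    }
}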


Unreal Engine contains an extremely powerful and robust particle system, allowing artists to create mind-blowing visual effects ranging from smoke, sparks, and fire to far more intricate and otherworldly examples. Unreal’s Particle Systems are edited via Cascade, a fully integrated and modular particle effects editor. Cascade offers real-time feedback and modular effects editing, allowing fast and easy creation of even the most complex effects.
The primary job of the particle system itself is to control the behavior of the particles, while the specific look and feel of the particle system as a whole is often controlled by way of materials.

Emitters Panel – This pane contains a list of all emitters in the current particle system, and a list of all modules within those emitters.

Curve Editor – This graph editor displays any properties that are being modified over either relative or absolute time.

Artificial intelligence

Artificial intelligence in games is about creating the illusion of an NPC having realistic reactions and thoughts, to add to the experience of the game.

A common use of AI is 'pathfinding', which determines the strategic movement of your NPCs. In the engine you give each NPC a route to take and different options for how to act if that specific route is not accessible; this also takes other things into account, like your level of health and the current objective at the time. These paths are represented as a series of connected points. Unreal Engine 4 uses a maths algorithm called A*.
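A* itself is compact enough to sketch. The C++ version below works on a 2D grid of walkable cells and returns the cost of the shortest path; it is an illustration of the algorithm, not UE4's actual navigation code:

#include <cmath>
#include <cstdlib>
#include <queue>
#include <vector>

struct Node { int x, y; float g, f; };  // g = cost so far, f = g + heuristic

// Optimistic estimate of the remaining cost (Manhattan distance).
float heuristic(int x, int y, int gx, int gy) {
    return float(std::abs(x - gx) + std::abs(y - gy));
}

// A* over a grid of walkable cells: always expand the open node with the
// lowest f. Returns the cost of the shortest path, or -1 if there is none.
float aStar(const std::vector<std::vector<bool>>& walkable,
            int sx, int sy, int gx, int gy) {
    int h = (int)walkable.size(), w = (int)walkable[0].size();
    auto worse = [](const Node& a, const Node& b) { return a.f > b.f; };
    std::priority_queue<Node, std::vector<Node>, decltype(worse)> open(worse);
    std::vector<std::vector<float>> best(h, std::vector<float>(w, 1e30f));

    open.push({sx, sy, 0.0f, heuristic(sx, sy, gx, gy)});
    best[sy][sx] = 0.0f;
    const int dx[] = {1, -1, 0, 0}, dy[] = {0, 0, 1, -1};

    while (!open.empty()) {
        Node n = open.top(); open.pop();
        if (n.x == gx && n.y == gy) return n.g;  // reached the goal
        for (int i = 0; i < 4; ++i) {
            int nx = n.x + dx[i], ny = n.y + dy[i];
            if (nx < 0 || ny < 0 || nx >= w || ny >= h || !walkable[ny][nx])
                continue;
            float g = n.g + 1.0f;    // each step costs 1
            if (g < best[ny][nx]) {  // found a cheaper route to this cell
                best[ny][nx] = g;
                open.push({nx, ny, g, g + heuristic(nx, ny, gx, gy)});
            }
        }
    }
    return -1.0f;  // no path exists
}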

Another, similar type of AI usage is 'navigation', which uses a series of connected polygons. Similar to pathfinding, the NPC follows these connected polygons, only moving within their space; however, it is not limited to one route. Having the space, and the intelligence to know which objects or other NPCs to avoid, it can take different routes depending on the circumstance.

A fairly new method of AI is 'emergent' AI, which allows the NPC to learn from the player and develop reactions or responses to the actions taking place. Even though these responses or reactions may be limited, it often gives off the impression that you are interacting with a human-like character.

An AI works through a behaviour tree, made up of a root, selectors and sequences. The root is the starting point and sits at the top of the tree (despite the tree being read from left to right). A selector passes the task from the root to the sequences; however, if a node fails, it stops the task altogether and exits through the root. The sequence is what holds the task nodes.
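A stripped-down C++ sketch of the two core composite node types (real behaviour trees, including UE4's, also track a "running" state between frames):

#include <functional>
#include <vector>

enum class Status { Success, Failure };

using Task = std::function<Status()>;  // a leaf task node

// Sequence: run children in order, failing as soon as one fails.
Status runSequence(const std::vector<Task>& children) {
    for (const Task& child : children)
        if (child() == Status::Failure) return Status::Failure;
    return Status::Success;
}

// Selector: try children in order, succeeding as soon as one succeeds.
Status runSelector(const std::vector<Task>& children) {
    for (const Task& child : children)
        if (child() == Status::Success) return Status::Success;
    return Status::Failure;
}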

An example of AI usage in video games is Far Cry 2, a first-person shooter where the player fights off numerous mercenaries and assassinates faction leaders. The AI is behaviour-based and uses action selection, essential if an AI is to multitask or react to a situation. The AI can react in an unpredictable fashion in many situations. Enemies respond to sounds and visual distractions such as fire or nearby explosions and may investigate the hazard; the player can use these distractions to their own advantage. There are also social interactions with the AI, not in the form of direct conversation but reactionary: if the player gets too close or even nudges an AI, they are liable to get shoved off or sworn at, and by extension aimed at. Other social interactions between AI occur in combat or neutral situations; if an enemy AI is injured on the ground, he will shout out for help, show emotional distress, etc.

Other, more advanced AI techniques used in games include neural nets and fuzzy logic.

Middleware

Middleware acts as an extension of the engine, providing services that are beyond the engine's capabilities. It is not part of the initial engine but can be licensed for various purposes, and there is middleware for practically any game engine feature. For example, the physics in Skate 3 were terrible: if you dropped off your board you would bounce or fly unrealistic distances. In that circumstance, for their next game the developers might want to license 'Havok', a well-established and respected physics engine, to help fix the problem. There are other middleware packages you can license to assist your game in many ways, like Demonware, a networking middleware whose sole purpose is to improve your online features.

Sound

Sound in games is vital because it is a noticeable response that can occur from interactions in the game. Another purpose of sound is to add realism through ambient sounds that make your environment more believable to be a part of. For example, if the scene is set in an army camp you will be able to hear marching, guns reloading or being shot, chants, the groaning of injured soldiers etc. You could also include soundtracks that bring out different emotions in the player; for example, Dead Space uses music to shift your emotions from calm to scared in a matter of seconds. Usually game audio is made and edited outside of the engine; however, some engines do include their own audio technology.

Geometric Theory

A 3D shape is a mathematical representation of an object in 3D space, described by its edges, faces and vertices.

A face is a 2D shape that makes up one surface of a 3D shape; an edge is where two faces meet; and a vertex is a point or corner of a geometric shape.
A wireframe model is an edge or skeletal representation of an object.
[Image: wireframe of a 3D model]

3D Coordinate Space and Axes

The 3D space that applications like Maya and Unreal rely on is based on the Cartesian coordinate system. Using this system, space is defined by three axes (x, y and z, representing width, height and depth accordingly).
[Image: 3D Cartesian coordinate system]

Mesh Construction

The group of faces that make up a 3D model is what we call a Mesh.
The number of faces in a mesh is known as the poly count, and the polygon density is referred to as the resolution.
All 3D models are made from primitives that are extruded, bevelled, scaled, sculpted etc. The edges, faces and vertices of a primitive are manipulated in such a way as to create a certain shape.
Primitives are the building blocks of 3D: basic geometric forms that you can use as-is or modify to your liking. Most software packages build them in for speed and convenience.
The most common 3D primitives are cubes, pyramids, cones, spheres and tori, which are building blocks for more complex forms.

VECTARY is a great free online basic modelling software used to create, share and customise 3D models with easy-to-use tools. It is a great starting point to understand how shapes are made and to get a feel of vertices, edges and faces.

As I said above, when making a 3D model, primitives are extruded, bevelled, scaled, sculpted etc. in such a way as to create the desired shape.

Extruding
An extrusion is simply pushing a 2D shape into the third dimension by giving it a Z-axis depth. The result of an extrusion is a 3D object with width, height, and now, depth.

Bevelling
Bevels expand each selected edge into a new face, rounding the edges of a polygon mesh.

Scaling
Use Scale to adjust the overall size of an object. Like other transforms, the results of a scale operation may vary according to the coordinate system, axis constraint settings and pivot point. For example, if the X-axis is the only one active, a scale stretches the object horizontally only. If all three axes are active, scale resizes the object in all directions.

Rotating
Rotate makes an object revolve around the selected axis. The tricky part about Rotate is making sure everything is set up so that the object revolves in the way you want.

Positioning
Positioning relocates objects, allowing you to place shapes and objects anywhere in the 3D world.

Insert Edge-Loop
The Insert Edge Loop Tool lets you select and then split the polygon faces across either a full or partial edge ring on a mesh. It is useful when you want to add detail across a large area of a mesh or when you want to insert edges along a user-defined path.

Box Modeling vs Extrusion Modeling

Box modeling is a technique in 3D modeling where a primitive shape is used to block out the basic shape of the final model, which is then sculpted into its final form. The process uses subdividing to reach the final product, which can lead to a more efficient and more controlled modelling process.
Box modeling is a modeling method that is quick and easy to learn. It is also appreciably faster than placing each point individually. However, it is difficult to add high amounts of detail to models created using this technique without practice.

Extrusion modeling is also a technique where a model is created from a primitive shape, but rather than adding subdivisions, as in box modeling, new geometry is created through extrusion.
These two techniques are usually used together to create accurate and complex 3D models.
There is a range of other modeling techniques used to create 3D objects, including edge modeling, NURBS modeling (based on Bézier curves), digital sculpting, image-based modeling and 3D scanning.

Modeling a LootBox

To model a loot box I started off with a cube (a primitive object), positioned it in the middle of the world, scaled it properly and rotated it to match the world axes. I then added a few edge loops, selected some faces and extruded them. Finally, to add the finishing details, I bevelled the edges. To create this loot box I used the box modeling and extrusion modeling techniques together.

Application of 3D – Theory

Contemporary technology has made great strides, and our culture is marked by many advances. One that is becoming more and more useful is 3D modeling software. This growing field is also becoming more lucrative, with more and more jobs available in it.

These are some industries that use 3D modeling software:

Film (VFX, CGI, Animation)

One of the most obvious places you can see the use of 3D modeling growing is in film. It's clearly useful in special effects, especially for creating environments that never existed before; for example, in the film "Avatar" it would have been really hard to find floating islands to use as a movie set.
[Image: floating islands landscape from Avatar]
Video game and animation companies use 3D scanning of still faces/bodies for realistic human representation. Movie animators have scanned famous actors' facial features to produce lifelike animated characters, especially for animating stunts instead of hiring a stunt double.
3D is also used for 2D/3D animation. Both are really lengthy processes that require patience and skill. An example is the film "Despicable Me", where Universal Pictures had to animate 3D models for the whole movie, frame by frame.
More and more Universities offer courses that teach modeling for films.

Architecture

A few years ago, architects would draw plans and blueprints on paper to present their ideas to a client; this was their form of rendering. In this day and age, these "renderings" are done on computers to add motion and depth, so clients can see a "fly-by" that illustrates all angles of vision (including a bird's-eye or ground perspective view) of the project. Additionally, the views can go inside the structure too. This way, clients know exactly what they're in for on their project.

Simulation

3D modelling is used for educational purposes as well. Educational graphics could be, for example, a detailed 3D model of the human anatomy: BioDigital uses 3D models and interactive animations that allow users to study the human body.
[Images: 3D human anatomy model; Stanford Director of Neurosurgery Dr. Gary Steinberg and Malie Collins, program manager of the Stanford Neurological Simulation & Virtual Reality Center, reviewing the location of an aneurysm before surgery]
Thinking about simulation in particular, doctors now use 3D models in VR more and more to rehearse difficult operations/surgeries and for scientific research.
Geologists and scientists can use 3D modeling to create models that simulate earthquakes and landforms, such as ocean trenches, letting them see the effects of stresses. Additionally, they can simulate motion, like flight patterns, including the various factors that affect them.
Architects also use 3D to simulate whether a building is sturdy enough to hold its own weight, by simulating gravity.

Games (Landscapes, Props, Characters)

Another area where 3D modeling software is evident is in games. Video games are becoming more realistic; the scenes, the props and even the people are starting to look more like actual scenes, props and people. An increasing number of universities offer courses in 3D modeling for video games as well as for film.
[Images: 3D character model and untextured 3D landscape model for games]

Engineering

In engineering, 3D printing has been a slow revolution. It initially started off with businesses but is slowly becoming a consumer revolution. And as 3D printer prices come down, more regular people will use them. 3D modelling is needed to create designs before they go to the printer. The applications of 3D printing are hugely varied. Prototypes can be easily built, replacement parts manufactured and specialist tools can be created. There are endless possibilities for 3D printing, but these are only accessible if the product can be modelled on a computer first.

Publishing

Publishers of textbooks and other illustrated books are making use of 3D modeling more and more. It allows them to show pictures that otherwise they might not be able to get, for various reasons, including access and copyright issues. Sometimes, the illustrations may be fantastical, and they can help show an artist’s version of something that mankind has never seen, like historic events or visions of the future.

Government and Military

Government projects, from road design through to building management and restoration, can be incredibly costly and, as in architecture, it is incredibly important to communicate the vision and scale of a project. 3D has become an excellent tool for doing just that. It is now possible to build and render photorealistic images and animations from civil data. Other tools can even provide real-time hydrology and hydraulic analysis to simulate drainage under varying weather conditions. Military and government agencies are also increasingly using 3D simulation to train police officers and troops for combat engagements and first-response management. Thanks to the development in the realism of gaming engines, military simulations can now be conducted through the same platforms.

Marketing

3D modeling artists can help advertisers and marketers depict their products in the ideal state. It allows companies to render new cars, new product packaging and prototypes at drastic savings. If it doesn’t ‘play’, they can fix it by merely changing the computer model. Additionally, once they have developed the right rendering, they can use that to sell the item before they have to invest capital in production.
Products are being designed in 3D because it gives the designer great perspective, for example they are able to view the product from all angles rather than just a flat image on paper.

Graphics Pipeline

A graphics pipeline is the sequence of steps it takes to create a 2D raster representation of a 3D scene. Once the 3D model has been created, the pipeline is in charge of turning the model into what the computer displays.
[Image: the 3D graphics pipeline illustrated]
Graphics Pipelines are all unique and made up of numerous stages:
Input Assembly – at this stage, the geometry is built so it can be rendered out later in the process.
Vertex Shader – here, code is run that operates on each of the vertices (this process applies shaders).
Rasteriser – this determines which pixels are visible by clipping and culling geometry, sets up the pixel shaders and sorts out how they will be applied.
Culling is where any geometry that does not fall within the view-frustum (the volume representing the view of the screen) is discarded.
Clipping is where any geometry that lies partly outside the view-frustum is clipped and reshaped.
Fragment (Pixel) Processor – here the fragments are shaded to form the final shapes.
Output Merging – the final stage. This is where everything from the previous stages all comes together. The final image is built and sent to the screen.

In a game running at 30 FPS, this process repeats 30 times each second.
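Put as code, each frame conceptually runs the stages above in order. The stage functions below are empty stand-ins for illustration; in reality each stage runs on the GPU through an API such as Direct3D or OpenGL:

// Stand-in stages (empty stubs), named after the list above.
void assembleInputGeometry() {}  // build the geometry to be rendered
void runVertexShaders()     {}   // transform and shade each vertex
void rasterise()            {}   // clip, cull and find the visible pixels
void shadeFragments()       {}   // colour each surviving pixel
void mergeOutput()          {}   // compose the final image for the screen

// One frame of rendering. At 30 FPS, this runs 30 times every second.
void renderFrame() {
    assembleInputGeometry();
    runVertexShaders();
    rasterise();
    shadeFragments();
    mergeOutput();
}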

Examples of graphics APIs that drive such pipelines are DirectX 11 (Windows), OpenGL (open-source) and Metal (iOS).

API

An API (short for application programming interface) is a set of protocols and tools used for building software applications; it is code that allows communication between two software programs. Two main APIs used in the gaming sector are Direct3D, part of the DirectX API, and OpenGL.

DirectX is a collection of technologies first developed by Microsoft in the mid-1990s and released with Windows 95; it has been included in every version of Windows since. This set of technologies includes several APIs designed for use in game development as well as in other multimedia applications, allowing games and applications to talk to the hardware. DirectX is composed of a range of APIs covering 3D and 2D graphics, input, sound and networking, but we're only really interested in the 3D graphics API known as Direct3D. Direct3D is used for PC gaming on machines running Windows as well as on the Xbox, Xbox 360 and Xbox One; this is actually how the original Xbox got its name.

OpenGL, which stands for Open Graphics Library, is an alternative to DirectX and generally offers the same functions to game developers, as it also gives access to the low-level functions of graphics hardware. One of the main advantages of this API is that it runs on many different platforms. It is used on Windows, macOS and Linux as well as on portable devices such as iPhones, iPads and Android-powered phones and tablets; these portable devices use a cut-down version known as OpenGL ES. Tweaked versions of OpenGL have also powered the PlayStation consoles. OpenGL was originally developed by Silicon Graphics, Inc., but since 1992 its development has been overseen by the OpenGL Architecture Review Board (ARB), which is made up of major graphics vendors and other industry leaders such as ATI, Intel and Nvidia.

The reason that APIs such as Direct3D and OpenGL are used is that they make the advanced capabilities of graphics cards available to developers. This includes features such as z-buffering, mipmapping and atmospheric effects. Developers can take advantage of these features without having to program them from scratch, as they are made available in the APIs' common libraries. This saves a lot of time and also ensures that these features will work with a variety of graphics hardware.

3D Development Software

Autodesk Maya – Maya is 3D animation, modelling, simulation, rendering, dynamics and effects software. It allows artists to create and use models, animations, lighting and VFX. Many creators use this software to make video games, such as Deus Ex: Human Revolution.
It is also currently the industry-leading 3D software application. This application is great for 3D animators and is most often used by both the film and gaming industries because of its advanced animation and effects tools. However, it is also good for creating 3D concepts and can be used in architectural design and visualisation.

Some of the pros of this software are its built-in scene assembly and accelerated modeling workflows, which increase productivity. There is a wide range of tools, making complex animations easy to create. Real-time renders allow artists to work in an environment that nearly matches the final output, and finally it encompasses a skeleton rigging system that allows for quick and easy character creation and animation.

Cinema4D – Cinema 4D is a 3D modeling, animation and rendering application developed by Maxon, and is a great application for motion graphics, modeling and texturing. While most large companies tend to use Maya and 3ds Max, Cinema 4D is typically best for single artists or small teams.
Some of the most notable movies incorporating Cinema 4D include Spider-Man 3, 2012, Watchmen, and the animated film 'Cloudy with a Chance of Meatballs', which also used BodyPaint 3D to texture every sequence.
Cinema 4D is considered to be one of the most intuitive and user-friendly of the 3D applications. Setting something up in Maya can be time-consuming and often can be done in a matter of seconds in Cinema 4D.

Some other great features that Cinema 4D offers are MoGraph and Sketch and Toon, which allow Cinema 4D to do things other programs can't without a lot of scripting. MoGraph is a great feature for a fast and easy workflow: it allows you to clone numerous objects, create extruded text, add effects, create motion and more, quickly and easily. Sketch and Toon is a toolset for cel shading, cartoons and technical drawings.
In film, Cinema 4D is mostly used because of BodyPaint and the application's excellent projection mapping capabilities.
Finally, you are able to import and export a variety of file formats, allowing you to integrate it in almost any pipeline.

ZBrush – ZBrush is a digital sculpting tool that combines 3D/2.5D modeling, texturing and painting. The main difference between ZBrush and more traditional modeling packages is that it is more akin to sculpting.
ZBrush is used for creating high-resolution models (able to reach 40+ million polygons) for use in movies, games, and animations, by companies ranging from ILM to Electronic Arts.
ZBrush was developed by the company Pixologic Inc, founded by Ofer Alon (also known by the alias “Pixolator”) and Jack Rimokh. The software was presented in 1999 at SIGGRAPH.

Many 3D software packages come with plugins that can be downloaded (scripts, created mostly by users, that simplify or enhance the workflow).

Constraints

When creating 3D art, especially for games, there are a number of constraints that you need to consider:

Polygon Count
The more polygons that appear within the render view, the longer each frame will take to render. This constraint applies equally to real-time applications such as games and to non-real-time work such as animation or special effects. It is directly linked to the hardware available.

Render time
This is a huge constraint for real-time 3D graphics. For a game to run at 30 frames per second, the hardware needs to render 30 frames every second. This is quite an achievement when you consider how long it can take to render a single frame of a 3D animation.
This frame from Toy Story took 16 hours to render.
This means that render time can be costly in two ways. The first concerns games and performance: if the game drops noticeably below 30 frames per second, the player will become aware of the drop in performance, which will have a negative impact on their experience. The second way, which applies more to pre-rendered projects, is the actual monetary cost: the longer a project takes to render, the more it will cost in equipment, resources like electricity, and manpower.

File size
File size is a constraint for two main reasons. The first is that 3D graphics need to be saved somewhere: onto a disc, a hard drive or somewhere in the cloud awaiting digital download. File sizes need to be kept efficient to make sure they fit onto the media they are designed for. The second relates to the amount of RAM available to a system; for the Xbox 360 this was 512 MB. Within this amount of memory, everything that might be needed at that point of the game had to be read from the disc and stored for very quick access: geometry, textures and shaders, animation, audio, gameplay instructions, as well as any information needed to run the underlying operating system. This meant that the polygon count and textures for the 3D art needed to be kept very efficient.

File Type compatibility
Of course, file types need to be compatible. For example, if you are developing a game in Unreal Engine and modeling your assets in Autodesk Maya, you will have to make sure to export each asset in a file format that Unreal can read properly, for example ".fbx".

Audio in Videogames

Sound

Sound is an accumulation of vibrations that travel through the air or another medium and can be heard when they reach a person’s or animal’s ear.
Sound can be represented with sound waves.

A sound wave is the pattern caused by the movement of energy traveling through a medium (such as air, water, or any other liquid or solid matter) as it propagates away from the source of the sound. A waveform is a depiction of the pattern of sound pressure variation (or amplitude) in the time domain.

-The amplitude, a, of a wave is the distance from the centre line (or the still position) to the top of a crest or to the bottom of a trough. The greater the amplitude, the louder the sound.
-The wavelength, λ, of a wave is the distance from any point on one wave to the same point on the next wave along.
-The frequency, f, of a wave is the number of waves passing a point in a certain time. We normally use a time of one second, so frequency has the unit hertz (Hz), since one hertz is equal to one wave per second.
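Those two properties are easy to see in code. This C++ sketch generates one second of a pure tone, where amplitude sets the loudness and frequency sets the pitch (44,100 Hz is the usual CD-quality sample rate):

#include <cmath>
#include <vector>

// One second of a sine-wave tone: 'amplitude' controls loudness,
// 'frequency' (in Hz) controls pitch.
std::vector<float> makeTone(float amplitude, float frequency,
                            int sampleRate = 44100) {
    std::vector<float> samples(sampleRate);
    for (int i = 0; i < sampleRate; ++i) {
        float t = float(i) / sampleRate;
        samples[i] = amplitude * std::sin(2.0f * 3.14159265f * frequency * t);
    }
    return samples;
}
// makeTone(0.5f, 440.0f) is a quiet A4; doubling the amplitude makes it
// louder, doubling the frequency raises it an octave.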

The way the right score or effect can produce a specific emotional reaction is something film-makers have been capitalising on for decades, and more recently, videogame developers too. Everybody who grew up with games will instantly recognise the opening bars of Super Mario Bros or the distinctive chime of Sonic collecting a ring.
With games now telling more complex stories, we’re seeing more cinematic aspects incorporated into their sound design.

Sound in videogames can affect the player’s emotions / mood in many ways.

  • One of the earliest and most effective examples of sound being employed for psychological effect in games is the use of pentatonic scales. That “happy” sound you get when you complete a stage in Super Mario Bros uses the major pentatonic scale.
    Developers need a way to tell the player when they’re on the right track (and preferably non-verbally, given the international nature of gaming). The major pentatonic scale is a great example of an audible reward cue.
  • Another powerful weapon is the use of non-linear noises (sounds that exceed the normal musical range of an instrument, or the vocal cords of a living creature).
    Example: Resident Evil Soundtrack
  • Adaptive music is also used to great effect in stealth games like Metal Gear Solid, where a shift in tempo indicates a change in the pace of the game. As well as serving a useful purpose, these musical shifts add to the immersiveness of the overall experience, ensuring the sound always corresponds with what’s going on on-screen.
    Example: Metal Gear Solid – Alert Phase
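A minimal sketch of the adaptive-music idea, written with the pygame library (the loop filenames are hypothetical, and a real game would fade over several beats rather than snapping instantly):

    import pygame

    pygame.mixer.init()
    # Two loops recorded at the same tempo and in the same key, so they blend cleanly.
    calm = pygame.mixer.Sound("calm_loop.ogg")
    alert = pygame.mixer.Sound("alert_loop.ogg")
    calm_channel = calm.play(loops=-1)
    alert_channel = alert.play(loops=-1)
    alert_channel.set_volume(0.0)  # start with the alert layer silent

    def on_player_spotted(spotted):
        # Swap which layer is audible when the game state changes.
        calm_channel.set_volume(0.0 if spotted else 1.0)
        alert_channel.set_volume(1.0 if spotted else 0.0)

Because both layers are always playing, switching their volumes keeps the music perfectly in sync.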

Essentially, when developing a game we have to pay great attention to sound design, as it gives the player cues that help them react to the world (e.g. hinting at the presence of an NPC behind a wall or inside a building, or letting the player know that an enemy is rushing at them before it is visible), and it also provides instant feedback on the player's inputs.

Overall, sound effects greatly contribute to the player's immersion. The absence of sound, or bad sound design, will break your game's feel.

Sourcing Audio

Audio can be sourced in numerous ways:

Websites for quality Royalty Free sounds

There are several places where you can find decent sound effects. Sometimes, if you want specific audio samples and tunes, you will have to spend a few bucks (more on that later), but here are places where you'll find great free sound effects to get started.
Pixel Prospector has a long list of royalty-free music and sound effects. The quality may vary at times, as with everything free.

Websites where you can create your own sounds

Bfxr is a great example: its built-in tools let you generate your own sound effects from scratch.

Foley

Foley is the reproduction of everyday sound effects that are added to media to enhance audio quality. These reproduced sounds can be anything from the swishing of clothing and footsteps to squeaky doors and breaking glass.
Foley artists do not create audio for standard special effects, such as explosions or background noise from cars; instead they recreate the finer details that require a high degree of precision: the clop of someone's shoes as they walk across the floor, or the rustle of their jacket as they sit in a chair.

Pros of using Foley in video game sound design are:
– It gives an overall higher sense of natural rhythm
– It adds a unique and exclusive sound texture to the recordings

Cons might be:
– Extremely time-consuming (both recording and editing the sound clips).
– Continuity is vital (if you misplace a prop used for recording, say the pair of trainers used for footsteps, and switch to boots, the sound will not be consistent).
– Finding suitable locations and surfaces to record on in complete silence (an oak-wood floor or a marble floor, for example) is not easy unless you are in a Foley recording studio.

Recording it yourself using instruments/midi

You can also record sound yourself using a MIDI device or musical instruments. This method is the best way to avoid any legal issues, since you own everything you create.

Source from other sound designers

Sourcing from other sound designers can come with copyright issues: you have to ensure you obtain the rights to use their music or sound effects.

Legal Issues

Copyright

Copyright is the assignable and exclusive legal right, given to the creator for a certain number of years, to publish, print, record or film literary, musical or artistic material.
If you have taken a sound or piece of music that someone else has composed and put it into a video game without permission, this can result in a fine or even imprisonment.

Property Rights

Property Rights are the rights that are granted to a composer to release content through his/her own means.
Property Rights are more commonly used for soundtracks whilst ancillary rights allow you to make money by giving others permission to use your content.

Licenses

Licensing is when the owner of a copyrighted piece of work grants exclusive or nonexclusive use of the work for a negotiated fee.
If you, as a composer or sound designer, sell a composition or audio material to a games company, it is theirs to license as they wish.

An example of licensing is when a games company wants to use audio content from a TV show or movie. The games company must ask the TV company for permission to obtain the rights, because if it takes the audio content without permission it can suffer the same fate as a TV company that steals licensed content from another company.

Royalties

A royalty is a payment to an owner for the use of property, especially patents, natural resources, franchises or copyrighted works. A royalty payment is made to the legal owner by those who wish to use the property to generate revenue or for other desirable activities. Royalties are usually expressed as a percentage of the revenue earned using the owner's property, but can be negotiated to meet the needs of a particular arrangement; for example, a composer on a 5% royalty would earn £5,000 for every £100,000 of game revenue. Royalties are common in situations where a creator or original owner sells their product to a third party in exchange for a share of the future revenue it may make.

The games company, and sometimes the sound designer and composer, will receive royalties and bonuses if they sign a talent release contract.

Audio Limitations

Although the quality of sound has improved in games over the years, audio limitations still exist.
Some of the reasons for these limitations include:

Sound cards – Older or less advanced sound cards produce lower-quality output, and every card has a limit on the number of audio streams that can be played through it at any one time (a common workaround is sketched below).
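A common workaround is a priority system that, when every stream is busy, steals the least important voice. A minimal sketch (the cap of 32 voices is a hypothetical figure, and a real engine would also free voices as sounds finish):

    MAX_VOICES = 32  # hypothetical cap on simultaneous audio streams

    active = []  # (priority, sound_id) pairs currently playing

    def play(sound_id, priority):
        if len(active) >= MAX_VOICES:
            active.sort(key=lambda voice: voice[0])  # lowest priority first
            if active[0][0] >= priority:
                return  # everything playing is more important; drop this sound
            active.pop(0)  # steal the least important voice
        active.append((priority, sound_id))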

Processing power – If too little processing power is available, or too much of it is devoted to sound, other parts of the game suffer. For example, too high a sample rate can eat into the game's processing budget, making it run slower. Audio can also take up a lot of RAM and, if not properly managed, can have a negative impact on the game's performance.

Storage devices – Storage devices, such as CDs and Blu-Ray discs have limitations on the amount of data that can be stored. In the case of computer games, all the music and sounds must be stored with the rest of the game – 3D models and environments, for example – limiting the amount of space available for it.

Speakers & Headphones – Depending on the output quality of the speakers or headphones, the sound produced may not be as crisp or accurate as the composer intended. Older speakers may sound grainier than newer ones, for example.

Optimisation of audio for video games

Audio can be optimised for video games in numerous ways:

  • Trimming long tails – cutting audio clips short, removing stretches of useless silence that would otherwise take up memory.
  • Combining assets – packing several short clips into one file so the engine has fewer assets to load.
  • Converting short files – short audio clips like coin-pickup sounds can be converted from stereo to mono to reduce file size, with no big impact on quality for such a short clip.
  • Compressing long files – there are two types of compression: lossy and lossless.
    – Lossless file formats: WAV, AIFF, SMP
    – Lossy file formats: MP3, VOX
    Compressed sound is easier to store and takes up less disk space; the trade-offs are longer loading (the audio has to be decompressed) and, for lossy formats, lower quality. (A sketch of these optimisations follows this list.)

Bit Depth and Sample Rates
Audio quality depends on the bit rate, sample rate, file format and encoding method.

Bit rate describes the audio quality of a stream, measured in kilobits per second (kbps): the number of bits encoded, transmitted or received per second. A higher bit rate (with a higher sample rate) requires more bandwidth and produces better audio quality; a lower bit rate means a smaller file and less bandwidth, with a drop in audio quality. For good-quality music, a bit rate of 64-128 kbps is usually preferred.

Sample rate is the number of samples per unit of time, measured in hertz (Hz). A sample is a measurement of signal amplitude at an instant, so the samples together describe the signal waveform over time.
The higher the sample rate, the closer the captured signal is to the original analogue signal, and the better the audio quality; the file size grows with the sample rate.

Bit depth refers to the number of bits in each sample, and it determines the maximum signal-to-noise ratio. Bit depth may be 16-bit, 24-bit or 32-bit; audio CDs use 16-bit.

Sound can be recorded in either analogue or digital form; when it is put onto a computer, however, it is stored digitally.

Calculating File Size

Bit rate (kbps) = sample rate (kHz) × bit depth (bits per sample) × number of channels
kbps / 8 = KB per second (one kilobyte is made of eight kilobits)
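For example, uncompressed CD-quality audio works out like this:

    sample_rate = 44100  # samples per second (Hz)
    bit_depth = 16       # bits per sample
    channels = 2         # stereo

    bit_rate = sample_rate * bit_depth * channels  # bits per second
    print(bit_rate / 1000)          # 1411.2 kbps
    print(bit_rate / 8 / 1000)      # 176.4 KB per second
    print(bit_rate / 8 * 60 / 1e6)  # ~10.6 MB per minute of audio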


Sound Effects for the Optical Maser

The Optical Maser (my weapon) has two firing methods. One is a laser shooter that charges up and fires a laser beam. The second is a standard high-impact bullet firing method; a really loud bang will be the main sound you hear from this.

Components / Sounds that will compose the gun’s firing sound:

High Impact Firing:

Loud Firing Bang
Retracting/unfolding sounds — metallic clicks, servos, electronic sounds
Activation sensors (safety on/off) — switch between laser blaster and high impact

Laser Blaster:

Charging – often used for big, slow-firing guns
Energy discharge — this is one of the key elements in a weapon. It can be any kind of matter or a heavy projectile, depending on the concept; the possibilities are truly endless.
Heat management — sounds of overheating, overloading upon discharge, failure

How to record Objects to achieve weapon sounds

Gunshots:
Recording balloon pops and similar impacts can yield good source material for creating convincing gunshots, since we do not have access to real firearms.
A simple and safe way to record electricity is to record a buzzing TRS cable (that sound of plugging your guitar into an amplifier).

Mechanics:
For the mechanical sounds, depending on the weapon's size, almost everything that clicks or rattles is useful: small things like plastic toys, battery cases opening and closing, mouse clicks, kitchenware, etc. To match a bigger gun, it can be larger objects such as metallic tubes and door handles. Recording motors and servos such as a drill, a sander or an electric toothbrush can suit the automatic movements in a gun. Sounds of various pneumatics and hydraulics can also contribute to the design of gun mechanics, and recordings of camera components can be used for weapon aiming/zoom sound effects.

Charging:
The formula for a basic charging sound is to take a small set of oscillators with chosen waveforms whose pitch rises with an envelope, then detune them into atonality. You can modulate the tone and add a pulse via amplitude or ring modulation, and sweeten it all up with chorus, flanger or even a granulator. Charging sounds tend to have longer attack and shorter decay times, the opposite of gunshots.
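As a rough sketch of that formula using only numpy and Python's built-in wave module (every number here is a guess to taste, not a recipe from a real sound designer):

    import numpy as np
    import wave

    SR = 44100          # sample rate in Hz
    duration = 1.5      # seconds
    t = np.linspace(0.0, duration, int(SR * duration), endpoint=False)

    # Three slightly detuned oscillators whose pitch sweeps upward over time.
    sweep = 1.0 + 3.0 * (t / duration) ** 2
    tone = sum(np.sin(2 * np.pi * f * sweep * t) for f in (220.0, 223.0, 227.0))

    tone *= t / duration                # long attack: volume ramps up
    tone *= 0.3 / np.max(np.abs(tone))  # normalise, leaving plenty of headroom

    with wave.open("charge.wav", "wb") as out:
        out.setnchannels(1)
        out.setsampwidth(2)  # 16-bit samples
        out.setframerate(SR)
        out.writeframes((tone * 32767).astype(np.int16).tobytes())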

Inspiration

Games that inspire the sound of my gun are Overwatch, Doom and Call Of Duty Zombies.

Various Weapons from Doom have a great sound design that matches the Sci-Fi feel of the weapons, including blasts, energy discharge, charging etc.

The same goes for the RayGun Mark2 from Black Ops 2, which is an iconic weapon.
Since I based my gun on Overwatch, a great example of a weapon with a similar look and sound design to mine is Zarya's weapon.


Considerations

There are some technical, legal and safety considerations that I will have to keep in mind while making my gun's sound design: using only royalty-free sounds rather than pirated ones, and using only safe tools to create sounds, not dangerous methods like poking a fork into a plug socket to record the sound of electricity crackling.

Optical Maser Sound Design Process

As I said above, my gun's sound design will be made up of:
– Loud firing bang
– Metallic clicks
– Activation sensors
– Charging
– Energy discharge
– Overheating sound

Production Log
To start, I visited a site called "VideoGameCaster" to find royalty-free sound effects that I could use. Luckily I found a few that fit my gun's theme and sounded like what I wanted.

I could not find any sound effect for a gun "zoom" when charging an impact blast, so I had to create it myself. I recorded the sound of me adjusting the focus on my DSLR lens, imported it into Audacity, trimmed it, then added reverb and a lot of bass to "mute" it. I also made my own bass impact shot by recording my hand banging on a table and adding a lot of bass to it.

These are my final sounds for my gun.

Incorporating sounds in game


After I had collected all of the sounds I wanted, mixed them together in Adobe After Effects into one audio clip and exported it as a .wav file, it was time to import it into Unreal Engine 4 and make it work with my gun.
After importing the audio clips into my level, I synchronised them with each corresponding weapon animation: I opened the animation Blueprint of, for example, the shooting animation, added a "Play Sound" notify at the right frame and selected the corresponding sound in the Details panel. I repeated the same process for the other two animations.

Evaluation
The whole process of designing my sound effects and implementing them to work with my gun was straightforward. I did not encounter any problems this time, but next time I would definitely concentrate more on synchronising the sound to match the animation perfectly.

Working to a Brief – Hadleigh Castle Project

Hadleigh Castle Brief

What is a Brief

A creative brief is a short, written document used by project managers and creative professionals to guide the development of creative materials (e.g. drama, film, visual design, narrative copy, advertising, websites, slogans) to be used in communication campaigns.
A media brief can be thought of as a checklist that helps media planners prepare a media plan for a client organisation. Media planning is not an isolated function but an integral part of overall campaign planning, so a media planner needs a thorough knowledge of all the variables.

A good media brief should ideally include the following:

-Marketing information checklist (marketing objectives and proposed strategies, product characteristics, distribution channels, brand category and expenditure level).

-The objectives (This must clearly indicate whether the objective is to introduce a new product, increase awareness about the existing brand, reinforce the current position or improve or enhance the company’s reputation).

-Product category information (This helps in assessing the strengths and weaknesses of the brand and also helps in setting achievable targets).

-Geography/location (The media brief helps the planner in knowing his media markets).

-Seasonality/timing (Information regarding seasonality of the product is an important consideration for the media planner).

-Target audience (A profile of those who buy the existing category, as well as those who buy competitive brands, is a very important consideration for the media planner).

Brief Structure

There are many ways to structure a design/idea brief. Here’s one recommended structure:

-The goal. Identify the core nugget that explains what your pitch is trying to achieve.

-The idea. This is a version of your 5 second pitch.

-The problem. This is a modified version of a pitch set-up, as it provides a framework for the idea.

-The audience. Who will this idea appeal to? What is the profile of the potential customer? What is the profile of the non-customer?

-The approach. How does the idea work? Explain, at a high level, the outline for how the idea will be implemented.

-Challenges & Unknowns. What are the big open issues that need to be resolved, or are questions a reasonable person would ask?

Structures of different Briefs

Contractual Brief (example A)

A Contractual Brief is a contract created by the client, who gives the brief to a production house or media company. This type of brief cannot be negotiated like a Negotiated Brief, as the client sets all the guidelines for the production. A contractual brief spells out the rules between the client and the worker: it tells the worker exactly what the client wants from the job.
A Contractual Brief has its advantages, as the media company will know and understand what the product is and how it should be created. However, it does mean that the production company has no input into how the production should go, or into any improvements that could be made.

Negotiated Brief (Example B)

A Negotiated Brief is discussed between the production company and the client, usually at a client meeting where the ideas, the principles and how the product will be produced are worked out. The discussion does not stop until everyone reaches an agreement. This has an advantage over a Contractual Brief: the production company has a form of input into the brief, meaning that any proposals from the client that do not seem appropriate or suitable for the production can be challenged and refined.

When comparing a Contractual Brief to a Negotiated Brief it is clear that depending on the product, both have their advantages. For a large-scale production a Contractual Brief is best suited as it will be more detailed and directed towards the outcome of the final product. Whereas a negotiated brief is best suited for a smaller production where the client may need some pointers and improvements, which will make the overall product a higher quality.

Formal/Informal Briefs

When writing any style of brief there is a significant difference between a Formal Brief and an Informal Brief. A Formal Brief is discussed in great detail between the client and the production company, and includes meeting minutes and sign-off sheets throughout the process. An Informal Brief is discussed in little detail: mainly the client handing a brief to the company and having a quick meeting. An Informal Brief is mainly used when the project is a small idea or comes from a small-scale client, whereas a Formal Brief is used in a large-scale production for a large client. Comparing the two, a Formal Brief is the more effective style, as it includes a significant amount of detail, which means the production will be of a higher quality.

A formal brief is very easy to read and helps the client understand the different aspects of the contract: because everything is written down, the contract is not complicated to follow, which means the job can get done quickly and efficiently.

An informal brief has no strings attached, but the issue is that it is essentially just a meeting, which means all the details are agreed verbally rather than set down in a written document.

Commission Brief

A Commission Brief is a standard brief with an extra increment that allows the client to apply to have the money refunded after the product is completed. This is a form of commission; however, if the client does take the refund, the product can no longer be used for the purposes desired, meaning that a new product or service will have to be made. When somebody commissions a brief, they hire a separate, independent media company to create a product for them; that company formulates its own research, proposals and ideas, overseen by the commissioner.
This works well for the independent company: they are paid for doing the job and may also receive a share of the profits, leaving them with a healthy return. The issue is that the commissioning client does not take such an active role in the decision making for the product, and so could end up with something very different from what they expected.

Tender Brief

A Tender Brief is where a client produces a brief and distributes it to multiple production companies. Each production company can make an offer to the client and explain any ideas it may have, allowing the client to choose the company best suited to creating the product. It is very similar to a job advert, whereby the client publishes an advert showing that they need a media product made. An advantage of this type of brief is that the client gets access to a number of different ideas that maybe were not initially thought of.

Cooperative Brief

A Cooperative Brief is where a client hires multiple companies, or where one company approaches another for help with a single production. This is mainly used on large-scale productions such as films, advertisements and animations. It is a great method of collating multiple ideas and also helps to increase workflow: with more people working on the same project, it allows an earlier completion time and a higher standard of work.

Competition Brief (Example C)

A Competition Brief is when an idea is created and a brief is distributed to the public, in order to get the public to make the product in line with the brief. The aim is to get many products made for just the cost of the competition's prize, most typically a lump sum of money, or a contract under which the winner goes on to make the rest of the products the client requires. An advantage for the company is that these types of projects become available through competitions or advertisements, and only the winning idea is given a prize or paid in cash, which can work out cheaper than any other route; the company can also take on other work on the side, making more money as the business expands. However, it means that of all the entrants who put work into their productions, only one gets chosen and paid; the others have made their projects for nothing, without prizes or payment.

(Examples A, B and C, showing a contractual, a negotiated and a competition brief, appear here as images.)

Negotiating the brief

Working to a brief is all about client satisfaction: if the client views the result positively, you've succeeded.
First, the client has a problem, and the client or a brief agency writes a brief to resolve it; this is then given to a company to produce.

Consultation with the Client

Depending on the type of Brief that the Client and Production Company are working with, there will always be a Consultation with the Client. This will be either a formal or informal meeting between the client and the production company. The meeting will be used for discussing the brief and the ideas within the brief.

Degree of Discretion in Interpreting the Brief

Like negotiating a brief, there is also a degree of discretion that occurs when dealing with a client. For example, when you receive a brief you have to interpret the ideas and think about them in a technological sense. This means thinking of how you would create the production, how you would distribute the production and also how the client wants it to look. This is where a meeting or consultation is used in order to discuss the ideas between the client and production company.

Constraints (Legal, Ethical and Regulatory)

When dealing with any brief, or making any form of media product that will be seen by the public, it is crucial that the legal, ethical and regulatory constraints are thought through deeply. This means making sure that the ideas and themes within the product are not in any way racist or homophobic and do not cause emotional or physical harm to any group of people. Legal constraints also have to be considered, as there are many government-created laws and regulations that have to be abided by, such as the Data Protection Act, the Advertising Standards Authority, Ofcom and the EU Competition Commission. It is vital that any production has a detailed report of how the product complies with the relevant laws and regulatory bodies. Comparing the different constraints that can affect a media product, legal constraints are the hardest to avoid, as they require a true understanding of all the laws and regulatory bodies that can have an effect on the product being made, whereas ethical constraints can be worked on and avoided, for example by having people from different ethnic groups working on the one product to make sure that no beliefs or themes go against them.

Amendments to Proposed Final Product

During the production process there are many stages at which ideas can be changed or developed. This is why production companies use sign-off sheets: a form of security stating that once something has been approved, the client cannot change it. This makes the production process easier and more efficient for the production company. Without one, if the client wanted to change the product halfway through production, the company would have to do so; a sign-off sheet prevents this from happening.

Amendments to Budget

This happens when new costs have arisen or prices have changed; the new costs have to be discussed with the client to ensure they are happy to pay the extra amounts.

Amendments to Conditions

As with an amendment to the final product, changing the conditions of the product will have an effect on the final product.

Negotiating the Fees

At the first consultation with the client, it is vital that all of the costs involved in making the product are discussed thoroughly. First the client runs through all of the costs they are willing to pay; then the production company presents a detailed list of all the costs involved, in order to settle on a final price for the product.

Opportunities

-Opportunities for Self Development
Working with a brief allows both the producer and the client to develop new skills, ranging from learning new communication techniques, to development skills in producing the product, to increasing the range of ideas one can have about different products.

-MultiSkilling Opportunities
When working for any Production Company or as a Sole Trader there will always be more than one project on the go. This means handling and following multiple briefs at any one time, and therefore helps to improve any multi-tasking skills that you may have.
This can be developed by working on two different briefs at once, such as an animation brief and a movie brief, or a contractual brief alongside a negotiated brief. This benefits the producer, who learns to work on each project effectively and to switch between projects during the working day; that skill is crucial when working in any media industry company.

-Contributions to a Project Brief
When working on any brief, the creators of the product will normally be able to have their own input into the project idea (although this is not possible with a Contractual Brief). The producer's input allows the idea to be developed further, and discussing different methods of filming and planning can rectify flaws within the idea. This can lead to bigger and higher-quality products being made, which not only benefits the producer but also pleases clients, making them more likely to recommend the company to other potential clients.

Skills required to fulfill a Brief

For any person who will be working with a Brief, whether it is a Negotiated Brief or Contractual Brief, it is crucial that the person dealing with the product and clients has the skills that will enable them to work effectively and appropriately towards the brief.

-Reading a Brief
When the company or producer receives a new brief, it is crucial that the context of the brief is fully understood. Firstly, the idea has to be understood, as without an idea the product cannot be made. When starting any project, it is important to read through the brief you have been set, because it expresses what the client wants; with this information you are more likely to create something appropriate for its purpose.

Research

Hadleigh Castle is a ruined fortification in the English county of Essex, overlooking the Thames Estuary from south of the town of Hadleigh.
Built after 1215, during the reign of Henry III, by Hubert de Burgh, the castle was surrounded by parkland and had an important economic and defensive role. The castle was significantly expanded and remodelled by Edward III, who turned it into a grander property, designed to defend against a potential French attack, as well as to provide the King with a convenient private residence close to London.

Built on a geologically unstable hill of London clay, the castle has often been subject to subsidence; this, combined with the sale of its stonework in the 16th century, has led to it now being ruined. The remains are now preserved by English Heritage and protected under UK law as a Grade I listed building and scheduled monument.

History

In 1215 Hubert de Burgh was gifted the manor of Hadleigh by King John. Hubert at the height of his power transformed it into what is now Hadleigh Castle.
By 1239 he was forced to surrender his lands to the Crown, which led to the castle's disuse for nearly 100 years, until Edward II used it as a royal residence and it was built up further to complete Hadleigh Castle.

Edward III was the first king to see the strategic importance of Hadleigh Castle – it was ideally situated as a base for defending the Thames estuary against French raids during the Hundred Years War.
Edward’s claim to the French throne had led to war with France. The need for a more systematic defence of the Thames estuary led the king to refurbish and extend Hadleigh Castle and to build Queenborough Castle on the opposite Kent shore.

Hadleigh became a favourite retreat for the ageing king. There are excavated foundations of the most important part of the castle – the great hall. It had a serving room at the end and beyond it a private withdrawing room, or solar.

Today

The Salvation Army gave the castle to the Ministry of Works in 1948, and it is now owned by English Heritage, classed as a scheduled monument and a Grade I listed building.
The castle is still surrounded by the 19th-century Salvation Army farm, and beyond that by Hadleigh Country Park, owned and managed by Essex County Council and a Site of Special Scientific Interest with special regard for invertebrates. In 2008, Hadleigh Farm, close to the castle, was announced as the venue for the mountain biking competition in the 2012 Summer Olympic Games.

What is left of the Castle

Surrounding Landscape

Interactive Signs (used for narration)

Example Textures

What we think the castle looked like

Example Buildings of the year 1300


Example Furniture of the year 1300

Castle Sketch with Dimensions

Analysing the Castle and its rooms

Barbican

A barbican is a fortified outpost or gateway, such as an outer defense to a city or castle, or any tower situated over a gate or bridge which was used for defensive purposes. Usually barbicans were situated outside the main line of defences and connected to the city walls with a walled road called the neck. Fortified or mock-fortified gatehouses remained a feature of ambitious French and English residences into the 17th century.

Stables

A stable is a building in which livestock, especially horses, are kept. It most commonly means a building that is divided into separate stalls for individual animals.

Bailey

In fortifications, a bailey or ward refers to a courtyard enclosed by a curtain wall. In particular, an early type of European castle was known as a Motte-and-bailey.

Solar

The solar was a room in many English and French medieval manor houses, great houses and castles, designed as the family's private living and sleeping quarters. Within castles it is often called the 'Lord's and Lady's Chamber' or the 'Great Chamber'.

The word 'solar' derives from the Latin 'solaris', meaning sun; the solar was situated in the part of the building with the brightest aspect.

Postern Gate

A postern is a secondary door or gate in a fortification. Posterns were often located in a concealed location which allowed the occupants to come and go inconspicuously. In the event of a siege, a postern could act as a sally port.


Textures

Deadlines
This project's deadline is the 30th of March 2018.

Team Roles and Bio:
-Giacomo Verri: Leader & 3D Modeller
-Jack Hallett: Researcher & Artist
-James Lewis: Programmer & Texture Prep
-Jon How: Voiceover & Producer

Time Scales:
We have divided this project into smaller tasks so we can easily plan ahead and prioritise.
-Graphics and drawings finished by 23/02/2018 (2 concept drawings, interactive signs, seamless textures).
-Castle model (primary) finished in a 20-day span, including the landscape.
-Decor models (secondary and tertiary) started after the main castle was finished, aiming for completion in one week, by 20/03/2018.
-With 10 days left we aim to record the script and implement it into the level, along with interactive objects like the drawbridge and opening doors.
-By the 27th we aim to have finished everything, leaving just visual dress-up before recording the flythrough video.

Resources
The resources we will need include software to develop the interactive experience and to model and design assets for it, plus plenty of concept work on paper and photos taken on site.
We decided to use Unreal Engine and Maya/Cinema 4D to develop and model the experience because they are intuitive and easy to learn. We also decided to use Illustrator to design the vector graphics (as they can be rescaled freely) and Photoshop for texture preparation. We chose these programs because we have previous experience with them, making them the best choice: they are readily accessible and easy to use.

There will be some obstacles that we will have to overcome as a team. One of the main obstacles will be the coding, as we don't have a lot of experience with it; we are still beginners, learning as we go. Another obstacle is graphics creation: we cannot yet produce original images quickly, and time is not a luxury in an industry where deadlines are strict.

Demographics
The demographic for the castle demo will mainly be people interested in the history of Hadleigh Castle: males and females of diverse backgrounds and education levels, aged 14 and up. This means the experience will need to be interactive and easy to use.

Production Log

Here is our production log.

YouTube Flythrough

Here is a link to the YouTube flythrough

Tour Download

Here is a link to download the interactive tour

Evaluation
Overall the project turned out well. We had a few problems here and there, but we managed to overcome them.
When exporting the models to .fbx format, the tower model kept exporting with holes in it; we fixed this by inverting the normals on a few faces of the model.
As expected, coding was not easy, but we teamed up in a group and solved the main issues we had with opening doors, ladders and audio implementation.
Lastly, a teammate left us, so we had to quickly re-prioritise each role and focus on more tasks at once, which was hard, but we got through it in the end. We were not able to finish the YouTube flythrough in time for the deadline, so we took an extra week to finish it.

Overall, with a little more time we could have improved all of the models and texturing work, as well as perfecting the programming and making it bug-free.

Muzzle flash design

Research
I want my muzzle flash to look and feel like it belongs in an Overwatch-style game: cartoony and exaggerated.

-Primary impact bullet firing muzzle flash:

-Secondary laser beam effects and muzzle flash:
    
My muzzle flash design will not contain gruesome elements; on the contrary, it will resemble energy beams, fitting the exaggerated cartoon style of my game and appealing to its audience.

Analysing muzzle flashes

Muzzle flashes are made of two main parts, so to recreate one I will have to make two graphics: one that shoots forward, and another that flares outwards.

Concept generation

The primary firing method will have a significantly different muzzle flash from the secondary one. The primary flash will resemble a standard cartoon-style burst of energy, made up of three main graphics: the main blast, the small exploding particles near the barrel of the gun, and the rings of smoke.
The first will be made in Photoshop, the other two in Unreal Engine’s particle system.

In Photoshop, I used the pen tool to trace my concept of the different parts that make up the muzzle flash: in this case, the main blast, the small lightning particles and the back blast that will be seen from the front view.

The secondary firing method will resemble a cartoon-style energy beam.
Because it is a one-hit kill, the gun will have to charge for half a second before firing, giving the impression of a high-power shot. To create the beam I will design a few rectangular graphics, apply them to planes, and offset the timing in Unreal Engine to match the firing animation.

In Photoshop, I created two beams with the gradient tool; these will be used for the main laser beam and for the smaller beams. For the extra rotating elements that show during the beam's charging animation, I created "HUD" elements by deleting slices out of a cylindrical, doughnut-shaped layer.

Logo Design

The logo is the face of any brand so its design is extremely important. Creating an effective visual representation of a brand requires much more than just graphic design.

Examples of good original logos related to video games include:

The GameCube logo is certainly a cleverly designed logo: the outer lines form the letter 'G' while the space in between forms the letter 'C', both standing for GameCube.

George Opperman designed the iconic "Fuji" Atari logo back in 1972. It represents a stylised letter 'A', standing for 'Atari'. He was inspired by Atari's popular game at the time, Pong: the two side pieces of the symbol represent two opposing video game players, with the centre line of the 'Pong' court in the middle.

One of the most well-known and loved game consoles, PlayStation has the sort of logo that sticks in the mind and makes consumers remember it. The original PlayStation logo, unveiled in 1994, featured the familiar interlocking PS symbol. Interestingly, the company had no fewer than 20 versions of the logo to choose from, including three oval shapes coloured yellow, red and blue, and several versions built around stylised letters 'S' and 'P'.

Nintendo's logo uses the "racetrack" style of typography, and it features all the elements that make the Nintendo logo so recognisable.

The insignia of the Assassin Order, though varying slightly over different time periods and countries, held essentially the same shape and style: that of an eagle's skull. Each of its variations represented a different sect of the Order, and the insignia was also part of the armour of leading Assassin figures in a number of time periods.

Considerations when designing a logo

There are certain issues you must consider when designing a logo. These are:

Legal

-Copyright / Trademark issues
This is the first issue that business owners face. Without the copyright to their own logo, they cannot legally claim infringement if a third party uses the logo as its own. Copyright ensures that your business identity is secure and cannot be infringed upon by anyone.

-Font Licensing / Design Patents
On the topic of font licensing: you cannot send clients any fonts unless the user agreement specifically allows it. Fonts must be purchased separately per user; otherwise it is a violation of the end-user license agreement between the logo designer and the typeface designer (you may legally copy a font for your own use, but you cannot distribute it).

Ethical

-Offensive Logo Design
While setting the specifications for your company's logo design, pay attention to all the details, such as the colours, words, symbols and fonts. Ensure these are not offensive in any way and do not clash with other companies' products or brands. Furthermore, check whether the symbols and other characters carry meanings other than the ones intended.


(examples of bad logo designs)

Technical

Because the logo is the face of a brand, it is important to follow these rules when designing one:

-Draw sketches. These are an important first step in designing an effective logo.

-Create Balance. Balance is important in logo design because our minds naturally perceive a balanced design as being pleasing and appealing. Keep your logo balanced by keeping the “weight” of the graphics, colours, and size equal on each side.

-Keep it simple. The simpler the logo, the more recognisable it will be (e.g. the Nike swoosh is an extremely simple logo and also one of the most recognisable in the world). Work the design down to its essentials and leave out all unnecessary elements.

-Use colour to your advantage. Use colours near each other on the colour wheel (e.g. red, orange and yellow hues for a "warm" palette). Don't use colours that are overly bright and hard on the eyes, but do experiment with contrasting colours too (e.g. using Adobe's colour wheel tool).

-Go Easy on Effects. Of course, playing around and seeing whether they enhance a logo is fine, but just remember that simplicity is key.

-Use other designs for inspiration only. Don't copy other designers' work!

Vanguards logo concept design

Vanguards is an Intergalactic Police Force that protects the Multiverse from danger and terror.
The inspiration for my logo for Vanguards is a police badge.
Policemen have badges; these can be simple silver ones or detailed to represent the city they serve/protect.

Logo Concept – Final

(This logo was created in the span of one day, including idea generation.)

I started my logo design process by sketching out my idea as individual assets, beginning with the "V" for Vanguards. Having just a plain letter on a badge looks boring, so I gave it some character: I chopped a corner off the "V" to give it a sharp edge, and incorporated the staple red and blue police-siren colours into the letter.
Next I added a city at the bottom of the badge, taking inspiration from a city on an existing police badge. To show that Vanguards are intergalactic, I also added a “Bat Signal” inspired beam shining from another planet, aiming at the “V”. Since it would have looked weird to have a “V” randomly in the sky, I added wings to the logo because they make nearly everything look heroic.

The program I used to create this logo was Adobe Illustrator. I chose Illustrator over Photoshop because I wanted the graphic to be vector-based (infinitely scalable) rather than raster (pixel-based). I found it fairly easy, since the transition from other drawing programs was straightforward: all the common tools, like the brush, pen and eraser, are present. The process itself went smoothly too; I encountered no difficulties, as it was not my first time using the program, and once my idea was drawn out on paper it was just effortless tracing from there.
The Vanguards logo was exported from Illustrator as a ".png" image as opposed to a ".jpg", because PNG uses lossless compression, keeps the image high quality and supports an alpha channel (transparency).
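The difference is easy to demonstrate with the Pillow imaging library (a sketch; the filenames are hypothetical):

    from PIL import Image

    logo = Image.open("vanguards_logo.png").convert("RGBA")
    logo.save("logo_export.png")  # lossless and keeps the transparent background
    # JPEG has no alpha channel, so the transparency has to be flattened away:
    logo.convert("RGB").save("logo_export.jpg", quality=85)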

Reviewing the final piece, my critiques of this graphic and my main tips for the next logo I create are:
-using a variety of brighter colours
-working on the shine/reflection of metal, which is quite hard to recreate
-adding bevel effects, to make the logo pop out more and give the overall badge a more 3D shape and look.

Loot Boxes in videogames

Loot boxes are digital grab bags that players spend real or in-game currency on. The trick is that you never know what's inside, and this has become a major issue in the world of video games.


(A combination of visual and aural factors can make it feel like Christmas)

The issue became national news prior to the release of EA’s Star Wars Battlefront II, where gamers felt they were being forced to gamble / pay-to-win to get the most out of the game.
It all started with downloadable content (DLC). In its simplest form, downloadable content is some form of add-on content for a released game that you download to use in the game you purchased.
After the model of making money with tiny cosmetic in-game items was proved successful, it wasn’t long before other major publishers were exploring the idea. For example, EA was already seeing tremendous return on its own experiments in its popular FIFA games.

Loot boxes have become more and more popular in the last year; the most recent wave can largely be traced back to:

  • Blizzard's Overwatch, which released in 2016. The game's loot boxes feature cosmetic items, such as costumes, player avatars and voice lines for characters. You can't pick which characters you get rewards for, so the items may or may not be for a hero you actually use. But none of the rewards affect actual gameplay, which may be why Overwatch wasn't embroiled in controversy.
  • Free-to-play games, which have long relied on small transactions to make money. Developers would artificially restrict gameplay in order to get players to invest in these “microtransactions” that would make the game more fun.

The first shot at loot boxes on the Western side of things was Valve's Team Fortress 2. Valve introduced crates and item trading with the launch of the Mann-conomy update in 2010, and transitioned the game to a free-to-play business model in June 2011. MMOs that fell on hard times, like Star Trek Online and Lord of the Rings Online, switched to the same model when they went free-to-play.

In 2009, Andrew Wilson, now the CEO of EA, started taking advantage of this economy with the release of FIFA 09's Ultimate Team mode.
For the past few years, FIFA has offered a mode called Ultimate Team, in which players collect trading cards to build virtual clubs. Although EA initially sold Ultimate Team as an add-on to the main game, in 2010 the publisher started offering the mode for free, relying solely on card pack sales to generate revenue. That move paid off: Ultimate Team now generates around $800 million annually for EA.

These days, it’s unusual to see a big budget game without some form of microtransaction. Overwatch, for instance, sells for $60, but then includes the ability to purchase loot boxes packed with cosmetic items.

The key to that game's continued success is that everything else, from new maps and characters to new modes, remains free for everyone.

The debate is not so much about whether microtransactions should exist in video games as about how they are handled.

EA learned just how careful a company has to be in the way it uses microtransactions with the release of Star Wars: Battlefront II. It was originally set to have a microtransaction system that asked players to invest extra time or money to unlock major playable heroes. The outcry led the company to temporarily pull the microtransaction system on the eve of the game’s launch.

Most gamers consider loot boxes to be gambling: you open a box and get items that may or may not be what you want, or that may be worthless. To get the items they want, players may have to sink money into multiple loot boxes until they hit the jackpot.

As for microtransactions, the argument is that you’re paying for something that should already be in the game. Many refer to this as “pay to play/win.” In the case of Battlefront, it was far easier to pay for a ton of loot boxes filled with powerful cards for your characters, rather than spend dozens of hours grinding for them in-game.

At the end of the day, publishers are businesses looking for a profit. Video game prices have largely been flat since the late 1990s, yet publishers fund developers working on cutting-edge graphics, innovative gameplay, full voice acting and motion capture, and in some cases support a game for years after release, without charging more for those games. Publishers may argue that microtransactions are a way to get a greater return on investment so they can continue making more expensive, innovative games.
The problem is that more and more developers are taking advantage of this and including micro-transactions / lootboxes everywhere.

While microtransactions are fairly straightforward, many gamers suggest the random rewards from loot boxes may be a form of gambling.

In October, though, the Entertainment Software Rating Board (which rates games for age appropriateness and factors like violence or sexuality) decided not to classify loot boxes as gambling.
In Hawaii, Rep. Chris Lee recently issued a statement referring to loot boxes as "predatory practices" by developers. He called Battlefront II a "Star Wars-themed online casino", and while he thinks national change may be difficult, he wants to legislate against the practice at state level. In Belgium, the country's gambling authority has declared that Battlefront II's loot boxes do constitute a game of chance, combining "money and addiction", and the commission reportedly wants the practice banned throughout Europe.

What can we do

If you’re a gamer yourself, vote with your wallet. You can complain all you want about loot boxes and microtransactions, but if you buy them, developers and publishers will keep utilising them.
Parents who don't want their children spending money on in-game items can check whether games have microtransactions or loot boxes. They can also make sure that no payment method, such as a debit or credit card, is attached to the consoles or accounts their children use.
In China, loot boxes have to display clear odds of getting each rarity or item, and regulators check whether the published odds are accurate. I think this rule should be applied to every game around the world, so that at the very least you know what you are investing in and what items you could possibly get.
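Published odds also let players work out what a "jackpot" actually costs: with a drop chance of p per box, it takes 1/p boxes on average. A quick sketch with hypothetical numbers:

    p = 0.01      # assumed 1% chance of the wanted legendary item per box
    price = 1.99  # assumed price per box

    expected_boxes = 1 / p
    print(expected_boxes)          # 100 boxes on average
    print(expected_boxes * price)  # ~$199 expected spend for a single item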
In my opinion, loot boxes should be removed completely unless they contain only cosmetic items that give no advantage to the player, in order to keep gameplay balanced and to prevent children from being drawn in by the gambling mechanic.

As for publishers and developers, they are likely watching what's happening with Star Wars Battlefront II with a cautious eye. EA has paused microtransactions and removed some of the rarest prizes from loot boxes in the hope of calming fans, and that could have a big impact on how the next wave of AAA games implements them.