Optimize Your Games In Unity – The Ultimate Guide


Reading Time: 34 minutes
Level: Beginner – Intermediate – Advanced
Version: Unity (Any Version)

When developing your game, one of the hardest things to do is optimization, whether you are creating a mobile, desktop, or console game.
 
And we all know that if your game is not well optimized, it will result in lost downloads, refund requests, and ultimately your game and your studio being labeled as ones that make poor games.
 
In this post, I will go over everything I have learned over the course of 8 years developing games in Unity. I will show you the wrong way and the right way to do things, and I will explain everything I use in the examples.
 

Important Information Before We Start

When it comes to optimization, the Profiler is your best friend. Always profile your games on the platform you are targeting, especially mobile.
 
The Profiler will give you information about the performance of your game and any spikes that are causing low performance.
 
That being said, everything I am going to cover in this post will help you write better, more optimized code and choose optimized settings for your game, helping you reach 60+ FPS.
 
The order of the tips doesn’t matter; I will write everything as I remember it and add new things as I learn them. You can use the navigation menu on the left side to jump directly to the part you want to learn.
 
Now let’s get started.
 

Always Cache Your Components

A good practice that you need to get used to is to always cache your variables. Code like
 
				
					void FixedUpdate()
    {
        GetComponent<Rigidbody>().AddForce(new Vector3(200f, 300f, 300f));
    }
				
			
is very costly for your game performance, especially on mobile devices. If you need to access a component of a game object, first declare a variable of the component type you need
 
				
					Rigidbody body;
				
			

and then in Awake get a reference to that variable:

				
					void Awake()
{
    body = GetComponent<Rigidbody>();
}
				
			

I know we could also use the Start function for this purpose, or even OnEnable, but I used Awake in this example because Awake is the first initialization function called when the game starts. If I want to get references or initialize variables, I always do it in Awake so it is executed before anything else needs it.

After that you can safely apply force to your Rigidbody variable:

				
					void FixedUpdate()
    {
        body.AddForce(new Vector3(200f, 300f, 300f));
    }
				
			


Caching Components VS SerializeField

There is another way to get a reference to cached variables, and that is by adding the SerializeField attribute above the variable declaration:

				
					[SerializeField]
    private Rigidbody body;
				
			

This will expose the variable in the Inspector tab, and we can now drag the game object itself (provided it has the desired component attached) into the exposed variable field to get a reference to it:


Now, which of the two methods is more optimized? The answer is SerializeField, because you don’t need to run any code to get a reference to the desired component. This is especially effective if you have a lot of game objects, such as enemies or collectable items, that need a certain component when they are spawned.

There will still be times when you need to get a reference to a component via code, but whenever you can, get the reference with SerializeField by dragging the desired component into the appropriate slot in the Inspector tab.

Cache Your Non-Component Variables As Well

One of the things that I see A LOT in tutorials and online courses is the following:
 
				
					void Update()
    {
        Vector3 distanceToEnemy = transform.position - enemyPosition;
    }
				
			
When I profiled a mobile game I was working on, I found that creating a new Vector3 variable in the Update function, as in the example above, added 0.02 ms to the execution time.
 
Given that to reach 60 FPS all of your code needs to execute within roughly 16 ms per frame, you can imagine how unoptimized this one simple line of code is.
 
The solution is to cache the variable before you use it:
 
				
					Vector3 distanceToEnemy;

    void Update()
    {
        distanceToEnemy = transform.position - enemyPosition;
    }
				
			

This is also a rule for any other variable types. It is always better to do

				
					float distance;

    void Update()
    {
        distance = Vector3.Distance(transform.position, enemyPosition);
    }
				
			

than

				
					void Update()
    {
        float distance = Vector3.Distance(transform.position, enemyPosition);
    }
				
			
So get into the habit of doing this with all of your variables, especially object-type variables like vectors, components, and custom classes you create, but also with floats, ints, booleans, and strings.
 

Don't Use Camera.main

This is something else I see a lot in online tutorials and courses, and it kills your performance. Accessing the camera by using
 
				
					Camera.main
				
			
especially if you use it a lot, will slow down your game. This is connected to the caching we talked about above, and the solution, again, is to cache the camera variable before using it (newer Unity versions cache Camera.main internally, but explicit caching works on any version):
 
				
					Camera mainCam;

    private void Awake()
    {
        mainCam = Camera.main;
    }
				
			


Avoid Repeated Access to MonoBehaviour Transform

Another thing you need to be careful of is repeatedly accessing the transform property of a MonoBehaviour. Internally this fetches the Transform component attached to the game object, which has a small cost every time you do it.

Again, the solution for this is to cache the transform variable:

				
					Transform myTransform;

    private void Awake()
    {
        myTransform = transform;
    }
				
			


Optimizing Strings

One of the heaviest performance costs in your Unity game can come from the strings you are using. Yeah, you read that right, STRINGS.
 
The first mistake you will see everywhere online is when a collision check is performed:
 
				
					private void OnTriggerEnter2D(Collider2D collision)
    {
        if (collision.tag == "Player")
        {
        }
    }
				
			

This is not the way to go. When you are checking the tag of the collided game object, it is better to use the CompareTag function, which does the comparison without allocating a new string:

				
					private void OnTriggerEnter2D(Collider2D collision)
    {
        if (collision.CompareTag("Player"))
        {
        }
    }
				
			

Another common mistake is when declaring an empty string; people usually write:

				
					private string playerName = "";
				
			

A better way is to use string.Empty:

				
					private string playerName = string.Empty;
				
			


Strings And Text UI

When using strings with UI text, you need to be careful about updating the text often, especially if that happens in the Update function.

This is something a lot of people do with timers; they usually write code that looks like this:

				
					[SerializeField]
    private Text timerTxt;

    private float timerCount;

    private void Update()
    {
        timerCount += Time.deltaTime;
        timerTxt.text = "Time: " + (int)timerCount;
    }
				
			

While this looks like a very simple operation, it is going to slow down your game significantly, especially on mobile.

The reason is that a string is a reference type. Every time you concatenate a string as in the example above, you create a new string object.

Now imagine doing this in the Update function, which is called every frame. You are creating a new object that piles up in memory every single frame, generating garbage that mobile devices in particular struggle to handle.

The solution is to use a StringBuilder.

				
					// needed for importing string builder
using System.Text;

public class OptimizingGames : MonoBehaviour
{
    [SerializeField]
    private Text timerTxt;

    private float timerCount;

    private StringBuilder timerTxtBuilder = new StringBuilder();

    private void Update()
    {
        CountTime();
    }

    void CountTime()
    {
        timerCount += Time.deltaTime;

        timerTxtBuilder.Length = 0;
        timerTxtBuilder.Append("Time: ");
        timerTxtBuilder.Append((int)timerCount);

        timerTxt.text = timerTxtBuilder.ToString();
    }

}
				
			
Use string builders wherever you need to build strings often, especially for timers and countdowns.
 

Avoid Using Instantiate Function During Gameplay

When it comes to the Instantiate function, which creates a copy of a provided prefab, you will find different opinions online. The majority say: don’t use Instantiate during gameplay.

This is somewhat true. I say somewhat because I’ve used Instantiate during gameplay in one of my mobile games, and when I profiled it, the game ran smoothly, never dropping below 60 FPS.

This goes to show that you should always refer back to the Profiler and the stats you see there.

That being said, it is always a better idea to use the pooling technique instead of relying on Instantiate, especially for bullets, collectable items, or any other game element that is spawned often.

If you don’t know what pooling is, I am going to leave a basic pooling class below for you to inspect and learn from:
 
				
using System.Collections;
using System.Collections.Generic;
using UnityEngine;

// Note: SpiderBullet is assumed to be a custom bullet script (attached to
// the bullet prefab) that exposes a ShootBullet(Vector3 direction) method.
public class BasicPool : MonoBehaviour
{
    [SerializeField]
    private GameObject bulletPrefab;

    [SerializeField]
    private Transform bulletSpawnPos;

    [SerializeField]
    private float minShootWaitTime = 1f, maxShootWaitTime = 3f;

    private float waitTime;

    // the pool stores the SpiderBullet component of every spawned bullet
    [SerializeField]
    private List<SpiderBullet> bullets;

    private bool canShoot;
    private int bulletIndex;

    [SerializeField]
    private int initialBulletCount = 2;

    private void Start()
    {
        for (int i = 0; i < initialBulletCount; i++)
        {
            // while instantiating the bullet game object also get the SpiderBullet component
            bullets.Add(Instantiate(bulletPrefab).GetComponent<SpiderBullet>());
            bullets[i].gameObject.SetActive(false);
        }

        waitTime = Time.time + Random.Range(minShootWaitTime, maxShootWaitTime);
    }

    private void Update()
    {
        if (Input.GetMouseButtonDown(0) && Time.time > waitTime)
        {
            Shoot();
            waitTime = Time.time + Random.Range(minShootWaitTime, maxShootWaitTime);
        }
    }

    public void Shoot()
    {
        canShoot = true;
        bulletIndex = 0;

        while (canShoot)
        {
            // search for an inactive bullet to reuse
            if (!bullets[bulletIndex].gameObject.activeInHierarchy)
            {
                bullets[bulletIndex].gameObject.SetActive(true);

                bullets[bulletIndex].transform.rotation = transform.rotation;
                bullets[bulletIndex].transform.position = bulletSpawnPos.position;

                bullets[bulletIndex].ShootBullet(transform.up);

                canShoot = false;
            }
            else
            {
                bulletIndex++;
            }

            // every pooled bullet is active, so grow the pool with a new one
            if (bulletIndex == bullets.Count)
            {
                bullets.Add(Instantiate(bulletPrefab, bulletSpawnPos.position, transform.rotation)
                    .GetComponent<SpiderBullet>());

                // access the bullet we just created by subtracting 1 from
                // the total bullet count in the list
                bullets[bullets.Count - 1].ShootBullet(transform.up);

                canShoot = false;
            }
        }
    }

} // class

				
			

But sometimes there will be situations where you simply need to use Instantiate because it is the shortest solution, and there is no harm in it if the Profiler shows no issues, so keep that in mind.

Remove Empty Callback Functions

As you already know, the Awake, Start, and OnEnable initialization functions are called when a game object is spawned.

Update and LateUpdate are called every frame, while FixedUpdate is called at a fixed time step.

The issue with these functions is that they will be called even if they are empty, because Unity doesn’t check whether they actually contain any code.

If you leave them defined in your script

				
					private void Awake()
    {

    }

    private void Start()
    {
        
    }

    private void Update()
    {
        
    }

    private void FixedUpdate()
    {
        
    }
				
			
even if they are empty like in the example above, Unity is going to call them.
 
Now you might say: okay, but they don’t have any code inside, so what is the harm when nothing is going to be executed?
 
The harm is that whenever a game object is instantiated, Unity adds any defined callbacks, such as Awake or Update, to a list of functions to be called at specific moments (Awake when the object is spawned, Update every frame, and so on).
 
This wastes CPU power due to the cost of the engine invoking these functions. It can become a real problem if you leave empty callbacks in, say, a bullet prefab that is spawned every time you shoot, or a collectable item spawned when the player earns an achievement.
 
Not to mention that if you are creating a larger game and over time populate your scenes with thousands of game objects that have empty Awake, Start, or Update functions, it can cause slow scene load times.
 
For a short demonstration, I am going to create two classes, one that has empty callback functions and another that doesn’t have any:
 
				
					using System.Collections;
using System.Collections.Generic;
using UnityEngine;

public class NoEmptyCallbacks : MonoBehaviour
{
    
}
				
			
				
					using System.Collections;
using System.Collections.Generic;
using UnityEngine;

public class WithEmptyCallbacks : MonoBehaviour
{
    // Start is called before the first frame update
    void Start()
    {

    }

    // Update is called once per frame
    void Update()
    {

    }
}
				
			
I am going to attach these two scripts to two prefabs and spawn 1000 copies of each prefab using the following code:
 
				
					[SerializeField]
    private GameObject withFunctions, noFunctions;

    private int spawnNum = 1000;

    [SerializeField]
    private bool instantiateWithFunctions;

    private void Awake()
    {
        for (int i = 0; i < spawnNum; i++)
        {
            if(instantiateWithFunctions)
                Instantiate(withFunctions);
            else
                Instantiate(noFunctions);
        }
    }
				
			
Let’s take a look at the Profiler and see what happens when we instantiate the object that has empty callback functions and what happens when we instantiate the object that doesn’t have empty call back functions:
 

As you can see from the example, just having empty Start and Update functions in the class made the spike on the Profiler skyrocket when we created objects with that class attached.

When Raycasting Use Zero Allocation Code

Avoid raycast code that allocates memory. All of the common overlap and raycast functions have a non-allocating version:

				
					// instead of 
    Physics.OverlapBox

    // use
    Physics.OverlapBoxNonAlloc

    // instead of 
    Physics.OverlapCapsule

    // use
    Physics.OverlapCapsuleNonAlloc

    // instead of 
    Physics.OverlapSphere

    // use
    Physics.OverlapSphereNonAlloc

    // instead of 
    Physics.RaycastAll

    // use
    Physics.RaycastNonAlloc
				
			

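The non-allocating variants write their hits into a buffer you allocate once and reuse, and they return how many entries were filled. Here is a minimal sketch of using Physics.RaycastNonAlloc (the buffer size of 16 is an arbitrary choice for this example):

```csharp
using UnityEngine;

public class ZeroAllocRaycaster : MonoBehaviour
{
    // allocated once and reused every frame, instead of RaycastAll
    // allocating a fresh array on each call
    private readonly RaycastHit[] hitBuffer = new RaycastHit[16];

    private void Update()
    {
        // returns the number of hits written into the buffer
        int hitCount = Physics.RaycastNonAlloc(
            transform.position, transform.forward, hitBuffer, 100f);

        for (int i = 0; i < hitCount; i++)
        {
            // only the first hitCount entries are valid this frame
            Debug.Log(hitBuffer[i].collider.name);
        }
    }
}
```

Note that if more objects are hit than the buffer can hold, the extra hits are simply dropped, so size the buffer for the worst case you care about.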

When Calculating Distance With Vectors Use Distance Squared

When you are creating a game in Unity you will be working with vectors A LOT. I mean A LOT A LOT, like A LOT A LOT A LOT – okay teacher we get the point.
 
Small joke, I know that I am the only one who find it funny he he he 🙂
 
Whenever we call the magnitude property
 
				
					Vector3 playerPos = new Vector3();

    private void Start()
    {
        float mag = playerPos.magnitude;
    }
				
			
or when we use the Distance function to calculate the distance between two vectors
 
				
					Vector3 playerPos = new Vector3();
    Vector3 enemyPos = new Vector3();

    private void Start()
    {
        float dist = Vector3.Distance(playerPos, enemyPos);
    }
				
			

we are asking the computer to perform a square root calculation. Square root calculations are relatively expensive, so performing them often can lead to performance problems.

The solution is to compare squared distances instead, using the sqrMagnitude property:

				
					Vector3 playerPos = new Vector3();
    Vector3 enemyPos = new Vector3();

    float distanceFromEnemy = 5f;

    private void Start()
    {
        float dist = (playerPos - enemyPos).sqrMagnitude;

        if (dist < (distanceFromEnemy * distanceFromEnemy))
        {
            // your code
        }
    }
				
			


Using Coroutines And Invoke Functions For Timers

When creating timers it can be tempting to use coroutines or the InvokeRepeating function to do the job:
 
				
					private int timerCount;
    private bool canCountTime = true;

    private int endOfTimeValue = 1000;

    private void Start()
    {
        StartCoroutine(CountTimer());
    }

    IEnumerator CountTimer()
    {
        while (canCountTime)
        {
            yield return new WaitForSeconds(1f);
            timerCount++;
            // display timer count

            if (timerCount > endOfTimeValue)
                canCountTime = false;
        }
    }
				
			
				
					private int timerCount;
    private bool canCountTime = true;

    private int endOfTimeValue = 1000;

    private void Start()
    {
        InvokeRepeating("CountTimer", 1f, 1f);
    }

    void CountTimer()
    {
        if (!canCountTime)
        {
            CancelInvoke("CountTimer");
        }

        timerCount++;
        // display timer count

        if (timerCount > endOfTimeValue)
            canCountTime = false;
    }
				
			
I’ve been guilty of this as well 🙂 But coroutines and the Invoke function are not cheap, especially when you call them over and over, and I’ve seen this by profiling one of my mobile games.
 
The problem with a coroutine is that it allocates memory when it is called, to store its current state until the next time it resumes. This is not a one-time cost: every time the coroutine reaches a yield statement it creates the same allocation again, and you can see this clearly in the Profiler whenever a coroutine runs. On top of that, the new WaitForSeconds(1f) inside the loop above allocates a new object on every iteration; caching a single WaitForSeconds instance in a field avoids that part of the garbage.
 
As for the InvokeRepeating function, it is not as memory-heavy as a coroutine and has a slightly lower cost, but it is still not free.
 
A better approach is to create a timer using Time.time:
 
				
private float nextTick;
    private int secondsElapsed;
    private bool canCountTime = true;

    private int endOfTimeValue = 1000;

    private void Update()
    {
        CountTimer();
    }

    void CountTimer()
    {
        if (!canCountTime)
            return;

        if (Time.time > nextTick)
        {
            nextTick = Time.time + 1f;
            secondsElapsed++;
            // display secondsElapsed

            if (secondsElapsed >= endOfTimeValue)
                canCountTime = false;
        }
    }
				
			
You can even optimize this further by checking the time condition outside the CountTimer function:
 
				
private float nextTick;
    private int secondsElapsed;
    private bool canCountTime = true;

    private int endOfTimeValue = 1000;

    private void Update()
    {
        if (Time.time > nextTick)
            CountTimer();
    }

    void CountTimer()
    {
        if (!canCountTime)
            return;

        nextTick = Time.time + 1f;
        secondsElapsed++;
        // display secondsElapsed

        if (secondsElapsed >= endOfTimeValue)
            canCountTime = false;
    }
				
			
Because Update is called every frame, at 60 FPS the CountTimer function would be called 60 times per second; this way we limit it to roughly once every second.
 

Create Prefabs Out Of Your Levels Instead Of Scenes

One of the things I love to do in the mobile games I’ve created is turning the levels into prefabs.
 
If my game has 100 levels, I will have 100 prefabs instead of 100 scenes:
 

When I want to “load” a new level, instead of using

				
					SceneManager.LoadScene("Scene Name");

or

SceneManager.LoadScene(sceneIndex);
				
			

I will simply use Instantiate to create the new level

				
					Instantiate(levelPrefab);
				
			

This is more optimized because when we load a new scene, all objects in the previous scene are destroyed.

Depending on your gameplay and how many game managers you have in your scenes, this means every scene load destroys all those objects and then recreates them in the new scene.

By making your level a prefab, you only destroy the level prefab and instantiate a new one, which you can do behind a simulated loading screen.

Be careful if your level is very complex and has a lot of objects; in that case a better approach is to keep all your levels in the scene and activate or deactivate the ones you need. I used this approach in one of my mobile games.

A very important part when using this approach is to test it in the Profiler and see what it has to say just to be sure.
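A minimal sketch of the prefab-swap approach (the LevelLoader class and its field names are illustrative, not from any specific project):

```csharp
using UnityEngine;

public class LevelLoader : MonoBehaviour
{
    // one prefab per level, assigned in the Inspector
    [SerializeField]
    private GameObject[] levelPrefabs;

    private GameObject currentLevel;

    public void LoadLevel(int index)
    {
        // destroy only the level itself; managers in the scene survive
        if (currentLevel != null)
            Destroy(currentLevel);

        currentLevel = Instantiate(levelPrefabs[index]);
    }
}
```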

Load Scenes Asynchronously

If your levels are huge and turning them into prefabs would make deleting one level and creating the next a heavy operation, then use scenes for your levels.
 
In that case, instead of using
 
				
					SceneManager.LoadScene("Scene Name");
				
			

use

				
					SceneManager.LoadSceneAsync("Scene Name");
				
			

The difference between the two is that LoadScene blocks the main game thread until the scene loads, which results in a poor user experience.

But when we use LoadSceneAsync the scene will load gradually in the background without causing a significant impact on the user experience.

With LoadSceneAsync we can also display a realtime loading screen to the user. This can be accomplished with the following code:

				
private AsyncOperation sceneLoadOperation;

    IEnumerator LoadSceneAsynchronously(int sceneIndex)
    {
        sceneLoadOperation = SceneManager.LoadSceneAsync(sceneIndex);

        while (!sceneLoadOperation.isDone)
        {
            // progress goes from 0 to 0.9 while loading and jumps to 1
            // when the scene activates, so normalize it for display
            float progress = Mathf.Clamp01(sceneLoadOperation.progress / 0.9f);
            Debug.Log("Loading: " + (progress * 100f) + "%");
            yield return null;
        }
    }
				
			
Of course, instead of using Debug.Log you would display the loading percentage with a UI text on the screen.
 

Use Arrays Over Lists

With their dynamic resizing, lists are more attractive than arrays, but that flexibility comes with a performance cost.

In general, if you need a fixed collection of items, go with arrays, as they are more efficient. If you need a resizable collection, use lists.

This topic is debatable; if you search online you will see people recommending lists over arrays and vice versa, and the Profiler will help clear your doubts. But when you are working on a larger game with many features, you will often need a fixed-size container, so choose the appropriate type depending on your needs.

For example if you have 20 collectable items in your game and you want to spawn them randomly, you will store those items in an array. But if you are using a pooling technique like I did in the example in this post, then you will use a list because you are dynamically adding new objects in the list.
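As a quick sketch of that rule of thumb (the field names here are illustrative):

```csharp
using System.Collections.Generic;
using UnityEngine;

public class ContainerChoice : MonoBehaviour
{
    // a fixed set of 20 collectable prefabs, known up front: an array fits
    [SerializeField]
    private GameObject[] collectablePrefabs = new GameObject[20];

    // a pool that can grow at runtime: a list fits better
    private readonly List<GameObject> bulletPool = new List<GameObject>();
}
```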

Use For Loop Over Foreach

Speaking of arrays, when you process them, prefer a for loop over foreach.
 
I know the foreach loop looks fancy and is pleasing to the eye, but that can come with a performance cost: for some collection types, foreach uses an enumerator to iterate, which is more work than simply indexing through the elements with a for loop.
 
So wherever you can in your code, always replace
 
				
					foreach (GameObject bullet in bullets)
{

}
				
			

with

				
					for (int i = 0; i < bullets.Count; i++)
{
}
				
			


Be Careful With GameObject.Find Functions

We all know that every Find function is notoriously expensive and it should be avoided at all cost. BUT, this is not quite true.

Well, it is true that any Find function is expensive, but if you know how to use it then you will not have issues with it, because, let’s face it, there are times where we simply can’t avoid using Find functions.

The way Find(“Name of game object”) and FindWithTag(“Tag of game object”) work is that they iterate through every game object in the scene until they find the object with the specified name or tag.

So if your scenes are not loaded with hundreds or thousands of game objects, using Find or FindWithTag to get a reference to a specific game object will not be an issue, as long as you make that call in one of the initialization functions.

This means you should avoid calling Find functions in Update or inside any kind of loop, or face the consequences. When you use Find functions, use them in Awake, Start, or OnEnable, and always check the Profiler for any issues.
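A short sketch of that guideline: do the search once in an initialization function and cache the result (the object name and tag below are hypothetical):

```csharp
using UnityEngine;

public class EnemyController : MonoBehaviour
{
    private GameObject gameManager;
    private Transform playerTransform;

    private void Awake()
    {
        // one-time scene searches; never do this in Update or a loop
        gameManager = GameObject.Find("GameManager");
        playerTransform = GameObject.FindWithTag("Player").transform;
    }
}
```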

IL2CPP VS Mono

One setting that is connected to programming but doesn’t involve writing code is the Scripting Backend setting.

You can find it under Edit -> Project Settings -> Player -> Configuration -> Scripting Backend:

So what is IL2CPP and what is Mono?
 
IL2CPP (Intermediate Language To C++) is a scripting backend developed by Unity, and Mono is an open-source implementation of Microsoft’s .NET Framework. The main difference between the two is how they compile code.
 
IL2CPP uses AOT (Ahead Of Time) compilation: it takes more time to build, but the binary is completely specified when you ship the game.
 
Mono uses JIT (Just In Time) compilation: builds are faster, but compilation to machine code happens on the end user’s device.
 
I will not dive deeper into more differences between the two as that is something that will not affect our development, but what is important to know is that you should always select IL2CPP as the Scripting Backend:
 
The reason is that IL2CPP is developed by Unity, as I already mentioned, and over time Unity will keep improving it and may eventually ditch the Mono scripting backend.
 
This is my opinion on the matter; I might be wrong and Unity may never ditch Mono, but IL2CPP is the safer choice.
 

Set Objects That Are Not Supposed To Move To Static

If you attach a collider on a game object, be that 2D or 3D, but you don’t plan to move that game object in the game, it is a good idea to mark it as static:

The reason is that, in terms of physics, there are dynamic and static game objects, and marking a game object as static tells Unity which one it is.
 
Attaching a collider to a game object makes Unity treat it as a physics object and include it in physics-based calculations, for example when another game object collides with it.
 
Because of that, if the game object is not supposed to move, like a fence, a stone, or a door, marking it as static makes it take fewer resources and weigh less on overall performance.
 
This might not sound important, but imagine having 5000 game objects in your level, from the terrain down to the smallest glasses and stones, where 4900 of them are never supposed to move; if you don’t mark them as static, it can get ugly pretty quickly.
 
Another very important thing about static colliders is how the physics engine handles them. When the game starts, the physics engine generates data for all game objects marked as static; if new static objects are created during gameplay, the physics engine has to regenerate that data for all static objects.
 
So be careful when marking game objects as static, and don’t spawn new static game objects during gameplay, as that can cause massive spikes in performance.
 

Collision Detection Settings

The default Collision Detection setting for every Rigidbody is Discrete:

But if you click the drop-down list for the Collision Detection setting, you will see that there are more options:
 
So what is the difference between them?
 
The Discrete setting moves a physics object a small distance each timestep based on its velocity, then performs a bounding-volume check for overlaps; any overlaps it finds are treated as collisions and resolved based on how the objects overlap.
 
The problem with this method is that an object moving too fast can pass straight through a collider, and this is not something we want to see in our game.
 
The solution for this problem is to use Continuous collision. The Continuous collision setting uses an algorithm that projects a shape across an object’s path of travel. This shape is then used to check for any collisions between frames.
 
This reduces the risk of missed collisions, but the price is a significantly higher CPU overhead compared to the Discrete setting.
 
In essence, the only time you need this option is when you have fast-moving objects. I mention it because when we run into a problem we all search Google for solutions and usually go with the first answer that works for our project, without worrying about how it affects performance.
 
The 2D physics system has the same option in the Rigidbody2D component:
 

But in the Rigidbody component we also saw two additional settings: Continuous Dynamic and Continuous Speculative.

These additional collision detection settings are further optimizations unique to Unity’s 3D physics system because 3D collision detection is much more expensive than 2D collision detection.

To understand the difference between the Continuous Dynamic and Continuous Speculative setting, we first need to understand how the Continuous mode actually works.

When you set a Rigidbody to the Continuous collision detection mode, it only uses continuous collision detection against static colliders, i.e. objects that have a collider but no Rigidbody component.

This means that if two game objects both have colliders and Rigidbodies set to Continuous collision detection and they collide with each other, they may still pass through each other.

Game objects that use the Continuous Dynamic collision detection setting will not have these issues, as they continuously detect collisions against all game objects except those whose Rigidbody is set to the Discrete collision mode.

The Continuous Speculative setting is even more advanced because it collides against everything, be it a static or a dynamic game object, no matter which collision detection mode it uses. This setting is also faster than the normal Continuous and Continuous Dynamic modes, and it detects collisions that are missed by the other continuous collision settings.
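The collision detection mode can also be set from code, which is handy when you spawn fast-moving objects at runtime. A minimal sketch, assuming a hypothetical ProjectileSetup script attached to an object with a Rigidbody:

```csharp
using UnityEngine;

// Hypothetical setup script for a fast-moving projectile.
public class ProjectileSetup : MonoBehaviour
{
    void Awake()
    {
        // Only fast-moving objects need continuous collision detection;
        // Discrete is the cheapest default for everything else.
        Rigidbody rb = GetComponent<Rigidbody>();
        rb.collisionDetectionMode = CollisionDetectionMode.ContinuousDynamic;

        // Other values: CollisionDetectionMode.Discrete (cheapest),
        // CollisionDetectionMode.Continuous,
        // CollisionDetectionMode.ContinuousSpeculative
    }
}
```

Everything else in your scene can stay on the cheap Discrete default.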

Reuse Collision Callbacks

When a MonoBehaviour.OnCollisionEnter, MonoBehaviour.OnCollisionStay or MonoBehaviour.OnCollisionExit callback occurs, the Collision object passed to it is normally allocated anew for each individual callback. This means the garbage collector has to clean up each of those objects, which reduces performance.

When the Reuse Collision Callbacks option is enabled, only a single instance of the Collision type is created and reused for each individual callback. This reduces the waste the garbage collector has to handle and improves performance.
 
 You can read more about this feature here.
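If you prefer to set this from code instead of the Project Settings window, the toggle is exposed as Physics.reuseCollisionCallbacks. A minimal sketch, with PhysicsBootstrap being a hypothetical name:

```csharp
using UnityEngine;

// Hypothetical bootstrap script run once at startup.
public class PhysicsBootstrap : MonoBehaviour
{
    void Awake()
    {
        // Reuse a single Collision instance across callbacks instead of
        // allocating a new one per OnCollision* call (less GC pressure).
        Physics.reuseCollisionCallbacks = true;
    }
}
```

One caveat: with reuse enabled, don't cache the Collision object past the callback, because the same instance will be overwritten by the next collision.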
 

Unbind The Transform Component From The Physics System

When a Transform component changes, any Rigidbody or Collider on that Transform or its children may need to be repositioned, rotated or scaled depending on the change to the Transform.
 
You can control whether changes made to the Transform are automatically applied to the physics components by setting the Auto Sync Transforms property (Physics.autoSyncTransforms). When set to false, synchronization only occurs prior to the physics simulation step during FixedUpdate.
 
You can also manually synchronize transform changes using Physics.SyncTransforms. You can read more about this feature here.
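A minimal sketch of both halves of this feature; TransformSyncSetup and TeleportAndCheck are hypothetical names:

```csharp
using UnityEngine;

// Hypothetical example of disabling automatic transform/physics sync.
public class TransformSyncSetup : MonoBehaviour
{
    void Awake()
    {
        // Don't re-sync the physics world on every Transform change;
        // sync once per FixedUpdate instead.
        Physics.autoSyncTransforms = false;
    }

    void TeleportAndCheck(Transform player, Vector3 target)
    {
        player.position = target;

        // If you need an up-to-date physics world immediately
        // (e.g. before a raycast from the new position), sync manually:
        Physics.SyncTransforms();
    }
}
```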
 

Collision Layers

Not all objects in your game need to collide with each other, and this is a common thing in game development. You can use layers to determine which game objects can collide with each other.
 
To do that, you can either click on the Layer drop down list in the Inspector tab and then click Add Layer:
 
Img-10-2.jpg

Or above the Inspector tab you can click on the Layers drop down list and click Edit Layers:

Img-11-2.jpg

From there click on the Layers drop down list and in the User Layer fields define your layers:

Img-12-2.jpg

Let’s say you don’t want the enemy objects to collide with coin objects. Assuming that you defined Enemy and Coin layer, you can go in Edit -> Project Settings -> Physics or Physics 2D if it’s a 2D game, and in the Layer Collision Matrix uncheck the checkbox for the layers that you don’t want to collide with each other:

Img-13-2.jpg
Depending on the scope of your game, this can save you a lot of unnecessary physics calculations, because if two layers are not supposed to collide with each other, Unity will ignore collisions between game objects that are set on those layers.
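The same matrix can also be driven from code with Physics.IgnoreLayerCollision. A minimal sketch, assuming the Enemy and Coin layers from the example above are defined in the project:

```csharp
using UnityEngine;

// Hypothetical startup script mirroring the Layer Collision Matrix setup.
public class LayerCollisionSetup : MonoBehaviour
{
    void Awake()
    {
        // Same effect as unchecking the Enemy/Coin box in the
        // Layer Collision Matrix, assuming both layers exist.
        int enemyLayer = LayerMask.NameToLayer("Enemy");
        int coinLayer = LayerMask.NameToLayer("Coin");
        Physics.IgnoreLayerCollision(enemyLayer, coinLayer, true);

        // For a 2D game use Physics2D.IgnoreLayerCollision instead.
    }
}
```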
 

Be Careful With Collider Types

As you are aware there are different types of 2D and 3D colliders:
 
Img-9-2.jpg

Depending on your game and the shape of the game objects in your game you will select different types of colliders. What you need to keep in mind is that the most expensive colliders are the ones that take the shape of the game object.

In 2D that is the Polygon collider, and in 3D that is the Mesh collider. If you need to use these colliders, make sure the number of collider points is as low as it can be in a 2D game:

By pressing the Edit collider button you can remove unnecessary points that are connecting with each other in order to form the Polygon Collider 2D. You remove the points by holding CTRL on Windows or CMD on MacOS and left clicking on the points you want to remove.

Unfortunately this is not possible to do with a Mesh collider for 3D games, so you need to be careful which type of collider you pick for your game objects.

If the game object is a stone, for example, and you absolutely need the collider to have the same shape as the stone, then you can use a Mesh collider, especially if you want other game objects to collide with that stone. But as a general rule, try to avoid the Mesh collider as well as the Polygon Collider 2D as much as possible.

Reducing Draw Calls With Dynamic And Static Batching

Every object visible in a scene is sent by Unity to the GPU to be drawn. Drawing objects can be expensive if you have a lot of them in your scene, especially on mobile devices.

We can use dynamic and static batching to reduce the number of draw calls, but before we do that, it is important to know what draw calls and batches are.
 
A draw call represents the number of calls to the graphics API to draw objects on the screen, while a batch is a group of draw calls to be drawn together.
 
Batching objects together minimizes the state changes needed to draw each object inside the batch. This leads to improved performance by reducing the CPU cost of rendering objects.
 
There are two ways Unity groups objects into batches to be drawn: Dynamic Batching and Static Batching. It is important to note that only objects that share properties like textures or materials can be batched together.
 
Let’s see an example. Here I have a scene with 4 different game objects: cube, sphere, capsule and a cylinder:
 
Img-14-2.jpg
I have created a Test Material and attached it on all 4 shapes:
 
Img-15-2.jpg

When we run the game and open the Stats window, we will see that it takes 19 batches, i.e. draw calls, to render the scene:

Img-16-2

To enable static batching, all we have to do is select all 4 game objects, and in the Inspector tab click on the Static drop down list and select Batching Static:

Img-17-2

If we run the game now and open the Stats window, we will see that it now takes 7 batches (draw calls) to render the scene:

Img-18-2

Keep in mind that you can only use static batching on game objects that are not moving and that share the same material or texture.

When it comes to dynamic batching, the batches are generated at runtime (during gameplay). The objects contained in a batch can vary from frame to frame depending on which objects are currently visible to the camera, and even objects that move can be batched.

For dynamic batching to work, the game objects need to use the same material or texture and have the same mesh, and in some cases the scale needs to be the same as well.

In the new scene I have 5 cubes that share the same material. Let’s run the game and take a look at the Stats window:

Img-19-2

It takes 33 batches (draw calls) to draw the scene. Now let's turn on dynamic batching by going under Edit -> Project Settings -> Player -> Other Settings:

Img-20-2

When we run the game now, we will see that it takes only 9 batches to render the scene:

Img-21-2


Reducing Draw Calls With GPU Instancing

Another way we can reduce draw calls is by using GPU instancing. One important thing to note here is that we can't combine dynamic batching and GPU instancing; we can use only one of the two.

So in cases where dynamic batching can’t help, you can turn on GPU instancing, and this is very simple to do. You just select the material, and check the checkbox where it says Enable GPU Instancing:

Img-22-2

As I already mentioned, we can’t combine dynamic batching and GPU instancing, so before we test this out, we need to turn off dynamic batching in the Project Settings:

Img-23-2
If we run the game, we will see that it only takes 9 batches to render the scene:
 
Img-24-2
Whereas if you turned off GPU Instancing it would take 33 batches to render the same scene.
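The Enable GPU Instancing checkbox is also exposed in the scripting API as Material.enableInstancing, which can be useful if you create materials at runtime. A minimal sketch; InstancingSetup and the serialized field are assumptions:

```csharp
using UnityEngine;

// Hypothetical setup script that enables instancing on a material at runtime.
public class InstancingSetup : MonoBehaviour
{
    [SerializeField] Material sharedMaterial; // assumed assigned in the Inspector

    void Awake()
    {
        // Same as ticking "Enable GPU Instancing" on the material asset.
        // The material's shader must support instancing for this to work.
        sharedMaterial.enableInstancing = true;
    }
}
```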
 

Optimizing Lights For Better Game Performance

Another Unity feature that can eat up performance is lighting. This is something that a lot of beginner as well as intermediate Unity developers are not aware of.
 
I've seen a lot of projects where lights are just dumped into the game to make it prettier, but that beauty comes at the cost of performance.
 
I am going to reuse the batching example where I have 5 simple cubes in the scene. I have turned off batching and GPU instancing and this is the current number of batches it takes to render the scene:
 
Img-25-2

Now I am going to change the Shadow Type settings for the Directional Light we have in the scene, from Soft Shadows to No Shadows:

Img-26-2

Now open the Stats window and take a look at the batches number:

Img-27-2
It is amazing how the batch number went from 33 to 7 just by changing this one setting on the Directional Light. This tells you that lights can hit your game's performance very hard, especially on mobile devices.
 
Another way how we can save up performance with lights is by using baking. Here I have a cemetery scene which contains this temple looking 3D model:
 
Img-28-2

We see that it currently takes 469 batches to render this scene. I am going to add a simple Point Light by Right Click -> Light -> Point Light in the Hierarchy tab:

Img-29-2

I am going to change the color of the Point Light and move it inside of this temple 3D model:

Img-30-2

Now it takes 484 batches to render this scene, which means it takes 15 batches just to render this simple light effect. This is where baking comes into play.

When you bake lights, Unity performs the calculations for the baked lights in the scene and saves the results as lighting data. This means that after we bake the lights, we can turn them off, i.e. deactivate or even delete the light game object, but the light effect will stay in the same place where it was baked, and this saves a lot of performance.

The first thing we need to do to enable baking is to select all 3D models on which we want to apply baking, and in the Inspector -> Model tab check Generate Lightmap UVs:

Img-31-2

Next, you need to select the light you want to bake in the scene, and in the Inspector tab change the Mode to Baked:

Img-32-2
Now open the Lighting tab under Window -> Rendering -> Lighting Settings:
 
Img-33-2
In the Lighting tab, click on the Scene tab and then scroll all the way to the bottom and press the Generate Lighting Button:
 
Img-34-2
This can take some time depending on the complexity of your level so don’t worry if you see the process lasting for a few minutes or even more. If we take a look at the Stats window now, we will see that we saved 14 batches with baking:
 
Img-35-2
Even if we turn off the Point Light object, we will still see the light in the game:
 
Img-36-2-scaled

This is the power of baking: it takes the light information, creates data for it, and stores it in the game, so that we can use that data to simulate lights in our game instead of using real-time lights, which is a huge performance saver.

We can also take advantage of the Culling Mask property for every Light component. The culling mask works like collision layers, it determines which layers are affected by the light component:

Img-37-2
This way we can exclude game objects from being affected by lights and thus save additional performance.
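The Culling Mask can also be set from code as a layer bitmask. A minimal sketch, assuming hypothetical Player and Enemy layers defined in the project:

```csharp
using UnityEngine;

// Hypothetical setup script attached to a game object with a Light component.
public class LightCullingSetup : MonoBehaviour
{
    void Awake()
    {
        // Restrict this light to only affect the "Player" and "Enemy"
        // layers (assumed to exist); objects on other layers are skipped.
        Light pointLight = GetComponent<Light>();
        pointLight.cullingMask = LayerMask.GetMask("Player", "Enemy");
    }
}
```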
 

Selecting The Right Settings For Your Sprites

When it comes to the assets that you use for your game, I will first start with sprites as they are the most common assets used in both 2D and 3D games.
 
The biggest mistake Unity developers make when it comes to sprites is that they don’t set the compression settings for the sprite:
 
Img-38-2
The majority of developers leave these settings on default not aware that this can affect the size of your game.
 
With the current default settings, the size of the sprite image is 1.3MB:
 
Img-39-2

This represents the actual file size this image will have in our game. Now imagine creating a mobile game where you have 1000 assets and you don't optimize their size: your game will end up over 2GB in size, and unless it's some stunning action game that gets you hooked after 2 seconds of playing it, no one is going to download your 2GB game.

Now, the settings you choose will depend on the actual size of your sprite image and the platform for which you are creating the game, because with these settings you also determine the quality of that sprite in the game.

You can take a look at Unity’s official guide about sprite format settings by clicking here which will help you select the correct settings for your sprites and textures based on the platform for which you are creating the game.
 
And just to show you how these settings can affect your sprites, I am going to change the Format settings from Automatic to RGBA 32 bit:
 
Img-40-2

As you can see, this one setting made our file about four times bigger. It went from 1.3MB to 5.3MB, and this will be the file size of this sprite in your game. Imagine, this one image takes 5.3MB, so you can see how this can get ugly pretty quickly in a game with a lot of assets.

I am also going to demonstrate what happens if we change the Max Size settings from 2048 to 512 for the background sprite:

Img-41-2-scaled
You will notice that our background is very blurry in the Game tab, which shows you how these settings can affect how your sprites and your whole game will look.
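If you have hundreds of sprites, setting these options by hand gets tedious; an editor script using AssetPostprocessor.OnPreprocessTexture can apply them automatically on import. A minimal sketch, where the Assets/Sprites folder and the specific values are assumptions to adapt to your project:

```csharp
// Editor/SpriteImportSettings.cs -- must live in an "Editor" folder.
using UnityEditor;

public class SpriteImportSettings : AssetPostprocessor
{
    void OnPreprocessTexture()
    {
        // Apply compression defaults to every texture under a
        // hypothetical "Assets/Sprites" folder instead of clicking
        // through each asset by hand.
        if (!assetPath.StartsWith("Assets/Sprites")) return;

        TextureImporter importer = (TextureImporter)assetImporter;
        importer.textureType = TextureImporterType.Sprite;
        importer.maxTextureSize = 1024; // example value, tune per asset
        importer.textureCompression = TextureImporterCompression.Compressed;
    }
}
```

Hand-tuned settings on individual assets still win for hero art; the postprocessor is just a safety net for defaults.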
 

Optimizing Audio Files

Audio optimization is one of the most ignored aspects of Unity game development, because most of the time people rely on Unity to take care of it.
 
And a lot of the time this works, but as soon as your project gets a little larger and you add more and more files, you will see a noticeable performance hit as everything starts to add up.
 
Besides affecting the CPU, audio can also take up disk space and RAM, and that is why it's important to optimize audio files from the very start.
 
What is unfortunate is that these three areas overlap, and it's not possible to optimize all three at once; instead, depending on which area is causing the most problems, we optimize that area at the expense of the others.
 
Let’s explore the options we have one by one starting with Force To Mono:
 
Img-42-2
Audio sources that are fully 3D are essentially playing audio in mono. This is because even if the audio file is stereo, the two audio channels still originate from the exact same point in world space, the same way as mono.
 
Because of that, to save memory, it is better to check the Force To Mono checkbox for 3D audio sources.
 
You will not notice any difference in how the audio file sounds, but if you don't check the Force To Mono checkbox for a 3D audio source, you will waste memory because a stereo audio file is twice as large as a mono one.
 
In regards to the Normalize option below the Force To Mono checkbox: if you enable it, it will readjust the gain of the audio file so that the mono sound has the same volume as the original audio file.
 
Next we have Load In Background and Preload Audio Data:
 
Img-43-2
I have mentioned these settings together because they directly impact each other. Let's take a look at the different scenarios depending on whether the options are enabled or disabled:
 
Img-44-2

When both settings are enabled and the scene starts loading, the audio file will begin loading without stalling the main thread. If the audio file hasn't finished loading by the time the scene has finished loading, it will continue to load in the background while the scene is playing.

Next we have:

Img-45-2
In this situation, when the audio file is played for the first time, it will begin loading in the background and it will play when it finishes loading. The problem with this setting is that if the file is large, it will cause a delay between triggering and playing, but this only happens the first time the audio file is played; every next time you play it, it will play normally.
 
Moving forward:
 
Img-46-2

With these settings, the audio file is loaded at the same time as the scene. The problem here is that the scene will not start until all the sound files with this setting are loaded into memory.

And finally we have:

Img-47-1

When both settings are turned off, the first time the audio file is triggered to play, it will use the main thread to load itself in memory. If the file is large, this can cause a frame freeze. This will not be a problem every next time you play the same file.

I recommend that you be careful when using this setting, you can do it with smaller files but even then the Profiler is your best friend so use it to measure the impact it has on your game.

Next we have the Ambisonic option:

Img-48-1

This option is mainly used for VR and AR applications provided that the audio file has ambisonic encoded audio.

Moving to the Load Type. Here we have three options:

Img-49-1
Decompress On Load: The audio file will be decompressed as soon as it is loaded. This option is more suitable for small-sized files to avoid any performance issues that can arise when decompressing the file on the go. This process is heavy on the RAM, and it will also increase the loading time, but it is very cheap for the CPU and it is fast to process.
 

Compressed In Memory: This will keep the audio file compressed in memory, and it will decompress it while playing. As a result this takes less RAM and less loading time, but it is heavy on the CPU because the file needs to be decompressed every time it is played.

Streaming: The audio file will be stored on the device's persistent storage and streamed when played. With this option, RAM is not affected at all; instead, the loading is done by the CPU. This doesn't have a huge impact on performance as long as you don't play a lot of audio files simultaneously. You especially need to pay attention to this on mobile devices.

Next we have three Compression Format settings:

Img-50-1

PCM: With this option the audio file will be loaded as is, i.e. with its original size, which takes up storage space and RAM. But playing this file is almost cost-free because it doesn't need to be decompressed.

ADPCM: This option is very effective because it is very cheap to compress and decompress files, which reduces the CPU load significantly, but the downside is that the sound file might have some noise. You can always preview the audio file after you apply this setting; if it sounds the same as the original file, then you are good to go.

Vorbis: This option supports most major platforms. It can handle very high compression ratios while maintaining the sound quality but is expensive to compress and decompress on the go.

Next we have the Quality option:

Img-51-1
You can combine the Compression Format option with the Quality option to decrease the size of the compressed audio file in exchange for sound quality. Anywhere between 100 and 70, the quality should stay roughly the same, but always preview your audio files when you edit them with this setting.
 
And lastly we have the three options for Sample Rate Setting:
 
Img-52-1

Preserve Sample Rate: This option will keep the original sample rate of the audio file.

Optimize Sample Rate: This option will determine and use the lowest sample rate without losing sound quality.

Override Sample Rate: With override you can manually set sample rates, but if you are not a professional audio manager or a DJ at least, I would not mess with this option.
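As with sprites, you can apply these audio import settings automatically with an AssetPostprocessor instead of clicking through every clip. A minimal sketch, where the chosen values are assumptions you should tune per project:

```csharp
// Editor/AudioImportSettings.cs -- must live in an "Editor" folder.
using UnityEditor;
using UnityEngine;

public class AudioImportSettings : AssetPostprocessor
{
    void OnPreprocessAudio()
    {
        AudioImporter importer = (AudioImporter)assetImporter;
        importer.forceToMono = true; // suitable for 3D sound effects

        // Example defaults: compressed Vorbis kept in memory,
        // quality at 70%, sample rate chosen automatically.
        AudioImporterSampleSettings settings = importer.defaultSampleSettings;
        settings.loadType = AudioClipLoadType.CompressedInMemory;
        settings.compressionFormat = AudioCompressionFormat.Vorbis;
        settings.quality = 0.7f;
        settings.sampleRateSetting = AudioSampleRateSetting.OptimizeSampleRate;
        importer.defaultSampleSettings = settings;
    }
}
```

Music tracks and UI blips usually want different settings, so in a real project you would branch on assetPath the same way the sprite example does.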

Optimizing The UI Canvas

We know that every UI element needs to be a child of a Canvas. The Canvas's primary task is to manage the meshes used to draw its child UI elements and to issue the draw calls necessary to render them.
 
Now, one important thing to understand is, whenever any change is made to the Canvas or any of its children, this is known as dirtying the Canvas, and whenever a Canvas gets dirty, it needs to regenerate meshes for all child UI elements before it can issue a draw call.
 
Now you can only imagine how unoptimized it is to put all your game UI in one single Canvas, especially in gameplay scenes where you have timers that update every second, which will force the Canvas to rebuild (regenerate) all of its elements before it can issue a draw call.
 
A solution for this is to have more than one Canvas. What I like to do is put UI text that display timers in one Canvas, that way when the text changes which happens often with timers, the Canvas will only have to rebuild a few UI elements because each Canvas is independent and only rebuilds the UI elements that are its children.
 
But be aware, don’t create too many Canvases as that can also lead to performance issues. Searching online I found some people saying that you should not have more than 4 Canvases on mobile devices, however from testing my own games where I have more than 4 Canvases I found out that my game was running above 60 FPS even on low end devices, so always consult the Profiler before you make a decision.
 
A good idea is to separate static and dynamic UI elements. Static UI elements are those that don’t move, like labels, logos and background images. Dynamic UI elements are buttons, texts with timers and so on.
 
When it comes to dynamic UI elements I would also separate the elements that change often from the ones that don’t, for example buttons only change when you hover over them or press them, whereas a timer text changes all the time.
 

Disable Raycast Target For Non-Interactive UI Elements

Another way to optimize UI elements is to disable Raycast Target option on every non-interactive UI element:

Img-53-1

If you have backgrounds, labels, icons, and other UI elements that are not supposed to interact with the user, i.e. react to the user's input, then you should disable the Raycast Target option on every single one of them.
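Rather than unchecking the box on dozens of elements by hand, you can sweep a whole UI hierarchy from code via the raycastTarget property on the Graphic base class (which Image and Text inherit from). A minimal sketch, with DisableRaycastTargets being a hypothetical name; it assumes none of the children need pointer input:

```csharp
using UnityEngine;
using UnityEngine.UI;

// Hypothetical helper attached to the root of a purely decorative UI branch.
public class DisableRaycastTargets : MonoBehaviour
{
    void Awake()
    {
        // Turn off raycast targeting for every Graphic (Image, Text, ...)
        // under this object so they stop intercepting pointer events.
        foreach (Graphic graphic in GetComponentsInChildren<Graphic>())
        {
            graphic.raycastTarget = false;
        }
    }
}
```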

Deactivate The Canvas Component Instead Of Deactivating The Game Object Holding The Canvas

When hiding and showing a Canvas, it is better to deactivate and activate the Canvas component itself instead of deactivating and activating the whole game object:

Img-54-1

The reason for this is that when a game object with a Canvas component is reactivated after being deactivated, the Canvas will rebuild all of its UI elements before issuing a draw call. But if you re-enable just the Canvas component after it was disabled, it will continue drawing the UI elements where it left off, without rebuilding them first.
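In code this means toggling Canvas.enabled instead of calling SetActive on the game object. A minimal sketch, with PauseMenu and pauseCanvas being hypothetical names:

```csharp
using UnityEngine;

// Hypothetical controller for showing and hiding a pause menu.
public class PauseMenu : MonoBehaviour
{
    [SerializeField] Canvas pauseCanvas; // assumed assigned in the Inspector

    public void SetVisible(bool visible)
    {
        // Toggle the Canvas component, not pauseCanvas.gameObject,
        // so the UI is not rebuilt every time it is shown.
        pauseCanvas.enabled = visible;
    }
}
```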

Don't Animate UI Elements With Unity's Animator

A very big NO NO when working with UI elements is animating them with Unity's Animator component. If you animate UI elements with Unity's animation system, then every time the Animator component changes the UI element, the Canvas will get dirty and it will need to rebuild all of its UI child objects.
 

Be Careful With UI Scroll View

One of the MOST painful UI elements is the Scroll View. You can't go without it in any mobile game, but it takes a lot of performance since the Canvas needs to update it constantly.
 
Every time you scroll up or down, the Canvas will have to rebuild everything and issue draw calls, and by everything I also mean every element of the Scroll View, so you can imagine how performance-heavy this UI element can be.
 

This is something that I struggled with a lot in my mobile games because I use the Scroll View in my level scenes to enable the user to scroll through the available levels.

Luckily, there is a really good way how we can fix this issue and that is by using a RectMask2D component:

Img-55-1
The RectMask2D component will hide the Scroll View's child elements that are not visible on the screen, so no draw calls will be issued for the child elements hidden by the RectMask2D component.
 
Draw calls are issued only when those child objects appear on the screen, and only then are they rendered.
 

Don't Hide UI Elements Using The Alpha Property Of Its Color

If you want to hide a UI element then don’t do it by setting the alpha property in its color to 0, because this will still issue a draw call for that UI element.

Instead, disable the UI component that you want to hide, be that Image, Text, Button and so on.
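A minimal sketch contrasting the two approaches, with HideUiElement and icon being hypothetical names:

```csharp
using UnityEngine;
using UnityEngine.UI;

// Hypothetical helper for hiding a single UI image.
public class HideUiElement : MonoBehaviour
{
    [SerializeField] Image icon; // assumed assigned in the Inspector

    public void Hide()
    {
        // Wrong: a fully transparent image still gets a draw call.
        // icon.color = new Color(1f, 1f, 1f, 0f);

        // Right: a disabled Graphic component is skipped entirely.
        icon.enabled = false;
    }
}
```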
