Unit testing complex objects using OzCode

A good unit test examines a specific scenario using the required minimal input and then verifies that the system has reached a specific state.

This could prove quite a challenge when the unit of work requires complex input, or if the resulting state is difficult to isolate from the rest of the system.

When faced with a complex problem domain, developers tend to oversimplify the scenario under test. The problem is that oversimplification drains unit tests of meaning while making them increasingly fragile, breaking with every trivial change. From there it’s a slippery slope until eventually the developer, frustrated with the state of these simple yet unmaintainable tests, stops writing unit tests altogether.

Another solution is to write acceptance/scenario/integration tests that exercise a big chunk of the system. While such tests are needed in every project, they should be used sparingly and only for the main requirements and execution paths. They usually take longer to run and are easily affected by external factors – a scenario test can fail even when the code works perfectly. Relying heavily on these kinds of tests leads to long-running test sessions, and when they fail it’s difficult to quickly find the root cause. Instead of quickly fixing the problem, the developer must fire up the debugger and hunt for the bug.

A developer trying to re-create an existing domain object may discover that many domain objects are not created by code but constructed from external services, runtime variables and/or data repositories, and re-creating such objects requires real effort. I’ve seen teams build serialization solutions in order to save and load the right instance for each test, and in fact unit testing efforts are often postponed (sometimes indefinitely) until such a tool is created.
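Such a save/load tool need not be elaborate. A minimal sketch, assuming Json.NET is available, might look like this (the `TestFixtures` class and method names are illustrative, not part of any library):

```csharp
using System.IO;
using Newtonsoft.Json;

// Hypothetical fixture helper: captures an object graph to disk so a
// unit test can reload the exact same instance later.
public static class TestFixtures
{
    static readonly JsonSerializerSettings Settings = new JsonSerializerSettings
    {
        // Store type names so derived types (e.g. different SceneObject
        // subclasses in a Things array) survive the round trip.
        TypeNameHandling = TypeNameHandling.Auto,
        Formatting = Formatting.Indented
    };

    public static void SaveFixture<T>(T instance, string path)
    {
        File.WriteAllText(path, JsonConvert.SerializeObject(instance, Settings));
    }

    public static T LoadFixture<T>(string path)
    {
        return JsonConvert.DeserializeObject<T>(File.ReadAllText(path), Settings);
    }
}
```

The point is simply to round-trip an instance captured at runtime; OzCode’s Export feature, shown below, removes even this small amount of plumbing.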

Handling complex input

For this article I’ve used a ray tracing application adapted from a blog post by Luke Hoban.

[Image: the ray-traced scene rendered by the application]

Writing unit tests for this application is hard, as many factors affect every single pixel’s color: camera angle, object placement, material types, direction and color of lights – you name it.

Faced with this amount of data, it’s hard not to be intimidated and simply render the whole scene. But running the entire application would take a long time, and verifying correctness would be harder still.

Another problem with testing the whole scene is knowing whether the test has passed. One option is to compare the result with a previously rendered image. The problem is that any single-pixel change would fail the test, and we wouldn’t be able to pinpoint why: “image A is not exactly the same as image B” is just not good enough, and it would take quite a lot of work to determine whether a bug actually exists and what caused it.

Ideally we’d want to be able to check several key pixels on that image to make sure that they were rendered successfully.

The logic I’m interested in resides in the TraceRay method which takes a Ray object and a Scene and calculates the color of a single pixel:


public PixelColor TraceRay(Ray ray, Scene scene, int depth)
{
	var intersections = Intersections(ray, scene);
	Intersection intersect = intersections.FirstOrDefault();
	if (intersect == null)
		return PixelColor.Background;

	return Shade(intersect, scene, depth);
}

The problem is that the input of the TraceRay method requires a whole Scene object to be initialized beforehand.

We could create a simple Scene without any interesting objects in it (and I might do so for trivial cases), but we want to make sure that the method works correctly even for a complicated scene.


[Test]
public void TraceRayTest()
{
	var rayTracer = new RayTracer.RayTracer(500, 500, (x, y, color) => {});

	PixelColor result = rayTracer.TraceRay(?, ?, 0);

	var expected = Color.FromArgb(0, 0, 0);

	Assert.That(result.ToDrawingColor(), Is.EqualTo(expected));
}

In the real world I might not even be able to create the Scene object by hand, since it is calculated from external inputs.

Lastly, even if creating the Scene were easy, getting all of the inputs right would require some trial and error, and pinpointing the specific place in the code would prove challenging.

How export saved my day

First, let’s test that a specific pixel gets a specific color. For this example I want to test the value of pixel (100, 230). It’s interesting because it is affected by the floor reflection, the sphere’s surface and the various lights.

[Image: the rendered scene with pixel (100, 230) marked]

Instead of creating the image or trying to extract the needed information from the actual scene I’ll use OzCode’s new Export feature.

The first order of business is to decide which scenario to test, and debug until we reach that specific scenario. Using conditional breakpoints or a simple temporary code change, I get the debugger to stop inside TraceRay for the desired pixel (100, 230).

Note: in this case, using conditional breakpoints to stop at the right location can take a while, since they use interrupts to break on every loop iteration – consider changing the code temporarily with a simple “if” statement.
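The temporary change might look something like this (a hypothetical sketch – the loop variables `x` and `y` are assumptions about the surrounding rendering loop, and the guard should be removed once the export is done):

```csharp
// Temporary debugging aid inside the per-pixel rendering loop:
// break into the debugger only for the pixel under test.
if (x == 100 && y == 230)
{
    System.Diagnostics.Debugger.Break();
}
```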

Once we have the debugger where we want it, all we need to do is open the watch window, select the instance we want, and choose Export.

[Screenshot: choosing Export from the watch window]

The Export window will appear:

[Screenshot: the Export window options]

From here we can set several parameters:

  • Output format – in this case C#
  • Depth – for the Ray we can use the default (3)

When satisfied with the results we can either Copy To Clipboard or Save To File to be used later.

The second instance needed for the test is the Scene, which in this case was created by an outside service that we do not want to run every time our unit test runs.

[Screenshot: exporting the Scene object]

Exporting the Scene object works the same way; the only difference is that the depth needs to be increased to 5 in order to capture all of the information.

[Screenshot: setting the export depth]

Now we can stop the debugger and fill our unit test with the two new objects.

After a few tweaks, as well as adding the expected result, we have the following unit test:


[Test]
public void TraceRay_PixelIsSphereAndReflection_PixelIsDarkBlue()
{
	var ray = new Ray
	{
		Start = new Vector { X = 3, Y = 2, Z = 4 },
		Direction = new Vector { X = -0.53063752537127, Y = -0.304891435482677, Z = -0.790863470668084 }
	};
	var scene = new Scene
	{
		Things = new SceneObject[]
		{
			new Plane{Norm = new Vector{X = 0,Y = 1,Z = 0}, Offset = 0, Surface = new CheckerBoard()},
			new Sphere{Center = new Vector{X = 0,Y = 1,Z = 0}, Radius = 1, Surface = new Shiny()}
		},
		Lights = new[]
		{
			new Light{Pos = new Vector{X = -2,Y = 2.5,Z = 0}, Color = new PixelColor{R = 0.49,G = 0.07,B = 0.07}},
			new Light{Pos = new Vector{X = 1.5,Y = 2.5,Z = 1.5}, Color = new PixelColor{R = 0.07,G = 0.07,B = 0.49}},
			new Light{Pos = new Vector{X = 1.5,Y = 2.5, Z = -1.5}, Color = new PixelColor{R = 0.07,G = 0.49,B = 0.071}},
			new Light{Pos = new Vector{X = 0,Y = 3.5,Z = 0}, Color = new PixelColor{R = 0.21,G = 0.21,B = 0.35}}
		},
		Camera = new Camera
		{
			Pos = new Vector { X = 3, Y = 2, Z = 4 },
			Forward = new Vector { X = -0.683486126173409, Y = -0.256307297315028, Z = -0.683486126173409 },
			Up = new Vector { X = -0.27185494199858, Y = 1.44989302399242, Z = -0.27185494199858 },
			Right = new Vector { X = -1.06066017177982, Y = 0, Z = 1.06066017177982 }
		}

	};

	var rayTracer = new RayTracer.RayTracer(500, 500, (x, y, color) => { });

	var result = rayTracer.TraceRay(ray, scene, 0);

	var expected = Color.FromArgb(22, 70, 111);

	Assert.That(result.ToDrawingColor(), Is.EqualTo(expected));
}

Not a pretty test, but at least it passes and tests exactly what we need. Now we can refactor the test to make it slightly more readable.

Refactoring the unit test using external files

Now that we have a passing test, one option for refactoring our unit test is to use builders and factories for creating the Scene and Ray objects.

Another option would be to store the inputs in external files which would be converted to instances during the test run.

Obtaining the external files is simple – all that is needed is to choose the JSON (compatible with Json.NET) or XML tab and export to a file.

For example, this is what the Ray instance from the previous test would look like:

{
	"$type": "RayTracer.Ray",
	"$tostring": "RayTracer.Ray",
	"Start": {
		"$type": "RayTracer.Vector",
		"$tostring": "RayTracer.Vector",
		"X": 3.0,
		"Y": 2.0,
		"Z": 4.0
	},
	"Direction": {
		"$type": "RayTracer.Vector",
		"$tostring": "RayTracer.Vector",
		"X": -0.53063752537126985,
		"Y": -0.30489143548267672,
		"Z": -0.79086347066808416
	}
}

Using the same method explained above, I’ve created two external JSON files. In the test I use Json.NET to read those files and deserialize them into objects before running the test. I’ve added the files as Embedded Resources to make sure they are always available to the test, and wrote a simple method to load and deserialize the JSON files.
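The helper itself is not shown here; a minimal sketch might look like the following, assuming the JSON files are embedded resources in the test assembly and that Json.NET’s `TypeNameHandling` is enabled so the `"$type"` fields in the exported JSON are honored (the suffix-based resource lookup is an assumption about how the files are named):

```csharp
using System.IO;
using System.Linq;
using System.Reflection;
using Newtonsoft.Json;

private static T DeserializeFromResource<T>(string fileName)
{
    var assembly = Assembly.GetExecutingAssembly();

    // Embedded resource names are namespace-qualified
    // (e.g. "MyTests.Fixtures.ray_100_230.json"), so match by suffix.
    var resourceName = assembly.GetManifestResourceNames()
        .Single(name => name.EndsWith(fileName));

    using (var stream = assembly.GetManifestResourceStream(resourceName))
    using (var reader = new StreamReader(stream))
    {
        return JsonConvert.DeserializeObject<T>(reader.ReadToEnd(),
            new JsonSerializerSettings { TypeNameHandling = TypeNameHandling.Auto });
    }
}
```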

Now the test is simple and readable:

[Test]
public void TraceRay_PixelIsSphereAndReflection_PixelIsDarkBlue_JSON()
{
	var ray = DeserializeFromResource<Ray>("ray_100_230.json");
	var scene = DeserializeFromResource<Scene>("scene_100_230.json");

	var rayTracer = new RayTracer.RayTracer(500, 500, (x, y, color) => { });
	var result = rayTracer.TraceRay(ray, scene, 0);

	var expected = Color.FromArgb(22, 70, 111);

	Assert.That(result.ToDrawingColor(), Is.EqualTo(expected));
}

Conclusion

Exporting instances can be useful in many scenarios. In this article we saw how to use OzCode to export an instance for later use – in this case, in unit tests.

Using debug information to create unit tests is especially useful when refactoring legacy code: a developer can write automated tests for code they know little about, just by debugging and exporting the objects required to run a specific scenario. These tests enable safe refactoring and reconstruction of the code, without fear and without the lengthy analysis usually required to understand the code well enough to craft intelligent inputs for automated tests.