<aside> 💡 What went into this game? Catch the prototypes and test runs here, and learn how the technology works!

</aside>

Concept Art

first ideas for collectible companions!

our initial inspiration was a combined vaporwave + nature aesthetic

creatures drawn in a “papercut” style that ended up being our final art direction!

Prototype Video

https://youtu.be/qiEUioYR_cQ

Technology Deep Dive

The core technology consists of a Unity game, a local Python server that handles computer vision, and a local JavaScript server that handles communication with the printer.
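
As a rough sketch, the glue on the Python side might be wired up like this. Flask, the /scan route, and the port are assumptions for illustration; the /markers endpoint matches what Unity calls in the snippet further down.

import cv2
import flask
import numpy as np

app = flask.Flask(__name__)
latest_markers = []  # results of the most recent scan (see Reading QR Codes)

@app.route("/scan", methods=["POST"])
def scan():
	# raw scan bytes forwarded from the JavaScript scanner server
	image_array = np.frombuffer(flask.request.data, dtype=np.uint8)
	result = process(image_array)  # the OpenCV pipeline from the Scanning section
	ok, png = cv2.imencode(".png", result)  # Unity expects a PNG back
	return flask.Response(png.tobytes(), mimetype="image/png")

@app.route("/markers")
def markers():
	# Unity polls this endpoint after a scan (see HandleSpecialItems below)
	return flask.jsonify({"markers": latest_markers})

if __name__ == "__main__":
	app.run(port=5000)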

Scanning

We found scanx (https://github.com/balbarak/scanx), a JavaScript library that executes scan commands and receives scan data through a USB cable. Once the scan data is received, it is sent to the Python server, which removes the background and cleans up any noise with OpenCV, and Unity receives the result as a PNG image. Fortunately, Unity can auto-generate a polygon collider from a sprite's outline, so we didn't have to write our own.

import cv2
import numpy as np

# TARGET_WIDTH and TARGET_HEIGHT are defined elsewhere to match the size
# (in pixels) at which the image is displayed in Unity
def process(image_array):
	# crop off the scanner's border, rotate and flip into the right
	# orientation, and resize to the display resolution
	img = cv2.imdecode(image_array, cv2.IMREAD_COLOR)
	border = 25
	img = img[border:-border, border:-border]
	img = cv2.rotate(img, cv2.ROTATE_90_CLOCKWISE)
	img = cv2.flip(img, 1)
	img = cv2.resize(img, (TARGET_WIDTH, TARGET_HEIGHT), interpolation=cv2.INTER_CUBIC)

	# create a mask that selects near-black pixels (the scan background)
	thresh = 30
	lower = np.array([0, 0, 0])
	upper = np.array([thresh, thresh, thresh])
	black_mask = cv2.inRange(img, lower, upper)

	# clean up noise: opening removes stray specks, closing fills small holes
	morph_amount = 10
	kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (morph_amount, morph_amount))
	morph = cv2.morphologyEx(black_mask, cv2.MORPH_OPEN, kernel)
	morph = cv2.morphologyEx(morph, cv2.MORPH_CLOSE, kernel)
	mask = 255 - morph  # invert: keep everything that isn't background

	# add an alpha channel (imdecode yields BGR) and zero out the
	# background pixels, making them fully transparent
	result = cv2.cvtColor(img, cv2.COLOR_BGR2BGRA)
	result = cv2.bitwise_and(result, result, mask=mask)

	return result
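
To try the pipeline without a scanner in the loop, the function can be run on a saved scan; a quick sketch (the file names are just for illustration):

import cv2
import numpy as np

# offline test: run the pipeline on a saved scan and write the
# transparent cutout to disk
with open("test_scan.jpg", "rb") as f:
	image_array = np.frombuffer(f.read(), dtype=np.uint8)
cutout = process(image_array)
cv2.imwrite("cutout.png", cutout)  # PNG keeps the alpha channel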

Reading QR Codes

We used ArUco, a fiducial-marker library available through OpenCV, to generate and read the markers that look like QR codes. The library provides functions to get the position and rotation of each marker relative to the camera, in screen space. Each marker encodes an ID, which we map to a special item. Upon scanning, Unity calls the Python server for the list of markers it detected. Converting the screen-space values to world space in Unity is a bit of a hack: it relies on the image being displayed at the same width and height, in pixels, as it was scanned.
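
Finding the markers on the Python side comes down to a few cv2.aruco calls. Here's a minimal sketch using the ArucoDetector API from OpenCV 4.7+; the marker dictionary and the exact corner-picking convention are assumptions for illustration:

import cv2

def detect_markers(img):
	# detect ArUco markers in the scanned image
	dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
	detector = cv2.aruco.ArucoDetector(dictionary, cv2.aruco.DetectorParameters())
	corners, ids, _rejected = detector.detectMarkers(img)

	markers = []
	if ids is not None:
		for marker_corners, marker_id in zip(corners, ids.flatten()):
			# each marker comes back as 4 corner points in pixel coordinates;
			# two corners along one edge are enough for Unity to recover both
			# position and rotation
			pts = marker_corners.reshape(4, 2)
			top, bottom = pts[0], pts[3]  # top-left and bottom-left
			markers.append({
				"id": str(marker_id),
				"bottomCornerX": int(bottom[0]), "bottomCornerY": int(bottom[1]),
				"topCornerX": int(top[0]), "topCornerY": int(top[1]),
			})
	return markers

On the Unity side, each detected marker is converted to world space and the matching special item is spawned: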

Vector3 ScreenToWorld(Camera cam, int screenX, int screenY)
{
  // hardcoded for our setup: offset X to where the scanned image sits
  // on screen, and flip Y (scan pixels are y-down, screen space is y-up
  // on a 1080 px tall display)
  var vec3 = new Vector3(216.5f + screenX, 1080 - screenY, -cam.transform.position.z);
  return cam.ScreenToWorldPoint(vec3);
}

private IEnumerator HandleSpecialItems()
{
  // ask the Python server which markers it detected in the last scan
  using (UnityWebRequest webRequest = UnityWebRequest.Get(SERVER_URL + "/markers"))
  {
    yield return webRequest.SendWebRequest();
    print(webRequest.result);
    var result = JsonUtility.FromJson<MarkersData>(webRequest.downloadHandler.text);

    // spawn the special item mapped to each detected marker ID
    foreach (var m in result.markers)
    {
      var markerId = int.Parse(m.id);
      if (markerId >= 0 && markerId < specialItemsList.items.Length)
      {
        var prefab = specialItemsList.items[markerId];
        var bottomCorner = ScreenToWorld(mainCamera, m.bottomCornerX, m.bottomCornerY);
        var topCorner = ScreenToWorld(mainCamera, m.topCornerX, m.topCornerY);

        // orient the item like the marker: its up vector points from
        // the bottom corner toward the top corner
        Instantiate(prefab, bottomCorner, Quaternion.LookRotation(Vector3.forward, topCorner - bottomCorner));
      }
    }
  }
}