When I was out grocery shopping, a rather handsome guy caught my eye: he had a cute tattoo of old-school red and cyan 3D glasses. Long before 3D TVs, VR, and 3D movies with high-tech glasses, these glasses were a classic way to produce a 3D effect, known as the anaglyph effect. I was inspired to try to recreate it using Go and WebGL shaders.

What is an Anaglyph?

An anaglyph is a type of stereoscopic 3D image created from two differently colored, filtered images, usually red and cyan, that are shifted horizontally from each other by just the right amount. When the image is viewed through correspondingly colored glasses, each eye sees only one of the two images, and this is what creates the 3D effect.

A Quick History Lesson

The concept of stereoscopic vision has been around since the 1830s when Charles Wheatstone introduced the stereoscope. However, the anaglyph technique, as we know it, only started taking shape in the 1850s.

In the 20th century, with the rise of cinema, anaglyph 3D movies became a unique attraction. The 1950s, in particular, was a decade known for its 3D movie craze, with many movies being shot in 3D and screened using anaglyph glasses. The 3D movie craze died down in the 1960s, but anaglyphs continued to be used in comic books and other media.

Modern 3D cinema no longer uses anaglyphs, having shifted to polarized or active-shutter glasses, which offer better color accuracy, a stronger depth effect, and clearer images. However, anaglyphs hold a nostalgic charm for me; they are an essential part of the history of 3D visualization.

3D Perception

Our eyes are spaced apart, so they naturally see the world from slightly different perspectives. This difference in viewpoint is what lets the brain perceive depth. Anaglyphs exploit this principle by presenting a slightly different image to each eye.

The most common type of anaglyph glasses has one red lens and one cyan lens. The red lens transmits only red light, so the eye behind it sees only the red-tinted image. Conversely, the cyan lens transmits only green and blue light (which combine to produce cyan), so the other eye sees only the cyan-tinted image.

When each eye sees only one of the images, and those images are slightly offset from one another (representing the slightly different viewpoints of our two eyes), the brain fuses them into a single image with depth! It's pretty darn cool.

The Glasses And Tuning

I have a couple of pairs of 3D glasses lying around, so I decided to use them for the anaglyph experiment.

3D Glasses

Before proceeding, I had to figure out what colors and offset to use for the anaglyph effect on images – as you can see, the glasses are different. To get the best result, I experimented with different colors and offsets for each pair.

When a color is viewed through a lens of the same color, it generally appears as a lighter shade of that color, sometimes even appearing white or nearly white. This is because the lens filters out all the colors in the spectrum except for the color of the lens itself. As a result, only light of that specific color wavelength passes through, making the object appear in that color or a lighter shade of it. In some cases, it might appear as if the object has “blended” into the background if the background is also of the same color, thereby making it harder to distinguish the object.

For example, if you’re looking at a red object through a red lens, the lens allows only the red component of the light to pass through. Therefore, the red object would appear very light red or nearly white, especially if the background is also red.
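This filtering behavior is easy to model in code. The sketch below is my own illustration, not part of the original tool: it treats a lens as a per-channel transmittance and computes how bright a color looks through it, so a red object is fully transmitted by a red lens (washed out, nearly white) while a cyan object is blocked entirely (dark).

```go
package main

import "fmt"

// perceived models viewing an RGB color through a colored lens.
// The lens is a per-channel transmittance (1 = passes, 0 = blocks);
// the result is the transmitted brightness relative to the most the
// lens could pass, so 1.0 reads as "washed out / nearly white" and
// 0.0 as "dark".
func perceived(color, lens [3]float64) float64 {
	var transmitted, capacity float64
	for i := 0; i < 3; i++ {
		transmitted += color[i] * lens[i]
		capacity += lens[i]
	}
	return transmitted / capacity
}

func main() {
	redLens := [3]float64{1, 0, 0}  // passes only red
	cyanLens := [3]float64{0, 1, 1} // passes only green and blue

	red := [3]float64{1, 0, 0}
	cyan := [3]float64{0, 1, 1}

	fmt.Println(perceived(red, redLens))   // 1: red washes out through a red lens
	fmt.Println(perceived(cyan, redLens))  // 0: cyan appears dark through a red lens
	fmt.Println(perceived(red, cyanLens))  // 0: red appears dark through a cyan lens
	fmt.Println(perceived(cyan, cyanLens)) // 1: cyan washes out through a cyan lens
}
```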

We can use this effect to find the ideal colors for the left and right offsets to match the tint of the glasses. By creating two filled circles, one red and one cyan, we can adjust the tone of each until it appears as the brightest possible shade through the matching lens. This ensures each color lines up with the tint its lens transmits, so the glasses can cleanly separate the two images and create the anaglyph effect.

I wrote the following HTML page as a tool to help me with this process:

It allows you to experiment with different colors and offsets to find the best combination for your glasses.

An Important Note On Base Image Colors

The color combination used for the anaglyph effect depends on the colors in the base image too! The goal is to use colors that are not strongly present in the base image, so the glasses can separate them cleanly. For example, if the base image contains a lot of red, red-cyan glasses would be a poor choice because the image's own red would interfere with the red channel used for the effect! So you need to be creative when creating the base image too! I decided to go with black-and-white images for this experiment, which is also what would have been most common in the 1950s.

Make Anaglyphs In Go

In the Go code provided below, an anaglyph image is generated by manipulating and combining two slightly offset versions of the original image to simulate the perspectives of the left and right eyes. The steps are as follows:

  1. Reading the Image: The code begins by loading an image file into memory using Go’s standard image libraries.

  2. Creating the Anaglyph Image Canvas: A new empty image with the same dimensions as the original is created to serve as the canvas for the anaglyph effect.

  3. Manipulating Pixels: For every pixel in the original image, the code computes the corresponding positions in the left and right eye perspectives by shifting the pixel’s x-coordinate by a specified offset. The color values for these shifted positions are then retrieved.

  4. Blending Channels: To generate the anaglyph effect, the red channel from the left-eye perspective (left offset) is combined with the green and blue channels from the right-eye perspective (right offset). This emulates the traditional red-cyan anaglyph method.

  5. Outputting the Result: Once all pixels have been processed, the resulting anaglyph image is saved to disk.

Understanding the Go implementation offers valuable intuition when transitioning to the more abstract realm of shaders. The detailed, step-by-step nature of the Go code makes it a helpful tool for visualizing and understanding the underlying mechanics of the anaglyph effect, setting the foundation for the more streamlined shader-based approach in the next step!

go run anaglyph.go -input=test_image_4.png -output=output.png -leftColor=ff0a0a -rightColor=00ffff -offset=8

This produces the following effect! If you have 3D glasses, you can try viewing the image with them. If you hover your mouse over the 3D image, the effect is even more pronounced – it looks like a portal into another world 😵‍💫.

No 3D

Yes 3D

Next, Let's Make Anaglyphs In WebGL Shaders

Creating an anaglyph effect with a fragment shader in Three.js typically involves post-processing: the scene is rendered in a first pass, and the result is then passed through a simple custom fragment shader that applies the anaglyph effect. This is done using the EffectComposer and ShaderPass classes from the Three.js post-processing module.

If you understood the Go implementation we worked through, the shader implementation should be fairly straightforward. I have commented it heavily, which I hope helps, since shader code can be tricky to read.

// Declare uniform variables that will be accessible from JavaScript.
// sampler2D is a texture that we'll pass from the Three.js renderer.
// Separation, red, and cyan are floats that control the anaglyph effect.
uniform sampler2D tDiffuse;
uniform float separation;
uniform float red;
uniform float cyan;

// Declare varying variables that will receive interpolated values
// from the vertex shader. In this case, we'll get texture coordinates.
varying vec2 vUv;

// main() function is the entry point of the fragment shader.
void main() {
  // Sample the color for the left eye.
  // The texture coordinate is shifted horizontally by +separation.
  vec4 colorL = texture2D(tDiffuse, vUv + vec2(separation, 0.0));

  // Sample the color for the right eye.
  // The texture coordinate is shifted horizontally by -separation.
  vec4 colorR = texture2D(tDiffuse, vUv - vec2(separation, 0.0));

  // Combine the colors into an anaglyph effect.
  // The red component comes from the left-eye view.
  // The green and blue components come from the right-eye view.
  // We also apply the user-specified tint values for red and cyan.
  gl_FragColor = vec4(colorL.r * red, colorR.g * cyan, colorR.b * cyan, 1.0);
}

Now that we have the shader code, we need to set up the Three.js scene and renderer to use it. The following code snippet shows how to do this:

// Import required Three.js modules
import * as THREE from "three";
import { EffectComposer } from "three/examples/jsm/postprocessing/EffectComposer.js";
import { RenderPass } from "three/examples/jsm/postprocessing/RenderPass.js";
import { ShaderPass } from "three/examples/jsm/postprocessing/ShaderPass.js";
import * as dat from "dat.gui";

// Declare global variables for scene elements and effects
var camera, scene, renderer, composer;
var mesh;
var customPass;

// GUI parameters for shader
var params = {
  separation: 0.004,
  red: 1.0,
  cyan: 1.0,
};

// Declare a group object to hold multiple cubes
var cubeGroup;

// Initialize Three.js scene
function init() {
  // Create a WebGL renderer
  renderer = new THREE.WebGLRenderer();
  renderer.setSize(window.innerWidth, window.innerHeight);
  document.body.appendChild(renderer.domElement);

  // Create a perspective camera
  camera = new THREE.PerspectiveCamera(
    70,
    window.innerWidth / window.innerHeight,
    1,
    1000
  );
  camera.position.z = 600;

  // Create a scene
  scene = new THREE.Scene();

  // Add ambient light for consistent baseline lighting
  var ambientLight = new THREE.AmbientLight(0x404040, 40.0);
  scene.add(ambientLight);

  // Add point lights for dynamic shading
  const numberOfLights = 4;
  for (let i = 0; i < numberOfLights; i++) {
    let pointLight = new THREE.PointLight(0xffffff, 10000.0, 0);
    pointLight.position.set(
      Math.random() * 300 - 150,
      Math.random() * 300 - 150,
      Math.random() * 300 - 150
    );
    scene.add(pointLight);
  }

  // Initialize cube group as a THREE.Group object
  cubeGroup = new THREE.Group();

  // Loop to create a 3x3x3 cube of smaller cubes
  for (let x = -1; x <= 1; x++) {
    for (let y = -1; y <= 1; y++) {
      for (let z = -1; z <= 1; z++) {
        // Create geometry and material for each cube
        var geometry = new THREE.BoxGeometry(50, 50, 50);
        var material = new THREE.MeshPhongMaterial({
          color: 0xffffff,
          flatShading: true,
          shininess: 0,
        });

        // Create the cube mesh and position it
        var smallCube = new THREE.Mesh(geometry, material);
        smallCube.position.set(x * 100, y * 100, z * 100); // 100 units apart

        // Add each small cube to the cube group
        cubeGroup.add(smallCube);
      }
    }
  }

  // Add the cube group to the scene
  scene.add(cubeGroup);

  // Initialize post-processing composer
  composer = new EffectComposer(renderer);
  composer.addPass(new RenderPass(scene, camera));

  // Define custom shader for anaglyph effect
  var customShader = {
    uniforms: {
      tDiffuse: { value: null },
      separation: { value: params.separation },
      red: { value: params.red },
      cyan: { value: params.cyan },
    },
    vertexShader: `
      // Declare a varying variable vUv. Varying variables are used to pass
      // data from the vertex shader to the fragment shader.
      // 'vec2' means it's a 2D vector, which is perfect for 2D texture coordinates.
      varying vec2 vUv;

      // main() function is the entry point of the vertex shader.
      void main() {

        // Assign the built-in 'uv' attribute to our varying variable 'vUv'.
        // 'uv' holds the 2D texture coordinates provided by the geometry.
        // These coordinates will be interpolated for each fragment and
        // accessible in the fragment shader.
        vUv = uv;

        // Calculate the clip-space position of the current vertex.
        // 'projectionMatrix' is the camera's projection matrix,
        // 'modelViewMatrix' is the model-view matrix containing the
        // model's position, rotation and scale combined with the camera's position and orientation.
        // 'position' is the local position of the vertex.
        // Multiply these together to get the clip-space position.
        // We also set the w component to 1.0, as it's a homogeneous coordinate.
        gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
      }
    `,
    fragmentShader: `
      // Declare uniform variables that will be accessible from JavaScript.
      // sampler2D is a texture that we'll pass from the Three.js renderer.
      // Separation, red, and cyan are floats that control the anaglyph effect.
      uniform sampler2D tDiffuse;
      uniform float separation;
      uniform float red;
      uniform float cyan;

      // Declare varying variables that will receive interpolated values
      // from the vertex shader. In this case, we'll get texture coordinates.
      varying vec2 vUv;

      // main() function is the entry point of the fragment shader.
      void main() {
        // Sample the original color from the texture at the current
        // texture coordinate (vUv).
        vec4 color = texture2D(tDiffuse, vUv);

        // Sample the color for the left eye.
        // The texture coordinate is shifted horizontally by negative "separation" units.
        vec4 colorL = texture2D(tDiffuse, vUv + vec2(-(separation * -1.0), 0.0));

        // Sample the color for the right eye.
        // The texture coordinate is shifted horizontally by positive "separation" units.
        vec4 colorR = texture2D(tDiffuse, vUv + vec2(separation * -1.0, 0.0));

        // Combine the colors into an anaglyph effect.
        // The red component comes from the left-eye view.
        // The green and blue components come from the right-eye view.
        // We also apply the user-specified tint values for red and cyan.
        gl_FragColor = vec4(colorL.r * red, colorR.g * cyan, colorR.b * cyan, 1.0);
      }
    `,
  };

  // Add custom shader pass to composer
  customPass = new ShaderPass(customShader);
  composer.addPass(customPass);

  // Initialize GUI for user controls
  const gui = new dat.GUI();
  gui.add(params, "separation", 0.0, 0.2).onChange((value) => {
    customPass.uniforms["separation"].value = value;
  });
  gui.add(params, "red", 0.0, 1.0).onChange((value) => {
    customPass.uniforms["red"].value = value;
  });
  gui.add(params, "cyan", 0.0, 1.0).onChange((value) => {
    customPass.uniforms["cyan"].value = value;
  });
}

// Main animation loop
function animate() {
  requestAnimationFrame(animate);

  // Rotate the whole cube group
  cubeGroup.rotation.x += 0.005;
  cubeGroup.rotation.y += 0.005;

  // Rotate each individual cube within the group
  cubeGroup.children.forEach((cube) => {
    cube.rotation.x += 0.01;
    cube.rotation.y += 0.01;
  });

  composer.render();
}

// Execute the initialization function
init();
// Start the animation loop
animate();

and the result is:

3D Glasses

Wrapping Up

I hope you made it this far and had fun with me! I have put all of the JavaScript code into a running example that you can experiment with here:

Sending good vibes <3