
Replicating the PS1 look (Unfinished)

I've spent way too long messing with my PSX shader, and for your convenience, I've decided to compile what I have learned below. This is by no means a comprehensive list. I'm not a shader guy. But hopefully, this can be useful to someone, or at least tell you what terms to google to find more.

(This page is unfinished and I'll maybe expand the sections at some point with proper image and code examples)

I might include some non-strictly PS1 adherent tips, in which case they will show up as follows:

Tip: At the end of the day, the reason to go for a PS1 look is in large part aesthetics, so don't be afraid to compromise accuracy if doing so gets you closer to the look you're after!

Also, this entire tutorial is centred around a shader I wrote for Unity, so there may be some points where you'll have to look for an equivalent function when it comes to Unity-specific stuff, but the overall content should remain the same.

Texture Color Indexing & Dithering

Color indexing basically means that the image has a limited table of colors it refers to. There are a lot of filters you can use to reduce the number of colors in your texture, some of which support dithering as well. This is cool and good to do, but I don't bother with it per texture; I just slap it on as a post effect after everything else is done. Just be aware that it is technically a part of the PS1 look!

TODO: Section

Dithering lets us pretend we have more colors than we actually do by alternating between the colors we do have in a fixed pattern. This could really use an image example.
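For reference, here's roughly what both steps can look like in the fragment stage: a minimal sketch that posterizes each channel and applies a 4x4 Bayer ordered dither. _ColorDepth, bayer and Dither are hypothetical names of my own, not part of any PS1 spec or Unity API.


      float _ColorDepth; // Hypothetical property: levels per channel, e.g. 32 for a 15-bit look
      
      // Standard 4x4 Bayer matrix, values 0-15
      static const float bayer[16] =
      {
           0,  8,  2, 10,
          12,  4, 14,  6,
           3, 11,  1,  9,
          15,  7, 13,  5
      };
      
      float3 Dither(float3 col, float2 screenPos)
      {
          int2 p = int2(screenPos) % 4;                            // Position within the repeating 4x4 pattern
          float threshold = (bayer[p.y * 4 + p.x] + 0.5) / 16.0;   // Normalize into the 0,1 range
          col += (threshold - 0.5) / _ColorDepth;                  // Nudge by up to one quantization step
          return floor(col * _ColorDepth + 0.5) / _ColorDepth;     // Posterize to _ColorDepth levels
      }
      

In the fragment stage, the xy of the SV_POSITION input gives you the pixel coordinate to pass in as screenPos.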

Vertex Snapping

Vertex snapping is responsible in large part for that signature wobbliness when moving the camera around. Due to the PS1's lower precision, vertices on the screen are snapped to a noticeable grid, the resolution of which I believe is equal to the screen resolution. To do this, we simply take the vertex clip space position, divide it by vertex.w to normalize it, multiply it by the screen resolution, round it to the nearest integer, then divide it by the resolution again and multiply by vertex.w.


      
      // VertPrec is equal to half of the desired resolution, as vert.xy / vert.w goes from -1 to 1.
      static float4 VertexSnap(float4 clipPos, int2 VertPrec)
      {
          float4 vert = clipPos;
      
          // First approach: snap in normalized, depth-independent coordinates
          vert.xy /= vert.w;                                  // Remove depth (xyz are scaled by w (depth) by default) to normalize into the -1,1 range
          vert.xy += 1;                                       // To scale from the corner rather than from the center, translate into the 0,2 range
          vert.xy = round(vert.xy * VertPrec) / VertPrec;     // Scale up by resolution, round to nearest integer, scale back down
          vert.xy -= 1;                                       // Translate back into -1,1
          vert.xy *= vert.w;                                  // Reintroduce depth
          
          // Also messed about with scaling the grid into the scene instead, should be the same result
          /*
          // Second approach: shrink the grid with depth, equivalent to the above
          float2 prec = VertPrec / clipPos.w;                 // Scale resolution grid with depth
          vert.xy = round(vert.xy * prec) / prec;             // Round xy coordinates to grid, both with depth in this one
          */
          return vert;
      }
      
      v2f vert (appdata v)
      {
          ...
          o.vertex = VertexSnap(UnityObjectToClipPos(v.vertex), _VertPrec);
          ...
      }
      

Note that this is depth independent (that's what the dividing and multiplying by vertex.w is for; we temporarily bring everything to the same depth). So this will snap to a screen-space grid at the resolution set by the int2 VertPrec.

If you want the effect to be more noticeable, you can decrease VertPrec to snap to a coarser grid.

Tip: Another thing you can do is skip the normalization step (multiplying and dividing by vertex.w) entirely. This is technically not accurate, but results in objects closer to the camera snapping in larger increments (you can imagine our grid being projected into the scene).
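For reference, a minimal sketch of that variant; VertexSnapProjected is just my own name for it, mirroring the VertexSnap function above.


      // Snap the clip-space xy directly, while it is still scaled by w.
      // After the perspective divide, the grid spacing becomes 1 / (VertPrec * w),
      // so geometry close to the camera (small w) snaps in larger steps on screen.
      static float4 VertexSnapProjected(float4 clipPos, int2 VertPrec)
      {
          float4 vert = clipPos;
          vert.xy = round(vert.xy * VertPrec) / VertPrec;
          return vert;
      }
      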

Affine UV Mapping

Affine UV mapping is the fancy way of saying that the PS1 did not know how to do perspective for its triangles. Looking at a face from anything but straight on results in the characteristic texture warping.

TODO: Add image example.

This is actually really simple to accomplish. By adding the "noperspective" keyword to the v2f uv declaration, we let the pipeline know not to mess with it between the vertex stage and the pixel stage, since these days the interpolation is perspective-corrected by default.


      struct v2f
      {
        ...
        noperspective float2 uv : TEXCOORD0;
      };
      

Tip: However, while this is accurate, it might not be ideal. In cases where one of the vertices is behind the camera, the UV might flip in strange ways, which can be problematic if you're using, for example, tilemap textures.

To get around this, we can fake the noperspective behaviour. To do this, in the vertex shader, note down the value of max(vertex.w, n) (where n caps how extreme the stretching gets up close; I have this value at 0.25f) and multiply the UV by it. Then pass the UV as normal along with this value to the fragment stage, and when sampling the texture, divide the UV by the value again. You now have all the stretching, but with a bit more control.


      struct v2f
      {
        ...
        float affine : TEXCOORD1;
      };
      
      v2f vert (appdata v)
      {
        ...
        // Presumes o.vertex has been set prior to this, snapped and all.
        o.affine = max(o.vertex.w, minStretch);
        o.texCoords = v.texcoord * o.affine;
        ...
      }
      
      
      fixed4 frag(v2f i) : SV_Target
      {
        fixed4 col = tex2D(_MainTex, i.texCoords / i.affine);
        ...
      }
      

Fun fact: There are lots of fun alternate ways people accomplish this effect, each with its own little quirks. You can usually spot these by just remembering that the stretch is all about perspective, not position or the like.

Lighting & The Vertex Color Channel

Vertex-lit lighting is your friend here, so whenever possible, try to do your lighting in the vertex shader. But if you really want to stick close to the hardware, remember that PS1 games did not have much dynamic lighting. Instead, you can try going for an entirely unlit approach, where you "fake" lighting and bake it into the textures or the model vertices' color channel!

The modelling program of your choice should allow you to paint colors onto each vertex. Then you just read that color in the vertex shader, pass it to the fragment stage and multiply the overall color by it. You should then be able to just paint your lighting on.


      struct appdata
      {
          ...
          float4 color : COLOR; // Retrieve the color
      };
      
      struct v2f
      {
          ...
          float4 color : COLOR; // Include color as part of the struct passed between the vertex and fragment stage
      };
      
      v2f vert (appdata v)
      {
          ...
          // Sample the color in the vertex stage. You might need to gamma correct its rgb values if they don't match your editing program.
          o.color.xyz = GammaToLinearSpace(v.color.rgb); // or just v.color.rgb
          o.color.a = v.color.a;
          ...
      }
      
      fixed4 frag (v2f i) : SV_Target
      {
          ...
          col *= i.color;
          ...
      }
      

Tip: While you're at it, you can also get the distance from each vertex to the camera here to do radial fog. A lot of the time, fog is simply based on Z-depth, which makes it essentially a flat plane that increases in strength further into the scene; if you turn the camera, you can often see stuff at the edges of the screen popping out of the fog. If you use distance instead of depth, you avoid this.


      struct v2f
      {
          ...
          float dist : TEXCOORD2;
      };
      
      v2f vert (appdata v)
      {
          ...
          o.dist = length(WorldSpaceViewDir(v.vertex));
          ...
      }
      
      // This is Unity specific, sorry
      static float4 ApplyFog(float4 color, float distance)
      {
          UNITY_CALC_FOG_FACTOR_RAW(distance);
          color.rgb = lerp(unity_FogColor.rgb, color.rgb, saturate(unityFogFactor));
          return color;
      }
      
      fixed4 frag (v2f i) : SV_Target
      {
          ...
          #if defined(FOG_LINEAR) || defined(FOG_EXP) || defined(FOG_EXP2)
          col = ApplyFog(col, i.dist);
          #endif
          ...
      }
      

Tessellation

One often understated part of the look is the tessellation found in some PS1 games. This is what's responsible for the recognizable "pop" when you get close to objects: to reduce the aforementioned texture warping, games would split large faces into smaller ones.

A good starting point for implementing tessellation in your shader is this amazing guide by CatLikeCoding on tessellation (as well as all their other guides, honestly). The only downside is that it results in a lot of long, sharp triangles, which makes the texture warping more extreme, not less (almost as if modern tessellation wasn't made with PSX shaders in mind).

As far as I could find, while you can change the tessellation factors of each edge, there is no way to control exactly how a face is subdivided. One thing I haven't tried, though, is messing about with the quad domain; I don't deal much in quads anyhow, but you might be able to get something out of that.
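If you do go the hull/domain route, the distance-based "pop" itself is simple enough to sketch. Inside the patch constant function (the one that returns the tessellation factors, as in the CatLikeCoding guide), you can derive each edge's factor from its distance to the camera; _TessNear, _TessFar and _MaxTess below are hypothetical properties of my own.


      float _TessNear; // Distance at which tessellation maxes out
      float _TessFar;  // Distance beyond which faces stay un-subdivided
      float _MaxTess;  // Maximum subdivision factor
      
      // Tessellate more as an edge approaches the camera. Combined with
      // [partitioning("integer")] in the hull shader, the factor changes in
      // whole steps, which is exactly the recognizable "pop".
      float EdgeTessFactor(float3 worldPosA, float3 worldPosB)
      {
          float d = distance(0.5 * (worldPosA + worldPosB), _WorldSpaceCameraPos);
          float t = saturate((d - _TessNear) / (_TessFar - _TessNear));
          return lerp(_MaxTess, 1.0, t);
      }
      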

TODO: Section

Lack of a depth buffer

I've literally never seen anyone bother replicating this, but I can look into it if you really want to. (For context: the PS1 had no Z-buffer. Polygons were sorted into an ordering table and drawn back to front, which is where the characteristic sorting glitches come from.)