Games: How Do People Make Games?

The electronic gaming industry is huge. It’s been pulling in more money than movies for some time now, and as technologies like ever-increasing processing power and VR become more affordable, it’s only a matter of time before games become the primary medium of electronic entertainment.

Once technology opens up AR (augmented reality) interfaces, game design will correspond directly to real-world interface design. Further, the technology that feeds into games (e.g., graphics software) often feeds into other, more practical outputs, such as 3D printing and general-purpose AI.


History

The first electronic game, Tennis for Two, was created in 1958 as a demo by a physicist named William Higinbotham.

However, electronic games didn’t reach commercial popularity until 1972, when Atari developed Pong. At the time, it was engineered from the ground up as dedicated electrical hardware: hardwired logic chips on a circuit board, with no microprocessor and no software.

Technically, one of the first computer games was a text-based experience called The Oregon Trail, created in 1971, though it wasn’t distributed commercially until 1975. For a long while, until the 1990s, there was a distinct divide between the domain of video games and the domain of computer games:

  • Video games were more interactive and had a lightweight learning curve (e.g., Combat, Space Invaders, Adventure).
  • Computer games were more challenging to master and required more reading (e.g., Zork).

The 1970s produced some pioneering ideas in gaming, mostly dominated by Atari. However, processor limitations left the UX somewhat lacking, and the most timeless experiences from that era came from vector graphics (e.g., Asteroids, Battlezone, Lunar Lander).

In the early 1980s, graphics improved heavily, meaning that the visual elements started looking a bit more like what they were supposed to represent:

  • Vector graphics were phasing out, but a few games carried the idea farther in arcades (e.g., Tempest, Star Wars).
  • Many legendary games arose during this time: Pac-Man, Centipede, Defender, Donkey Kong, Frogger, Galaga, Joust, Ms. Pac-Man, Pitfall!, Robotron: 2084, and Lode Runner.
  • Text-based games were in the background on personal computers (e.g., Wizardry: Proving Grounds of the Mad Overlord).
  • Some PC games started exploring more strategic games with the improvements in graphics (e.g., Archon: The Light and the Dark, M.U.L.E.).

Starting in the early 1980s, the game console companies experienced an economic boom based on licensing. The console manufacturers had designed the platforms, but anyone could publish games for them, so there were a lot of games available, and most of them were poorly designed.

1982 saw two terribly designed games released for the Atari 2600: a badly tested port of Pac-Man, and an almost-unplayable movie cash-in rushed out for Christmas, E.T. the Extra-Terrestrial. This helped trigger the industry’s collapse and a general distaste for video games that lasted years.

Other titles, however, arose in the midst of the industry collapse:

  • Arcade games were still popular (e.g., Marble Madness, Gauntlet, Ghosts ‘n Goblins).
  • PC games still had a market throughout the 1980s (e.g., Elite, Tetris, Ultima IV).

It wasn’t long until the Japanese companies Sega and Nintendo filled the void left by Atari. Starting in 1985, some of the best games were made for Nintendo and Sega consoles (e.g., Super Mario Bros., The Legend of Zelda).

The late 1980s saw multiple technologies mature, along with improved graphics, and games became more associated with genres than with the platforms that ran them:

  • By design, arcade cabinets had better hardware and more specialized peripherals, so they still stood out (e.g., Arkanoid, Bubble Bobble, Out Run, Contra, Double Dragon, R-Type, Final Fight).
  • Many of the easily-accessible games were still typically on consoles (e.g., Mike Tyson’s Punch-Out!!, Mega Man 2, Ninja Gaiden, Super Mario Bros. 3, Prince of Persia).
  • Strategy and role-playing games started becoming the specialty of the PC (e.g., Dungeon Master, Sid Meier’s Pirates!, Populous, SimCity).

The most dramatic development for games, though, came in the early 1990s, when graphics and processing power had improved enough to effectively imitate anything:

  • 2D platforming games, in particular, had reached the pinnacle of possible UX (e.g., Super Mario World, Sonic the Hedgehog, Super Castlevania IV, Sonic the Hedgehog 2, Donkey Kong Country, Super Metroid).
    • For a while, there was a unique platforming genre called Run ‘n Gun (e.g., Contra III: The Alien Wars, Gunstar Heroes, Mega Man X)
    • The extra storytelling capacity created the Action-Adventure genre (e.g., Another World, Legend of Zelda: A Link to the Past, Flashback, Legend of Zelda: Link’s Awakening)
  • The processing power was high enough to produce many genres and domains:
    • Adventure (e.g., The Secret of Monkey Island, Monkey Island 2, Indiana Jones and the Fate of Atlantis, Day of the Tentacle, Sam & Max Hit the Road)
    • Simulation (e.g., Wing Commander, SimCity 2000, Theme Park)
    • Strategy (e.g., Civilization, X-COM: UFO Defense)
      • The graphics became fast enough to develop the Real-Time Strategy genre (e.g., Dune II, Star Control II, Syndicate, Command & Conquer)
    • Role-Playing (e.g., Final Fantasy IV, Ultima VII, Phantasy Star IV, Secret of Mana, EarthBound, Final Fantasy VI, Chrono Trigger)
    • Sports (e.g., Speedball 2, Sensible Soccer, NBA Jam, Sensible World of Soccer)
    • Racing (e.g., Micro Machines, Virtua Racing)
    • Fighting (e.g., Street Fighter II, Mortal Kombat, Mortal Kombat II, Samurai Shodown)
      • A unique spinoff of fighting called Beat-Em-Up (e.g., Streets of Rage 2)
    • Other strange genre permutations, such as puzzle and artillery (e.g., Lemmings, Worms)

Starting around 1992, and reaching home consoles by 1995, games jumped to 3D graphics. The visuals were rudimentary at first, using polygons to vaguely represent shapes, but the technology developed rapidly as well:

  • A new genre called the First-Person Shooter (e.g., Wolfenstein 3D, Doom, Doom II)
  • A new 3D take on the Shoot-Em-Up (e.g., Star Fox, Tempest 2000)
  • Racing (e.g., Super Mario Kart, Virtua Racing, Ridge Racer, Daytona USA, Sega Rally Championship, Wipeout)
  • Adventure (e.g., Myst)
  • Survival Horror (e.g., Alone in the Dark)
  • Simulation (e.g., Star Wars: TIE Fighter, MechWarrior 2)
  • Fighting (e.g., Virtua Fighter 2)

From there, most of the late 1990s was polishing the styles and formula present in the earlier part of the decade:

  • Action-Adventure (e.g., Tomb Raider, Grand Theft Auto, Legend of Zelda: Ocarina of Time, Shenmue)
    • A new genre called Stealth (e.g., Metal Gear Solid, Thief: The Dark Project)
  • Holdouts to the old Adventure genre (e.g., Grim Fandango)
  • Platforming (e.g., Yoshi’s Island, Super Mario 64, Castlevania: Symphony of the Night, Crash Bandicoot: Warped)
  • Strategy (e.g., Warcraft II, Civilization II, Command & Conquer: Red Alert, Age of Empires, Total Annihilation, StarCraft, Age of Empires II, Homeworld)
  • First-Person Shooter (e.g., Duke Nukem 3D, Quake, GoldenEye 007, Quake II, Star Wars: Jedi Knight: Dark Forces II, Half-Life, Quake III: Arena, Unreal Tournament)
  • Shoot-Em-Up (e.g., Star Fox 64)
  • Role-Playing (e.g., Pokemon Red and Blue, Diablo, Fallout, Final Fantasy Tactics, Final Fantasy VII, Baldur’s Gate, Fallout 2, Panzer Dragoon Saga, Suikoden II, Planescape: Torment, Pokemon Gold and Silver)
    • An early foray into Action Role-Playing hybrids (e.g., System Shock 2)
  • Fighting (e.g., Tekken 3, Soulcalibur)
  • Survival Horror (e.g., Resident Evil, Resident Evil 2, Silent Hill)
  • Racing (e.g., Wave Race 64, Wipeout 2097, Mario Kart 64, Gran Turismo)
  • The first Rhythm games (e.g., PaRappa the Rapper, Dance Dance Revolution)
  • A new multiplayer concept of Massively Multiplayer Online Role-Playing Game, or MMORPGs (e.g., Ultima Online, EverQuest)
  • Further genre explorations (e.g., Nights into Dreams)

Throughout the 2000s, the availability of better computers and simpler software development meant the constraints of the past were mostly gone. This yielded longer expected play times and more in-depth storytelling:

  • Platforming (e.g., Jet Set Radio, Psychonauts, Super Mario Galaxy, LittleBigPlanet, Spelunky)
  • Action-Adventure (e.g., Legend of Zelda: Majora’s Mask, Devil May Cry, Ico, Metroid Prime, Beyond Good & Evil, Prince of Persia: The Sands of Time, Devil May Cry 3, God of War, God of War II, Uncharted 2)
    • Stealth (e.g., Thief II, Metal Gear Solid 2, Splinter Cell, The Chronicles of Riddick: Escape from Butcher Bay, Metal Gear Solid 3, Splinter Cell: Chaos Theory, Hitman: Blood Money)
    • A new style of Action-Adventure called the Third-Person Shooter (e.g., Max Payne, Max Payne 2, Gears of War, Gears of War 2)
    • The first steps into the Open-World genre (e.g., Grand Theft Auto III, Grand Theft Auto: Vice City, Legend of Zelda: The Wind Waker, Grand Theft Auto: San Andreas, Shadow of the Colossus, Legend of Zelda: Twilight Princess, Grand Theft Auto IV)
  • First-Person Shooter (e.g., Counter-Strike, Perfect Dark, Halo: Combat Evolved, Battlefield 1942, Half-Life 2, Call of Duty 2, BioShock, Call of Duty 4, Halo 3, Team Fortress 2, Left 4 Dead, Left 4 Dead 2)
  • Shoot-Em-Up (e.g., Ikaruga)
  • Strategy (e.g., Advance Wars, Warcraft III, Rome: Total War, Civilization IV, Company of Heroes)
  • Role-Playing (e.g., Baldur’s Gate II, Final Fantasy X, Star Wars: Knights of the Old Republic, Dragon Quest VIII, Mass Effect, Persona 4, Valkyria Chronicles)
    • Action Role-Playing (e.g., Deus Ex, Diablo II, Vagrant Story, Elder Scrolls III, Kingdom Hearts, Elder Scrolls IV, The World Ends with You, Fable II, Fallout 3)
  • Fighting (e.g., Marvel vs. Capcom 2, Super Smash Bros. Melee, Soulcalibur II, Street Fighter IV)
  • Sports (e.g., Tony Hawk’s Pro Skater 2, Tony Hawk’s Pro Skater 3)
  • Racing (e.g., Gran Turismo 3: A-Spec, F-Zero GX, Burnout 3, Gran Turismo 4, Burnout Paradise)
  • Survival Horror (e.g., Silent Hill 2, Eternal Darkness: Sanity’s Requiem, Resident Evil 4, Dead Space)
  • MMORPGs (e.g., World of Warcraft)
  • Puzzle (e.g., Angry Birds)
    • Puzzle-Platformer (e.g., Portal)
  • Further explorations into new aspects of Simulation (e.g., The Sims, Animal Crossing, The Sims 2)
  • Further genre explorations that didn’t take advantage of new technologies (e.g., Phoenix Wright: Ace Attorney, Plants vs. Zombies)
  • A new genre called the Multiplayer Online Battle Arena, or MOBA (e.g., League of Legends)
  • Other one-off genre explorations (e.g., Rez, WarioWare Inc.: Mega Microgames!, Katamari Damacy, Ōkami)

Around the mid-2000s, there was an attempt to incorporate new peripherals, with at least moderate success:

  • Rhythm games with musical instruments (e.g., Guitar Hero, Rock Band, Rock Band 2)
  • Interactive games that heavily used motion controls (e.g., Wii Sports)

Starting around 2009, most major games began developing huge worlds with highly immersive rendered environments that took at least dozens of hours to play through:

  • Action-Adventure (e.g., Assassin’s Creed II, Batman: Arkham Asylum)
  • Beat-Em-Up (e.g., Bayonetta)
  • First-Person Shooter (e.g., Borderlands)
  • Role-Playing (e.g., Dragon Age: Origins)
    • Action Role-Playing (e.g., Demon’s Souls)

Somewhere before 2010, game innovation peaked.

From about 2010 and onward, almost all the new games became either high-budget “AAA games” or smaller-scale “indie games”.

AAA studios throw a ton of budget behind games they expect will sell safely, typically by releasing sequels to previously successful games:

  • They tend simply to be an improved version of the previous game, with “game mechanics” borrowed from other games.
  • For marketing reasons, the game will add easily implemented gimmicks or hacks to advertise a feature:
    • Endless Customization! – add hundreds of useless options to the character-creation interface
    • X Hours of Gameplay! – spread out the world with tons of relatively empty space to travel through
    • Character Progression! – add an incrementing bar that slowly levels up, even if it doesn’t materially change the experience for the player
  • Since 2010, most popular games have been AAA, and genres have hybridized to the point that a genre label is mostly for marketing purposes:
    • Platformer (e.g., Super Mario Galaxy 2, Super Mario Odyssey)
      • Puzzle-Platformer (e.g., Portal 2)
    • Action-Adventure/Open-World (e.g., God of War III, Red Dead Redemption, Batman: Arkham City, Grand Theft Auto V, The Last of Us, Uncharted 4, Legend of Zelda: Breath of the Wild, God of War remake, Red Dead Redemption 2)
      • Stealth (e.g., Dishonored, Dishonored 2)
      • Third-Person Shooter (e.g., Fortnite)
    • Adventure/Narrative (e.g., Heavy Rain, The Walking Dead)
    • First-Person Shooter (e.g., Borderlands 2, Far Cry 3, Bioshock Infinite, Destiny, Overwatch)
    • Role-Playing (e.g., Fire Emblem Awakening)
      • Action Role-Playing (e.g., Mass Effect 2, Xenoblade Chronicles, Dark Souls, Elder Scrolls V: Skyrim, Bloodborne, The Witcher 3)
    • Strategy (e.g., Civilization V, Starcraft II, XCOM: Enemy Unknown)
    • Sports (e.g., Rocket League)
    • Fighting (e.g., Super Smash Bros. Ultimate)
    • Rhythm (e.g., Rock Band 3)
    • MOBA (e.g., Dota 2)
    • Collectible Card Games (e.g., Hearthstone)

Indie games, by contrast, are made by no more than a few dozen developers:

  • Limited budgets mean the games are often meaningful, but comparatively short.
  • Limited marketing resources also mean a game rarely stands out from the crowd, making it hard to find unless the developer gets lucky.
  • Some of them have persisted alongside AAA games, however:
    • Platformer (e.g., Super Meat Boy, Shovel Knight, Inside)
      • Puzzle-Platformer (e.g., Braid, Limbo)
    • Adventure (e.g., Journey)
    • Role-Playing (e.g., Undertale, Disco Elysium)
      • Action Role-Playing (e.g., Hades)
    • Sandbox (e.g., Minecraft)
    • Casual/Life Simulation (e.g., Stardew Valley)
    • Other genre-defying experiences (e.g., Hotline Miami, Papers, Please)

While digital PC game storefronts (e.g., Steam, GOG) have leveled the competitive playing field a bit, there’s still a stark discrepancy between the risk-taking artistry of indie developers and the mass-produced output of corporate AAA development teams.

Console Eras

No matter how ambitious games are, there have been hard limits on what games could be, based on the technological limits of the time.

For that reason, it helps to understand the general eras of “video game consoles”. There are idiosyncratic differences between the eras, but there’s a general technological trend that defined the limits of game consoles:

  1. 1972-1980 First Generation:
    • Very rudimentary black-and-white graphics
    • Each console had all its games pre-installed
    • e.g., Magnavox Odyssey, Pong arcade console, Coleco Telstar
  2. 1976-1992 Second Generation:
    • Still rudimentary 8-bit graphics
    • 1-2 MHz CPU, 32 KB memory
    • Introduced loadable game cartridges instead of integrated games
    • e.g., Fairchild Channel F, Atari 2600, Mattel Intellivision
  3. 1983-2003 Third Generation:
    • The pinnacle of 8-bit graphics (like Super Mario Bros.)
    • 2-4 MHz CPU, 72 KB memory
    • e.g., Sega Master System, Atari 7800, Nintendo Entertainment System
  4. 1987-2004 Fourth Generation:
    • 16-bit graphics
    • 4-8 MHz CPU, 8-128 KB memory
    • Introduced CD-ROM add-ons
    • e.g., NEC TurboGrafx-16, Sega Genesis, Super Nintendo Entertainment System, SNK Neo Geo
  5. 1993-2006 Fifth Generation:
    • 32-bit/64-bit graphics
    • 12-100 MHz CPU, 2-4.5 MB memory
    • CD-based except for the Nintendo 64
    • e.g., Sega Saturn, Sony PlayStation, Nintendo 64
  6. 1998-2013 Sixth Generation:
    • 128-bit graphics
    • 200-733 MHz CPU w/ 100-233 MHz GPU, 16-64 MB memory
    • All disc-based, some with internet support
    • e.g., Sega Dreamcast, Sony PlayStation 2, Nintendo GameCube, Microsoft Xbox
  7. 2005-2017 Seventh Generation:
    • 0.73-3.3 GHz CPU w/ 243-550 MHz GPU, 88-512 MB memory
    • Started digital distribution, adopted better graphics resolutions (HD), and introduced motion-based controls
    • e.g., Microsoft Xbox 360, Sony PlayStation 3, Nintendo Wii
  8. 2012-now Eighth Generation+:
    • 1.0-?? GHz CPU w/ 300-?? MHz GPU, 2-?? GB memory
    • Introduced even better graphics resolutions (4K) and SSD internal memory caching
    • e.g., Nintendo Wii U/Switch, Sony PlayStation 4/5, Microsoft Xbox One/Series X/S
    • At this point, it doesn’t matter because nearly every game eventually “ports” to PC

For a long time, PC titles lagged behind console graphics. Mostly this was because PC technology moved in quantum leaps, meaning the average game developer had an absurdly wide range of PC specifications to create for, so it made more sense to design for consoles, except for genres with a geekier, PC-leaning audience (like flight simulators or real-time strategy).

Now, PC gaming has effectively taken over as the dedicated graphical gaming experience, for multiple reasons:

  • Graphics technology has largely slowed down now that it’s approaching hard physical limits. More cores are possible, but ~3.8 GHz is roughly a core’s electrical limit before cooling is no longer economically feasible. This makes the hardware generally more homogeneous for developers to build for.
  • PCs have certain advantages over consoles: personal freedom in the choice of gaming peripherals, the multi-purpose nature of a PC versus dedicated gaming hardware, download services like Steam, GOG, and the Epic Games Store that offer a wider variety of playable titles than a console could provide, and even the freedom to emulate game consoles directly.
  • Typical PC graphics cards now offer more-or-less the same price-to-performance ratio as many game consoles.
  • Developers mostly use the same tools across platforms, so console-exclusive titles are slowly fading given the profits lost by not porting to PC, though Nintendo has still been holding out as of 2023.

Peak Performance

Gaming computers are the most expensive consumer-grade computers on the market, which is why they tend to push hardware to its limits. Game development is among the most demanding kinds of software development, for several reasons:

  1. The visual elements are usually far more vast, and the designers need to worry about both animation and static elements.
  2. The audio must be synchronized to the visual experience, so it has to be correctly designed, from the sound effects to the soundtrack.
  3. Games require immediate feedback from input peripherals (such as the keyboard or mouse, and now VR controllers). While people can endure a 1-second delay in many other programs, a 0.1-second delay is enough to make a game feel janky.
  4. For many games, you need an elaborate mathematical framework to keep all the visual and audio elements working in tandem. This is much more than most other programs.
  5. Across the internet, many genres perpetually demand rapid-response networking, for both player input and visual output.
  6. In multiplayer games that need computer players (e.g., a 2v2 game with 3 humans), the AI has to have enough logic that the “bot” feels human-like to the player.
  7. Graphics technology is often driven by large-scale game developers, who build incredibly elaborate designs for their games that need ever-increasing processing power. This ranges from the character designs all the way to expansive and beautiful “skyboxes” and long-distance rendering to recreate extremely large worlds.

This means you need lots of processing power to make sure the game stays above 30-60 FPS (frames per second). Most modern games require a dedicated GPU (a separate processor that handles only graphics) strictly for this reason.
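
To make that budget concrete, here’s a minimal sketch of a fixed-timestep game loop, assuming hypothetical update() and render() callables: at 60 FPS, everything the game does in a frame must fit in roughly 16.7 milliseconds.

```python
import time

TARGET_FPS = 60
FRAME_BUDGET = 1.0 / TARGET_FPS          # ~0.0167 seconds per frame

def game_loop(update, render, running=lambda: True):
    while running():
        start = time.perf_counter()
        update(FRAME_BUDGET)             # advance the simulation one step
        render()                         # draw the current state
        elapsed = time.perf_counter() - start
        if elapsed < FRAME_BUDGET:
            time.sleep(FRAME_BUDGET - elapsed)   # idle out the remainder
        # if elapsed exceeds the budget, the player sees dropped frames
```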

At one time, producing games was absurdly complicated because of the limits of graphics processors:

  • e.g., today it’s relatively easy to set up a static display (e.g., player score and health) overlaid on a scrolling portion (where the game happens) by rendering the scrolling part frame-by-frame and then rendering the static part on top of it.
  • However, on the 1980s Nintendo Entertainment System, developers had to resort to hardware hacks: for example, polling the hardware’s “sprite 0 hit” collision-detection bit (or abusing a sound channel’s timer interrupt) to detect when the screen had drawn down to a given scanline, then changing the scroll registers mid-frame to draw the other part.
  • Even today, vintage-game emulators are notoriously difficult to build because the software must reproduce the individual clock cycles and constraints of the original hardware.
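
As a rough illustration of why that’s hard, here’s a minimal sketch of an emulator’s inner loop (a hypothetical structure, not a real emulator; the 3:1 ratio and cycle count approximate the NTSC NES, where the video chip ticks three times per CPU cycle):

```python
# CPU and video chip must be stepped in lockstep, because games depend on
# exactly when mid-frame events (like a status bit flipping) occur.
class Emulator:
    CPU_CYCLES_PER_FRAME = 29780         # approximate NTSC NES value

    def __init__(self, cpu, ppu):
        self.cpu, self.ppu = cpu, ppu

    def run_frame(self):
        cycles = 0
        while cycles < self.CPU_CYCLES_PER_FRAME:
            spent = self.cpu.step()      # run one CPU instruction
            for _ in range(spent * 3):   # video chip ticks 3x per CPU cycle
                self.ppu.step()          # draw one dot, update status bits
            cycles += spent
```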

Netcode

Across a network, games have extremely stringent requirements for input and heavy requirements for output: from frame to frame, the game state must match on both computers.

Online games are typically designed to be “deterministic”, which means two instances of the same program will produce precisely the same outputs from the same inputs. Even random numbers can be derived deterministically from a seed. This gives the added convenience that a replay of a game can simply be the inputs mapped to their timings, and it also lets the other machine reconstruct a match even with network latency.
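
A minimal sketch of what determinism buys you, assuming a toy update rule: seed the random number generator, and the same inputs always produce the same result, so a whole “replay file” is just a seed plus the input stream.

```python
import random

def simulate(seed, inputs):
    rng = random.Random(seed)            # seeded PRNG: random but reproducible
    state = 0
    for button in inputs:                # recorded, timestamped button presses
        state += button + rng.randint(0, 9)   # toy update rule
    return state

replay = (42, [1, 0, 1, 1])              # the entire "replay"
assert simulate(*replay) == simulate(*replay)   # identical on any machine
```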

Instead of using a central server that holds the authoritative state, networked games can synchronize with each other through “lockstep networking”, communicating back and forth about the state of the game. By bypassing a central server, this can theoretically be faster. One side advantage of lockstep networking is that hacks that mess with the timing of the game (such as a character “sprite” that moves faster) will cause the game to desynchronize, which prevents cheating.
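
One common way to catch that desynchronization, sketched here with assumed names: each peer periodically hashes its entire game state and compares checksums with the other side.

```python
import hashlib
import pickle

def state_checksum(state):
    # Serialize the full game state and hash it.
    return hashlib.sha256(pickle.dumps(state)).hexdigest()

local_state = {"frame": 120, "player_x": 4.0}
peer_checksum = state_checksum({"frame": 120, "player_x": 4.0})  # from network
if state_checksum(local_state) != peer_checksum:
    raise RuntimeError("desync detected: bug, lag, or tampering")
```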

But networks are still slow, relatively speaking, so there’s still a delay to hide in the “netcode”. There are two ways to deal with it.

Delay-based netcode intentionally keeps an input from registering on the frame it was pressed; instead, it applies a few frames later (each frame represents ~1/60 of a second). While a few people well-trained in games may notice the delay, clever game design and a consistent delay make delay-based netcode work fine. Unfortunately, networks are rarely consistent: any slowdown in the network slows the game down, even with plenty of tricks that predict network behavior, and physical distance has a profound impact on gameplay.
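
A minimal sketch of that input delay, with made-up names: the local input is sent to the peer immediately but queued locally, so both machines apply it on the same (slightly later) frame.

```python
from collections import deque

INPUT_DELAY = 3                          # 3 frames = ~50 ms at 60 FPS
input_queue = deque([None] * INPUT_DELAY)

def on_frame(local_input, send_to_peer):
    send_to_peer(local_input)            # ship it to the other machine now
    input_queue.append(local_input)      # ...but don't apply it locally yet
    return input_queue.popleft()         # apply the input from 3 frames ago
```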

Rollback netcode is designed specifically to deal with network uncertainty (see the sketch after this list):

  1. When the network drops information, the computer predicts the remote player’s input (most likely by assuming they’re still holding the same buttons as before).
  2. When there’s a solid connection again, the computers will send a frame-by-frame retroactive input mapping for their respective players.
  3. Both games will update the data of the previous frames to synchronize with each other, then deliver the same consequences.
  4. This will mean certain portions of the game may move around a tiny bit, but not enough to significantly alter the gameplay, and precise button timings are preserved.
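
A rough sketch of those steps, with hypothetical names (RollbackSession, step(), and the prediction rule are all simplifications): save a snapshot each frame, simulate ahead on the prediction, and re-simulate from the snapshot when the real input finally arrives.

```python
import copy

class RollbackSession:
    def __init__(self, state, step):
        self.state, self.step = state, step
        self.history = {}                # frame -> (snapshot, predicted input)

    def advance(self, frame, local_input, predicted_remote):
        # Save a snapshot, then simulate ahead on the predicted remote input.
        self.history[frame] = (copy.deepcopy(self.state), predicted_remote)
        self.state = self.step(self.state, local_input, predicted_remote)

    def confirm(self, frame, actual_remote, local_inputs, current_frame):
        # The real remote input for `frame` finally arrived over the network.
        snapshot, predicted = self.history[frame]
        if predicted == actual_remote:
            return                       # the guess was right: nothing to fix
        self.state = snapshot            # roll back to the mispredicted frame
        for f in range(frame, current_frame):
            # Re-simulate forward with recorded local inputs; the remote
            # input is assumed held since `frame` (the usual prediction).
            self.state = self.step(self.state, local_inputs[f], actual_remote)
```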

A rough-and-dirty analogy also helps the concept. First, a human conversation over delay-based netcode:

  • Aaron: Hello, I heard you’re from Wales, right?
  • Bob: …
  • Aaron: …
  • Bob: No, I’m from Scotland. And where are you from?
  • Aaron: I’m from Pakistan.

And, a rollback netcode conversation:

  • Aaron: Hello, I heard you’re from Wales, right?
  • Bob: …
  • Aaron: I had a friend in Wales. Maybe you know him?
  • Bob: No, I’m from Scotland. And where are you from?
  • Aaron: I’m from Pakistan.

Peak Psychology

Games are a unique medium compared to books, audio, and movies because they’re the only medium that directly responds to the user’s interaction.

  • The only exception to this is certain game genres (e.g., visual novels, some VR experiences), though merely advancing through an interface doesn’t qualify as a “game” (i.e., there should be a capacity for making decisions).

Game UX should be non-intrusive: the user should feel or observe what to do without explicit instruction. To avoid text boxes with instructions or written signage, there are several tools to guide the player:

  1. Place desirable items to direct the player where they should go (e.g., a healing potion located near the next important passageway).
  2. Use “non-diegetic” lighting to indicate what the player should interact with (e.g., a lever has a flashing effect on it).
  3. Train the user by designing specific, new enemies that are most easily taken down by the actions or abilities they should learn to use.

Most games have an initial setup, followed by a “gameplay loop”, up until the end. The loop tiers described below are sketched in code after the list.

  • A gameplay loop is a hot-and-cold cycle of tension and release built into a habitual routine.
  • Gameplay loops can be classified as primary, secondary, and tertiary:
    1. Primary gameplay loops are iterated across seconds (e.g., pressing button combinations).
    2. Secondary gameplay loops iterate across minutes (e.g., completing a level).
    3. Tertiary gameplay loops iterate across longer stretches of time (e.g., completing enough levels to get a high score).
  • Every loop must be a struggle for the player, but not so much that they feel overwhelmed.
  • Each loop should deliver a significant payout for the player (e.g., extra points, bonus health, visually stimulating WIN screen).
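
A minimal sketch of how the three loop tiers nest (all names are hypothetical placeholders):

```python
def play_session(levels, high_score_target):
    total_score = 0
    while total_score < high_score_target:     # tertiary: chase a high score
        for level in levels:                   # secondary: clear each level
            while not level.complete():        # primary: moment-to-moment play
                level.handle_input()           # button presses, dodges, attacks
                level.update()
            total_score += level.score()       # the payout that closes a loop
```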

The contrast in intensity between the action and relaxation phases of a gameplay loop determines how exciting and thrilling the game will feel.

  • e.g., peppy overworld music that transitions into peppy level music maintains one general sensation throughout the game, while calm menu music before high-intensity level music makes each level feel like a rush and the menu feel comparatively calm afterward.

Every game developer must manage three major psychological balances throughout the player’s experience:

  1. The game has to be fun, which means balancing enjoyable familiar elements against novelty.
  2. The game must be challenging enough: too much and the player will “ragequit”, too little and the player will be bored. The challenge must come from the player’s interaction with the game, not from software errors or arbitrary game design that stretches out time.
  3. The game must make the person curious, or they won’t want to keep playing. This sometimes comes from the game’s story, but more often comes from the functionality of what the player controls and the possibilities it opens (e.g., seeing if they can jump across a normally impassable chasm using a weapon that launches them backward).

The most reliable way to keep a player engaged is to give them a clear way to overcome each challenge, as well as ways to help them overcome it more easily (a checkpoint sketch follows the list):

  • Give them back health if they react after getting hit.
  • Give them the exact same level over and over to make it easy to memorize actions.
  • Have mid-level checkpoints that let them restart later in the level when they fail.
  • Give hints (or the chance for a hint) after they’ve failed a few times.
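
A minimal sketch of the mid-level checkpoint idea (all names hypothetical): snapshot the player’s state at a checkpoint, and restore it on failure instead of restarting the whole level.

```python
import copy

class Checkpoints:
    def __init__(self, level_start_state):
        self.saved = copy.deepcopy(level_start_state)

    def reach(self, state):
        # Called when the player crosses a checkpoint marker.
        self.saved = copy.deepcopy(state)

    def respawn(self):
        # Called when the player fails; returns the state to resume from.
        return copy.deepcopy(self.saved)
```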

The challenges the player experiences from the game define what the player learns and, ultimately, their experience when they leave the game.

Skilled players want a difficult, unforgiving experience, while newer or less-skilled players want simpler, more forgiving challenges. The best way to accommodate both is to create at least a two-tiered way to win:

  • The easy way involves merely getting through the level and rewards more levels, all the way to the final level.
  • The hard way requires tons of dedication and skill.
  • The extra challenge should never block the “critical path”, and the player should receive silly things like hats or “easter eggs” that don’t make future challenges easier.
    • The easiest implementation of this is a star or rating system (sketched in code after this list).
  • However, no game can please everyone. It’ll be too hard for some people, or too easy for others.
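
A minimal sketch of such a rating system (the thresholds are made up): finishing at all is the easy tier, and the optional stars reward mastery without gating progress on the critical path.

```python
def rate_level(completed, time_seconds, secrets_found, total_secrets):
    stars = 1 if completed else 0                 # easy way: just finish
    if completed and time_seconds <= 90:          # hard way: speed mastery
        stars += 1
    if completed and secrets_found == total_secrets:   # hard way: exploration
        stars += 1
    return stars
```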

Story-Telling

Most games, with the exception of some skill or puzzle games, are driven by a story.

When an interactive story is created correctly, its capacity for immersion is immense. Almost every other medium (book, movie, radio drama, etc.) has the audience viewing the experiences of a character, but games are typically designed for the player to be a character.

There are, however, inherent limits to the range of usable stories within a game.

  • Unless the game designer is particularly creative, there must be an enemy or set of enemies.
  • Often, the enemies need to scale upward in perceived danger (e.g., start with unimportant enemies like rats at the beginning, then mundane ones, then grandiose gigantic enemies at the end).
  • Most games involve one player character against many enemies, so there must be a reason why there are more enemies than the player character and why that character can successfully defeat them.

Compared to movie or book stories, most game stories are passable or awful, for many interconnected reasons:

  1. Writing high-quality stories requires a completely different skillset and talent than coding a game.
  2. The most important part of a game is that it’s playable: people are more inclined to buy and play a decent game with a terrible storyline than an unplayable game with an award-winning story.
  3. A story that adapts as the player interacts with it requires extensive “dialogue trees” that cover every branch of the story according to the player’s decisions (see the sketch after this list), which is much more challenging than the single-story design of a movie.
  4. The game’s “ludonarrative dissonance” (the gap between gameplay and story) has to stay small, so either the story adapts to the gameplay or the gameplay to the story (e.g., if a character shrugs off dozens of bullets in gameplay, one more bullet in a cutscene shouldn’t change anything; alternatively, the character should only ever survive one or two bullets).
  5. For the most part, players like to operate characters with superhuman abilities, so realistic stories are almost entirely off-limits without heavy ludonarrative dissonance, which profoundly affects the experience.
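
A minimal sketch of a dialogue tree as nested nodes (all content invented): every choice points at another node, so each added branch multiplies the text that must be written and tested.

```python
dialogue = {
    "text": "I heard you're from Wales, right?",
    "choices": {
        "Yes, born and raised.": {
            "text": "Then you must know the coast!", "choices": {}},
        "No, I'm from Scotland.": {
            "text": "Ah, my mistake. Safe travels.", "choices": {}},
    },
}

def run(node, pick_choice):
    # pick_choice is a hypothetical callback that returns the player's pick.
    while True:
        print(node["text"])
        if not node["choices"]:
            break                        # leaf node: this branch has ended
        node = node["choices"][pick_choice(node["choices"])]
```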

Some independent game developers find ways to work cleverly with the ludonarrative and advance the story directly through player interaction (e.g., Undertale, The Stanley Parable), but most games use one of a few story/gameplay boilerplates:

  1. There is no story, and it’s designed strictly as a game (e.g., most games made before 1985). This one is literally timeless because it’s an abstract experience.
  2. The story is a general theme, but it’s almost completely unimportant beyond giving context for why the character is doing what they’re doing (e.g., most games made between 1985 and 1998, most first-person shooters). While this is also timeless, it’s often lazy.
  3. The linear progression of the game gives a story that could be made into a full-length movie or TV show, but is relatively unrelated to the game mechanics (e.g., point-and-click adventure games, most Japanese role-playing games). While many people may enjoy this type of game, it’s a bit like forcing the viewers of a movie to pass skill games to keep watching scenes.
  4. The game has a clear story that makes sense in context, but allows the player/character excessive liberty to take their time and explore everywhere (e.g., open-world games). This one has probably been explored to its fullest, and requires extensive development time to get correct.
  5. The story of the game is actually many small stories triggered by player choices through the role of the character, often with a main story to advance the character through their experiences (e.g., many open-world role-playing games).

To advance the game’s events, designers place an invisible box in the game world called a “trigger volume” that starts an action when the player enters or exits it. To the savvy gamer, it becomes absurdly obvious when an event fires the instant they cross an invisible line, so setting events on a randomized delay of at least several seconds can add to the realism.
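
A minimal sketch of a 2D trigger volume, including the randomized delay just described (names are illustrative, not any engine’s real API):

```python
import random

class TriggerVolume:
    def __init__(self, min_xy, max_xy, action):
        self.min, self.max, self.action = min_xy, max_xy, action
        self.inside = False
        self.fire_at = None

    def contains(self, pos):
        return all(lo <= p <= hi
                   for lo, p, hi in zip(self.min, pos, self.max))

    def update(self, player_pos, now):
        was_inside, self.inside = self.inside, self.contains(player_pos)
        if self.inside and not was_inside:             # player just entered
            self.fire_at = now + random.uniform(2, 6)  # fire in 2-6 seconds
        if self.fire_at is not None and now >= self.fire_at:
            self.action()                              # start the scripted event
            self.fire_at = None
```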

Industry Competition

Designing a good game requires graphics development, both 2D and 3D, as well as tons of programming, though much of that is now automated. It once required direct programming for just about everything, so it used to be far more challenging.

Today’s game development has many drag-and-drop interfaces that turn game programming into more of a dynamic, moving visual art than simply a highly involved domain of software development.

The ease of making games now has created a heavily saturated market of game developers, and the increased demand from all the people who want to play games has made it a heavily competitive domain. Developers are constantly stealing intellectual property from one another, and many of the ideas in AAA games are blatantly pulled from indie developers who can’t afford to sue for infringement. In the long term, AAA studios spend so much on mass marketing that they receive the credit for new features or gameplay elements.

For each genre or style, game developers generally move through a trend of motivations that drive them into the industry:

  1. An independent developer creatively designs their heartfelt art. The game is beautiful, but often unpolished, since they’re pioneering the entire thing and don’t have the staff to clean it up.
  2. Further developers polish that first creator’s art with their own games. Those games are superior to play (since the idea was already proven), and they refine the formula.
  3. A big-budget game takes that formula into the mainstream. They use high-end graphics, often integrate it into a huge game world, and make the game represent more of a lifestyle than simply an experience.

Often, unscrupulous developers will borrow heavily from the operant conditioning chamber (also called a “Skinner box” after the psychologist B.F. Skinner, who developed it around 1930, building on Edward Thorndike’s 1898 puzzle-box experiments). The same general pattern runs most casinos by forming addictions, and operates as follows (sketched in code after the list):

  1. Present a set of possible actions for the user, with rewards for some of those actions.
  2. For the rewarded actions, modulate the chances of the reward occurring (e.g., a 1-in-20 chance, every 15th time, etc.).
  3. Optionally, give the user a wide variety of possible rewards to provide the experience of making choices.
  4. Repeat indefinitely, changing out rewards occasionally, adding new types of rewards, and scaling them up before the user gets bored.
  5. Branch out the rewards into tiers (e.g., an easily acquired type of coin and a rare type of coin).
  6. Eventually, extend the experience into real-world economics by making real-world currency necessary to advance the game.
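
A minimal sketch of step 2’s variable-ratio schedule (the reward table is made up): most actions pay nothing, and the unpredictable payouts are what form the habit.

```python
import random

REWARD_TABLE = [("common coin", 0.18),   # roughly 1-in-5 actions
                ("rare coin", 0.02)]     # roughly 1-in-50 actions

def pull_lever(rng=random):
    roll = rng.random()
    cumulative = 0.0
    for reward, chance in REWARD_TABLE:
        cumulative += chance
        if roll < cumulative:
            return reward
    return None                          # the most common outcome: nothing
```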

Most of the time, game developers scale up by having other developers make the games: they contribute their accumulated experience in ideas/marketing/debugging, then publish with their brand attached, likely taking a royalty on sales.

The electronic games industry has gone through many trends that have mostly died out, though they tend to pop up again in weird places.

To change difficulty or configurations, early-1980s Atari consoles had simple toggle switches, set before starting a game, that required documentation to decipher. By the 1990s, the difficulty switch had dissolved into an Easy/Medium/Hard selection when starting a game, and by the 2010s it became an obscure feature hidden in the settings, or simply nonexistent.

For some time in the mid-1990s, game installers took advantage of multimedia graphics and started the game’s UX as soon as someone inserted the disc. The Command & Conquer: Red Alert installer may have been the most interesting, showing full animations of a rocket moving through its stages of launch preparation as the installation progressed.

In the mid-1990s, game CDs had a then-staggering storage capacity, so full-motion video could fit on them (though it was pretty lame at first). One form of game consisted almost entirely of what are now called quick-time events: a full video plays, and the player has to press a prompted button at the right moment. The archetypes were Dragon’s Lair (originally a 1983 LaserDisc arcade game) and Night Trap, which were literally nothing but quick-time events. They weren’t particularly fun to play, but were novel enough to draw attention.

Memory constraints limited most games in the early and mid 1980s, so they all had some relatively similar features. Later games (especially after the 2010s) would imitate them for nostalgic/artistic effect:

  • No discernible plot, or the plot only appeared in the supplied manual. Often, the goal of the game was a recorded high score, and maxing out the score counter (a feat of extreme skill or exploitation) would often make the game glitch or crash.
  • To pad out play time, the game was absurdly and unfairly difficult, even on the easiest setting.
  • To help with play-testing, certain button combinations could give the player more lives or invincibility, swap out sprites, or skip ahead levels. These stayed in the shipped game and became known as cheat codes.
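
A minimal sketch of how such a combination might be detected (the sequence is made up): keep a rolling buffer of recent presses and compare it against the secret sequence.

```python
from collections import deque

CODE = ["up", "up", "down", "down", "left", "right"]
recent = deque(maxlen=len(CODE))

def on_button(button, grant_extra_lives):
    recent.append(button)
    if list(recent) == CODE:
        grant_extra_lives()              # a debug helper shipped as a "cheat"
```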

In the late 2000s, games spent the better part of a decade wallowing in a very dull color palette of grays and browns, and getting a bit carried away with shadows. It was meant to capture a dark, grungy impression that would fit a post-apocalyptic environment, but it made everything unpleasant to look at and difficult to see.

Shortly after the grungy trend, games pivoted hard to the other extreme: they added tons of “bloom” and “particle effects”. This made games unpleasant to look at and difficult to see for the opposite reason, though this time it was easier to adjust visual settings to compensate.

Online games tend to use “leaderboards”: online high-score systems that track everyone who has played the game. Since that can mean thousands or hundreds of thousands of players, ranking isn’t particularly meaningful for most of them, and most leaderboards eventually devolve into hackers who exploit the game to achieve inhumanly high scores.

Since the late 2000s, most graphics trends have moved toward reproducing the constraints and artifacts of video cameras. This trend hasn’t stopped as of 2023, and has gone as far as reproducing grit and water spots on the screen, which destroys the immersion of controlling a proxy character.

One trend that started around the time Fortnite became popular in the late 2010s is to release the multiplayer game for free, then charge for the single-player experience. This lets the developer collect user data, both to reconfigure the software to make it more addictive and to sell the data outright.

Versioning

The ambitious nature of games makes their software lifecycle more involved and complex than that of most non-game software.

Ever since the internet (and online stores like Steam that also manage updates), games often release in an unfinished state. At its mildest, the game has glitches during gameplay, but many developers intentionally release games in so-called Early Access, which is effectively a remarketed “alpha” or “beta” build of the game.

Many open-source games are never officially done because of the constant improvements added by independent developers.

Game developers will sometimes work on an Early Access game until they understand how to balance it, release it from Early Access, then a few months later sell what is almost entirely the same game as a new full version. To hide this, they’ll often change the theme (e.g., medieval, then space).

If a game becomes commercially successful, game developers can get more money relatively easily by creating expansions/addons/DLC (downloadable content):

  • They typically only add new graphical assets or tweak small mechanics of the game, and some exist simply to support the developers.
  • However, some games make progression through the base game impossible without the DLC, and others add hundreds of DLC packs.
  • In fact, some games are entirely free, with the expansions making up most of the game’s content.