Signal Detection Theory and user interface design

Signal Detection Theory (SDT) is a way to quantify decision-making in the presence of uncertainty, which is also referred to as internal and external “noise.” The basic premise of the theory is that both signal and noise produce internal responses, represented as probability distributions within the individual making the decision, and the extent to which those distributions overlap can be quantified from the individual’s responses and from whether the signal was actually present or absent (Anderson, 2015; the specific mechanics and statistics underlying this quantification are beyond the scope of this post, but there are many resources available for those interested in pursuing them further). Therefore, there are four possible outcomes in SDT (matrix from Anderson, 2015):

[Figure: The four possible outcomes of SDT (matrix from Anderson, 2015)]

  1. Hits: correctly reporting the presence of the stimulus/cue
  2. Correct rejections: correctly reporting the absence of the stimulus/cue
  3. Misses: failing to report the presence of the stimulus/cue when it occurred
  4. False alarms: incorrectly reporting the presence of the stimulus/cue when it did not occur

Furthermore, the detection of a stimulus depends on both the intensity of the stimulus and the physical and psychological state of the individual. Recognition accuracy depends on whether a stimulus was actually presented, as well as the individual’s response.
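
To make the four outcomes concrete, below is a minimal sketch (in Python) of how hit and false-alarm counts are typically converted into the standard SDT measures of sensitivity (d′) and response criterion (c). The function name, the correction used, and the example counts are illustrative assumptions, not part of any particular study.

```python
from statistics import NormalDist

def sdt_measures(hits, misses, false_alarms, correct_rejections):
    """Convert the four SDT outcome counts into hit rate, false-alarm rate,
    sensitivity (d') and criterion (c)."""
    # Log-linear correction (add 0.5 per cell) keeps rates of 0 or 1 computable.
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)

    z = NormalDist().inv_cdf                        # inverse standard-normal CDF
    d_prime = z(hit_rate) - z(fa_rate)              # separation of signal and noise distributions
    criterion = -0.5 * (z(hit_rate) + z(fa_rate))   # response bias (0 = unbiased)

    return hit_rate, fa_rate, d_prime, criterion

# Hypothetical playtest: did the player notice a tutorial prompt on each trial?
print(sdt_measures(hits=38, misses=12, false_alarms=5, correct_rejections=45))
```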

In relation to games user research, SDT can be used to measure and inform many different aspects of games. It applies best to any instance in a game where a player has to make a decision under some uncertainty. Since SDT has been applied extensively in perception and decision-making research, let’s first discuss how some of this work is relevant to the player experience in video games.

Perception research commonly utilizes a behavioral task called the oculomotor delayed-response task, which requires an individual to make an eye movement to a cued location following a specific delay. The schematic of the oculomotor delayed-response task (from Goldman-Rakic, 1996) below should help demonstrate the connection between this task and video games.

[Figure 1: Schematic of the oculomotor delayed-response task (Goldman-Rakic, 1996)]

In the oculomotor delayed-response task, during signal-absent trials the individual has to withhold judgment until the whole observation period is finished, whereas during signal-present trials the individual can judge the signal’s presence as soon as it is presented. Ultimately, researchers are typically interested in the number of trials an individual responded to correctly and incorrectly (i.e., recognition accuracy). The task has been manipulated in many ways (e.g., varying inter-trial intervals and cue/delay periods) to answer specific questions. Perhaps its most relevant connection to video games is when players learn the meaning of a stimulus/cue presented in a game and then, at a later point, must respond appropriately to whether that stimulus/cue is present, as well as apply the information it conveys to the current situation. Although players’ recognition accuracy for specific game elements is not being tested or predicted when they play games (unless, perhaps, they are playtesting), the relevance of the oculomotor delayed-response task and the way it tests decision-making and working memory is evident.

Let’s look at some examples in games that might fit the four possible outcomes of the SDT:

Hit

[Image: Prototype 2 instructional text introducing a UI element]

In Prototype 2, a third-person action game, instructional text introduces the player to the effect of their actions on specific UI elements. The text appears near the relevant UI, which allows the player to easily associate the two, and flashes briefly to attract the player’s attention. Additionally, time is manipulated while the text is on screen, which minimizes in-game distractions and allows the player to focus on the conveyance. Ultimately, this type of UI introduction is likely to result in a hit, according to SDT, because it is a very attention-seeking design that minimizes extraneous distractions, or external noise.

Miss

[Image: Battleborn tutorial text and simultaneous notifications]

In Battleborn, the tutorial starts once the player takes control of the character. White text presented in the top-center of the screen explains features of the UI (e.g., the minimap) as well as basic movement (i.e., sprinting). However, since this text is small and cycles quickly, there is a high chance the player will miss the information. In addition, there is new-objective text, character/story narration, subtitles, and an upgrade-available notification. Presenting all of this simultaneously creates cognitive overload for the player, who is also trying to take in the world and art style of the game at the same time. Ultimately, the low salience of the white text, in combination with more salient cues, is likely to cause the player to miss some, if not all, of what the white text is trying to convey.

Correct rejection

While correct rejections and false alarms can seem more difficult to design around than hits and misses, a correct rejection is simply the player correctly identifying the absence of a UI element. Designing for correct rejections is likely to succeed when implementing common industry best practices for UI, such as Nielsen Norman Group’s 10 Usability Heuristics for User Interface Design.

[GIF: Infamous: Second Son smoke meter tutorial and “Drain Shard” prompt]

Infamous: Second Son introduces instructional text describing one of the game’s main UI elements, the smoke meter, accompanied by a bright arrow that points to it. Also, in the GIF above, you can see the “Drain Shard” prompt that appears when the player is in the appropriate position to perform that action. Because video games are becoming increasingly complex, with many things typically appearing on screen simultaneously, it is important to note how the “Drain Shard” prompt is noticeably different from other UI elements and notifications: it differs in size, color, and iconography. This and similar designs would likely result in correct rejections, according to SDT, because the player can easily notice the difference and not mistake the “Drain Shard” prompt, which is an in-game action, for another UI tutorial or notification.

False alarm

Continuing with the theme of video games becoming increasingly complex and containing numerous on-screen cues to attend to simultaneously, a busy screen is likely to produce some false alarms. Overall, The Division handles a busy UI relatively well by relying primarily on pop-up text boxes and a lot of flashing orange to convey information to the player. However, one example that might produce a false alarm can be seen below.

[Image: The Division tutorial text boxes]

During the tutorial, tutorial text appears almost continuously on the left side of the screen. The constant orange blinking and the similar-looking text-box pop-ups could result in the player attending to a cue expecting one message and receiving an entirely different one. Although consistency in game elements is typically praised, when all UI elements share a very similar design, the player may think a certain element represents one thing when it actually represents something else entirely.

Conclusion

Although this post focused primarily on the external-noise component of SDT, it is important to note that the internal-noise component (i.e., how the external aspects described above affect players cognitively) is an integral part of the theory. The internal response is an internal state that produces an individual’s impression about whether a stimulus/cue is present, has shifted, or is absent. These components of SDT also fluctuate from individual to individual and situation to situation (Imai & Tsuji, 2004; Kubovy, Epstein, & Gepshtein, 2013); numerous variables factor into this fluctuation, such as current psychological state, cognitive (e.g., attentional) resources, and prior experience with the type of game, to name a few.

How a player perceives game elements, and how he/she converts this information into an in-game action, is a complex process; consequently, game design can affect the player experience from both psychological and physiological perspectives. As games user research grows as a field and begins implementing additional biometric methods, such as electroencephalography (EEG) to measure local brain activity, we will be able to correlate the external and internal components of SDT during gaming experiences. Although this will likely not happen for quite some time, and it will take time for basic research to accumulate enough evidence to convince companies to invest in such methods, understanding the intersection of these components could help inform game design. More specifically, it could help deliver designers’ precise intentions and, ultimately, an optimal player experience.

References:

Anderson, N. D. (2015). Teaching signal detection theory with pseudoscience. Frontiers in Psychology, 6, 762. doi: 10.3389/fpsyg.2015.00762

Goldman-Rakic, P. S. (1996). “The prefrontal landscape: implications of functional architecture for understanding human mentation and the central executive,” in The Prefrontal Cortex: Executive and Cognitive Functions, eds A. C. Roberts, T. W. Robbins, and L. Weiskrantz (Oxford: Oxford University Press), 87–102.

Imai, A., & Tsuji, K. (2004). Event-related potential correlates of judgment categories and detection sensitivity in a visual detection task. Vision Research, 44, 763–773.

Kubovy, M., Epstein, W., & Gepshtein, S. (2013). Foundations of visual perception. In A. F. Healy & R. W. Proctor (Eds.), Experimental psychology. Volume 4 in I. B. Weiner (Editor-in-Chief), Handbook of psychology (pp. 87–119). New York, NY: John Wiley & Sons.

Colorblind accessibility in video games – is the industry heading in the right direction?

Historically, some video games have taken colorblind accessibility into account, whereas others have not been as accommodating of color-vision deficiencies. Some upcoming games have been publicizing the inclusion of colorblind accessibility and how the developers plan to implement it. However, including these features is one matter; ensuring the games are truly accessible to colorblind gamers is another. This post will review a few common approaches to colorblind accessibility in games from the past few years, as well as the plans for a couple of upcoming titles. Ultimately, we will see where games have succeeded, where they have failed, and how this can inform future implementations of colorblind accessibility in video games.

Types of colorblindness

Before we proceed to reviewing games, let’s briefly review the three most common types of colorblindness:

Protanopia – an inability to perceive red light, resulting in reds and greens looking murky while blues and yellows stand out

Deuteranopia – an inability to perceive green light, resulting in reds and greens looking murky while blues and yellows stand out

Tritanopia – an inability to perceive blue light, resulting in greens looking murky and reds appearing pink

Approaches to colorblind accessibility in games

Filters

Perhaps the most common way of implementing colorblind accessibility in games is to include modes for the different types of colorblindness via a whole-screen filter. This is meant to target the problem colors for colorblind people; however, these filters tend to shift the entire color palette, producing some undesirable colors. Here are some examples:

Call of Duty: Advanced Warfare

Historically, Call of Duty games would only provide a colorblind assist mode that would change colors on the mini map and HUD above players. However, more recently, COD has added a full-on colorblind mode that not only filters colors on the mini map and HUD, but also colors throughout the map.

On some maps in Call of Duty: Advanced Warfare, it is not obvious that a filter has been applied. Below is a screenshot comparison of colorblind mode enabled (left) and normal color mode (right). Overall, the color palette seems unchanged, with only subtle discoloration in certain parts of the map. The most noticeable difference, perhaps, is the color of the stairs seen through the gun’s scope, which appears more purple with colorblind mode enabled (left).

COD advanced warfare

Call of Duty: Ghosts

On other maps in the series, however, there are clear differences. As can be seen below, colors change noticeably when colorblind mode is enabled (right) compared to disabled (left); note how the browns/reds turn purplish. Because reds blend too much with greens for colorblind players, some of the game’s maps would be harder to play, so colorblind mode adds a tinge of reddish-bright pink to help players differentiate between the two. This comes off as unnatural, even to those with colorblindness.

COD ghosts

DOOM

DOOM applies the same type of whole-screen filter for its colorblind modes.

Protanopia mode

doom pro

Deuteranopia mode

doom deu

Tritanopia mode

doom tri

In all three colorblind modes, notice how similar the top images are to the bottom images. The filter makes the image appear as it would to someone who is colorblind, instead of altering the problem colors to make them easier to distinguish for those who have colorblindness.

Here is a gameplay screenshot (the top is colorblind mode disabled and the bottom is deuteranopia mode):

doom colorblind

DOOM inherently uses a lot of reds; notice how turning on deuteranopia mode drowns all of these out, essentially making all the colors bland and muted (even those that aren’t an issue for players with colorblindness).
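
To illustrate why these modes touch everything on screen, here is a minimal Python/NumPy sketch of a whole-screen filter: a single 3×3 color matrix applied to every pixel of the frame. The matrix below is a made-up placeholder for illustration only; it is not DOOM’s transform or any published colorblindness matrix (a real implementation would substitute one, e.g., from Viénot, Brettel, & Mollon, 1999).

```python
import numpy as np

# Illustrative placeholder matrix -- NOT a published colorblindness transform.
FILTER_MATRIX = np.array([
    [0.8, 0.2, 0.0],
    [0.2, 0.8, 0.0],
    [0.0, 0.1, 0.9],
])

def apply_whole_screen_filter(frame: np.ndarray) -> np.ndarray:
    """Apply a 3x3 color matrix to every pixel of an H x W x 3 RGB frame.

    Because the same matrix hits every pixel, colors that were never a problem
    for colorblind players get remapped along with the problem ones, which is
    the 'bland and unnatural palette' issue described above.
    """
    flat = frame.reshape(-1, 3).astype(np.float32)
    filtered = flat @ FILTER_MATRIX.T
    return np.clip(filtered, 0, 255).astype(np.uint8).reshape(frame.shape)

# Example: a tiny 2x2 "frame" with pure red, green, blue, and gray pixels.
frame = np.array([[[255, 0, 0], [0, 255, 0]],
                  [[0, 0, 255], [128, 128, 128]]], dtype=np.uint8)
print(apply_whole_screen_filter(frame))
```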

Overwatch

Below is normal color mode in Overwatch, where all vital information is represented by colors that can be easily differentiated. When tritanopia mode is activated, you can see that the filter makes the “friendly” and “party” colors, as well as the “enemy” and “alert” colors, appear nearly identical. Even though there is an option to adjust the strength of the mode, this only changes brightness/blandness, not contrast.

overwatch full color vision

The next screenshot is an example of how normal color mode looks to someone with tritanopia, where “friendly/party” look nearly identical and “enemy/alert” are fairly similar in color as well. However, once tritanopia mode is activated, the issue actually worsens: “friendly/party” remain nearly identical and “enemy/alert” become even harder to distinguish.

overwatch tritanopia mode

Madden 17

Although the game is not out yet, EA Sports has shared information about the accessibility options in its upcoming yearly title, Madden 17. EA Sports will include colorblind modes similar to the ones discussed here, implemented as whole-screen filters. More information and screenshots of this feature can be found on EA Sports’ official site. However, because Madden 17 is adopting a method that many games have implemented poorly, it remains to be seen whether the proposed filters will avoid making other, non-problematic colors on screen look unnatural.

Customizable colors for vital information

Another way to implement colorblind accessibility in video games is to include preset or customizable color combinations, based on types of colorblindness, for representing different types of vital information in the game. This method tends to receive more favorable feedback from those with colorblindness since it only induces changes in colors that are problematic, without altering the rest of the game’s color palette.
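
As a contrast with whole-screen filters, here is a minimal sketch of the targeted approach: only the handful of colors that carry vital information (squad/team/enemy indicators) are looked up from a preset or player-customized palette, and everything else on screen is left alone. The role names follow the named colors listed for Battlefield 4 below, but the hex values, function name, and structure are illustrative assumptions, not the actual palettes or code of any game.

```python
# Hypothetical palettes -- hex values are illustrative, not taken from any game.
PALETTES = {
    "default":      {"squad": "#7CFC00", "team": "#87CEFA", "enemy": "#FF8C00"},
    "protanopia":   {"squad": "#A0A0A0", "team": "#9370DB", "enemy": "#32CD32"},
    "deuteranopia": {"squad": "#9370DB", "team": "#4B0082", "enemy": "#FF8C00"},
    "tritanopia":   {"squad": "#9370DB", "team": "#1E90FF", "enemy": "#FF8C00"},
}

def ui_color(element_role, mode="default", overrides=None):
    """Return the color for a vital UI element (squad/team/enemy marker).

    Only these role colors change with the selected mode or the player's own
    overrides; the rest of the game's palette is untouched, unlike a
    whole-screen filter.
    """
    palette = dict(PALETTES.get(mode, PALETTES["default"]))
    if overrides:  # player-customized colors win ("one size does not fit all")
        palette.update(overrides)
    return palette[element_role]

print(ui_color("enemy", mode="deuteranopia"))
print(ui_color("enemy", mode="deuteranopia", overrides={"enemy": "#FFD700"}))
```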

Battlefield 4

Battlefield 4 changes the colors of specific essential visual indicators, such as the UI elements related to the player’s squad, team, and enemies. Additionally, since many people do not know their exact classification of colorblindness, the game instructs players: “If colorblind, choose the team colors that differentiate the most among each other.” Unlike whole-screen filters, this approach does not affect other on-screen colors that are unaffected by colorblindness, resulting in a more natural experience for those with colorblindness.

Normal color mode: Squad: light green, Team: light blue, Enemy: orange

battlefield 4 normal

Protanopia mode: Squad: gray, Team: purple, Enemy: green

B4 protanopia

Deuteranopia mode: Squad: purple, Team: indigo, Enemy: orange

B4 deuteranopia

Tritanopia mode: Squad: purple, Team: blue, Enemy: orange

B4 tritanopia

Destiny

Similar to Battlefield 4, Destiny offers preset color palettes for these three types of colorblindness. Below are two examples:

Deuteranopia mode

destiny 2

Tritanopia mode

destiny tritanopia

Battleborn

Battleborn includes a feature called “Color-Coded Teams,” where players can choose the colors for essential information, such as each team’s heroes, health bars, and competitive objectives. This differs from the preset colorblind modes in Battlefield 4 and Destiny because more customization is permitted. It is an especially important inclusion for a game like Battleborn because of its naturally vibrant color palette and chaotic gameplay.

Incorporating iconography as supplementary conveyance

Perhaps the best approach to colorblind accessibility is including iconography as a form of supplementary conveyance. While it is always best practice to convey (vital) information via multiple methods (e.g., the “trifecta” – audio, visual, and textual conveyance), this is not always feasible. It is imperative that vital game information not be conveyed solely by color, as that could negatively impact the experience of a player who struggles to see the specific color.
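
A minimal sketch of what “not relying on color alone” looks like in data terms: every vital marker carries an icon and a label alongside its color, so stripping the color channel still leaves the marker identifiable. The field names and example markers are hypothetical, not taken from any particular game.

```python
from dataclasses import dataclass

@dataclass
class MapMarker:
    """A map/HUD marker that never relies on color alone."""
    label: str   # textual conveyance (e.g., legend entry or hover text)
    icon: str    # shape/iconography conveyance
    color: str   # color is supplementary, not the only cue

# Hypothetical markers in the spirit of an open-world map.
markers = [
    MapMarker(label="Objective", icon="star",     color="#FFD700"),
    MapMarker(label="Teammate",  icon="triangle", color="#1E90FF"),
    MapMarker(label="Enemy",     icon="skull",    color="#DC143C"),
]

# Even with the color information ignored, each marker stays identifiable.
for m in markers:
    print(f"{m.icon:<8} {m.label}")
```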

Grand Theft Auto 5

Grand Theft Auto 5, like many of the games in the GTA series (and other open-world games), includes icons as a form of conveyance. Although colors are assigned to specific icons, color alone is not relied upon to convey what they mean.

GTA 5 map

Hearthstone

In Hearthstone’s most recent update, some noticeable changes were made to the card collection UI. While it is possible that these changes are purely aesthetic and were not implemented with usability in mind, there are now colors associated with the specific hero of each of your decks. In addition to these colors, Blizzard also incorporated each hero’s symbol, the same iconography used on the tabs of the card collection UI. Including this iconography not only allows the player to make an association between hero, color, and icon, but also ensures that color is not the sole source of information.

[Screenshot: Hearthstone deck list with hero colors and icons]

Recore

Recore, the upcoming game from Armature Studio, will implement an “innovative icon-based” approach to colorblind accessibility. Specifics about this system have not been shared, so little is known, and it remains to be seen how effective its implementation will be. More on this system can be found at numerous outlets, such as SegmentNext and Polygon.

Conclusion

  1. Whole-screen filters are typically not the best approach to colorblind accessibility.
    • Colorblind people see a limited range of colors.
      • Compressing the entire color palette pushes hues away from the problematic areas and bunches them up against other hues, swapping one set of color clashes for another.
    • Changing all of the colors that are distinguishable to those with colorblindness makes the game look bizarre and unnatural.
      • Do not alter that which does not need to be altered.
      • Help the player distinguish between vital information necessary to play the game.
      • A player should not experience colors in games differently than they perceive them naturally in the world.
  2. Ideally, provide the option to let players select and customize colors for vital information.
    • These can be applied to outlines, health bars, icons, names, object indicators, etc.
    • “One size does not fit all”
      • There are varying degrees of colorblindness, so customization can offer a personal and, ultimately, more optimal experience.
  3. Avoid relying on color alone (by adding symbols, text, varied enemy designs, etc.).
    • If that is not possible, include a simple palette of single-color choices that are not problematic for those with colorblindness (e.g., dark orange/light blue).
    • If neither of these is possible, briefly review the game aspects that absolutely must be differentiated to play successfully (e.g., teammates vs. enemies) and decide whether those specific UI/gameplay elements can be modified.

Final thoughts

For some developers, colorblind accessibility is incorporated into the development of a game from the beginning, whereas for others it may be an afterthought or a complete oversight, accidental or not. The latter is understandable for many reasons: a lack of budget allocated to such resources, time crunch, a lack of flexible tools for designers, etc. If usability research is incorporated into the development schedule from the beginning, feedback on colorblind features (or the lack thereof) can be gathered, iterated on, implemented, and tested before it is too late to include such accessibility.

The important question is: how will future games approach colorblind options? Will developers revisit previously unsuccessful methods, as Madden 17 seems poised to do; learn from games that have been lauded by colorblind players, like Battlefield 4; or be proactive in implementing innovative systems that allow full accessibility for colorblind players, à la Recore?

DOOM (PS4) – subtle, yet appreciated, usability

[Image: DOOM cover art]

Intuitive access to current objective and challenges

DOOM has many subtle usability features that contribute to the user experience. The first is a secondary way to display the current objective and challenges, mapped to down on the d-pad. The objective appears on the left-hand side of the screen and the challenges appear on the right, and both can be dismissed by pressing down on the d-pad a second time. This handy, simple, one-button design flows nicely with the relentless pace DOOM sets. Below are three screenshots demonstrating this feature.

[Screenshot: Tutorial text pop-up explaining how to view mission information.]

 

[Screenshot: Current objective on the left and challenges on the right.]

 

[Screenshot: Completed challenges are noted in the UI in all caps and a change of color, making them easy to track.]

Color-coded objective markers

Multiple times throughout the campaign, the player has to obtain colored key cards or skulls to gain access to corresponding doors. It is great practice to display vital information in more than one way; in this case, the color-coding of the objective markers provides supplemental information to the distance marker. If the player proceeds to the incorrect door first, he/she is less likely to make an error navigating to the other, as the additional distinguishing information helps avoid mixing up the objective markers.

[Screenshot: Color-coded objective markers]

Green lights for vertical traversal

There are many scenarios in DOOM where the player has to traverse vertically. The density of some of these areas can make that seem like a daunting task; however, green lights on ledges indicate directionality and climbable surfaces. This is similar to games like Tomb Raider and the Uncharted series, which require a lot of traversal and design such cues to be noticeable to players while still feeling like a natural part of the game’s world; they don’t stick out like a sore thumb (I’m looking at you, Ryse: Son of Rome). DOOM uses this method in a couple of ways, which can be seen below; flashing green lights seem like something you would realistically see while navigating a facility on Mars.

[Screenshot: Green lights marking climbable ledges]

Available weapon points/suit tokens reminder

Another subtle feature is a reminder of how many weapon points and suit tokens the player currently has available to spend, which appears on the lower half of the screen in the center (see screenshot below). There were numerous times when I was so caught up in slaying demons that I lost track of how many points and tokens I had, so this reminder helped me level up consistently with the natural progression of the game, which, inevitably, only led to more brutal slaying. These reminders appeared during slower moments in the game, so they didn’t interrupt any action and allowed the player to focus on spending available points and tokens according to his/her play style. Promoting recognition rather than recall is always a win in my book.

Weapon mod reticles

Each weapon and its mods have their own distinct reticles. This system also adds the ability to visually track the current ammo for certain weapon mods without diverting the player’s attention away from the reticle in the center of the screen. Not only does it track the number of shots remaining for each mod, it also provides color cues (i.e., green when ammo is full and flashing red when it is low). This is very useful because many weapon mods involve firing multiple shots in a row. See the video below for examples of this system with the charged (triple) shot mod for the shotgun and the missiles for the assault rifle.
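
A minimal sketch of the kind of state logic that could drive such a reticle: the mod’s ammo state maps to a tint, with full ammo reading green and low ammo flashing red. The thresholds, flash rate, and function name are assumptions for illustration, not id Software’s actual implementation.

```python
def reticle_color(ammo, capacity, time_s, low_fraction=0.25):
    """Pick a reticle tint from the current mod ammo state.

    Full ammo reads green, low ammo flashes red, and everything in between
    stays neutral so the player's eyes can remain on the reticle.
    """
    if ammo >= capacity:
        return "green"
    if ammo <= capacity * low_fraction:
        # Toggle between bright and dim red every 0.25 s to create the flash.
        return "red" if int(time_s * 4) % 2 == 0 else "dim_red"
    return "white"

# Charged (triple) shot example: capacity of 3 shots, sampled at one moment.
for ammo in (3, 2, 1, 0):
    print(ammo, reticle_color(ammo, capacity=3, time_s=0.0))
```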

Simultaneous presentation of vital information

Lastly, an issue I encountered was the simultaneous presentation of vital information: verbal narration progressing the story alongside tutorial text. Although this only happened once to my recollection, it makes the player’s auditory (narration) and visual (text) channels compete for the same attentional resources, likely resulting in him/her missing some of the information being conveyed. The screenshot below demonstrates this scenario, with the verbal narration from Samuel Hayden under “Voice Comm” and the “Combat Rating” tutorial at the bottom of the screen.

Recommendation: The verbal narration is essential information here in order to progress the story, whereas the combat rating tutorial is not vital at that precise moment because it describes, generally, how combat results in points to upgrade weapons. This issue could be resolved by simply delaying the combat rating information until the player’s attentional resources aren’t under load from other stimuli.

[Screenshot: Voice Comm narration and Combat Rating tutorial presented simultaneously]

Final thoughts

DOOM is an unrelenting, adrenaline-pumping, demon-slaying rush. Aside from the tight gameplay mechanics, the inclusion of design features such as the specific visual cues on modded-weapon reticles allows the player to devote more resources to slaying demons and surviving when the action is dialed to 11, rather than worrying about ammo running low and having to slow the action and pull up the weapon wheel, for example.

In the downtime between demon onslaughts, DOOM remains fun in part because of these usability features. The surprising amount of exploration and vertical traversal is supported by slick visual cues that aid navigation in multiple ways. Informational pop-ups during this time let the player know he/she can take a moment to breathe, upgrade, and prepare to, once again, fight like hell.

TMNT: Mutants in Manhattan – lack of accessibility and poor tutorialization

I didn’t want to believe the reviews (e.g., IGN, Gamespot); I really, really didn’t. The nostalgia and desire for a good TMNT game were just too strong. However, I should have cut my losses and quit after the title screen:

[Screenshot: Title-screen message recommending a controller for the “optimal gameplay experience”]

From a usability/accessibility standpoint, this might be the worst possible opening message for a PC game. Luckily, I was able to play with a DS4 controller; otherwise, I wouldn’t have gotten the “optimal gameplay experience.”

Although this issue likely warrants no explanation, a PC game should be optimized for PC controls, with controller support secondary (although it is a very welcome accessibility feature). Turning on the game and seeing this screen makes me feel taken advantage of, because the message suggests that the PC version is not well-optimized and that less time, money, and effort were allocated to it. Moreover, it completely shuts anyone who plays on PC and does not own a controller out of the best possible experience with this game. Ultimately, this limits accessibility to a game that should be available to a wide audience, ranging from young kids to adults who grew up with TMNT. Immediately, they know they are not playing the best version available, and this could ruin the experience from the outset.

The tutorial room

Another usability issue lies within the tutorial, an optional mode that teaches the player all of the necessary controls in a room separate from any in-game action. However, TMNT: Mutants in Manhattan has more than just a few controls to remember, making the tutorial room a less-than-ideal choice for this kind of game. While it’s not overly complex, it has a surprising number of controls, ranging from basic movement/combat, to item use, to turtle swapping, to commands, to distinct ninjitsu moves for each turtle. See below for a full playthrough of the tutorial.

 

A tutorial room that takes almost ten minutes to complete is not beneficial to players here because it is information overload: multiple controls are explained one after another, each followed by a gameplay execution. More importantly, it’s just plain boring and does not hold the player’s attention, making it even more likely to be skipped altogether and ultimately leaving the player even more confused in the actual game and unable to obtain the “optimal gameplay experience.”

Recommendations:

  • Incorporate the tutorial into the first mission of story mode. This will help reduce the likelihood of the tutorial being skipped altogether.
  • Exclude the redundant on-screen text during the tutorial, which is simultaneously narrated by in-game characters (this forces the player to split his/her attention). Instead, integrate the textual explanations into gameplay diagrams/actions. This will allow the player to learn by doing while likely maintaining his/her focus. Additionally, it can reduce working memory overload for players with less prior knowledge, who likely make up a large percentage of the audience for a game with such broad appeal.
  • Break aspects of the tutorial into chunks and present them as they occur organically in the open world. This will require the player to remember less and produce less load on memory. Also, this segmenting will allow him/her to proceed at his/her own pace, granting him/her a greater feeling of control. Ultimately, this will likely lead to less player frustration and a better overall experience.

Competitive analysis

A competitive analysis is an evaluation companies use to see what similar companies are doing with their products in order to figure out how to make their own product unique and marketable. In games user research, a competitive analysis can be a useful internal tool to help inform game design and specific features during many stages of development.

An example:

Company X has been running many internal and external playtests of their current in-development game. They have noticed a consistent trend of players not realizing when an objective has been updated, completed, or transitioned to a new one. The design team believes this might be an issue with the new-objective UI, so a competitive analysis is conducted to see how other games have handled such UI design. This can give the development team ideas about UI designs that have and have not worked for other games, and about how to design theirs to be functional, effective, and consistent with the art direction of the new game. The following are some examples of new-objective UI in games that could be presented to the development team for consideration.

Borderlands 2

[GIF: Borderlands 2 new-objective UI]

When the player obtains a new objective, the objective text/check box slides into the upper-right portion of the screen, flashes briefly, and remains under the mini-map.

[GIF: Borderlands 2 objective completion]

As objectives are completed, they get checked off, flash, and exit the screen.

Sniper Elite III

[GIF: Sniper Elite III objective text]

Completed and new objectives are indicated by text that appears in the top-left portion of the screen.

The Witcher 3: Wild Hunt

[GIF: The Witcher 3: Wild Hunt objective update]

Under the mini-map, the objective is updated as the player completes required portions of the main objective. Relevant text/numbers are updated and the text is highlighted as a cue to the player.

Diablo 3

[GIF: Diablo 3 objective update]

Objectives, on the right side of the screen, are updated with a check and a yellow “complete” and “new,” which act as visual cues to notify the player and then fade from the screen.

Middle Earth: Shadow of Mordor

[Screenshot: Middle Earth: Shadow of Mordor new-objective indicator]

A new-objective text indicator appears momentarily in the center of the screen before trailing off to the upper-left corner, where it remains as a reference for the player.

Conclusion

A report for the development team can be generated, including descriptions of the game-specific features and critical comparisons between the games. Additionally, suggestions for potential directions may be included in the report; however, this would depend on the philosophy of the research department and development team.

This is just one example of a competitive analysis for games user research. Others might include investigating how different games handle blood spatter, damage indicators, or tutorials. A vast knowledge of games is undoubtedly beneficial for such analyses; however, in this digital age, with access to YouTube at all times, it is certainly possible to conduct them with little prior knowledge of the specific aspects of different games.

Time spent playing different game modes – mock analysis

Company X wants to know how long players are spending in the different modes of their new third-person shooter. The game has four modes: single-player campaign, online multiplayer, and local and online horde modes. An hour before Company X’s weekly update meeting, a researcher is asked to show data representing the time spent (in hours) playing these game modes during the game’s first week following launch, based on a small random sample of players. A time-spent analysis is important because it demonstrates which activities/game modes players are engaging in and for how long, which can help guide game development during production as well as post-launch, ultimately showing whether the game matches design intent.

The researcher was not worried about generating such data on little notice, because Company X collects an abundance of data via telemetry. The figure below was presented along with the statistical analyses of the data, which were paired-samples t-tests because of the within-subjects design (each group consisted of the same players).

Furthermore, the researcher calculated 95% confidence intervals (the I-shaped bars on the graph) for each group; these are ranges of scores constructed such that, across repeated samples, 95% of the intervals would contain the true population mean. The true mean would simply be the average time spent in each game mode by every player who played the game during its first week (as opposed to just the small random sample used in this example). Since the researcher doesn’t know the true mean of the entire population of players, he/she doesn’t know whether the sample means here are a good or bad estimate of that value. So, rather than fixating on these four sample means, the researcher can use an interval estimate instead, using each sample mean as the midpoint and setting a lower and upper limit.

Essentially, the researcher calculated upper and lower limits for each game mode in the sample. Again, since this is only a small sample of the population of players of Company X’s new game, we do not know the true mean of the population. However, if Company X gathered 99 more samples of players, generated 99 more figures like the one below, and calculated a 95% confidence interval for each, roughly 95 of those 100 intervals would be expected to contain the true population mean.
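
For readers who want to see the mechanics, below is a minimal Python sketch of how one such paired comparison and the per-mode 95% confidence intervals could be computed. The data are randomly generated stand-ins, not the sample reported here, and the intervals are plain t-based intervals rather than the BCa bootstrap intervals quoted in the results below.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical hours for 23 players in two modes (NOT the sample reported here).
campaign = rng.normal(7.4, 4.3, size=23)
local_horde = np.clip(rng.normal(1.5, 2.2, size=23), 0, None)

# Paired-samples t-test: the same 23 players in both modes (within-subjects design).
t_stat, p_value = stats.ttest_rel(campaign, local_horde)

def mean_ci(x, confidence=0.95):
    """Mean and a t-based confidence interval (simpler than a BCa bootstrap)."""
    m, se = np.mean(x), stats.sem(x)
    half = se * stats.t.ppf((1 + confidence) / 2, df=len(x) - 1)
    return m, m - half, m + half

print("campaign mean and 95% CI:", mean_ci(campaign))
print("local horde mean and 95% CI:", mean_ci(local_horde))
print(f"t({len(campaign) - 1}) = {t_stat:.3f}, p = {p_value:.4f}")
```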

[Figure: Mean time spent (hours) per game mode with 95% confidence intervals]

The small random sample consisted of 23 players and their average time spent in each of the four game modes: single-player campaign, online multiplayer, and local and online horde modes. While the figure alone can show differences visually, further analyses can validate whether there are statistically significant differences between the modes.

Based on this sample, players spent more time, on average, playing the single-player campaign (M = 7.37, SE = .900) than local horde mode (M = 1.53, SE = .455). This difference, 5.84 hours, BCa 95% CI [3.875, 7.803], was significant, t(22) = 6.166, p < .001. Players also spent more time playing the single-player campaign than online horde mode (M = 3.87, SE = .614). This difference of 3.5 hours, BCa 95% CI [1.180, 5.820], was significant, t(22) = 3.128, p = .005.

While there was no significant difference in time spent playing the single-player campaign compared to online multiplayer, players spent more time, on average, playing online multiplayer (M = 8.35, SE = .857) than local horde mode. This difference of 6.82 hours, BCa 95% CI [4.727, 8.908], was significant, t(22) = 6.763, p < .001. Players also spent more time, on average, playing online multiplayer than online horde mode; the difference of 4.48 hours, BCa 95% CI [1.844, 7.013], was significant, t(22) = 3.664, p = .001.

Additionally, players spent more time, on average, playing online horde mode than local horde mode; the difference of 2.34 hours, BCa 95% CI [-3.674, -1.004] (local minus online), was significant, t(22) = -3.634, p = .001.

Conclusion

To summarize, players spent more time playing both the single-player campaign and online multiplayer than either horde mode, and more time playing online horde mode than local horde mode. Again, the confidence intervals are an important piece of information here because they allow Company X to estimate, with a known level of confidence, where the true mean of the player population lies for each game mode and, ultimately, to generalize findings from this small random sample of players.

For more on paired-samples tests and confidence intervals:

Field, A. (2013). Discovering statistics using IBM SPSS Statistics. London, England: SAGE.

Alienation – tutorial blunders and a hindsight proposal for usability testing

[Image: Alienation cover art]

Having spent countless hours playing Dead Nation and Resogun, I have had Housemarque’s next title, Alienation, on my radar for quite a while. It is definitely a worthy spiritual successor to Dead Nation, adding three different playable classes, deeper RPG elements, and four-player multiplayer. I’m currently about 7 hours into the campaign with the Saboteur; I’ve played solo as well as with 1–3 additional teammates. Both play styles are fun in their own way and, while four-player multiplayer can at times be too hectic to know what’s happening, it’s an amusing type of chaos. However, I noticed a usability issue before even playing the first real mission.

Non-intuitive tutorial access

The main mission-select screen for Alienation can be seen below. It is an intuitive design: missions take place all over the world, often with several in the same location, and the world map makes navigating between them easy.

[Screenshot: Alienation main mission-select screen]

However, below is a zoomed-in screenshot of what is important on this screen and, although it is an overall intuitive design, two issues are present. The cursor the player uses to navigate the map is a green circle, which can be seen in the left-hand screenshot below under “Barrow, Alaska.” This is where the cursor is located by default, even though “Barrow, Alaska” is actually the first real mission and not the tutorial. The tutorial is “Raining Camp, Hawaii,” which is the second issue: the “T” in “Training” is presumably cut off by the screen, which could confuse the player even further. They might assume that “Raining Camp, Hawaii” is the second mission, based on its displayed name and on the cursor sitting on “Barrow, Alaska” by default. The option to skip the tutorial is a nice addition for veterans of Housemarque or twin-stick shooter games; however, how that option is presented in Alienation is not clear. This lack of clarity might also result in new players skipping the tutorial completely before entering the first mission, and missing the introduction to the controls might induce frustration.

The tutorial’s title is partially cut from the screen and not selected by default.

Reloading visual cues – a hindsight proposal

Alienation has a “perfect reload” mechanic similar to the Gears of War series. Briefly, a real-time progress bar is presented to the player and they need to press a button (here, it’s R3) at the right time (here, indicated in green on the bar). Mistiming the reload will result in the green turning to red and the player having to wait longer for their character to reload. This bar is presented in two places: above the gun interface in the bottom-left-hand portion of the screen and right below the character; however, there is one main difference in these presentations. Besides being obviously larger, the bar above the gun interface presents an “R3” cue when the bar reaches the green, essentially reminding the player what button to press and when. The GIF below demonstrates both reloading bars in action.

[GIF: Reloading bar presented in two places: above the gun UI and below the character]

This dual presentation made me wonder if both are actually necessary. Personally, from a line-of-sight perspective, I rarely look away from the character when playing unless I know I’m completely alone; there are just too many situations with an overwhelming number of enemies swarming the character, and the split second it takes to look over to the bottom-left to reload could mean the difference between life and death. Additionally, if the player is fixated on the character, the R3 pop-up could be mistaken for an enemy coming from the periphery. While I do not know whether Housemarque conducts external usability tests for their games, I would imagine a handful of tests would have brought this matter to the forefront. Regardless, in hindsight, I would propose this to Housemarque – would playtests categorize this extra bar as unnecessary and/or distracting? Furthermore, I know that eye-tracking usability studies can be beyond many companies’ budgets, but I wonder if that method, combined with direct observation and self-report testing, would have shed light on this topic.

Pierre Chalfoun, the biometrics project manager at Ubisoft Montreal, gave an excellent talk about eye tracking and line of sight at GDC 2016. While I highly recommend watching the whole video, I would like to direct your attention to 28:05 – 32:15 in the video below, where he discusses a mixed-methods case study conducted with eye tracking in Assassin’s Creed Unity to help inform UI design based on players’ actions. He makes a great point that players can have difficulty self-reporting in-game reactions, such as where on the screen they would look when completing certain actions, which is where the quantitative method of eye tracking can be very useful.

For Alienation, I would go out on a limb and hypothesize that most players would not use the reload UI on the left of the screen, making it superfluous to an already busy lower quadrant of the screen. If this did happen to be the case, perhaps an “R3” cue under the character, in addition to the bar, would be beneficial, similar to how Ubisoft integrated stealth UI elements in Assassin’s Creed Unity based on the results of their study.

In the end, this is pure speculation and we can only guess how such a study would conclude. While this question could be answered with traditional usability tests, eye tracking does contribute a quantitative component, which is ultimately more trustworthy than a player’s self-report recollection about such a UI design/feature and its usability.

How Hearthstone perfects the digital card-collecting and deck-building experience

There are a variety of digital trading card games, containing every mixture of favorable and unfavorable user interface (UI) designs. If there is not an intuitive system in place for the card-collecting and deck-building portions of these games, the process can become laborious, demanding effort on trivial tasks that could instead go toward building an awesome deck. Since deck building can be one of the most enjoyable aspects of trading card games, a clumsy process could ultimately hinder the player’s experience. With the recent release of Whispers of the Old Gods, we will all be spending a lot of time with the collection and deck-building interface in Hearthstone, so let’s see how it stacks up against well-established usability heuristics for UI design.

Brief overview

Below is a screenshot of the collection and deck-building screen for Hearthstone on iPad. It is set up like a virtual card binder with numerous filters, all while maintaining aesthetic appeal and a minimalist design. Starting in the top left, there are tabs that let the player filter cards by the nine hero classes and neutral cards. The two gold icons to the right of these tabs allow the player to view collected card backs and heroes. The bulk of the screen is comprised of the cards currently shown, with the hero name at the top along with relevant artwork, the cards themselves, and the quantity of each in the collection.

On the far right is the second-largest portion of the screen, a scroll bar where the player can select any of the basic hero decks and custom-built decks. Once a deck is selected, the player can tap again to see an overlay of the mana distribution, the hero’s power, and options to rename or delete it. The bottom-left portion of the screen allows the player to filter by card set and mana crystal cost. Additionally, there is an indicator of online friends, a clock, and a battery indicator (much-welcomed inclusions). Moving from the center to the bottom-right of the screen, there is a search bar, a crafting button that opens the card-crafting pane, the current number of decks, the amount of arcane dust, and back and menu buttons.

[Screenshot: Hearthstone collection and deck-building screen on iPad]

Match between system and the real world 

Upon first navigating Hearthstone’s menus and its card-collecting and deck-building interface, it is immediately obvious how well the targeted Warcraft theme and identity are expressed. This, along with the virtual-binder design – similar to how the backpack is used as the menu system in the Pokemon games or how the player slices menu buttons in Fruit Ninja – can allow for deeper immersion and enhance the overall experience.

Hearthstone also makes the screen intelligible from left to right, logically and intuitively, as if the player were flipping through a card binder. There are arrows to navigate forward or back and, if no filters are selected, the cards are presented in order of mana cost, as if reading a book. Additionally, the tabs at the top of the screen that allow the player to filter cards by hero class are represented by intuitive icons easily recognized by players, such as knives for a Rogue and a sword for a Warrior.

[Screenshot: Hero tabs with class icons]

Visibility of system status

Hearthstone’s collection UI remains the same throughout the process of searching your collection and building a deck, with a couple of subtle changes introduced when card crafting. This consistency makes it easy to keep track of multiple things, such as which card types and mana costs are currently filtered. Hearthstone achieves this with pulsing and glowing visual cues, such as a blue outline around the crafting button and lighting up the currently filtered mana cost.

[Screenshots: Filter and crafting status cues]

Also, when in card crafting, cards you currently have and can afford to craft are shown and outlined in blue, whereas cards you do not have, but can craft, are faded but highlighted in a similar glow. Cards you do not have and cannot currently afford to craft are grayed out to closely match the background, clearly indicating that this option is unobtainable at the moment.

[Screenshot: Card-crafting screen showing craftable and uncraftable cards]

Error prevention in card crafting

Confirmation messages can be a great way to prevent errors in UI design. Hearthstone introduces error prevention during card crafting by presenting the player with a confirmation dialog box when attempting to disenchant a card that would leave them with fewer than two copies, the maximum allowed per deck for most cards. This message gives players a moment to reflect on their current action and can prevent them from accidentally disenchanting a super-powerful legendary card.
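
A minimal sketch of this kind of error-prevention guard: before destroying a copy, check whether the action would drop the player below a full playset and, if so, ask for confirmation first. The function names, the playset handling, and the console prompt are illustrative assumptions, not Blizzard’s implementation (legendaries cap at one copy per deck, which the sketch parameterizes).

```python
def confirm(prompt):
    """Stand-in for a confirmation dialog; swap in real UI code."""
    return input(f"{prompt} (y/n): ").strip().lower() == "y"

def disenchant(card, owned, is_legendary=False):
    """Disenchant one copy, asking for confirmation if it would leave the
    player below a full playset (2 copies, or 1 for legendaries)."""
    playset = 1 if is_legendary else 2
    if owned - 1 < playset:
        if not confirm(f"Disenchanting {card} leaves you below a full playset. Continue?"):
            return owned            # player backed out; nothing destroyed
    return max(owned - 1, 0)

# Example: two copies of a hypothetical card -> the guard fires.
# remaining = disenchant("Fiery War Axe", owned=2)
```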

[Screenshot: Disenchant confirmation dialog]

Recognition rather than recall in card crafting

The card-crafting screen is the only slight deviation from the main collection screen. Here, players are presented with a pane with instructions and additional filters for card crafting. The inclusion of this pane in place of the deck pane on the main screen presents players with a well-located reminder about the general idea behind card crafting, which is ultimately more beneficial than accessing decks during this portion.

[Screenshot: Card-crafting instruction pane]

Other UI usability features worth noting:

  • The interactive UI stands out from the background, as evidenced by the design of the buttons, as well as by the use of concepts players can relate to from real life, such as searching through an organized physical card collection.
  • Throughout, there is consistency in font sizes, colors, styles, and icons, which can be read clearly even though they are small.
  • There are subtle aspects of the UI that help keep the player informed during the entire collection/deck-building process, such as indicators on cards in your collection when you have the max amount of that card allowed in the deck or if you have none left. Again, this allows the player to recognize rather than recall what cards are in the deck.

Final thoughts

Not only does Hearthstone have deep yet simple and accessible gameplay, it also seems to have carried that philosophy into its UI design. This is incredibly important, as it results in a more complete package that is accessible not only to younger players, inexperienced card game players, and players new to Warcraft, but also to the most hardcore card game players, such as Magic: The Gathering veterans. Ultimately, regardless of experience, the UI is not a (usability) barrier to entry and is intuitive and easy to use throughout. However, building dominant decks on a budget and remaining competitively ranked is another story. 🙂

Dead Star tutorial think-aloud

dead star

Dead Star is a twin-stick space shooter with hybrid MOBA and RPG elements. Here, it was reviewed in the format of a think-aloud, a form of usability testing in which a researcher observes a participant “thinking aloud” – verbally expressing feelings, experiences, actions, or issues encountered while playing the game (Desurvire & Seif El-Nasr, 2013). This think-aloud was conducted with me as the researcher and, as the participant, a female in her mid-twenties who does not self-identify as someone who plays video games. She played the tutorial of the PS4 version of Dead Star, which comprises three sections – Loadout, Piloting, and Conquest – and lasted about 30 minutes. The initial tutorial screen can be seen below.

[Screenshot: Dead Star initial tutorial screen]

The player proceeded to complete the tutorial in the order in which the sections were presented, beginning with Loadout, which can be seen below.

[Screenshot: Loadout tutorial]

As seen above, the tutorial commences with Mags Mu’zeta speaking instructions to the player along with the written instructions presented in the top-left corner of the screen. Once the player noticed the option to read the text and skip ahead by pressing “X,” she progressed through these subsections at an expedited pace, not waiting for Mags Mu’zeta to finish reading the introduction and instructions aloud.

[Screenshot: Loadout instructions from Mags Mu’zeta]

At the heart of the Loadout section, the tutorial presents the player with the three categories of spaceships that can be used during gameplay, as seen above. The player was instructed to navigate to the Estari Warden and swap it out for the Marksman. Although the currently selected ship is outlined in a yellow square and labeled with its name in the box, the player became confused as to why she could not swap the Warden out for the correct ship, the Marksman. The issue was simply that she was unaware she was currently selecting the Bulldog instead of the Warden, which was rectified after a few seconds of focusing on her options. She then correctly highlighted the Warden and swapped it out of the loadout for the Marksman.

[Screenshot: Ship selection in the Loadout tutorial]

The player then navigated, without issue, back to the main tutorial screen to select the Piloting tutorial. This was where the player learned the basic controls, including navigating the ship, shooting, and collecting loot. First, the player was instructed to collect 50 ore and return to the home base, which was conveyed verbally by Mags Mu’zeta as well as with a small text indication at the top of the screen (see screenshots below). When instructed to shoot the ship’s gun for the first time, the player verbally expressed confusion about how to control the dual-stick firing system (right stick to aim and RT to fire). After some troubleshooting and time spent looking at the controller, she was able to successfully destroy the ore rocks and collect the ore.

[Screenshots: Ore-collection instructions and gameplay]

The player’s current amount of ore is indicated in the HUD on the bottom-right of the screen (see below). Here, you can see the current amount in the player’s inventory, as well as a progress bar indicating how close the player is to capacity. While not explained at this point in the tutorial, different types of ships have different ore-carrying capacities. At this point, the player was controlling the Estari Marksman, which has a capacity of 50 ore and, as seen in the screenshot, she had 40. However, she was not aware of this HUD element, even though there is a brief text pop-up explaining it. Once she had reached the capacity of 50 and was unable to collect the remaining available ore, she asked, “why can’t I collect more ore?” and “do I have too much?” After some time spent flying the ship around the remaining ore and failing to pick it up, she continued to her objective by navigating the ship in the direction of the yellow directional reticle.

[Screenshot: Ore HUD showing the player at 40 of 50 capacity]

Upon returning to the home base, the player was instructed to transfer the ore in her inventory to the base as supplies for upgrades. The UI for this can be seen in the screenshot below. Similar to other instructional segments throughout the tutorial, there are text instructions at the top of the screen as well as button prompts at the bottom (e.g., X Transfer Ore). When presented with this screen, the player showed a confused facial expression. However, after a few seconds of scanning up and down the screen a number of times, she was able to successfully transfer the ore.

[Screenshot: Ore-transfer UI at the home base]

After upgrading the home base, the player was instructed to investigate a nearby cargo salvage. Before proceeding to the cargo, she was asked to recruit nearby drone ships to fly with her. She quickly and correctly recruited two drone ships; however, she then attempted to circle a larger, stationary object, thinking it was another drone, and asked, “are they supposed to come with me?” The object’s similar size and shape caused it to be mistaken for a drone ship (see screenshot below).

[Screenshot: Stationary object resembling a recruitable drone]

She then proceeded to navigate the ship to the correct location of the cargo salvage. Once she arrived, two text pop ups were presented on the screen simultaneously, although only one was relevant to the current objective, as seen in the screenshot below. She verbally expressed frustration with this scenario, saying “how am I supposed to read both at the same time?!”

Dead Star™_20160405152308

Once the player destroyed the cargo and collected the artifact, she was instructed to level up and was presented with the screen below. Here, there were visual cues on the screen as well as a repeating “upgrade available” auditory cue. She expressed confusion about the prompt to press the PS4 controller’s touchpad, saying, “I don’t know what button that is.”

Dead Star™_20160405152330

After examining the controller for a brief moment, she was able to reach the level-up screen, seen below. The player was instructed to level up the missile launcher, at which point she asked, “how do I choose the missile launcher?” She then navigated to the right and her current selection was highlighted in yellow, as seen below. She pressed X to upgrade the missile launcher; however, she did not notice the two indicators of a successful level-up: the white bar filling in half of the left portion of the highlighted diamond, and the “1” under “Upgrade Points” turning to a “0.” As a result, she hesitated after upgrading the missile launcher and, shortly after, realized she needed to press “O” to back out of the level-up screen. She then tried her new weapon against incoming enemies; however, her ship was destroyed, resulting in a respawn at the home base.

Dead Star™_20160409130930

Upon respawning, the player was urged to destroy any remaining enemies and then navigate to the enemy-controlled station. Upon arrival, she successfully destroyed the enemy ships and began the process of taking over the base. At this point, the player was instructed to stay within range of the base (indicated by a blue pulsating aura), and the time required to complete the takeover was indicated by a timer within the base (see below). Once the remaining enemies were destroyed, she navigated away from the base and asked, “are there any remaining enemies?” While the current objective was displayed as text at the top of the screen, the player stated, “I don’t know what I am supposed to be doing right now,” and “it’s not clear what I’m supposed to be doing.” After several minutes of navigating around the base, but not within range to complete the takeover, the player was prompted by the researcher to “look to the top of the screen for your objective.” The player read the objective aloud and asked, “what am I supposed to do?” The researcher then clarified that she was out of range and had to stay within the blue circular aura for the appropriate amount of time.

Dead Star™_20160409131157

Once the takeover was complete, the player followed the level-up prompt. Again, she displayed frustration with this system, particularly with verifying that she had correctly upgraded her intended option. Lastly, the player completed the Conquest tutorial, which consisted of attacking and overtaking an enemy outpost using one of the largest ships available, the Vindicator. She expressed no verbal frustration or obvious issues with this part of the tutorial and completed it in a timely fashion.

Summary

While the player’s inexperience with games in general was evident throughout this playtest, the session also revealed some potential usability issues in Dead Star. First, during this think-aloud, the player was tripped up by the leveling-up system, suggesting that the indication of a successful level-up may be too subtle for some players. Second, as a game with MOBA-like elements, there can be a lot going on on-screen; it may therefore be beneficial for stationary, non-interactive objects in the game world not to resemble interactable objects, as seen in Dead Star, where a stationary object resembled the commandable drones. Third, the simultaneous presentation of two instructional text boxes (one with an additional auditory presentation) can confuse the player and direct their attention in an unintended direction, which can significantly affect the experience.

Additional thoughts on usability

During my time with Dead Star, I noticed some usability issues not previously mentioned in this review. Although I played through it on PS4, I used a PC monitor, so I was only ~3 feet away from the screen. Even at that distance, some of the HUD elements and pop-ups, including words, numbers, and button prompts, are small, and I wonder how they hold up when playing on a TV, where the player sits significantly farther from the display. Lastly, as seen in the screenshot below, it can be easy to lose track of the aiming reticle (the bright green triangle) when aiming at the lower half of the screen. It clashes not only with the colors of the shield and health bars, but also with collectible ore and the backgrounds of certain maps. A different color choice for this reticle, or an option to change its color, would have been beneficial, particularly on certain maps.

Dead Star™_20160405162352


The relationship between game sales, marketing and usability budgets – mock analyses

Company X wanted to predict how certain variables affect how many copies of their games sell. They believed a single, specific variable would be the strongest predictor of game sales, so they compiled the marketing budgets for their last 100 games (yeah, they’ve been busy) and ran a simple linear regression to predict the number of game sales (in millions) from the budget allotted to marketing (in millions).

linear regression marketing game sales

A significant regression equation was found, F(1, 98) = 46.662, p < .001, with an R² of .323, which tells us that marketing budget accounts for about 32% of the variation in game sales; the remaining 68% of the variation is explained by other variables not in the model. The following equation can be derived from the analysis: predicted game sales (in millions) = 9.342 + .391 × (marketing budget in millions). In other words, average game sales increased by .391 million copies for each additional million allotted to the marketing budget. Additionally, as seen in the graph above, there was a strong, positive correlation (r = .568) between marketing budget and game sales, indicating that, on average, the more Company X spent on marketing a game, the more copies were sold.
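
If you want to play with this kind of analysis yourself, a simple linear regression takes only a few lines in Python with statsmodels. The sketch below is a minimal example, assuming a hypothetical file, game_data.csv, with columns marketing_budget (in millions) and game_sales (in millions); the file and column names are mine for illustration and are not part of Company X’s mock data set.

```python
# Minimal sketch of a simple linear regression like Company X's.
# Assumes a hypothetical CSV with columns "marketing_budget" (millions)
# and "game_sales" (millions) for the 100 games.
import pandas as pd
import statsmodels.api as sm

games = pd.read_csv("game_data.csv")              # hypothetical file name

X = sm.add_constant(games["marketing_budget"])    # adds the intercept term
model = sm.OLS(games["game_sales"], X).fit()

print(model.fvalue, model.f_pvalue)               # F-statistic and its p-value
print(model.rsquared)                             # R-squared
print(model.params)                               # intercept and slope

# Predicted sales for a hypothetical $20M marketing budget, using the
# fitted equation: sales = intercept + slope * marketing_budget
b0, b1 = model.params["const"], model.params["marketing_budget"]
print(b0 + b1 * 20.0)
```

The same numbers fall out of any ordinary least squares routine (R, SPSS, etc.); the point is simply that the equation and R² reported above come straight from the fitted model.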

While Company X was pleased to see that money put into marketing their games had a positive relationship with game sales, they were interested in adding another variable to the model in an attempt to explain more than 32% of the variability in game sales. They therefore added the budget allotted to usability testing for each of the 100 games, reasoning that money spent on usability testing, by providing a better overall user experience, might also influence how many copies were ultimately sold.

game sales and usability

A significant multiple regression was found, F(2, 97) = 90.465, p < .001, with an R² of .651, which tells us that, together, marketing and usability budgets account for about 65% of the variance in game sales; adding usability budget to the model explained about 33% more of the variability in game sales (R² change = .328). The prediction equation becomes: predicted game sales (in millions) = 4.382 + .194 × (marketing budget in millions) + .408 × (usability budget in thousands). That is, game sales increased by .194 million copies for each million spent on marketing and by .408 million copies for each thousand spent on usability testing. Additionally, as seen in the graph above, there was a strong, positive correlation (r = .767) between usability budget and game sales, indicating that, on average, the more Company X spent on usability testing for a game, the more copies were sold.
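
The hierarchical step of adding a second predictor follows the same pattern. The sketch below again uses the hypothetical game_data.csv, this time with an additional usability_budget column (in thousands); as before, the names are illustrative assumptions, not the actual mock data.

```python
# Sketch of the hierarchical step: add a hypothetical "usability_budget"
# column (in thousands) as a second predictor and compare R-squared values.
import pandas as pd
import statsmodels.api as sm

games = pd.read_csv("game_data.csv")                    # hypothetical file, as before

X1 = sm.add_constant(games[["marketing_budget"]])
X2 = sm.add_constant(games[["marketing_budget", "usability_budget"]])

step1 = sm.OLS(games["game_sales"], X1).fit()           # marketing only
step2 = sm.OLS(games["game_sales"], X2).fit()           # marketing + usability

print(step2.fvalue, step2.f_pvalue)                     # overall F-test for the full model
print(step2.rsquared, step2.rsquared - step1.rsquared)  # R-squared and R-squared change
print(step2.params)                                     # intercept and both slopes
print(step2.pvalues)                                    # p-value for each predictor
```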

Both marketing budget (p < .001) and usability budget (p < .001) were significant predictors of game sales. Importantly, the adjusted R² (.644) was very similar to the model’s R², indicating that, if the model were derived from the population rather than this sample, it would account for approximately .7% less variance in game sales. For comparison, below, you can see a combination of the previous two graphs with trendlines for each predictor.

game sales marketing and usability
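
As a quick sanity check on the adjusted R² mentioned above, it can be recomputed by hand from the reported R² (.651), the 100 games, and the two predictors; this is just the standard adjustment formula, not anything specific to Company X’s analysis.

```python
# Adjusted R-squared: 1 - (1 - R2) * (n - 1) / (n - k - 1),
# with n = 100 games and k = 2 predictors.
r2, n, k = 0.651, 100, 2
adj_r2 = 1 - (1 - r2) * (n - 1) / (n - k - 1)
print(round(adj_r2, 3))   # 0.644, matching the value reported above
```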

Conclusion

It is important to keep in mind that these are mock analyses and the data were purposely fabricated to show specific results. Additionally, there are multiple ways of conducting multiple regression analyses, and one important decision is how predictors are entered into the model, which can greatly affect the outcome. Here, Company X began by conducting a simple linear regression to investigate the relationship between game sales and marketing budget. Although marketing budget was a significant predictor of game sales, it only explained about 32% of the variance. So, they added another predictor they thought might have a positive relationship with game sales, usability budget, and found that the combined model explained 65% of the variance in game sales, a much better explanation than the 32% explained by marketing budget alone. Therefore, while Company X recognizes the importance of marketing their games, they now know there is also a significant, positive relationship between the budget allotted to usability testing and game sales, and they will continue these practices with future games in hopes of continuing their impressive sales record.