Lag University

Everything you ever wanted to know about lag.  And more.

This article starts with the basic concepts and works its way up to the highly technical.

 

What is lag?

Lag, also called latency, is a measure of the time delay between an action and its response.  Most of you are already familiar with the concept of ping, which is network-related lag.  The ping on your Fortnite screen is a number followed by the letters “ms”.  “ms” stands for milliseconds.  A millisecond is a thousandth of a second.  For example, 500ms is half a second, and 100ms is a tenth of a second.

Why does it matter?

Many games are won by the quickness of a player’s reflexes.  The game’s intent is that the quicker player should win.  However, if the quicker player is on a slower computer or gaming system, they may lose.  Let’s consider a simple case: a player drops into a confined space holding a pump shotgun, where another player is waiting for him, also holding a pump shotgun.  Both players are scanning the screen with their eyes, trying to find the enemy player’s head on the screen.  Once their eyes locate it, they try to very quickly move their mouse in the correct direction and distance so the head moves to the crosshairs of their weapon, and they fire just as it arrives.

Now, imagine Player #1 has a very slow computer/gaming rig.  The information has arrived from the server via his/her internet connection, indicating where the enemy’s head is located, but the computer does not tell the player this immediately.  Instead, it waits a while.  Meanwhile, Player #2 has a fast computer and has already seen the location of his enemy on the screen and is already rotating.  He fires, wondering why the other player is just standing there, doing nothing.  The reason is that Player #2’s click of the mouse comes before Player #1 has even seen where the enemy is.  The server computer judges that Player #2 has killed Player #1.

Exactly the same concepts apply to any reaction-time-sensitive game, like Overwatch, League of Legends, Smash Bros, etc.  A good player with quick reflexes but a slow computer is made equivalent to a bad player with slow reflexes.

How is computer lag different from ping?

Essentially, they add together.  If you have 40ms of ping and 50ms of computer lag, then you are playing with 90ms of total lag.  This sum, 90ms, is really what determines the overall reaction time handicap added by your gear.

 

I don’t have a feel for “millisecond”–how many milliseconds actually make a difference?

A critical question.  To give a rough sense, an eyeblink of an average person is 100ms from when the lid starts moving down to when it’s fully open again.  But that’s an average, including the elderly, etc.  Many younger people may blink significantly faster than this, perhaps in the 60-80ms range.

How much lag matters?  A good reference is first-person-view drone racing, where lag is carefully tracked, and racers often claim they can detect a 2ms difference in their rig’s lag.  Within video games, the exact same conditions apply as in drone racing: humans must judge a 3-D scene based on what they see on a monitor, and then apply control inputs accordingly.  People who are genuinely great at video games are very, very fast.  If you’re against a really good gamer who appears during your eyeblink, you’re at a severe disadvantage.  Imagine two very good players meet in the arena.  Player #1 is 19 years old and has a top-class reaction time of 87 milliseconds.  This guy is quick.  But.  Player #2 is 24 years old and is in fact slightly, slightly faster at 82 milliseconds.  Obviously, in some cases, one has the drop on the other, but let’s assume an exactly fair fight.  The server resolves the time differences with sub-millisecond accuracy, and so determines that Player #1 is defeated by 5 ms.  If, however, Player #1 has just bought a new low-lag display, or new low-lag mouse, or new GPU card, so that his rig has overall lag 8 ms lower than Player #2’s, then Player #2 is defeated, even though his actual reflexes are better.  In such a case, an 8 ms equipment difference is enough to swing the result.

People often use an illogic that says: “well, if my reaction time is 82 milliseconds, why should I worry about a 30 millisecond lag?”  The whole point is that your opponent also may have a reaction time near 82 milliseconds, and you’re therefore nearly evenly matched, so why would you suddenly make him into someone substantially faster than you by lagging yourself to effectively 112 milliseconds?

The other often-cited illogic goes: “my mouse is 10ms, my computer/GPU is 40ms, my display is 25ms–all numbers well under my reaction time, so aren’t they good enough?”  The point is that the sum 10ms + 40ms + 25ms is 75ms.  It all adds up.  When you move your mouse, it takes 10ms to tell your computer about the action.  The computer doesn’t even start reacting until the mouse gets around to telling it.  The computer then takes an additional 40ms to send updated information out the HDMI port to the display.  The display waits a while after getting this information before changing the image on its screen.  This (win-stealing) cascading of sequential lag events means that all lags are additive towards your total lag.
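
To make the arithmetic concrete, here is a minimal Python sketch of the lag cascade, using the hypothetical component values quoted above (these are illustrative numbers, not measurements):

```python
# A minimal sketch of the lag cascade: total gear lag is the sum of the
# component lags, and it adds directly to human reaction time.
# All values are the hypothetical ones quoted in the text above.
mouse_ms, cpu_gpu_ms, display_ms = 10, 40, 25
rig_lag_ms = mouse_ms + cpu_gpu_ms + display_ms
print(rig_lag_ms)                  # 75 ms of gear-added lag

reaction_ms = 82                   # a top-class human reaction time
print(reaction_ms + rig_lag_ms)    # 157 ms effective reaction time
```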

 

What are the various components of total lag?

For a test to be accurate, you must define what you’re measuring.  These are the definitions we use in our tests of the various components of lag, giving each lag sub-interval’s start and end points:

| Lag component interval | Starting event | Ending event |
| --- | --- | --- |
| Mouse lag | Mouse-Movement | USB-HID-packet |
| CPU/GPU lag | USB-HID-packet | HDMI-frame-release |
| Display lag | HDMI-frame-release | Displayed-image-change |

Where Mouse Lag + CPU/GPU lag + Display lag = total system lag

Here are our definitions of the exact time of those start/end events:

| Event | Definition |
| --- | --- |
| Mouse-Movement | When a mouse, previously at rest, physically travels far enough from the rest position to equal at least one pixel (minimum) of mouse movement |
| USB-HID-packet | When the final edge of the final bit of a USB packet of HID (Human Interface Device) format carrying information of a mouse movement or keypress enters the USB cable |
| HDMI-frame-release | When the leading edge of the first bit of the vertical blanking period enters the HDMI cable following a pixel frame whose image differs from the static image of previous frames |
| Displayed-image-change | When an area of a display exits the dark cycle between frames and brightens, reaching 50% of the maximum brightness it will achieve in a new frame whose image is different from the static image of previous frames |

Let’s organize our thinking by walking through the scenario above.  A packet is sent from the server, indicating your enemy’s position.  After the internet delay (the incoming portion of “ping” lag), your CPU becomes aware of his position.  The CPU then takes time to interpret and process this information (the CPU lag).  It sends an updated x,y,z coordinate of the enemy to your GPU card through the PCI Express bus.  The GPU then begins the process of calculating what a new picture would look like, from your point of view, if your enemy is at this new location.  After an *enormous* number of calculations, it figures out exactly what every one of 2 million pixels (for a 1920×1080 display) should look like in color and brightness, and sends this frame out the HDMI port.  The time to do all these calculations is the GPU lag.  The HDMI signal traverses the cable in an amount of time so small that it’s ignorable (for you hardcore techies, typically 0.00001ms for a 2 meter cable with a dielectric constant of 2.3).
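
For the hardcore techies, that cable figure is easy to reproduce.  A quick sketch, assuming the signal travels at c divided by the square root of the dielectric constant:

```python
# Propagation delay of a 2 m HDMI cable with dielectric constant 2.3,
# assuming the signal travels at c / sqrt(dielectric constant).
c_mps = 2.998e8                    # speed of light in vacuum, m/s
length_m, dielectric = 2.0, 2.3
delay_ms = length_m * dielectric ** 0.5 / c_mps * 1e3
print(delay_ms)                    # ~1.0e-5 ms: ignorable, as claimed
```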

The display then receives the HDMI (or DisplayPort) digital information, waits until it accumulates all 2 million pixels, applies some amount of image processing, and passes this frame to other circuitry which sends the correct control signals to the thin-film transistors distributed at every one of the 2 million pixels of your screen.  Those transistors then generate a voltage which polarizes the liquid crystals suspended in fluid.  The crystals physically rotate, allowing light of various colors to pass through or not, forming the picture.  The time for all these steps is the display lag.

You, a human, observe the light coming from the display.  (The propagation of the light is another 0.000003 ms, which we’ll ignore.)  The input from your eye causes a reaction in your finger after some delay (the player reaction time).

The movement in your finger actuates a button on the keyboard or mouse, or moves the mouse.  In the case of a button-press, this is button/keyboard lag.  The switch moves through some range of motion before its internal contacts touch; this is the dead-zone lag of the switch (which will vary depending on how fast an impulse is delivered by the finger).  The electronics of the mouse/keyboard then become aware of this button-push and generate a USB packet, which they send (when queried by the host computer).

In the case of a mouse movement, the tiny camera under your mouse is constantly photographing the fibers of your mousepad.  A tiny onboard special-purpose computation engine does a frame-to-frame difference calculation between each photo.  It finds similar features in sequential frames and then calculates a maximum-likelihood estimate of what amount of shift in the mouse position could explain the difference in pictures.  It then generates a USB packet describing the x,y displacement of this movement, which it sends (when queried by the host computer).  The time from movement to sending this packet is the mouse-movement lag.

The computer processes and interprets this USB packet, and after some delay (the second portion of the CPU-lag) generates an internet packet to send towards the server.    (It also sends updated orientation info to the GPU).  The packet then propagates through the internet to reach the server; this is the outgoing portion of the “ping” lag.

So, as you can see, we’ve come full circle, back to the server.  It’s important to understand that this is a complete loop (a feedback loop), and that all we really care about is the total round-trip delay through this entire loop.  The total loop delay is the sum of all the lag components we’ve just enumerated.  We like to think that only one of these really matters: the player reaction time, and so good players just win.  But in reality, all the other components of the loop are added to the player reaction time, and the advantage is determined entirely by the total loop delay.

This is where it gets interesting–because sooo many things are summed, even small improvements over many parts can add up to a substantial fraction of player reaction time, essentially making an average player into a very fast player (or vice versa).  For example, just a 16ms improvement in each of mouse lag, CPU lag, GPU lag, and display lag (64ms in total) would make a player with a 156ms reaction time equivalent to a 92ms player.

 

How to quantify lag?

This is where we at Goose Enterprises thrive.  We don’t like approximate, hand-wavy answers.  We don’t like imprecision.  If there’s lag, we want it well-defined and measured within known error bounds.  If you’re trying to measure something in milliseconds, we like instrumentation that is calibrated to 0.1 milliseconds.  Or, better, 0.01 milliseconds.

Measurement in 0.001 millisecond increments is straightforward.  Even for a slow microcontroller with an 8 MHz clock, there are 8 instruction cycles per 0.001 millisecond.  GPIO inputs have a lag of less than a clock cycle, so even if capturing a time interval costs 20 or 30 instruction cycles, keeping the microcontroller-induced lag well under 0.01 millisecond is simple, and not the area of difficulty.
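
As a sanity check of that arithmetic, here is a short sketch (the 30-cycle capture overhead is an assumed worst case, not a measured value):

```python
# Timing resolution of a slow 8 MHz microcontroller.
clock_hz = 8e6
cycle_ms = 1e3 / clock_hz          # one instruction cycle = 0.000125 ms
print(1e-3 / cycle_ms)             # 8.0 cycles per 0.001 ms, as stated

capture_cycles = 30                # assumed worst-case capture overhead
print(capture_cycles * cycle_ms)   # 0.00375 ms of induced lag, << 0.01 ms
```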

The challenge is in accurately defining and capturing the boundaries of a lag measurement.  For example, detecting the instant of a keypress or mouse movement inherently requires some form of environmental interface (which is essentially what a Human Interface Device is), so there is a risk that the test equipment will induce a lag of comparable magnitude to the device under test, which would torpedo any hopes of 0.01 millisecond accuracy.

Goose Enterprises’ proprietary technique allows independently capturing the moment of a user-induced keypress or mouse movement to under 0.001 milliseconds, giving an extremely precise and repeatable start point for the measurement of lag.  A metal tape is applied flat to the button or mouse surface; it has a typical thickness of 0.1 millimeter and a typical compressibility of 0.01 millimeter.  The stainless steel probe it contacts has a compressibility of <<0.0001 millimeter.  For an impact speed of 1 meter/second, the compression of the tape and probe occurs within 0.01 milliseconds; therefore, the button or mouse is in motion within 0.01 milliseconds of the initiation of electrical contact between the tape and the probe.  The electrical contact causes a voltage change detectable by the LagMeter within an extraordinarily short time, under one clock cycle (<<0.0002 millisecond), essentially eliminating any uncertainty about the timing of the starting event.
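
The contact-timing claim follows from simple kinematics; a one-line check using the figures above:

```python
# Time for 0.01 mm of tape/probe compression at a 1 m/s strike speed.
compression_m = 0.01e-3            # 0.01 millimeter, from the text
impact_mps = 1.0                   # impact speed of 1 meter/second
print(compression_m / impact_mps * 1e3)   # 0.01 ms, matching the claim
```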

Goose Enterprises’ proprietary technique for display image capture allows capturing the moment of display image change within well under 0.1 millisecond accuracy.

Thus the start and end points of each lag measurement are well-defined, allowing total system lag from user input to display response to be measured with well under 0.1 millisecond accuracy; our LagMeter in USB mode is tested against an oscilloscope to confirm an accuracy of +/- 0.02 milliseconds.

 

Component Lag Times

Measuring your total system lag from user input to display image response is interesting, but it doesn’t highlight where improvement can be found, except maybe to say your whole system’s slow and you should replace every item with a more expensive, hopefully faster component.  Here, the LagMeter’s key ability is to separate out the individual components and evaluate their individual contributions, which, summed, equal the total system lag.  This can highlight weak areas, i.e. components which are making an overly large contribution to system lag, thus targeting your equipment budget at problem areas.

Optimize setup and settings

Also, this component-by-component lag analysis allows intelligent adjustment of settings and environmental parameters such as mouse poll rate, keyboard poll rate, display resolution, GPU frame rate, and mouse pad surface.  In the past, such adjustments were made blind, but with the LagMeter their exact, precise effects on lag can be quantified and weighed against other playability benefits.  Do you prefer the playability of a wireless mouse, but are fearful of the lag effects?  You can now exactly quantify the difference your mouse choice makes, in your rig, with your mousepad and surface.  In many cases the lag difference is less than you might expect.  Do you like the beauty of ultra-high-resolution settings, but are concerned about how much your gameplay and lag are impacted?  You can now quantify the effects down to the millisecond and make an informed decision.

Detecting problems

If you finish a game and have a sense that something was not right, the LagMeter allows you to troubleshoot and locate possible problems.  For example, if your wireless mouse battery is low, if the lens of your mouse needs cleaning and the blurred images are hurting mouse frame-recognition speed, or if components are aging, you may wonder whether this is affecting lag or is a non-issue.  Alternatively, you may wonder if other processes installed on your system are consuming cycles and increasing your CPU lag.  A LagMeter can measure any such drift in performance with high accuracy, and identify which subcomponents have changed and which are performing exactly the same as when you first bought them.

 

Mouse-only or keyboard-only lag test

This test allows separation of the lag of the mouse or keyboard alone, separate from the CPU/GPU lag and the display lag.  The starting event is the button-press or mouse movement (as described above, accurate to <0.001 millisecond).  The ending event is the issuing of a USB packet from the mouse or keyboard.  These are typically short packets with payloads under 8 bytes, with host-generated packets before and after to request, then acknowledge, the update.  With sync, PID, and other overhead bytes, the 3 packets, including the idle intervals between them, total under 17 bytes, or about 140 bits.  Most mice still operate at USB full-speed, or 12 Megabit/second, which is 0.00008 milliseconds per bit, giving a packet-sequence length of about 0.012 milliseconds.  Thus, the time duration of the packets is a relatively small contributor.  Typically, the image-recognition system of the mouse and the general processing delays in generating USB packets are a much larger contributor to mouse lag.  This lag may not be consistent from test to test; sometimes images are not clearly understood by the mouse image-recognition and are skipped; sometimes there is a substantial and varying interval between image captures.  The packets are “Interrupt” transfers, as defined by the USB standard, meaning they are only issued in response to a polling packet from the host.

In mouse-only or keyboard-only mode, the LagMeter acts as the USB host with a polling rate of 1 kHz, meaning the LagMeter sends a polling packet to the mouse/keyboard every 1.00 ms.  If the mouse is moved or a button pressed, the mouse will queue and hold this message, waiting to be polled.  This can add variation to the measurement; if the mouse has a packet ready immediately after it was just polled, the message may sit there, growing stale, for up to 1.0 ms before the next poll.  By taking repeated measurements, a sense of the average and spread can be obtained.

How does an optical mouse work?

The sensor chips are made by relatively few manufacturers and sold to many different mouse vendors, who incorporate them on their mice’s circuit boards.  For example, the Pixart PMW3366 is popular in E-sports.  The sensor is basically a camera like in a phone: a silicon integrated circuit with a 2-D array of light-sensitive elements, looking down, placed behind a lens to act like a microscope.  An LED (typically red) is mounted beside the sensor to illuminate the scene.  What is the “scene” the camera is looking at?  The little hairs, fibers, or surface roughness of your mousepad.  You’ve probably noticed the red light is tilted at an extreme angle; by lighting the scene from the side, it creates shadows on any surface roughness and brings out features of the mousepad with more contrast.  This is analogous to how aerial photos of canyons are more dramatic at sunset, when the angled light casts deep shadows in the canyons but brightly lights the rim for high contrast.

The camera then takes photos very rapidly and performs an image-correlation algorithm; each frame will have very similar features to the frame before, but with some XY shift, e.g. the same fiber of the pad is still present but at a different location in the image frame.  The algorithm finds the XY offset which gives the highest correlation and reports this XY offset as motion of the mouse.  The image-correlation is performed by a special-purpose processor inside the sensor IC.
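
Here is a minimal Python sketch of that frame-to-frame correlation idea.  The patch size, search range, and the random “mousepad” texture are all invented for illustration; a real sensor does this in dedicated silicon at thousands of frames per second:

```python
import numpy as np

rng = np.random.default_rng(0)
pad = rng.random((64, 64))              # random high-contrast "mousepad"

def grab(x, y, size=18):
    """Photograph a small patch of the pad at offset (x, y)."""
    return pad[y:y+size, x:x+size]

def estimate_shift(prev, curr, max_shift=4):
    """Try every candidate (dx, dy); keep the best-correlating overlap."""
    n = prev.shape[0]
    best, best_dxy = -np.inf, (0, 0)
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            a = prev[max(dy, 0):n+min(dy, 0), max(dx, 0):n+min(dx, 0)]
            b = curr[max(-dy, 0):n+min(-dy, 0), max(-dx, 0):n+min(-dx, 0)]
            score = np.corrcoef(a.ravel(), b.ravel())[0, 1]
            if score > best:
                best, best_dxy = score, (dx, dy)
    return best_dxy

frame1 = grab(10, 10)
frame2 = grab(13, 11)                   # mouse moved +3 in x, +1 in y
print(estimate_shift(frame1, frame2))   # -> (3, 1)
```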

Now, here’s the interesting part: if, for example, the mouse looks at a frame and the features are blurred or distorted, or the shadows have changed enough to not be recognizable, or it can’t decide between two possible interpretations of the movement, it will just punt and wait for the next frame.  This doesn’t happen rarely–it happens all the time.  If it decides to skip a frame, or many frames in a row, let’s face it, no one will know.  Also, filtering and smoothing algorithms can include time-delaying effects such as integration, which can increase the lag.  For example, if a mouse is “noisy” and the cursor jitters, an easy fix is to add low-pass filtering on the reported position.  This will eliminate the noise, but it will make the mouse much slower.  It essentially gives the mouse “inertia”, making it slow to respond to changes in position.

Overlaid on this frame-recognition cycle is the USB timing.  Mice use “full-speed” USB at 12 Mbit/sec.  Nearly all mice use the “Boot Interface” HID device type defined for USB.  In this, the mouse position updates are sent by Interrupt Transfers.  Interesting factoid: with Interrupt Transfers, the mouse can never just tell the computer host anything on its own timing.  It must wait to be polled by the host.  The Interrupt Transfer comprises three packets: (1) an “IN” packet from the host of 32 bits, addressing a particular device and asking if it has anything to report, then [after an inter-packet delay of roughly 10 bit-periods] (2) a DATA packet response from the mouse, containing four bytes (32 bits) of information (e.g. a relative movement of -128 to +127 pixels in X and Y), plus 35 overhead bits, then [after another inter-packet delay of roughly 10 bit-periods] (3) an “ACK” or acknowledgement packet of 19 bits from the host.  In most mice, this three-packet sequence is around 138 bit-periods long and so takes about 0.012 milliseconds.
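
A quick check of that bit-count arithmetic (the roughly-10-bit inter-packet gaps are as quoted above):

```python
# Duration of one full-speed USB Interrupt Transfer, per the breakdown above.
bit_rate = 12e6                          # USB full-speed, bits per second
bits = 32 + 10 + (32 + 35) + 10 + 19     # IN + gap + DATA + gap + ACK
print(bits)                              # 138 bit-periods
print(bits / bit_rate * 1e3)             # ~0.0115 ms, i.e. about 0.012 ms
```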

Polling rate

The USB standard defines a frame (USB frame, not optical sensor frame) as 1.0 millisecond.  During initialization, the host asks the mouse for its parameters, and the mouse tells the host how frequently it would like to be polled.  The mouse communicates this via the parameter bInterval, which is how many frame counts it would like the host to wait between polls.  bInterval is an integer value from 1 to 255, so the lowest it can be is 1.  Therefore, the USB-mandated minimum polling period is 1.00 millisecond, or a maximum polling frequency of 1.00 kilohertz, i.e. 1000 Hertz or 1000 times per second.

If the host has just polled the mouse and then a sudden movement occurs, the mouse just holds that information, waiting to be asked.  It has no way to raise its hand and wave it frantically to get the host’s attention.  So, while the mouse waits to be polled, the information it has gets stale.  During a lag test, the striking of the mouse may randomly fall anywhere within the 1 millisecond USB frame, either just after the last poll (unfavorable to lag) or just before the next poll (favorable to lag).  This inherently adds up to 1.0 ms of test-to-test variation in lag.
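
A toy Monte Carlo makes the staleness penalty concrete, assuming the strike lands uniformly at random within the 1 ms polling period:

```python
import random

poll_ms = 1.0                            # 1 kHz polling period
waits = [random.uniform(0, poll_ms) for _ in range(100_000)]
print(sum(waits) / len(waits))           # ~0.5 ms average staleness
print(max(waits))                        # approaching 1.0 ms worst case
```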

It should, in theory, be possible, with an optical-correlation frame rate of 12000 per second (i.e. 0.083 milliseconds per frame) and a USB polling period of 1.00 millisecond, to achieve a worst-case lag of about 1.08 milliseconds.  In our experience, this does not happen; mice add some extra delay in post-processing or in missed correlation frames, and so lag in the 3 to 6 millisecond range is among the best we’ve measured.

Lag Variation in mice

This is key; contrary to what you might expect, for mouse sensors (unlike displays or keyboards), the lag varies significantly from test to test.  The image-recognition algorithm is highly scene-dependent.  For example, a regular patterned array of dots is bad if the amount of mouse movement from one frame to the next is close to the dot spacing, because it will look like the mouse did not move at all.  Generally, a chaotic surface is better: lots of high-contrast features randomly, non-periodically distributed.  You can imagine that sometimes, even with a surface of good material, for a given frame you just get unlucky, and the frame in that particular micro-location has no significant detail, or has too much repetitive detail which repeats in the direction of motion.  In these unlucky frames, the algorithm can’t make a positive or high-probability correlation.  We’ve become inured to it from years of familiarity with optical mice, but this image-correlation trick is hard.  It really is quite near-miraculous that it can sense quantitative motion on almost any surface.  It’s a challenge even on the best of surfaces.

In sequential lag tests, the same mouse on the same surface will give varying lag depending on how “lucky” the starting image is in its suitability for good image-correlation, and to some degree on the strength of the impulse applied.  We would like a mouse-mousepad combination to behave well whatever the starting location and whatever the mouse speed.  A given mouse pad will have some micro-locations that are optimal for recognition, and others that are poor.  The recognition will also depend on the mouse movement rate (i.e. the strength of impulse of the strike during the LagMeter test) and direction.

If the mouse’s first few frames after the strike do not give good images, it will report no movement until the mouse moves to another micro-area of the pad that is more suitable, and as a result, the lag in that particular test will be longer.  This exact effect is continuously happening during your gaming as the mouse moves to different micro-locations on the surface at different speeds and directions, but while playing, without instrumentation, it is hard to tell when a mouse that generally performs great suddenly misses a series of frames (right when you least want it to).  You may attribute the bad result to your own play, feel your hand jerked or was slow, and not realize that a transient mouse-mousepad performance degradation may be at fault.

Finding the mouse-mousepad combination with the right balance of surface detail and shadow-generation, without being too regular, depends a great deal on the particular mouse, its LED and orientation, and how these interact with the mousepad surface fibers.  Also, wear, dirt, color, reflectivity of the mouse pad, flatness of the table beneath the pad, etc. will have an effect, as will other factors like aging of the sensor/LED and even humidity, moisture, or dust on your lens.

From a gaming point of view, variation in lag is, in many ways, worse than consistent lag.  With consistent, predictable lag, the human brain is able to adapt and nullify many of its ill effects by reflexive compensation.  But variable lag is unpredictable and so cannot be compensated for.

So, we became quite interested in this phenomenon and optimized the LagMeter to be able to test repeatedly and quickly: about ten tests per minute.  This proved an excellent tool for exploring the repeatability of a mouse-mousepad combination and discovering missed optical-recognition frames; a mouse with, say, 4 millisecond lag would occasionally, rarely, take 30 milliseconds after it started moving to issue its first “position-update” packet.  Quite disturbing.  We found that some surfaces are far more repeatable for certain mice.  A LagMeter is the perfect tool for finding the right combination of mouse, mousepad, and other factors to get consistent, low lag, time after time.

Wireless Mice

Wireless mice have, at long last, arrived.  Their lag is now very much in line with wired mice.  From a gaming point of view, this is very nice.  That cable can get caught or push tiny resistance against your fingers right when you least want it.  But.  There are some caveats.  The first is the RF (Radio Frequency) environment.  Man, the 2.4 GHz band is crowded with junk.  Your phone, your computer, your bluetooth speaker, your wifi router, your Alexa dot, your wireless headphones, your Roku/FireStick/Chromecast, your BB8 minibot that you control from your phone, drone copter, etc. etc. all share this little 80 MHz sliver of spectrum.  Here again, the problem is intermittent.  One test is not enough, because everything is packet-based.  If, during your one test, the wireless mouse transmits during an instant of time where all other items happen to be in a silent period, the result will be perfect.  But in unlicensed radio spectrum there is no central coordinator, and two transmitters may, and constantly do, accidentally transmit packets right on top of each other.  Either the louder RF transmitter wins, or both are corrupted.  Rest assured the little tiny wireless mouse with the tiny battery that it’s trying to conserve for 24-hour life is NOT the most powerful transmitter in your house.  Collisions will occur.  And when they do, the mitigation method used by the unlicensed RF standards is detect-and-retransmit.  It happens all the time; a high fraction of packets are corrupted and re-sent later.  The key word there is later.  I.e. lag.  So your lag may vary from test to test and will be highly dependent on the specific activity at that moment (is someone streaming a FireStick 4k movie in the next room?), on the physical layout of transmitters, and also on the local environment, including the real-time shape of reflectors/resonators (like wires) and absorbers (like your torso or arm) within the near field of the transmitting or receiving antenna.
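
A toy model illustrates why collisions produce a long lag tail rather than a constant penalty.  The collision probability and retry delay here are invented numbers, purely for illustration:

```python
import random

def delivery_delay_ms(p_collision=0.2, retry_ms=2.0):
    """Added lag from repeated collide-and-retransmit cycles (toy model)."""
    delay = 0.0
    while random.random() < p_collision:   # packet corrupted; resend later
        delay += retry_ms
    return delay

trials = [delivery_delay_ms() for _ in range(100_000)]
print(sum(trials) / len(trials))           # ~0.5 ms average added lag
print(max(trials))                         # occasional much larger spikes
```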

This is the problem: it’s intermittent, so your system may perform perfectly well at times but then cost you in the middle of a critical game.  A LagMeter is an excellent way to discover dropped packets and then perform repeated tests, monitoring mouse performance under varying conditions, turning various nearby equipment and RF functions on and off, testing re-arrangement of routers, bluetooth transmitters, etc. to different places in the room, finding nulls, and generally optimizing the RF environment for your mouse.

Also, battery condition becomes an issue with wireless mice.  If you suspect that performance may degrade as the mouse gets low on battery, you can verify or disprove that with the LagMeter, and set for yourself a limit on how low you will allow the battery to go and still play competitively.  (Conversely, you may be recharging the mouse more often than you need to; the LagMeter may prove the extra charging has no performance benefit.)  Again, the idea is a quantitative measurement and hard data for questions where previously you had to just go by feel (and hope).

 

Display-only lag test

This measurement requires the GooseEgg accessory, which provides a time-calibrated HDMI test signal.  The GooseEgg emits an HDMI test pattern wherein certain areas of the screen are dark, then suddenly turn bright in the next frame.  As the changed frame is output to the HDMI port, an electronic pulse is simultaneously generated by the GooseEgg.  The rise/fall time of this pulse is <<0.0001 milliseconds, i.e. very, very fast, and the pulse is precisely aligned with the beginning of the vertical blanking interval of the outgoing frame.  This pulse is fed to the LagMeter, marking the exact start time of the lag timer with less than 0.01 millisecond of uncertainty.

The photodetector captures the moment of change in the display image, when the key areas change from dark to white.

Note that producing a 240 fps signal at 1920×1080 requires pretty advanced hardware, and the current GooseEgg accessory is limited to 120 Hz at 1920×1080 on HDMI.  For slower monitors, the accessory will automatically reduce the frame rate and/or resolution to find the best compatible mode available from the display.  Alternative (more costly) higher-horsepower variants of the GooseEgg accessory, offering higher frame rates, will soon be available from Goose Enterprises.  Please check back here for future updates.

 

Repeatability of measurement due to frame rate

Each system renders at a frame rate, measured in frames per second (fps).  This rate varies with the instantaneous load on your GPU; the GPU takes a certain amount of time to make all the ray-tracing-type calculations needed for 2 million pixels.  The difficulty of those calculations depends on what’s happening on the screen: what objects is it trying to portray, what are the textures on those objects, how many objects are there.  Images with more detail and more movement are harder to calculate, the calculations take longer for each frame, and the frame rate slows.

These frames mean the display does not change continuously, but in discrete intervals.  At 60 Hz, a new frame arrives every 16.7 ms; at 120 Hz, every 8.3 ms; at 240 Hz, every 4.2 ms.  Thus, a full-system lag measurement will have some variation from measurement to measurement depending on how the timing of the test’s initiation aligns with the cycle of frames.  At 60 fps, this effect can be quite pronounced, adding up to 16.7ms of uncertainty to the lag measurement.  Since the full-system lag measurement is initiated by a human striking the mouse or button, the timing will be random relative to the pulse of frames.  We suggest using the highest frame rate possible (lowering resolution can help), but this must be weighed against the fact that we’d like to measure under conditions identical to our preferred gaming set-up.  By taking a series of measurements and averaging while noting the spread, you can get a sense of the effects of this frame alignment.
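
The frame-period arithmetic, plus a quick simulation of the alignment jitter (assuming the human strike lands uniformly at random within a frame):

```python
import random

for hz in (60, 120, 240):
    print(hz, round(1000 / hz, 1))       # frame period: 16.7, 8.3, 4.2 ms

waits = [random.uniform(0, 1000 / 60) for _ in range(100_000)]
print(sum(waits) / len(waits))           # ~8.3 ms average at 60 fps
print(max(waits))                        # approaching 16.7 ms worst case
```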

Note that when measuring display-only lag, this variation is entirely eliminated, since the starting pulses are now aligned with the vertical blanking interval of the frame, so the display lag measurements are typically highly repeatable.

 

Display flicker

As you may be aware, a static image on a display is not actually static.  Between every frame, the screen is completely dark for a few milliseconds, even if the previous frame and the next are identical.  Because of persistence of vision, humans generally don’t notice this flicker, provided the dark interval is short enough.  During the “on” times, the display is actually a bit brighter than we perceive, and humans see essentially the average brightness of these dark and illuminated intervals.  However, the LagMeter is plenty fast enough to pick up these several-millisecond flickers (indeed, it can detect 0.1 millisecond variations in brightness).  When looking at a bright area, the LagMeter will detect every flicker to dark.

Therefore, our measurements are made at a transition from dark to light, not light to dark.  When looking for a light-to-dark transition, each flicker is liable to be interpreted as a display change.  When looking for a dark-to-light transition, there is no such ambiguity: the screen is dark, and during the flickers it remains dark.  When the frame changes to white, there is an unambiguous first dark-to-white edge in brightness, and this is the LagMeter’s clearly defined end point of the measured lag interval.
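
A tiny sketch of that end-point logic, on a made-up brightness trace sampled every 0.1 ms:

```python
# Brightness samples every 0.1 ms: a dark test area, then a white frame.
# Note the flicker to dark at index 12: a light-to-dark detector would
# trigger on every such flicker, but the first rise above threshold is
# unambiguous.
trace = [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 95, 95, 0, 95]
threshold = 95 * 0.5                 # 50% of the new frame's brightness
end = next(i for i, b in enumerate(trace) if b > threshold)
print(end * 0.1, "ms")               # first dark-to-light edge at 1.0 ms
```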
