David Durst's Blog

TL;DR CSGO demo files don't explicitly track player models' head positions. In this blog post, I'll explain an analytical model that approximates a player model's head position based on data explicitly contained in demo files.

Demo Files Only Track General Player Position

The red boxes are the player positions tracked in a demo file and in the network traffic. The lower box is the bottom-center of the player's AABB; this is the variable m_vecOrigin in the demo file. The upper box is the location of the player's camera; this is m_vecOrigin + m_vecViewOffset in the demo file. Each player's camera sits behind their head. The camera isn't blocked by the back of the head because a player's own model isn't rendered on their computer.

I prefer to only use serialized data communicated over the network during my CSGO analyses. The reasons differ depending on the type of analysis and the data sources available for each type. For post-match analyses like my reaction time analyses, I am unable to access non-serialized data. My main data source during these analyses is CSGO demo files. These files are recordings of the network traffic sent from servers to clients. The files store game state sent over the network (SendProps in Source Engine terminology, all values dumped here) and not values exclusively stored in the server's or client's memory without ever being communicated to another computer. I can't access non-serialized data during post-match analyses because it isn't available from the demo file.

For during-match analyses like my bots, it's possible but undesirable to use non-serialized data. Non-serialized data is much less organized. Developers must organize data with a consistent naming scheme during serialization, so I can write analysis code once using that naming scheme and trust that my code will continue to work for months or years. Non-serialized values aren't explicitly organized in memory for external analysis [1], so they are much harder to access at runtime. Reading them requires hardcoding compiled function names and virtual memory offsets. Even if I determine these values correctly once, I will need to update them after each CSGO update because program recompilation may change them.

Head position is an important instance of a non-serialized value. The image on the right shows the serialized player positions: the camera above the shoulders and the bottom-center of the axis-aligned bounding box (AABB). The network traffic also contains values for computing head position using animation state (m_AnimOverlay and m_flPoseParameter), but I don't want to reimplement the engine's animation code to compute head position. That's far too much work just to get head position.

The Analytical Model For Head Position

The top of the torso/bottom of the neck (the red box) is a static offset from the player's camera position regardless of where the player looks. Not shown in the video, the offset depends on whether the player is standing or crouching. I handle this by storing a standing offset and a crouching offset, and linearly interpolating between them using m_flDuckAmount.
The final head position model. The model accounts for different view angles and crouching/standing.
This image displays different player model head positions overlaid on top of each other. By making the images transparent, the image shows that all head positions are a fixed distance from the top-of-shoulder origin. The offsets are at angles that depend on the player's view angle.

My analytical model is an approximation of head position. The approximation means the values aren't perfect, but it enables the implementation to be simple and fast. The model works in two steps.

The first step is finding an origin point. The origin point must satisfy two conditions. First, it must be a constant offset of a value in the network traffic, so I can easily compute it. Second, it must be a fixed distance from all head positions (regardless of where the player is looking), so that I can easily compute a vector from the origin to the head position based on where the player is currently looking (their view angles). As demonstrated by the first video on the right, the top of the torso/bottom of the neck satisfies both of these conditions.
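The first step can be sketched in a few lines of Python. The offset constants below are hypothetical placeholders (the real values must be measured from the game); the inputs mirror the serialized demo values m_vecOrigin, m_vecViewOffset, and m_flDuckAmount described above.

```python
def lerp(a, b, t):
    """Linearly interpolate between two 3D points by t in [0, 1]."""
    return tuple(x + (y - x) * t for x, y in zip(a, b))

# Hypothetical offsets from the camera to the top-of-torso origin;
# the actual values would be fit from in-game measurements.
STANDING_NECK_OFFSET = (0.0, 0.0, -10.0)
CROUCHING_NECK_OFFSET = (0.0, 0.0, -6.0)

def neck_origin(vec_origin, vec_view_offset, duck_amount):
    """Approximate the top-of-torso origin from serialized demo values.

    vec_origin: m_vecOrigin (bottom-center of the player's AABB)
    vec_view_offset: m_vecViewOffset (camera offset from m_vecOrigin)
    duck_amount: m_flDuckAmount in [0, 1]
    """
    camera = tuple(o + v for o, v in zip(vec_origin, vec_view_offset))
    offset = lerp(STANDING_NECK_OFFSET, CROUCHING_NECK_OFFSET, duck_amount)
    return tuple(c + o for c, o in zip(camera, offset))
```

Because every input is a SendProp from the demo file, this origin is computable in post-match analyses without any access to server or client memory.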

The second step is computing the vector from the origin point to the head position. The neck is a constant length and is (approximately) angled at the same position relative to the origin as the player's view angle. These factors mean that the origin-to-head vector is a constant length and just needs an angle derived from the player's view angle. Unfortunately, the neck angle and view angles aren't exactly the same (and the relationship changes depending on whether the player is standing or crouching), so I need to add some offsets to account for these issues. I linearly interpolate between offsets using m_flDuckAmount to handle the transitions between ducking and standing. The second video on the right shows that these two steps are sufficient to approximately model head position.
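The second step can be sketched as follows. The neck length and the pitch corrections are hypothetical constants (the real values must be fit from game data); the sketch simply adds a constant-length vector angled by the duck-interpolated, corrected view angle.

```python
import math

NECK_LENGTH = 8.0  # hypothetical constant neck length, fit from game data

# Hypothetical pitch corrections (degrees) because the neck angle and the
# view angle aren't exactly equal; separate values for standing and crouching.
STANDING_PITCH_OFFSET = -15.0
CROUCHING_PITCH_OFFSET = -5.0

def head_position(origin, view_pitch, view_yaw, duck_amount):
    """Approximate head position from the top-of-torso origin.

    origin: the top-of-torso point from step one
    view_pitch, view_yaw: the player's view angles in degrees
    duck_amount: m_flDuckAmount in [0, 1]
    """
    # Interpolate the pitch correction between standing and crouching.
    pitch_offset = STANDING_PITCH_OFFSET + \
        (CROUCHING_PITCH_OFFSET - STANDING_PITCH_OFFSET) * duck_amount
    pitch = math.radians(view_pitch + pitch_offset)
    yaw = math.radians(view_yaw)
    # Spherical-to-Cartesian, with positive pitch looking downward
    # (Source engine convention).
    dx = NECK_LENGTH * math.cos(pitch) * math.cos(yaw)
    dy = NECK_LENGTH * math.cos(pitch) * math.sin(yaw)
    dz = -NECK_LENGTH * math.sin(pitch)
    return (origin[0] + dx, origin[1] + dy, origin[2] + dz)
```

Together the two steps use only a handful of trigonometric operations per player per tick, which is what keeps the approximation simple and fast.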

The last image on the right is a cool visualization of the entire model in one image.

Request For Feedback

If you have questions or comments about this analysis, please email me at durst@stanford.edu.


  1. DataMaps - DataMaps are an exception to the dichotomy between networked/serialized and non-networked/non-serialized data. They are serialized but not networked. I believe they are serialized to ensure they can be saved to a file and restored on the same computer. The serialization means they are well organized and easily accessed by programs running alongside the server. However, DataMaps are a small subset of game state (i.e., they don't contain player head position) and can't be used by post-match, demo file-based analyses. These limitations ensure DataMaps are only a footnote for this post. Here is a dump of all DataMaps.