Demo Files Only Track General Player Position
I prefer to only use serialized data communicated over the network during my CSGO analyses. The reasons differ depending on the type of analysis and the data sources available for each type. For post-match analyses like my reaction time analyses, I can't access non-serialized data at all. My main data source for these analyses is CSGO demo files. These files are recordings of the network traffic sent from servers to clients: they store the game state sent over the network (SendProps in Source Engine terminology, all values dumped here) but not values stored exclusively in the server's or client's memory without ever being communicated to another computer. Since non-serialized data never reaches the demo file, it's unavailable for post-match analyses.
For during-match analyses like my bots, using non-serialized data is possible but undesirable. Non-serialized data is much less organized. Developers must organize data with a consistent naming scheme during serialization, so I can write analysis code once against that naming scheme and trust that it will continue to work for months or years. Non-serialized values aren't explicitly organized in memory for external analysis, so they are much harder to access at runtime. Reading them requires hardcoding compiled function names and virtual memory offsets, and even if I determine those values correctly once, I need to update them after each CSGO update because recompilation may change them.
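To illustrate the contrast, here's a minimal sketch, assuming a hypothetical per-tick snapshot with made-up values: name-based access to serialized state is stable across game updates, unlike raw memory offsets.

```python
# Hypothetical per-tick snapshot of serialized game state, keyed by
# SendProp names. Demo parsers expose values under these stable names;
# the specific values below are made up for illustration.
tick_state = {
    "m_vecOrigin": (512.0, -128.0, 64.0),  # player position
    "m_angEyeAngles[0]": 5.0,              # view pitch in degrees
    "m_flDuckAmount": 0.0,                 # 0 = standing, 1 = fully crouched
}

def get_prop(state, name):
    # Name-based lookup: SendProp names are part of the networking
    # contract, so this keeps working across updates. Raw memory offsets
    # shift whenever the game binary is recompiled.
    return state[name]

duck = get_prop(tick_state, "m_flDuckAmount")
```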
Head position is an important instance of a non-serialized value. The image on the right shows the serialized player positions: their camera above their shoulders and the bottom-center of their axis-aligned bounding box (AABB). The network traffic also contains values for computing head position using animation state (m_AnimOverlay and m_flPoseParameter), but I don't want to reimplement the engine's animation code to compute head position. That is way too much work for just getting head position.
The Analytical Model For Head Position
My analytical model is an approximation of head position. The approximation means the values aren't perfect, but it enables the implementation to be simple and fast. The model works in two steps.
The first step is finding an origin point. The origin point must satisfy two conditions. First, it must be a constant offset from a value in the network traffic, so I can easily compute it. Second, it must be a fixed distance from all head positions (regardless of where the player is looking), so that I can easily compute a vector from the origin to the head based on the player's current view angles. As demonstrated by the first video on the right, the top of the torso/bottom of the neck satisfies both of these conditions.
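A minimal sketch of the first step, assuming a made-up offset value (the real number would be tuned empirically against the serialized camera position):

```python
# Assumed constant vertical offset (game units) from the serialized camera
# position down to the top of the torso / bottom of the neck. This value
# is an illustrative guess, not the tuned constant.
NECK_BASE_OFFSET_Z = -9.0

def origin_point(camera_pos):
    """Origin point: a constant offset below the networked camera position,
    so it is trivially computed from serialized data."""
    x, y, z = camera_pos
    return (x, y, z + NECK_BASE_OFFSET_Z)
```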
The second step is computing the vector from the origin point to the head position. The neck has a constant length and is angled (approximately) the same as the player's view angle, so the origin-to-head vector has a constant length and only needs an angle derived from the view angle. Unfortunately, the neck angle and view angle aren't exactly equal (and the relationship changes depending on whether the player is standing or crouching), so I add offsets to account for these differences. I linearly interpolate between the standing and crouching offsets using m_flDuckAmount to handle transitions between ducking and standing. The second video on the right shows that these two steps are sufficient to approximately model head position.
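The second step can be sketched like this. All constants here are assumptions for illustration: Source-style pitch is positive looking down, so a -90° offset points the neck straight up when the player looks level, and the crouching offset tilts it slightly forward.

```python
import math

NECK_LENGTH = 8.0               # assumed constant neck length (game units)
STANDING_PITCH_OFFSET = -90.0   # assumed: neck points straight up when level
CROUCHING_PITCH_OFFSET = -80.0  # assumed: crouched neck tilts forward

def lerp(a, b, t):
    return a + (b - a) * t

def head_position(origin, view_pitch_deg, view_yaw_deg, duck_amount):
    """Approximate head position: a constant-length vector from the origin
    point, angled by the view angles plus a pose-dependent offset."""
    # Interpolate the pitch offset between standing and crouching poses
    # using the serialized m_flDuckAmount (0 = standing, 1 = crouched).
    pitch_offset = lerp(STANDING_PITCH_OFFSET, CROUCHING_PITCH_OFFSET,
                        duck_amount)
    pitch = math.radians(view_pitch_deg + pitch_offset)
    yaw = math.radians(view_yaw_deg)
    # Source-style angles: pitch is positive looking down, so negate for z.
    dx = math.cos(pitch) * math.cos(yaw)
    dy = math.cos(pitch) * math.sin(yaw)
    dz = -math.sin(pitch)
    ox, oy, oz = origin
    return (ox + NECK_LENGTH * dx,
            oy + NECK_LENGTH * dy,
            oz + NECK_LENGTH * dz)
```

With these assumed constants, a standing player looking level gets a head position straight above the origin point, and the head tips forward as the player looks down or crouches.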
The last image on the right is a cool visualization of the entire model in one image.
Request For Feedback
If you have questions or comments about this analysis, please email me at email@example.com.