feelinnice
New member
Deadlock crashes cleanly to desktop with no error dialog on an RTX 5090. Crashes are intermittent but
frequent — sometimes during the first match after launch, sometimes after 2-3 stable matches. No reliable
predictor. Once the first crash occurs, subsequent matches are significantly more likely to crash
(cascading pattern) until deadlock.exe is fully closed and relaunched.
Full-memory dumps captured via Sysinternals ProcDump (-ma -e -t -w) confirm the crashes are inside the
Source 2 engine, not the NVIDIA driver. Two dumps on separate days landed at exactly the same offset in
materialsystem2.dll, indicating a deterministic bug rather than random memory corruption.
Faulting locations (across 6 analyzed full-memory dumps)
┌─────────────────────┬───────────┬──────────────────────────┬────────────────────────────────────────┐
│ Module │ Offset │ Exception │ Pattern │
├─────────────────────┼───────────┼──────────────────────────┼────────────────────────────────────────┤
│ materialsystem2.dll │ +0x26E6E │ 0xC0000005, Param1 = 0x0 │ NULL pointer dereference │
├─────────────────────┼───────────┼──────────────────────────┼────────────────────────────────────────┤
│ materialsystem2.dll │ +0x26E6E │ 0xC0000005, Param1 = 0x0 │ NULL pointer dereference (reproduction │
│ │ │ │ confirmed) │
├─────────────────────┼───────────┼──────────────────────────┼────────────────────────────────────────┤
│ panorama.dll │ +0x6F85F │ 0xC0000005, Param1 = │ Read from invalid address │
│ │ │ 0xFFFF...FFFF │ │
├─────────────────────┼───────────┼──────────────────────────┼────────────────────────────────────────┤
│ scenesystem.dll │ +0x5F1630 │ 0xC0000005, execute bit │ DEP violation (jump to non-executable │
│ │ │ │ page) │
├─────────────────────┼───────────┼──────────────────────────┼────────────────────────────────────────┤
│ scenesystem.dll │ +0x740C4 │ 0xC0000005, Param1 = │ Read from invalid address │
│ │ │ 0xFFFF...FFFF │ │
├─────────────────────┼───────────┼──────────────────────────┼────────────────────────────────────────┤
│ nvwgf2umx.dll │ +0x45202C │ 0xC0000005 │ (pre-fix — obsolete, unrelated │
│ │ │ │ hardware issue now resolved) │
└─────────────────────┴───────────┴──────────────────────────┴────────────────────────────────────────┘
The same materialsystem2.dll +0x26E6E reproducing at the exact same offset across two separate sessions is
strong evidence of a specific, identifiable null-pointer bug in the material system — likely a pointer
that wasn't properly initialized or was freed mid-render.
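For anyone verifying against their own dumps: the module-relative offsets in the table come from subtracting the module's load base from the faulting instruction address in the dump's exception record (WinDbg's !analyze -v reports this automatically). A minimal sketch of the arithmetic, with made-up addresses (ASLR re-bases the module each run, but the offset stays stable):

```python
# Sketch: how a module-relative offset like materialsystem2.dll+0x26E6E is
# derived from a crash dump. The base and RIP values below are illustrative,
# not taken from the actual dumps.
def module_offset(fault_address: int, module_base: int) -> str:
    """Return the faulting address as a module-relative offset string."""
    return f"+0x{fault_address - module_base:X}"

# Hypothetical load base for materialsystem2.dll and the faulting RIP
# reported by the dump's exception record:
base = 0x7FFE12340000
rip = base + 0x26E6E
print(module_offset(rip, base))  # +0x26E6E
```

This is why the same +0x26E6E across sessions is meaningful despite ASLR: the absolute addresses differ every launch, but the offset into the module is the same instruction.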
Crashes in scenesystem.dll and panorama.dll in adjacent sessions suggest a shared root cause manifesting
through multiple engine subsystems (possibly a common asset-load or reference-counting code path).
Reproduction pattern
1. Crashes are intermittent but frequent — sometimes during the first match after launch, sometimes after
2-3 stable matches. No reliable pre-crash indicator.
2. Three consecutive crashes in one session occurred during the first match of a fresh launch with zero
prior playtime — ruling out a simple "memory accumulates over N matches" hypothesis in isolation.
3. Once the first crash occurs, subsequent relaunches are more prone to crash within 1-5 minutes of lane
phase. Cascading failure pattern.
4. Fully closing deadlock.exe (not just returning to lobby) and relaunching partially resets the crash
frequency but doesn't prevent early-match crashes.
5. Crashes happen most often during or immediately after lane phase loading / first heavy combat (peak
material/shader streaming moment).
6. The game exits silently (exit code 0): the engine's internal crash handler catches the exception
   before Windows Error Reporting sees it.
Implication for root cause
The same materialsystem2.dll +0x26E6E NULL-deref reproducing from a completely fresh process state (not
after accumulated leaked memory) points toward something more fundamental than a pure memory leak:
- Race condition in material loading / binding
- Timing-dependent null pointer on a specific asset-load path
- Or a data-dependent bug triggered by specific hero compositions, maps, or abilities
It's not just "play long enough and crash": a deterministic code path dereferences a null pointer under
specific in-game conditions. The cascading pattern after the first crash suggests the engine's state
recovery isn't fully cleaning up after the fault, making subsequent crashes more likely.
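To make the race-condition hypothesis concrete, here is a toy check-then-use gap (purely illustrative, not Source 2 code): a loader thread invalidates a material slot between the render thread's null check and its use, so the null check passes but the use still faults. The events only exist to force the bad interleaving deterministically; in a real engine the same window would hit only under specific timing, which matches the intermittent reproduction:

```python
import threading

class MaterialSlot:
    """Toy model of the suspected failure mode: a null check followed by a
    use, with a window in between where another thread can free the slot."""
    def __init__(self):
        self.material = "stone_wall_01"    # hypothetical asset name
        self.checked = threading.Event()   # render thread passed its null check
        self.freed = threading.Event()     # loader thread freed the slot

    def render(self):
        result = None
        if self.material is not None:      # null check passes...
            self.checked.set()
            self.freed.wait()              # ...window where the loader runs...
            try:
                result = self.material.upper()  # ...use hits None
            except AttributeError:
                result = "CRASH"           # stand-in for the 0xC0000005
        return result

    def unload(self):
        self.checked.wait()                # wait until the check has passed
        self.material = None               # free the slot mid-render
        self.freed.set()

slot = MaterialSlot()
loader = threading.Thread(target=slot.unload)
loader.start()
outcome = slot.render()
loader.join()
print(outcome)  # CRASH
```

A fix on the engine side would be either holding a reference across the whole use (so the loader can't free it mid-render) or re-checking after acquisition, which is why this class of bug shows up as a null deref at one fixed instruction.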
System (verified stable on all other games)
- CPU: Intel i9-13900KS, microcode 0x12F, BIOS-tuned per Intel guidance (Vcore DVID -0.080V, PL1/PL2
150W/253W, ICCMAX 307A)
- GPU: Gigabyte RTX 5090 Gaming OC, installed directly in motherboard PCIe x16 slot (no riser)
- Motherboard: Gigabyte Z790 Aorus Xtreme (BIOS F13)
- RAM: 32GB DDR5 at JEDEC 5600 MT/s (XMP off for maximum stability)
- Storage: Samsung 990 Pro 2TB NVMe
- PSU: 1600W Titanium
- OS: Windows 11 Build 26200
- NVIDIA driver: 596.21 WHQL Game Ready (DDU clean install)
- Renderer: DX11 (forced via boot.vcfg — Vulkan has a separate documented crash on RTX 5090)
- In-game settings: DLSS Quality, TAA, 240 FPS cap, native 1440p
Ruled out (via diagnostic sweep)
- No WHEA events in the Windows System log (hardware confirmed healthy)
- No PCIe AER errors (GPU PCIe link clean after removing the vertical-mount riser)
- No BSODs, no kernel panics
- CPU temps max at 60°C during gameplay (no thermal throttling)
- RAM tested stable at JEDEC 5600
- No Riot Vanguard, ExpressVPN, or other kernel-mode software
- No in-game overlays (Discord/NVIDIA/Steam/Xbox Game Bar all disabled)
- Deadlock game files verified via Steam integrity check; full reinstall also tried
- All other games (CS2, etc.) run perfectly stable with identical system config
Additional context
Based on community reports (https://forums.playdeadlock.com/threads/rtx-5090-constant-crashing-and-unable-
to-launch-in-middle-of-game.117215/), this appears to affect multiple RTX 5090 users and at least some RTX
5090 + Ryzen 9800X3D combinations. The 5090-specific pattern suggests an interaction between Source 2's
material system and how NVIDIA's 50-series drivers allocate or sequence GPU resources — a specific
timing/memory-layout interaction that exposes a latent race condition or null-check gap in the material
system.
Dumps
Six full-memory dumps were captured and analyzed during diagnosis (the offsets above are from those).
Dumps were deleted after analysis due to storage constraints (~100GB total). I can trivially reproduce and
capture a fresh dump on request — I have ProcDump -ma -e -t -w configured and crashes reproduce reliably.
Happy to:
- Capture a fresh full-memory dump the next time it crashes (likely within 1-2 play sessions)
- Upload to Dropbox / Google Drive / private S3 link
- Or re-capture in smaller minidump format (-mp, ~500MB) if that's easier for triage
A fresh dump hitting materialsystem2.dll +0x26E6E again would directly confirm the deterministic offset
and give Source 2 engineers everything they need for source mapping.
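Before uploading a fresh capture, its integrity can be sanity-checked cheaply: both -ma and -mp ProcDump output use the minidump container format, whose first four bytes are the signature b"MDMP". A small sketch (the .dmp filename below is hypothetical, and the example writes a synthetic stub file rather than a real capture):

```python
MINIDUMP_SIGNATURE = b"MDMP"  # first 4 bytes of every minidump-format file

def looks_like_minidump(path: str) -> bool:
    """Quick sanity check before uploading a multi-GB dump: verify the
    minidump header signature (present in both -ma and -mp output)."""
    with open(path, "rb") as f:
        return f.read(4) == MINIDUMP_SIGNATURE

# Synthetic stand-in for a real capture like deadlock.exe_crash.dmp
# (hypothetical name): signature plus a zero-padded header stub.
with open("fake.dmp", "wb") as f:
    f.write(b"MDMP" + b"\x00" * 28)
print(looks_like_minidump("fake.dmp"))  # True
```

This catches the common failure case of a truncated or interrupted capture before spending hours uploading ~100GB of dump data.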
Contact
Reply here or DM — happy to run additional diagnostic builds, capture more dumps, re-test on specific
driver versions, or test patches.
Thanks — love the game, happy to help debug.