This is a known issue without a direct fix. Some CUDA workloads consume 100% of the GPU's resources, which starves the streaming pipeline and degrades or interrupts the session. Once the workload completes, you can reconnect to the session. Alternatively, use the Multi GPU setting to assign more than one GPU to the VM. When using Multi GPU, allocate only n-1 GPUs to the heavy CUDA workload so that the remaining GPU stays available for streaming.
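As a sketch of the n-1 approach, the standard `CUDA_VISIBLE_DEVICES` environment variable can restrict which GPUs a CUDA workload may use. The example below assumes a hypothetical 4-GPU VM and a hypothetical workload binary; adjust the indices to match your configuration (device indices follow the CUDA runtime's enumeration order).

```shell
# Hypothetical 4-GPU VM: expose only GPUs 0-2 to the CUDA workload,
# leaving GPU 3 free for the streaming pipeline.
export CUDA_VISIBLE_DEVICES=0,1,2
echo "CUDA workload will see GPUs: $CUDA_VISIBLE_DEVICES"
# ./heavy_cuda_workload   # hypothetical workload binary
```

The CUDA runtime only enumerates the devices listed in `CUDA_VISIBLE_DEVICES`, so the workload cannot touch the GPU reserved for streaming.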