How To Restrict Tensorflow Gpu Memory Usage?
Solution 1:
Solution
Try gpu_options.allow_growth = True
to see how much default memory is consumed on tf.Session
creation. That memory is always allocated, regardless of your settings.
Based on your result, it should be somewhat less than 500MB. So if you want each process to truly have 1GB of memory, calculate the fraction as:
(1GB minus default memory) / total_memory
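The calculation above can be sketched as follows. The numbers are the ones used later in this answer (a 16,276MB GPU with a 345MB default offset) and are assumptions you should replace with your own measurements:

```python
# Hypothetical numbers taken from this answer; measure your own GPU.
TOTAL_MEMORY_MB = 16276   # total GPU memory (e.g. a 16GB Tesla P100)
DEFAULT_OFFSET_MB = 345   # memory a bare tf.Session grabs with allow_growth=True
TARGET_MB = 1024          # desired per-process budget (1GB)

# fraction * total + default_offset ~= target, so solve for the fraction:
fraction = (TARGET_MB - DEFAULT_OFFSET_MB) / TOTAL_MEMORY_MB
print(round(fraction, 4))  # ~0.0417
```

The resulting value is what you would pass to per_process_gpu_memory_fraction so the process ends up near the 1GB target after the default offset is added on top.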
Reason
When you create a tf.Session
, regardless of your configuration, a TensorFlow device is created on the GPU, and this device requires some minimum amount of memory.
import tensorflow as tf

conf = tf.ConfigProto()
conf.gpu_options.allow_growth = True
session = tf.Session(config=conf)
Given allow_growth=True
, you might expect no GPU allocation at all. In reality, however, it yields:
2019-04-05 18:44:43.460479: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1053] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 15127 MB memory) -> physical GPU (device: 0, name: Tesla P100-PCIE-16GB, pci bus id: 0000:03:00.0, compute capability: 6.0)
which occupies a small fraction of memory (in my past experience, the amount differs by GPU model). NOTE: setting allow_growth
occupies almost the same amount of memory as setting per_process_gpu_memory_fraction=0.00001
, but the latter won't be able to create the session properly.
In this case, it is 345MB.
That is the offset you are experiencing. Let's take a look at the per_process_gpu_memory_fraction
case:
conf = tf.ConfigProto()
conf.gpu_options.per_process_gpu_memory_fraction = 0.1
session = tf.Session(config=conf)
Since the GPU has 16,276MB of memory, setting per_process_gpu_memory_fraction = 0.1
probably makes you think only about 1,627MB will be allocated. But the truth is:
1,971MB is allocated, which coincides with the sum of the default memory (345MB) and the expected memory (1,627MB).
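The arithmetic behind that observation can be checked directly. This is just a sanity-check of the numbers quoted above, not a TensorFlow call:

```python
# Numbers quoted in this answer (hypothetical for other GPUs).
total_mb = 16276          # total GPU memory
fraction = 0.1            # per_process_gpu_memory_fraction
default_offset_mb = 345   # fixed cost of creating the session

naive_expectation = fraction * total_mb                # ~1627.6 MB
actual_allocation = naive_expectation + default_offset_mb  # ~1972.6 MB

# The log reported 1,971MB, which matches the sum to within a few MB.
print(round(naive_expectation, 1), round(actual_allocation, 1))
```

So the fraction you set governs only the portion on top of the fixed default offset, which is why the earlier formula subtracts that offset before dividing by total memory.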