CUDA_VISIBLE_DEVICES=0 python your_file.py  # use only the first GPU on the machine; all other GPUs are masked
CUDA_VISIBLE_DEVICES=1           Only device 1 will be seen
CUDA_VISIBLE_DEVICES=0,1         Devices 0 and 1 will be visible
CUDA_VISIBLE_DEVICES="0,1"       Same as above, quotation marks are optional (use multiple GPUs together)
CUDA_VISIBLE_DEVICES=0,2,3      Devices 0, 2, 3 will be visible; device 1 is masked
CUDA_VISIBLE_DEVICES=""          No GPU will be visible
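To confirm which devices are actually exposed to the process after masking, you can list the local devices from Python. This is a minimal sketch, assuming TensorFlow is installed (device_lib is TensorFlow's device-listing utility module); the "0,2" selection is only an example value:

import os
# Must be set before TensorFlow initializes its GPU runtime
os.environ["CUDA_VISIBLE_DEVICES"] = "0,2"

from tensorflow.python.client import device_lib

# Prints every device visible to this process; masked GPUs will not appear
for device in device_lib.list_local_devices():
    print(device.name, device.device_type)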
2. Specify the GPU in Python code
import os

os.environ["CUDA_VISIBLE_DEVICES"] = "0"  # use the first GPU
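Note that the environment variable only takes effect if it is set before TensorFlow initializes the GPU devices, i.e. before the first session is created. The sketch below assumes a TensorFlow 1.x setup like the rest of this post, and the choice of physical GPUs 0 and 2 is purely illustrative; inside the process the visible GPUs are renumbered starting from 0:

import os
os.environ["CUDA_VISIBLE_DEVICES"] = "0,2"  # physical GPUs 0 and 2 (example values)

import tensorflow as tf

# Physical GPU 2 is now addressed as /gpu:1 inside this process
with tf.device("/gpu:1"):
    a = tf.constant([1.0, 2.0])
    b = tf.constant([3.0, 4.0])
    c = a + b

# allow_soft_placement lets TensorFlow fall back to CPU if the device is unavailable
with tf.Session(config=tf.ConfigProto(allow_soft_placement=True)) as sess:
    print(sess.run(c))  # [4. 6.]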
3. Set a fixed fraction of GPU memory
import tensorflow as tf

config = tf.ConfigProto()
config.gpu_options.per_process_gpu_memory_fraction = 0.9  # allow this process to use up to 90% of the GPU memory
session = tf.Session(config=config)
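With this setting, TensorFlow reserves the given fraction of the card's memory for the process. If you are on TensorFlow 2.x rather than the 1.x Session API used in this post (an assumption about your setup; the API below requires roughly TF 2.4 or newer), a similar hard cap can be expressed as a logical device configuration. The 4096 MB figure is just an illustrative value:

import tensorflow as tf

gpus = tf.config.list_physical_devices("GPU")
if gpus:
    # Cap the first visible GPU at about 4 GB of memory (illustrative value)
    tf.config.set_logical_device_configuration(
        gpus[0],
        [tf.config.LogicalDeviceConfiguration(memory_limit=4096)],
    )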
4. Set minimal GPU memory usage (allocate on demand)
import tensorflow as tf

config = tf.ConfigProto()
config.gpu_options.allow_growth = True  # start small and grow the GPU memory allocation as needed
session = tf.Session(config=config)
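For reference, a sketch of the on-demand counterpart in TensorFlow 2.x (an assumption about your setup, since this post otherwise uses the 1.x Session API) looks like this:

import tensorflow as tf

# TensorFlow 2.x counterpart of allow_growth:
# begin with a small allocation and grow GPU memory only as needed
for gpu in tf.config.list_physical_devices("GPU"):
    tf.config.experimental.set_memory_growth(gpu, True)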