Multi-GPU systems are under continuous development to meet the challenges of computationally intensive big-data
problems. On the one hand, parallel architectures provide tremendous computation capacity and outstanding scalability.
On the other hand, production workflows in multi-user environments face several roadblocks, since such systems do not grant
users root privileges.
Containers provide flexible strategies for packaging, deploying, and running isolated application
processes within multi-user systems, and they enable scientific reproducibility. This paper describes the usage and the advantages
that the uDocker container tool offers for developing deep learning models in this context. The experimental results show that uDocker is easier for less tech-savvy researchers to deploy, and that applications running inside it achieve processing times with negligible overhead compared to an uncontainerized environment.