With sample code and demos, this blog post highlights the major topics covered at a recent TensorFlow webinar: what it takes to train recurrent and convolutional neural networks, four core object types, meta-frameworks, etc.
At IBM Edge 2016, a team of developers and data scientists presented a practical study that evaluated the efficiency of training a TensorFlow model in a distributed mode. The use case featured high-resolution images of lymph nodes analyzed for possible cancer detection.
Relying on TensorFlow's distributed mode and the high performance of the OpenPOWER infrastructure, the demonstrated system can accelerate medical data analysis, scaling with the number of GPUs and nodes in its cluster. The research focused on two questions: how training time decreases as the cluster grows, and whether the distributed nature of the computations affects the accuracy of the results. Read this post for brief results and technical details.
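To give a sense of what "distributed mode" means in TensorFlow, the sketch below defines a cluster specification of the kind such a setup starts from: parameter-server tasks hold the shared model variables while worker tasks compute gradients on their own shards of the data. This is a minimal illustration, not the study's actual configuration; the host names and the one-parameter-server/two-worker layout are hypothetical.

```python
import tensorflow as tf

# Hypothetical cluster layout: one parameter server holding shared
# model variables, two workers computing gradients on data shards.
# Host names are placeholders, not the actual cluster from the study.
cluster = tf.train.ClusterSpec({
    "ps": ["ps0.example.com:2222"],
    "worker": ["worker0.example.com:2222", "worker1.example.com:2222"],
})

# In a real deployment, each process would start a server for its
# role, e.g. (commented out, since these hosts are unreachable here):
# server = tf.distribute.Server(cluster, job_name="worker", task_index=0)

print(cluster.num_tasks("worker"))  # number of worker replicas: 2
print(cluster.num_tasks("ps"))      # number of parameter servers: 1
```

Adding workers (and GPUs behind them) to such a cluster is the mechanism by which training time can shrink, which is exactly the scaling behavior the study measured.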