Using TensorFlow and Long Short-Term Memory for Visualized Learning

by Sophia Turol, April 18, 2016
This blog post discusses visualized learning, features a language modeling scenario, and looks at how long short-term memory networks are used in TensorFlow.


Below is a recap of the TensorFlow New York meetup, which was sponsored and organized by Altoros on March 8, 2016.


TensorFlow essentials

In his session, Rafal Jozefowicz, a researcher at Google Brain, provided an overview of TensorFlow, focusing on the following (a short code sketch after the list illustrates a couple of these points):

  • The solution’s key features
  • TensorFlow core abstractions
  • How to assign devices to Ops with TensorFlow
  • Predefined / neural net specific Ops
  • Visualizing learning with TensorBoard
  • How to run a model in production with TensorFlow Serving
  • Case study: language modeling
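
To make a couple of the points above more concrete, here is a minimal sketch, not taken from Rafal’s slides: it pins an op to a device and writes a scalar summary that TensorBoard can visualize. It uses the current TensorFlow API (the 2016 talk predates it and relied on the graph-and-session style), and the log directory name and tensor values are arbitrary.

import tensorflow as tf

# Place the matrix multiplication explicitly on the CPU;
# swap in "/GPU:0" if a GPU is available.
with tf.device("/CPU:0"):
    x = tf.constant([[1.0, 2.0], [3.0, 4.0]])
    w = tf.Variable(tf.random.normal([2, 1]))
    y = tf.matmul(x, w)

# Write a scalar summary that TensorBoard can pick up;
# view it with `tensorboard --logdir logs`.
writer = tf.summary.create_file_writer("logs")
with writer.as_default():
    tf.summary.scalar("mean_output", tf.reduce_mean(y), step=0)
writer.flush()

The same device-placement and summary-writing ideas apply when the computation sits inside a training loop; TensorBoard then plots the logged scalars over the step axis.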


Beyond LSTMs and visualized learning

Keith Davis of Metro-North Railroad provided a hitchhiker’s guide to TensorFlow. He mainly talked about image recognition, reinforcement learning, and Kohonen (self-organizing) maps. He also demonstrated how to implement recurrent neural networks and the long short-term memory (LSTM) architecture in TensorFlow, as sketched below.
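
As a rough illustration of the LSTM part, here is a minimal language-model sketch, not Keith’s demo, built with the Keras layers bundled with TensorFlow; the vocabulary size, embedding width, and sequence length are made-up values, and the original demo would have used the lower-level RNN cells available in 2016.

import tensorflow as tf

# Illustrative hyperparameters, not values from the talk.
VOCAB_SIZE = 10000   # number of distinct tokens
EMBED_DIM = 128      # embedding dimensionality
SEQ_LEN = 35         # tokens per training sequence

# Map token ids to vectors, run them through an LSTM, and predict
# a distribution over the next token at every position.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(SEQ_LEN,), dtype="int32"),
    tf.keras.layers.Embedding(VOCAB_SIZE, EMBED_DIM),
    tf.keras.layers.LSTM(256, return_sequences=True),
    tf.keras.layers.Dense(VOCAB_SIZE),  # logits over the vocabulary
])

# Targets are the input sequence shifted by one token.
model.compile(
    optimizer="adam",
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
)

# Quick shape check on a random batch of token ids:
# (batch, seq_len) -> (batch, seq_len, vocab_size)
dummy = tf.random.uniform((2, SEQ_LEN), maxval=VOCAB_SIZE, dtype=tf.int32)
print(model(dummy).shape)

Setting return_sequences=True makes the LSTM emit an output at every time step, which is what next-token prediction needs.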


Fireside chat: TensorFlow adoption

After the talks, Rafal Jozefowicz, Keith Davis, and Brandon Johnson shared their opinions on the following topics:

  • What makes TensorFlow stand out among other tools?
  • How is TensorFlow applied within Google? How can it be used in other organizations?
  • How can the community push TensorFlow as a project?
  • How to attract more interest in TensorFlow?
  • Recommendations for those getting started with TensorFlow


Join our group to stay tuned for upcoming meetups!


About the speakers

Rafal Jozefowicz is a Researcher at Google Brain. His main area of expertise is solving natural language processing problems using neural networks. Rafal graduated from Jagiellonian University with a degree in computer science and was hired by Microsoft as a Software Development Engineer. Before joining Google, he was a Quantitative Software Engineer at Two Sigma Investments, where he worked on equity derivatives and global macro strategies using artificial intelligence, machine learning, and distributed computing.


Keith Davis is a Data Scientist at Metro-North Railroad. He holds a bachelor’s degree in civil engineering from Rensselaer Polytechnic Institute and a master’s degree in computer science from the University of Helsinki. Keith was first exposed to predictive modeling and machine learning at Rensselaer Polytechnic Institute while studying traffic flow patterns.


Brandon Johnson is an independent software engineer and researcher. He is currently studying neuroscience, applied mathematics, and computer science at New York University. Brandon has also taken courses in interactive data visualization, big data analytics, and machine learning with big data. As an Aerospace Engineer Intern, he learned how to build, maintain, and further develop aircraft systems on both the hardware and software sides.