Tensor2Tensor is a new deep learning library designed by the Google Brain team to help researchers push the boundaries of what's possible by trying new combinations of models, datasets and other parameters. The fast pace of development and the sheer number of variables in AI research make it difficult to run two different setups under comparable conditions. This is a problem researchers currently face.
Tensor2Tensor comes pre-equipped with the essential ingredients: hyperparameters, datasets, model architectures and learning-rate decay schemes. This makes it easier for researchers to maintain best practices while conducting AI research.
The best thing about Tensor2Tensor is that its components can be swapped in and out in a modular fashion without breaking the rest of the pipeline. This also means new models and datasets can be introduced at any time, a much simpler process than the usual approach.
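To make the modular swap-in/swap-out idea concrete, here is a minimal sketch of a name-based component registry in plain Python. This is an illustration of the general pattern, not Tensor2Tensor's actual API; all names here (`MODELS`, `DATASETS`, `register`, `train`) are hypothetical.

```python
# Minimal sketch of a component registry: components are filed under a
# string name, so swapping a model or dataset is just a matter of passing
# a different name. NOTE: simplified illustration, not the real
# Tensor2Tensor API; every identifier here is hypothetical.

MODELS = {}    # maps a model name to a model-building function
DATASETS = {}  # maps a dataset name to a dataset-loading function

def register(registry, name):
    """Decorator that files a component under `name` in `registry`."""
    def wrapper(fn):
        registry[name] = fn
        return fn
    return wrapper

@register(MODELS, "tiny_mlp")
def tiny_mlp():
    return "tiny_mlp"

@register(MODELS, "transformer_stub")
def transformer_stub():
    return "transformer_stub"

@register(DATASETS, "toy_digits")
def toy_digits():
    return [0, 1, 2]

def train(model_name, dataset_name):
    """Look up both components by name; the rest of the pipeline never
    needs to change when either component is swapped."""
    model = MODELS[model_name]()
    data = DATASETS[dataset_name]()
    return f"trained {model} on {len(data)} examples"

print(train("tiny_mlp", "toy_digits"))
print(train("transformer_stub", "toy_digits"))  # swapped model, same call
```

The design choice worth noting is that the training loop depends only on the registry lookup, so adding a new model is a single decorated function with no changes elsewhere.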
Google isn’t alone in its pursuit of making research more reproducible outside the lab. Facebook recently open-sourced ParlAI, its tool for facilitating dialog research, which comes prepackaged with commonly used datasets.
Similarly, Google’s Tensor2Tensor comes with models from recent Google research projects like “Attention Is All You Need” and “One Model to Learn Them All.” Everything is available on GitHub, so you can start training your own deep learning-powered tools.
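As a sketch of what getting started looks like, the project's documentation shows an end-to-end run along these lines; treat the exact flags and registry names as examples that may have changed since release:

```shell
# Install the library, then generate data and train in one command.
# Problem, model and hparams names come from Tensor2Tensor's registry;
# consult the repository README for the currently available names.
pip install tensor2tensor

t2t-trainer \
  --generate_data \
  --data_dir=~/t2t_data \
  --output_dir=~/t2t_train/mnist \
  --problem=image_mnist \
  --model=shake_shake \
  --hparams_set=shake_shake_quick \
  --train_steps=1000 \
  --eval_steps=100
```

Swapping `--model` or `--problem` to a different registered name is all it takes to rerun the same experiment with a different architecture or dataset.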