Researchers require enormous computational resources to train the machine learning (ML) models that have delivered recent breakthroughs in medical imaging, neural machine translation, game playing, and many other domains. We believe that access to significantly more computation will enable researchers to invent new types of ML models that are even more accurate and useful.
To accelerate the pace of open machine learning research, we are introducing the TensorFlow Research Cloud (TFRC), a cluster of 1,000 Cloud TPUs that will be made available free of charge to support a broad range of computationally intensive research projects that might not be possible otherwise. Researchers accepted into the program will get:
- Access to Google's all-new Cloud TPUs that accelerate both training and inference
- Up to 180 teraflops of floating-point performance per Cloud TPU
- 64 GB of ultra-high-bandwidth memory per Cloud TPU
- Familiar TensorFlow programming interfaces (see the sketch following this list)
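As a concrete illustration of those programming interfaces, here is a minimal training-setup sketch using TensorFlow's `tf.distribute` API, which postdates this announcement's original tooling; the TPU name and the toy model below are placeholders, not anything specific to the TFRC program. (For scale: 1,000 devices at up to 180 teraflops each amounts to roughly 180 petaflops of raw compute across the cluster.)

```python
import tensorflow as tf

# "my-tpu" is a placeholder; on Cloud TPU this is typically the name you
# assigned when creating the TPU node.
resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu="my-tpu")
tf.config.experimental_connect_to_cluster(resolver)
tf.tpu.experimental.initialize_tpu_system(resolver)

# TPUStrategy replicates the model across the TPU's cores and aggregates
# gradients automatically.
strategy = tf.distribute.TPUStrategy(resolver)

with strategy.scope():
    # Toy model purely for illustration.
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(128, activation="relu", input_shape=(784,)),
        tf.keras.layers.Dense(10),
    ])
    model.compile(
        optimizer="adam",
        loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
        metrics=["accuracy"],
    )

# Training then looks exactly like CPU/GPU training; the strategy shards
# each batch across the TPU cores:
# model.fit(train_dataset, epochs=5)
```

The point of the familiar interface is that the model definition and training loop are unchanged; only the few setup lines at the top differ from a CPU or GPU workflow.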
The TensorFlow Research Cloud program is not limited to academia. We recognize that people with a wide range of affiliations, roles, and expertise are making major machine learning research contributions, and we especially encourage those with non-traditional backgrounds to apply. Access will be granted to selected individuals for limited amounts of compute time, and researchers are welcome to apply multiple times with multiple projects. Those accepted will be expected to:
- Share their TFRC-supported research with the world through peer-reviewed publications, open-source code, blog posts, or other open media
- Share concrete, constructive feedback with Google to help us improve the TFRC program and the underlying Cloud TPU platform over time
- Imagine a future in which ML acceleration is abundant and develop new kinds of machine learning models in anticipation of that future
For businesses interested in using Cloud TPUs for proprietary research and development, we also offer a parallel Cloud TPU Alpha program. It may be a good fit for one or more of the following (a brief batch-inference sketch follows the list):
- Accelerating training of proprietary ML models; models that take weeks to train on other hardware can be trained in days or even hours on Cloud TPUs
- Accelerating batch processing of industrial-scale datasets: images, videos, audio, unstructured text, structured data, etc.
- Processing live requests in production using larger and more complex ML models than ever before
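To make the batch-processing use case concrete, here is a minimal, hypothetical sketch of large-batch inference on a Cloud TPU; the TPU name, stand-in model, and synthetic dataset are all placeholders rather than anything prescribed by these programs.

```python
import numpy as np
import tensorflow as tf

# Same placeholder setup as the training sketch above.
resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu="my-tpu")
tf.config.experimental_connect_to_cluster(resolver)
tf.tpu.experimental.initialize_tpu_system(resolver)
strategy = tf.distribute.TPUStrategy(resolver)

with strategy.scope():
    # Stand-in model; in practice you would load trained weights, e.g. with
    # tf.keras.models.load_model(...).
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(128, activation="relu", input_shape=(784,)),
        tf.keras.layers.Dense(10),
    ])

# Synthetic stand-in for an industrial-scale dataset; a real pipeline would
# stream images, videos, audio, or text through tf.data instead.
inputs = np.random.rand(100_000, 784).astype(np.float32)
dataset = (tf.data.Dataset.from_tensor_slices(inputs)
           .batch(1024)
           .prefetch(tf.data.AUTOTUNE))

# predict() shards each large batch across the TPU cores.
predictions = model.predict(dataset)
print(predictions.shape)  # (100000, 10)
```

The large batch size and `prefetch` call are the main levers here: they keep the TPU cores fed while the host prepares the next batch, which is what makes batch processing of large datasets efficient on this hardware.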