
Computational Network Toolkit

Production-quality, Open Source, Multi-machine, Multi-GPU,
Highly efficient RNN training,
Speech, Image, Text


For mission-critical AI research, we believe efficiency and performance are important criteria. CNTK was designed for peak performance not only on CPUs but also in single-GPU, multi-GPU, and multi-machine, multi-GPU scenarios. Additionally, Microsoft's 1-bit compression technique dramatically reduces communication costs, enabling highly scalable parallel training across a large number of GPUs spanning multiple machines.
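CNTK's actual 1-bit SGD implementation lives inside the toolkit; purely as an illustration of the underlying idea, here is a minimal Python sketch of 1-bit gradient quantization with error feedback. The function name and the per-tensor mean-magnitude scale are assumptions for this sketch, not CNTK's real code: each worker communicates only the sign of each gradient element (plus one scale), and the quantization error is carried over locally into the next step.

```python
import numpy as np

def one_bit_quantize(grad, residual):
    """Illustrative 1-bit quantization with error feedback (a sketch,
    not CNTK's implementation).

    Only the sign of (gradient + carried-over residual) is sent,
    so each element costs 1 bit instead of 32; the quantization
    error is remembered locally and added back next step, which
    is what keeps training stable despite the lossy compression.
    """
    g = grad + residual           # add back last step's quantization error
    signs = np.sign(g)            # 1 bit per element: +1 or -1
    scale = np.abs(g).mean()      # one shared scale per tensor (assumed choice)
    quantized = signs * scale     # what would actually be communicated
    new_residual = g - quantized  # error fed back into the next step
    return quantized, new_residual

# toy usage: quantize one gradient tensor, tracking the residual
rng = np.random.default_rng(0)
grad = rng.standard_normal(8)
residual = np.zeros_like(grad)
q, residual = one_bit_quantize(grad, residual)
```

Note that no information is permanently lost: the residual guarantees that, summed over steps, the quantized updates track the true gradients.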


CNTK is highly flexible. Arbitrary computation graphs are easy to create from a high-level description language, and most training parameters are easily configurable. Popular network types such as FNNs, CNNs, LSTMs, and RNNs are fully supported with state-of-the-art parallel training performance. A full suite of training algorithms (AdaGrad, RMSProp, etc.) is built into the toolkit. You can easily experiment with a wide range of architectures and training recipes, with no long compilation cycles involved.
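The built-in trainers are configured rather than written by hand; as a reminder of what an algorithm like AdaGrad actually does, here is a plain-Python sketch of its update rule (function and variable names are ours, not CNTK's): each parameter gets its own learning rate, shrinking as that parameter's squared gradients accumulate.

```python
import numpy as np

def adagrad_step(params, grads, accum, lr=0.01, eps=1e-8):
    """One AdaGrad update (illustrative sketch): per-parameter
    learning rates that shrink as the accumulated squared
    gradient grows."""
    accum += grads ** 2
    params -= lr * grads / (np.sqrt(accum) + eps)
    return params, accum

# toy usage: minimize f(w) = ||w||^2, whose gradient is 2*w
w = np.array([1.0, -2.0])
acc = np.zeros_like(w)
for _ in range(100):
    w, acc = adagrad_step(w, 2 * w, acc, lr=0.5)
```

The adaptive denominator is what makes AdaGrad robust to differently-scaled features: frequently updated parameters take smaller steps without any manual learning-rate schedule.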


In addition to a wide variety of built-in computation nodes, CNTK provides a plug-in architecture that allows users to define their own computation nodes. So if your workload requires special customization, CNTK makes that easy to do. Readers are also fully customizable, allowing support for arbitrary input formats.
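CNTK's actual plug-in interface is its own; purely as an illustration of the contract a user-defined computation node has to satisfy (a forward pass plus a gradient for each input), here is a hypothetical sketch of such a node. The class name, the clipped-ReLU example, and the method names are all assumptions for this sketch.

```python
import numpy as np

class ClippedReLUNode:
    """Hypothetical user-defined computation node: a ReLU clipped
    at a ceiling. Any custom node must define a forward pass and
    the gradient it propagates back to each input."""

    def __init__(self, ceiling=6.0):
        self.ceiling = ceiling

    def forward(self, x):
        # cache the input so backward() can compute the local gradient
        self.x = x
        return np.clip(x, 0.0, self.ceiling)

    def backward(self, grad_output):
        # local derivative is 1 strictly inside (0, ceiling), 0 outside
        mask = (self.x > 0.0) & (self.x < self.ceiling)
        return grad_output * mask

# toy usage: forward a batch, then propagate a unit gradient back
node = ClippedReLUNode()
y = node.forward(np.array([-1.0, 3.0, 9.0]))
dx = node.backward(np.ones(3))
```

Once a node exposes this forward/backward pair, the toolkit's graph machinery can place it anywhere in a network and differentiate through it like any built-in node.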

Many applications

Speech Recognition

Machine Translation

Image Recognition

Image Captioning

Text Processing and Relevance

Language Understanding

Language Modeling

Basic set-up

You can download pre-built CNTK binaries and get started right away.


Advanced set-up

More advanced users can download and build the source code for the ultimate flexibility and extensibility.

