Update: The deadline has been extended to 10/17
We’ve previously discussed why fast.ai’s first goal is to provide a way for any coder to become a deep learning expert. Until we address the huge shortage of deep learning expertise, it will be very difficult to fix all of the other problems in deep learning that hold it back from helping solve society’s most challenging problems.
For coders who wish to learn to use deep learning effectively, there is no obvious path. Doing a PhD takes many years, and you can’t even start until you have a CV strong enough to get admitted. Programs like the Deep Learning Summer School and the Insight Data Science Fellows require a PhD just to be accepted. Most blog posts assume that you’re already an expert, and those that don’t do little to make you one.
Those who make it through all of those obstacles then have to deal with the fact that deep learning is generally taught as a mathematical discipline – and, as we’ll discuss in our next post, mathematical disciplines have a particularly impractical learning path. For instance, Oxford University’s graduate-level course (available online) requires a high level of mathematical proficiency to understand the material, and does very little to teach the practical skills involved in deep learning coding. The deep learning book by Ian Goodfellow et al. has similar issues. (Extraordinarily, the book contains no code whatsoever, and very few mentions of practical computing issues.) Given that these are considered perhaps the strongest existing deep learning training materials, you can imagine what the average-quality ones look like! (To clarify: for those looking to enter academia, or experts looking to better understand research issues, these are excellent resources.) Rachel found that even for a mathematician who wants to build practical tools with deep learning, these resources aren’t very helpful.
In 2013, Rachel heard Ilya Sutskever (then a newly minted PhD working at Google, now director of OpenAI) speak at a meetup. She was less interested in the theory, and primarily wanted to be able to implement a neural net at home (Caffe, one of the first widely used open source deep learning frameworks, wasn’t released until January 2014). During the Q&A at the end, she asked how he initialized his network, and he said that it was part of a dirty bag of tricks that nobody published. How could anyone do this at their own organization when nobody was sharing the practical details? In this course, we want to give you practical tips: how to preprocess your data, which architecture to use when, and yes, how to initialize your weights.
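To give a flavour of what one of those “tricks” looks like once it is written down, here is a minimal sketch of Glorot (Xavier) uniform initialization, one commonly published scheme for initializing weights. This is just an illustration, not material from the course: the layer sizes and the helper name `glorot_uniform` are made up for the example.

```python
import numpy as np

def glorot_uniform(n_in, n_out):
    """Glorot/Xavier uniform initialization: draw weights from
    U(-limit, limit) with limit = sqrt(6 / (n_in + n_out))."""
    limit = np.sqrt(6.0 / (n_in + n_out))
    return np.random.uniform(-limit, limit, size=(n_in, n_out))

# Example: initialize a fully connected layer mapping 784 inputs to 256 units
W = glorot_uniform(784, 256)
b = np.zeros(256)  # biases are typically started at zero
```

The whole idea is simply to scale the random weights by the layer’s fan-in and fan-out, so that activations neither blow up nor shrink to nothing as they pass through the network – exactly the kind of practical detail that rarely made it into papers at the time.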
Our first step towards resolving these issues is to provide a series of courses designed to bring coders all the way to the cutting edge of deep learning research. On October 24, we will begin part one of the Data Institute deep learning certificate. This course will be (as far as we are aware) the first university-accredited, open-access, in-person deep learning certificate in the world.
Applications to attend need to be in by October 17 (extended from the original October 12 deadline) – so if you are interested, and are based in the San Francisco Bay Area, please apply right away by emailing your resume to [email protected]! If you want to get a sense of the teaching style, take a look at the link above – about 30 minutes into the talk, I give an introduction to convolutional neural networks. (The actual course will, of course, be paced and run very differently – the talk above was a brief introduction given as part of the launch of the Data Institute.)
To learn more about what will be covered in the certificate, please see our article What We Will Cover.
Read the official USF Data Institute description of our upcoming deep learning course (held on Monday evenings) and send your resume to [email protected] by Oct 17 to apply.