For those of you considering joining our deep learning certificate, I’m sure you’d like to hear more about what we will be covering. This first course is part 1 of a two-part series, which has the following high-level goals:
- Part 1: Get you to the point where you can successfully implement and debug best-practice deep learning techniques in the most widely used current areas, such as computer vision and natural language processing
- Part 2: Take you right up to the cutting edge of current research, and beyond, including applications in robotics and self-driving cars, time series analysis (such as for financial, marketing, and production applications), and large-scale imaging (including 3d imaging for medicine, and analysis of satellite images).
Here’s what we’re planning to cover in part 1 of the course:
- The opportunities and constraints in applying deep learning to solving a wide range of problems, including how deep learning is being applied today
- How to quickly get up and running using popular deep learning libraries such as Keras (see the short sketch after this list)
- How to test that a model is working correctly
- Just enough linear algebra, probability theory, and calculus to understand how deep learning works
- The role of each key component of deep learning: input, architecture, output, loss function, optimization, regularization, and testing
- The key techniques used for each of these components, why they are used, and how to apply them using popular deep learning libraries
- How each of these techniques is applied to achieve state-of-the-art results in computer vision and natural language processing
- Recent advances in deep learning for improving model training outcomes
- Techniques for getting good results even with smaller datasets
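To give a flavor of what “up and running” looks like, here is a minimal sketch of training a tiny Keras model on synthetic data. The layer sizes and data are made up purely for illustration, and some argument names differ between Keras versions (noted in the comments).

```python
# A minimal Keras sketch: a small fully connected classifier on random data.
# Layer sizes and the synthetic data are placeholders, for illustration only.
import numpy as np
from keras.models import Sequential
from keras.layers import Dense

# Fake data: 1000 samples of 20 features, 10 classes (one-hot encoded)
X = np.random.rand(1000, 20)
y = np.eye(10)[np.random.randint(0, 10, 1000)]

model = Sequential()
model.add(Dense(64, activation='relu', input_shape=(20,)))
model.add(Dense(10, activation='softmax'))

model.compile(optimizer='sgd', loss='categorical_crossentropy', metrics=['accuracy'])
model.fit(X, y, epochs=5, batch_size=32)  # note: in Keras 1 the argument is nb_epoch
```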
We’ll be covering these topics in a very different way to what you’ll be used to if you’ve taken any university-level math or CS courses in the past. We’ll be telling you all about our teaching philosophy in our next post. Our approach will be code-heavy and math-light, so we do ask that participants already have at least a year or two of solid coding experience. We’ll be using Python (via the wonderful Jupyter Notebook) for our examples, so if you’re not already familiar with Python, we’d strongly suggest going through a quick introduction to Python and to Jupyter (formerly known as IPython).
More Details
Here’s some more detail on what topics we will be covering. For convolutional neural networks (CNNs), primarily used for image classification, we will teach:
- Basics of image convolutions
- Introduction to the CNN architecture
- Going beyond basic SGD
- Regularization with dropout and weight decay
- Image classification in Theano
To learn more, you may be interested in this great visual explanation of image kernels.
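As a rough illustration of what a convolution is, here is a sketch that slides a standard 3x3 edge-detection kernel over a fake image using SciPy; the “image” here is just random pixels, purely for demonstration.

```python
# A small illustration of an image convolution: slide a 3x3 kernel over an
# image and take a weighted sum of the pixels under it at each position.
import numpy as np
from scipy.signal import convolve2d

image = np.random.rand(28, 28)          # a fake 28x28 grayscale image

edge_kernel = np.array([[-1, -1, -1],
                        [-1,  8, -1],
                        [-1, -1, -1]])  # responds strongly where intensity changes

feature_map = convolve2d(image, edge_kernel, mode='same')
print(feature_map.shape)                # (28, 28): one activation per pixel
```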
For recurrent neural networks (RNNs), used for natural language processing (NLP) and time series data, we will cover:
- Basics of NLP
- Introduction to RNNs
- Introduction to the LSTM architecture
- Char-rnn in Theano
To find out more now, you can read this excellent post by Andrej Karpathy.
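In the spirit of Karpathy’s char-rnn, here is a bare-bones sketch (in plain NumPy, with random placeholder weights and sizes) of a single recurrent step: the hidden state carries information from one character to the next.

```python
# One recurrent step of a character-level RNN: combine the current character
# with the previous hidden state, then predict a distribution over the next
# character. Weights are random placeholders, purely for illustration.
import numpy as np

vocab_size, hidden_size = 30, 100
Wxh = np.random.randn(hidden_size, vocab_size) * 0.01   # input  -> hidden
Whh = np.random.randn(hidden_size, hidden_size) * 0.01  # hidden -> hidden
Why = np.random.randn(vocab_size, hidden_size) * 0.01   # hidden -> output

def rnn_step(x_onehot, h_prev):
    """Update the hidden state and compute probabilities for the next character."""
    h = np.tanh(np.dot(Wxh, x_onehot) + np.dot(Whh, h_prev))
    logits = np.dot(Why, h)
    probs = np.exp(logits) / np.sum(np.exp(logits))      # softmax over next char
    return h, probs

h = np.zeros(hidden_size)
x = np.zeros(vocab_size)
x[0] = 1                                                 # one-hot for some character
h, next_char_probs = rnn_step(x, h)
```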
One of our primary goals for this course is to teach you practical techniques for training better models such as:
- Batch normalization
- Resnets
- Testing and Visualization
Check out this helpful advice on babysitting your learning process and Chris Olah’s illuminating visualizations of language representations.
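As a preview of two of these techniques, here is a sketch of a single residual block with batch normalization, written against the Keras 2 functional API (layer names differ slightly in Keras 1); the filter counts and input shape are arbitrary placeholders.

```python
# A single residual block: two convolutions with batch normalization, plus a
# "shortcut" that adds the block's input back onto its output.
from keras.layers import Input, Conv2D, BatchNormalization, Activation, add
from keras.models import Model

inp = Input(shape=(32, 32, 16))                # placeholder feature-map shape

x = Conv2D(16, (3, 3), padding='same')(inp)
x = BatchNormalization()(x)                    # normalize activations to keep training stable
x = Activation('relu')(x)
x = Conv2D(16, (3, 3), padding='same')(x)
x = BatchNormalization()(x)

out = Activation('relu')(add([x, inp]))        # the residual shortcut connection

block = Model(inputs=inp, outputs=out)
block.summary()
```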
There is a dangerous myth that you need huge datasets to effectively use deep learning. This is false, and we will teach you how to deal with data shortages, such as through:
- Data augmentation (sketched briefly after this list)
- Unsupervised learning and autoencoders
- Semi-supervised learning
- Transfer learning
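Here is a quick sketch of the first of these, data augmentation, using Keras’s ImageDataGenerator; the data is synthetic and a channels-last image format is assumed.

```python
# Data augmentation: each pass over the data sees randomly shifted/flipped
# variants of the images, effectively enlarging a small dataset.
import numpy as np
from keras.preprocessing.image import ImageDataGenerator

X = np.random.rand(100, 32, 32, 3)             # 100 fake RGB images
y = np.random.randint(0, 2, size=(100,))       # fake binary labels

datagen = ImageDataGenerator(rotation_range=15,
                             width_shift_range=0.1,
                             height_shift_range=0.1,
                             horizontal_flip=True)

for batch_X, batch_y in datagen.flow(X, y, batch_size=32):
    print(batch_X.shape)                       # a batch of augmented images
    break                                      # the generator loops forever
```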
Background & Preparation
To participate, you should either have some familiarity with matrix multiplication, basic differentiation, and the chain rule, or be willing to study them before the course starts. If you need a refresher on these concepts, we recommend the Khan Academy videos on matrix multiplication and the chain rule.
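That really is the extent of the math we assume; in code, it amounts to something like the following, a matrix multiplication in NumPy and a numerical check of the chain rule on a made-up function.

```python
# The background math, in code form: a matrix multiplication and a numerical
# check of the chain rule for f(x) = (3x + 1)^2.
import numpy as np

A = np.array([[1., 2.], [3., 4.]])
B = np.array([[5., 6.], [7., 8.]])
print(np.dot(A, B))                            # each entry is a row-of-A dot column-of-B

def f(x):
    return (3 * x + 1) ** 2

x = 2.0
analytic = 2 * (3 * x + 1) * 3                 # chain rule: outer derivative * inner derivative
numeric = (f(x + 1e-6) - f(x - 1e-6)) / 2e-6   # finite-difference approximation
print(analytic, numeric)                       # both are approximately 42
```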
We will make significant use of list comprehensions in Python - here is a useful introduction. It would also be very helpful to know your way around the basic Python data science tools: NumPy, SciPy, scikit-learn, pandas, Jupyter Notebook, and matplotlib. The best guide I know of to these tools is Python For Data Analysis. If you have no Python experience, you may want to prepare by reading Learn Python The Hard Way.
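If list comprehensions are new to you, this is all they are:

```python
# A list comprehension builds a new list by transforming (and optionally
# filtering) another sequence, all in one expression.
lengths = [len(w) for w in ["deep", "learning", "course"]]  # [4, 8, 6]
evens = [n * n for n in range(10) if n % 2 == 0]            # squares of even numbers
print(lengths, evens)
```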
Read the official USF Data Institute description of our upcoming deep learning course on Monday evenings and send your resume to [email protected] by Oct 12 to apply.