Resource description: Chapter 1, Theano Basics, helps the reader learn the main concepts of Theano in order to write code that can compile on different hardware architectures and automatically optimize complex mathematical objective functions.
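As a minimal illustration of this workflow (our own sketch, not code from the chapter), a Theano program declares symbolic variables, builds an expression such as an objective function and its gradient, and compiles it with theano.function so that Theano can optimize it for the available hardware:

import theano
import theano.tensor as T

x = T.dvector('x')                    # symbolic input vector
w = T.dvector('w')                    # symbolic parameter vector
loss = ((x * w).sum() - 1) ** 2       # a toy objective function
grad = T.grad(loss, w)                # symbolic differentiation

# compilation: Theano optimizes the graph and generates code for CPU or GPU
f = theano.function(inputs=[x, w], outputs=[loss, grad])
print(f([1.0, 2.0], [0.5, 0.5]))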
Chapter 2, Classifying Handwritten Digits with a Feedforward Network, introduces a simple, well-known, and historical example that served as an early proof of the strength of deep learning algorithms. The initial problem was to recognize handwritten digits.
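For reference, a one-hidden-layer feedforward classifier of the kind used on handwritten digits can be written in plain Theano along the following lines (a hedged sketch with illustrative sizes, not the chapter's exact code):

import numpy as np
import theano
import theano.tensor as T

x = T.matrix('x')        # batch of flattened 28x28 images
y = T.ivector('y')       # digit labels 0..9

W1 = theano.shared(0.01 * np.random.randn(784, 100), name='W1')
b1 = theano.shared(np.zeros(100), name='b1')
W2 = theano.shared(np.zeros((100, 10)), name='W2')
b2 = theano.shared(np.zeros(10), name='b2')

hidden = T.tanh(T.dot(x, W1) + b1)
p_y = T.nnet.softmax(T.dot(hidden, W2) + b2)
loss = -T.mean(T.log(p_y)[T.arange(y.shape[0]), y])   # negative log-likelihood

params = [W1, b1, W2, b2]
updates = [(p, p - 0.1 * T.grad(loss, p)) for p in params]  # plain gradient descent
train = theano.function([x, y], loss, updates=updates)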
Chapter 3, Encoding Word into Vector, addresses one of the main challenges with neural nets: connecting real-world data to the input of a neural net, in particular for categorical and discrete data. This chapter presents an example of how to build an embedding space through training with Theano.
Such embeddings are very useful in machine translation, robotics, image captioning, and so on, because they translate real-world data into arrays of vectors that can be processed by neural nets.
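A minimal sketch of the idea, with illustrative names and sizes rather than the chapter's code: the embedding space is a shared matrix, and looking up a word is simply indexing into it.

import numpy as np
import theano
import theano.tensor as T

vocab_size, emb_dim = 10000, 50            # illustrative sizes
embeddings = theano.shared(
    0.01 * np.random.randn(vocab_size, emb_dim), name='embeddings')

word_indices = T.ivector('word_indices')   # a batch of word ids
word_vectors = embeddings[word_indices]    # indexing performs the lookup

lookup = theano.function([word_indices], word_vectors)
print(lookup(np.array([3, 42, 7], dtype='int32')).shape)   # (3, 50)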
Chapter 4, Generating Text with a Recurrent Neural Net, introduces recurrence in neural nets with a simple practical example: generating text.
Recurrent neural nets (RNNs) are a popular topic in deep learning, enabling more possibilities for sequence prediction, sequence generation, machine translation, and connected objects. Natural Language Processing (NLP) is a second field of interest that has driven the research for new machine learning techniques.
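The core of such a model is a recurrent step applied along the sequence; in Theano this is typically expressed with theano.scan, as in the following hedged sketch (variable names and sizes are ours):

import numpy as np
import theano
import theano.tensor as T

n_in, n_hidden = 20, 50
W_in = theano.shared(0.01 * np.random.randn(n_in, n_hidden))
W_rec = theano.shared(0.01 * np.random.randn(n_hidden, n_hidden))

x_seq = T.matrix('x_seq')            # (time_steps, n_in)
h0 = T.zeros((n_hidden,))            # initial hidden state

def step(x_t, h_prev):
    # one time step: new hidden state from current input and previous state
    return T.tanh(T.dot(x_t, W_in) + T.dot(h_prev, W_rec))

h_seq, _ = theano.scan(step, sequences=x_seq, outputs_info=h0)
rnn = theano.function([x_seq], h_seq)
print(rnn(np.random.randn(5, n_in).astype(theano.config.floatX)).shape)   # (5, 50)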
Chapter 5, Analyzing Sentiments with a Bidirectional LSTM, applies embeddings and recurrent layers to a new natural language processing task, sentiment analysis. It acts as a kind of validation of the previous chapters.
At the same time, it demonstrates an alternative way to build neural nets on Theano, with a higher-level library, Keras.
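With Keras on the Theano backend, such a sentiment model can be assembled in a few lines; the following is a hedged sketch with illustrative sizes, not the chapter's exact architecture:

from keras.models import Sequential
from keras.layers import Embedding, LSTM, Bidirectional, Dense

model = Sequential()
model.add(Embedding(20000, 128))            # word indices -> dense vectors
model.add(Bidirectional(LSTM(64)))          # reads the sequence in both directions
model.add(Dense(1, activation='sigmoid'))   # positive/negative sentiment

model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
# model.fit(x_train, y_train, ...) would then train on padded word-index sequences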
Chapter 6, Locating with Spatial Transformer Networks, applies recurrence to images to read multiple digits on a page at once. This time, we take the opportunity to rewrite the classification network for handwritten digit images, as well as our recurrent models, with the help of Lasagne, a library of built-in modules for deep learning with Theano.
The Lasagne library helps design neural networks and experiment with them faster. With this help, we address object localization, a common computer vision challenge, using Spatial Transformer modules to improve our classification scores.
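To give an idea of the Lasagne style used from here on, the following hedged sketch stacks an input layer, a convolution, and a dense softmax classifier (Lasagne also provides a TransformerLayer for spatial transformer modules); sizes are illustrative, not the chapter's:

import theano
import theano.tensor as T
import lasagne
from lasagne.layers import InputLayer, Conv2DLayer, DenseLayer, get_output

x = T.tensor4('x')                                   # (batch, channels, height, width)
net = InputLayer(shape=(None, 1, 28, 28), input_var=x)
net = Conv2DLayer(net, num_filters=32, filter_size=(3, 3))
net = DenseLayer(net, num_units=10,
                 nonlinearity=lasagne.nonlinearities.softmax)

prediction = get_output(net)                          # symbolic forward pass
predict = theano.function([x], prediction)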
Chapter 7, Classifying Images with Residual Networks, classifies any type of image with the best accuracy. At the same time, to build more complex nets with ease, it relies on Lasagne, a library based on the Theano framework with many already-implemented components, to help implement neural nets faster with Theano.
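The building block of such residual networks is easy to express in Lasagne: the block's input is added back to the output of its convolutions through a shortcut connection, so the block only has to learn a residual. A hedged sketch with illustrative filter sizes, not the chapter's exact code:

from lasagne.layers import (InputLayer, Conv2DLayer, ElemwiseSumLayer,
                            NonlinearityLayer, batch_norm)
from lasagne.nonlinearities import rectify, identity

def residual_block(incoming, num_filters):
    conv1 = batch_norm(Conv2DLayer(incoming, num_filters, (3, 3), pad='same',
                                   nonlinearity=rectify))
    conv2 = batch_norm(Conv2DLayer(conv1, num_filters, (3, 3), pad='same',
                                   nonlinearity=identity))
    # shortcut connection: element-wise sum of the block input and its output
    summed = ElemwiseSumLayer([incoming, conv2])
    return NonlinearityLayer(summed, nonlinearity=rectify)

net = InputLayer(shape=(None, 16, 32, 32))
net = residual_block(net, 16)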
Chapter 8, Translating and Explaining through Encoding–Decoding Networks, presents encoding-decoding techniques: applied to text, these techniques are heavily used in machine translation and simple chatbot systems. Applied to images, they serve scene segmentation and object localization. Lastly, image captioning is a mixed task, encoding images and decoding to text.
This chapter goes one step further with a very popular high-level library, Keras, that simplifies the development of neural nets with Theano even more.
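As an illustration of the encoder-decoder idea in Keras (a hedged sketch with illustrative sizes; real translation systems add attention and deeper stacks), the encoder compresses the source sequence into a vector that the decoder expands into the target sequence:

from keras.models import Sequential
from keras.layers import Embedding, LSTM, RepeatVector, TimeDistributed, Dense

src_vocab, tgt_vocab, tgt_len = 8000, 8000, 20    # illustrative sizes

model = Sequential()
model.add(Embedding(src_vocab, 128))              # encode source word indices
model.add(LSTM(256))                              # encoder: compress to a fixed vector
model.add(RepeatVector(tgt_len))                  # feed that vector at every output step
model.add(LSTM(256, return_sequences=True))       # decoder
model.add(TimeDistributed(Dense(tgt_vocab, activation='softmax')))

model.compile(optimizer='adam', loss='categorical_crossentropy')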
Chapter 9, Selecting Relevant Inputs or Memories with the Mechanism of Attention, addresses more complicated tasks: the machine learning world has been looking for higher levels of intelligence inspired by nature, namely reasoning, attention, and memory. In this chapter, the reader will discover memory networks applied to the main purpose of artificial intelligence for natural language processing (NLP): language understanding.
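At its core, attention scores each memory against a query, normalizes the scores with a softmax, and returns the weighted sum of the memories; the following Theano sketch uses illustrative names, not the chapter's code:

import theano
import theano.tensor as T

memories = T.matrix('memories')   # (n_facts, dim): candidate inputs or memories
query = T.vector('query')         # (dim,): current question or state

scores = T.dot(memories, query)                             # relevance of each memory
weights = T.nnet.softmax(scores.dimshuffle('x', 0))[0]      # attention weights
attended = T.dot(weights, memories)                         # weighted sum: selected content

attention = theano.function([memories, query], [weights, attended])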
Chapter 10, Predicting Times Sequence with Advanced RNN, covers time sequences, an important field where machine learning has been used heavily. This chapter presents advanced techniques with Recurrent Neural Networks (RNNs) to get state-of-the-art results.
Chapter 11, Learning from the Environment with Reinforcement, covers reinforcement learning, the vast area of machine learning that consists of training an agent to behave in an environment (such as a video game) so as to optimize a quantity (maximizing the game score) by performing certain actions in the environment (pressing buttons on the controller) and observing what happens.
This new paradigm of reinforcement learning opens a completely new path for designing algorithms and interactions between computers and the real world.
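The loop described above can be made concrete with a minimal tabular Q-learning update (a hedged sketch; the env.step API and the sizes are hypothetical placeholders, while the chapter works with real environments and neural networks):

import numpy as np

n_states, n_actions = 16, 4
Q = np.zeros((n_states, n_actions))
alpha, gamma, epsilon = 0.1, 0.99, 0.1      # learning rate, discount, exploration

def q_learning_step(env, state):
    # epsilon-greedy action selection
    if np.random.rand() < epsilon:
        action = np.random.randint(n_actions)
    else:
        action = int(np.argmax(Q[state]))
    next_state, reward, done = env.step(action)   # hypothetical environment API
    # move Q(s, a) toward the observed reward plus the discounted best future value
    target = reward + (0 if done else gamma * np.max(Q[next_state]))
    Q[state, action] += alpha * (target - Q[state, action])
    return next_state, done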
Chapter 12, Learning Features with Unsupervised Generative Networks, covers unsupervised learning, which relies on training algorithms that do not require the data to be labeled. These algorithms try to infer the hidden labels from the data, called the factors, and, for some of them, to generate new synthetic data.
Unsupervised training is very useful in many cases: when no labeling exists, when labeling the data by hand is too expensive, or when the dataset is too small and feature engineering would overfit the data. In this last case, extra amounts of unlabeled data help train better features as a basis for supervised learning.
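One unsupervised model family that illustrates the idea is the autoencoder: it is trained to reconstruct its own input, so no labels are required, and its narrow hidden layer learns a compressed representation of the factors. A hedged Keras sketch with illustrative sizes (the chapter itself focuses on generative networks):

from keras.models import Sequential
from keras.layers import Dense

autoencoder = Sequential()
autoencoder.add(Dense(32, activation='relu', input_dim=784))   # encoder: compress
autoencoder.add(Dense(784, activation='sigmoid'))              # decoder: reconstruct

# the target is the input itself: autoencoder.fit(x_train, x_train, ...)
autoencoder.compile(optimizer='adam', loss='binary_crossentropy')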
Chapter 13, Extending Deep Learning with Theano, extends the set of possibilities in deep learning with Theano. It addresses ways to create new operators for the computation graph, either in Python for simplicity or in C to overcome the Python overhead, for either the CPU or the GPU. It also introduces the basic concepts of parallel programming for the GPU. Lastly, we open the field of General Intelligence, based on the first skills developed in this book, to develop new skills in a gradual way, improving itself one step further.
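Creating a new Python operator for the computation graph follows a small, well-defined pattern: subclass theano.Op, describe the output type in make_node, and implement the computation in perform. The following toy operator (close to the classic DoubleOp example from Theano's documentation) simply doubles its input:

import numpy as np
import theano
import theano.tensor as T

class DoubleOp(theano.Op):
    """Toy operator that multiplies its input by two."""
    __props__ = ()

    def make_node(self, x):
        x = T.as_tensor_variable(x)
        # one input, one output of the same type
        return theano.Apply(self, [x], [x.type()])

    def perform(self, node, inputs, output_storage):
        (x,) = inputs
        output_storage[0][0] = x * 2        # plain NumPy computation

x = T.dmatrix('x')
f = theano.function([x], DoubleOp()(x))
print(f(np.ones((2, 2))))                   # [[2. 2.] [2. 2.]]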
This source code package does not currently contain source files that can be displayed directly; please download the source code package.