Mastering AI and Machine Learning with Python: From Fundamentals to Advanced Deep Learning Vol-II
Product Information
ISBN-13: 9798283669786
Publisher: Independently published
Author: Anshuman Mishra
Publication date: 2025/05/12
Binding: Paperback
Dimensions: 27.9 cm × 21.6 cm × 1.4 cm (height/width/thickness)
Weight: 635 g
Product Description
Chapter 9: Convolutional Neural Networks (CNNs)
This chapter likely begins by revisiting the fundamental concepts of convolutional operations. It would meticulously explain how convolution works, including the roles of filters (kernels), strides, padding, and activation functions in extracting meaningful features from image data. The concept of feature maps, which represent the output of applying filters at different layers, would be thoroughly discussed, emphasizing how these maps capture hierarchical representations of visual information.
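To make those moving parts concrete, here is a minimal NumPy sketch of a single-channel 2D convolution showing the roles of the kernel, stride, and padding. The Sobel kernel, the toy image, and the ReLU step are illustrative assumptions, not code from the book:

```python
import numpy as np

def conv2d(image, kernel, stride=1, padding=0):
    """Minimal 2D convolution (technically cross-correlation, as in most
    deep learning libraries) over a single-channel image."""
    if padding > 0:
        image = np.pad(image, padding, mode="constant")  # zero-pad all sides
    kh, kw = kernel.shape
    ih, iw = image.shape
    # Output size follows the standard formula: (I - K + 2P) / S + 1
    # (padding is already folded into ih/iw here).
    oh = (ih - kh) // stride + 1
    ow = (iw - kw) // stride + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            patch = image[i * stride : i * stride + kh,
                          j * stride : j * stride + kw]
            out[i, j] = np.sum(patch * kernel)  # element-wise multiply-accumulate
    return out

# A vertical-edge-detecting Sobel kernel applied to a toy 6x6 "image".
image = np.zeros((6, 6))
image[:, 3:] = 1.0                      # left half dark, right half bright
sobel_x = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=float)
feature_map = np.maximum(conv2d(image, sobel_x, stride=1, padding=1), 0)  # ReLU
print(feature_map)                      # strong responses along the vertical edge
```

The output of such a pass is exactly a feature map in the sense described above: high values where the filter's pattern is present, near zero elsewhere.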
The chapter would then transition into exploring various influential CNN architectures:
- LeNet: This pioneering CNN architecture, designed for handwritten digit recognition, would be presented as a foundational example, illustrating the basic building blocks of a CNN. Its layers, including convolutional layers, pooling layers (like average pooling), and fully connected layers, would be explained in detail. The historical significance of LeNet in the development of modern CNNs would also likely be highlighted.
- AlexNet: This groundbreaking architecture, which achieved remarkable success in the ImageNet Large Scale Visual Recognition Challenge (ILSVRC), would be analyzed for its key innovations. These include the use of ReLU activation functions, dropout for regularization, and the utilization of multiple GPUs for training. The impact of AlexNet on the field of computer vision and the resurgence of deep learning would be emphasized.
- VGG (Visual Geometry Group): The chapter would delve into the VGG networks, known for their deep and uniform architectures consisting of small convolutional filters stacked together. The concepts of VGG16 and VGG19, along with their consistent use of 3×3 convolutional kernels, would be explained. The advantages and limitations of VGG networks, such as their depth and large number of parameters, would likely be discussed.
- ResNet (Residual Network): This architecture, which addressed the vanishing gradient problem in very deep networks through the introduction of residual connections (skip connections), would be thoroughly examined. The concept of identity mappings and how they facilitate the training of extremely deep networks would be explained. Different ResNet variants (e.g., ResNet-50, ResNet-101) and their performance benefits would likely be covered; a minimal code sketch of a residual block appears right after this list.
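As a rough illustration of the skip-connection idea, here is a basic residual block. The book's framework of choice isn't stated on this page; this sketch assumes PyTorch, and the channel count and input size are arbitrary:

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """A basic ResNet-style block: two 3x3 convolutions plus a skip
    connection that adds the input back onto the output (identity mapping)."""

    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU()

    def forward(self, x):
        identity = x                      # saved for the skip connection
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        out = out + identity              # gradients flow through this add unchanged
        return self.relu(out)

x = torch.randn(1, 64, 32, 32)            # batch of one 64-channel feature map
block = ResidualBlock(64)
print(block(x).shape)                     # torch.Size([1, 64, 32, 32])
```

Because the addition passes gradients straight through, stacking many such blocks does not starve early layers of gradient signal, which is what makes hundred-layer networks trainable.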
The chapter would then likely turn to the core computer-vision tasks these architectures are applied to:
- Image Classification: This fundamental task of assigning a single label to an entire image based on its content would be discussed. Different loss functions (e.g., cross-entropy) and evaluation metrics (e.g., accuracy, F1-score) used in image classification would be explained; a small worked example follows this list.
- Object Detection: This more complex task of identifying and localizing multiple objects within an image using bounding boxes would be introduced. Early object detection architectures and the fundamental challenges involved would likely be discussed, setting the stage for more advanced techniques covered in later chapters.
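As a small worked example of the cross-entropy loss mentioned above (the three-class logits and labels here are made up for illustration):

```python
import numpy as np

def cross_entropy(logits, label):
    """Cross-entropy loss for one example: softmax over raw scores,
    then the negative log-probability of the true class."""
    shifted = logits - logits.max()       # subtract max for numerical stability
    probs = np.exp(shifted) / np.exp(shifted).sum()
    return -np.log(probs[label])

# Three-class example: the model is fairly confident in class 2.
logits = np.array([0.5, 1.2, 3.1])
print(cross_entropy(logits, label=2))     # small loss: prediction matches the label
print(cross_entropy(logits, label=0))     # much larger loss for a wrong label
```

The loss is near zero when the network assigns high probability to the correct class and grows without bound as that probability approaches zero, which is what drives learning in classification networks.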
The following chapter would shift focus to sequential data and how Recurrent Neural Networks (RNNs) are designed to process it. The fundamental concept of how RNNs maintain an internal state (memory) to handle sequences would be explained, along with the challenges associated with training vanilla RNNs, such as the vanishing and exploding gradient problems.
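A minimal sketch of that recurrent state update, assuming a vanilla RNN cell with tanh activation; the weight shapes, sequence length, and random inputs are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Dimensions for a toy vanilla RNN cell.
input_size, hidden_size = 4, 8
W_xh = rng.normal(scale=0.1, size=(hidden_size, input_size))   # input -> hidden
W_hh = rng.normal(scale=0.1, size=(hidden_size, hidden_size))  # hidden -> hidden (the "memory")
b_h = np.zeros(hidden_size)

def rnn_step(x_t, h_prev):
    """One step of a vanilla RNN: h_t = tanh(W_xh @ x_t + W_hh @ h_prev + b)."""
    return np.tanh(W_xh @ x_t + W_hh @ h_prev + b_h)

# Unroll over a 5-step sequence; the hidden state carries information
# forward. Backpropagating through this unrolled chain multiplies by
# W_hh repeatedly, which is the root of the vanishing/exploding
# gradient problems mentioned above.
h = np.zeros(hidden_size)
for t in range(5):
    x_t = rng.normal(size=input_size)
    h = rnn_step(x_t, h)
print(h)                                  # final hidden state summarizing the sequence
```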