Friday, June 30, 2017

[ Gradle IA ] Ch1 - Introduction to project automation

Preface 
This chapter covers 
■ Understanding the benefits of project automation
■ Getting to know different types of project automation
■ Surveying the characteristics and architecture of build tools
■ Exploring the pros and cons of build tool implementations

Tom and Joe work as software developers for Acme Enterprises, a startup company that offers a free online service for finding the best deals in your area. The company recently received investor funding and is now frantically working toward its first official launch. Tom and Joe are in a time crunch. By the end of next month, they’ll need to present a first version of the product to their investors. The chief technology officer (CTO) pats them on the back; life is good. However, the manual and error-prone build and delivery process slows them down significantly. As a result, the team has to live with sporadic compilation issues, inconsistently built software artifacts, and failed deployments. This is where build tools come in.

This chapter will give you a gentle introduction into why it’s a good idea to automate your project and how build tools can help get the job done. We’ll talk about the benefits that come with sufficient project automation, the types and characteristics of project automation, and the tooling that enables you to implement an automated process. 

Two traditional build tools dominate Java-based projects: Ant and Maven. We’ll go over their main features, look at some build code, and talk about their shortcomings. Lastly, we’ll discuss the requirements for a build tool that will fulfill the needs of modern-day project automation. 

Life without project automation 
Going back to Tom and Joe’s predicament, let’s go over why project automation is such a no-brainer. Believe it or not, lots of developers face the following situations. The reasons are varied, but probably sound familiar. 
- My IDE does the job 
At Acme, developers do all their coding within the IDE, from navigating through the source code, implementing new features, and compiling and refactoring code, to running unit and integration tests. Whenever new code is developed, they press the Compile button. If the IDE tells them that there’s no compilation error and the tests are passing, they check the code into version control so it can be shared with the rest of the team. The IDE is a powerful tool, but every developer first needs to install it, in a standardized version, to be able to perform all of these tasks; Joe learns this lesson the hard way when he uses a new feature that is only supported by the latest version of the compiler.

- It works on my box. 
Staring down a ticking clock, Joe checks out the code from version control and realizes that it doesn’t compile anymore. It seems like one of the classes is missing from the source code. He calls Tom, who’s puzzled that the code doesn’t compile on Joe’s machine. After discussing the issue, Tom realizes that he probably forgot to check in one of his classes, which causes the compilation process to fail. The rest of the team is now blocked and can’t continue their work until Tom checks in the missing source file.

- The code integration is a complete disaster. 
Acme has two different development groups, one specializing in building the web-based user interface and the other working on the server-side backend code. Both teams sit together at Tom’s computer to run the compilation for the whole application, build a deliverable, and deploy it to a web server in a test environment. The first cheers quickly fade when the team sees that some of the functionality isn’t working as expected. Some of the URLs simply don’t resolve or result in an error. Even though the team wrote some functional tests, they didn’t get exercised regularly in the IDE.

- The testing process slows to a crawl. 
The quality assurance (QA) team is eager to get their hands on a first version of the application. As you can imagine, they aren’t too happy about testing low-quality software. With every fix the development team puts into place, the same manual process has to be run through: someone stops their actual work to check the new changes into version control, build a new version from the IDE, and copy the deliverable to the test server. Each and every time, a developer is fully occupied and can’t add any other value to the company. After weeks of testing and a successful demo to the investors, the QA team says the application is ready for prime time.

- Deployment turns into a marathon. 
From experience, the team knows that the outcome of deploying an application is unpredictable due to unforeseen problems. The infrastructure and runtime environment has to be set up, the database has to be prepared with seed data, the actual deployment of the application has to happen, and initial health monitoring needs to be performed. Of course, the team has an action plan in place, but each of the steps has to be executed manually.

The product launch is a raving success. The following week, the CTO swings by the developers’ desks; he already has new ideas to improve the user experience. A friend has told him about agile development, a time-boxed iterative approach for implementing and releasing software. He proposes that the team introduce two-week release cycles. Tom and Joe look at each other, both horrified at the manual and repetitive work that lies ahead. Together, they plan to automate each step of the implementation and delivery process to reduce the risk of failed builds, late integration, and painful deployments.

Benefits of project automation 
This story makes clear how vital project automation is for team success. These days, time to market has become more important than ever. Being able to build and deliver software in a repeatable and consistent way is key. Let’s look at the benefits of automating your project. 

Prevents manual intervention 
Having to manually perform steps to produce and deliver software is time-consuming and error-prone. Frankly, as a developer and system administrator, you have better things to do than to handhold a compilation process or to copy a file from directory A to directory B. We’re all human. Not only can you make mistakes along the way, but manual intervention also takes away from the time you desperately need to get your actual work done. Any step in your software development process that can be automated should be automated. 

Creates repeatable builds 
The actual building of your software usually follows predefined and ordered steps. For example, you compile your source code first, then run your tests, and lastly assemble a deliverable. You’ll need to run the same steps over and over again—every day. This should be as easy as pressing a button. The outcome of this process needs to be repeatable for everyone who runs the build. 

Makes builds portable 
You’ve seen that being able to run a build from an IDE is very limiting. First of all, you’ll need to have the particular product installed on your machine. Second, the IDE may only be available for a specific operating system. An automated build shouldn’t require a specific runtime environment to work, whether this is an operating system or an IDE. Optimally, the automated tasks should be executable from the command line, which allows you to run the build from any machine you want, whenever you want. 

Types of project automation 
You saw at the beginning of this chapter that a user can request a build to be run. A user can be any stakeholder who wants to trigger the build, like a developer, a QA team member, or a product owner. Our friend Tom, for example, pressed the Compile button in his IDE whenever he wanted the code to be compiled. On-demand automation is only one type of project automation. You can also schedule your build to be executed at predefined times or when a specific event occurs. 

On-demand builds 
The typical use case for on-demand automation is when a user triggers a build on his or her machine, as shown in figure 1.1. It’s common practice that a version control system (VCS) manages the versioning of the build definition and source code files. In most cases, the user executes a script on the command line that performs tasks in a predefined order—for example, compiling source code, copying a file from directory A to directory B, or assembling a deliverable. Usually, this type of automation is executed multiple times per day. 

Triggered builds 
If you’re practicing agile software development, you’re interested in receiving fast feedback about the health of your project. You’ll want to know if your source code can be compiled without any errors or if there’s a potential software defect indicated by a failed unit or integration test. This type of automation is usually triggered if code was checked into version control, as shown in figure 1.2. 

Scheduled builds 
Think of scheduled automation as a time-based job scheduler (in the context of a Unix-based operating system, also known as a cron job). It runs at particular intervals or at concrete times—for example, every morning at 1:00 a.m. or every 15 minutes. As with all cron jobs, scheduled automation generally runs on a dedicated server. Figure 1.3 shows a scheduled build that runs every morning at 5:00 a.m. This kind of automation is particularly useful for generating reports or documentation for your project. 

The practice that implements scheduled and triggered builds is commonly referred to as Continuous Integration (CI). You’ll learn more about CI in chapter 13. After identifying the benefits and types of project automation, it’s time to discuss the tools that allow you to implement this functionality. 

Build tools 
Naturally, you may ask yourself why you’d need another tool to implement automation for your project. You could just write the logic as an executable script, such as a shell script. Think back to the goals of project automation we discussed earlier. You want a tool that allows you to create a repeatable, reliable, and portable build without manual intervention. A shell script wouldn’t be easily portable from a UNIX-based system to a Windows-based system, so it doesn’t meet your criteria. 

What’s a build tool? 
What you need is a programming utility that lets you express your automation needs as executable, ordered tasks. Let’s say you want to compile your source code, copy the generated class files into a directory, and assemble a deliverable that contains the class files. A deliverable could be a ZIP file, for example, that can be distributed to a runtime environment. Figure 1.4 shows the tasks and their execution order for the described scenario. 

Each of these tasks represents a unit of work—for example, compilation of source code. The order is important. You can’t create the ZIP archive if the required class files haven’t been compiled. Therefore, the compilation task needs to be executed first. 

DIRECTED ACYCLIC GRAPH 
Internally, tasks and their interdependencies are modeled as a Directed Acyclic Graph (DAG). A DAG is a data structure from computer science and contains the following two elements: 
- Node: A unit of work; in the case of a build tool, this is a task (for example, compiling source code).
- Directed edge: A directed edge, also called an arrow, representing the relationship between nodes. In our situation, the arrow means depends on. If a task defines dependent tasks, they’ll need to execute before the task itself can be executed. Often this is the case because the task relies on the output produced by another task. Here’s an example: to execute the task “assemble deliverable,” you’ll need to run its dependent tasks “copy class files to directory” and “compile source code.”

Each node knows about its own execution state. A node—and therefore the task—can only be executed once. For example, if two different tasks depend on the task “source code compilation,” you only want to execute it once. Figure 1.5 shows this scenario as a DAG. 

As a developer, you won’t have to deal directly with the DAG representation of your build. This job is done by the build tool. Later in this chapter, you’ll see how some Java-based build tools use these concepts in practice. 
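To make this concrete, here is a minimal Python sketch (a toy illustration with made-up task names, not any build tool's actual API) of how a task graph can be modeled and how each task is executed exactly once, in dependency order:

  # Toy task graph: task -> list of tasks it depends on (hypothetical names).
  tasks = {
      'compileSourceCode': [],
      'copyClassFilesToDir': ['compileSourceCode'],
      'assembleDeliverable': ['copyClassFilesToDir', 'compileSourceCode'],
  }

  executed = set()

  def run(task):
      if task in executed:            # a node is executed at most once
          return
      for dependency in tasks[task]:  # "depends on" edges are executed first
          run(dependency)
      print('Executing', task)
      executed.add(task)

  run('assembleDeliverable')
  # Prints: compileSourceCode, copyClassFilesToDir, assembleDeliverable

Even though both remaining tasks depend on "compileSourceCode", the executed set ensures it runs only once, just as described for figure 1.5.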

Anatomy of a build tool 
It’s important to understand the interactions among the components of a build tool, the actual definition of the build logic, and the data that goes in and out. Let’s discuss each of the elements and their particular responsibilities. 

BUILD FILE 
The build file contains the configuration needed for the build, defines external dependencies such as third-party libraries, and contains the instructions to achieve a specific goal in the form of tasks and their interdependencies. The tasks we discussed in the scenario earlier—compiling source code, copying files to a directory, and assembling a ZIP file—would be defined in the build file. Oftentimes, a scripting language is used to express the build logic. That’s why a build file is also referred to as a build script.

BUILD INPUTS AND OUTPUTS 
A task takes an input, works on it by executing a series of steps, and produces an output. Some tasks may not need any input to function correctly, nor is creating an output considered mandatory. Complex task dependency graphs may use the output of a dependent task as input. Figure 1.7 demonstrates the consumption of inputs and the creation of outputs in a task graph. 

I already mentioned an example that follows this workflow. We took a bunch of source code files as input, compiled them to classes, and assembled a deliverable as output. The compilation and assembly processes each represent one task. The assembly of the deliverable only makes sense if you compiled the source code first. Therefore, both tasks need to retain their order. 

BUILD ENGINE 
The build file’s step-by-step instructions or rule set must be translated into an internal model the build tool can understand. The build engine processes the build file at runtime, resolves dependencies between tasks, and sets up the entire configuration needed to command the execution, as shown in figure 1.8. 

DEPENDENCY MANAGER 
The dependency manager is used to process declarative dependency definitions for your build file, resolve them from an artifact repository (for example, the local file system, an FTP server, or an HTTP server), and make them available to your project. A dependency is generally an external, reusable library in the form of a JAR file (for example, Log4J for logging support). The repository acts as storage for dependencies, and organizes and describes them by identifiers, such as name and version. A typical repository can be an HTTP server or the local file system. Figure 1.9 illustrates how the dependency manager fits into the architecture of a build tool. 

Many libraries depend on other libraries, called transitive dependencies. The dependency manager can use metadata stored in the repository to automatically resolve transitive dependencies as well. A build tool is not required to provide a dependency management component. 
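As a rough sketch of how transitive resolution works (toy Python with made-up artifact names and metadata, not a real repository format):

  # Made-up repository metadata: artifact -> its declared direct dependencies.
  repository = {
      'my-app': ['web-framework:1.0'],
      'web-framework:1.0': ['logging-lib:2.3', 'collections-lib:4.1'],
      'logging-lib:2.3': [],
      'collections-lib:4.1': [],
  }

  def resolve(artifact, resolved=None):
      """Collect the direct and transitive dependencies of an artifact."""
      if resolved is None:
          resolved = []
      for dependency in repository.get(artifact, []):
          if dependency not in resolved:
              resolved.append(dependency)
              resolve(dependency, resolved)
      return resolved

  print(resolve('my-app'))  # ['web-framework:1.0', 'logging-lib:2.3', 'collections-lib:4.1']

A real dependency manager additionally handles versions, conflicts, and downloading and caching the artifacts themselves; the sketch only shows the walk over the metadata.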

Java build tools 
In this section, we look at two popular, Java-based build tools: Ant and Maven. We’ll discuss their characteristics, see a sample script in action, and outline the shortcomings of each tool. 

Apache Ant 
Apache Ant (Another Neat Tool) is an open source build tool written in Java. Its main purpose is to provide automation for typical tasks needed in Java projects, such as compiling source files to classes, running unit tests, packaging JAR files, and creating Javadoc documentation. Additionally, it provides a wide range of predefined tasks for file system and archiving operations. If any of these tasks don’t fulfill your requirements, you can extend the build with new tasks written in Java. While Ant’s core is written in Java, your build file is expressed through XML, which makes it portable across different runtime environments. Ant does not provide a dependency manager, so you’ll need to manage external dependencies yourself. However, Ant integrates well with another Apache project called Ivy, a full-fledged, standalone dependency manager. Integrating Ant with Ivy requires additional effort and has to be done manually for each individual project.

SHORTCOMINGS 
Despite all this flexibility, you should be aware of some shortcomings: 
* Using XML as the definition language for your build logic results in overly large and verbose build scripts compared to build tools with a more succinct definition language.
* Complex build logic leads to long and unmaintainable build scripts. Trying to define conditional logic like if-then/if-then-else statements becomes a burden when using a markup language.
* Ant doesn’t give you any guidelines on how to set up your project. In an enterprise setting, this often leads to a build file that looks different every time.
* You want to know how many classes have been compiled or how many tasks have been executed in a build. Ant doesn’t expose an API that lets you query information about the in-memory model at runtime.
* Using Ant without Ivy makes it hard to manage dependencies. Oftentimes, you’ll need to check your JAR files into version control and manage their organization manually.


Apache Maven 
Using Ant across many projects within an enterprise has a big impact on maintainability. With flexibility comes a lot of duplicated code snippets that are copied from one project to another. The Maven team realized the need for a standardized project layout and unified build lifecycle. Maven picks up on the idea of convention over configuration, meaning that it provides sensible default values for your project configuration and its behavior. The project automatically knows what directories to search for source code and what tasks to perform when running the build. You can set up a full project with a few lines of XML as long as your project adheres to the default values. As an extra, Maven also has the ability to generate HTML project documentation that includes the Javadocs for your application. 

Maven’s core functionality can be extended by custom logic developed as plugins. The community is very active, and you can find a plugin for almost every aspect of build support, from integration with other development tools to reporting. If a plugin doesn’t exist for your specific needs, you can write your own extension.

SHORTCOMINGS 
As with Ant, be aware of some of Maven’s shortcomings: 
* Maven proposes a default structure and lifecycle for a project that often is too restrictive and may not fit your project’s needs.
* Writing custom extensions for Maven is overly cumbersome. You’ll need to learn about Mojos (Maven’s internal extension API), how to provide a plugin descriptor (again in XML), and about specific annotations to provide the data needed in your extension implementation.
* Earlier versions of Maven (< 2.0.9) automatically try to update their own core plugins (for example, support for unit tests) to the latest version. This may cause brittle and unstable builds.


Requirements for a next-generation build tool 
In the last section, we examined the features, advantages, and shortcomings of the established build tools Ant and Maven. It became clear that you often have to compromise on the supported functionality by choosing one or the other. Either you choose full flexibility and extensibility but get weak project standardization, tons of boilerplate code, and no support for dependency management by picking Ant; or you go with Maven, which offers a convention over configuration approach and a seamlessly integrated dependency manager, but an overly restrictive mindset and cumbersome plugin system. 

Wouldn’t it be great if a build tool could cover a middle ground? Here are some features that an evolved build tool should provide: 
■ Expressive, declarative, and maintainable build language.
■ Standardized project layout and lifecycle, but full flexibility and the option to fully configure the defaults.
■ Easy-to-use and flexible ways to implement custom logic.
■ Support for project structures that consist of more than one project to build your deliverables.
■ Support for dependency management.
■ Good integration and migration of existing build infrastructure, including the ability to import existing Ant build scripts and tools to translate existing Ant/Maven logic into its own rule set.
■ Emphasis on scalable and high-performance builds. This will matter if you have long-running builds (for example, two hours or longer), which is the case for some big enterprise projects.

This book will introduce you to a tool that does provide all of these great features: Gradle. Together, we’ll cover a lot of ground on how to use it and exploit all the advantages it provides.

Thursday, June 29, 2017

[Toolkit] Keras - Introduction to the MNIST Handwritten Digit Recognition Dataset

Source From Here (Ch6-Ch7) 
Downloading the MNIST data 
We will build the following Keras program to download and read the MNIST data.

STEP1. Import Keras and related modules 
First, import Keras and the related modules: 
  import numpy as np
  import pandas as pd
  from keras.utils import np_utils  # used later to convert the labels to one-hot encoding

  np.random.seed(10)
STEP2. Download the MNIST data 
  from keras.datasets import mnist
The MNIST data is downloaded to ~/.keras/datasets/mnist.npz (npz is a simple zip archive that contains numpy files.) 
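If you are curious what the cached file holds, you can open it directly with numpy. This is just an optional sketch; the exact array names inside the archive (x_train, y_train, x_test, y_test) are an assumption based on how mnist.load_data reads the file:

  import os
  import numpy as np

  path = os.path.expanduser('~/.keras/datasets/mnist.npz')
  f = np.load(path)
  print(f.files)                                  # list the arrays stored in the archive
  print(f['x_train'].shape, f['y_train'].shape)   # e.g. (60000, 28, 28) (60000,)
  f.close()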

STEP3. Read and inspect the MNIST data 
  (X_train_image, y_train_label), (X_test_image, y_test_label) = mnist.load_data()
  print("\t[Info] train data={:7,}".format(len(X_train_image)))
  print("\t[Info] test  data={:7,}".format(len(X_test_image)))
The output so far is: 
Using TensorFlow backend.
[Info] train data= 60,000
[Info] test data= 10,000

From the above we can see that there are 60,000 training records and 10,000 testing records. 

Examining the training data 
Next, let's look at the shape and format of the loaded data. 

STEP1. The training data consists of images and labels 
  print("\t[Info] Shape of train data=%s" % (str(X_train_image.shape)))
  print("\t[Info] Shape of train label=%s" % (str(y_train_label.shape)))
Output: 
[Info] Shape of train data=(60000, 28, 28)
[Info] Shape of train label=(60000,)

The training data consists of images and labels; there are 60,000 records in total, and each record is a 28x28-pixel image of a handwritten digit. 

STEP2. Define a plot_image function to display a digit image 
  import matplotlib.pyplot as plt
  def plot_image(image):
      fig = plt.gcf()
      fig.set_size_inches(2, 2)
      plt.imshow(image, cmap='binary')  # cmap='binary' displays the image in black-and-white grayscale
      plt.show()
STEP3. Call plot_image to view the 0th digit image and its label 
The code below calls plot_image with X_train_image[0], the 0th record of the training set; the result shows that it is an image of the digit 5: 
>>> plot_image(X_train_image[0])


>>> y_train_label[0]
5 

Viewing multiple training images and labels 
Next we will build a plot_images_labels_predict function that can display the images and labels of multiple records. 

STEP1. Build the plot_images_labels_predict() function 
Later on we will want a convenient way to view the digit images together with their true labels and the predicted results, so we define the following function: 
  def plot_images_labels_predict(images, labels, prediction, idx, num=10):
      fig = plt.gcf()
      fig.set_size_inches(12, 14)
      if num > 25: num = 25
      for i in range(0, num):
          ax = plt.subplot(5, 5, 1 + i)  # up to a 5x5 grid of subplots
          ax.imshow(images[idx], cmap='binary')
          if len(prediction) > 0:
              title = "l={},p={}".format(str(labels[idx]), str(prediction[idx]))
          else:
              title = "l={}".format(str(labels[idx]))
          ax.set_title(title, fontsize=10)
          ax.set_xticks([]); ax.set_yticks([])
          idx += 1
      plt.show()
STEP2. View the first 10 records of the training data 
>>> plot_images_labels_predict(X_train_image, y_train_label, [], 0, 10)


Data preprocessing for the multilayer perceptron model 
Next we will build a multilayer perceptron (MLP) model. Before the data can be fed into the structure Keras expects, we must preprocess the images and labels. 

STEP1. Preprocess the features (the image pixel values) 
First, reshape the images into a two-dimensional ndarray (one 784-element row per image) and normalize them (feature scaling): 
  x_Train = X_train_image.reshape(60000, 28*28).astype('float32')
  x_Test = X_test_image.reshape(10000, 28*28).astype('float32')
  print("\t[Info] xTrain: %s" % (str(x_Train.shape)))
  print("\t[Info] xTest: %s" % (str(x_Test.shape)))

  # Normalization
  x_Train_norm = x_Train/255
  x_Test_norm = x_Test/255
STEP2. Preprocess the labels (the true digit values) 
The label field originally holds the digits 0-9. To match the data format Keras expects, we one-hot encode each label into a combination of ten 0/1 values; for example, the digit 7 becomes 0000000100, which corresponds exactly to the 10 neurons of the output layer. A quick test of the process: 
>>> from ch6_1 import *  # load the previous code
>>> y_TrainOneHot = np_utils.to_categorical(y_train_label)  # one-hot encode the training labels
>>> y_TestOneHot = np_utils.to_categorical(y_test_label)  # one-hot encode the testing labels
>>> y_train_label[0]  # view the value of the first training label
5
>>> y_TrainOneHot[:1]  # view the first label after one-hot encoding: the sixth position is 1, all others are 0
array([[ 0., 0., 0., 0., 0., 1., 0., 0., 0., 0.]])

Building the model 
We will build the following multilayer perceptron model: the input layer (x) has 28x28=784 neurons, the hidden layer (h) has 256 neurons, and the output layer (y) has 10 neurons: 

The corresponding code: 
  from keras.models import Sequential
  from keras.layers import Dense

  model = Sequential()  # Build Linear Model

  model.add(Dense(units=256, input_dim=784, kernel_initializer='normal', activation='relu'))  # Add Input/hidden layer
  model.add(Dense(units=10, kernel_initializer='normal', activation='softmax'))  # Add Hidden/output layer
  print("\t[Info] Model summary:")
  model.summary()
  print("")
Notes on the summary: 
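As a quick sanity check, the parameter counts that model.summary() reports follow directly from the layer sizes defined above:
* Hidden layer: 784 x 256 weights + 256 biases = 200,960 parameters.
* Output layer: 256 x 10 weights + 10 biases = 2,570 parameters.
* Total trainable parameters: 200,960 + 2,570 = 203,530.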

Training the model 
Once the deep learning model has been built, we can train it using backpropagation. 

STEP1. Define how the model is trained 
Before training, we must use the compile method to configure the training process, as follows: 
  model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
The parameters are explained below: 
* loss: the loss function; for deep learning classification, cross entropy usually gives better training results.
* optimizer: the optimization method used during training; adam typically makes training converge faster and improves accuracy.
* metrics: the evaluation metric for the model, here accuracy.

STEP2. Start training 
The code to run the training is: 
  train_history = model.fit(x=x_Train_norm, y=y_TrainOneHot, validation_split=0.2, epochs=10, batch_size=200, verbose=2)
The training history is stored in the train_history variable. The parameters are explained below: 
* x=x_Train_norm: the image feature values (a 60,000 x 784 array).
* y=y_TrainOneHot: the one-hot encoded labels (a 60,000 x 10 array).
* validation_split=0.2: the ratio between training data and validation data; 0.8 * 60,000 = 48,000 records are used for training and 0.2 * 60,000 = 12,000 for validation.
* epochs=10: run 10 training epochs.
* batch_size=200: each batch contains 200 records.
* verbose=2: display the training progress. With 10 epochs and batches of 200 records, each epoch runs 240 batches (48,000 / 200 = 240). The accuracy of each epoch is computed and recorded in train_history.

Example output from one training run: 

STEP3. Build show_train_history to display the training history 
The training step above records the accuracy and loss of each epoch in the train_history variable. We can use the code below to read train_history and plot the training history: 
  import matplotlib.pyplot as plt
  def show_train_history(train_history, train, validation):
      plt.plot(train_history.history[train])
      plt.plot(train_history.history[validation])
      plt.title('Train History')
      plt.ylabel(train)
      plt.xlabel('Epoch')
      plt.legend(['train', 'validation'], loc='upper left')
      plt.show()
The result of running: 
  show_train_history(train_history, 'acc', 'val_acc')

如果 "acc 訓練的準確率" 一直提升, 但是 "val_acc 的準確率" 卻一直沒有增加, 就有可能是 Overfitting 的現象 (更多說明請參考 Bias, Variance, and Overfitting). 在完成所有 (epoch) 訓練週期後, 在後面還會使用測試資料來評估模型準確率, 這是另外一組獨立的資料, 所以計算準確率會更客觀. 接著我們來看 loss 誤差的執行結果: 
  1. show_train_history(train_history, 'loss''val_loss')  
Across the 10 training epochs we can observe that: 
* The loss keeps decreasing for both training and validation.
* In the later epochs, the training loss (loss) is smaller than the validation loss (val_loss).

Evaluating model accuracy on the test data and making predictions 
We have finished training the model; now we use the test data to evaluate its accuracy. 

STEP1. Evaluate model accuracy 
Use the code below to evaluate the model's accuracy: 
  scores = model.evaluate(x_Test_norm, y_TestOneHot)
  print()
  print("\t[Info] Accuracy of testing data = {:2.1f}%".format(scores[1]*100.0))
Output: 
[Info] Accuracy of testing data = 97.6%

STEP2. Make predictions 
The model we built reaches an acceptable accuracy of about 97% after training; next we use it to make predictions. 
  print("\t[Info] Making prediction to x_Test_norm")
  prediction = model.predict_classes(x_Test_norm)  # Making prediction and save result to prediction
  print()
  print("\t[Info] Show 10 prediction result (From 240):")
  print("%s\n" % (prediction[240:250]))

  if isDisplayAvl():  # helper from the author's full script that checks whether a display is available
      plot_images_labels_predict(X_test_image, y_test_label, prediction, idx=240)

  print("\t[Info] Error analysis:")
  for i in range(len(prediction)):
      if prediction[i] != y_test_label[i]:
          print("\tAt %d'th: %d is with wrong prediction as %d!" % (i, y_test_label[i], prediction[i]))
The prediction results after running are shown below: 

In the output above, you can see one case where the prediction is 2 but the actual label is 4. 

Displaying the confusion matrix 
If we want to know which digits the model predicts most accurately and which digits it confuses most easily, we can use a confusion matrix. In machine learning, and especially in statistical classification problems, a confusion matrix (also called an error matrix) is a specific table layout that lets us visualize the results of supervised learning and see how the trained model performs for each class. 

STEP1. Build the confusion matrix with pandas crosstab 
  print("\t[Info] Display Confusion Matrix:")
  import pandas as pd
  print("%s\n" % pd.crosstab(y_test_label, prediction, rownames=['label'], colnames=['predict']))
Output: 

From the matrix we can observe that: 
* The diagonal holds the correctly predicted counts; class "1" has the highest number of correct predictions (1,125), while class "5" has the lowest (852).
* The off-diagonal numbers are the cases where one class was mispredicted as another; for example, class "5" was predicted as "3" 12 times.

STEP2. Build a dataframe of true versus predicted labels 
We want to find the records whose label is "5" but which were predicted as "3", so we build the following dataframe: 
>>> df = pd.DataFrame({'label':y_test_label, 'predict':prediction})
>>> df[:2]  # show the first two records
   label  predict
0      7        7
1      2        2

STEP3. Query the records with label=5 and prediction=3 
A pandas DataFrame makes it easy to query the data: 
>>> out = df[(df.label==5) & (df.predict==3)]  # query the records with label=5 and predict=3
>>> out.__class__  # the output is another DataFrame

>>> out
label predict
340 5 3
1003 5 3
1393 5 3
2035 5 3
2526 5 3
2597 5 3
2810 5 3
3117 5 3
4271 5 3
4355 5 3
4360 5 3
5937 5 3
5972 5 3

STEP4. View record 340 
>>> plot_images_labels_predict(X_test_image, y_test_label, prediction, idx=340, num=1)



Increasing the hidden layer to 1000 neurons 
To improve accuracy, we increase the number of neurons in the hidden layer from 256 to 1000: 

STEP1. Modify the model 
  from keras.models import Sequential
  from keras.layers import Dense

  model = Sequential()  # Build Linear Model

  model.add(Dense(units=1000, input_dim=784, kernel_initializer='normal', activation='relu'))  # Modify hidden layer from 256 -> 1000
  model.add(Dense(units=10, kernel_initializer='normal', activation='softmax'))
  print("\t[Info] Model summary:")
  model.summary()
  print("")
STEP2. Examine the results 
First, let's look at the model summary: 

The result of the last epoch: 
Epoch 10/10
4s - loss: 0.0064 - acc: 0.9993 - val_loss: 0.0701 - val_acc: 0.9807

From the "accuracy" vs "validation accuracy" chart below, we can see that the gap between the two has widened (training accuracy > validation accuracy), which means the overfitting problem has become more severe: 

Finally, the accuracy on the testing data rises slightly, from 97.6% to 97.9%: 
[Info] Accuracy of testing data = 97.9%

Adding Dropout to the multilayer perceptron to avoid overfitting 
To address the overfitting problem, we next add Dropout to the model. A brief explanation of Dropout: 
Dropout means that during training, the weights of some randomly chosen hidden-layer nodes are temporarily not used; those nodes can be regarded as temporarily removed from the network structure, but their weights are kept (they are just not updated for that step), because they may become active again when the next sample comes in. For more details, see "How does the dropout method work in deep learning?".
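To illustrate the idea, here is a minimal numpy sketch of "inverted dropout" as it is commonly implemented (an illustration only, not Keras's internal code):

  import numpy as np

  def dropout(activations, rate=0.5, training=True):
      # During training, randomly zero a fraction `rate` of the activations and
      # scale the survivors by 1/(1-rate) so the expected output is unchanged;
      # at inference time the values simply pass through untouched.
      if not training or rate == 0.0:
          return activations
      keep_prob = 1.0 - rate
      mask = np.random.rand(*activations.shape) < keep_prob
      return activations * mask / keep_prob

  h = np.random.rand(4, 8)       # a fake batch of hidden-layer activations
  print(dropout(h, rate=0.5))    # roughly half of the entries become 0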

STEP1. Modify the hidden layer to add Dropout 
  ...
  from keras.models import Sequential
  from keras.layers import Dense
  from keras.layers import Dropout  # ***** Import Dropout module *****

  model = Sequential()

  model.add(Dense(units=1000, input_dim=784, kernel_initializer='normal', activation='relu'))
  model.add(Dropout(0.5))  # ***** Add Dropout functionality *****
  model.add(Dense(units=10, kernel_initializer='normal', activation='softmax'))
  print("\t[Info] Model summary:")
  model.summary()
  print("")
  ...
STEP2. Train the model and examine the results 
The model summary: 

In the last epoch, acc and val_acc are now much closer, which indicates the overfitting problem has been addressed: 
Epoch 10/10
4s - loss: 0.0380 - acc: 0.9882 - val_loss: 0.0666 - val_acc: 0.9807

This is also reflected in the accuracy chart: 

The accuracy on the testing data also rises to 98%: 
[Info] Accuracy of testing data = 98.0%

Building a multilayer perceptron model with two hidden layers 
To improve the accuracy further, we increase the number of hidden layers in the multilayer perceptron. 

STEP1. Change the model to use two hidden layers with Dropout 
  ...
  from keras.models import Sequential
  from keras.layers import Dense
  from keras.layers import Dropout  # Import Dropout module

  model = Sequential()  # Build Linear Model

  model.add(Dense(units=1000, input_dim=784, kernel_initializer='normal', activation='relu'))  # Add Input/first hidden layer
  model.add(Dropout(0.5))  # Add Dropout functionality
  model.add(Dense(units=1000, kernel_initializer='normal', activation='relu'))  # Add second hidden layer
  model.add(Dropout(0.5))  # Add Dropout functionality
  model.add(Dense(units=10, kernel_initializer='normal', activation='softmax'))  # Add Hidden/output layer
  print("\t[Info] Model summary:")
  model.summary()
  print("")
  ...
STEP2. Train the model and examine the results 
From the accuracy chart we can see that the training accuracy and validation accuracy are now quite close, which shows the impact of overfitting has been reduced further: 

The result of the last epoch and the accuracy on the testing data are as follows: 
Epoch 10/10
9s - loss: 0.0523 - acc: 0.9828 - val_loss: 0.0786 - val_acc: 0.9791
...
[Info] Accuracy of testing data = 97.9%

The full code is linked below: 
ch6_1.py: one hidden layer
ch6_2.py: two hidden layers


Supplement 
Matplotlib - Basic Introduction For ML/DataScience 
Deep learning: Part 41 (A simple explanation of Dropout)
