%pip install fastapi
%pip install uvicorn
%pip install librosa
%pip install xgboost
%pip install pandas
%pip install pycaret
import pandas as pd
import numpy as np
import datetime
import string
import random
from pycaret.classification import *
# data = pd.read_csv('data/data_feature_eng.csv')
data = pd.read_csv('data/feature_ori.csv')
# print(data)
m_setup = setup(data=data, target='label', normalize=False,
                feature_interaction=False,
                feature_ratio=False,
                trigonometry_features=False,
                use_gpu=False)
| | Description | Value |
|---|---|---|
0 | session_id | 233 |
1 | Target | label |
2 | Target Type | Binary |
3 | Label Encoded | 0.0: 0, 1.0: 1 |
4 | Original Data | (64800, 11) |
5 | Missing Values | 0 |
6 | Numeric Features | 8 |
7 | Categorical Features | 2 |
8 | Ordinal Features | 0 |
9 | High Cardinality Features | 0 |
10 | High Cardinality Method | None |
11 | Transformed Train Set | (45359, 19) |
12 | Transformed Test Set | (19441, 19) |
13 | Shuffle Train-Test | True |
14 | Stratify Train-Test | False |
15 | Fold Generator | StratifiedKFold |
16 | Fold Number | 10 |
17 | CPU Jobs | -1 |
18 | Use GPU | 0 |
19 | Log Experiment | 0 |
20 | Experiment Name | clf-default-name |
21 | USI | 0b4d |
22 | Imputation Type | simple |
23 | Iterative Imputation Iteration | None |
24 | Numeric Imputer | mean |
25 | Iterative Imputation Numeric Model | None |
26 | Categorical Imputer | constant |
27 | Iterative Imputation Categorical Model | None |
28 | Unknown Categoricals Handling | least_frequent |
29 | Normalize | 0 |
30 | Normalize Method | None |
31 | Transformation | 0 |
32 | Transformation Method | None |
33 | PCA | 0 |
34 | PCA Method | None |
35 | PCA Components | None |
36 | Ignore Low Variance | 0 |
37 | Combine Rare Levels | 0 |
38 | Rare Level Threshold | None |
39 | Numeric Binning | 0 |
40 | Remove Outliers | 0 |
41 | Outliers Threshold | None |
42 | Remove Multicollinearity | 0 |
43 | Multicollinearity Threshold | None |
44 | Remove Perfect Collinearity | 1 |
45 | Clustering | 0 |
46 | Clustering Iteration | None |
47 | Polynomial Features | 0 |
48 | Polynomial Degree | None |
49 | Trignometry Features | 0 |
50 | Polynomial Threshold | None |
51 | Group Features | 0 |
52 | Feature Selection | 0 |
53 | Feature Selection Method | classic |
54 | Features Selection Threshold | None |
55 | Feature Interaction | 0 |
56 | Feature Ratio | 0 |
57 | Interaction Threshold | None |
58 | Fix Imbalance | 0 |
59 | Fix Imbalance Method | SMOTE |
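The model loaded below, 'tuned_xgboost_orifeature_0323', was saved in an earlier run rather than trained in this notebook. A workflow along the following lines would produce such an artifact (a hypothetical sketch of the PyCaret 2.x steps, not the actual training code):

# Hypothetical sketch of how the saved pipeline could have been produced (not run here)
# xgb = create_model('xgboost')                            # baseline XGBoost classifier, evaluated with 10-fold CV
# tuned_xgb = tune_model(xgb, optimize='Accuracy')         # random search over PyCaret's predefined hyperparameter grid
# save_model(tuned_xgb, 'tuned_xgboost_orifeature_0323')   # saves the preprocessing pipeline + model to disk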
# plot_model(tuned_dt, plot='auc')
mdl = load_model('tuned_xgboost_orifeature_0323')
Transformation Pipeline and Model Successfully Loaded
# plot_model(mdl, plot='auc')
# evaluate_model(mdl)
new_prediction = predict_model(mdl, data=data)
from pycaret.utils import check_metric
# Compare the ground-truth 'label' column with the predicted 'Label' column appended by predict_model
check_metric(new_prediction['label'], new_prediction['Label'], metric='Accuracy')
1.0
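In PyCaret 2.x, predict_model appends the predicted class and its score as 'Label' and 'Score' columns. A quick sanity check on those columns (column names assumed from the 2.x output format) might look like this:

# print(new_prediction[['label', 'Label', 'Score']].head())                      # ground truth vs. predicted class and probability
# check_metric(new_prediction['label'], new_prediction['Label'], metric='F1')    # other supported metrics work the same way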
create_api(mdl, 'my_api')
# !python my_api.py
API successfully created. This function only creates a POST API, it doesn't run it automatically. To run your API, please run this command --> !python my_api.py
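Once the generated script is running (!python my_api.py), it serves a FastAPI app. Assuming uvicorn's default port 8000 and a POST /predict endpoint whose body fields mirror the feature columns (check the generated my_api.py for the exact route, port, and schema), a client call could look like this:

# import requests, json
# sample = json.loads(data.drop(columns=['label']).iloc[[0]].to_json(orient='records'))[0]  # one feature row as plain JSON types
# resp = requests.post('http://127.0.0.1:8000/predict', json=sample)                        # route and port assumed; see my_api.py
# print(resp.status_code, resp.json())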
create_docker('my_api')
# !docker image build -f "Dockerfile" -t my_api:latest .
Writing requirements.txt
Writing Dockerfile
Dockerfile and requirements.txt successfully created. To build image you have to run --> !docker image build -f "Dockerfile" -t IMAGE_NAME:IMAGE_TAG .
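Building and serving the containerized API then follows the printed command, plus a docker run that publishes the API port (the 8000 below is an assumption; match it to the port used in the generated Dockerfile/my_api.py):

# !docker image build -f "Dockerfile" -t my_api:latest .
# !docker run -d -p 8000:8000 --name my_api_container my_api:latest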