Compare commits

...

3 Commits
Lab_5 ... main

Author SHA1 Message Date
ecb1bf2d58 Merge pull request 'Lab 5' (#5) from Lab_5 into main
Reviewed-on: #5
2024-12-07 08:44:35 +04:00
6374426dba Merge pull request 'Fourth ready' (#4) from Lab_4 into main
Reviewed-on: #4
2024-12-07 08:44:24 +04:00
Marselchi
0872aa8c53 Fourth ready 2024-12-06 21:20:39 +04:00

548
Lab_4/Lab4.ipynb Normal file

@ -0,0 +1,548 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Определим бизнес цели:\n",
"\n",
"1. Бизнес-цель: Оптимизация страховых тарифов для клиентов (регрессия)\n",
"Целевая: Charges\n",
"Признаки: Age BMI Region Children Sex Smoker\n",
"Достижимый уровень качества: MSE (среднеквадратичная ошибка): минимизация, ориентир в зависимости от разброса целевой переменной. R^2 > 0.6.\n",
"Ориентир: Прогноз среднего значения целевой переменной.\n",
"\n",
"2. Бизнес-цель: Определение клиентов с высоким риском заболеваний для профилактики (классификация)\n",
"Целевая: Smoker \n",
"Признаки: Age BMI Region Children Sex\n",
"Достижимый уровень качества: Accuracy (точность классификации) 70-80%%\n",
"Ориентир: DummyClassifier, предсказывающий самый частый класс, даст accuracy ~50-60%%."
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"X_train_class: (1940, 7), y_train_class: (1940,)\n",
"X_train_reg: (1940, 8), y_train_reg: (1940,)\n"
]
}
],
"source": [
"import pandas as pd\n",
"from sklearn.model_selection import train_test_split\n",
"from sklearn.preprocessing import StandardScaler, OneHotEncoder\n",
"from sklearn.compose import ColumnTransformer\n",
"\n",
"# Загрузка данных\n",
"data = pd.read_csv(\"..//datasets//Lab_1//Medical_insurance1.csv\",sep=',')\n",
"\n",
"# Преобразование данных для классификации\n",
"X_class = data[['Age', 'BMI', 'Children', 'Region', 'Sex']]\n",
"y_class = data['Smoker'] # Целевой столбец для классификации\n",
"\n",
"# Преобразование данных для регрессии\n",
"X_reg = data[['Age', 'BMI', 'Children', 'Region', 'Sex', 'Smoker']]\n",
"y_reg = data['Charges'] # Целевой столбец для регрессии\n",
"\n",
"# Кодирование категориальных данных\n",
"categorical_features = ['Region', 'Sex']\n",
"numerical_features = ['Age', 'BMI', 'Children']\n",
"\n",
"preprocessor_class = ColumnTransformer(\n",
" transformers=[\n",
" ('num', StandardScaler(), numerical_features),\n",
" ('cat', OneHotEncoder(drop='first'), categorical_features)\n",
" ])\n",
"\n",
"preprocessor_reg = ColumnTransformer(\n",
" transformers=[\n",
" ('num', StandardScaler(), numerical_features),\n",
" ('cat', OneHotEncoder(drop='first'), categorical_features),\n",
" ('smoker', OneHotEncoder(drop='first'), ['Smoker'])\n",
" ])\n",
"\n",
"X_class_scaled = preprocessor_class.fit_transform(X_class)\n",
"X_reg_scaled = preprocessor_reg.fit_transform(X_reg)\n",
"\n",
"X_train_class, X_test_class, y_train_class, y_test_class = train_test_split(\n",
" X_class_scaled, y_class, test_size=0.3, random_state=42\n",
")\n",
"X_train_reg, X_test_reg, y_train_reg, y_test_reg = train_test_split(\n",
" X_reg_scaled, y_reg, test_size=0.3, random_state=42\n",
")\n",
"\n",
"# Проверка форматов данных\n",
"print(f\"X_train_class: {X_train_class.shape}, y_train_class: {y_train_class.shape}\")\n",
"print(f\"X_train_reg: {X_train_reg.shape}, y_train_reg: {y_train_reg.shape}\")\n"
]
},
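{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a reference point, below is a minimal sketch of the baselines mentioned in the business goals (a most-frequent-class classifier and a mean predictor). It assumes the splits created in the previous cell and is illustrative only."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Illustrative baseline sketch (not part of the original lab output):\n",
"# DummyClassifier always predicts the most frequent class,\n",
"# DummyRegressor always predicts the mean of the training target.\n",
"from sklearn.dummy import DummyClassifier, DummyRegressor\n",
"from sklearn.metrics import accuracy_score, r2_score\n",
"\n",
"dummy_clf = DummyClassifier(strategy='most_frequent').fit(X_train_class, y_train_class)\n",
"print(f\"Baseline accuracy: {accuracy_score(y_test_class, dummy_clf.predict(X_test_class)):.4f}\")\n",
"\n",
"dummy_reg = DummyRegressor(strategy='mean').fit(X_train_reg, y_train_reg)\n",
"print(f\"Baseline R^2: {r2_score(y_test_reg, dummy_reg.predict(X_test_reg)):.4f}\")\n"
]
},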
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Теперь создадим по три модели и построим конвейер. \n",
"Для классификации:\n",
"1. Logistic Regression — базовая линейная модель для классификации.\n",
"2. Random Forest Classifier — ансамблевый метод на основе деревьев решений.\n",
"3. Gradient Boosting Classifier (XGBoost) — продвинутый бустинг для задач классификации.\n",
"Для регрессии:\n",
"1. Linear Regression — базовая линейная модель.\n",
"2. Random Forest Regressor — ансамблевый метод для регрессии.\n",
"3. Gradient Boosting Regressor (XGBoost) — продвинутый бустинг для регрессии."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Оценка моделей для классификации:\n",
"\n",
"Logistic Regression:\n",
" Accuracy: 0.8029\n",
" ROC-AUC: 0.5669\n",
"\n",
"Random Forest:\n",
" Accuracy: 0.9387\n",
" ROC-AUC: 0.9582\n",
"\n",
"SVC:\n",
" Accuracy: 0.8029\n",
" ROC-AUC: 0.6788\n",
"\n",
"Оценка моделей для регрессии:\n",
"\n",
"Linear Regression:\n",
" Mean Squared Error: 40004195.9424\n",
" R^2 Score: 0.7443\n",
"\n",
"Random Forest:\n",
" Mean Squared Error: 10687675.8724\n",
" R^2 Score: 0.9317\n",
"\n",
"SVR:\n",
" Mean Squared Error: 169727106.2359\n",
" R^2 Score: -0.0847\n",
"\n"
]
}
],
"source": [
"from sklearn.svm import SVC, SVR\n",
"from sklearn.linear_model import LogisticRegression, LinearRegression\n",
"from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor\n",
"from sklearn.pipeline import Pipeline\n",
"from sklearn.preprocessing import StandardScaler\n",
"from sklearn.metrics import accuracy_score, roc_auc_score, mean_squared_error, r2_score\n",
"\n",
"# Конвейеры для классификации\n",
"pipelines_class = {\n",
" 'Logistic Regression': Pipeline([\n",
" ('scaler', StandardScaler()), # Масштабирование\n",
" ('classifier', LogisticRegression(random_state=42, max_iter=500))\n",
" ]),\n",
" 'Random Forest': Pipeline([\n",
" ('scaler', StandardScaler()),\n",
" ('classifier', RandomForestClassifier(random_state=42))\n",
" ]),\n",
" 'SVC': Pipeline([\n",
" ('scaler', StandardScaler()),\n",
" ('classifier', SVC(probability=True, random_state=42))\n",
" ])\n",
"}\n",
"\n",
"# Конвейеры для регрессии\n",
"pipelines_reg = {\n",
" 'Linear Regression': Pipeline([\n",
" ('scaler', StandardScaler()),\n",
" ('regressor', LinearRegression())\n",
" ]),\n",
" 'Random Forest': Pipeline([\n",
" ('scaler', StandardScaler()),\n",
" ('regressor', RandomForestRegressor(random_state=42))\n",
" ]),\n",
" 'SVR': Pipeline([\n",
" ('scaler', StandardScaler()),\n",
" ('regressor', SVR())\n",
" ])\n",
"}\n",
"\n",
"# Функция для оценки классификации\n",
"def evaluate_classification(pipelines, X_train, X_test, y_train, y_test):\n",
" print(\"Оценка моделей для классификации:\\n\")\n",
" for name, pipeline in pipelines.items():\n",
" pipeline.fit(X_train, y_train)\n",
" y_pred = pipeline.predict(X_test)\n",
" y_proba = pipeline.predict_proba(X_test)[:, 1] if hasattr(pipeline['classifier'], 'predict_proba') else None\n",
" acc = accuracy_score(y_test, y_pred)\n",
" roc_auc = roc_auc_score(y_test, y_proba) if y_proba is not None else None\n",
" print(f\"{name}:\")\n",
" print(f\" Accuracy: {acc:.4f}\")\n",
" if roc_auc is not None:\n",
" print(f\" ROC-AUC: {roc_auc:.4f}\")\n",
" print()\n",
"\n",
"# Функция для оценки регрессии\n",
"def evaluate_regression(pipelines, X_train, X_test, y_train, y_test):\n",
" print(\"Оценка моделей для регрессии:\\n\")\n",
" for name, pipeline in pipelines.items():\n",
" pipeline.fit(X_train, y_train)\n",
" y_pred = pipeline.predict(X_test)\n",
" mse = mean_squared_error(y_test, y_pred)\n",
" r2 = r2_score(y_test, y_pred)\n",
" print(f\"{name}:\")\n",
" print(f\" Mean Squared Error: {mse:.4f}\")\n",
" print(f\" R^2 Score: {r2:.4f}\")\n",
" print()\n",
"\n",
"# Оценка \n",
"evaluate_classification(pipelines_class, X_train_class, X_test_class, y_train_class, y_test_class)\n",
"evaluate_regression(pipelines_reg, X_train_reg, X_test_reg, y_train_reg, y_test_reg)\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Теперь займемся настройкой гиперпараметров\n",
"GridSearchCV (кросс-валидация). Параметры: cv=5: 5 фолдов для кросс-валидации. n_jobs=-1: Используем все доступные процессоры для ускорения вычислений. verbose=1"
]
},
{
"cell_type": "code",
"execution_count": 10,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Настройка гиперпараметров для классификации:\n",
" Настройка модели: Logistic Regression\n",
"Fitting 5 folds for each of 8 candidates, totalling 40 fits\n",
" Лучшие параметры: {'classifier__C': 0.01, 'classifier__solver': 'lbfgs'}\n",
" Лучшая ROC-AUC: 0.5708\n",
"\n",
" Настройка модели: Random Forest\n",
"Fitting 5 folds for each of 27 candidates, totalling 135 fits\n",
" Лучшие параметры: {'classifier__max_depth': None, 'classifier__min_samples_split': 2, 'classifier__n_estimators': 200}\n",
" Лучшая ROC-AUC: 0.8578\n",
"\n",
" Настройка модели: SVC\n",
"Fitting 5 folds for each of 12 candidates, totalling 60 fits\n",
" Лучшие параметры: {'classifier__C': 1, 'classifier__gamma': 'scale', 'classifier__kernel': 'rbf'}\n",
" Лучшая ROC-AUC: 0.6190\n",
"\n",
"Настройка гиперпараметров для регрессии:\n",
" Настройка модели: Linear Regression\n",
"Fitting 5 folds for each of 1 candidates, totalling 5 fits\n",
" Лучшие параметры: {}\n",
" Лучший R^2: 0.7505\n",
"\n",
" Настройка модели: Random Forest\n",
"Fitting 5 folds for each of 27 candidates, totalling 135 fits\n",
" Лучшие параметры: {'regressor__max_depth': None, 'regressor__min_samples_split': 2, 'regressor__n_estimators': 200}\n",
" Лучший R^2: 0.9079\n",
"\n",
" Настройка модели: SVR\n",
"Fitting 5 folds for each of 12 candidates, totalling 60 fits\n",
" Лучшие параметры: {'regressor__C': 10, 'regressor__gamma': 'scale', 'regressor__kernel': 'linear'}\n",
" Лучший R^2: 0.5283\n",
"\n"
]
}
],
"source": [
"from sklearn.model_selection import GridSearchCV\n",
"from sklearn.metrics import make_scorer\n",
"\n",
"# Гиперпараметры для классификации\n",
"param_grids_class = {\n",
" 'Logistic Regression': {\n",
" 'classifier__C': [0.01, 0.1, 1, 10],\n",
" 'classifier__solver': ['liblinear', 'lbfgs']\n",
" },\n",
" 'Random Forest': {\n",
" 'classifier__n_estimators': [50, 100, 200],\n",
" 'classifier__max_depth': [None, 10, 20],\n",
" 'classifier__min_samples_split': [2, 5, 10]\n",
" },\n",
" 'SVC': {\n",
" 'classifier__C': [0.1, 1, 10],\n",
" 'classifier__kernel': ['linear', 'rbf'],\n",
" 'classifier__gamma': ['scale', 'auto']\n",
" }\n",
"}\n",
"\n",
"# Гиперпараметры для регрессии\n",
"param_grids_reg = {\n",
" 'Linear Regression': {}, # У линейной регрессии обычно мало гиперпараметров\n",
" 'Random Forest': {\n",
" 'regressor__n_estimators': [50, 100, 200],\n",
" 'regressor__max_depth': [None, 10, 20],\n",
" 'regressor__min_samples_split': [2, 5, 10]\n",
" },\n",
" 'SVR': {\n",
" 'regressor__C': [0.1, 1, 10],\n",
" 'regressor__kernel': ['linear', 'rbf'],\n",
" 'regressor__gamma': ['scale', 'auto']\n",
" }\n",
"}\n",
"\n",
"# Функция для настройки гиперпараметров классификации\n",
"def tune_hyperparameters_class(pipelines, param_grids, X_train, y_train):\n",
" best_models = {}\n",
" print(\"Настройка гиперпараметров для классификации:\")\n",
" for name, pipeline in pipelines.items():\n",
" print(f\" Настройка модели: {name}\")\n",
" grid_search = GridSearchCV(\n",
" pipeline, param_grids[name], scoring='roc_auc', cv=5, n_jobs=-1, verbose=1\n",
" )\n",
" grid_search.fit(X_train, y_train)\n",
" best_models[name] = grid_search.best_estimator_\n",
" print(f\" Лучшие параметры: {grid_search.best_params_}\")\n",
" print(f\" Лучшая ROC-AUC: {grid_search.best_score_:.4f}\\n\")\n",
" return best_models\n",
"\n",
"# Функция для настройки гиперпараметров регрессии\n",
"def tune_hyperparameters_reg(pipelines, param_grids, X_train, y_train):\n",
" best_models = {}\n",
" print(\"Настройка гиперпараметров для регрессии:\")\n",
" for name, pipeline in pipelines.items():\n",
" print(f\" Настройка модели: {name}\")\n",
" grid_search = GridSearchCV(\n",
" pipeline, param_grids[name], scoring='r2', cv=5, n_jobs=-1, verbose=1\n",
" )\n",
" grid_search.fit(X_train, y_train)\n",
" best_models[name] = grid_search.best_estimator_\n",
" print(f\" Лучшие параметры: {grid_search.best_params_}\")\n",
" print(f\" Лучший R^2: {grid_search.best_score_:.4f}\\n\")\n",
" return best_models\n",
"\n",
"# Настройка гиперпараметров\n",
"best_models_class = tune_hyperparameters_class(pipelines_class, param_grids_class, X_train_class, y_train_class)\n",
"best_models_reg = tune_hyperparameters_reg(pipelines_reg, param_grids_reg, X_train_reg, y_train_reg)\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Теперь оценим их"
]
},
{
"cell_type": "code",
"execution_count": 16,
"metadata": {},
"outputs": [],
"source": [
"from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score, roc_auc_score\n",
"from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score\n",
"import numpy as np\n",
"\n",
"# Преобразуем текстовые метки в числовые\n",
"y_class_numeric = y_class.map({'no': 0, 'yes': 1})\n",
"\n",
"# Оценка качества классификации\n",
"for name, model in best_models_class.items():\n",
" y_pred_class = model.predict(X_class_scaled)\n",
"\n",
" # Преобразуем текстовые предсказания в числовые метки\n",
" # if isinstance(y_pred_class[0], str):\n",
" # y_pred_class = pd.Series(y_pred_class).map({'no': 0, 'yes': 1}).values\n",
"\n",
" # y_pred_proba = model.predict_proba(X_class_scaled)[:, 1] if hasattr(model, \"predict_proba\") else None\n",
"\n",
" # print(f\"Оценка качества для модели {name}:\")\n",
" # print(\"Accuracy:\", accuracy_score(y_class_numeric, y_pred_class))\n",
" # print(\"Precision:\", precision_score(y_class_numeric, y_pred_class))\n",
" # print(\"Recall:\", recall_score(y_class_numeric, y_pred_class))\n",
" # print(\"F1-Score:\", f1_score(y_class_numeric, y_pred_class))\n",
"\n",
" # if y_pred_proba is not None:\n",
" # print(\"ROC AUC:\", roc_auc_score(y_class_numeric, y_pred_proba))\n",
" # else:\n",
" # print(\"ROC AUC: Невозможно вычислить, модель не поддерживает predict_proba\")\n",
" # print(\"\\n\")\n",
"\n",
"# Оценка качества регрессии\n",
"for name, model in best_models_reg.items():\n",
" y_pred_reg = model.predict(X_reg_scaled)\n",
" # print(f\"Оценка качества для модели {name}:\")\n",
" # print(\"MAE:\", mean_absolute_error(y_reg, y_pred_reg))\n",
" # print(\"MSE:\", mean_squared_error(y_reg, y_pred_reg))\n",
" # print(\"RMSE:\", np.sqrt(mean_squared_error(y_reg, y_pred_reg)))\n",
" # print(\"R²:\", r2_score(y_reg, y_pred_reg))\n",
" # print(\"\\n\")\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Результат и оправдание метрик (из-за большого размера):\n",
"\n",
"\n",
"Классификация:\n",
"Оценка качества для модели Logistic Regression:\n",
"Accuracy: 0.7965367965367965\n",
"Precision: 0.0\n",
"F1-Score: 0.0\n",
"ROC AUC: 0.5788123779422346\n",
"\n",
"Оценка качества для модели Random Forest:\n",
"Accuracy: 0.9812409812409812\n",
"Precision: 0.9671532846715328\n",
"F1-Score: 0.9532374100719424\n",
"ROC AUC: 0.9922653921266317\n",
"\n",
"Оценка качества для модели SVC:\n",
"Accuracy: 0.7965367965367965\n",
"Precision: 0.0\n",
"F1-Score: 0.0\n",
"ROC AUC: 0.752295007194984\n",
"\n",
"\n",
"Регрессия:\n",
"Оценка качества для модели Linear Regression:\n",
"MAE: 4136.775081674497\n",
"MSE: 36800983.69176898\n",
"R²: 0.7506914796021513\n",
"\n",
"\n",
"Оценка качества для модели Random Forest:\n",
"MAE: 827.1310058445929\n",
"MSE: 4157251.954692241\n",
"R²: 0.9718366676710003\n",
"\n",
"\n",
"Оценка качества для модели SVR:\n",
"MAE: 3907.745018325371\n",
"MSE: 67849095.49493024\n",
"R²: 0.5403558298916683\n",
"\n",
"\n",
"\n",
"Для задачи классификации\n",
"Метрики:\n",
"Precision (Точность):\n",
"Это доля истинных положительных случаев среди всех предсказанных положительных случаев. Важна для задач, где важно минимизировать количество ложных срабатываний\n",
"Accuracy (доля правильных ответов):\n",
"Показывает, какая доля объектов была классифицирована правильно.\n",
"Уместна при сбалансированных классах.\n",
"ROC-AUC (площадь под кривой ошибок):\n",
"Учитывает баланс между чувствительностью и специфичностью.\n",
"Подходит для несбалансированных данных.\n",
"F1-Score:\n",
"Баланс между точностью и полнотой.\n",
"Полезна, если ошибки классификации одного из классов имеют больший вес.\n",
"\n",
"\n",
"Для задачи регрессии\n",
"Метрики:\n",
"\n",
"R² (коэффициент детерминации):\n",
"Оценивает долю объясненной дисперсии целевой переменной моделью.\n",
"Mean Absolute Error (MAE):\n",
"Среднее абсолютное отклонение предсказаний от истинных значений.\n",
"Удобна для интерпретации, так как измеряется в тех же единицах, что и целевая переменная.\n",
"Mean Squared Error (MSE):\n",
"Усиливает влияние больших ошибок.\n",
"Уместна, если крупные ошибки особенно нежелательны."
]
},
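{
"cell_type": "markdown",
"metadata": {},
"source": [
"For reference, the standard definitions of the metrics discussed above (these are the textbook formulas, stated here for convenience):\n",
"\n",
"$$\\mathrm{Precision} = \\frac{TP}{TP + FP}, \\qquad F_1 = \\frac{2 \\cdot \\mathrm{Precision} \\cdot \\mathrm{Recall}}{\\mathrm{Precision} + \\mathrm{Recall}}$$\n",
"\n",
"$$\\mathrm{MAE} = \\frac{1}{n}\\sum_{i=1}^{n} |y_i - \\hat{y}_i|, \\qquad \\mathrm{MSE} = \\frac{1}{n}\\sum_{i=1}^{n} (y_i - \\hat{y}_i)^2, \\qquad R^2 = 1 - \\frac{\\sum_{i}(y_i - \\hat{y}_i)^2}{\\sum_{i}(y_i - \\bar{y})^2}$$"
]
},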
{
"cell_type": "code",
"execution_count": 19,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Оценка смещения и дисперсии для classification:\n",
"\n",
"Модель: Logistic Regression\n",
"Смещение (bias): 0.2062\n",
"Дисперсия (variance): -0.0091\n",
"\n",
"Модель: Random Forest\n",
"Смещение (bias): 0.0005\n",
"Дисперсия (variance): 0.0608\n",
"\n",
"Модель: SVC\n",
"Смещение (bias): 0.2062\n",
"Дисперсия (variance): -0.0091\n",
"Оценка смещения и дисперсии для regression:\n",
"\n",
"Модель: Linear Regression\n",
"Смещение (bias): 0.2463\n",
"Дисперсия (variance): 0.0093\n",
"\n",
"Модель: Random Forest\n",
"Смещение (bias): 0.0095\n",
"Дисперсия (variance): 0.0585\n",
"\n",
"Модель: SVR\n",
"Смещение (bias): 0.4514\n",
"Дисперсия (variance): 0.0260\n"
]
}
],
"source": [
"def evaluate_bias_variance(models, X_train, X_test, y_train, y_test, task=\"classification\"):\n",
" print(f\"Оценка смещения и дисперсии для {task}:\")\n",
" for name, model in models.items():\n",
" if task == \"classification\":\n",
" train_score = model.score(X_train, y_train)\n",
" test_score = model.score(X_test, y_test)\n",
" else: # Для регрессии\n",
" train_score = r2_score(y_train, model.predict(X_train))\n",
" test_score = r2_score(y_test, model.predict(X_test))\n",
"\n",
" bias = 1 - train_score\n",
" variance = train_score - test_score\n",
"\n",
" print(f\"\\nМодель: {name}\")\n",
" print(f\"Смещение (bias): {bias:.4f}\")\n",
" print(f\"Дисперсия (variance): {variance:.4f}\")\n",
"\n",
"# Анализ смещения и дисперсии\n",
"evaluate_bias_variance(best_models_class, X_train_class, X_test_class, y_train_class, y_test_class, task=\"classification\")\n",
"evaluate_bias_variance(best_models_reg, X_train_reg, X_test_reg, y_train_reg, y_test_reg, task=\"regression\")\n"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "aimenv",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.12.2"
}
},
"nbformat": 4,
"nbformat_minor": 2
}