
In [1]:
import pandas as pd
import matplotlib.pyplot as plt
df = pd.read_csv("C://Users//annal//aim//static//csv//Forbes_Billionaires.csv")

Let's define the business goals:

1. Predicting a billionaire's place in the ranking.

2. Assessing the factors that influence the place in the ranking.

Let's define the goals of the technical project:

Build a model that predicts the place in the ranking from the data available about a list member.

Analyze the data to identify the most important features for the prediction.

Let's check for outliers and cap them at the IQR bounds.

In [2]:
numeric_columns = ['Networth', 'Age']
for column in numeric_columns:
    if pd.api.types.is_numeric_dtype(df[column]):  # make sure the column is numeric
        q1 = df[column].quantile(0.25)  # first quartile (Q1)
        q3 = df[column].quantile(0.75)  # third quartile (Q3)
        iqr = q3 - q1  # interquartile range (IQR)

        # Outlier boundaries
        lower_bound = q1 - 1.5 * iqr  # lower bound
        upper_bound = q3 + 1.5 * iqr  # upper bound

        # Count the outliers
        outliers = df[(df[column] < lower_bound) | (df[column] > upper_bound)]
        outlier_count = outliers.shape[0]

        # Cap the outliers: values below the lower bound are replaced with the lower bound,
        # values above the upper bound with the upper bound
        df[column] = df[column].apply(lambda x: lower_bound if x < lower_bound else upper_bound if x > upper_bound else x)

        print(f"Column {column}:")
        print(f"  Has outliers: {'Yes' if outlier_count > 0 else 'No'}")
        print(f"  Number of outliers: {outlier_count}")
        print(f"  Minimum value: {df[column].min()}")
        print(f"  Maximum value: {df[column].max()}")
        print(f"  1st quartile (Q1): {q1}")
        print(f"  3rd quartile (Q3): {q3}\n")
Column Networth:
  Has outliers: Yes
  Number of outliers: 226
  Minimum value: 1.0
  Maximum value: 9.0
  1st quartile (Q1): 1.5
  3rd quartile (Q3): 4.5

Column Age:
  Has outliers: Yes
  Number of outliers: 6
  Minimum value: 26.5
  Maximum value: 100.0
  1st quartile (Q1): 55.0
  3rd quartile (Q3): 74.0
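As an aside, the same IQR capping can be written more compactly with pandas' built-in Series.clip. A minimal sketch, assuming df and numeric_columns are defined as in the cell above:

# Equivalent capping (winsorization) with Series.clip, same bounds as the loop above
for column in numeric_columns:
    q1, q3 = df[column].quantile([0.25, 0.75])
    iqr = q3 - q1
    df[column] = df[column].clip(lower=q1 - 1.5 * iqr, upper=q3 + 1.5 * iqr)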

Let's convert the nominal columns to numeric ones.

In [ ]:
from sklearn.preprocessing import OneHotEncoder, LabelEncoder

# Categorical features to one-hot encode
categorical_columns = ['Name']

# Initialize the OneHotEncoder
encoder = OneHotEncoder(sparse_output=False, drop="first")

# Apply the OneHotEncoder to the selected categorical features
encoded_values = encoder.fit_transform(df[categorical_columns])

# Get the names of the new encoded columns
encoded_columns = encoder.get_feature_names_out(categorical_columns)

# Convert the encoded values to a DataFrame
encoded_values_df = pd.DataFrame(encoded_values, columns=encoded_columns)

# Merge the encoded values with the original DataFrame, dropping the source categorical columns
df = df.drop(columns=categorical_columns)
df = pd.concat([df.reset_index(drop=True), encoded_values_df.reset_index(drop=True)], axis=1)

# Apply label encoding to the 'Country', 'Source' and 'Industry' columns
label_encoder = LabelEncoder()
df['Country'] = label_encoder.fit_transform(df['Country'])
df['Source'] = label_encoder.fit_transform(df['Source'])
df['Industry'] = label_encoder.fit_transform(df['Industry'])


print(df.head())
   Rank   Networth   Age  Country  Source  Industry  \
0      1       9.0  50.0       70     123         0   
1      2       9.0  58.0       70       5        15   
2      3       9.0  73.0       20      73         3   
3      4       9.0  66.0       70      81        15   
4      5       9.0  91.0       70      11         4   

   Name_Abdulla Al Futtaim & family   \
0                                0.0   
1                                0.0   
2                                0.0   
3                                0.0   
4                                0.0   

   Name_Abdulla bin Ahmad Al Ghurair & family   Name_Abdulsamad Rabiu   \
0                                          0.0                     0.0   
1                                          0.0                     0.0   
2                                          0.0                     0.0   
3                                          0.0                     0.0   
4                                          0.0                     0.0   

   Name_Abhay Firodia   ...  Name_Zhu Yan & family   Name_Zhu Yiming   \
0                  0.0  ...                     0.0               0.0   
1                  0.0  ...                     0.0               0.0   
2                  0.0  ...                     0.0               0.0   
3                  0.0  ...                     0.0               0.0   
4                  0.0  ...                     0.0               0.0   

   Name_Zhu Yiwen & family   Name_Zhuo Jun   Name_Ziv Aviram   \
0                       0.0             0.0               0.0   
1                       0.0             0.0               0.0   
2                       0.0             0.0               0.0   
3                       0.0             0.0               0.0   
4                       0.0             0.0               0.0   

   Name_Zong Qinghou   Name_Zong Yanmin   Name_Zugen Ni   Name_Zuowen Song   \
0                 0.0                0.0             0.0                0.0   
1                 0.0                0.0             0.0                0.0   
2                 0.0                0.0             0.0                0.0   
3                 0.0                0.0             0.0                0.0   
4                 0.0                0.0             0.0                0.0   

   Name_Zygmunt Solorz-Zak   
0                       0.0  
1                       0.0  
2                       0.0  
3                       0.0  
4                       0.0  

[5 rows x 2603 columns]
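One-hot encoding 'Name' adds roughly 2,600 columns, each non-zero in exactly one row, so these features essentially memorize identities rather than carry generalizable signal. A minimal alternative sketch, under the assumption that the raw CSV is reloaded into a separate frame, is to drop 'Name' and frequency-encode the remaining categoricals:

# Hypothetical alternative: drop the identifier column and frequency-encode categoricals
df_raw = pd.read_csv("C://Users//annal//aim//static//csv//Forbes_Billionaires.csv")
df_alt = df_raw.drop(columns=['Name'])
for col in ['Country', 'Source', 'Industry']:
    # replace each category with its relative frequency in the data
    df_alt[col] = df_alt[col].map(df_alt[col].value_counts(normalize=True))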

Let's create the data splits, using the place in the ranking as the target variable.

In [4]:
from sklearn.model_selection import train_test_split

# Features (X) and target variable (y)
X = df.drop(columns=['Rank '])  # features
y = df['Rank ']                 # target variable

# Split the data into a training set and a temporary set
X_train, X_temp, y_train, y_temp = train_test_split(X, y, test_size=0.4, random_state=42)

# Split the temporary set into validation and test sets
X_val, X_test, y_val, y_test = train_test_split(X_temp, y_temp, test_size=0.5, random_state=42)

# Check the sizes of the resulting sets
print(f"Training set size: {X_train.shape}")
print(f"Validation set size: {X_val.shape}")
print(f"Test set size: {X_test.shape}")
Training set size: (1560, 2602)
Validation set size: (520, 2602)
Test set size: (520, 2602)
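As a quick sanity check (a minimal sketch reusing the variables above), we can confirm the 60/20/20 proportions and that the splits are pairwise disjoint:

# Verify split proportions and that no row appears in more than one split
n = len(X)
print(len(X_train) / n, len(X_val) / n, len(X_test) / n)  # expected ~0.6, 0.2, 0.2
assert not set(X_train.index) & set(X_val.index)
assert not set(X_train.index) & set(X_test.index)
assert not set(X_val.index) & set(X_test.index)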
In [5]:
import seaborn as sns
import matplotlib.pyplot as plt
# Function to inspect the distribution of the target (rank)
def plot_distribution(y_data, title):
    plt.figure(figsize=(10, 6))
    sns.histplot(y_data, kde=True, bins=50)
    plt.title(title)
    plt.xlabel('Rank ')
    plt.ylabel('Frequency')
    plt.grid(True)
    plt.show()

# Distribution of the rank in each split
plot_distribution(y_train, "Rank distribution in the training set")
plot_distribution(y_val, "Rank distribution in the validation set")
plot_distribution(y_test, "Rank distribution in the test set")
[Histograms of the rank distribution in the training, validation, and test sets]
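Besides the histograms, a minimal sketch (assuming the y_* series above) that compares the splits numerically:

# Summary statistics of the target in each split, side by side
summary = pd.DataFrame({'train': y_train.describe(),
                        'val': y_val.describe(),
                        'test': y_test.describe()})
print(summary)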

Let's apply min-max normalization (and standardization for comparison) to improve the model's performance.

In [6]:
from sklearn.preprocessing import MinMaxScaler, StandardScaler

# Assuming the feature matrix X has already been prepared
# Apply min-max normalization to all numeric features
min_max_scaler = MinMaxScaler()
X_normalized = pd.DataFrame(min_max_scaler.fit_transform(X), columns=X.columns)

# Apply standardization to all numeric features
standard_scaler = StandardScaler()
X_standardized = pd.DataFrame(standard_scaler.fit_transform(X), columns=X.columns)

# Check the first 5 rows after normalization
print("Normalized data:")
print(X_normalized.head())

# Check the first 5 rows after standardization
print("\nStandardized data:")
print(X_standardized.head())
Normalized data:
   Networth       Age   Country    Source  Industry  \
0       1.0  0.319728  0.945946  0.137584  0.000000   
1       1.0  0.428571  0.945946  0.005593  0.882353   
2       1.0  0.632653  0.270270  0.081655  0.176471   
3       1.0  0.537415  0.945946  0.090604  0.882353   
4       1.0  0.877551  0.945946  0.012304  0.235294   

   Name_Abdulla Al Futtaim & family   \
0                                0.0   
1                                0.0   
2                                0.0   
3                                0.0   
4                                0.0   

   Name_Abdulla bin Ahmad Al Ghurair & family   Name_Abdulsamad Rabiu   \
0                                          0.0                     0.0   
1                                          0.0                     0.0   
2                                          0.0                     0.0   
3                                          0.0                     0.0   
4                                          0.0                     0.0   

   Name_Abhay Firodia   Name_Abigail Johnson   ...  Name_Zhu Yan & family   \
0                  0.0                    0.0  ...                     0.0   
1                  0.0                    0.0  ...                     0.0   
2                  0.0                    0.0  ...                     0.0   
3                  0.0                    0.0  ...                     0.0   
4                  0.0                    0.0  ...                     0.0   

   Name_Zhu Yiming   Name_Zhu Yiwen & family   Name_Zhuo Jun   \
0               0.0                       0.0             0.0   
1               0.0                       0.0             0.0   
2               0.0                       0.0             0.0   
3               0.0                       0.0             0.0   
4               0.0                       0.0             0.0   

   Name_Ziv Aviram   Name_Zong Qinghou   Name_Zong Yanmin   Name_Zugen Ni   \
0               0.0                 0.0                0.0             0.0   
1               0.0                 0.0                0.0             0.0   
2               0.0                 0.0                0.0             0.0   
3               0.0                 0.0                0.0             0.0   
4               0.0                 0.0                0.0             0.0   

   Name_Zuowen Song   Name_Zygmunt Solorz-Zak   
0                0.0                       0.0  
1                0.0                       0.0  
2                0.0                       0.0  
3                0.0                       0.0  
4                0.0                       0.0  

[5 rows x 2602 columns]

Standardized data:
   Networth       Age   Country    Source  Industry  \
0  2.266803 -1.081352  1.173910 -1.505003 -1.701719   
1  2.266803 -0.475422  1.173910 -2.004526  1.339990   
2  2.266803  0.660697 -0.805574 -1.716665 -1.093377   
3  2.266803  0.130508  1.173910 -1.682800  1.339990   
4  2.266803  2.024040  1.173910 -1.979126 -0.890597   

   Name_Abdulla Al Futtaim & family   \
0                          -0.019615   
1                          -0.019615   
2                          -0.019615   
3                          -0.019615   
4                          -0.019615   

   Name_Abdulla bin Ahmad Al Ghurair & family   Name_Abdulsamad Rabiu   \
0                                    -0.019615               -0.019615   
1                                    -0.019615               -0.019615   
2                                    -0.019615               -0.019615   
3                                    -0.019615               -0.019615   
4                                    -0.019615               -0.019615   

   Name_Abhay Firodia   Name_Abigail Johnson   ...  Name_Zhu Yan & family   \
0            -0.019615              -0.019615  ...               -0.019615   
1            -0.019615              -0.019615  ...               -0.019615   
2            -0.019615              -0.019615  ...               -0.019615   
3            -0.019615              -0.019615  ...               -0.019615   
4            -0.019615              -0.019615  ...               -0.019615   

   Name_Zhu Yiming   Name_Zhu Yiwen & family   Name_Zhuo Jun   \
0         -0.019615                 -0.019615       -0.019615   
1         -0.019615                 -0.019615       -0.019615   
2         -0.019615                 -0.019615       -0.019615   
3         -0.019615                 -0.019615       -0.019615   
4         -0.019615                 -0.019615       -0.019615   

   Name_Ziv Aviram   Name_Zong Qinghou   Name_Zong Yanmin   Name_Zugen Ni   \
0         -0.019615           -0.019615          -0.019615       -0.019615   
1         -0.019615           -0.019615          -0.019615       -0.019615   
2         -0.019615           -0.019615          -0.019615       -0.019615   
3         -0.019615           -0.019615          -0.019615       -0.019615   
4         -0.019615           -0.019615          -0.019615       -0.019615   

   Name_Zuowen Song   Name_Zygmunt Solorz-Zak   
0          -0.019615                 -0.019615  
1          -0.019615                 -0.019615  
2          -0.019615                 -0.019615  
3          -0.019615                 -0.019615  
4          -0.019615                 -0.019615  

[5 rows x 2602 columns]
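Note that both scalers above are fitted on the full feature matrix X, which lets information from the validation and test rows influence the preprocessing. A common precaution is to fit the scaler on the training split only and reuse it for the other splits; a minimal sketch under that assumption, reusing X_train, X_val and X_test from the split above:

# Fit the scaler on the training data only, then transform all splits with it
scaler = MinMaxScaler()
X_train_scaled = pd.DataFrame(scaler.fit_transform(X_train), columns=X_train.columns)
X_val_scaled = pd.DataFrame(scaler.transform(X_val), columns=X_val.columns)
X_test_scaled = pd.DataFrame(scaler.transform(X_test), columns=X_test.columns)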

Let's show an example of using featuretools.

I'll try to move the country into a separate table.

In [7]:
pip install --upgrade featuretools
Collecting featuretools
Note: you may need to restart the kernel to use updated packages.
[notice] A new release of pip is available: 24.2 -> 24.3.1
[notice] To update, run: python.exe -m pip install --upgrade pip
  Downloading featuretools-1.31.0-py3-none-any.whl.metadata (15 kB)
Collecting cloudpickle>=1.5.0 (from featuretools)
  Downloading cloudpickle-3.1.0-py3-none-any.whl.metadata (7.0 kB)
Collecting holidays>=0.17 (from featuretools)
  Downloading holidays-0.59-py3-none-any.whl.metadata (25 kB)
Requirement already satisfied: numpy>=1.25.0 in c:\users\annal\aim\.venv\lib\site-packages (from featuretools) (2.1.1)
Requirement already satisfied: packaging>=20.0 in c:\users\annal\aim\.venv\lib\site-packages (from featuretools) (24.1)
Requirement already satisfied: pandas>=2.0.0 in c:\users\annal\aim\.venv\lib\site-packages (from featuretools) (2.2.2)
Requirement already satisfied: psutil>=5.7.0 in c:\users\annal\aim\.venv\lib\site-packages (from featuretools) (6.0.0)
Requirement already satisfied: scipy>=1.10.0 in c:\users\annal\aim\.venv\lib\site-packages (from featuretools) (1.14.1)
Collecting tqdm>=4.66.3 (from featuretools)
  Downloading tqdm-4.66.6-py3-none-any.whl.metadata (57 kB)
Collecting woodwork>=0.28.0 (from featuretools)
  Downloading woodwork-0.31.0-py3-none-any.whl.metadata (10 kB)
Requirement already satisfied: python-dateutil in c:\users\annal\aim\.venv\lib\site-packages (from holidays>=0.17->featuretools) (2.9.0.post0)
Requirement already satisfied: pytz>=2020.1 in c:\users\annal\aim\.venv\lib\site-packages (from pandas>=2.0.0->featuretools) (2024.1)
Requirement already satisfied: tzdata>=2022.7 in c:\users\annal\aim\.venv\lib\site-packages (from pandas>=2.0.0->featuretools) (2024.1)
Requirement already satisfied: colorama in c:\users\annal\aim\.venv\lib\site-packages (from tqdm>=4.66.3->featuretools) (0.4.6)
Requirement already satisfied: scikit-learn>=1.1.0 in c:\users\annal\aim\.venv\lib\site-packages (from woodwork>=0.28.0->featuretools) (1.5.2)
Collecting importlib-resources>=5.10.0 (from woodwork>=0.28.0->featuretools)
  Downloading importlib_resources-6.4.5-py3-none-any.whl.metadata (4.0 kB)
Requirement already satisfied: six>=1.5 in c:\users\annal\aim\.venv\lib\site-packages (from python-dateutil->holidays>=0.17->featuretools) (1.16.0)
Requirement already satisfied: joblib>=1.2.0 in c:\users\annal\aim\.venv\lib\site-packages (from scikit-learn>=1.1.0->woodwork>=0.28.0->featuretools) (1.4.2)
Requirement already satisfied: threadpoolctl>=3.1.0 in c:\users\annal\aim\.venv\lib\site-packages (from scikit-learn>=1.1.0->woodwork>=0.28.0->featuretools) (3.5.0)
Downloading featuretools-1.31.0-py3-none-any.whl (587 kB)
   ---------------------------------------- 0.0/587.9 kB ? eta -:--:--
   ----------------- ---------------------- 262.1/587.9 kB ? eta -:--:--
   ---------------------------------------- 587.9/587.9 kB 1.5 MB/s eta 0:00:00
Downloading cloudpickle-3.1.0-py3-none-any.whl (22 kB)
Downloading holidays-0.59-py3-none-any.whl (1.1 MB)
   ---------------------------------------- 0.0/1.1 MB ? eta -:--:--
   --------- ------------------------------ 0.3/1.1 MB ? eta -:--:--
   ---------------------------- ----------- 0.8/1.1 MB 1.9 MB/s eta 0:00:01
   ---------------------------------------- 1.1/1.1 MB 2.2 MB/s eta 0:00:00
Downloading tqdm-4.66.6-py3-none-any.whl (78 kB)
Downloading woodwork-0.31.0-py3-none-any.whl (215 kB)
Downloading importlib_resources-6.4.5-py3-none-any.whl (36 kB)
Installing collected packages: tqdm, importlib-resources, cloudpickle, holidays, woodwork, featuretools
Successfully installed cloudpickle-3.1.0 featuretools-1.31.0 holidays-0.59 importlib-resources-6.4.5 tqdm-4.66.6 woodwork-0.31.0
In [8]:
pip install --upgrade setuptools
Collecting setuptools
  Downloading setuptools-75.3.0-py3-none-any.whl.metadata (6.9 kB)
Downloading setuptools-75.3.0-py3-none-any.whl (1.3 MB)
   ---------------------------------------- 0.0/1.3 MB ? eta -:--:--
   ---------------- ----------------------- 0.5/1.3 MB 3.4 MB/s eta 0:00:01
   ---------------------------------------- 1.3/1.3 MB 3.7 MB/s eta 0:00:00
Installing collected packages: setuptools
Successfully installed setuptools-75.3.0
Note: you may need to restart the kernel to use updated packages.
[notice] A new release of pip is available: 24.2 -> 24.3.1
[notice] To update, run: python.exe -m pip install --upgrade pip
In [ ]:
import featuretools as ft
from woodwork.logical_types import Categorical, Integer
import pandas as pd
df = pd.read_csv("C://Users//annal//aim//static//csv//Forbes_Billionaires.csv")
df['id'] = pd.Series(range(len(df)))
# Create two tables: one with the country, the other with the remaining data
country_df = df[['id', 'Country']].drop_duplicates().reset_index(drop=True)
other_df = df.drop(columns=['Country'])

# Create a unique identifier to link the two tables
country_df['country_id'] = country_df.index
other_df['country_id'] = other_df['id'].map(country_df.set_index('id')['country_id'])

es = ft.EntitySet(id="orders")
es = es.add_dataframe(
    dataframe_name="country_table",
    dataframe=country_df,
    index="country_id",  # index that uniquely identifies the country rows
    logical_types={
        "Country": Categorical  # logical type of the country column
    },
)
es = es.add_dataframe(
    dataframe_name="other_about_billioner",
    dataframe=other_df,
    index="billioner_id",  # index that uniquely identifies the billionaires
    logical_types={
        "Rank ": Integer,  # target variable (rank)
        "Networth": Integer,
        "Age": Integer,
        "country_id": Integer,
    },
)
es = es.add_relationship("country_table", "country_id", "other_about_billioner", "country_id")

feature_matrix, feature_defs = ft.dfs(
    entityset=es,
    target_dataframe_name="other_about_billioner"
)

feature_matrix
c:\Users\annal\aim\.venv\Lib\site-packages\featuretools\entityset\entityset.py:1733: UserWarning: index billioner_id not found in dataframe, creating new integer column
  warnings.warn(
c:\Users\annal\aim\.venv\Lib\site-packages\woodwork\type_sys\utils.py:33: UserWarning: Could not infer format, so each element will be parsed individually, falling back to `dateutil`. To ensure parsing is consistent and as-expected, please specify a format.
  pd.to_datetime(
c:\Users\annal\aim\.venv\Lib\site-packages\woodwork\type_sys\utils.py:33: UserWarning: Could not infer format, so each element will be parsed individually, falling back to `dateutil`. To ensure parsing is consistent and as-expected, please specify a format.
  pd.to_datetime(
c:\Users\annal\aim\.venv\Lib\site-packages\woodwork\type_sys\utils.py:33: UserWarning: Could not infer format, so each element will be parsed individually, falling back to `dateutil`. To ensure parsing is consistent and as-expected, please specify a format.
  pd.to_datetime(
c:\Users\annal\aim\.venv\Lib\site-packages\woodwork\type_sys\utils.py:33: UserWarning: Could not infer format, so each element will be parsed individually, falling back to `dateutil`. To ensure parsing is consistent and as-expected, please specify a format.
  pd.to_datetime(
c:\Users\annal\aim\.venv\Lib\site-packages\woodwork\type_sys\utils.py:33: UserWarning: Could not infer format, so each element will be parsed individually, falling back to `dateutil`. To ensure parsing is consistent and as-expected, please specify a format.
  pd.to_datetime(
c:\Users\annal\aim\.venv\Lib\site-packages\woodwork\type_sys\utils.py:33: UserWarning: Could not infer format, so each element will be parsed individually, falling back to `dateutil`. To ensure parsing is consistent and as-expected, please specify a format.
  pd.to_datetime(
c:\Users\annal\aim\.venv\Lib\site-packages\featuretools\computational_backends\feature_set_calculator.py:785: FutureWarning: The provided callable <function max at 0x000001952157A520> is currently using SeriesGroupBy.max. In a future version of pandas, the provided callable will be used directly. To keep current behavior pass the string "max" instead.
  ).agg(to_agg)
c:\Users\annal\aim\.venv\Lib\site-packages\featuretools\computational_backends\feature_set_calculator.py:785: FutureWarning: The provided callable <function std at 0x000001952157B060> is currently using SeriesGroupBy.std. In a future version of pandas, the provided callable will be used directly. To keep current behavior pass the string "std" instead.
  ).agg(to_agg)
c:\Users\annal\aim\.venv\Lib\site-packages\featuretools\computational_backends\feature_set_calculator.py:785: FutureWarning: The provided callable <function sum at 0x0000019521579B20> is currently using SeriesGroupBy.sum. In a future version of pandas, the provided callable will be used directly. To keep current behavior pass the string "sum" instead.
  ).agg(to_agg)
c:\Users\annal\aim\.venv\Lib\site-packages\featuretools\computational_backends\feature_set_calculator.py:785: FutureWarning: The provided callable <function min at 0x000001952157A660> is currently using SeriesGroupBy.min. In a future version of pandas, the provided callable will be used directly. To keep current behavior pass the string "min" instead.
  ).agg(to_agg)
c:\Users\annal\aim\.venv\Lib\site-packages\featuretools\computational_backends\feature_set_calculator.py:785: FutureWarning: The provided callable <function mean at 0x000001952157AF20> is currently using SeriesGroupBy.mean. In a future version of pandas, the provided callable will be used directly. To keep current behavior pass the string "mean" instead.
  ).agg(to_agg)
Out[ ]:
Rank Networth Age Industry id country_id country_table.id country_table.Country country_table.COUNT(other_about_billioner) country_table.MAX(other_about_billioner.Age) ... country_table.SKEW(other_about_billioner.Rank ) country_table.SKEW(other_about_billioner.id) country_table.STD(other_about_billioner.Age) country_table.STD(other_about_billioner.Networth) country_table.STD(other_about_billioner.Rank ) country_table.STD(other_about_billioner.id) country_table.SUM(other_about_billioner.Age) country_table.SUM(other_about_billioner.Networth) country_table.SUM(other_about_billioner.Rank ) country_table.SUM(other_about_billioner.id)
billioner_id
0 1 219 50 Automotive 0 0 0 United States 1 50.0 ... NaN NaN NaN NaN NaN NaN 50.0 219.0 1.0 0.0
1 2 171 58 Technology 1 1 1 United States 1 58.0 ... NaN NaN NaN NaN NaN NaN 58.0 171.0 2.0 1.0
2 3 158 73 Fashion & Retail 2 2 2 France 1 73.0 ... NaN NaN NaN NaN NaN NaN 73.0 158.0 3.0 2.0
3 4 129 66 Technology 3 3 3 United States 1 66.0 ... NaN NaN NaN NaN NaN NaN 66.0 129.0 4.0 3.0
4 5 118 91 Finance & Investments 4 4 4 United States 1 91.0 ... NaN NaN NaN NaN NaN NaN 91.0 118.0 5.0 4.0
... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ...
2595 2578 1 80 Healthcare 2595 2595 2595 Spain 1 80.0 ... NaN NaN NaN NaN NaN NaN 80.0 1.0 2578.0 2595.0
2596 2578 1 82 Fashion & Retail 2596 2596 2596 Philippines 1 82.0 ... NaN NaN NaN NaN NaN NaN 82.0 1.0 2578.0 2596.0
2597 2578 1 71 Fashion & Retail 2597 2597 2597 Philippines 1 71.0 ... NaN NaN NaN NaN NaN NaN 71.0 1.0 2578.0 2597.0
2598 2578 1 68 Fashion & Retail 2598 2598 2598 Philippines 1 68.0 ... NaN NaN NaN NaN NaN NaN 68.0 1.0 2578.0 2598.0
2599 2578 1 69 Food & Beverage 2599 2599 2599 Germany 1 69.0 ... NaN NaN NaN NaN NaN NaN 69.0 1.0 2578.0 2599.0

2600 rows × 35 columns
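In the feature matrix above every country group contains exactly one billionaire, because country_df was built from the unique (id, Country) pairs, so the aggregated features (MEAN, STD, SKEW, ...) are computed over a single row and come out trivial or NaN. Below is a minimal sketch of how the same EntitySet could instead be keyed by the country value itself, so the aggregations run across all billionaires of a country; the dataframe names, the merge step and the restricted primitive list are assumptions, not part of the original notebook:

import featuretools as ft
from woodwork.logical_types import Categorical, Integer
import pandas as pd

df = pd.read_csv("C://Users//annal//aim//static//csv//Forbes_Billionaires.csv")
df['billioner_id'] = range(len(df))

# One row per country; all billionaires of a country share the same country_id
country_df = df[['Country']].drop_duplicates().reset_index(drop=True)
country_df['country_id'] = country_df.index
df = df.merge(country_df, on='Country')

es = ft.EntitySet(id="billionaires_es")
es = es.add_dataframe(dataframe_name="countries", dataframe=country_df,
                      index="country_id",
                      logical_types={"Country": Categorical})
es = es.add_dataframe(dataframe_name="billionaires",
                      dataframe=df.drop(columns=['Country']),
                      index="billioner_id",
                      logical_types={"Rank ": Integer, "country_id": Integer})
es = es.add_relationship("countries", "country_id", "billionaires", "country_id")

# Keep the matrix small: only a few aggregation primitives, no transform primitives
feature_matrix, feature_defs = ft.dfs(entityset=es,
                                      target_dataframe_name="billionaires",
                                      agg_primitives=["mean", "std", "count"],
                                      trans_primitives=[])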