Commit 674e9169 authored by sandcha's avatar sandcha
Merge branch 'init-ci' into 'main'

Initialise the continuous integration configuration and the GitLab templates

See merge request !4
parents 050a7482 bd348c9b
Pipeline #15811 passed
image: python:3.11

variables:
  PIP_DOWNLOAD_DIR: ".pip"

stages:
  - install
  - test

# get a cache before each stage
cache:
  key:
    files:
      - poetry.lock
  paths:
    - .venv/
  policy: pull-push
  # add the untracked files to the cache.zip
  untracked: true

# run script before each job's script
before_script:
  # install poetry (and collapse section)
  - echo -e "\e[0Ksection_start:`date +%s`:poetry_install_section[collapsed=true]\r\e[0KInstalling poetry..."
  - pip download --dest=${PIP_DOWNLOAD_DIR} poetry  # STEP 1
  - pip install --find-links=${PIP_DOWNLOAD_DIR} poetry  # STEP 2
  - echo -e "\e[0Ksection_end:`date +%s`:poetry_install_section\r\e[0K"
  # set the virtual environment to a '.venv/' in the project directory
  - poetry config virtualenvs.in-project true

install-repository:
  stage: install
  script:
    - poetry install

check-style:
  stage: test
  needs: ["install-repository"]
  script:
    - poetry run flake8 `git ls-files | grep "\.py"`

test:
  stage: test
  needs: ["install-repository"]
  script:
    - poetry run pytest --log-cli-level=DEBUG
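The `echo -e` lines in `before_script` use GitLab's collapsible-section log markers; the job log renders everything between matching start and end markers as a foldable section. A minimal standalone sketch (the section name `demo` is arbitrary, chosen here for illustration):

```shell
# GitLab CI interprets these escape sequences in the job log and renders a
# collapsible section; run locally, they simply print to the terminal.
echo -e "\e[0Ksection_start:`date +%s`:demo[collapsed=true]\r\e[0KDemo section..."
echo "commands for the collapsed section run here"
echo -e "\e[0Ksection_end:`date +%s`:demo\r\e[0K"
```

The section name after `section_start:`/`section_end:` must match for the fold to close correctly.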
Hello hello!
The calculation of the dotations fascinates me, but I have just run into a problem.
### What did I do? In what context?
### What did I expect?
### What actually happened?
### Here is some information that may help reproduce the problem:
Thank you for contributing to leximpact-dotations-back! Delete this line and, on each line below, the cases that do not match your contribution :)
* Feature addition. | **Non-backward-compatible** change. | Technical improvement. | Crash fix. | Minor change.
* Period [of the dotations]: all. | up to DD/MM/YYYY. | from DD/MM/YYYY.
* Details:
  - Description of the added feature or of the new behaviour adopted.
  - Cases in which an error was observed.
  - Possible migration guide for reusers.
- - - -
These changes (delete the lines that do not match your case):
- Modify the leximpact-dotations-back API (for example, a change to the web API request format).
- Add a feature (for example, the addition of a new dotation calculation year).
- Fix or improve an existing calculation.
- Modify non-functional elements of this repository (for example, a README update).
# CHANGELOG
### 0.1.1
* Technical improvement.
* Details:
  * Configures the interactions with GitLab:
    * Initiates continuous integration for installation, style checking and testing
    * Adds the issue and merge request opening templates
  * Configures the code style checks with `flake8` and `autopep8`
    * Allows configuring `flake8` in poetry's `pyproject.toml` thanks to the `flake8-pyproject` library
  * Fixes the style of the Python files
  * Documents the commands in the `README`
## 0.1.0
* Feature addition.
......
@@ -62,6 +62,24 @@ Moreover, traces are handled with the [logging](https://docs.
 poetry run pytest --log-cli-level=DEBUG
 ```
## Adjusting the code style
Check the code style with [flake8](https://flake8.pycqa.org):
```shell
poetry run flake8
```
Or check only the style of the code you have added with:
```shell
poetry run flake8 `git ls-files | grep "\.py"`
```
And fix it automatically in the current directory, and recursively in its contents, with:
```shell
poetry run autopep8 .
```
This command is only recursive when recursion is configured for [autopep8](https://pypi.org/project/autopep8/) in the `pyproject.toml` file.
## Data
The Direction générale des collectivités locales (DGCL) publishes the dotation distribution criteria as open data.
......
@@ -9,7 +9,7 @@ import logging
 from numpy import nan
 from os.path import dirname, join
-from pandas import DataFrame, isna, read_csv, Series
+from pandas import DataFrame, read_csv, Series
 from leximpact_dotations_back.configure_logging import formatter
@@ -38,13 +38,17 @@ def drop_separation_columns(data: DataFrame):
     data_without_separation_columns = data.copy()
     # row 0 is empty, row 1 contains the main titles (dotation), row 2 contains the subtitles (critère)
-    # but for separation columns where the cell on row 1 is NaN and the cell on row 2 (and next rows) is a hyphen
-    are_columns_to_drop = data.apply(lambda col: all(x in [nan, '-'] for x in col), axis=0)
+    # but for separation columns where the cell on row 1 is NaN and the cell
+    # on row 2 (and next rows) is a hyphen
+    are_columns_to_drop = data.apply(
+        lambda col: all(x in [nan, '-'] for x in col), axis=0)
     data_without_separation_columns = data.loc[:, ~are_columns_to_drop]
     # INFO to check the 'dropped_columns_names' in DGCL .xlsx
-    # you can get a column number with this formula (knowing that you might already have deleted the 1st column): =COLUMN()
-    logger.debug(f"Dropped the following columns: {are_columns_to_drop[are_columns_to_drop].index.values}")
+    # you can get a column number with this formula (knowing that you might
+    # already have deleted the 1st column): =COLUMN()
+    logger.debug(
+        f"Dropped the following columns: {are_columns_to_drop[are_columns_to_drop].index.values}")
     return data_without_separation_columns
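The NaN-or-hyphen rule in `drop_separation_columns` can be checked in isolation; a self-contained sketch with toy data (the column names here are hypothetical, not from the DGCL file):

```python
# Toy illustration of the separation-column rule used in
# drop_separation_columns: drop a column when every cell is NaN or '-'.
from numpy import nan
from pandas import DataFrame

data = DataFrame({
    "sep": [nan, "-", "-"],                    # only NaN and hyphens: dropped
    "Dotation - Critère": [nan, "12", "34"],   # real data: kept
})

are_columns_to_drop = data.apply(
    lambda col: all(x in [nan, '-'] for x in col), axis=0)
cleaned = data.loc[:, ~are_columns_to_drop]
print(list(cleaned.columns))
```

Note that the `x in [nan, '-']` membership test relies on `nan` being the same object throughout, since NaN never compares equal to itself.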
@@ -53,12 +53,15 @@ def clean_criteres_dgcl(year: int):
     Remove first empty rows and column and set one column title from the DGCL title and subtitle.
     '''
     if year != 2024:
-        logger.warning(f"Applying '2024' data cleaning method on '{year}' data but there is nothing like dream to create the future :D")
+        logger.warning(
+            f"Applying '2024' data cleaning method on '{year}' data but there is nothing like dream to create the future :D")
-    raw_criteres_dgcl_path = join(FILES_DIRPATH, CRITERES_FILENAME_PREFIX + str(year) + CRITERES_RAW_FILENAME_SUFFIX)
+    raw_criteres_dgcl_path = join(
+        FILES_DIRPATH, CRITERES_FILENAME_PREFIX + str(year) + CRITERES_RAW_FILENAME_SUFFIX)
     logger.info(f"Cleaning '{raw_criteres_dgcl_path}' file content...")
-    criteres_dgcl_without_2rows: DataFrame = read_csv(raw_criteres_dgcl_path, decimal=",", low_memory=False, header=None, skiprows=2)
+    criteres_dgcl_without_2rows: DataFrame = read_csv(
+        raw_criteres_dgcl_path, decimal=",", low_memory=False, header=None, skiprows=2)
     logger.info("Loaded raw data extract without 2 top rows: ")
     logger.info(criteres_dgcl_without_2rows.iloc[:, :5])
@@ -66,17 +73,22 @@ def clean_criteres_dgcl(year: int):
     logger.debug("Data extract without first column: ")
     logger.debug(criteres_dgcl_without_2rows_col1.iloc[:, :5])
-    criteres_dgcl_without_2rows_col1_sep = drop_separation_columns(criteres_dgcl_without_2rows_col1)
+    criteres_dgcl_without_2rows_col1_sep = drop_separation_columns(
+        criteres_dgcl_without_2rows_col1)
-    # for each column, concatenate the values of the 1st and 2nd row and add a hyphen as separator
-    concatenated_title_rows: Series = criteres_dgcl_without_2rows_col1_sep.agg('{0[0]} - {0[1]}'.format, axis=0)
+    # for each column, concatenate the values of the 1st and 2nd row and add a
+    # hyphen as separator
+    concatenated_title_rows: Series = criteres_dgcl_without_2rows_col1_sep.agg(
+        '{0[0]} - {0[1]}'.format, axis=0)
     logger.debug(concatenated_title_rows)
     # use concatenated values as new title
-    renamed_criteres_dgcl = criteres_dgcl_without_2rows_col1_sep.set_axis(concatenated_title_rows, axis=1)
+    renamed_criteres_dgcl = criteres_dgcl_without_2rows_col1_sep.set_axis(
+        concatenated_title_rows, axis=1)
     # remove old titles (2 first rows), reset index and drop old one
-    cleaned_criteres_dgcl = renamed_criteres_dgcl.tail(-2).reset_index(drop=True)
+    cleaned_criteres_dgcl = renamed_criteres_dgcl.tail(
+        -2).reset_index(drop=True)
     logger.info("Final and cleaned data extract: ")
     logger.info(cleaned_criteres_dgcl.iloc[:, :5])
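The `'{0[0]} - {0[1]}'.format` aggregation is terse: applied column-wise, the format string indexes into each column Series, joining its rows 0 and 1 with `' - '`. A small sketch on a toy frame (the title values are invented for illustration) of how the new headers are built and the two title rows then dropped:

```python
# Toy illustration of the title-building steps: each column's rows 0 and 1
# are joined with ' - ', used as the new header, then dropped from the data.
from pandas import DataFrame, Series

df = DataFrame([["Dotation A", "Dotation B"],   # row 0: main titles
                ["Critère 1", "Critère 2"],     # row 1: subtitles
                ["10", "20"]])                  # row 2+: data

titles: Series = df.agg('{0[0]} - {0[1]}'.format, axis=0)
renamed = df.set_axis(titles, axis=1)
# tail(-2) keeps everything except the first two rows
cleaned = renamed.tail(-2).reset_index(drop=True)
```

`tail(-2)` is a compact alternative to `iloc[2:]` for skipping the first two rows.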
@@ -84,7 +96,8 @@ def clean_criteres_dgcl(year: int):
 def create_criteres_dgcl_csv(clean_criteres: DataFrame, year: int):
-    new_criteres_csv_path = join(FILES_DIRPATH, OUTPUT_RELATIVE_DIRPATH, CRITERES_FILENAME_PREFIX + str(year) + ".csv")
+    new_criteres_csv_path = join(
+        FILES_DIRPATH, OUTPUT_RELATIVE_DIRPATH, CRITERES_FILENAME_PREFIX + str(year) + ".csv")
     logger.info(f"Creating {new_criteres_csv_path} file...")
     clean_criteres.to_csv(new_criteres_csv_path, index=False)
......
@@ -17,4 +17,5 @@ class CustomFormatter(logging.Formatter):
         record.levelname = '🥊 ' + record.levelname
         return super().format(record)

 formatter = CustomFormatter('%(levelname)s')
@@ -12,10 +12,12 @@ def read_root():
         Pour en savoir plus, consulter la page /docs"
     }

 @app.get("/dependencies")
 def read_dependencies():
     # limit to a specific list of packages
-    selected_dependencies = ["OpenFisca-Core", "OpenFisca-France-Dotations-Locales", "numpy", "fastapi"]
+    selected_dependencies = [
+        "OpenFisca-Core", "OpenFisca-France-Dotations-Locales", "numpy", "fastapi"]
     # get the distribution objects for all installed packages
     dists = pkg_resources.working_set
     # extract the names and versions of the packages
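The endpoint relies on `pkg_resources`, which is deprecated in recent setuptools releases. The same name-to-version filtering can be sketched with the standard library's `importlib.metadata` (the package names below are examples, not the endpoint's full list):

```python
# Build a {name: version} mapping restricted to a chosen list of packages,
# mirroring what read_dependencies does with pkg_resources.working_set.
from importlib.metadata import distributions

selected_dependencies = ["numpy", "fastapi"]
installed = {
    dist.metadata["Name"]: dist.version
    for dist in distributions()
    if dist.metadata["Name"] in selected_dependencies
}
```

Packages absent from the environment simply do not appear in the resulting dict.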
......
This diff is collapsed.
 [tool.poetry]
 name = "leximpact-dotations-back"
-version = "0.1.0"
+version = "0.1.1"
 description = ""
 authors = ["LexImpact <you@example.com>"]
 license = "AGPL"
@@ -12,9 +12,23 @@ openfisca-france-dotations-locales = "^3.0.0"
 fastapi = "^0.111.0"
 pandas = "^2.2.2"

 [tool.poetry.group.dev.dependencies]
 pytest = "^7"
+flake8 = "^7.1.0"
+autopep8 = "^2.3.1"
+flake8-pyproject = "^1.2.3"
+
+[tool.flake8]  # configured here thanks to flake8-pyproject
+max-line-length = 88  # = default black value
+ignore = ["E501"]
+
+[tool.autopep8]
+in-place = true
+recursive = true
+aggressive = 2
+max-line-length = 88  # = default black value
+ignore = ["E501"]

 [build-system]
 requires = ["poetry-core"]
......
@@ -11,6 +11,7 @@ from leximpact_dotations_back.calculate import create_simulation_with_data
 def model():
     return OpenFiscaFranceDotationsLocales()

 def test_create_simulation_with_data(model):
     period = 2024
     data = DataFrame()
@@ -18,4 +19,5 @@ def test_create_simulation_with_data(model):
     simulation = create_simulation_with_data(model, period, data)
-    assert_array_equal(simulation.calculate('dsu_montant', period), array([0.]))
+    assert_array_equal(simulation.calculate(
+        'dsu_montant', period), array([0.]))
@@ -7,6 +7,7 @@ logging.basicConfig(level=logging.DEBUG)
 logger = logging.getLogger(__name__)
 client = TestClient(app)

 def test_dependencies():
     response = client.get("/dependencies")
     assert response.status_code == 200  # OK
......