- Expose the issues related to the modularity of Python;
- Introduce the daily working environment of data scientists, namely the notebook;
- Illustrate the use of notebooks within the framework of this course;
- Present the approach to take when encountering an error;
- Understand the importance of ongoing training in Python and discover some good resources for it.
1 Introduction
The richness of open-source languages comes from the ability to use packages developed by specialists. Python is particularly well-endowed in this area. To caricature, it is sometimes said that Python is the second-best language for all tasks, which makes it the best language. Indeed, the malleability of Python means that it can be approached in very different ways depending on whether you are a SysAdmin, a web developer, or a data scientist. It is this latter profile that will interest us here.
However, this richness poses a challenge when starting to learn Python or when reusing code written by others. Proprietary statistical languages operate in a top-down model: once a license is acquired, you simply install the software and follow the documentation provided by the developing company. In open-source languages, the approach is more bottom-up: the ecosystem is enriched by contributions and documentation from various sources. Several ways of doing the same thing coexist, and part of the work is taking the time to choose the best approach.
There are primarily two ways to use Python for data science1:

- Local installation: install the Python software on your machine, configure a suitable environment (usually via Anaconda), and use a development tool, such as Jupyter or VSCode, to write and execute Python code. We will revisit these three levels of abstraction in the next section.
- Using an online environment: access a preconfigured Python environment via your browser, hosted on a remote machine. This machine will execute the code you edit from your browser. This method is particularly recommended for beginners or for those who do not want to deal with system configuration.
The second approach relies on cloud services. In the context of this course, as will be explained later, we offer two solutions: SSPCloud, a cloud developed by the French administration and provided free of charge to students, researchers, and public officials, or Google Colab.
The two methods are briefly described below. The local installation method is presented to introduce useful concepts but is not detailed as it can be complex and prone to configuration issues. If you choose this route, be prepared to encounter technical difficulties and sometimes obscure configurations, often requiring tedious searches to resolve problems that you would probably not have encountered in an online environment.
2 Ingredients for a functional Python environment
As mentioned earlier, using Python locally for data science relies on three main elements: the Python interpreter, a virtual environment, and an integrated development environment (IDE). Each of these components plays a complementary role in the development process.
2.1 Python Interpreter
This refers to the programming language itself. Installing Python on your machine is the essential first step, as it provides the interpreter needed to run your Python code. At this stage, it is only the base language and a command-line tool.
If you have access to a command line in an environment where Python is available and properly configured (including being added to the PATH), you can already run Python from the command line2:
Linux terminal:

```bash
python --version
python -c "print(3+3)"
echo "print(3+3)" > example.py && python example.py && rm example.py
```

The last command is an example of creating a file example.py from the command line and using it from the command line. Since creating files from the command line is not a realistic working mode, we will explain how to create files using a text editor.
2.2 Python Environment
The Python language is built, like other open-source languages, from a basic core and additional packages. It is these packages that form the rich and dynamic ecosystem of Python, making it such a comfortable language.
Python is an open-source language, which means that anyone can offer their code for reuse in the form of a package. There are platforms that centralize community packages, the main one in the Python ecosystem being PyPI.
An environment refers to the collection of packages available to Python for performing specific tasks. Python is not installed with all the packages available on PyPI, so it is up to you to enrich your environment by installing new packages as needed.
If your Python is correctly configured, you can install new packages using pip install3. We will use many packages in this course, so this command will come up regularly.
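For example, installing the Pandas package used later in this course is a single command run from a terminal (inside a notebook cell, the same command would be prefixed with !):

```bash
pip install pandas
```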
Once installed, a script must declare a package before using it (otherwise Python won't know where to find the corresponding functions). This is the purpose of the import command. For example, import pandas as pd allows you to use the Pandas package (provided it is already installed).
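As a minimal sketch of this mechanism (the column name and values below are purely illustrative):

```python
import pandas as pd  # makes the installed Pandas package available under the alias pd

# The alias then gives access to the package's functions and classes
df = pd.DataFrame({"height": [170, 180, 165]})
print(df)
```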
Here is an illustration of how package management works in Python.
It is discouraged to use global imports like from pkg import *. For example, consider two modules that each provide a sqrt function:
```python
# Don't do that please!
from numpy import *
from math import *

sqrt([4, 3])
```
Global imports load all functions and variables from the numpy and math modules into the global namespace, which can lead to naming conflicts.
First, this can lead to unpredictable results: if the implementations differ, how do you know whether the numpy or the math version of the function was used? Here, it is the math version, imported last, that wins, and it raises an error: math.sqrt only accepts a single real number, not a list or array, unlike numpy.sqrt.
Second, it makes the code less readable, as it becomes difficult to know where each function or variable used comes from, complicating maintenance and debugging.
It is therefore preferable to import only the necessary functions or to use explicit aliases, such as import numpy as np and import math, to avoid these issues.
```python
import numpy as np
from math import sqrt

np.sqrt([3, 4])  # numpy's sqrt, applied element-wise to a list
sqrt(3)          # math's sqrt, applied to a single number
```
2.2.1 Development environments and notebooks
Using Python via the command line is fundamental in the applications world. However, it is impractical to write your code directly in the command line on a daily basis. Fortunately, there are suitable editors known as IDEs (integrated development environments): software that provides a convenient interface for writing and executing your code. They offer features that simplify reading and writing code: syntax highlighting, autocompletion, debugging, formal code quality diagnostics, etc.
Jupyter notebooks4 offer an interactive interface that allows you to write Python code, test it, and see the result below the instruction rather than in a separate console. Jupyter notebooks are essential in data science, education, and research because they greatly simplify exploration and experimentation.
They allow you to combine text in Markdown format (a lighter markup format than HTML or \(\LaTeX\)), Python code, and HTML code for visualizations and animations in a single document.
Initially, Jupyter was the only software offering these interactive features. There are now other ways to benefit from the advantages of notebooks while using an IDE with more comprehensive features than Jupyter. For this reason, as of 2024, it is more practical to use VSCode, a general-purpose code editor that nevertheless offers excellent Python support, than Jupyter. For more information on using notebooks in VSCode, refer to the official documentation.
3 Using notebooks in the context of this course
The best way to learn Python or notebooks is through practice, so all chapters of this course will be executable in notebook format. The following buttons allow you to open this chapter as a notebook in different environments:
- On Github, only for viewing and downloading notebooks, as Github is not a development and execution environment for notebooks;
- On SSPCloud, a modern cloud platform developed by Insee and provided free of charge to public agents, students, researchers, and agents of European public statistical institutes. As mentioned in the dedicated box, this is the recommended entry point for notebooks for anyone who has access to it. Notebooks can be opened there via VSCode (recommended approach) or Jupyter, with command-line access and the appropriate rights for package installation guaranteed in both interfaces. In addition to these features, already desirable for discovering Python, other features useful for continuous learning are explored in more detail in the third-year course on "Deploying Data Science Projects": free GPU access, interfacing with other cloud technologies such as object storage systems, etc.
- Google Colab is a free online service based on the Jupyter interface that provides access to Python resources run on Google's servers.
For public sector employees or students from partner schools, it is recommended to use the SSPCloud button, which opens the notebook on a modern, powerful, and flexible cloud infrastructure developed by Insee and accessible at https://datalab.sspcloud.fr5.
As mentioned, VSCode is a much more complete environment than Jupyter for using notebooks.
4 Exercise to Explore Basic Notebook Features
All the chapters of this course are designed as a narrative aiming to address a problem with intermediate exercises serving as milestones. They are easily identifiable in the following format:
An example of an exercise box
The following exercise aims to familiarize you with Jupyter notebooks in Python if you are not used to this environment. It illustrates basic features such as writing code, executing cells, adding text, and visualizing data.
To do this, open this chapter in a notebook-compatible environment6:
- Under this exercise, create a code cell. Write Python code that displays the phrase "Welcome to a notebook!" and then execute the cell. Modify your code and re-run it.
- By searching online, add a new cell and change its type to "Markdown". In this cell, write a short text including the following elements:
  - A small part in italics
  - An unordered list
  - A level 2 heading (equivalent to <h2> in HTML or \subsection in \(\LaTeX\))
  - An equation
- Create a code cell anywhere in the document. Create a list of integers from 1 to 10 named numbers. Display this list.
- Create a new code cell. Use the code below this exercise to generate a figure (a sketch of possible cell contents follows this list).
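As a reference only, here is a minimal sketch of what these code cells could contain. It is not the exact code distributed with the course, and the last part assumes the matplotlib package is installed:

```python
# Cell: display a welcome message
print("Welcome to a notebook!")

# Cell: a list of integers from 1 to 10
numbers = list(range(1, 11))
print(numbers)

# Cell: a simple, hypothetical figure (assumes matplotlib is installed)
import matplotlib.pyplot as plt

plt.plot(numbers, [n ** 2 for n in numbers])
plt.title("A first figure produced in a notebook")
plt.show()
```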
The figure obtained at the end of the exercise will look like this:
5 How to solve errors?
Encountering errors is completely natural and expected when learning (and even after!) a programming language. Resolving these errors is a great opportunity to understand how the language works and to become self-sufficient in its use. Here is a suggested sequence of steps to follow (in this order) to resolve an error:
- Read the logs carefully, i.e., the outputs returned by Python in case of an error. They are often informative and may contain the answer directly (see the short example after this list).
- Search on the internet (preferably in English and on Google). For example, providing the error name and part of the informative message returned by Python usually helps orient the search results towards what you are looking for.
- Often, the search will lead you to the Stackoverflow forum, which is designed for this purpose. If you really cannot find the answer to your problem, you can post on Stackoverflow, detailing the problem encountered so that forum users can reproduce it and find a solution.
- Official documentation (of Python and of the various packages) is often a bit dry but generally exhaustive. It helps you understand how to use the various objects, for example, for functions: what they expect as input, their parameters and types, what they return as output, etc.
- Code assistant AIs (ChatGPT, Github Copilot) can be very helpful. By providing them with appropriate instructions and verifying the generated code to avoid hallucinations, you can save a lot of time with these tools.
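To illustrate the first step, here is a minimal, hypothetical example of the kind of log Python returns; the last line of the traceback contains the error name and message, which are good search terms:

```python
# Running this without defining `undefined_var` beforehand raises an error
print(undefined_var)
# Python then prints a traceback whose last line reads:
# NameError: name 'undefined_var' is not defined
```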
Fix the cell below so that it no longer produces an error:

```python
pd.DataFrame(x)
```

Once fixed, the cell should display a small DataFrame, for example (with x = [0]):

  | 0 |
---|---|
0 | 0 |
6 How to continue learning after this course
6.1 Website content
This course is an introduction to data science with Python. Most of the content is designed for those who are new to the subject or who wish to explore a specific topic in this field, such as NLP.
However, this course also reflects my nearly decade-long experience 👴 with Python on various data sources, infrastructures, and problems: it is thus somewhat editorialized ("opinionated", as the Anglo-Saxons would say) to highlight certain expectations for data scientists and to help you avoid the same pitfalls I encountered in the past.
This course also provides content to go beyond the first few months of learning. Not all the content on this site is taught; some advanced sections and even chapters are intended for continuous learning and can be used several months after discovering this course.
pythonds.linogaliana.fr is continuously updated to reflect the evolving Python ecosystem. The notebooks will remain available beyond the teaching semester.
6.2 Technical monitoring
The rich and thriving Python ecosystem means you must stay attentive to its developments to avoid becoming outdated. While with monolithic languages like SAS or Stata one could rely solely on official documentation without ongoing technical monitoring, this is not feasible with Python or R. This course itself is continuously evolving, which is quite demanding 😅, to keep up with changes in the ecosystem.
Social networks like LinkedIn or X, and content aggregators like Medium or Towards Data Science, offer posts of varying quality, but maintaining continuous technical monitoring on these topics is not useless: over time, it can help identify new trends. The site Real Python generally provides very good posts, comprehensive and educational. Github can also be useful for technical monitoring: by looking at trending projects, you can spot the trends that will emerge soon.
Regarding printed books, some are of very high quality. However, it is important to pay attention to their update dates: the rapid pace of some elements of the ecosystem can render them outdated very quickly. It is generally more useful to have a recent but non-exhaustive post than a complete but outdated book.
There are many well-crafted newsletters for regularly tracking developments in the data science ecosystem. For me, they are the primary source of fresh information.
If you had to subscribe to only one newsletter, the most important to follow is Andrew Ng's, "The Batch". It offers reflections on academic advances in neural networks and on the evolution of the software and institutional ecosystem, making it excellent food for thought.
The newsletter by Christophe Bleffari, aimed at data engineers but also of great interest to data scientists, often presents very good content. The newsletter by Rami Krispin (data scientist at Apple) is also very useful, especially when working regularly with both Python and Quarto, the reproducible publishing software.
Fairly technical, the videos by Andrej Karpathy (data scientist at OpenAI) are very informative for understanding the workings of state-of-the-art language models. Similarly, content produced by Sebastian Raschka helps in knowing the latest advancements in research on the topic.
General newsletters from Data Elixir and Alpha Signal keep you updated with the latest news. In the field of data visualization, newsletters from DataWrapper provide accessible content on the subject.
Additional information

The environment this chapter was built and tested with is described below.
Latest built version: 2025-03-19
Python version used:
'3.12.6 | packaged by conda-forge | (main, Sep 30 2024, 18:08:52) [GCC 13.3.0]'
Package | Version |
---|---|
affine | 2.4.0 |
aiobotocore | 2.21.1 |
aiohappyeyeballs | 2.6.1 |
aiohttp | 3.11.13 |
aioitertools | 0.12.0 |
aiosignal | 1.3.2 |
alembic | 1.13.3 |
altair | 5.4.1 |
aniso8601 | 9.0.1 |
annotated-types | 0.7.0 |
anyio | 4.8.0 |
appdirs | 1.4.4 |
archspec | 0.2.3 |
asttokens | 2.4.1 |
attrs | 25.3.0 |
babel | 2.17.0 |
bcrypt | 4.2.0 |
beautifulsoup4 | 4.12.3 |
black | 24.8.0 |
blinker | 1.8.2 |
blis | 1.2.0 |
bokeh | 3.5.2 |
boltons | 24.0.0 |
boto3 | 1.37.1 |
botocore | 1.37.1 |
branca | 0.7.2 |
Brotli | 1.1.0 |
bs4 | 0.0.2 |
cachetools | 5.5.0 |
cartiflette | 0.0.2 |
Cartopy | 0.24.1 |
catalogue | 2.0.10 |
cattrs | 24.1.2 |
certifi | 2025.1.31 |
cffi | 1.17.1 |
charset-normalizer | 3.4.1 |
chromedriver-autoinstaller | 0.6.4 |
click | 8.1.8 |
click-plugins | 1.1.1 |
cligj | 0.7.2 |
cloudpathlib | 0.21.0 |
cloudpickle | 3.0.0 |
colorama | 0.4.6 |
comm | 0.2.2 |
commonmark | 0.9.1 |
conda | 24.9.1 |
conda-libmamba-solver | 24.7.0 |
conda-package-handling | 2.3.0 |
conda_package_streaming | 0.10.0 |
confection | 0.1.5 |
contextily | 1.6.2 |
contourpy | 1.3.1 |
cryptography | 43.0.1 |
cycler | 0.12.1 |
cymem | 2.0.11 |
cytoolz | 1.0.0 |
dask | 2024.9.1 |
dask-expr | 1.1.15 |
databricks-sdk | 0.33.0 |
dataclasses-json | 0.6.7 |
debugpy | 1.8.6 |
decorator | 5.1.1 |
Deprecated | 1.2.14 |
diskcache | 5.6.3 |
distributed | 2024.9.1 |
distro | 1.9.0 |
docker | 7.1.0 |
duckdb | 1.2.1 |
en_core_web_sm | 3.8.0 |
entrypoints | 0.4 |
et_xmlfile | 2.0.0 |
exceptiongroup | 1.2.2 |
executing | 2.1.0 |
fastexcel | 0.11.6 |
fastjsonschema | 2.21.1 |
fiona | 1.10.1 |
Flask | 3.0.3 |
folium | 0.17.0 |
fontawesomefree | 6.6.0 |
fonttools | 4.56.0 |
fr_core_news_sm | 3.8.0 |
frozendict | 2.4.4 |
frozenlist | 1.5.0 |
fsspec | 2023.12.2 |
geographiclib | 2.0 |
geopandas | 1.0.1 |
geoplot | 0.5.1 |
geopy | 2.4.1 |
gitdb | 4.0.11 |
GitPython | 3.1.43 |
google-auth | 2.35.0 |
graphene | 3.3 |
graphql-core | 3.2.4 |
graphql-relay | 3.2.0 |
graphviz | 0.20.3 |
great-tables | 0.12.0 |
greenlet | 3.1.1 |
gunicorn | 22.0.0 |
h11 | 0.14.0 |
h2 | 4.1.0 |
hpack | 4.0.0 |
htmltools | 0.6.0 |
httpcore | 1.0.7 |
httpx | 0.28.1 |
httpx-sse | 0.4.0 |
hyperframe | 6.0.1 |
idna | 3.10 |
imageio | 2.37.0 |
importlib_metadata | 8.6.1 |
importlib_resources | 6.5.2 |
inflate64 | 1.0.1 |
ipykernel | 6.29.5 |
ipython | 8.28.0 |
itsdangerous | 2.2.0 |
jedi | 0.19.1 |
Jinja2 | 3.1.6 |
jmespath | 1.0.1 |
joblib | 1.4.2 |
jsonpatch | 1.33 |
jsonpointer | 3.0.0 |
jsonschema | 4.23.0 |
jsonschema-specifications | 2024.10.1 |
jupyter-cache | 1.0.0 |
jupyter_client | 8.6.3 |
jupyter_core | 5.7.2 |
kaleido | 0.2.1 |
kiwisolver | 1.4.8 |
langchain | 0.3.20 |
langchain-community | 0.3.9 |
langchain-core | 0.3.45 |
langchain-text-splitters | 0.3.6 |
langcodes | 3.5.0 |
langsmith | 0.1.147 |
language_data | 1.3.0 |
lazy_loader | 0.4 |
libmambapy | 1.5.9 |
locket | 1.0.0 |
loguru | 0.7.3 |
lxml | 5.3.1 |
lz4 | 4.3.3 |
Mako | 1.3.5 |
mamba | 1.5.9 |
mapclassify | 2.8.1 |
marisa-trie | 1.2.1 |
Markdown | 3.6 |
markdown-it-py | 3.0.0 |
MarkupSafe | 3.0.2 |
marshmallow | 3.26.1 |
matplotlib | 3.10.1 |
matplotlib-inline | 0.1.7 |
mdurl | 0.1.2 |
menuinst | 2.1.2 |
mercantile | 1.2.1 |
mizani | 0.11.4 |
mlflow | 2.16.2 |
mlflow-skinny | 2.16.2 |
msgpack | 1.1.0 |
multidict | 6.1.0 |
multivolumefile | 0.2.3 |
munkres | 1.1.4 |
murmurhash | 1.0.12 |
mypy-extensions | 1.0.0 |
narwhals | 1.30.0 |
nbclient | 0.10.0 |
nbformat | 5.10.4 |
nest_asyncio | 1.6.0 |
networkx | 3.4.2 |
nltk | 3.9.1 |
numpy | 2.2.3 |
opencv-python-headless | 4.10.0.84 |
openpyxl | 3.1.5 |
opentelemetry-api | 1.16.0 |
opentelemetry-sdk | 1.16.0 |
opentelemetry-semantic-conventions | 0.37b0 |
orjson | 3.10.15 |
outcome | 1.3.0.post0 |
OWSLib | 0.28.1 |
packaging | 24.2 |
pandas | 2.2.3 |
paramiko | 3.5.0 |
parso | 0.8.4 |
partd | 1.4.2 |
pathspec | 0.12.1 |
patsy | 1.0.1 |
Pebble | 5.1.0 |
pexpect | 4.9.0 |
pickleshare | 0.7.5 |
pillow | 11.1.0 |
pip | 24.2 |
platformdirs | 4.3.6 |
plotly | 5.24.1 |
plotnine | 0.13.6 |
pluggy | 1.5.0 |
polars | 1.8.2 |
preshed | 3.0.9 |
prometheus_client | 0.21.0 |
prometheus_flask_exporter | 0.23.1 |
prompt_toolkit | 3.0.48 |
propcache | 0.3.0 |
protobuf | 4.25.3 |
psutil | 7.0.0 |
ptyprocess | 0.7.0 |
pure_eval | 0.2.3 |
py7zr | 0.20.8 |
pyarrow | 17.0.0 |
pyarrow-hotfix | 0.6 |
pyasn1 | 0.6.1 |
pyasn1_modules | 0.4.1 |
pybcj | 1.0.3 |
pycosat | 0.6.6 |
pycparser | 2.22 |
pycryptodomex | 3.21.0 |
pydantic | 2.10.6 |
pydantic_core | 2.27.2 |
pydantic-settings | 2.8.1 |
Pygments | 2.19.1 |
PyNaCl | 1.5.0 |
pynsee | 0.1.8 |
pyogrio | 0.10.0 |
pyOpenSSL | 24.2.1 |
pyparsing | 3.2.1 |
pyppmd | 1.1.1 |
pyproj | 3.7.1 |
pyshp | 2.3.1 |
PySocks | 1.7.1 |
python-dateutil | 2.9.0.post0 |
python-dotenv | 1.0.1 |
python-magic | 0.4.27 |
pytz | 2025.1 |
pyu2f | 0.1.5 |
pywaffle | 1.1.1 |
PyYAML | 6.0.2 |
pyzmq | 26.3.0 |
pyzstd | 0.16.2 |
querystring_parser | 1.2.4 |
rasterio | 1.4.3 |
referencing | 0.36.2 |
regex | 2024.9.11 |
requests | 2.32.3 |
requests-cache | 1.2.1 |
requests-toolbelt | 1.0.0 |
retrying | 1.3.4 |
rich | 13.9.4 |
rpds-py | 0.23.1 |
rsa | 4.9 |
rtree | 1.4.0 |
ruamel.yaml | 0.18.6 |
ruamel.yaml.clib | 0.2.8 |
s3fs | 2023.12.2 |
s3transfer | 0.11.3 |
scikit-image | 0.24.0 |
scikit-learn | 1.6.1 |
scipy | 1.13.0 |
seaborn | 0.13.2 |
selenium | 4.29.0 |
setuptools | 76.0.0 |
shapely | 2.0.7 |
shellingham | 1.5.4 |
six | 1.17.0 |
smart-open | 7.1.0 |
smmap | 5.0.0 |
sniffio | 1.3.1 |
sortedcontainers | 2.4.0 |
soupsieve | 2.5 |
spacy | 3.8.4 |
spacy-legacy | 3.0.12 |
spacy-loggers | 1.0.5 |
SQLAlchemy | 2.0.39 |
sqlparse | 0.5.1 |
srsly | 2.5.1 |
stack-data | 0.6.2 |
statsmodels | 0.14.4 |
tabulate | 0.9.0 |
tblib | 3.0.0 |
tenacity | 9.0.0 |
texttable | 1.7.0 |
thinc | 8.3.4 |
threadpoolctl | 3.6.0 |
tifffile | 2025.3.13 |
toolz | 1.0.0 |
topojson | 1.9 |
tornado | 6.4.2 |
tqdm | 4.67.1 |
traitlets | 5.14.3 |
trio | 0.29.0 |
trio-websocket | 0.12.2 |
truststore | 0.9.2 |
typer | 0.15.2 |
typing_extensions | 4.12.2 |
typing-inspect | 0.9.0 |
tzdata | 2025.1 |
Unidecode | 1.3.8 |
url-normalize | 1.4.3 |
urllib3 | 1.26.20 |
uv | 0.6.8 |
wasabi | 1.1.3 |
wcwidth | 0.2.13 |
weasel | 0.4.1 |
webdriver-manager | 4.0.2 |
websocket-client | 1.8.0 |
Werkzeug | 3.0.4 |
wheel | 0.44.0 |
wordcloud | 1.9.3 |
wrapt | 1.17.2 |
wsproto | 1.2.0 |
xgboost | 2.1.1 |
xlrd | 2.0.1 |
xyzservices | 2025.1.0 |
yarl | 1.18.3 |
yellowbrick | 1.5 |
zict | 3.0.0 |
zipp | 3.21.0 |
zstandard | 0.23.0 |
Footnotes
1. A third way to use Python, still under development, relies on a technology called WebAssembly. This approach allows executing Python code directly in the browser, offering a local execution experience without requiring complex installation.
2. You do not have direct access to the command line with Google Colab, only indirectly through the notebook interface by prefixing commands with !. On the VSCode services provided by SSPCloud, the recommended interface for this course, you can access a command line by clicking ☰ > Terminal > New Terminal. On a personal installation of VSCode, this will be at the top in the Terminal > New Terminal menu.
3. More details on environments are available in the 3rd-year course "Deployment of Data Science Projects", including different types of virtual environments (conda or venv) and their implications for Python computing chains.
4. Jupyter originated from the IPython project, an interactive environment for Python developed by Fernando Pérez in 2001. In 2014, the project evolved to support other programming languages in addition to Python, leading to the creation of the Jupyter project. The name "Jupyter" is an acronym referring to the three main languages it supports: Julia, Python, and R. Jupyter notebooks are crucial in data science, education, and research because they greatly simplify exploration and experimentation.
5. For users of this infrastructure, the notebooks for this course are also listed, along with many other high-quality resources, on the Training page.
6. To use https://datalab.sspcloud.fr if you are eligible, you need to create an account.
Citation
@book{galiana2023,
author = {Galiana, Lino},
title = {Python Pour La Data Science},
date = {2023},
url = {https://pythonds.linogaliana.fr/},
doi = {10.5281/zenodo.8229676},
langid = {en}
}