Top 10 Python Libraries Every Data Scientist Must Know


Data science is hard. You’ll have to learn a handful of libraries as a beginner, even to solve the most fundamental tasks. Adding insult to injury, the libraries change and get updated constantly, and there’s almost always a better tool for the job.

The problem of not knowing which tool to use is easy to understand: you either fail outright or solve the task suboptimally. Not knowing a library well enough is just as dangerous. You end up implementing algorithms from scratch, completely unaware there's already a function for that. Both cost you time, nerves, and potentially money.

If you find yourself overwhelmed by data science libraries, you’re in the right place. This article will show you 10 essential ones to kick-start your data science journey.

It’s crucial to understand that learning data science takes time. You can’t do it overnight. Reading books and watching videos is a good start, but solving problems you care about is the only long-term way.


1. Numpy

It's a no-brainer. Numpy stands for Numerical Python and is the best library for working with arrays, scientific computation, and math in general.

Numpy comes packed with functions for linear algebra (np.linalg), Fourier transforms (np.fft), and pretty much everything math-related. Its arrays are considerably faster than traditional Python lists, so always reach for it when aiming for speed and efficiency.

So, give Numpy a try before deciding to implement the eigendecomposition algorithm from scratch.
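As a quick sketch of that point, here's an eigendecomposition of a small symmetric matrix done entirely with np.linalg, no hand-rolled algorithm required:

```python
import numpy as np

# A small symmetric matrix whose eigendecomposition we want
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# np.linalg.eig returns the eigenvalues and a matrix whose columns
# are the corresponding eigenvectors
eigenvalues, eigenvectors = np.linalg.eig(A)

# Verify the decomposition: A @ v equals lambda * v for each pair
for lam, v in zip(eigenvalues, eigenvectors.T):
    assert np.allclose(A @ v, lam * v)

print(np.sort(eigenvalues))  # [1. 3.]
```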



2. Pandas

Think Excel on steroids. Pandas is an essential library for data analysis. Most of the time, it's all you need to load and prepare datasets for machine learning. It integrates nicely with Numpy and Scikit-Learn.

Pandas is based on two essential data structures: Series and DataFrame. The former is quite similar to an array, and the latter is simply a collection of Series objects presented in tabular format.

One piece of advice: spend as much time as possible learning Pandas. It provides endless options for manipulating data, filling in missing values, and even visualizing data. It takes time to master, but once you do, the analysis possibilities are endless.
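A minimal sketch of the two structures and a couple of the options mentioned above (filling missing values, group-wise analysis), using a made-up toy dataset:

```python
import pandas as pd

# A DataFrame is a collection of Series sharing one index
df = pd.DataFrame({
    "city": ["Oslo", "Bergen", "Oslo", "Bergen"],
    "temp": [3.0, None, 5.0, 7.0],
})

# Each column is a Series
assert isinstance(df["temp"], pd.Series)

# Fill the missing value with the column mean, then aggregate per group
df["temp"] = df["temp"].fillna(df["temp"].mean())
avg = df.groupby("city")["temp"].mean()
print(avg["Oslo"])  # 4.0
```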



3. Plotly

In a world of static and awful-looking data visualizations, one library stands out: Plotly. It's light years ahead of Matplotlib, the visualization library you'll probably learn first.

Plotly does it better. The visualizations are interactive by default, the options to tweak them are endless, and the results are ready for both publications and dashboards. Their Dash framework builds on Plotly; I've used it countless times to create interactive dashboards around data or machine learning models.



4. BeautifulSoup

Every so often, you'll need an ultra-specific dataset. You won't find it online in tabular format, but you know the data exists somewhere. The problem is, it's listed on a website and isn't formatted correctly (think product listings on Amazon).

That’s where BeautifulSoup comes in. It’s a library for pulling data out of HTML and XML files. You’ll have to download the HTML with the requests library and then use BeautifulSoup to parse it.

Web scraping is somewhat of a gray area. Some sites allow it, some don't, and it's common to get blocked if you make too many requests. Always check the site's robots.txt file beforehand. Also check whether the website has an API available; if it does, there's no point in scraping.
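A small sketch of the parse step. An inline HTML snippet stands in for a downloaded page here, and the tag names and classes are made up for illustration:

```python
from bs4 import BeautifulSoup

# In practice you'd fetch the page first, e.g.:
#   import requests
#   html = requests.get("https://example.com", timeout=10).text
html = """
<ul id="products">
  <li class="product"><span class="name">Keyboard</span> <span class="price">$49</span></li>
  <li class="product"><span class="name">Mouse</span> <span class="price">$19</span></li>
</ul>
"""

# Parse the document, then extract (name, price) pairs with CSS selectors
soup = BeautifulSoup(html, "html.parser")
products = [
    (li.select_one(".name").text, li.select_one(".price").text)
    for li in soup.select("li.product")
]
print(products)  # [('Keyboard', '$49'), ('Mouse', '$19')]
```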



5. Dask

Dask is really similar to Pandas, but it offers one crucial advantage: it's built for parallelism. Numpy and Pandas aren't your best friends for large datasets. It's impossible to fit a 20 GB dataset into 16 GB of RAM by default, but Dask can handle it.

There’s nothing wrong with Numpy and Pandas on small datasets. Things get out of hand when datasets get larger than available RAM and when computation time gets long. Dask can split the data into chunks and process them in parallel, taking care of both pain points.

The best part is, you don't have to learn a new library from scratch. Arrays and DataFrames in Dask have almost identical functions to their Numpy and Pandas counterparts, so you'll feel right at home.

You can also train machine learning models with Dask. It’s a 3-in-1 package best suited for large datasets.



6. Statsmodels

Statsmodels allows you to train statistical models and perform statistical tests. It's a bit different from other Python libraries for statistical modeling because it's quite similar to R. For example, you'll have to use R-style formula syntax to train a linear regression model.

The library returns an extensive list of result statistics for each estimator, making models easily comparable.

It is by far the best library for training time series models. It provides every classical statistical model you can imagine, from moving averages and exponential smoothing to seasonal ARIMA and GARCH. The only downside: it can't train time series models with deep learning algorithms.
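Here's the R-style formula syntax in action, fitting ordinary least squares on noiseless made-up data so the coefficients come out exact:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Noiseless data following y = 2x + 1, so the fit recovers it exactly
df = pd.DataFrame({"x": np.arange(10.0)})
df["y"] = 2 * df["x"] + 1

# R-style formula: response ~ predictors
model = smf.ols("y ~ x", data=df).fit()

print(model.params["x"])          # slope, ~2.0
print(model.params["Intercept"])  # intercept, ~1.0
# model.summary() prints the full table of result statistics
```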


7. Scikit-Learn

It's the holy grail of machine learning with Python. The library is built on top of Numpy, Scipy, and Matplotlib, and provides implementations of most supervised and unsupervised learning algorithms. It also comes with a suite of functions and classes for data preparation, such as scalers and encoders.

There’s no getting around this library. You’ll find it used in any Python-based machine learning book or course. Almost all algorithms have identical APIs — you first have to train the model by calling the fit() function, and afterwards you can make predictions with the predict() function. This design choice makes the learning process much easier.

The library works out-of-the-box with Pandas, so you can pass prepared DataFrames directly to the model.
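The fit()/predict() pattern in a minimal sketch, with a made-up toy dataset passed in as a DataFrame:

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression

# A tiny toy dataset as a DataFrame; Scikit-Learn accepts it directly
df = pd.DataFrame({
    "hours_studied": [1, 2, 3, 4, 5, 6, 7, 8],
    "passed":        [0, 0, 0, 0, 1, 1, 1, 1],
})

X = df[["hours_studied"]]
y = df["passed"]

# The same two-step API as almost every estimator: fit(), then predict()
model = LogisticRegression()
model.fit(X, y)

preds = model.predict(pd.DataFrame({"hours_studied": [1, 8]}))
print(preds)  # [0 1]
```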



8. OpenCV

Images are a big part of deep learning and AI. OpenCV packs the tools to do almost anything with them. Think of it as a non-GUI version of Photoshop.

The library can process both images and videos to detect objects, faces, and almost anything you can imagine. It doesn’t work as well as sophisticated object detection algorithms (think YOLO), but is a great starting step for newcomers to computer vision.

One thing I dislike is the syntax. It isn't Pythonic: the library uses camel case instead of snake case. For instance, the function is named getRotationMatrix2D() instead of get_rotation_matrix_2d(). The latter is more Pythonic, while the former looks more like Java. It's not a deal-breaker, and you'll get used to it quickly.



9. TensorFlow

Deep learning is an essential part of data science. TensorFlow, alongside its high-level Keras API, allows you to train deep learning models with very little code.

The development possibilities are endless. You can use neural networks on tabular datasets, images, or audio. You can also build highly accurate detection and segmentation models, or experiment with image style transfer.

Another essential aspect of machine learning and deep learning is deployment. You don't want your models sitting idle on a hard drive. TensorFlow lets you deploy models to the cloud, to on-premises servers, to the browser (TensorFlow.js), and to mobile and edge devices (TensorFlow Lite). All grounds are covered.
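A minimal Keras sketch: a tiny fully connected network trained on random stand-in data, just to show how little code is involved:

```python
import numpy as np
from tensorflow import keras

# A small fully connected network for binary classification on tabular data
model = keras.Sequential([
    keras.Input(shape=(4,)),
    keras.layers.Dense(8, activation="relu"),
    keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Random stand-in data, only to demonstrate the training call
X = np.random.rand(32, 4).astype("float32")
y = np.random.randint(0, 2, size=(32, 1))
model.fit(X, y, epochs=2, verbose=0)

predictions = model.predict(X, verbose=0)
print(predictions.shape)  # (32, 1)
```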



10. Flask

And finally, there's Flask. It's a library for building web applications. I use it in almost every project, even though I don't care much about web development. In data science, Flask is the go-to library for building web APIs and applications around machine learning models.

You can stick with plain Flask for developing applications, but I recommend the Flask-RESTful extension for building APIs.
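A minimal sketch of a prediction endpoint. The route name and the stub "model" (which just sums the features) are made up for illustration; in a real project you'd load a trained model and call its predict() method instead:

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/predict", methods=["POST"])
def predict():
    # Read the JSON payload, e.g. {"features": [1, 2, 3]}
    features = request.get_json()["features"]
    # Stub "model": sum the features; swap in model.predict(...) in practice
    return jsonify({"prediction": sum(features)})

# Start the development server with:
#   app.run(port=5000)
```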


