Neptune is a machine learning platform for managing and monitoring data science experiments.

A data science experiment can involve training and evaluating machine learning models, detecting anomalies, engineering features, or other data science tasks.

Neptune organizes your work with machine learning experiments and makes it more efficient. Neptune comes with a simple, clean web user interface that lets you track running experiments and their metrics. You can also easily compare different machine learning models and track your experiments’ history and outputs.

Neptune emphasizes a non-disruptive approach and integrates seamlessly with your experiments. You only need to add a few lines to your source code to take advantage of Neptune’s features. Neptune does not impose any additional requirements on the libraries and the operating system you use. You are also free to choose the infrastructure to run your experiments on. Compute-intensive experiments can be scheduled using Neptune’s queue mechanism to be executed on a remote machine.
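To illustrate what "a few lines" of instrumentation might look like, here is a minimal sketch in Python. The `ExperimentContext` class and its `send` method are hypothetical stand-ins, not Neptune's actual API; only the shape of the integration (a context object plus per-step metric calls dropped into an existing training loop) is what the paragraph above describes.

```python
class ExperimentContext:
    """Hypothetical stand-in for the context object a tracking client
    would provide; names and signatures here are illustrative only."""

    def __init__(self):
        self.channels = {}

    def send(self, channel, x, y):
        # Record a (step, value) point on a named metric channel.
        self.channels.setdefault(channel, []).append((x, y))


ctx = ExperimentContext()

# An existing training loop -- only the ctx.send() line is added for tracking.
for step in range(1, 4):
    loss = 1.0 / step              # placeholder for a real training step
    ctx.send("loss", x=step, y=loss)

print(ctx.channels["loss"])        # the logged (step, loss) points
```

The rest of the training code is untouched, which is the "non-disruptive" property the paragraph refers to: tracking is added alongside the experiment, not woven into it.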

Neptune is a collaborative machine learning platform, ready for multiple users to manage their experiments, share ideas, and compare results!

Running an experiment with Neptune

A Glimpse of Neptune’s Features

Let’s take a look at a dashboard of a machine learning experiment. You can see live charts with metrics defined within your code. Information about the experiment, such as its name, status, and duration, is displayed in the header.

Dashboard of a machine learning experiment

Using Neptune’s dashboards, you can track the progress of model training or observe dynamically calculated evaluation metrics. The charts displayed on the dashboard are fully customizable, so you are free to plot any values you want.

What’s more, Neptune’s dashboards support not only numeric values, but also images and text.

Images and text from the experiment viewed in a dashboard

You can compare the trained model’s performance against models from other experiments. Let’s take a look at the list of experiments.

Comparing different models in the list view

You can sort the experiments by metric values and filter them by various criteria, such as experiment name, owner, and tags. This makes it easy to select the best model from a history containing numerous experiments.

Interested? Join our Early Adopters Program!

Next Steps

Getting started: Run a simple experiment using Neptune.

Examples: See how Neptune helps to solve machine learning problems.

Jobs and Experiments: Read Neptune’s reference guides.